I’m thinking of a number between 1 and You’re Dumb

Since this is about adventure games, I feel like I should make my usual disclaimer explicit: this is a personal blog, I don’t speak for my company, and vice-versa. Any opinions I spew out here are not necessarily my coworkers’; in fact, when somebody at work tells me, “I read your blog,” it’s most often followed by, “I didn’t agree, but….”

Apparently, “Yahtzee” Croshaw has a column in the back of PC Gamer now, and the one in the July 2008 issue is about how he’s bored with adventure games. They always devolve into the same old thing; and sure the SCUMM games were excellent, but that was in spite of their gameplay, not because of it; and ever since Half-Life came out and proved that action games don’t need to be mindless and shallow, do we even need adventure games anymore?

Fair enough. A few years ago, I would’ve probably agreed completely. When I first got into videogames, I was only into SCUMM games, because shooters were dumb. And even then, it was rarely because of the puzzles; the puzzles were almost always something you had to slog through to get to the next cool story moment. When Dark Forces proved that DOOM could have a cool story and characters, and then Jedi Knight and Half-Life proved that cinematic storytelling could actually be fun to play, I said, “Well, that about does it for adventure games.” Until I started working for Telltale, I can’t remember playing an adventure game since Zork Grand Inquisitor. (Which is still a fantastic game, by the way, one of the best I’ve ever played).

But that was eight years ago. I tend to like Croshaw’s video reviews, because buried amongst the Britishisms and dildos, there’s frequently some genuine, bullshit-free insight. Even when I don’t agree, I like hearing someone cut through conventional wisdom and hype and just get at the heart of whether a game is fun or not, and why.

And that’s why I was disappointed in that PC Gamer column, because it doesn’t say anything new. Basically, he says the exact same thing anyone says whenever the topic of adventure games comes up:

Myth 7: Adventure games suck because they’re artificially complicated and there’s only one correct solution to every puzzle and it’s never what you would do in the real world so you have to READ THE DESIGNER’S MIND!!!!

Whenever this observation gets trotted out on the internet, it’s invariably followed by a link to the Death of Adventure Games article from Old Man Murray. That’s the one from 2000 where a particularly ridiculous puzzle from Gabriel Knight 3 gets ripped apart, and adventure game fans and creators both get exposed for the smug, self-important bastards that they are. And as soon as you link to the OMM article, the crowd scatters like cockroaches, adventure game apologists hanging their heads in shame. The issue was definitively settled, eight years ago: Adventure Games Just Aren’t Cool Anymore.

And then the writer of that article went on to get a job at Valve, working on Portal, which is more like an adventure game than most adventure games I’ve played.


Who’s in control here?

That’s Alexander Haig. Look him up.

Also: This post has spoilers for BioShock and Grand Theft Auto IV, in case you’re paranoid about that kind of thing.

Previously on Spectre Collie, I made the claim that videogame developers can learn more from “non-interactive” media than just how to make more cinematic cut-scenes and more literary dialogue. If interactivity is the key aspect of videogame storytelling, then how come everything we borrow from traditional media is non-interactive? Why not look for the ways in which movies, books, TV, and comics interact with the audience, and then try to build on that?

The example I used last time was the “don’t go into that room” scene in horror & suspense movies. Those scenes build tension not by showing the audience what happens next, but by asking the audience what they think is going to happen next. In effect, they’re turning the storytelling duties over to the audience.

This only works because there are always at least two versions of the narrative being told simultaneously: the filmmaker’s version, and the audience’s version. It’s as true for movies, books, and TV as it is for storytelling games. In games, obviously, you put more emphasis on the player’s narrative. Which leads to the assumption:

Myth 6: Player narrative is always more important than developer narrative.

On the one side, you’ve got the arrogant, control-freak game designer, forcing his lame story onto players who don’t want to hear it. One of the designers at Telltale, Heather Logas, described this phenomenon better than I’ve heard anywhere else: “A lot of game designers act like they don’t want players coming in and messing up their story.” So we’ve developed all kinds of ways to ensure our stories don’t get messed up: cut-scenes; choke points; and linear sections that trick the player into believing he has control, when in reality he’s only allowed to do the one thing we want him to do.

On the other side, you’ve got the players, a bunch of whiny malcontents with an inflated sense of entitlement. They insist that their $50-$60 has bought a team of professionals who should dance at their command. The interactivity of a game is supposed to let the player tell his own story. That’s the only story that players care about. Besides, everybody knows that games will never have storytelling and writing that’s as good as movies or even television. If a game developer just wants to tell a story, he should get out of games and just make movies. So players have developed all kinds of ways to ensure their stories don’t get messed up: basically, insisting repeatedly on blogs and message boards that developers’ stories be kept quiet and unobtrusive, and that cut-scenes be skippable if not cut altogether.

So which narrative is the more important one? If the real potential of interactive storytelling is giving the audience the freedom to tell whatever story they want, then the answer’s obvious: the player’s narrative is everything.

But the real potential of interactive storytelling isn’t giving the audience the freedom to tell whatever story they want. That’s the real potential of the pencil. And if you give someone a pencil and a blank sheet of paper, or a blank page in Microsoft Word, or a blank workspace in Flash, you don’t automatically end up with great storytelling. If you end up with anything at all, more often than not it’s insipid, derivative, filled with cliches. That’s as true of the best screenwriters alive as it is of the guy who writes “FIRST!” on blog comments. Great stories are rare, because great stories are hard. So the player’s narrative isn’t the most important.

But the developer’s narrative isn’t the most important, either. After all, if a game developer just wants to tell a story, he should get out of games and just make movies.

Tear down this wall!

The real potential of interactive storytelling is delivering a story that’s a collaboration between the storyteller and the audience. It’s not the player’s narrative, and it’s not the developer’s narrative; it’s this third thing that’s better than either. As you play the game, the pieces of the story start to come together, and you feel not like you’ve played a part in someone else’s story, but like you helped write it.

So how does a game design make that happen?


The Calls Are Coming From Within the Ice Level!

Previously on Spectre Collie, I made the claim that storytelling in “passive” media like books and movies isn’t as passive as people like to think. A well-told story demands that the audience stay actively engaged in the telling, processing what’s come so far and anticipating what happens next.

The interesting thing is: this is so integral a part of storytelling that even the not-so-well-told stories do it, sometimes without even realizing it. Last time, I compared the movie Adaptation to the game BioShock, because each uses the limitations of its format (a cliche-filled Hollywood action movie, or a linear first-person shooter game) to feed back into its story and deliver a more significant message (about the misguided passion for perfection, or the nature of free will).

The most common criticism of both of those is that they’re “meta” stories, based solely on a gimmick, with the director (or screenwriter) or designer dangling his message just out of the audience’s grasp, all the while thinking he’s so clever. But the idea of manipulating the audience’s expectations isn’t particularly new or post-modern; it’s a fundamental building block of storytelling.

Any story worth hearing (or reading, or watching, or playing) is going to have moments where the audience has to fill in the gaps and make predictions, forming its own parallel version of events that’ll get rewritten in collaboration with the storyteller. On its own, that’s not the type of activity that people mean when they talk about interactive entertainment. And that’s a problem, because it’s the most interesting type of activity. And understanding how it works will lead to better storytelling in games.

Myth 5: A story is a sequence of events leading to a conclusion.

Whenever anybody says that storytelling is “passive,” I have to wonder if they’ve ever seen a horror movie with a big crowd. The first time I saw Scream, it was in a theater packed with Marin County high school students taking advantage of Tightwad Tuesday. I’d have a hard time calling that audience “passive”; they were screaming, laughing, and yelling back at the screen.

Now, Scream came out during the crest of the Irony Wave of the mid-90s, so it’s definitely overloaded with gimmicky “meta” moments. But it didn’t really do anything to change the rules of horror movies; all it did was explicitly spell out the rules before it carried through on them. And the first rule of any horror movie, from the most highbrow suspense thriller to the cheesiest B-movie, is “don’t go into that room.”

Scream’s most memorable “don’t go into that room” moment kind of sucked (seriously, who thought death by automatic garage door was scary?), so look at the most famous one from The Birds: Melanie Daniels is sitting in a dark living room after everyone else has fallen asleep. She hears a noise. She picks up a flashlight and gets up to check it out. It’s not the lovebirds in the next room, so it must be upstairs. She looks up the stairs at the door for a moment, deciding whether to go in. She walks up the stairs. When she gets to the top, she reaches for the doorknob. She opens the door and goes inside. (Spoiler: there’s a bunch of birds in there.)

Now, that scene goes on for like three or four minutes, and taken out of context, it’s every bit as tedious as I just described. Seriously, nothing happens. It’s even less inherently creepy than a little boy riding his Big Wheel through the halls of an empty hotel. You’d think that with as much praise as Hitchcock gets, he would’ve had the sense to cut that scene shorter, or out altogether.

Except we all know, on a gut level, why this scene is in the movie. The short answer is “pacing,” but that’s an over-simplification. It’s not just a case of shifting from loud to quiet, or action to rest, but shifting the audience’s role from passive observer to active participant. There’s still a story going on, but the storyteller is inviting the audience to compare their version of things to the one that’s playing out on screen. The story isn’t just a sequence of events, but also the decisions leading up to those events — it’s not just what happens, but how and why it happens.

What do you, the audience, think?

We all know that something scary is behind that door. Considering what we’ve seen so far, including the title of the movie, we know that it probably somehow involves birds. But we don’t know what exactly it’s going to be. Much of the scene is shot from a first-person view; we’re not just watching stuff happen to the star, we’re making decisions about what she should do next, and what’s going to happen as a result.

Should she try harder to wake up the others? Should she get a weapon? Should she devise some way to find out what’s behind the door without opening it? Should she just forget about the door altogether, and leave it until morning? What’s going to be on the other side? Is she really at risk of dying when she sees it? Would the movie really kill her off without a resolution of the love story?

Once we get through the door, that’s when movies and videogames diverge: movies become completely passive, showing the audience whatever nasty monster or expensive CG effect the storyteller’s come up with. And games become completely active, inviting the audience to run around and mash buttons until everything’s dead. The pay-off isn’t the key; the build-up is. It’s during the build-up that videogames and movies are the most similar.

Of course, the audience doesn’t have real control over what happens; we’re inexorably pulled up the stairs and through that door no matter what. But does that really matter? “Survival Horror” is the videogame world’s attempt at horror and suspense, but I don’t know of any game that lets you do the sensible thing: forget about the zombies and just dial 911. And if such a game exists, I don’t think I’d want to play it. You’re going to go through the door, but that’s not the interesting part. It’s not about what happens, but about what could happen.

No one will be admitted during the chilling Boss Fight sequence!

But games still don’t get this. We’ve been conditioned to think that “interactivity” makes games an entirely new medium, and we’re adamant that we have nothing to learn from the movies that have already mastered a lot of this stuff. So we liberally borrow the most shallow aspects of movie storytelling and try to graft those on top of a videogame. We pretend that there’s a clean division between “gameplay” and “story,” putting all the cinematic stuff into the “story” section to make the “gameplay” section seem cooler, instead of learning what the cinematic stuff really does.

So our games end up playing like long sequences of pay-offs, with interminable, dull storytelling spots in the middle. We assume that we have no control over pacing. And we insist on a clean break between passive storytelling and active playing, which means “cut-scenes” and “interactive sections.” Basically, we throw pacing out the window, letting the player run around unsupervised for 90% of the game, until we grab control back from him to show him parts of a story he doesn’t really care about.

For example: every time BioShock tried to do straight-up horror, it failed for me. It came across more like the cheesy Castle movie remakes like House on Haunted Hill and Thirteen Ghosts. Messages scrawled in blood, gruesome medical facilities, bodies sprawled out all over the place, and loads of rusty hooks. But the best moment of the game, and I’d say of any game last year, was the “don’t go into that door!” moment leading up to your showdown with Andrew Ryan. Everything in the game has been building up to this point, and you know that something big is going to happen on the other side, even if you don’t know exactly what it is. You run through a couple of empty corridors, building up to an epic confrontation, speculating on which combination of weapons and superpowers you’re going to use, putting together the bits of story you’ve seen so far. Then, in a quiet anteroom, you see the biggest reveal of the entire game, written on a wall (in blood, of course). The following cutscene is basically just clean-up work; the climax just happened, in an “interactive” section. And it didn’t involve shooting anyone or leveling up, but piecing together the story without having it handed to you.

The best example of “don’t go into that door” that I’ve seen in games is in the Silent Hill series. I’ve never been impressed with the games overall, from what I’ve seen, but the radio mechanic is just genius. As you get closer to danger, the static on a handheld radio the protagonist carries gets stronger. It’s creepy, it serves a function in the game, and it serves several functions in the story, not the least of which is to remind the player that something supernatural is going on. Basically, the storytelling never stops, since you’re given constant feedback as to whether something spooky is happening.
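(For the curious, the mechanic is simple enough to sketch in a few lines. This is a hypothetical reconstruction in Python; Konami has never published the actual implementation, so the names and the falloff curve here are guesses at the effect described above.)

    import math

    def radio_static_level(player_pos, threat_positions, max_range=30.0):
        """Map distance to the nearest threat onto a 0-to-1 static level.

        Hypothetical sketch, not Konami's code: positions are coordinate
        tuples in world units, and the linear falloff is a guess.
        """
        if not threat_positions:
            return 0.0
        nearest = min(math.dist(player_pos, t) for t in threat_positions)
        # Silent beyond max_range; static ramps up as the threat closes in.
        return max(0.0, 1.0 - nearest / max_range)

    # Each frame, feed the result into the radio's audio layer, e.g.:
    # static_channel.set_volume(radio_static_level(player.pos, active_monsters))

The point isn’t the math; it’s that one continuously updated value, piped into an audio channel, keeps the storytelling running through every “interactive” second of the game.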

In games, you’ll find a lot more examples of the “don’t go into that door” moment’s evil twin, the “oh, it’s just the cat!” moment. In a movie, a cat (or even a monster) suddenly jumping out of nowhere is the worst of cheap scares, because it breaks the contract between the audience and the director. We’ve watched this young, almost naked college girl walking down a dark hallway, we’ve invested thought into whatever horrible thing is going to happen to her at the end of the hallway, so don’t cheat us out of that by making it something we couldn’t have predicted.

But even in games without monster closets, we’ve got no problem just throwing a ton of monsters (where “monster” is shorthand for “any obstacle”) at the player, with no predictability or reason. The story gets shut down completely, reduced to an insultingly simple “You’re at point A and need to get to point B.” The level designer will usually make a token stab at pacing by the order in which he places enemies and power-ups, but for the most part, all storytelling conventions have been thrown out the window. So there’s a short gauntlet of having enemies thrown at you for a few minutes, until you get to the next cutscene; sometimes you’re asked to push a button or pull a lever.

How is that not passive?

The End… OR IS IT?!?

All of this stuff may seem specific to horror and suspense, but it’s not. All comedy is based on playing with the audience’s expectations, as well. Horror movies are just a good example because they prove that none of this is all that hard: if Friday the 13th can do it, why can’t we?

The basic lesson is this: game developers like to think of games as semi-controlled environments, where we have control during cut-scenes and chokepoints, and relinquish it for the interactive sections. This is bad; it leads to shallow games annoyingly interrupted by bad stories. What we need to realize is that we never have complete control over the audience. Not even “passive” media like movies and TV have that.

And to realize that, first we have to realize that the audience — even the droolingest fanboy in the comments section of a videogame blog somewhere — is always thinking. You can’t stop it; it’s the curse of being human. So don’t try to divide the game into “the time when I do the thinking,” and “the time when you do the thinking.” Instead, find a way to use it to your advantage; as movies prove, you can use the audience’s creativity to tell your story.

Open up the game, let the player figure out the story as he goes along. Don’t worry that everything has to be revealed in a cutscene before you relinquish control to the player, or he’ll be completely lost — it’s a joystick and some buttons; it’s not rocket science. And stay open to the idea that the player’s got his own version of events that’s constantly being updated and compared to the version that you’re trying to show. As it stands now, we’re putting all our energy into what happens on the other side of the door. We need to put more effort into what happens in the long hallway leading up to it.

There’s no second chance to make a first impression

Update 2/22/2021: Removed broken links and images

Previously on Spectre Collie: about once a month I’ve been squeezing out a lengthy treatise intended to debunk some “myth” about storytelling in videogames. I’m still headed towards making a point with those, more or less, eventually. But they take a long time to write, and sometimes it’s easier just to state the obvious.

This one was perpetuated in that “The Case Against Writers” article I mentioned a few days ago. It was addressed to some degree in the two rebuttals, but I think there’s a little bit more to be said about it. Especially since it’s something I always took for granted, until I stopped to think about it:

Myth 4: Games aren’t like stories because stories are inherently linear, and games are non-linear

Seems to make sense: a player sees most games as a series of choices, each one opening up a new part of the experience. And most designers see their games as interconnected systems with difficulty curves and AI subroutines and event handlers, divided up into chokepoints where a linear chunk opens up a new, larger non-linear one.
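(To make that concrete, here’s roughly what the designer’s-eye structure looks like reduced to code: a minimal Python sketch with made-up chapter and area names, not any particular engine’s system.)

    # Hypothetical chapter table: each chokepoint, once cleared,
    # opens up a new, larger pool of non-linear content.
    CHAPTERS = [
        {"chokepoint": "tutorial_done",   "areas": ["hub_1"]},
        {"chokepoint": "act_1_boss_dead", "areas": ["hub_2", "side_quests_a"]},
        {"chokepoint": "act_2_boss_dead", "areas": ["hub_3", "side_quests_b"]},
    ]

    def unlocked_areas(cleared):
        """Everything up to the first uncleared chokepoint is open."""
        areas = set()
        for chapter in CHAPTERS:
            areas.update(chapter["areas"])
            if chapter["chokepoint"] not in cleared:
                break  # the player hasn't passed this gate yet
        return areas

    # unlocked_areas({"tutorial_done"}) -> {"hub_1", "hub_2", "side_quests_a"}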

But the basic fact is: all games are linear. Each game is a sequence of events that starts when the player puts in the disc (or downloads the game), and ends with him taking the disc out for the last time to go on the internet and bitch about it. I’ll explain my point by arguing with an imaginary belligerent person.

But my game has multiple solutions for each puzzle, and over 100 different possible endings!
That’s great. But the player is still only ever going to see one solution to each of those puzzles, and one of those endings.

That’s what multiple play-throughs and savegames are for. Haven’t you heard about replayability?
Sure I have; it’s still listed in some game reviews as “lasting value,” as if it were a universal goal for all games. The fact remains that unless your players have all suffered some sort of massive head trauma, they’re going to remember what happened the first time they played through your game, or solved your puzzle. They’re not going into each case blind, but knowing how they did things and what the repercussions were the first time. So your multiple endings and branching paths are like deleted scenes and alternate endings on a DVD; they can in some cases give more depth to the story, but they don’t supplant the “real” version.

So this is really just saying that you don’t like branching and alternate endings.
I do happen to think that story branches and alternate endings are a waste of development effort that would be better off folded into the main game.

But that’s not the main point I’m trying to get at here. I’m saying that we’d be better off looking at what branching, multiple endings, and nonlinearity in general are trying to achieve in games, and finding real ways to do it.

So you’re saying that all games should have a linear narrative.
No, I’m saying that all games do have a linear narrative, even if that narrative is as tedious as “first he swapped the red gem with the gold gem, then he swapped the blue one with the green one….” It’s not as if story-telling games are some completely separate entity; the narrative is there, whether you like it or not. You just have to decide how much you want to direct the narrative and how much emphasis you want to put on it.

You would say that all games should be linear, seeing as how you work in adventure games.
Actually, when you’re making an adventure game, you have to think the hardest about keeping the game non-linear. Because the linearity is baked into a story-based game, and there are practical reasons for giving the player a branch or a choice. In multiplayer games, it’s useful for crowd control. In adventure games, it’s useful for keeping the player from being completely stuck and unable to progress until figuring out the solution to a single puzzle.

But as much as we say that there are three things the player can do at this point, there’s still really only one they care about: getting to the next plot point. The player only feels “stuck” because he’s not progressing along the linear path to the game’s conclusion. So is non-linearity in a point-and-click adventure game (or collecting things and jumping on enemies in a platformer, or side quests in an RTS or FPS) really doing anything other than covering for the fact that walking around and clicking on stuff in an adventure game isn’t really all that fun?

This only applies to storytelling games, not strategy or sandbox games.
Not so; my linear narrative with Civilization IV started sometime last year and hasn’t ended yet. I can try different games with different leaders, maps, win conditions, and strategies, but 80% of each game is identical to the ones I played before.

It’s the same with The Sims 2, which has a “story” that’s been going on for years now. I can and do keep creating new characters and new families, but it’s not like each one starts a whole new story. Most of what I do with the new characters, I’ve already done before lots of times, even back to the first game. I’ve already seen at least 80% of the game, so all I’m doing now is adding new appendices to the book, or alternate endings.

You can’t write about storytelling in games in 2008 without mentioning Portal or BioShock. Which is it?
Portal. I’d say that the real writing achievement in this game isn’t all the one-liners, or its ability to start annoying internet memes, but the pacing. They recognized that there’d be a narrative inherent to the game, even if it were “just” a first-person puzzle game with a cool gun. So they piggybacked another narrative on top of the built-in one. The more you play, the more you become familiar with the game mechanic, and the difficulty and complexity increase — much like a story builds up to a climax.

At the same time, you’re finding out more about your character and the world you’ve been dropped into. And the story is developing at pretty much the same rate as the puzzle game’s “narrative.” They’re not completely in sync, but the genius of the game design is how it smoothly transitions between emphasis on the story narrative and emphasis on the puzzle narrative.

When you’re just wandering around a room looking for a solution to a fairly straightforward switch puzzle, you happen onto a hidden room that delivers your first big story moment. Later, when you’re in a room filled with platforms and switches and light balls and acid pits, the story shuts up for a little while and gives you a chance to think about how to solve the puzzle. And both the story narrative and puzzle narrative reach a climax at the same point.

All of this is either completely obvious, or completely irrelevant. What’s your point?
Just that sandbox games, open-ended games, or simulated worlds that the player is completely immersed in and can interact with, aren’t by themselves the holy grail of videogames. You have to impose some kind of rule set to make it interesting. And a set of rules implies a winner, and a winner implies an ending. Therefore, you’ve got a linear narrative, more or less, baked into every game.

You’ll often hear the claim that games and other interactive entertainment are just stepping stones on the way to the real end goal, which is something like Star Trek’s holodeck. That we’re all just covering up for the fact that we can’t yet build a world that the player can jump into and do anything he wants. I’ll just geekily point out that they only showed the holodeck right as someone was leaving it, or they cut away right after someone entered. And if they spent any time inside, it was when someone was pretending to be Sherlock Holmes and solving a murder mystery, or otherwise telling a story. In other words: a simulation is only interesting if there’s some kind of point to it.

It’s inevitable that we’re going to get more sophisticated AI characters, and more realistic physics systems, and games that in general do a better job of dynamically reacting and responding to what the player does. And still, for the player, it’s inevitably going to be a linear experience. That means that the basics of storytelling, the ones we’ve spent thousands of years developing, are still going to apply: characters, plot, pacing, a dramatic arc, and a beginning, middle, and end.

There is no places for writers in the games industry.

Last week there was a bit of fallout on the internets from an article titled “The Case Against Writers in the Games Industry”. It was by a game designer named Adam Maxwell, and it basically makes the claim that having a dedicated writer on a development team is a waste; it’s always better to have another game designer who knows how to write.

It’s easy to see why it got a strong reaction; it’s written to provoke one. It dredges up Roger Ebert’s old “authorial control” argument, which has already been shot full of holes over the last couple of years. It makes terrible assumptions about the role of writing and storytelling in game development. Of course, it’s also filled with so many typos, unfortunate word choices, and wacky grammar mishaps that it’s like porn for people who love irony.

But see, here’s where the problem comes in: I agree with the conclusion of that article more than I do with those of the various rebuttals. The article, and Maxwell’s followup on his own blog, are both so full of wrong that I’m hesitant to say I agree with any of it. But having a game designer who can write well really is more valuable to a studio than someone who writes well but has no talent for, or desire to do, game design.

That’s not even provocative; it’s trivially true. Even better than that would be a designer who can write and is an excellent concept artist. And better still would be a designer who can do all that and also be good at character modeling, animation, scene creation, level design, and composing music. Best of all would be someone who can do all that and make shadow clones of himself so that he could get the game finished on schedule.

In a rebuttal to that article, Ron Toland of the IGDA Game Writers’ Special Interest Group points out all of the erroneous assumptions, and describes game development as a lot of people working in concert. The designer, writer, artist, animator, composer are all equally important, each contributing his own work to the game, with the end result suffering if any part is missing.

For example: you’d be hard pressed to find anyone who seriously believes that good music isn’t important to a game. And it’s ridiculous to say that having a dedicated composer is a waste, that it’d be better handled by the game designer. Because most people understand that not everyone is equally good at making music. So why do people assume that everyone is equally good at writing?

Another rebuttal came from Kelly Wand, who talked specifically about the game mentioned in the original article. He got slightly less philosophical than the others, talking more about the industry-wide perception of writing in games. The best part is this:

[…] I find it a remarkably revealing insight as to just how derisively they view the creative process in general and the legacy of electronic entertainment in particular. It’s indifference to mediocrity, usually posed as a loaded “either-or” analogy.

That perfectly describes the reaction you get any time you try to broach the topic of storytelling in games. The complaint goes that you can either have a game or a story. It’s either The Sims or Final Fantasy, action or cut-scenes, activity or passivity, players’ fun or the writer’s ego. You hear that there are plenty of games that are perfectly fun without stories, so clearly story and writing aren’t necessary — why do you hate Tetris so much?

Except I’d say it goes past “indifference” and crosses over to open hostility to anything other than mediocrity. I would be encouraged to see more people “indifferent” to storytelling in games; at least it would mean that they’re no longer trying to undermine its importance, marginalize it, and drive it out completely. “Sure, adventure games can have stories in them, but keep it short and simple, dammit. And don’t ever, ever assume that what you’re doing is as important as the game design.”

In the “case against writers” article, Maxwell says that BioShock was “hamstrung” by its insistence on story, which to me is like saying that The Seven Samurai was hamstrung by its insistence on having so many samurai. In that presentation about writing in games I made fun of a while back, the presenter made a list of game types, to help you determine “how much story you actually need.” He listed “story-based gameplay” as its own category, even separate from role-playing games!

And with attitudes like that being so prevalent, I think even the writers are being a little short-sighted about this. Wand ends his rebuttal with the observation that people don’t have to be so fearful that writing is going to “take over” gameplay. That writing isn’t meant to be the “food” of a game; “It’s the salt.” That’s a fine analogy: modest, non-threatening, acknowledging that too much writing can ruin the end result, while at the same time emphasizing that the end result is completely unpalatable if the writing is missing or done poorly.

Wand and Toland’s rebuttals do a good job of defending the role of “game writer” as it exists now, but I think they’re overly defeatist about the potential for game writing to improve. We’re so used to the idea that game design is the master discipline in game development, and that storytelling and writing are the antithesis of interactivity, that we’re willing to argue even to get promoted to “salt.”

As long as we keep thinking of writing as this completely separate discipline, that it’s important to the game but not the “food” of the game, then both writing and game design are going to stagnate. You wouldn’t design PaRappa the Rapper or Rock Band while leaving the music to be some autonomous thing that gets added in later. You integrate it from the start. But while music-based games are still relatively rare, there are tons of games that try to tell stories. So why are we content to keep treating the writing as some separate thing, that doesn’t need to be integrated from the start?

Tons of games try to tell stories, and tons fail, or are mediocre at best. We can add better writers all we want, and we’ll still just end up with grammatically correct descriptions of the ice level, flowery and evocative descriptions of what it means to be a space marine fighting demons, and stirring speeches from our spiky-haired amnesiac hero right before he does battle against the clever-quip-spewing boss monster.

For my part, I’ve worked on more games as “just” a writer (or a writer/content programmer) than I have as a designer. And from a practical standpoint, that’s sometimes a necessity: even fairly short games can require a lot of writing, and there’s just not enough time to do double duty. But I will say that the work that I’ve been most proud of has come from designing the game as a writer would; not from saying, “and another puzzle goes here,” but “this section should be interactive, with a puzzle that works like this, because of the way these two characters interact and the way the pacing is building up to this moment.”

That’s less a statement for or against game writers, and more an interpretation of what “game design” is. “Designer” and “writer” in videogames aren’t directly analogous to their movie counterparts, as we often assume they are. In movies, a screenplay isn’t just dialogue, but scene descriptions and story flow and occasionally even camera direction. You wouldn’t automatically assume a novelist would be a good screenwriter — that’s the fear most people have when they talk about writing overtaking game design, that you’re dragging the tedious elements of one medium into another where they don’t fit.

But you shouldn’t automatically assume the director would be a good screenwriter, either. Or that the director can plan out an entire movie, calling the screenwriter in for a couple hours every week to suggest lines for the characters to say. If a game is going to have a story at all, then the designer and the writer need to think of the game as a story.

Or don’t even bother, because we know a stupid, or poorly-integrated, or just-slapped-on-for-the-sake-of-it story when we see one. And those aren’t doing any good for the perception of writing in games.

The Old Man and the Realistically Rendered Water Volume

I’m just arrogant enough that I tend to automatically dismiss anything presented as a list of rules or guidelines about writing. There’s obviously a ton of craft involved in writing, independent of any concerns about talent or personal style. But attempts to codify it are always either too vague to be practically useful, or too specific to apply to anything but the most pedestrian writing. We’ve already got a set of rules: high school English. Learn those, and then read (and watch) examples of good writing, practice your own writing, and you’ll learn by doing, to the point where you’re confident enough to split an infinitive in your opening sentence.

So I was automatically skeptical of the “Learn Better Game Writing” tutorial, given by Vicarious Visions producer Evan Skolnick and described in this Gamasutra article. I became even more skeptical after reading this quote:

Video games are a product where the buyer didn’t buy to read something — they may not even want a story. You have to accept certain realities when writing in this business. You’re not the next Hemingway, but even if you are, this isn’t the place to show it. Your job is to write tight, efficient, serviceable story content.

So remember that, kids: your goal is to write succinctly and efficiently. Not like that Hemingway blowhard, always droning on and on. Man, that guy liked to hear himself talk!

It’s unfortunate, because one of my own biggest faults as a writer is a tendency to over-write, a failure to be concise, and a habit of unnecessarily repeating myself. So maybe there are still some good tips there, and I’m being overly antagonistic to assume that using the worst possible example of “Insert Famous Author Name Here” means that the guy doesn’t know what he’s talking about.

The problem is that the lecture, at least as described in a brief online article, starts out with such a defeatist tone that it’d be charitable to call it “uninspiring.” Launching into a lecture with the loaded words and phrases “product” and “buyer” and “may not even want a story” and “accept certain realities” and “business” and “you’re not the next blank” and “serviceable” and “how much story does your game actually need?” creates a giant vacuum from which inspiration is not allowed to escape.

“Get over yourself” is fine advice which would be good for all writers to remember, regardless of their field. But that’s generally followed by an example of great writing to which we should aspire. Instead, Skolnick takes a completely dismissive tone towards game writing, presenting it as a necessary evil at best.

His quote: “The amount of story content you put in is generally how much the player will tolerate, and if you break those expectations, you do that at your peril.” There’s your objective, writers: to be tolerable. Spoken like someone who uses the term “creatives” like high school students use “drama fags.” Just do your job and get out of the producers’ way, so we can check you off the task list and move on.

He does give an example of a game that does it right:

As an example, Skolnick showed the opening cinematic from Grand Theft Auto III. He then broke down the timeline: 1:30 credits, 2:45 cutscene 1, 10 seconds for the transition to gameplay. […] Your required viewing time: 2:55, and you’re into the game. Quite reasonable. Now it’s time to bring up the whipping boy — Metal Gear Solid 2.

No discussion of the quality of the writing in each game, the way writing is used in each game, or the effect it’s trying to achieve. There’s a single quantifiable measure of the quality and usefulness of game writing, and that’s oh my God are you still talking press A skip cutscene press A!!!!!.

The quality of writing in games in general, and my own writing in particular, still has plenty of room for improvement. We’re not going to get there by following the teachings of a caffeine-addled 14-year-old with attention deficit disorder.

Even those of us with normal attention spans don’t like to be barraged with reams of dialogue coming out of nowhere with no regard to pacing or story flow. But even a well-placed dialogue-heavy passage can be annoying if it goes on too long, for the simple reason that the writing in most videogames sucks. Why does the writing in most videogames suck? Mostly because so many people in game development consider it to be secondary to everything else, a necessary evil that must be tolerated, whose only virtue is its brevity.

Using films as an example, because “movies are our culture’s main shared storytelling experience, for better or for worse,” Skolnick leapt into discussions of the classic three act structure, delineating the acts and plot points of films before turning to the audience to suggest examples. At this point the class became a classic creative writing workshop at a basic level, so if you’re interested in pursuing the ideas presented here, you could easily find some books to read.

Or, you know, watch some movies or something. Whatever. They’re all the same three acts with plot points pretty much, for better or worse. The Matrix was pretty bad-ass. And you should watch Aliens, or if you’re making a game with gangsters instead of space marines, see Scarface.

One point Skolnick made that is worth emphasizing: the structure of games, with a series of levels building toward a climactic final boss encounter, maps very well to the classic act structure of continual conflicts.

I guess you could make a game that wasn’t just a series of levels building up to a final boss level, you know, bring some level of art and creativity to the storytelling process to tell an unconventional story, but you do that at your peril.

After discussing act structure, Skolnick moved into the Monomyth as presented by Joseph Campbell in Hero With A Thousand Faces and more latterly, Christopher Vogler in The Writer’s Journey, which he recommended as popular with Hollywood writers.

Thanks, dude. Hero With a Thousand Faces: let me write that down; I don’t believe that’s ever been recommended before. I tried getting through it one time, but it was way too long. I’m still trying to slog through The Old Man and the Sea.

Ready… Be fought against!

Previously on Spectre Collie, I got alarmed at what I saw as the rising sentiment against storytelling in videogames. The people on the message boards and blog comments kept saying that storytelling and interactivity are mutually exclusive, that story-based games aren’t games at all! And notable people like Will Wright were making proclamations that the old ways are dead, and sandbox games are the future.

I did the most sensible thing in response: I made a game that proves storytelling and gameplay can not only co-exist peacefully, but can support and enhance each other, turning videogames into the most engaging storytelling medium there is.

Wait, hang on — I didn’t do that, because really, who has that kind of time? Instead, I started writing a series of lengthy posts on a low-traffic weblog about it. And as it turns out, I was being a little reactionary. It’s never a good idea to interpret postings on message boards and comments on weblogs as being accurate, objective indicators of public opinion. And Will Wright’s championing sandbox games is about as alarming as Frank Miller advocating stories about whores.

Three of last year’s biggest releases — The Orange Box, Mass Effect, and BioShock — were mostly story-driven, and the two that I’ve played found ways to start innovating with storytelling in a big-budget, high-profile title. And if you look at the schedule for this year’s Game Developers’ Conference, you’ll see dozens of seminars about how to approach videogame storytelling. So either the field is still wide open for story-based games, or game developers will say anything to get a free pass to a conference.

Still, I know where my paychecks are coming from, and I do like to pontificate, so I’m going to keep on trying to debunk the Myths of Interactive Storytelling, responding to actual statements I have read on the internet.

Myth 3: Storytelling is inherently passive.

This one usually comes up whenever a Hollywood type announces plans to get into the videogame industry. They’re all doomed to fail, apparently, because movies and TV shows have nothing in common with games, and there’s nothing to be learned from passive, old-school media. Every time you try to apply the techniques of cinematic storytelling to a game, you’re killing the interactivity and stabbing a dagger into Mario’s heart.

The reason this is bunk is pretty simple: it assumes that communication between an artist and audience can only go in one direction at a time. In a movie, you shut up and watch while the filmmakers tell you a story. In a game, you’d like to get to playing at killing bad guys and saving the world, but the designers refuse to shut up and instead keep trying to tell you a story.


Pro Choice

Save/Harvest screen from Bioshock via Gamespot.com
It’s nice to get a day off, because it’s a hell of a lot easier to sit in front of a blog editor and pontificate about how games should be made than it is to actually make a game. (Even one that’s already been plotted out and designed for you, and you just have to make with the comedy jokes.)

So back to my ill-conceived quest to dispel the myths of videogame storytelling. Spoiler note if you haven’t yet played BioShock: I’m not going to explicitly say what the game’s big plot reveal is, but I am going to come just short of it. So if you want to go in unspoiled, avoid the rest of this post.

I still say that Half-Life 2 is the game to beat when it comes to videogame storytelling. BioShock does an outstanding job of raising the bar in writing, music, art direction, and getting at genuine meaning in a game’s story, but by the end, it’s dragged down by the weight of videogame conventions and a final act that invalidates the game’s climax.

By contrast, Half-Life 2 is completely, unapologetically linear, and it tells a much simpler and more straightforward story. But because they put so much energy into immersion, you’re more engaged in the story. The interface fades into the background, and the cutscenes and encounters seem to grow from the environment instead of being triggered by unseen level designers. Because the world surrounding you feels so real, your brain fills in the gaps and invents backstory for these situations.

Of course, whenever you put forward Half-Life 2 as a great videogame, you’re countered with the claim that it’s not a game at all. Because it’s completely linear. And as we all know,

Myth 2: Videogames are all about choices.

It seems trivially true: videogames are an interactive medium. Interactivity means choices. Therefore, the best games are the ones that give you the most choices, the ones that let the player completely shape the experience.

All of the pre-release hype around BioWare’s new game Mass Effect emphasizes the game’s choices. A marketing video released last month had the 1up.com crew getting all excited about how gloriously open-ended the game is going to be. In particular, there’s one playable character who has his own back-story involving an alien race, leading to a crucial climactic decision that affects the final outcome of the game. But you could ignore him on your first meeting and not see that entire storyline at all! What tremendous scope!

When I hear that, my first thought is, “What a tremendous waste of time.” Why go to all the effort of modeling, animating, and voicing a character, much less coming up with a back-story and lengthy passages of dialogue to support it, if that story is so superfluous it can be completely ignored by the player without harming the overall game?

Now, Mass Effect seems like a very cool game, and I’m very much looking forward to playing it. And BioWare’s put an emphasis on choices and branching narrative in all of its games — that’s their gimmick, and it works for them. I enjoyed Knights of the Old Republic a lot, even though I thought its light side/dark side choice was pretty trite and shallow, but there are plenty of other players who’d point to that choice as their favorite aspect of the game.

I’m not against the choice. I’m against the idea that branching narratives, multiple puzzle solutions, and multiple endings are the holy grail of videogame storytelling. The idea that all storytelling games would offer this kind of choice, if only they had enough budget and time. And the idea that a linear game fails to be a “game” because it doesn’t offer this kind of choice.

Rescue, Harvest, or (c) None of the Above

Most of the time I spent playing BioShock, I was divided between being impressed with everything I saw, and patting myself on the back for being smarter than the developers. All of the effort put into making an immersive game world was undermined by having bright colorful buttons over everything you could interact with, and a plot that dragged you forward with a big flashing yellow arrow telling you exactly where to go. And all that talk about moral ambiguity was silly, when your central moral choice was a simple binary “good” or “evil,” each with its own button and its own ending cutscene.
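(To put a point on how unsubtle that is: as far as the player can tell, the entire moral system reduces to something like the following. This is a deliberate caricature in Python, with made-up names; it is not 2K’s actual code.)

    def pick_ending_cutscene(sisters_harvested):
        # The whole "moral spectrum," as far as the player can tell:
        # one counter, one branch, two canned cutscenes.
        return "good_ending" if sisters_harvested == 0 else "evil_ending"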

What a shame, I thought, that they built the game around something as clumsy and unsubtle as that. And how frustrating, since you don’t have any real choice apart from that. Over the course of the game, I developed more empathy for the Big Daddy characters, the lumbering guys in diving suits who trudged around the levels making whale songs, who wouldn’t harm you unless you harmed them first, and who were completely altruistic, existing only to protect these little girls left in their care. And no matter whether you choose the “rescue” or “harvest” path for the Little Sisters, you always have to kill the Big Daddy.

How insightful I am, I thought, for realizing that the most interesting choice in the game is the one you’re not explicitly allowed to make. It’s like the theme of morality that’s central to Shadow of the Colossus, but in a game set in an underwater city with robots and magic spells and weapons and people who actually talk. Over and over again, you choose to kill these characters, without having any real idea of the implications; in fact, you’re probably coming up with more and more clever and efficient ways to do it.

(And as should be obvious by now, it turns out I’m not all that insightful; that’s one of the main points of the game.)

It Could Happen to You!

So I say that the greatest potential for videogames as a storytelling medium doesn’t come from choice, but from agency. The plot and themes of the game have importance because you are the one driving the story forward.

I’ve heard the complaint that a linear videogame might as well be a movie, since what you do in the game doesn’t affect the final outcome. But what separates the alien invasion story of Half-Life 2 from the one in War of the Worlds is that in Half-Life, you’re not just watching things happen to someone else, it’s all happening to you.

And what separates the climactic reveal of BioShock from the one at the finale of The Sixth Sense is that the reveal has significance only because of the things you’ve been doing in the game up to that point.

If it can be compared to any movie, it’s most similar to the scene in Rear Window when Thorwald finally discovers he’s being spied on. At the beginning of the scene, he’s a threat to Grace Kelly’s character, who we’ve been getting more and more attached to throughout the movie. But then he looks up, not at Jimmy Stewart’s character, our protagonist, but directly at us in the audience. It breaks right through the fourth wall, and injects the movie’s main theme right into your soul. It’s not a cheap-shock violation like The Tingler, but leaves you feeling simultaneously vulnerable and guilty for taking part in all this voyeurism. Being able to compare a scene from a videogame to one of Hitchcock’s finest moments is high praise for BioShock, and the effect is even stronger because we’ve got eight or nine hours invested in the experience at that point, instead of just one.

I’ve also heard the complaint that a linear game might as well be a book, which is interactive only in that the player decides when to turn to the next page. But in a game that’s designed well, the player has to know how to turn the page before he can continue. In the ideal book or movie, the “A-HA!” moment comes after something’s happened in the story.

But in the ideal narrative game, the “A-HA!” comes first, and only then can the story continue. For that to happen, the player’s got to be so immersed that he no longer feels that he’s being told a story, but that he’s the one doing the telling.

That can mean a branching narrative and multiple endings, sure. But it’s not a requirement — each ending you provide rules out an infinite number of other endings, so how is that really any less artificial than just providing one? Sliding Doors and Run Lola Run and “Choose Your Own Adventure” books all have their own audiences and succeed or fail to different degrees, but they’re all basically based on a gimmick.

If I have to decide between a game with 12 different but equally shallow story paths, and a game with a single, genuinely compelling, complex, detailed, and well thought-out story, the choice is clear.

Steady now, your genetic code is being rewritten.

The Xbox 360 demo for BioShock came out last night (a PC demo is in the works), and I tried it out for a little less than an hour. Key discoveries:

  1. It’s awesome.
  2. It looks like it may be able to live up to most of the hype it’s been getting.
  3. They weren’t lying when they claimed it was a shooter; the emphasis is definitely less on the character-building and stats of a “hybrid shooter/RPG,” and more on short action sequences. I haven’t played much of the demo, but I haven’t run into any real RPG-like stuff yet.
  4. I have a fundamental problem with first-person shooters on consoles. The controls for the 360 seem to be as straightforward as you can get, and I still felt like I was fumbling around. I’m getting the PC version.
  5. I was already sold by the time the opening sequence ended. But when I found out the first music you hear is a 30s-style jazzy version of “Beyond the Sea,” I pre-ordered the limited edition version with the soundtrack.
  6. I almost wish it weren’t a videogame.

I should explain that last one, but if you’re the type who’s avoiding any knowledge of the game before you play it, you’ll want to avert your eyes. Because I’m about to ruin the best part of the demo (and presumably, the full game).

I’ve been thinking a lot about storytelling in videogames, and occasionally pontificating about it on the internet. Watching the opening sequence of the BioShock demo, from the beginning to walking out of the bathysphere, I really felt like I was seeing a step forward. It wasn’t a huge spectacle like a Final Fantasy game, and there’s technically not much there that we haven’t already seen in Half-Life 2 and a dozen other first-person games. But in terms of how much they said and didn’t say, and how they seamlessly blended the interactive and non-interactive segments — you feel disoriented when you’re supposed to feel disoriented, frightened in the right places, confused in the right places — I was struck with the feeling that this is the kind of thing you can only do in a videogame. The plane crash gave me the same feeling as the “Lost” pilot, except the stakes are higher: you’re not watching Jack watch everything, you are Jack.

But then the videogame conventions started making themselves more noticeable. The ubiquitous radio exposition guy, telling you what to do. The pop-ups telling you what button to press and explaining the effects of power-ups (perfectly acceptable in a demo/tutorial, but still…). The barely-disguised save points. The events that are so obviously pre-scripted that you can almost see the trigger boxes. And somehow worst of all, the on-screen encyclopedia giving you back-story and explanations for the different concepts of the game.
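(A quick aside for anyone outside the industry: a “trigger box” is usually nothing more exotic than an invisible volume that fires a scripted event the first time the player walks into it. A minimal Python sketch, with every name made up:)

    class TriggerBox:
        """An invisible axis-aligned volume that fires a scripted event once.

        Hypothetical sketch; real engines dress this up, but the idea
        really is this simple, which is why you can feel them.
        """

        def __init__(self, lo, hi, on_enter):
            self.lo, self.hi = lo, hi    # opposite corners, e.g. (x, y, z)
            self.on_enter = on_enter     # callback: the scripted event
            self.fired = False

        def update(self, player_pos):
            inside = all(a <= p <= b
                         for a, p, b in zip(self.lo, player_pos, self.hi))
            if inside and not self.fired:
                self.fired = True
                self.on_enter()

    # e.g. TriggerBox((10, 0, 4), (14, 3, 8),
    #                 lambda: start_radio_chatter("welcome_to_rapture"))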

All of this stuff is necessary in a videogame, and faulting a game for having them is the most unfair of nitpicks. So I’m not faulting BioShock — having to decide between totally immersive storytelling and accessible gameplay, they chose the gameplay. I’m just left feeling a little disappointed and frustrated. Disappointed that it looks to be a fun first-person shooter with some great art direction and an interesting back-story, made by a bunch of guys who really liked Fallout and Half-Life and System Shock. Instead of a huge, revolutionary milestone in videogame storytelling.

And frustrated, because I know there’s got to be some way to do world-building and exposition, keep all the detail and back-story they’ve developed for BioShock, and present it even more subtly than the respectable job they’ve done here. I’m just stuck trying to think of what that would be, exactly. Would a player be able to tell the relationship between Little Sisters and Big Daddies without having some Irish guy in his ear, making it explicit? Would it help to have the main character talk to himself as in an adventure game, instead of insisting that he be silent like Gordon Freeman? How much do you have to show in the UI (current weapon, ammo, health), as opposed to relying on cues in the world itself? Can a game still be fun if you have to learn how things work by trial and error, and end up dying a few times?

You’ve unlocked… Rosebud!

But our childhood innocence is in another castle!

When I started writing about storytelling in videogames almost a month ago, I’d intended to turn it into a series, with more arguments and maybe even some examples more concrete than “videogame stories should be good.” To keep things going, I’ll take some of the comments I’ve read about the topic online (in blogs, articles, and on message boards) in the past few weeks, and offer up a rebuttal to each.

Previously on Spectre Collie…

To recap: writing and storytelling in videogames has traditionally been weak at best. It’s common knowledge — whether it’s accurate or not — that videogame stories suck, and that even the best of them don’t measure up to the worst movies and books. And the objection to cut-scenes and lengthy non-interactive segments has evolved into a whole school of thought saying that story has no place in videogames at all. According to this thinking, games are defined by interactivity and their game mechanics, and that’s all that’s important. Trying to bring aspects of other media into videogames has not only failed in the past, it’s doomed to fail forever.

I say that not only can you tell a good story in a game, but that story is important to games. In fact, it’s the only way that videogames are going to realize their true potential. Now, this requires a looser definition of “story” to make sense. It’s not just the narrative, or the premise, but everything that’s not purely the game mechanic: setting, characters, dialogue, narrative, and theme.

Myth 1: Videogames are Young

So the first myth about storytelling in videogames always comes in response to the whole “are videogames art?” debate, and you’ll see it repeated in this article from Wired. It usually goes something like this:

Somebody (Roger Ebert, say) asks why, if videogames are capable of art, there hasn’t been a great masterpiece worthy of comparison to the greatest works of film and literature. In other words, why is there no Citizen Kane of videogames?

Inevitably followed by the reply: Videogames and interactive entertainment are still a new medium, and developers are still figuring out how to use it. It took the movie industry decades to produce its definitive classics.

Which seems to me a pretty weak argument. A big deal was made on the internet about videogames’ recent 40th anniversary, and this article by Kyle Orland at Joystiq compares other media at their 40-year marks (using a somewhat arbitrary start date for each, which I won’t argue with here). By the standard presented in that article, it would seem that we’re still in pretty good shape, and we’re due for our greatest achievement in just a few years now.

But there are problems with that. For starters, the development of a medium of art or entertainment doesn’t take place in a vacuum. Looking back at the Joystiq article, compare the state of film after 40 years with that of TV, and it’s clear that TV advanced a lot more quickly. They list “The Flintstones” as one of the most popular shows at TV’s cut-off point, while at the same point movies had only just gotten synchronized soundtracks and introduced the Academy Awards.

“The Flintstones” as Postmodernist Masterpiece

While it may be tough to recognize today, “The Flintstones” was pretty experimental: an animated series airing in prime time that was itself a parody of an earlier series. Depending on how much credit you want to give Hanna-Barbera, either it was a postmodernist reference back to “The Honeymooners,” or the character types created in “The Honeymooners” were so established by that point that they were the default for a family comedy. Either way, it took a lot of evolution (no pun intended) and maturity before something like “The Flintstones” could even air, much less become one of the top series.

By that standard, games should have been maturing twice as fast as television did. And at least monetarily, that’s the case: the industry is making mad money, and game budgets are already rivaling those of movies. Production values are plenty high, too — there are plenty of scenes in Gears of War and Half-Life 2 that were more convincing to me than the effects in Starship Troopers and the recent War of the Worlds. The videogame business clearly isn’t pacing itself by the same schedule as movies & TV.

My biggest objection to the “games are still new” defense, though, is that artistic media are improved not just by time, but by milestones. You can’t just say that in x number of years, you’re due for your Wizard of Oz or Casablanca or Citizen Kane. The medium doesn’t really grow by evolution, but by intelligent design — you’ve got to have somebody who recognizes the potential of the medium, and then makes something that exploits it, showing the next generation what’s possible.

So just saying “give it time” doesn’t really cut it. The industry has got to put up or shut up. A better rebuttal to the question, “What is the videogame equivalent of Citizen Kane?” is to ask, “What’s so great about Citizen Kane?” It’s universally considered a classic, one of the greatest achievements in film. So what did it accomplish for movies as a form of art?

Orson Welles May Be a Hero To Most…

It’d be easiest for all of us if I could just say, “It’s the story. The end.” But it’s clearly not. “A reporter looks back on the life of an ambitious and powerful man to discover what his greatest desire was” is definitely a solid premise, but on its own it isn’t enough to warrant universal praise.

It’s not even the way the narrative is structured (part of the storytelling as Hanford Lemoore described it in an earlier comment, and thanks to him for bringing up the distinction). Setting up the central mystery of “What is ‘Rosebud’?” was a brilliant way to drive the story, and it’s one of the best-known conventions in the history of movies. But it’s also one that could’ve happened in any other narrative medium; it’s not so novel that you couldn’t do the same thing in, well, a novel.

And that is why people are still pointing to Citizen Kane as one of the definitive medium-defining movies: it takes a good story, and then tells it in a way that only a movie can. There’s the composition of shots that clearly and instantly establishes characters and the relationships between them (the iconic image of Kane in front of his campaign poster, and the careful placement of characters in the foreground or background to show Kane growing distant from the people close to him). There’s the breakfast table montage that shows Kane’s marriage deteriorating over time via the placement of the actors and the editing of the scenes. And there are all the match-on-action shots that give the movie the sense of jumping around in time. (And, more than that, served as a chance for Orson Welles to show off.)

You can use a lot of the same devices in other media — in comics, Watchmen gets a lot of use out of symbols and icons to show character, and out of images that carry through from one scene to the next. But a comic adaptation or novelization of Citizen Kane would fail if it attempted a direct recreation, just as the cinematic adaptation of Watchmen will fail if it just tries to film the comic book. Unless you exploit the medium to its fullest, doing the things that only that medium can do, you’re going to fall short of the medium’s potential.

“Games are not movies!”

Obviously, what videogames add is interactivity. And that’s the source of the whole debate: games just aren’t yet exploiting that interactivity as well as they could. Because they have all the same storytelling elements as movies (or television, in the case of episodic games such as Telltale’s hilarious Sam and Max series, available for the low low price of $34.95 for the entire first season), the tendency has been to make shambling Frankenstein’s-monster creations, stitching together cinematic sequences and interactive sequences that never quite meld. You either get games that periodically stop being interactive to make you watch a movie, or interesting story sequences that are held together by a predictable and uninspired game. And sometimes the most fun, perfectly-designed, pure games-for-their-own-sake games feel obligated to throw in some token effort at story, putting in an opening cutscene explaining that you’re playing Breakout to rescue a space princess from some evil galactic mega-corporation.

That results in the “Games are not movies! Down with story!” backlash. “Stop with all the pretentious ‘are videogames art?’ talk and just get back to asking ‘are videogames fun?’” Which is pretty unambitious. We already know videogames have the potential to be fun; the industry wouldn’t be making billions and billions of dollars and taking up hours and hours of our time if they weren’t.

But they’re capable of more than that. So why not try to achieve more than that? We can keep cranking out Unreal Tournaments, coming up with flashier and flashier versions of a game that’s undeniably fun but ultimately without purpose. Or, we could try to make interactivity meaningful. I don’t like the Grand Theft Auto series, for example, but I can’t deny that it was hugely significant in showing what you can do with a truly interactive environment. Now, what if you had a GTA with something of more substance than shooting hookers?

The various aspects of Citizen Kane — montages, staging, different types of editing — aren’t interesting on their own. They only stand out because they serve the story and its characters. Plenty of lesser movies use the same techniques, and they remain lesser movies. The cinematic elements alone don’t make it a great movie, just as a great game mechanic by itself doesn’t guarantee a great game. And even if you say that Kane is a case of form over function, that it’s only regarded as a classic because of the way it mastered the cinematic elements in service of a fairly simple story: what’s wrong with that? Why not apply that to games? Having that kind of filter during game development would be a great improvement over what we have now: imagine how many hours of frustration we could’ve avoided if developers had simply asked themselves, “Does this jumping puzzle actually serve any purpose in the overall story?”

It All Has Purpose

That doesn’t mean that every game has to mean something, any more than the existence of “important” movies means we can no longer have movies like Big Trouble in Little China. And I don’t want to live in a world that doesn’t have Guitar Hero.

It just means that we start rewarding the developers who try to move things forward. It’s not just going to happen naturally over time. You’ve got to have people who are willing to step up and experiment with how interactivity and narrative feed off each other, instead of being mutually exclusive. And they’ve got to be able to do it without hearing the old story about how it won’t sell as well as Quake or Madden. Half-Life 2 is experimenting with things from the linear, cinematic perspective, and it seems to be selling all right. And The Sims went at it from the pure game-mechanic/sandbox angle, but it still managed to make a subtle commentary on consumerism and the nature of storytelling, and I believe it made a few dollars for Electronic Arts. I wouldn’t call either of those the Citizen Kane of videogames, but they’re on the right track.

At present, videogames aren’t like the fresh-faced young high-school graduate finding out how to make his way in the world, just a few years away from his greatest achievement. They’re like the 40-year-old stoner who will maybe get around to accomplishing something, eventually. But for now, it’s easier and more cost-effective to just sit on the couch and watch shit blow up.

(While I was doing Google searches, I found this article from CBS News from last year, about the “Citizen Kane of videogames.” It gets perspective from a few people and then comes to many of the same conclusions. None of this is particularly new ground.)


Is there anybody going to listen to my story?

Viva la cutscene!

There’s been quite a bit written on the internets lately about writing and storytelling in videogames, and frankly, I’m relieved. I was starting to get concerned that everybody had just given up.

People have been bitching for years about how the writing in videogames sucks. Long cut-scenes suck. And nobody cares about that stuff anyway, because they just want to get back to punching and shooting stuff. But I always assumed that that was just because they didn’t know any better. Even the most hard-line defender of videogames has to admit that the state of the art has been pretty dismal.

As much as I hate to trot out the old cliches, they’re mostly dead-on: the focus has always been on technology and visuals, with story and writing as an afterthought. More often than not, the stories and dialogue have been written by programmers, or by artists and producers whose idea of high art is The Matrix. And even when publishers bring in the “real” talent, it’s usually been at the last minute. They’ll contract a science fiction novelist for the last couple months of development to write dialogue for their story about space marines with cybernetic implants and no memory of their past, and then act like that’s the highest achievement you can expect. And even that sorry level of non-commitment is reserved for the projects with the highest budgets; most titles haven’t aspired to the quality of even the worst television, or the best anime.

I always thought that if a game finally got it right, people would catch on. As evidence: the Old Man Murray guys positioned themselves as the curmudgeonly, anti-intellectual voice of the unpretentious videogame audience, and they’d frequently complain when games tried to get all uppity and pretend they were art. But you could still see their eyes light up over No One Lives Forever. And that game was a pretty standard stealth-FPS except for its storyline and some pretty clever dialogue.

Lately, though, a frightful rumbling has been going on across the weboblogosphere. Roger Ebert stirred up a shitstorm of angry, Halo-addled shut-ins when he said that videogames were incapable of being art, but he was hardly the first person to raise the question. It used to be that the question would result in a long and tedious debate about the role of “meaning” in videogames, with lots of knee-jerk defensive arguments about how the industry is still in its infancy and games must be judged on different criteria than other media and by the way have you seen ICO/Grim Fandango/Rez/Deus Ex?

But nowadays when you ask “are videogames art?”, the response is less likely to be that debate, and more likely to be “Who cares? Shut up! Nobody understands me. Fuck you!” Games aren’t supposed to be art, they’re supposed to be fun. The game mechanic is what’s most important. Human beings are better than any AI, so it’s all about multiplayer, and cut-scenes just get in the way, so get them out of my face and let me start playing.

It’s not just the undereducated gaming masses saying this stuff, either. At this year’s SXSW Conference, Will Wright delivered a keynote about Spore and procedurally-generated game content. The actual transcript of his speech is pretty even-handed, but if you were to go just off the recaps posted in blogs everywhere, you’d think the key take-away was this: developer-created stories are an anachronism. Player-created stories are the future. Every time you make a cut-scene, you’re crushing a child’s soul.

Now, Will Wright’s one of the only people working in games that I have no reservations about calling a genius. Even better, an entertaining genius. So there’s no way I’m going to go on record as flat-out disagreeing with him.

But I will say that taking his presentation as the Grand Unified Theory of videogame creation is a path doomed to disappointment. I don’t even believe that was his intention; he’s got his biases and preferences — in his case, playing meta-games, treating the released version of a game as a toy or sandbox to create his own version of the game. And the Spore model is not the way to make videogames, but a way.

Most troubling is this quote:

I wanna take the player out of the protagonist of Luke Skywalker, and put them in the world of George Lucas.

Of course, when you put people into the role of George Lucas, you end up with stories like those written by George Lucas.

That’s not just a slam against Lucas, it’s getting at the basic truth: making a good story is hard. Even the guy who made Star Wars and The Empire Strikes Back and Raiders of the Lost Ark isn’t guaranteed to hit it out of the park every time. Will Wright puts his own optimistic spin on it when talking about user-created game content and social networking sites:

But most of the content is not so good, and a smaller percentage is great, but as we give them better and better tools, we’ll increase the quality of what they’re doing.

Which is a nice thought, I guess, but even the best tools can’t create talent where there is none. You can get a copy of Photoshop and a fancy graphics tablet, and that doesn’t make you an artist. (I speak from experience here.)

But say you take a less pessimistic view, and assume that every person has a great story inside him that’s just bursting to get out, if only he could find the right tools. There’s still the question of inspiration. I want a game, or a movie, or a TV show, to show me something that I haven’t seen before. I want to see stuff that’s better than the stuff I’m capable of making. I like to think I’m a pretty imaginative guy, but the best story I’ve ever come up with in The Sims is one that I’d dismiss as pointless trash if I saw it on TV or read it in a book. And it didn’t teach me anything I didn’t already know.

Opening things up to multiplayer isn’t a cure-all, either. It just turns your story into “And then I was crouching behind that wall when I saw the red team getting closer to the bomb and I pulled out the sniper rifle right before he reached it and just nailed him but then a red guy came around and circle-strafed me with an AK-47 and I died.” Awesome story, I want to hear that one again. Current multiplayer games, even the ones with great game mechanics, are social experiences. Calling them a “story” would be like building a theater, getting the audience together, and then never putting on the play.

There’s absolutely nothing wrong with a well-designed multiplayer game, especially if it’s got a solid game mechanic underneath everything. But I know that games are capable of more than that; I’ve seen it. Wouldn’t it be great if I could play Counter Strike for a couple of hours and feel that I’d actually learned something, experienced something, or accomplished something greater than just getting better at Counter Strike?

I’d be the first to say I’m defensive about keeping story — real stories, not just settings or scenarios or interactive toolsets — in games. Obviously, the main reason is that I want to get the chance to make one, dammit. The work I’ve done that I’m proudest of has been writing for videogames, but it’s always been adding dialogue to other people’s stories. And those have been pretty traditional: applying what I like to think is a higher level of polish, but to an already-established format. I want to try to come up with a new way of telling a story in a videogame, one that shows more of what the medium’s capable of. If only to find out whether I really do understand how these things work, or if I’m just all talk.

The other reason is that I can tell you pretty much exactly when I decided I wanted to work in videogames. I was in college working on an ill-conceived art major after having given up on film school, and I bought The Secret of Monkey Island. I didn’t know anything about the game other than the demo I’d seen (a parody of the Citibank commercials running at the time), which was the first example of videogame material I’d ever seen that was genuinely funny, not just funny “for a game.” Now that I had the full game, I went through the opening, finished reading the first few bits of dialogue, and then exited the bar. The screen said “Meanwhile…” and cut to a scene on LeChuck’s pirate ship. They were miles away, talking about my character and what I’d been doing. The scene finished, and the game cut back to my character, standing outside the bar.

Baby’s first cut-scene, a magic moment. It’s pretty standard stuff now, and in fact is exactly the kind of thing that videogame fans have become jaded about and are now railing against, but at the time it was genuinely mind-altering. It had never occurred to me before that going into computer science didn’t necessarily mean a future of creative famine. Or that something on a computer could be as satisfying an experience as a movie or a TV show.

And I think it’s significant that the big moment for me wasn’t when a game gave me the freedom to tell my own story, but when it took control away from me and said: this is the kind of story we’re capable of telling.

That’s all preamble, and it’s already too long. Actual ideas about how to go about it will have to come in later posts. For background reading while you steel yourself for the coming onslaught of baseless conjecture, check out Ben Kuchera’s articles about storytelling in games currently running on Ars Technica, and Warren Spector’s article about next-gen storytelling in The Escapist.