At the time, that piece of lovingly crafted artisanal comedy got exactly the amount of attention it deserved, which was almost none. I’ve been making dad jokes since before I turned gray, so I’m intimately familiar with having friends and loved ones roll their eyes and go on about their business.
Last night, surprisingly, it got retweeted by somebody asking “are you kidding me right now?” I responded that yeah I was kidding, I’d thought it was obvious; I got a polite “sorry” that was appreciated but completely unnecessary; that was the end of that.
But then I saw another retweet, and then another, and another, all of them saying some variant of “is this guy for real smh” or “lmao” or some such. No big deal. Then I noticed that one of them exclaimed “FOUND IT,” which seemed weird, like it was part of an ongoing conversation. What was going on?
As it turns out, this was going on: a piece of investigative journalism on Buzzfeed mocking all the clueless racist people on Twitter and Facebook complaining about The Wiz. (Which, in a completely fortuitous coincidence for Buzzfeed’s site traffic, is airing live on NBC tonight).
As I said on Twitter, I’m not sure what aspect of that post bugs me more: the attempt to stir up a controversy, or the assumption that a gay man in his 40s would have never heard of The Wizard of Oz, or that one who grew up in the 70s had never heard of The Wiz.
If I’m honest, though, what hurts the most is the last bit, where he says my dumb joke is one that was already done on Glee. Manufacturing outrage for page views is one thing, but using screengrabs of Kristin Chenoweth in order to call me derivative is just cruel.
I won’t be even more derivative by going into a long explanation of why I think the outrage-as-engagement content mills are awful, because it’s already been covered elsewhere. There’s an entire site brilliantly parodying it (Wow). And a writer named Parker Molloy wrote “5 Things the Media Does to Manufacture Outrage”, which explains it as clearly as anything I’ve ever seen. (And is written in Buzzfeed format, which I’m assuming was a clever stylistic choice even if it wasn’t).
Here’s the thing, though: the “writer” of that Buzzfeed post blurred out photos and names — which I guess is at least something positive, or there’d be even more nonsense coming my way — but it’s still super-easy to find stuff just by searching for the body of the message. Computers, and all that. So I searched for the other ones. I only found two others, but it was immediately obvious that they were both corny jokes, too. It’s not even painstaking research, either. 90% of the time you can tell who’s a troll, who’s a batshit extremist, and who’s just making goofy jokes within 15 seconds of reading a twitter feed. It took Buzzfeed longer to blur the profile photos than it would’ve taken to do a quick scan for context.
But nobody bothers to scan for context, so it’s been a day of getting notifications of people calling me an asshole or an idiot. To be clear: it’s just been a couple dozen retweets and one cartoonishly overwrought jerk trying to pick a fight, which barely even registers on the scale of internet harassment. I am white and male after all, so I didn’t have anybody calling me fat or mocking my religion or ethnicity. Still, for somebody who gets about four or five notifications a day, it’s been a drag.
And it’s the laziness of the whole thing that gets me. Sure, making a corny and obvious joke is lazy, but short, ephemeral bits of nonsense are what Twitter’s for. It’s the laziness of serving up “content” that’s just a twitter search for a bunch of seemingly inflammatory tweets interspersed with TV show GIFs and sarcastic comments. The laziness of acting as if that’s really getting a handle on the cultural zeitgeist and making some kind of statement about social justice. The laziness of retweeting something without even taking a few seconds to look for context. The laziness of immediately assuming that people are impossibly crass, selfish, and stupid, so of course you’re going to give them a 140-characters-or-less piece of your mind. Just the willful incuriousness of not wanting to find out more about the thing you’re angrily responding to.
Of course, I do it too. I’ve been trying to do a better job of vetting stuff before I send it along, instead of just sharing and retweeting everything I see that pisses me off. But I’m still frequently happy to dive headfirst into the gears of the outrage machine, shaking my tiny fist at whatever the Blogging Illuminati have decided is to be the Controversy of the Week.
And why not? Companies have spent millions and millions of dollars to make it so easy. Reading stuff is a chore, but it just takes a fraction of a second to hit the RT or Share button. These days, they even serve up a convenient menu of stuff to be angry about. “That mostly forgotten actor from the 80s said what?! This aggression will not stand!” Engagement. Content. People may just be saying vapid, mean-spirited, insight-free nonsense, but that’s okay as long as they’re saying something.
I’m starting to think I had a better handle on it years ago, before I “learned” why Internet Activism is Important. I used to think it was futile to pull out the pitchforks and torches every time someone said something inflammatory on the internet. But over time, I was reassured that it was bringing about real social change.
Instead, though, it’s just created an environment where people treat the most vapid statements as if they were profound declarations. I’m taking a stand against hate! While I’m sure that’s a blow to the pro-hate lobby, it’s not actually doing anything.
And you don’t have to be a statistician to understand that a bunch of randos spewing shit on Twitter isn’t a representative sample of anything. Even if that Buzzfeed “story” weren’t weighted with corny liberals making clumsy attempts at satire, and it were in fact a bunch of clueless racists spewing toxic nonsense on social media… really, so what? Do you really want to amplify that crap, to act like it’s something that intelligent people should waste their time responding to?
It’s not a case of ignoring something pernicious, just hoping it’ll go away. And it’s absolutely not a case of ignoring harassment and pretending it doesn’t exist. It is recognizing the difference between meaningful engagement, and just looking for something to get pissed off about. Which is worse than a waste of time, because it gives a voice to ideas and opinions that don’t deserve it. Keep doing it long enough, and you create an environment where even the most basic human respect — like, say, not harassing women or people of a different religion — gets treated as if it were a controversial topic on which reasonable adults can disagree.
It used to be that I’d see people committing themselves to “think positive” or “promote good” or “be the change you want to see in the world” and think that they were being impossibly naive and sheltered. You can’t just ignore injustice! You’ve got to root it out, and fight it! But that just puts you in the mindset of always looking for a fight. And I mean that literally — I confess I’ve absolutely posted stuff on Facebook with the express intent of looking for someone to disagree with me, so I could feel like I’d accomplished something with my righteous conviction. And I’ve spent a depressing amount of time in my life ranting about Mike Huckabee, somebody who has no chance of ever being President or in any kind of influential position, and who just says things to get attention and piss off people like me.
So gradually, I’ve been starting to see the appeal of this whole “positivity” business. If you really want to see an end to bigotry, misogyny, and general awfulness, you could yell at awful people until they stop being awful. Or you could pledge to be as un-awful as possible, and spread that around instead of the nastiness. The latter seems a lot more fun, and a lot less error-prone. Learn to recognize who’s actually influential, and who’s just trying to manipulate people into thinking they’re more influential than they really are.
Having been both the yeller and the yelled-at in Twitter flare-ups, I can tell you that it’s completely unproductive both ways. At the risk of tanking the entire social media economy, I think it makes a lot more sense to just disengage from the outrage machine and spend more time celebrating people doing great things and ignoring the assholes until we starve them of oxygen.
Except for that pharmaceutical CEO guy who jacked up the price of that AIDS drug. He is just the worst.
Reluctantly coming to the conclusion that the computer I’ve always wanted isn’t the computer I’ve always wanted
It’s a reliable source of tragicomedy to see people working themselves into an indignant rage over gadget reviews. When I was looking for reviews of the iPad Pro this Wednesday (to do my due diligence), Google had helpfully highlighted some fun guy on Twitter calling the tech journalists’ coverage of the device “shameful.” The reviews themselves had hundreds of comments from people outraged that even the notion of a larger, more expensive iPad was an assault to everything we hold dear as Americans.
The complaints about the rampant “Apple bias” are especially ludicrous in regards to the iPad Pro, since the consensus has been overwhelmingly cautious: out of all the reviews I read, there’s only one that could be considered an unqualified recommendation. Even John Gruber wasn’t interested in getting one. (But he did still believe that it’s the dawning of a new age in personal computing; it’s still Daring Fireball, after all). Every single one of the others offered some variation on “It’s nice, I don’t have any interest in it, but I’m sure it’s perfect for some people.”
Yes, I thought, I am exactly those some people.
Designed By Apple In Cupertino Specifically For Me
I’ve spent the better part of this year trying to justify getting a smaller and lighter laptop computer. I’ve spent the better part of the last decade wanting a good tablet computer for drawing. And I’ve tried — and been happy with — most of the variations on tablets and laptops that Apple’s been cranking out since the PowerBook G4. (One thing people don’t mention when they complain about how expensive Apple products are is that they also retain their resale value exceptionally well. I’ve managed to find a buyer for every Apple computer or tablet I’ve wanted to sell).
I’ve tried just about every stylus I could find for the iPad. I tried a Galaxy Note. I tried a Microsoft Surface. I got dangerously excited about that Microsoft Courier prototype video. Years ago, I tried a huge tablet PC from HP. None of them have been right, for one reason or another.
But when they announced the iPad Pro this Fall, it sounded like Apple had finally made exactly what I wanted: a thin and relatively light iPad with a high-resolution display, better support for keyboards, faster processor, and a pressure-sensitive stylus designed specifically for the device. Essentially, a “retina” MacBook Air with a removable screen that could turn into a drawing tablet. The only way it could be more exactly what I want would be if it came with a lifetime supply of Coke.
Still, I decided to show some restraint and caution for once, which meant having the calm and patience to get one a few hours into opening day instead of ordering one online the night before.
I read all the reviews, watched all the videos, paid closest attention to what artists were saying about using it. The artists at Pixar who tried it seemed to be super-happy with it. All the reviews were positive about the weight and the display and the sound and the keyboards.
I went to the Apple Store and tried one out, on its own and with the Logitech keyboard case. It makes a hell of a first impression. The screen is fantastic. The sound is surprisingly good. It is huge, but it doesn’t feel heavy or all that unwieldy when compared to the other iPads; it’s more like the difference between carrying around a clipboard vs carrying a notepad. (And it doesn’t have the problem I had with the Surface, where its aspect ratio made using it as a tablet felt awkward).
And inside the case, it gets a real, full-size keyboard that feels to me just like a MacBook Air’s. It really does do everything shown in the demo videos. I imagined it becoming the perfectly versatile personal computer: laptop for writing, sketchpad for drawing, huge display for reading comics or websites, watching video, or playing games. (I’m not going to lie: the thought of playing touchscreen XCOM on a screen this big is what finally sold me).
But Not For Me
But I don’t plan to keep it.
It’s not a case of bait-and-switch, or anything: it’s exactly what it advertises, which is a big-ass iPad. The question is whether you really need a big-ass iPad.
The iPad Pro isn’t a “hybrid” computer, and Apple’s made sure to market it as 100% an iPad first. But it’s obvious that they’re responding to the prevalence of hybrids in Windows and Android, even if not to the Surface and Galaxy Note specifically. And I think Apple’s approach is the right one: differentiating it as a tablet with optional (but strongly encouraged) accessories that add laptop-like functionality, instead of as some kind of all-in-one device that can seamlessly function as both.
But a few days of using the iPad Pro has convinced me that the hybrid approach isn’t the obviously perfect solution that common sense would tell you it is. It’s not really the best of both worlds, but the worst of each:
Big keyboards: The Apple-designed keyboard is almost as bad for typing as the new MacBook’s is, which is almost as bad as typing on a Timex Sinclair. Maybe some people are fine with it, and to be fair, even the on-screen keyboard on the iPad Pro is huge and full-featured and easy to use. But for me, the Logitech keyboard case is the only option. And it’s pretty great (I’m using it to type this, as a cruel final gesture before I return it) but it turns the iPad Pro from being surprisingly light and thin into something that’s almost as big and almost as heavy as a MacBook Air.
Big-ass tablet: Removed from the case, the iPad Pro quickly becomes just a more unwieldy iPad. The “surprisingly” part of “surprisingly light and thin” means that it’s genuinely remarkable considering its processor speed and its fantastic screen, but it still feels clumsy to do all the stuff that felt natural on the regular iPad. It really wants to be set down on a table or desktop.
It’s not cheap: I wouldn’t even consider it overpriced, considering how well it’s made and how much technology went into it. But it does cost about as much as a MacBook Air. That implies that it’s a laptop replacement, instead of the “supplemental computer” role of other iPads.
Touching laptop computer screens is weird: Nobody’s yet perfected the UI that seamlessly combines keyboards and touch input. Even just scrolling through an article makes me wish I had a laptop with a touchpad, where it’s so much more convenient. When it feels like the touchpad is conspicuously absent while you’re using a device that’s essentially a gigantic touchpad, that means that something has broken down in the user experience.
Aggressive Auto-correct: Because iOS was designed for touch input on much smaller screens, it was designed for clumsy typing with fat fingers. Which means it aggressively autocorrects. Which means I’ve had to re-enter every single HTML tag in this post. And it still refuses to let me type “big-ass” on the first try.
It’s missing much of OS X’s gesture support: Despite all the clever subtle and not-so-subtle things they’ve done to make iOS seamless, it’s still got all the rough edges that come from never being designed for a screen this large. In fact, having your hands anchored to a keyboard goes directly against the “philosophy” of iOS, which was designed to have an unobtrusive UI that gets out of the way while you directly interact with your content. Ironically, it’s all the gesture recognition and full-screen stuff that made its way from iOS to OS X that I find myself missing the most — I wish I could just quickly swipe between full-screen apps, or get an instant overview of everything I have open.
No file system: This has been a long-running complaint about iOS, but I’ve frankly never had much problem with it. But now that the iPad is being positioned as a product that will help you do bigger and more sophisticated projects, it becomes more of a problem. I just have a hard time visualizing a project without being able to see the files.
The old “walled-garden” complaints: Apple’s restrictions aren’t nearly as draconian as they’re often made out to be, but they still exist. Occasionally I need to look at a site that still insists on using Flash. And the bigger screen size and keyboard support of the iPad Pro suggest that programming would be a lot of fun on this device, but Apple’s restrictions on distributing executable code make the idea of an IDE completely impractical.
Third-party support: App developers and web developers haven’t fully embraced variable-sized screens on iOS yet. (As an iOS programmer, I can definitely understand why that is, and I sympathize). So apps don’t resize themselves appropriately, or don’t support split screen. Some apps (like Instagram, for instance) still don’t have iPad versions at all. Some web sites insist I use the “mobile” version of the site, even though I’m reading it on a screen that’s as large as my laptop’s.
If You Don’t See a Stylus, They Blew It
For me, the ultimate deciding factor is simply that the Apple “Pencil” isn’t available at launch. They’re currently back-ordered for at least four weeks, and that’s past the company’s 14-day return window. Maybe they really have been convinced that the stylus is a niche product, and they weren’t able to meet the demand. Whatever the case, it seems impossible for me to really get a feel for how valuable this device is with such a significant piece missing.
The one unanimous conclusion — from both artists and laypeople — is that the Pencil is excellent. And I don’t doubt it at all. Part of what gets the tech-blog-commenters so angrily flummoxed about “Apple bias” is that Apple tends to get the details right. Their stuff just feels better, even if it’s difficult or impossible to describe exactly how or why, and even if it’s the kind of detail that doesn’t make for practical, non-“magical” marketing or points on a spec sheet.
Even though I haven’t been able to use it, I have been impressed with how Apple’s pitched the stylus. They emphasize both creativity and precision. There’s something aspirational about that: you can use this device to create great things. Microsoft has probably done more over the years to popularize “pen computing” than any company other than Wacom, but they’ve always emphasized the practical: showing it being used to write notes or sign documents. It’s as if they still need to convince people that it’s okay for “normal” people to want a stylus.
Part of the reason I like Apple’s marketing of the Pencil is that it reminds me of the good old days before the iPhone. Back when Apple was pitching computers to a niche market of “creative types.” It was all spreadsheets vs. painting and music programs, as clearly differentiated as the rich jocks vs the sloppy underdogs in an 80s movie.
I only saw a brief snippet of Microsoft’s presentation about the Surface and Surface Book. In it, the Microsoft rep was talking about the Surface’s pen as if he’d discovered the market-differentiating mic-drop finishing-move against Apple’s failed effort: unlike “the other guys,” Microsoft’s pen has an eraser. I’ve been using a Wacom stylus with an eraser for some time, and it’s always too big and clumsy to be useful, and it always ends up with me using the wrong end for a few minutes and wondering why it’s not drawing anything.
Meanwhile, Apple’s ads talk about how they’ve painstakingly redesigned the iPad screen to have per-pixel accuracy with double the sampling rate and no lag, combining their gift for plausible-sounding techno-marketing jargon with GIFs that show the pen drawing precise lines on an infinite grid. That difference seems symbolic of something, although I’m not exactly sure what.
The Impersonal Computer
I’ve been pretty critical of Microsoft in a post that’s ostensibly about how I don’t like an Apple product. To be fair, the Surface Book looks good enough to be the best option for a laptop/tablet hybrid, and it’s clear some ingenious work went into the design of it — in particular, putting the “guts” of the machine into the keyboard.
I’m just convinced now that a laptop/tablet hybrid isn’t actually what I want. And I think the reason I keep going back to marketing and symbolism and presentation and the “good old days” of Apple is that computers have developed to the point where the best computer experience has very little to do with what’s practical.
I get an emotional attachment to computers, in the same way that Arnie Cunningham loved Christine. There have been several that I liked using, but a few that I’ve straight-up loved. My first Mac was a Mac Plus that had no hard drive and was constantly having to swap floppy disks and had screen burn-in from being used as a display model and would frequently shut down in the middle of doing something important. But it had HyperCard and Dark Castle and MacPaint and the floppy drive made it look like it was perpetually smirking and it was an extravagant graduation gift from my parents, so I loved it. I liked the design of OS X and the PowerBook so much that I even enjoyed using the Finder. I tried setting up my Mac mini as a home theater PC mostly as an attempt to save money on cable, but really I just enjoyed seeing it there under the TV. Even a year into using my first MacBook Air, I’d frequently clean it, ostensibly to maintain its resale value but really because I just liked to marvel at how thin and well-designed it was.
I used to think that was pretty common (albeit to healthier and less obsessive degrees). But I get the impression that most people see computers, even underneath all their stickers and cases to “personalize” them, as ultimately utilitarian. A while ago I had a coworker ask why I bring my laptop to work every day when the company provided me with an identical-if-not-better one. The question seemed absolutely alien to me: that laptop is for work; this laptop has all my stuff.
Another friend occasionally chastises me for parading my conspicuous consumption all over the internet. I can see his point, especially since the Apple logo has gone from a symbol of “I am a creative free-thinker” to “I have enough money to buy expensive things, as I will now demonstrate in this coffee shop.” But I’ve really never understood the idea of Apple as status symbol; I’ve never thought of it as “look at this fancy thing I bought!” but “look at this amazing thing people designed!”
The iPad was the perfect manifestation of that, and the iPad mini even more so. Like a lot of people, I just got one mainly out of devotion to a brand: “If Apple made it, it’s probably pretty good.” I had no idea what I’d use it for, but I was confident enough that a use would present itself.
What’s interesting is that a use did present itself. I don’t think it’s hyperbolic to say that it created an entirely new category of device, because it became something I never would’ve predicted before I used it. And it’s not a matter of technology: what’s remarkable about it isn’t that it was a portable touch screen, since I’ve known I wanted one of those ever since I first went to Epcot Center. I think what’s ultimately so remarkable about the iPad is that it was completely and unapologetically a supplemental computer.
Since its release, people (including me) have been eager to justify the iPad by showing how productive it could be. Releasing a version called the “Pro” would seem like the ultimate manifestation of that. But I’m only now realizing that what appealed to me most about the iPad had nothing to do with productivity. I don’t need it to replace my laptop, since I’m fortunate enough to be able to have a laptop. And the iPhone has wedged itself so firmly into the culture that it’s become all but essential; at this point it just feels too useful to be a “personal” device. (Plus Apple’s business model depends on replacing it every couple of years, so it’s difficult to get too attached to one).
Apple’s been pitching the watch as “their most personal device ever,” but I wouldn’t be devastated if I somehow lost or broke the watch. My iPad mini, on the other hand, is the thing that has all my stuff. Not even the “important” stuff, which is scattered around and backed up in various places. The frivolous, inconsequential stuff that makes it as personal as a well-worn notebook.
Once I had the iPad Pro set up with all my stuff, I was demoing it to a few people who wanted to see it. And obviously with coworkers but even, surprisingly, when showing it to my boyfriend, there was a brief moment of hesitation where I wondered if I was showing something too personal. I don’t mind anybody using my laptop or desktop, or sharing my phone with someone who needs it, but I’ve got a weird, very personal attachment to the iPad. (And not just because I treat my Tumblr app like the forbidden room in a gothic novel which no one must ever enter).
It’s entirely possible that I’m in the minority, and whatever attachment most people have to “their stuff” is to the stuff itself in some nebulous cloud, and not the device that’s currently showing it to them. It’s even more likely that there’s simply no money to be made in selling people devices that they become so attached to that they never want to give them up. It may be that Convergence is The Future of Personal Computing, and one day we’ll all have the one device that does everything.
After using the iPad Pro, I’m no longer convinced that a big iPad that also functions as a laptop is what I want. I really want a “normal”-sized iPad that’s just really good at being an iPad. Which means adding support for the Apple Pencil to the iPad Air.
So I’m back to hoping Apple’s already got one of those in the pipeline, and waiting until it’s announced at some point next year, and then ordering one the second they’re available and then trying to justify it as a rational and well-considered purchase. Next time for sure it’s going to be exactly the computer I want.
The “problem” was that it was too good at the mood it was trying to establish. The tension of the series relies on the feeling of a city that’s irreparably broken, where the corruption goes so deep that it taints even the people trying to fight against it. It remains a solid series throughout, but it’s not a carefree, fun romp.
Now, I’ve finally finished watching the first season, and my opinion of it’s changed. Before, I thought it was really good. Now, I think it’s kind of a masterwork. If it just existed in a vacuum as a one-hour drama/action television series, it’d be really well-done if not groundbreaking; the hyperbole comes in when you consider it as an adaptation. Not just of a long-running series, but of a franchise and a format.
Really, what’s most amazing to me is that it exists at all, when you consider all the different ways it could’ve gone wrong. It could’ve collapsed under the weight of its own cliches, being unabashedly an adaptation of a comic book. It could’ve been pulled apart in any number of directions — too enamored of its fight scenes to allow for long stretches with nothing but dialogue, or too enamored of its “important” dialogue to realize how much storytelling it can accomplish with choreographed fight scenes. It could’ve quickly revealed itself as too derivative, or tried to crib too much from the Christopher Nolan version of Batman, considering that it’s based on a character that was already derivative. It could’ve suffocated from having its head too far up its own ass, being based on what’s maybe the most self-consciously “adult” of mainstream comics characters, and gone the route of “grim and gritty” comics’ facile understanding of what’s “mature.” It could’ve had performances that were too Law & Order for the comic-book stuff to read, or too comic-book for the dramatic stuff. The character of Foggy could’ve been so self-aware as to be insufferable, or the character of Karen could’ve been nothing more than a damsel in distress or a dead weight. It could’ve all been completely torn apart once they let Vincent D’Onofrio loose.
But it all works. (Almost). It’s a self-contained arc and a hero’s journey story and a tragedy and a character study and a crime drama and a martial arts series and a morality play and a franchise builder. It’s never so high-minded that it forgets to be entertaining, but it does insist that entertainment doesn’t have to be stupid. Yes, it is going to show you Daredevil fighting a ninja, but you’re also going to watch a scene that’s entirely in Mandarin, so don’t complain about having to turn the subtitles on.
If, like me, you were unfamiliar with the character other than at the most basic level — blind lawyer with super-senses who fights criminals with a cane that turns into nunchucks — then take a second to read an overview of the character’s history. And be impressed not only at how much they managed to retain, but how many horrible pitfalls they avoided.
My least favorite episode of the season — by far, since it’s really the only sour note in the entire thing that I can think of — is titled “Stick.” I had never heard of the character, but of course it’s from the comics. And of course it’s from Frank Miller, because it’s just an eyepatch and laser gun short of being the culmination of everything a testosterone-addled 12-year-old in the 80s would think is “rad.” As someone who was a testosterone-addled 12-year-old in the 80s, I can acknowledge this was a part of my past, but it’s not anything to be cherished, celebrated, or re-imagined. (Everybody was obsessed with ninjas back then. This was a time when Marvel thought they needed to make their immortal Canadian anti-hero with a metal-laced skeleton and claws that come out of his hands “more interesting” by having him go to Japan).
So the character of Stick is straight-up bullshit. It’s a perfect Alien 3-style example of not being able to handle what you’re given and instead, tearing down everything that came before in order to write about something else. Except even worse, because it tears everything down to replace it with something that is itself derivative: a sensei with a mysterious past in the form of a wise, blind martial arts master. (Except it’s the 80s, so he’s “flawed.” Which means he’s even more rad). It undermines the main character of the story by saying, “Here’s a guy who can do everything your hero can, even better than your hero can, and without the benefit of super powers.”
The makers of the series did the best they could. First, they cast Scott Glenn to come in and Scott Glenn it up. Then, they spun it the best they could, figuring out how to take the elements of the story that would fit into their own story arc: the idea that loyalty and connection to other people is a weakness, and the idea that it’s the choices Matt Murdock makes that define him as a hero, and not his super powers. (And then towards the end of the series, they have Foggy make a reference to how cliched and dumb the whole notion of a blind sensei is, so all is forgiven).
Throughout, there’s a respect for the source material that’s more skill than reverence. They understand not only how to take elements from the original and fit them into the story they’re trying to tell, but how and why they worked in the original. A lot of adaptations, especially comic book adaptations that try to move the story into “the real world,” are so obsessed with the first part that they lose sight of the second. I’m realizing now that that’s a big part of why Christopher Nolan’s Batman movies don’t work for me: they treat the characters and their origin stories as these disconnected bits of mythology floating around in the ether, without much consideration for how they originally worked and why they became so iconic. Especially with the last movie, it seemed to be more about mashing up familiar references instead of meaning. (Take that to its extreme, and you get a version of The Joker who has a panel from an iconic comic book about The Joker tattooed on his own chest).
But the Daredevil series takes stuff that was used as fairly empty symbolism in the comics — a vigilante in a Devil suit standing on top of a building overlooking a church — and pumps enough depth into it to make it meaningful again.
There’ve been so many “adult” interpretations of Batman that the whole notion of a vigilante hero has pretty much lost any tension or dramatic weight. Daredevil makes it interesting again. Even though it’s an unapologetically bleak setup, there’s still never a question that Daredevil is eventually going to win the fight. The question is what he’s going to lose in the process.
That in itself isn’t uncharted territory, and the series doesn’t attempt to explore the material by going all-in on realism. Instead, it takes all the familiar elements and symbols and fits them into a structure where they all support each other and build off of each other. We see every single character faced with temptation, and we see how each character responds to it. None of the stories are self-contained origin stories presented for their own sake; they all reflect on that idea of holding on to your soul despite any corrupting influences. Foggy isn’t just the comic relief character; he’s the constant reminder of the ideals they’re supposed to be fighting for. Karen isn’t a story of an innocent saved by a hero; she has actual agency, and she’s an example of how corruption can gradually and subtly chip away at the soul of a good person.
The villains are straight out of the Stock Gritty Urban Bad Guy warehouse, but as with the best comic book stories, they all reflect on some aspect of the hero and illustrate why the hero’s the star of the story. Some of the corrupt cops show what results when people try to appoint themselves as above the law. One of the cops’ stories shows how he succumbed to corruption out of a desire to keep his loved ones safe. The Russian mobsters are depicted as people who did whatever they had to in order to overcome a horrible upbringing. The character of Madame Gao seems to be about moral relativism, a rejection of the idea that there are good people who do bad things. The Chinese drug-smuggling ring is a rejection of the idea that corruption is passive; it seems to insist that people aren’t forced to do bad things but choose to, an idea that’s reinforced by Karen’s story. And the Yakuza aren’t used much for other than a bit of exotic intrigue and a ninja fight, but there’s still some sense of how a devotion to honor above all else is itself a kind of corruption.
Of course, the first season is as much Kingpin’s origin story as Daredevil’s, so his is the most interesting. And again, it takes what could be the often simplistic moralizing of “comic book stories” and pumps depth back into it. There’s a scene in which he’s dramatically reciting the story of The Good Samaritan that keeps threatening to go over the edge into a self-important super-villain monologue, where the writer is a little too eager to make sure you get the point of what he’s been trying to say. But when taken as the culmination of his story, it’s the climactic moment that marks his story as a tragedy. It’s fairly typical for writers and actors to say that the most interesting villains are the ones who see themselves as the heroes, so it’s fascinating to see this series try to take that a step further. They’ve spent the entire season letting us into Fisk’s head, building up empathy if not sympathy, showing us how he became what he is. Then they say, “Wouldn’t it be even more interesting to show him accepting and embracing the fact that he’s the villain?” And it is, because it suggests that his story is just getting started.
Even more interesting to me, in a 2015 adaptation of a comic book that originated in 1964, is how it shows Kingpin as a male character created and defined by women. (Maybe not that surprising, considering that the source material is as well known for its relatively short-lived bad-ass female ninja character as it is for its hero). Every defining moment of his character — from his childhood to the climax of his story — is in reaction to something done by a man, but driven by the decision of a woman. His mother covers for him and protects him. Madame Gao intimidates him and backs him into a corner, effectively forcing him to abandon his pretense of fighting for good. And Gao insisted that Vanessa was a distraction for him, when in fact she was helping define him: all of the aspects of his character that he was trying to keep hidden and keep her shielded from, were the very aspects of his character that most attracted her.
In fact, all of the female characters in Daredevil are defined by their agency, while almost all of the male characters (except Matt and possibly Foggy) are shown either as passive products of their environment or as characters simply living out their true nature. Ben Urich’s wife encourages Urich to stay true to his ideals, while acknowledging that being a reporter is simply in his nature, and there’s little he can do about it. Wilson Fisk tries to put a positive spin on his motivations, but both Vanessa and Gao encourage him to acknowledge that he’s doing it for power, not for good. Clare chooses to help Matt Murdock, and she’s ultimately the one who chooses how to define their relationship. There’s even an element of it with Foggy and Marci — he’s incorruptible by nature, while she has to actively choose to do the right thing.
When you step back and look at it as part of the overall Marvel franchise, it makes it seem even more that the freak-out over Black Widow was missing the point. The internet would have you believe that the issue comes down to the ratio of how many men she defeats vs how many times we’re shown her ass. The bigger issue (and I’m definitely not the first person to point it out!) is that the movies are so dominated by male characters that she has to represent All Women. And even in a comic book story, “strong female characters” aren’t about super powers or who’d win in a fight.
And still, the thing that impressed me the most in the first couple of episodes stayed true throughout: Daredevil is fantastic at maintaining its tone. Sure, dialogue-heavy scenes peacefully coexist with fight scenes, but it goes even deeper than that. Some of the dialogue-heavy scenes are entirely plot driven, while a fight scene is all about establishing character. Some of the scenes are about dramatic monologuing, while others are about more subtle implications and things left unsaid. There are several moments I would’ve expected to be spun out into multi-episode arcs, but are instead left lingering in the background: for instance, a particularly well-acted moment when Foggy realizes that Karen isn’t attracted to him in the same way she is to Matt. It’s fairly subtle and heartbreaking, and to the best of my memory, no character ever utters the despicable phrase “friend zone.”
Everybody knows Vincent D’Onofrio is great at playing a psychopath, but what I didn’t appreciate is that he’s so good at maintaining it. I would’ve thought that by spending so many episodes building up anticipation for his appearance, when he first explodes and kills a guy, they’d have used up all the value of that for the rest of the season. But he keeps it going for episode after episode, filled with rage and menace and perpetually just on the verge of boiling over. And Ayelet Zurer perfectly underplays Vanessa — never trying to compete with Fisk in bombastic scene-stealing but always conveying a sense of power and control. Once she starts making her motivations perfectly clear, it’s every bit as chilling as any of Fisk’s outbursts.
And there’s a scene where Foggy and Matt are fighting because of course there is; any story about a super-hero with a secret identity demands it. I was never particularly invested in their relationship, or unsure of how it would play out, so I thought the entire thing would be a rote case of doing what it needed to for the season arc and then moving on. But it’s so well-acted (and under-written) that it actually got to me. Matt sobs in the middle of a line, and it really feels like the entire weight of the season up to that point just came crashing down on top of him.
As always, it’s another case of understanding exactly how and why a scene works, instead of simply including it because it’s supposed to be there. I’m tempted to say this should be the template for every live-action adaptation of a comic book, but I honestly don’t know how much of it is reproducible. I am excited to see how it plays out in the second season and all the spin-off series. At this point, I’d even watch a show about Cable.
On the 90s and the Internet and how Kate Bush is amazing.
For a couple of months in 1990, I was completely obsessed with The Sensual World by Kate Bush. It didn’t last for too long before I moved on to be obsessed with The Pogues and the Pixies, and I’d completely forgotten about it until just recently. A few nights ago, YouTube recommended I re-watch Noel Fielding’s brilliant parody of Wuthering Heights, and I had a vague memory that oh yeah, I used to be kind of infatuated with her.
I was trying to remember my favorite song of hers based on a few half-remembered details — what’s that one Kate Bush video where she’s against a black background and flinging gold sparkles everywhere? — and a memory of the chorus but not the actual title Love and Anger. That meant stumbling around all her videos on YouTube trying to find the right one, and coming to a series of conclusions, in roughly this order:
Watching these now is like suddenly remembering vivid details from a dream I had 20 years ago.
Holy crap, Kate Bush is brilliant.
Even if I’d tried, I don’t think I could’ve fully appreciated all this stuff in 1990.
I’d forgotten how different the music industry was back before Napster and the ubiquitous internet.
I never thought much about how important context is to appreciating a work of art.
No really, she’s just the best.
Until I went to college in a city that prides itself on its music, I only listened to whatever was popular at the time. So when MTV started playing that video for Love and Anger and commenting on what a big deal Kate Bush was and how significant it was to be getting a new album, it was all lost on me. To me, she was just “that woman who sang on that Peter Gabriel song.” I had a vague memory of Running Up That Hill, but had just filed it away in the same folder as Bonnie Tyler and Total Eclipse of the Heart: a synthesizer-heavy pop song by someone who was apparently a lot more popular in the UK than in the US. I can’t remember if I was even aware of Wuthering Heights at the time; if so, I almost certainly dismissed it as someone screeching over overly-precious lyrics. Knowing myself at the time, I probably picked up The Sensual World mostly for the cover, thinking that she looked like Jane Wiedlin and sounded like Cyndi Lauper and was probably worth a listen.
The album is a lot more interesting and varied than the pop record I’d been expecting. Rocket’s Tail in particular is fascinating; I don’t believe I’ve heard it in 25 years, but it all came back suddenly as if it’d been looping constantly in the recesses of my brain. I also suddenly remembered why I didn’t become an obsessive fan back then, and it’s two of the most 1990s reasons imaginable.
One is just raw early 90s proto-hipsterism. I thought the song Deeper Understanding‘s story about a man who retreats from human contact into his computer was facile and paranoid, the same way a lot of stories from around the same time talked about virtual reality and rogue AIs but didn’t seem to understand how computers actually worked, and instead panicked about a super-advanced, cold cyber-world that looked like Second Life. I dismissed it as out-of-touch and irrelevant. (And of course, I’m saying that as someone who now wakes up every morning and immediately grabs his cell phone to check on Twitter and Facebook).
The other reason is that doing a “deep dive” on anyone’s work was, at the time, an investment. In 2015, I started digging around YouTube, Wikipedia, and Apple Music, and within a couple of hours had seen and heard 90% of Bush’s artistic output since 1978. It’s hard for me now to imagine a time without YouTube, much less a time before the web and even USENET, even though I was a computer-fixated nerd back when a 300 bps Vicmodem was a novelty. But essentially, in a time without hypertext, I didn’t have much chance to appreciate what I was listening to.
There’s a 2014 documentary from the BBC called The Kate Bush Story that’s a lot better than it would seem on the surface. It’s the typical VH1 format, where a bunch of celebrities gush about Bush’s work intercut with clips from her videos. It seems about as vapid as a Behind the Music or I Love the 80s show, right down to the sour note of ending with Steve Coogan making a pun about “bush.” And they have interviews with the usual suspects, where Elton John and Tori Amos and Neil Gaiman say that they’re huge fans of Kate Bush.
But of course they are, right? I’m not being entirely dismissive; I was an enormous fan of Amos and Gaiman (I wrote Gaiman a fan letter on the GEnie network! And he sent a personal response with a great story about seeing The Pogues in concert!). But even my vague awareness of Kate Bush as “British and feminine and lots of pianos and literary references and scarves and dancing” fits solidly and predictably into the same category as The Sandman and Little Earthquakes.
And speaking of USENET, I was aware even back during those days that there was a pretty substantial fandom around Bush’s music. But even there, it was named after a song that was relatively obscure in the US. So I thought of it in kind of the same way as Doctor Who pre-Russell Davies: super-popular in Britain and good for them! but completely inaccessible to me.
So to watch that documentary and see St. Vincent and Johnny Rotten and Big Boi and Tricky pop up to say how much her work influenced them, it didn’t just grab my attention, but put everything into a context I hadn’t considered before. And made observations that I wouldn’t have come up with on my own, but seem kind of obvious once they’ve been pointed out to me.
Ooh, it gets dark!
For instance, that Noel Fielding parody of Wuthering Heights. I’d seen it years ago, back when my obsession of the moment was The Mighty Boosh. At the time, I hadn’t appreciated that it was a parody of two versions of the video: the iconic one of her dancing in a field wearing a red dress, but also the studio version with its cartwheels and 70s video-trail effects.
I also hadn’t appreciated that it’s not a mocking parody but a reverential one. The joke isn’t about how weird or fey Bush’s performance is in that video, but that she’s the only person who could pull it off without looking silly. And for that matter, what a touchstone the performance was for her fans. (Proof that it’s not mocking but instead a love letter from a fan is that Bush included Fielding in her remake of the Deeper Understanding video that year).
It’s also about how iconic the imagery is, and how indelibly it’s associated with that song. In several of her early interviews, Bush says (paraphrased) that she studied dance, mime, stage production, and eventually, filmmaking, in order to make visual extensions of her songs. To someone raised on MTV and cynicism, that could sound pretentious or disingenuous — videos are promotional material used to sell music, and only inadvertently become artistic works. But then you remember that Bush was doing this before videos were a thing. And you remember how strong the imagery is: I’m about as close to the polar opposite of “waif-like” as a person can get, but I still find it difficult to keep from making the same gesture when I hear the lyric “let me into your window.”
And — even more embarrassingly for me — I hadn’t put any thought into what the song was about. As someone with the perpetual mindset of a teenage boy rolling his eyes at “girls’ stuff” like gothic romances, I hadn’t considered that it was the voice of a dead woman appearing at her lover’s window in the night, pleading to be let inside. So what I’d dismissed as just weird screeching was, of course, completely intentional. For a female songwriter and singer in 1978, The Man with the Child in his Eyes would’ve been a much more accessible debut song. And it would’ve been successful; it’s a beautiful and memorable song that, in my opinion at least, evokes Karen Carpenter’s considerable talent and holds its own. But she deliberately chose her first appearance to be literary and otherworldly.
This Woman’s Work
And she’s done that throughout everything that I’ve seen and heard. Her stuff is clearly influenced by whatever else is going on in music at the time, but there’s a sense that she won’t bother doing anything unless it’s something she finds interesting and unique.
When I first saw the video to Eat the Music from 1993, I thought I’d figured it out: ah, here’s where she went through her World Music phase just like Peter Gabriel and the Talking Heads and pretty much everyone else in the mid 80s through early 90s. But it’s gloriously sinister right from the start, with the lyric “Split me open with devotion, put your hand in and rip my heart out.” As the video goes on, it gets even weirder and more sinister, as the spinning becomes unstoppable, and the other dancer’s eyes roll into the back of his head, and it becomes so frenzied that everyone collapses. What I’d mistaken as a novelty song or a one-off becomes (obviously, in retrospect) a crucial part of a concept album about obsession and loss.
Rubberband Girl from the same album sounds a little like an early 90s Eurythmics song, and the warehouse in which its video was filmed is the same one that supplied the backing bands and ceiling fans for countless other 90s videos. But then there’s that choreography, which suggests that her seemingly effortless grace is actually the result of her being pulled, exhausted and against her will. And then it, too, descends into a kind of frenzy that belies the “bend without breaking” sentiment of the lyrics. She’s bound into a straightjacket and is compelled to wave her arms around, all filmed with the harsh light of a Twin Peaks murder scene.
Apparently, all the videos from The Red Shoes are from a long-form video that featured Miranda Richardson (she has a lot of videos that feature British comedic actors) and one of her early mentors, Lindsay Kemp, which explains the non-sequitur beginnings and endings. Hilariously, in a 2005 interview she describes it as “a load of bollocks,” while I’m here 10 years later trying to make sense of its bizarre transitions. Removed from that context, Moments of Pleasure is even more fascinating — starting with a whispered soliloquy and then showing nothing but her spinning and tumbling over a series of backdrops. It’s at least as beautiful and powerful a song as This Woman’s Work, but what’s most remarkable to me is how conversational, almost extemporaneous, the lyrics are. It seems like the natural impulse for a song about death and loss would be to make the lyrics flowery and poetic, but having something so prosaic against such a moving orchestration just makes it all the more real.
And speaking of This Woman’s Work, I suspect that the real reason I stopped listening to The Sensual World was that it was too exhausting. Even without the video, it’s hard to hear that song without feeling emotionally drained by the end. Cynical early-90s me dismissed it as “maudlin” to disguise the fact that it never fails to get a sob out of me.
Same with Love and Anger, which I still love but had thought was nothing more than a product of its time with a simple “we’re all in this together!” message. Paying even a little bit of attention to the lyrics shows it to be more sophisticated than that: I think it’s about passion and empathy, expressing even what we think of as negative emotions instead of keeping them repressed and “waiting for a moment that will never happen.”
In an interview around the release of The Sensual World, she said that it was her first album that was written from a feminine perspective, since up until then, all her musical and artistic influences had been men. Which, I think, is selling herself short, since so much of her entire body of work is uniquely feminine. In that BBC documentary, Neil Gaiman calls out the maternal aspects of the songs Breathing and Army Dreamers. The song that Americans around my age were likely most familiar with — Running Up That Hill — is a call for empathy disguised as synth-heavy 80s pop with some terrific choreography. (With the fascinating, slightly sinister twist of making it sound selfish with “let me steal this moment from you now.”)
“It’s in the Trees! It’s Coming!”
And then there’s Hounds of Love, which is so good that it kind of makes me angry that attitudes like the one 1990s me had kept it from taking off in the US and so I didn’t get to see and hear it until 2015.
The story I keep reading is that Bush was savvy enough to build on her early success from her first two records, to the point that she was able to free herself from the record label and do everything on her own terms. By the time of Hounds of Love, she was not only writing, singing, and producing her own music, but had built her own studio and conceived of and directed the video to the title track. It’s driving and cinematic and enigmatic, and it’s fantastic in the way that I usually think of Terry Gilliam’s movies as being. (And apparently, she collaborated with Gilliam on the video for Cloudbusting on the same album). In yet another interview, she casually mentions drawing storyboards for the video as if it were no big deal.
One of the reasons I admire St Vincent so much is that she’s able to go all-in on the conceptual art side of her work, and then in “real life” is as personable and down-to-earth as it gets. (Unlike, say, Bjork, who’s brilliant but whom I’d never, ever want to meet in person).
Kate Bush comes across the same way, as a person who pours all her imagination and idiosyncrasies into her work. This results in fantastic things like Sat in Your Lap from The Dreaming, which seems to me as early 80s prog rock as early 80s prog rock gets. And then this wonderful appearance on a British children’s show, where she says she’s lucky because she got to wear roller skates in her video, and she lets a little girl in the audience wear one of the minotaur masks.
It probably goes without saying that Kate Bush is objectively, almost impossibly, beautiful. But even that aspect seems to be something she always treated as incidental — great insofar as it helps the music, but never something that should take away from the music. Experiment IV, for instance, is a sci-fi horror story where she lets a bunch of comedic actors (and her then-partner) take the focus while she takes a bit part as a harpy and a horrible monster.
Most amazing to me is Babooshka from 1980. As with Wuthering Heights, she treats dance as a crucial part of telling the story of the song. She appears both as the scorned wife and as a wild-eyed Valkyrie. The first thing that amazes me about this video is imagining the concept stage: when coming up with ideas of how this alter-ego character would look, evidently Bush saw this piece of art by Chris Achilleos and thought, “Hmm, I bet I could probably pull that off.” The second amazing thing is that she totally does pull it off. And there’s absolutely no hint of pandering and zero sign of the Male Gaze. It has the vibe of an artist completely in control of her work, her appearance, and her sexuality.
Also remarkable to a viewer first seeing it in 2015 is Bush’s interpretation of the song at the time. I doubt it was ever intended to be a “deep” song, but I would’ve taken it as an indictment of the husband for discarding his wife once she was no longer young and beautiful. Bush’s take on it was entirely from the woman’s perspective, though; the husband was mostly incidental but sympathetic. Bush describes the song as being about the wife’s self-doubt and paranoia bringing about her own downfall.
At the risk of reading too much into it, I think that’s a perfect metaphor for Bush’s career. There’s a recurring theme of empathy and love and human interaction throughout her work, but never a sense that she’s defined by anyone else. The songs are inescapably hers, and even when she’s playing a character, it’s a character that she created.
And the final thing I find fascinating about Babooshka is that it sounds so much like an ABBA song. On every album of hers that I’ve heard, the sound is all over the place, reminding me at times of ABBA, the Carpenters, The Rocky Horror Picture Show, Peter Gabriel, Kirsty MacColl, the Eurythmics, Pink Floyd, Queen, and probably dozens more that I’d recognize if I had more expansive taste in music. (Not to mention artists like St Vincent and Tori Amos, who’ve declared outright that Bush was an influence on their own work). But even when you can place it in a specific time period, it never sounds derivative or pandering.
If she were pandering, there’d be no explanation for Delius, which is beautiful and memorable and undeniably, unabashedly weird. Or for that matter, the concept album second half of Hounds of Love, which has tracks that are just as melodic as anything from her singles, not to mention an Irish reel that would’ve made me a lifelong fan if I’d only heard it when I was in the middle of my obsession with the Pogues. But it’s not at all concerned with being commercial, and only exists as a purely personal expression.
Even when that expression isn’t high-minded or cerebral, and is just a matter of putting on costumes and goofing off with a bunch of friends and collaborators.
Stepping Out of the Page
So in other words: yes I said yes, I finally get it now. And what’s more, I wouldn’t have been able to get it in 1990. If for no other reason than I didn’t have Neil Gaiman to explain to me that the title track of The Sensual World was inspired by and referenced Molly Bloom’s soliloquy at the end of Ulysses, and I didn’t have easy access to Ulysses to get the significance of that.
And even if I had, I would’ve thought that the significance of that soliloquy is just a woman’s anachronistically frank and vulgar discussion of her own sexuality. I would have — and did — come to the vapid, simple-minded conclusion that it’s just about being “sex-positive.” But the whole significance is much more than that; it’s how the stream of consciousness is an unpunctuated torrent of the entirety of her experience: vaginas and religion and landladies and chocolates and paintings and gossip and cigarettes and cleaning semen out of sheets and castles and geraniums and breasts and flowers of the mountain. And how she says yes to all of it.
And I wouldn’t have had instant access to decades of a body of work, and all the articles and documentaries and interviews that interpret it and put it in context. So I couldn’t have fully appreciated how that soliloquy would be significant to someone who’d spent years pouring all of her work and energy into sharing her experience without much thought over whether it was commercial or even accessible but just that it was genuine and uniquely hers.
For almost every one of Kate Bush’s videos, I can instantly tell roughly when it was made, whether she was responding to the “look” of the decade or whether she was helping define it. This is mid-to-late 70s, that’s clearly mid-80s, that’s absolutely a product of the early 90s. The exception is The Sensual World, which is timeless. It could’ve been made last year, or it could’ve been dropped to Earth as a response to the Voyager disc.
I’d said that seeing it again recently was like vividly remembering images from a dream, and that’s still the case. But now that I’ve caught up with the people who’ve been lifelong fans of Kate Bush, the images are even more powerful. In that documentary, St Vincent describes Hounds of Love, Stefon-like, as “that thing where it burns like wildfire and then comes alive,” and Viv Albertine describes it like repressed sexuality, as if “the whole song’s on a leash, but you know it’s gonna escape and burst and run free.”
For me, it’s that tremendous moment of release in The Sensual World where she removes her headdress and is dancing barefoot in front of a field of flames. And seeing her confidently and effortlessly dance backwards down a moonlit path in a velvet dress is the most beautiful thing.
The Apple TV sure seemed like a good idea… at first!
On the surface (sorry), it seemed like Apple had made all the right decisions with its new product announcements yesterday. [For future anthropologists: new Apple Watches, a bigger iPad with a stylus, and Apple TV with an app store, and iPhones with better cameras and pressure-sensitive input. Also, the title of this blog post is a reference to something that happened a few months ago that nobody cares about now. — Ed.]
I’ve wanted an iPad with a stylus since before the iPad was even announced, so long ago that my image links don’t even work anymore! And I’ve been wanting a lighter laptop to use as purely a “personal computer” in the strictest sense — email, social media, writing, whatever stuff I need to get done on the web — and keep finding myself thinking “something like a MacBook Air that doubles as a drawing tablet would be perfect!” In fact, the iPad Pro is pretty close to what I’d described years ago as my dream machine but cheaper than what I’d estimated it to cost.
There’s been a lot of grousing online about how Apple’s acting like it invented all of this stuff, when other companies have had it for years. On the topic of pen computing, though, I can unequivocally say no they haven’t. Because over the years, I’ve tried all of them, from Tablet PCs to the Galaxy Note to the Microsoft Surface to the various Bluetooth-enabled styluses for iOS. (I’ve never been able to rationalize spending the money for a Cintiq, because I’m just not that great an artist). I haven’t tried the iPad Pro — and I’ll be particularly interested in reading Ray Frenden’s review of it — but I know it’s got to be at least worth investigation, because Apple simply wouldn’t release it if it weren’t.
Even if you roll your eyes at the videos with Ive talking about Apple’s commitment to design, and even if you like talking about Kool-Aid and cults whenever the topic of Apple comes up, the fact is that Apple’s not playing catch-up to anyone right now. They’ve got no incentive to release something that they don’t believe is exceptional; there’d be no profit in it. The company innovates when it needs to, but (and I’m not the first to say it): they don’t have to be the first to do something; they just have to be the first to do it right. And they’ve done exactly that, over and over again. The only reason I may break precedent and actually wait a while to get a new Apple device is because I’m not convinced I need a tablet that big — it’d be interesting to see if they’ll release a pen-compatible “regular-sized” iPad.
And if I’ve been wanting a pen-compatible iPad for almost a decade, I’ve been wanting a “real” Apple-driven TV set-top box for even longer. The first time I tried to ditch satellite and cable in favor of TV over internet, I used a bizarre combination of the first Intel Mac mini with Bootcamp to run Windows Media Center, a Microsoft IR remote adapter, a third party OTA adapter, and various third party drivers for remotes and such, all held together with palm fronds and snot. I’ve also tried two versions of the “hobby” Apple TV, relics of a time when Apple was known for glossy overlays, Cover Flow, and an irrational fear of physical buttons. Basically, any update would’ve been welcome.
But the announcement yesterday was a big deal, obviously, because they announced an App Store and an SDK. Which turned it from “just a set-top box” into a platform. That’s as big a deal for customers as it is for developers, since it means you don’t have to wait for Apple to make a new software release to get new stuff, content providers can make their own apps instead of having to secure some byzantine backroom deal with Apple to become a content channel, and some developers will come up with ways to innovate with the device. (Look to Loren Brichter’s first Twitter client as a great example of UI innovation that became standard. Or for that matter, Cover Flow).
And for games: I don’t think it’s an exaggeration to say that the iOS App Store has done more to democratize game development than anything, including Steam as a distribution platform and Unity as a development tool. Whether it was by design or a lucky accident, all the pieces of device, software, market, and audience came together: it was feasible to have casual games ideally played in short bursts, that could be made by small teams or solo developers, and have them reach so many millions of people at once that it was practical and (theoretically) sustainable.
I hope nobody expects that the Apple TV will become anywhere near as ubiquitous as the iPhone (or even the iPad, for that matter), but still: opening up development creates the potential for independents to finally have an audience in the console game space. It’d be like Xbox Live Indie Games and XNA, if all the games weren’t relegated to a difficult-to-find ghetto separate from the “real” games. Or like the Ouya, if they’d made a device that anyone actually wanted to buy.
Game developers love saying that Apple doesn’t care about games and doesn’t get how games work — as if they’d just inadvertently stumbled into making a handheld gaming device that was more popular than Nintendo’s and Sony’s. You could look at the new Apple TV the same way, and guess that while trying to secure deals with big content providers and compete with Amazon or “Smart” TV manufacturers, they’d accidentally made a Wii without even trying.
There’ve been enough game-focused developments in the SDK, and in the company’s marketing as a whole, to suggest Apple really does get it. (Aside from calling Disney Infinity “my favorite new Star Wars game”). But there are a couple of troubling things about the setup that suggest they expect everything on the TV to play out exactly the same way that it has on smartphones and tablets.
First is that the Apple TV has a heavy reliance on cloud storage and streaming of data, with a pretty severe limitation on the maximum size of your executable. They’ve demoed smartphone games on stage (Infinity Blade) that were 1 GB downloads, so it’s not inspiring to see a much smaller limit on downloadable size for games that are intended to run on home theater-sized screens. Maybe it’s actually not that big a problem; only developers who’ve made complete games for the Apple TV would be able to say for sure. But for now, it seems to suggest either very casual games, or else forcing players to sit through very long loading times. The latter’s been enough of a factor to kill some games and give a bad reputation to entire platforms.
Second is the emphasis on universal apps. They mentioned it at the event and just kind of moved on. I didn’t really think much of it until I saw this from Neven Mrgan:
Universal apps = haha no seriously good luck making money, folks.
You could take the most mercenary possible interpretation of that, which is what people always do once the economics of software development comes up: “Big deal! Having one app is what’s best for consumers! What’s best for consumers always wins, and it’s the developers’ responsibility to adjust their business model to enable that!” Also “Information wants to be Free!!!”
Except what’s best for consumers is that the people making great stuff can stay in business to keep making great stuff. And we’ve already seen on iOS exactly what happens when developers “adjust their business models” to account for a market that balks at paying anything more than 99 cents for months to years of development. Some big publishers (and a few savvy independents, like Nimblebit) came in and made everything free-to-play with in-app purchases. Maybe there is a way to make a free-to-play game that doesn’t suck (and again, Nimblebit’s are some of the least egregious). But I can’t see anybody making a believable case that the glut of opportunistic games hasn’t been a blight on the industry. I was out of work for a long time at the beginning of this year, and it was overwhelmingly depressing to see so many formerly creative jobs in game development in the Bay Area that now put “monetization” in the job title.
Believe me, I’d love it if one of these publishers went all-in on the Apple TV, and then lost everything because they didn’t take into account that they were pandering to a different audience. But that’s not what would happen, of course. What would happen is that a couple of the big names would see that they can’t just fart out a “plays on your TV screen!!!” version of the same casual game and still make a fortune off of it, so they’d declare the entire platform not worth the effort. And then smaller studios trying to make stuff that takes specific advantage of the Apple TV “space” would be out of luck, because there are no big publisher-style marketing blitzes driving people to the platform. You need a combination of big names and smaller voices for a platform to work: again, see XBLIG.
It just seems as if there’s no recognition of the fact that there’s a lot more differentiating a game you play on your phone from one you play on your television than just the screen size. It seems especially tone-deaf coming from a company like Apple, which has made a fortune out of understanding how hardware and software work together and what makes the experience unique. (Part of the reason that iOS has had so much success is that they didn’t try to cram the same operating system into a laptop and a smartphone).
At least the games on display showed evidence that they “get it.” The game demoed by Harmonix took advantage of the stuff that was unique to the Apple TV — a motion-sensitive controller and (presumably) a home theater-quality audio system. And even Crossy Road, which would seem like the worst possible example of shoveling a quick-casual game onto a TV screen and expecting the same level of success, showed some awareness of what makes the TV unique: someone sitting next to you playing the game, or at least having other people in the room all able to see something goofy happening on your screen.
I haven’t seen enough about tvOS to know if Universal apps are actually a requirement, or just a marketing bullet point and a “strong recommendation” from Apple. (Frankly, since I’m trying to make an iPad-only game, I’m ignorant of the existing requirements for iOS, and whether they restrict developers from releasing separate iPad-only or iPhone-only versions of the same software). So maybe there’ll be a market for separate versions? And somehow, magically, a developer will be able to release a longer, more complex game suitable for a home entertainment system, and he won’t be downvoted into oblivion for being “greedy” by asking more than ten bucks for the effort.
And there’s been some differentiation on the iPad, too. Playing XCOM on the iPad, for example, is glorious. That’s not a “casual” game — I’ve had sessions that lasted longer than my patience for most recent Xbox games — but is still better on the iPad because you can reach in and interact with the game directly. I could see something like that working — I’d pay for a game with lower visual fidelity than I’d get on Xbox/PS4/PC, if it had the added advantage that I could take it with me and play on a touchscreen.
So I could just be reactionary or overly pessimistic. But it’s enough to take what first seemed like a slam-dunk on Apple’s part, and turn it into an Ill Portent for The Future Viability Of Independent Game Development. As somebody who’s seen how difficult it was to even make a game in The Before Times, much less sell one, the democratization of game development over the past ten years has been phenomenal. And as somebody who’s finally realized how much some game studios like to exploit their employees, it’s incredible to be in an environment where you can be free of that, and still be able to realize your passion for making games.
The reason I first wanted to learn programming was being at a friend’s house, watching them type something into their VIC-20, and seeing it show up on screen. It was like a little spark that set me down a path for the next 40 years: “Wait, you mean I can make the stuff that shows up there, instead of just sitting back and watching it?” It’d be heartbreaking to see all the potential we’re enjoying right now get undermined and undone by a series of business decisions that make it impractical to keep making things.
Worst case, it’ll be another box that lets me watch Hulu. I was down to only eight.
First impressions of Until Dawn and the current state of story-heavy games
Until Dawn is a horror game about a bunch of dead-eyed teenagers in their mid-20s, none of whom have full control of their necks. The game is set in a dark, secluded ski lodge in the mountains in the dead of winter, where they’ve all gathered to celebrate the one-year anniversary of a cruel prank that resulted in the disappearance and/or death of two of their friends because why not.
Even though it’s emulating the style of horror movies, it’s inexplicably split into “episodes,” each with its own “Previously on Until Dawn” sequence to recap the stuff you just did 30 minutes ago. According to the episode count at least, I’m still only about halfway through the game.
Normally you’d experience an artistic work to its conclusion before you’d be arrogant enough to start critiquing it, but I’m not going to do that for two reasons:
1. Even B-grade horror movies scare the hell out of me, so I can only play the game in short, tense bursts where my heart’s racing and I’m not particularly enjoying it. By the time I actually finish the game, it’s probably not even going to be relevant anymore and I might as well be writing a navel-gazing analysis of the ludonarrative complexities of Night Trap.
2. I feel like I’ve already seen everything that interests me about the game.
What interests me is the way games are developing a unique language of storytelling. I particularly like trying to pick apart horror movies, because they have a built-in tension between active and passive storytelling that they share with narrative-driven games. (It probably helps that horror movies are so easy to pick apart, because they’re usually so direct in what they’re trying to do and say).
So this is, if anything, a set of first impressions rather than a review of the game. So far, the game hasn’t blown me away with its originality or any particularly brilliant achievement, but what it does have going for it is that it’s completely accessible and surprisingly compelling.
It’s weird to be An Old Person (in video game terms) whose first exposure to horror games was Uninvited‘s black-and-white, MacPaint-drawn rooms, but still be dismissive of Until Dawn as an artistic achievement. Just look at it! It’s got recognizable actors like Hayden Panettiere and Peter Stormare painstakingly motion captured and rendered down to their pores, walking around extremely detailed environments with dramatic lighting. Plus each of the eight characters is controllable at some point in the game, which would be a ton of animation work even before all the narrative branching were taken into account.
So it’s frustrating to think of all that work being undone by dozens of small, seemingly insignificant details. Like how the characters seem stiff and overly fidgety unless they’re in a canned cutscene, at which point they’re clearly walking around on a sound stage. Or the eyes that never focus quite right, or the necks that don’t move quite naturally. As well done as it all is, it still ends up feeling like a bunch of robots wearing rubber masks of the actors.
That feels to me like a problem of technology, though. (I can already hear the groans and see the eye-rolling of character animators and modelers reading my dismissive “Press button to make character look human” take on it). The environments don’t get off as easy, because that seems like a problem of design. And one that hasn’t been solved by any narrative-driven game I can think of.
The first iteration of Until Dawn was apparently focused on directing a light source with a PlayStation Move controller, and that’s very evident in the final game. There’s dramatic and atmospheric lighting throughout, with a flashlight or lantern cutting through the darkness, and it all makes for a very distinctive look. (One criticism I will make is that for every distinctive environment like the ski lodge or cable car station, there’s another generic one that’s lifted directly from the book of Early 21st Century Horror Movie Locations).
But the environments are detailed while by necessity having few objects that you can actually interact with. So important items are marked with a glint of blue light. Which means that a stray reflection off snow, or a strong specular highlight on a doorknob or vase or something, looks like an object of interest, and you keep getting taken out of the moment trying to figure out how to get to it and activate it.
On top of that are all the problems of level design that aren’t at all unique to Until Dawn: areas that feel like narrow corridors from point A to point B, rooms where it’s not obvious how far you can travel until you run into an invisible wall, spaces that give the illusion of being freely explorable but actually only have one or two areas of interest.
All of these problems are common, because they’re all fundamentally the result of having multiple design goals that are completely at odds with each other. Everything you do to encourage exploration and decision-making is going to kill your game’s pacing, and vice versa. Too few objects to interact with, and the environment feels barren and video game-like; too many, and you’re wasting time looking at incidental things that have no bearing on the plot, draining all the urgency out of the moment. The more you make a level “intuitive,” where the “right” way to go is the one that just feels right with no obvious clues, the less the player feels as if he’s actively exploring a space and making decisions.
It works the same way that the uncanny valley does for characters, since counter-intuitively, making things more realistic or more subtle just makes the problem worse. Playing Until Dawn has frequently reminded me of Gone Home, since they both have you wandering around dark spaces looking for things to pick up and turn over in your hands to get the next bit of environmental storytelling. Gone Home‘s objects and environments are obviously much less detailed and realistic than Until Dawn’s — whether out of intentional design or simply the fact that it’s a much smaller team and smaller budget — but it still has a better sense of place. It’s entirely likely that I’ve already spent more time in Until Dawn‘s ski lodge than the entire running time of Gone Home, but the latter’s house is the one that feels like a real place. I can still remember the layout of that house and where stuff happened, while I have no clear picture of how the ski lodge’s rooms even fit together.
It occurs to me now that “uncanny valley” is inherently optimistic; it just assumes that the problem will go away if we keep pushing forward. I’m starting to become skeptical. I’m sure that there’ll come a point in the future where it’s feasible and even practical to motion-capture an entire performance. Production on that type of game will become just like it is currently for linear media, and the software will be advanced enough to seamlessly blend between pre-recorded and procedurally generated movement in real time. In fact, after seeing how far “intelligent assistants” have come on cell phones in the past few years, I no longer think it’s unrealistic to expect CG actors to be able to understand natural language and respond intelligently.
But all that assumes that making things more realistic will solve all the problems, when we’ve seen time and again that interactive entertainment is a medium that rewards artifice and punishes realism. It’s Understanding Comics material: our brains are constantly looking for tiny, nitpicking details that will make something realistic seem “off,” while at the same time eagerly filling in the blanks on less detailed things to make them seem more recognizable and human.
To bring it down out of the clouds and back to a specific example from Until Dawn: most of the “teenage” characters are going for a completely naturalistic performance in both their voice delivery and motion capture, which ends up with lots of “ums” and “ahs” and overly-casual poses that just seem weird in comparison to everything else. It inevitably feels like a mannequin playing a recording of a real person instead of a real person.
Peter Stormare’s character, on the other hand, is played completely batshit crazy. He’s chewing the scenery so hard that he tears right through the fourth wall. It doesn’t feel at all real, but his character is still somehow the most compelling. Even though the lines he’s given and the questions he asks aren’t all that interesting, in my opinion. Most of the teenagers feel like disposable ciphers in comparison.
So I think the “problem” with going for something hyper-realistic isn’t actually in rendering or art direction, but in game design and narrative design. When I said that Gone Home does a better job of establishing a sense of place, I don’t think it’s because of its relative low fidelity, or even due to its careful and thoughtful level design (although both contributed to it). I think the main reason is that the game’s pacing allowed you to explore the space at your own leisure. You aren’t just dumped into a space and left on your own — it’s clear that a good bit of thought went into gating the sections of the house in a believable way and making sure that the revelations of the story could play out non-linearly and still make sense — but there’s nothing pressuring you towards the next story development except for your own interest and curiosity.
My biggest criticism of Gone Home is still the same as it was when I first played it: there’s absolutely no sense of player agency in the entire narrative. Everything interesting has already happened by the time the player’s game starts. And by the end of the game, it even seems to be mocking the player for wanting to participate in the experience.
Until Dawn is basically at the other end of the spectrum, desperate to remind the player how much the entire experience is being shaped around player choices. I was going to say that it fetishizes player choice, and it does so almost literally: instead of fetishes, there are vaguely Native American-ish totems lying around everywhere that dispense prophetic visions. (When each one gets discovered, the camera zooms inside, which makes for a hilarious image of a teenager picking up this weird totem and immediately slamming her face into it).
In case you’re unfamiliar with the concept of branching narratives, the game helpfully talks about the butterfly effect, then drives the metaphor home with an elaborate sequence where you fly over the veins in a butterfly wing. Each crucial junction point in the game is punctuated with an animation of butterflies. There’s a screen listing each of the story threads, which lets you page through your choices and see how they build on top of each other.
When I played Telltale’s Walking Dead series, I said that that game’s notifications of branching points (“So-and-so will remember that”) were a pleasant surprise. They seemed jarring, artificial, and clumsy at first, but in practice turned out to work like musical stingers. If we accept non-diegetic music in movies and don’t freak out that there’s suddenly a full orchestra in the shower with Janet Leigh, why dismiss non-diegetic notifications of in-story developments as being too “gamey?”
There are plenty of aspects of cinematic “language” that would be weird if we hadn’t spent a century being trained to accept them without a second thought. Cuts and montages are the most apparent, but even the way filmmakers compose shots is so deliberately unnatural that when we’re shown a scene framed the way a person would actually see it, it’s unsettling. In games, though, the tendency has been to reject all the game-like elements almost as if we should be ashamed of something so clumsy and primitive. Health meters have to be explained in-world as displays generated by your hazard suit, assuming they’re not eliminated altogether. Even Mario games have to explain that there’s a Lakitu following you around with a camera. Instead of developing a new language for games, it seems as if there’s a desire to hide the fact that they’re games as much as possible.
The obvious problem with Telltale’s approach is that they haven’t done anything with it. It was promising at first as an intriguing warning of story developments to come; after so much of it, it feels as empty as a jump scare. It’s just “a thing that these games do,” as if they’re not as interested in actual innovation in storytelling as they are in branding.
Until Dawn‘s notifications make Telltale’s seem restrained by comparison, but I think they work better as a result. When a story branch occurs or a clue is found, you’re shown exactly what happened, given a good idea of what it means, and you’re explicitly shown the junction points that led up to it.
That’s not to say that the choices are particularly interesting. So far, they’ve been all over the place — actual considered decisions are extremely rare. Most take the form of split second binary choices that don’t give enough information to judge against each other, e.g. “take the quickest route or the safest one?” Others let you slightly steer a conversation in one direction or another, which supposedly affects your relationships with the other characters. Others just go the Saw route and have you deciding between one horrible thing or another.
(When I bought the game, I’d forgotten that modern horror movies have been overtaken by stuff that doesn’t interest me at all or actively repels me, like all the torture porn franchises or the found-footage craze. I’d been expecting something more like 80s slasher movies or the Scream series. I’d still like to see more done with the Final Destination movies, because I think they’re relentlessly clever and it’d be interesting to see if it worked at all when made interactive).
As often as the game reminds me that characters can die as a result of my choices, I rarely feel like I’m making informed choices. But I’m not sure that that’s a failure of the game, because I don’t believe the game is trying to present a narrative built off of the player’s informed choices. I believe its ambitions are a lot more modest and straightforward. I believe it just wants to be a pastiche of horror movies, but with a simple layer of interactivity: instead of yelling at the screen “don’t go into that room!” you get to decide whether the character goes into that room or keeps going down the hallway.
In other words: it aspires to be a movie with some moments of interactivity, instead of a story-driven game that’s presented cinematically. The reason I believe that’s the case is because it uses the language of horror movies throughout, even at the expense of the game.
Sometimes, it works fine: the sequence in which Sam is exploring the lodge alone seems to be what the game was designed for, the standard scene from any slasher movie translated shot-by-shot into a video game.
Occasionally, it works so well that it seems too ingenious to be completely intentional: having multiple controllable characters is nothing new, but it turns out to be a perfect way to re-introduce cinematic edits into a video game narrative. Usually, games have to take place in some version of real time, and you’re either relinquishing complete control of the pacing to the player, or making the player feel like everything’s on rails and she’s in a shooting gallery. Thirty Flights of Loving is all about using cinematic cuts, flashbacks, and flash forwards in a first-person game, but it’s frankly tough to tell how much of the experiment could be translated to feature length. Until Dawn has no reservations about cutting away right as something interesting happens, but it doesn’t feel like missing time, or like the player’s had control ripped away from her, because she’s immediately given another part of the story to work on.
Most of the time, it just seems to be doing its own thing with the player input as something of an afterthought. You’re occasionally given something to open or push or flip over to read the back, but it doesn’t do much for engagement or immersion since you’re just following prompts. Same with several of the arbitrary binary choices: I can’t reliably predict what’ll happen if I choose hide instead of run, but I’m going to try it anyway. A lot of the quick-time button-press sequences, on the other hand, work surprisingly well. I despise QTE sequences on a philosophical level, but in this case, they add tension throughout — the usual “something is going to jump out and kill these fools,” but with the added stress of knowing that you could be asked to participate at any moment. Of course, these are even more random, unpredictable, and have no regard for agency: there are several scenes where Mike does a whole series of acrobatics in a cutscene. Or decides to shoot something, and even though I think it’d be a big mistake, I’ve got no option except to pull the trigger.
(To be fair: there are a few moments where the player’s choice not to do something is used for dramatic effect, and those are pretty well done).
But Until Dawn also insists on using some horror movie tricks that just definitively do not work in a video game, and it’s infuriating. The worst offender so far is the painfully long sequence of Mike and Jessica making their way through wooded paths up to a cabin. The game cuts away — over and over again — to show that there’s a strange person in the woods stalking them. We get just about every possible variation that’s been used in movies before: the Predator-style POV shot. The shot where the characters walk off-frame but the camera stays behind to show the stalker waiting in the woods. There’s a particularly asinine jump-scare shot where the stalker is suddenly visible in a set of binoculars right as Mike stops using them. They’re infuriatingly tone-deaf, because they act as if what works in a movie will work in a video game with absolutely no thought given to player agency.
It’s entirely possible for a player to know more than a character, and to get tension out of that. In fact, Until Dawn does an adequate job of it later on, starting off the sequence I mentioned earlier where Sam is walking through the house alone. The player knows for a fact that there’s a killer in the house, but really, that’s something the player’s known since scene one. In that case, having the audience know more than the characters works exactly the same way it would in a horror movie.
The sequence of Mike and Jessica walking through the woods cuts away so often and so clumsily that it goes past “frustrating” all the way to “insulting.” If you show me a POV shot where a weird dude is looking directly at the characters I’m controlling, and then immediately turn the joystick back over to me, of course my first inclination is going to be to walk directly to where the guy is standing and ask him what’s going on. If you show me a flash of a bad guy in a set of binoculars, of course I’m going to immediately try and use the binoculars again.
The fact that I think it works with Sam’s sequence but completely fails with Mike and Jessica’s may seem like a contradiction until you consider what role the player has in Until Dawn. For me, at least, I’m never playing as Mike or as Sam. I’m floating in limbo somewhere between the director of a horror movie and the movie’s audience. Maybe I’m a production assistant?
During Sam’s sequence, the character’s goal and the player’s goal are aligned: we both want to find out what’s going on. So even though I know more than she does, following the trail is the best course of action because I know something interesting will happen when I get there. During Mike & Jessica’s sequence, their goal is to get to some absurdly distant cabin to have sex. I have absolutely nothing to gain from their having sex. I’m more interested in when this story is going to finally commit to being a horror movie and make something happen, already. So introducing the threat and then repeatedly showing it and pulling it away isn’t cleverly manipulating the tension between what the audience knows and what the character knows. It’s just showing me the thing I want to do — bring on the confrontation! — and then yanking it away from me for no discernible reason.
The other day I caught the tail end of a conversation/skillfully-defused argument where a bunch of people were trying to call out Patrick Klepek of Kotaku for writing “Emily, Who Is The Worst, Deserves to Die in Until Dawn“. As far as I can tell (Twitter makes it difficult to eavesdrop on other people’s conversations these days), people were accusing the article, if not Klepek himself, of being somehow complicit in the horror genre’s long history of misogyny. Or maybe it was because Emily and her boyfriend are the only characters who aren’t 100% white in the game? Like I said: tough to tell exactly what the complaint was.
Regardless, objecting to lack of empathy for a character in a horror movie doesn’t just miss the point; I believe it’s even more ghoulish than the alternative. Obviously, there are volumes of material looking at the “problematic” aspects of the horror genre, and its treatment — both intentional and subconscious — of women, ethnic minorities, and gay and transgender people. But to suggest that the audience is expected to feel genuine empathy for any of the characters in a slasher movie is either a pointlessly broad rejection of the entire genre, or is seriously messed up.
Knowing more than the characters do is an implicit part of watching a horror movie, since from the start of the film, you know they’re probably going to die horribly. If you’re feeling sympathetic towards the characters and getting attached to them, I’ve got to wonder why you started watching the movie in the first place. “I love seeing three-dimensional, fully realized humans be murdered and/or horribly traumatized!” The distance and lack of empathy is what makes the movies work at all.
The stakes in a horror movie (no vampire puns intended) aren’t something horrible happening to the characters, but something happening to you. You’re going to get startled by the jump scare or the face suddenly appearing in the bathroom mirror. You’re the one who has to feel tense knowing the character’s walking into danger. You’re the one who’s going to have to see something gross and disgusting. Until Dawn isn’t subtle about manipulating this: it explicitly asks you what bothers you, then shows you exactly that a few scenes later. Consider it “enthusiastic listening.”
Manipulating the audience as much as the characters is something that translates particularly well to video games, because the audience is even more invested in what’s happening. Not in the characters, necessarily, but in their story and in the things they’ll have to see. And as much as I hate QTEs, they add that layer of being invested in what I’ll be expected to do.
So far, even with its faults, Until Dawn pulls it off better than any game I can remember since the first Silent Hill or Eternal Darkness. It doesn’t require breaking the fourth wall, but it does require being aware of the fourth wall and how to use it. It may be as simple as the fact that the game is structured not so that you feel in control of what happens, but that you feel responsible for it.
Whatever the case, what’s increasingly clear to me is how much of the experience of interactive narratives depends on artifice. It rewards explicitly exposing the mechanics instead of subtlety. It benefits from deliberate design instead of hyper-realism. So much of the marketing of games — which has inevitably taken over the design, at least outside of indie games — is focused on selling the idea of player empowerment. You’re in control! You’re making all the decisions! This fantastic world has been built entirely for you! That should’ve been setting off all our bullshit alarms, even before GamerGate happened and made it explicitly obvious what a shitty goal that was for a medium that’s striving to be artistic expression.
It’s definitely true for horror games, but I think it’s true for all story-driven games: players aren’t giving you their money so that they can be in control; they’re giving you their money so that they can have fun being artfully manipulated.
Wet Hot American Summer: First Day of Camp hits the right balance of high-concept stupid.
Few things are more tedious than over-explaining a bit of goofy comedy in an attempt to analyze how it works and put it in some kind of wider pop-culture context. One of those few things is the self-important “Am I the only one?”-style takedown of something, as if it’s a crisis of cultural degradation just because other people like something that you don’t.
I’m going to do both anyway, since Wet Hot American Summer: First Day of Camp is really neat. I don’t just think it’s funnier than the movie, I think it’s a lot smarter and even retroactively makes the movie better. Plus I think it may be the best example so far of how Netflix is really doing stuff that “normal” television can’t.
So yeah, to get it out of the way: I’ve never liked the movie. This weekend is the first time I’ve been able to watch it in full, and it was only thinking of it as preparation for watching the series that I was able to finish it. I liked the general concept behind it. I liked what they were trying to do with a lot of it. Paul Rudd is so innately charismatic that it’s impossible for him not to be entertaining in anything. But every time I’ve tried to watch it in the past, I’ve just gotten bored and frustrated.
To me, it feels like it treats being in on the joke as a valid substitute for actually making jokes. It looks like it’s going to be an absurdist non-parody of 80s summer camp movies, like Airplane! and Top Secret! were for Irwin Allen and World War II movies — where parodying the “source” material isn’t the point so much as using it as a jumping-off point for an absurd gag. But it’s made by people who’ve already seen Airplane! countless times and heard the gags and one-liners repeated incessantly over twenty years, so that even that is over-familiar. As a result, the fact that they’re not making the joke you’d expect becomes part of the joke. Surely this can’t be humorous.
But it does work, sometimes. It’s pretty much the same tone as Childrens Hospital, and that show is occasionally brilliant. The problems are that even at fifteen minutes, the show can feel meandering as it struggles to land a joke; and because it’s so far removed from wanting to parody its source material, there’s not much of anything holding it together. Apply that to a feature-length movie, and the effect is that I really wanted to like it, but it just felt flat. It often seems as if the fact that we all know what’s supposed to happen in this scene makes up for the fact that nothing really does happen.
I enjoyed the hell out of First Day of Camp, though, and it’s pretty much exactly what I’d hoped the movie was going to be when I first heard the concept. This article by Andy Greenwald on Grantland covers a lot of what I like about it. He also articulates that preoccupation with being in on the joke, but he calls it “sitcomity” and is a lot more charitable towards it than I am. He might also have explained why the series works for me where the movie didn’t: I’m an unabashed fan of Arrested Development, and maybe that’s just a sign I need to have the high-concept-as-basis-for-lowbrow-humor spelled out for me explicitly.
But whatever the reason, the high concept finally works for me. One of the implicit gags in the movie is that a bunch of actors in their mid-to-late 20s were playing teenagers alongside actual teenagers. Which on its own, especially when it’s presented without comment, is kind of funny. But when you’ve got the same actors in their 40s playing even younger versions of those characters, it’s hilarious. And then they take it a few steps farther, when Abby has her first period and becomes a woman, and when Lindsey goes undercover as a teenager at a summer camp even though she’s obviously in her mid 20s. (I actually respect even more that they’ve got Paul Rudd right there, and they still don’t even bother with the “he looks younger than he really is” joke).
Most of the “structure” of the series is built off that basic idea: paying off on jokes they started 15 years ago. Which makes it kind of a masterpiece of comic timing. Stuff that feels like it was probably the result of a random comment after a bong hit in the late 90s is now given an overly elaborate backstory and justification. Stuff that felt like a throwaway gag in the movie, or a desperate attempt to come up with a punchline for a scene, is stretched out and forced into the shape of an actual character arc. Even stuff that would’ve just been fanservice references to the movie (e.g. “Jim Stansel”) gets turned into sub-plot. It actually made me nostalgic for a movie that I didn’t even like all that much.
Plus, it looked like a ton of fun to make. They didn’t just get (as far as I can tell) every adult member of the original cast to come back, but they added what seems to be every single actor working in comedy (and/or Mad Men) today. The movie’s gotten a reputation over the years for being the first film for a lot of people who went on to become super-famous, so “he’s in this, too?!” becomes itself a sort of call-back. I got the sense from the movie that it might’ve been more fun to make than it was for me to watch, but the series feels like they’re letting me in on the fun.
And the last thing that impressed me was how well it was structured as a series. I read a comment online from someone saying it was basically just a four-hour movie, but I don’t agree. All of the series that I’ve seen on Netflix and other streaming services have been too beholden to either broadcast TV or movies: either they’re structured exactly like a series that was intended to be broadcast one episode per week, so binge-watching really does feel like an overload; or they’re structured like super-long movies somewhat arbitrarily broken into hour-long segments. First Day of Camp is the first I’ve seen that actually uses the format as a storytelling device instead of just an artifact of distribution. That familiarity with how episodic television works becomes part of why the story’s engaging (which is part of what Greenwald means by “sitcomity”).
The whole style of the series (and the movie, and every one of David Wain and Michael Showalter’s other projects that I’ve seen) is “punchline-averse.” It often seems as if they think the traditional structure of setup and punchline is such an obvious crutch that they’ll do anything they possibly can to avoid it. Including stretching a scene out for minutes by having the characters draw attention to the fact that they’re not delivering a punchline (like with The Falcon’s final scene, or the embarrassed teenager having to stand through a price check on everything except the condoms and lube). It can sometimes feel 1990s-style reactionary: we’ll comment on how tired and overused this thing is, without really putting anything in its place.
But each episode of First Day of Camp has a cliffhanger ending, a cold open, or both. They force the scenes to end on a big moment, and even in something that’s deliberately and self-consciously not meant to be taken at all seriously, it’s exactly what’s needed. For one thing, it just helps the pacing: scenes can still have funny moments piled on top of each other and veering off in different directions, but it doesn’t feel like the whole thing is just meandering while waiting for something hilariously funny to happen. (Plus the pacing is just better overall: possibly my favorite gag in the entire series is a blink-and-you’ll-miss-it shot to a sheet of paper on which someone has written “(PHONE) NUMBER”).
More than that, though, it feels constructive instead of dismissive and reactionary. It acknowledges that you don’t have to be genuinely, deeply invested in the dramatic developments of an intricately-constructed plot, but you can still be curious to know what happens next. And that, plus everything inherent to the concept of making a prequel to something you’ve already seen, meant that I did get invested. How were they going to take this ridiculous concept and pay it off? How would they get rid of this character who clearly wasn’t around by the time of the movie? How would they explain this setup that was directly contradicted later on? It doesn’t have to be meaningful or profound, or even make sense at all, for it to be satisfying to see how all the pieces fit together. It doesn’t have to be High Art, just basic storytelling.
Of course it’s possible for something to be so obsessed with working on an intellectual level that it’s not funny or interesting (see: this blog post). But you can also go the opposite direction, so averse to pretense and protective of being-stupid-for-stupid’s-sake that it just falls apart. For me, Wet Hot American Summer: First Day of Camp was just smart enough to be hilariously stupid.
What terrible reviews of Trainwreck tell us about the sorry state of pop-progressivism on the Internet
Trainwreck is reasonably (if not spectacularly) funny, and the most surprisingly brave thing about it is that it’s so often sincere, not that it’s so often raunchy. It’s also overlong, oddly paced, too reliant on celebrity cameos, and disappointingly reluctant to go over the top with its gags, especially since we’ve all seen just how amazing both Amy Schumer and Bill Hader can be when they’re free to go full-on bizarre.
What Trainwreck isn’t:
I’m not quite sure how anyone could have misread this movie as badly as they did. When the first reviews came out, a recurring complaint was that all the potential of Schumer’s breakthrough feature film starring vehicle had been Judd Apatow’ed: turned into a raunchy but ultimately conservative spin on a completely conventional movie format.
It wasn’t until the very last scenes of Trainwreck that I started to see why some people may have thought their America’s New Feminist Hero had been straitjacketed by a guy who likes to make movies about 40-year-old stoners getting happily married. It’d still be a dense and wrong conclusion, considering the rest of the movie, but it was just a simple misinterpretation that could easily be cleared up by one of my remarkably insightful blog posts.
But not only does Amy explicitly explain what the point of the final scenes was, Hader’s character interrupts her repeatedly to say “Yes, I get the metaphor.” She went out of her way to make sure her message was clear, but it’s still not clear enough for the faux-progressives.
Our Miss Schumer
Take for instance “Judd-ging Amy: The Slut-Shaming Heteronormative Morality of Trainwreck”, which, if the title didn’t already give it away, is written with the tone of someone who doesn’t understand that Los Feliz Daycare is a parody account.
In case you can’t make it past the part where he inexplicably puts “married” in scare quotes, the gist is that writer Peter Knegt and his diverse group of friends felt betrayed. They’re long-time devotees of Schumer’s stand-up routine and Comedy Central series, and for them, this was going to be their big event movie. (“…like I imagine various demographics might approach ‘Star Wars’ or ‘The Dark Knight.’” where “various demographics” is code speak for “straight nerds”). But Judd Apatow took Schumer’s slutty, boozy persona that they all identified with, and turned it into a judgmental and heteronormative morality play that “slut-shamed us and brought Amy Schumer along for the ride.”
It seems to throw the very people Schumer has been vouching for all these years under the bus with an essential moral that excess behavior will only lead to unhappiness and that we best assimilate into societal norms even if it doesn’t feel natural. Why would Amy Schumer — our Amy Schumer — want to express such a notion?
Okay, for starters, she’s not your Amy Schumer.
The basic premise of the entire article is more backwards and offensive than even the most willfully ignorant interpretation of anything in Trainwreck. It says that a successful woman at a huge breakthrough point in her career, who’s got her own television series (not to mention the pull and the sense of loyalty to cast her friends and family along with the people she admires), managed to write, star in, and co-produce a feature film, but simply couldn’t help but get steamrolled by a man who’s powerful in the industry.
Another thing I find “problematic” is the increasingly widespread trend of people so eager to take offense at something they find “problematic” that they forget how fiction works. So they insist that celebrities explain it to them, or else there’s gonna be a hell of a lot of think pieces about it on Salon. Knegt even acknowledges that Schumer’s slutty, boozy routine is an exaggerated persona. But he ignores that to go on for another page and a half, refusing to acknowledge that stand-up routines are painstakingly written and rehearsed performances, instead of just humorously-delivered affidavits.
For me, the reason this crosses the line from just annoying to downright infuriating is that Schumer has been so deft and clever at handling it without having to explicitly explain it. One of the most subtly brilliant things about her TV series (and which is carried on in Trainwreck) is that all her characters — even the wackiest and even the most offensive — are named Amy. That implies that they’re all, at least to some small degree, aspects of her. Which is huge, because it removes both the defensive distance that comedians usually keep between themselves and their subjects, as well as any sense of judgment.
That’s why my initial take on Schumer’s material years ago was so flat-out wrong: she’s not just a shallow gender-swapped, raunchy shock comic. She didn’t just combine Lisa Lampanelli’s “I can be as raunchy as any man!” schtick with Sarah Silverman’s “I play the part of a clueless white girl to make a larger point” and call it a day. The bulk of her material is carefully constructed to talk about multiple things at once, and she almost always includes herself as a target. It’s what elevates much of her material to satire instead of just gags. And it’s probably why Knegt and his friends have always felt that she was representing them instead of judging them.
I Feel Like I Won
As long as I’m draining all the humor out of things by over-explaining them, let me do it with the bit that Knegt quotes (in full) in his article, the one where Amy has to endure a bridal shower with a bunch of “Stepford Wives” from Connecticut.
Schumer adapted this joke into the storyline of Trainwreck with a couple of changes. It’s the changes that Knegt takes issue with, by — surprise — finding them “problematic:”
But the other, much more problematic difference is that it seems Amy doesn’t quite feel like she’s won the game this time. She even feels the need to call up the person whose baby shower it was and apologize.
Considering that he’s a self-professed fan of Schumer’s comedy material, it’s weird that Knegt would only acknowledge the change in wording (with a “fair enough,” as if it were arbitrary), and the addition of a scene afterwards, instead of taking into account how the context, subject matter, timing, and in fact the entire punchline changed. Here’s a few things that he either missed or didn’t acknowledge:
That joke is old, in stand-up terms. If you’ve heard a comedy bit enough times to have it memorized, you can be sure that Schumer’s heard it a thousand times more. And considering that Trainwreck isn’t a “best-of” concert movie, but instead a debut screenplay, you can make one of two conclusions:
The woman who’s co-written three seasons of a comedy series, years of stand-up sets, Comedy Central roasts, and countless smaller routines for hundreds of appearances, was either so in love with that one gag, or so hard up for material, that she just put in as much of the bit as Apatow and Universal would allow.
Amy Schumer’s really smart, and she reworked some of her older material to fit in with a larger message, to make it say something more than it did as part of her stand-up set.
I’m skeptical that even Judd Apatow was saying “Shit, early cuts of our romantic comedy are only 2 hours long. We need some filler material, quick. Amy: do your ‘Connecticut Stepford Wives’ bit!”
Schumer’s raised her own bar for shock value. Changing Amy’s contribution to the game wasn’t just arbitrary. “I let a cab driver finger me” just doesn’t have the same punch after doing a commercial for Finger Blasters with a bunch of teenagers. So there’s probably a reason it was changed.
The stand-up version of the joke is still funny, but kind of mean. At least by Schumer’s standards in 2015. Not undeservedly mean, because she’s making fun of her friend for being ashamed of her younger behavior, and making fun of the arrogant and judgmental women who’d try to shame her. But in that version of the joke, they’re exclusively the targets. The gag is “I really shocked the hell out of those uptight bitches.”
The old joke is still there. You still get to see the shocked expressions on Nikki Glaser and Claudia O’Doherty’s characters. (Which is itself funny, knowing that instead of bringing in the usual suite of blonde actresses hired to play the Stuck-Up Bitch role, they cast a bunch of women comedians). But it doesn’t end there. Schumer’s newer material builds on the assertions of her older stuff, adding more layers and more targets, but without losing what made the original gag work.
The timing of Schumer’s line completely changed. Now it’s more drawn out, into a vulgar (but still pretty funny) story about having to fish out a condom that’d gotten lodged in her cervix. After the “she just said something shocking!” moment, we get to see how she keeps pushing it just for the sake of making everyone uncomfortable. And the person she’s making most uncomfortable is no longer the friend who’s ashamed of her past and worried that Amy’s going to embarrass her. It’s her sister, who’s long been the butt of Amy’s jokes for living a “boring” “normal” life.
Amy’s line is no longer the punchline. Instead, that goes to the character played by Schumer’s friend Bridget Everett, who feels “empowered” enough by Amy’s story that she can admit to getting double-teamed by her husband and another dude. It’s telling, too, that Everett’s story is about a kind of sexual adventurousness, while Amy’s has been changed to be not about casual sex itself, but the tedious and kind of gross aftermath of it. That acknowledges something that wasn’t present in the old version of the joke: some of these women have their own wild-ish stuff going on too, without choosing between the polar opposites of “enjoying life” and “being married.” (It also shows that Schumer isn’t so wrapped up in her breakthrough starring vehicle that she won’t give good lines to her friends).
She doesn’t call her sister to apologize. It’s kind of a pivotal scene in the movie, in fact. Her sister calls her, Amy casually (but sincerely) apologizes, and her sister dismisses it as no big deal. Partly because she just knows that’s the kind of thing Amy does, and she understands where it comes from even if Amy herself doesn’t. But mostly because there’s something much more important to talk about.
What Schumer’s done is keep everything that made the old bit work, and then added a layer of empathy and self-awareness to it. The character of Amy had been so concentrated on saying “fuck anyone who tries to judge me” for so long, that she’d ignored how judgmental she’d become herself.
I think the funniest line in her “Last Fuckable Day” sketch is when Julia Louis-Dreyfus asks her “Are you that girl from the television who talks about her pussy all the time?” Amy looks absolutely elated and replies with a delighted “Yes! Yes! Thank you!”
By complaining that Trainwreck sold them out and is being judgmental of them, Knegt and his friends are saying they’re not interested in actually listening to anything that Schumer wants to say beyond the most superficial level. They just want to feel empowered by hearing her talk about her pussy some more.
But At What Cost?!
Now, if I went off on a tear every time a young writer for a queer blog found something “problematic,” I’d never get anything done. It’s the kind of thing they do, and I understand where it’s coming from even if they themselves don’t. But when I hear basically the same thing coming from a Pulitzer-recognized film critic, I worry that it’s becoming a trend.
What makes Knegt’s article such an easy target is actually part of what’s good about it: it’s completely honest in what it’s trying to say and why it upset him and his friends. And while he does ignore everything Schumer’s trying to say with Trainwreck in favor of how it didn’t meet with what he wanted to and expected to see, at least he does it by comparing it to her older work.
The Taming of Amy Schumer by Stephanie Zacharek is more worrisome because it not only ignores the fairly easy-to-read message of the movie, it compares it to a simplistic, two-dimensional, and frankly antiquated conception of what feminism is supposed to be. (Granted, it’s the Village Voice, so know your audience and all that. But still).
Zacharek gets off to a good start, lamenting how there’s an extra burden on women writers and comedians now that we’re living in the age of the “problematic:”
in the current climate of watchfulness — one in which every joke must be constructed and sealed drum-tight so as not to offend anyone, at any time — it’s not enough for a woman just to be funny. Women comics must also be spokespeople: for feminism, for all women, for anyone who might be perceived as oppressed or marginalized in any way.
Yes! So far, we’re in near-complete agreement. But then the entire rest of the review contradicts or undermines everything in that first paragraph.
Zacharek’s problem with Trainwreck, like Knegt’s, is that she believes the movie is too focused on conservative moralizing. And she too believes that it’s mostly the fault of the same man:
But there’s a much bigger, more insidious problem with Trainwreck: Schumer may be the writer and star, but Judd Apatow is the director, and in the end, you can’t escape the feeling that somehow Schumer’s vision has been wrestled into the template that nearly all of his movies, even the best ones, follow […] Apatow and Schumer probably believe they’ve made a feminist picture, but the reality is something different. This is a conventional movie dressed as a progressive one.
Complaining that the movie isn’t feminist enough while also asserting that Schumer’s will has been beaten into submission by Apatow is a pretty impressive double standard. I can only assume, naturally, that Zacharek’s original vision for the review was wrestled into the standard Village Voice template by some male editor.
(Hopefully, he’s also the one who thought “Don’t be a Hader” was a funny gag. Because if that’s hers, I don’t even know why I’m bothering).
Some of it I’ll assume is just tone-deaf instead of sexist: I’m skeptical that if she were aware of just how much of Amy Schumer’s material has been devoted to ruthlessly excoriating the bullshit, esteem-destroying standards of beauty in the entertainment industry, and how much she’s mocked her own weight gain, “baby fat,” and the men who’d call her “butterface,” Zacharek wouldn’t have described Schumer’s appearance as “like a Campbell’s Soup Kid.”
To illustrate how there’s an unfair added expectation for women in comedy to be funny and smart, Zacharek references another Voice piece about Inside Amy Schumer, and a couple of sketches from the show. But she only references the ones that went super-viral, and the reason that they went super-viral is because in addition to being funny, they were so overtly political that they were easy to interpret.
But the entire premise, that Schumer’s too occupied with being feminist to just let loose and be funny, is completely invalidated by the existence of Cat Park. Anyone who doesn’t think it’s funny to end a sketch by having a cat look into a microscope to develop a vaccine to save the world’s children is someone who just doesn’t understand comedy. I said good day, sir.
And more than that, the true genius of the series is how it takes an overt statement and then layers more stuff — from a point about feminism to some shamelessly goofy gag — on top. One of my favorites is still Love Tub, which is a parody of The Bachelor that wants to say more than just make the obvious assertion that The Bachelor is backwards, sexist bullshit.
In a lot of ways, it’s another expansion and evolution of the “Stepford Wives of Connecticut:” it’s still indomitable-spirit Amy sticking it to the squares and prudes. But the target is no longer just some concept of boring “heteronormativity;” the target is the corruption of that into a schmaltzy and insincere televised competition for a man’s attention. The guy’s creepy whispered “Congratulations” as he undresses the “winner” is still my favorite part.
Amy’s still doing her slutty-and-boozy-as-I-wanna-be schtick, but it’s even more exaggerated. She still, without question, gets to end the night saying “I think I won,” because she refused to take any of that bullshit seriously. But the coda takes it a step farther: you’re not supposed to watch the end of that sketch and conclude, “Now there’s an independent woman who’s entirely got her shit together.”
Still, for some reason, people went to see a movie called Trainwreck, and they went away feeling betrayed that it wasn’t intended to be aspirational.
Stop Me If You’ve Heard This One
Zacharek’s review of Trainwreck is a prescriptive piece of film criticism dressed as a progressive one.
It starts with the assertion that Schumer’s making an argument she’s no longer particularly interested in making, and then criticizes her for doing a lousy job of making that argument. Essentially, Zacharek is faulting Trainwreck for not being about Kim Cattrall’s character in Sex and the City (which began in 1998):
We think we’re getting a movie where a woman gets to enjoy the company of lots of partners, without remorse or shame, the sort of freedom men — some of them, at least — have enjoyed for centuries.
Or in other words, the same assertion that was the basis of Schumer’s stand-up routine for several years.
And this is despite the fact that every piece of promotional material before the movie’s release made it clear what the premise was: what happens when a character like that has lots of remorse- and shame-free sex and then falls in love with a boring, “normal” guy? That had to be in the press kit.
While Knegt sees it as a betrayal that Schumer’s not still doing her earlier, funnier, stuff, Zacharek’s holding up a lighter, yelling “Freebird,” and demanding a repeat of the deepest cuts from Ms. and Cosmopolitan-era feminism. Even after dismissing the idea that women can’t be funny as a “boneheaded dictum,” she goes on to let the counter-argument to that dictum frame the rest of the review. Women can be as funny as men! Women do enjoy sex!
It doesn’t matter that Schumer’s spent her career distilling complex observations about feminism and empowerment into two-minute long comedy routines. Why can’t she keep doing that? We just want to hear the same trivially true assertions repeated over and over again.
What Amy actually wants — Schumer or Townsend, take your pick — is pretty much irrelevant. You want to write a story about a woman whose self-destructive behavior is visibly making her life worse? What are you, some kind of prude? We paid our money to see a successful and empowered career woman (circa 1988) who gets to have it all and can be just as raunchy as any man. But instead of that, you went and wrote something conventional. So arrogant.
Also it’s not funny enough. You should smile more.
What’s especially frustrating in this case is that Trainwreck contains exactly the simple-minded gender-swapped romantic comedy that internet progressives crave. Amy works for a lifestyle magazine! (And it’s a men’s magazine! That’s run by a woman!) Bill Hader’s character is the over-achieving career guy who’s got it all… except love. Not only is he a surgeon who has every single famous athlete as a client, he also does award-winning work for Doctors Without Borders! Vanessa Bayer is Amy’s enabling, perpetually horny, commitment-phobic best friend. LeBron James is Hader’s supportive and nurturing best friend who’ll do anything to keep him from getting hurt.
In the age of feminism-as-meme-and-YouTube-series, that’s supposed to be enough. It doesn’t matter whether or not there’s any acknowledgment of context or whether it’s saying anything of substance: just look at it! Isn’t that something?! Like, subscribe, and retweet.
But the most interesting aspect of the basic premise in Trainwreck is that no one comments on it, ever. It’s just accepted as a given. I’ve been struggling to think of any instance in the entire movie where someone makes any reference to traditional gender roles, or makes any sort of comment that it’s weird how everything is swapped, and I can’t remember a single one. The only thing that comes even close is when Hader tells her he’s slept with three women, and the gag is that she replies “I’ve also slept with three women.”
In other words, Schumer is so uninterested in the argument that women can do everything men can, that she doesn’t even bother making it.
Strong Female Character
There’ve been sketches on Inside Amy Schumer that started with the premise of the gender swap, like the uptight office worker who finally breaks free of his inhibitions at an all-male version of Hooters, or the porn from a lady’s point of view that still turns out to be for men. (Note the pop-up ad for O’Nutters). An underlying message is that the swap is silly, because the context will always be completely different. The double standard is just too deeply ingrained.
Which turns out to be depressingly accurate, since in Trainwreck, Amy gets criticized for not even being able to be a lovable fuck up in the right way:
…her character in Trainwreck is at times so badly behaved — toward a man she supposedly loves — that it’s hard to be on her side. We shouldn’t have to approve of characters’ behavior; in comedy, especially, it’s more fun if we don’t. Still, we have to be mostly sympathetic to Amy for the movie to work, and if I were Aaron, I’d run a mile from her. […] Anyone, man or woman, can be an emotional bully. And in the end, it’s supposed to be a triumph that Amy is won over to the wonders of monogamy.
In the movie’s terms, we know she’ll never miss any of those other guys, because she never had much invested in them anyway. Trainwreck pretends to be frank about sex from a woman’s point of view, yet it refuses to reckon with how ferocious and unmanageable sex really is. A retreat into the safety of couplehood is the only possible future it can imagine, the necessary corrective to sleeping around. In its too-tidy universe, good girls don’t. And bad girls probably shouldn’t, either.
We already know that acceptable behavior in a romantic comedy would be creepy if not outright illegal when applied to real life. But there’s a much older fucked-up but universally accepted aspect of romantic comedies that’s even more insidious and more pernicious: the double standard. When men in romantic comedies (and real life) do stuff that’s callous, insensitive, selfish, or irresponsible, it’s a plot complication. We scramble for justifications: he’s just defensive or insecure. He’s been hurt in the past. It’s the age-old mantra for women everywhere: “I can fix him, I just know it.”
When Amy’s self-destructive behavior causes her to be insensitive or hurts people’s feelings, she becomes completely irredeemable and unsympathetic. Toxic. Avoid at all costs. Character flaws don’t just make her a bad person, but a bad role model for young single women and men everywhere.
Knegt’s article says it’s a “cringe-worthy montage” (and yeah, the montage aspect is pretty cheesy) when Amy tosses out all the booze and pot paraphernalia in her apartment. What he neglects to mention is that this scene comes after Amy gets upset over a break-up, drinks to excess, hooks up with a guy she doesn’t like at all, comes just short of being guilty of statutory rape and assault, and loses her job as a result of it.
In a later scene, she outright tells her sister that she’s not happy, and that she feels like she’s “broken.” The response from Knegt and his friends, apparently: “Sack up! Learn to deal with it, because you’re making the rest of us look bad.” It’s the kind of compassion that says a true friend is the one who holds your hair back when you puke while you’re drinking yourself to death.
And Trainwreck absolutely does “reckon with how ferocious and unmanageable sex really is,” just not in the too-tidy way that Zacharek wants. It says that one of the consequences of sex is that people can get hurt. That’s the entire point of John Cena’s character.
I think Zacharek’s read on the character — “somehow he believes they’re exclusive and is crestfallen to discover his mistake” — is totally at odds with what’s shown in the movie. It’s not “his mistake,” since it’s completely reasonable that he’d have different expectations from their relationship. And it’s not that he “somehow” thought they were more serious, since they’re going out to romantic comedies together. (Incidentally: the movie-within-a-movie was bafflingly pointless). As he says, having to declare that you’re “exclusive” is not something that adults do after high school, since they’re supposed to talk about it with each other and get a mature understanding of what they’re both hoping to get.
Their break-up is not at all ambiguous: she likes having sex with him (even if it is “like fucking an ice sculpture”) but had so little respect for him that it never even occurred to her to consider what he wanted. His last lines are explicit: “Fuck you, Amy. You’re not nice.”
Still, the script puts the blame on Amy but doesn't condemn her for it. She genuinely doesn't understand that he could've wanted something different, because isn't this just the way things are for everyone? If you're not married by your early thirties, it's because you're never going to be, because you don't want to be. That's just the way things work.
(To underscore that — or maybe it’s just a funny recurring gag, but I’m going to run with it anyway — there’s the suggestion that he might be gay and doesn’t even realize it himself. He’s just going through the motions of what he thinks he’s supposed to like and supposed to want).
Another of my favorite sketches from Inside Amy Schumer shows how men and women can have very different expectations after having sex. It'd be easy and simple just to say that the guy's a dick for taking advantage of her and then immediately forgetting about it. But the sketch is careful to exaggerate how much she's responsible for her own unrealistic expectations. Which says to me that whether she's playing the part of the emotional bully or the one being taken advantage of, either way she's going to be the one who takes the blame.
Ten Things I’m Not Saying About You
This time, Schumer’s getting criticized (albeit indirectly, since remember she’s apparently nothing more than a mouthpiece for Judd Apatow) for saying that “a retreat to the safety of sobriety and monogamy” is The Only Way.
Except of course she’s not saying that at all. The most didactic that Trainwreck gets about monogamy is to say that it’s nothing to be afraid of, and nothing to be dismissive of.
Typically, when a flawed character is criticized for being a negative representation of Everyone Who Ever Lived Who Has Any Recognizable Traits In Common, it’s because there’s a genuine lack of diversity. The character has to bear the weight of representing everyone, because there’s no one else in the story who can.
That's not the case with Trainwreck at all. Not only are there many types of women, there are many types of relationships. Tilda Swinton's character seems to be a fascinatingly bizarre take on Richard Branson, and she's callous, cruel, and just plain weird, but there's never even the slightest question whether she's exactly where she wants to be. Bayer's lecherous idiot doesn't just come out of the movie unscathed, she gets rewarded with a promotion. I already mentioned that Bridget Everett's character is happily enjoying married life in the suburbs with her husband and the other guy who double-teams her. Even in Chris Evert's cameo, she spends the entire time not-at-all subtly hitting on Hader.
And of course, the boring, uptight housewives are now even more boring and awful than they were in Schumer’s stand-up routine: now the scandalous secret is that one of them is sneaking a whole box of Skinny Cow ice cream at night. That’s like a whole ice cream!
As it turns out, people didn’t need to spend so much time worrying about what she was saying about them. On the day that Trainwreck opened, Schumer came right out and said what it was about:
Which, really, is the most offensive thing you could possibly say to some people: this isn’t about you.
At the beginning of the movie, Colin Quinn’s character is lecturing his two daughters about how monogamy is unrealistic. The humor comes from two places: that he’s dismissing monogamy as a fundamental concept when it’s completely obvious he’s just frustrated he can’t fuck around like he wants to, and that the two little girls are repeating what he says word-for-word as if it were a crucial life lesson.
Fast forward to the girls as adults, and we see that one sister has taken the lesson completely to heart and the other has rejected it. One sister is having plenty of remorse-free sex and partying and advancing in her career, while the other has settled down in the suburbs with a dorky guy and a heartbreakingly nerdy stepson. One sister is living exactly the life she wants to lead, while the other is just settling for doing what she thinks she’s supposed to be doing.
Can you see what she did there?
I don't know how much of the movie is autobiographical, just like I don't know how much of Schumer's stand-up routine is "true." Not only is it none of my business, it's almost completely irrelevant. Unless I need her to explain to me explicitly how much of it is satire so I can determine exactly how much offense I can take.
What I suspect, though, is that the finale of the movie is framed like a totally conventional romantic comedy sell-out moment, specifically as a pointed “fuck you” to anyone who’d dismiss it for being a conventional romantic comedy sell-out moment.
Throughout the movie, she’d mocked the men she was sleeping with, mocked her nephew, mocked her brother-in-law, mocked her sister for being boring, mocked her job for being beneath her, mocked herself for falling in love and becoming such a cliche, and mocked cheerleaders and sports in general as being stupid and pointless. In the end, she puts on the cheerleading uniform, does a cheerleading routine to a song she hates, and — as befits an empowered 90s woman — makes a run for the basket. The entire time, Hader’s character is telling her that she doesn’t have to do this, but she keeps doing it anyway. Of course she doesn’t have to do it, but she wants to.
And then, when she’s breathlessly trying to explain what it all means while he’s saying “Yeah, I get the metaphor,” is the first time since I Know Where I’m Going that I almost teared up at the end of a romantic comedy. Partly because Hader’s a good actor even when he is playing it totally straight, and the look on his face was one overwhelmed by sincere appreciation. But mostly because I was genuinely happy to see her be truly fearless and risk looking stupid to get what she wanted.
This Is What You Think Is Hot?
I said earlier that it’s disappointing that the sketches from Inside Amy Schumer that go viral are always the ones that are overt in their message, when there’s so much even better material that works on multiple levels. An exception to that is the one that went viral at the beginning of this season: Milk Milk Lemonade.
In the grand tradition of funny stuff that boring people like me love to write think pieces about to over-analyze: it’s a parody of Anaconda that wants to say more than just “Anaconda is kind of silly.” It suggests that women having the freedom to objectify themselves is a pretty shitty substitute for actual empowerment.
When Anaconda came out, everybody was stumbling over themselves to use terms like “sex positive” and “positive body image” and “owning your own sexuality,” trying desperately to put a progressive spin on a video in which a bunch of women writhe around in the jungle celebrating each other’s loaf pinchers before presenting them to Drake. Putting the whole thing over a sample from a 20-year-old novelty song was apparently supposed to be an example of “taking it back.” Inside Amy Schumer’s version responds, “Nah, I don’t want it. I’m good.”
Something that’s not mentioned in Schumer’s video (for that matter, I’m only assuming it’s parodying Anaconda in the first place): I’m going to call bullshit on any claims that Anaconda is positive or empowered when it spends so much time saying “fuck the skinny bitches.”
And that’s why I think “Milk Milk Lemonade” is kind of brilliant, and ultimately why misinterpretations of a romantic comedy I liked but didn’t love were enough to set me off on a few thousand words of rambling commentary. The video makes a pointed commentary, but it’s not particularly interested in condemning or even really judging anybody. More than anything else, it feels like Schumer wanted to dress up with her friends and have fun.
It’s gloriously, unapologetically juvenile. If it makes a statement about women owning their own bodies, it does so the same way a six year old makes a statement about owning a cookie by licking it before anyone else can — ha ha I ruined it for you! It treats the whole thing as completely silly, because it is silly. “My sense of self-worth isn’t dependent on whether or not a guy is turned on by my ass.”
But also: hey, if it’s your thing, knock yourself out. No need to get defensive because it doesn’t affect her. She’ll just be over here dancing with Amber Rose and Method Man because they seem cool.
To me, it shows just how much the culture of "engagement," retweets, trending topics, and think pieces has helped corrupt every progressive "social justice" ideal into a defensive version of "fuck the normals!" (And how that's always rationalized with some "they attacked us first!" justification like the inexcusably insipid "always punch up!") The goal of self-actualization has been de-emphasized in favor of just swapping one version of conformity for a different one. Inclusivity has given way to word-policing. The word "heteronormative" has been so casually tossed around as a pejorative that people now act as if "hetero" is the toxic part of it.
And every time some pinhead pipes up with an antiquated opinion, people stumble over themselves to correct it, or to at least show they are vehemently opposed to it. Not because it actually advances anything, but because it’s easier. At some point, we each have to decide how much of our lives we’re going to waste reacting to other people’s opinions of us. Otherwise we’re going to just keep having the same stupid arguments every 5 years until we’re all lying in our cryo-feeding tubes croaking “People can be whatever they choose to be!”
Amy Schumer gets to make her voice heard and waggle her ass in tight skirts. She gets to mock anyone who’d judge her for her looks and make fun of her looks for a ton of comedy material. She gets to write at length about cunnilingus and about a girl winning the heart of her One True Love. And she gets to do it without demeaning or mocking anyone who doesn’t deserve it, because they’re simply not a threat to her.
Some people may call it selling out, but I’m like, “Really? Because I feel like she’s won.”
Last week, we went to Universal Studios Hollywood and Disneyland to celebrate my 44th birthday. It was my third trip to Disneyland this year and no, you’re the one with the problem. I’d never been to Universal in Hollywood, although I’ve been to the Orlando version a few times.
Join me for a magical journey of memories and unsolicited opinions, won’t you?
Universal Hollywood is surprisingly fun. “Surprising” because I’ve always been an obnoxious Disney snob and thought of the Universal parks in Orlando as pale imitations. (Except for the Spider-Man ride, which is awesome). I still think it’s fair to judge the Orlando parks on that basis, since I think they’re clearly trying to compete with Walt Disney World. But Hollywood is its own thing, built up around a deservedly famous tram tour and functioning studio, and committed to making its own type of attraction.
The studio tour was the best part. I’ve been seeing ads for the tram tour for as long as I can remember — the queue area cleverly shows scenes from ads, promos, and movies that have featured the tour, establishing it as something “historic” in itself — and it didn’t disappoint. The fact that it’s a random assortment of highlights over the past few decades was a feature, not a bug, because it added to its charm. I’d just wanted to see the Bates Motel and Psycho house, so everything else was a bonus. The best aspect of it was that they got the “charmingly cheesy” tone exactly right: they don’t take anything too seriously or oversell it as a fantastic spectacle, but they don’t let it devolve into the Jungle Cruise, either.
The Kong 3D section of the tour was amazing. Easily my favorite part of the entire park, and, like the Spider-Man ride in Orlando, one of my favorite attractions at any theme park. The synchronization of the effects and the motion simulator was perfect, and more important than that: the show itself was designed to immerse the guests (and the tram) into the experience completely, with real pacing and an actual climax instead of just a sequence of effects.
The Rock gets it. This year's highlight (and honestly, the main reason we went) was the Fast and the Furious "ride," which turns out to be not so much a ride as the finale of the tram tour. It was fine, and fun, and appropriately campy, but it seemed a little too enamored of its "story" and effects and special guest stars to really work. The beginning was way too talky to set up what was just a "batshit crazy race through LA"; they would've been better off going the "King Kong fights monsters, the end" approach. Plus Dwayne Johnson was the only person who seemed to realize it was supposed to be goofy and fun instead of wry and extreme; he was clearly having a blast with it.
Universal is still lousy at crowd control. We were warned to get to the park obscenely early because of the Fast & Furious crowds. Being the third group of people waiting in line before the park opened seemed like a waste… until we tried to leave later, and were hit with an unstoppable wave of people just showing up and headed for the studio tour. Going early not only meant we got to avoid the crowds, but we rode everything we wanted to and were done before noon. It gave room for the Despicable Me and Simpsons rides to be charming and fun without having to be Big Event showstoppers. And there’s no way in hell I’m going near the Harry Potter land when it opens in Hollywood. Even in Florida, where they have plenty of space, it still feels overcrowded and claustrophobic; in Hollywood it’s going to be bonkers.
Universal should make an effort to take bigger guests into account. It was a lot more jarring and infuriating in Orlando, after being immersed in Disney's obsession with making rides accessible to absolutely everyone possible, to be confronted with an attraction that took millions of dollars to create but won't let you ride if you're too big. But even after going into the Hollywood park expecting it, it was still a drag to be jammed into small seats with uncomfortably tight restraining bars. We didn't even ride the Mummy coaster because the test seats ended up being too tight to be worth it.
Disneyland’s 60th Anniversary is more about the shows than the rides. The Matterhorn and Haunted Mansion got some new effects, and the newly-refurbished Peter Pan ride was doing a soft open for annual passholders (that we skipped because the lines were too long). But the highlights are the new World of Color show at DCA, and the fireworks and “Paint the Night” parade at Disneyland.
The Hatbox Ghost is excellently done. Granted, it’s something that’s aimed exclusively at Disney nerds, so it’s barely enough to be a draw on its own. But it fits so perfectly that it seems like it’s been there since the ride opened. And it actually kind of hurts the scene with the bride in the attic, which is something I never had any problem with before. But seeing a modern effect done in the art style of the original mansion makes it jarring to see the real photographs and live action video of the previous scene.
The fireworks aren’t what I expected, but are still cool. The 50th anniversary fireworks show is still the best fireworks show that Disney’s ever done (even better than Illuminations at Epcot). I’d been hoping that they’d do the same thing for the 60th, focusing on the parks and attractions themselves instead of being a treacly pastiche of songs from the movies. They kept it a collection of songs, but downplayed the usual dreams & wishes of magic and imagination and chose some songs that haven’t yet been overplayed to death, and also “Let it Go.” The architectural projection down Main Street is fantastic; chimney sweeps dance on the roofs in “Step in Time” from Mary Poppins, and the buildings wobble and shrink during “Heffalumps and Woozles” from Winnie the Pooh. The effects are so well done that they threaten to overpower the fireworks themselves — which is really only a problem if you’re only seeing it once instead of over and over again.
“Let it Go” has crossed the line to unsettling. As tired as I am of seeing Frozen stuff — the movie was completely charming, but Disney’s over-marketed it past the point of annoyance — including it in a show at the theme park was simultaneously cool and creepy. Being surrounded by dozens of little girls (and young women) (and older women) (and guys too) all singing in unison makes you realize that Disney could totally start a cult army if they wanted.
“Paint the Night” is the best nighttime parade they’ve done since the first. You’ve got to feel a little sympathetic to Disney, since they want to keep making new stuff, but the Main Street Electrical Parade (or “The Electric Light Parade” to those of us from the east coast) was so incredible that everybody just wants to keep seeing that. Previous attempts to come up with a replacement have been disappointing at best, but the new show is the first one that didn’t have me missing the old one. The floats and costumes all seem to be shooting for something between SpectroMagic and the Electrical Parade, and they all hit the sweet spot of weird enough to be imaginative but not so weird that they’re creepy. And having the characters ride around on modified versions of the old ladybug cars was a great callback.
I'd buy a copy of the "Paint the Night" song. I'm not a fan of the vapid One Direction-ification of Disney music, but if that's a mandate now, they did as good a job as they possibly could. What impressed me the most is that it's got enough "Baroque Hoedown" in it to satisfy old farts like myself, but not so much that it just feels like a rehash. And it's catchy as hell. Asking "When can we do it again?" over and over seems like a slightly less subtle version of the Mount Splashmore song. BREAKING NEWS UPDATE: Dave Cobb informs me that the song is already available on the Wreck-it Ralph soundtrack as "When Can I See You Again?" by Owl City. I don't remember it from the movie (or any of the music from the movie, actually), and I'd assumed it was written specifically for the parade. I'd still like to get the version that's used in the parade, mixed in with a lot of "Baroque Hoedown" and other songs.
World of Color is what I’d expected the anniversary fireworks to be. It’s more of a history of Disney and Disneyland, and it’s really well done. It does veer a little too far into preaching to the choir and comes off as a marketing push reminding us all how great Disney is. But for people like me who are more fans of the parks than of the studio, it has a section devoted to celebrating the classic attractions, with some new 3D animations projected onto Mickey’s Horror Wheel.
If Disney pays 4 billion dollars for something, they’re going to get their money’s worth. The section of the World of Color devoted to Star Tours starts out innocently enough, with the familiar chime and some audio from the ride. Then it inexplicably goes nuts and turns into a full-on ad for the new movie, with TIE Fighters swooping in and BB-8 rolling all over the place and the Millennium Falcon flying across the fountains and lasers and for some reason, a giant fireball. It was completely gratuitous and I loved every single second of it. At the theater in Downtown Disney, they had a teaser poster for The Force Awakens and I felt my heart rate increase along with a sinking feeling in my stomach that oh crap I’m a fan of Star Wars again.
It wasn't that crowded, surprisingly. I'd been planning to wait until after the summer to go to Disneyland again, since I expected the turnout for the 60th Anniversary to be so huge that it'd completely ruin the fun. But seeing all the pictures and videos coming in from the park was just too much for me to wait. As it turned out, it wasn't all that bad — in line with a busy day at Disneyland, but not obscenely crowded. There was a lot of stuff we didn't bother riding, since we'd been so recently, but nothing that felt like I was missing out. Still, I wish they'd get moving on the third park that's been rumored for decades: if the parks are so busy even on weekdays that they're considering charging extra for peak times, that's a clear sign that they're at capacity and it'd be a good investment to expand. (Note to Disney executive staff: I'm available any time to tell you how to run your business. Glad to help).
Inside Out reminds us that we can’t be happy all of the time, an idea that angered, disgusted and frightened me.
It’s taken the better part of 24 hours and three drafts of a blog post, but I finally have to begrudgingly concede that I liked Inside Out.
That’s not a review of the movie, since this isn’t a review. It’s just an unfocused — and completely personal — attempt to sort through the aftermath of the movie.
(And it doesn’t make any attempt to avoid spoilers, so it’s probably best to avoid this if you haven’t seen it).
If I were writing a movie review, I’d just cut-and-paste the review by Dana Stevens on Slate, because I agree with it completely, from the non-hyperbolic “astonishing” all the way to that killer of a closing sentence:
As Inside Out is aware to a degree that’s rare in kids’ movies, growing up is both a grand triumph and an irreversible tragedy.
The only part I’d take issue with is the suggestion that it’s a “kids’ movie,” even if it’s just used for contrast. Maybe that’d help put a little emotional distance between me and a movie, but lumping it in with “kids’ movies,” even in passing, just seems oblivious to what Pixar’s been doing for decades. They’ve built a well-deserved reputation by insisting on making deeply personal movies that try to focus on themes that are completely universal.
And Inside Out takes that one “irreversible tragedy” that is completely universal and submerges us in an extended metaphor that forces us to confront it head-on. Like the reconditioning scene in A Clockwork Orange, but instead of violence, it’s the loss of childhood.
The Toy Story 3 Scale
When early reviews of the movie started to pop up, I made an only half-joking request that reviewers include an indication of how likely it was to reduce us to heaving sobs. Crying at a Pixar movie is all but inevitable — I found myself tearing up at the storyboards for Brave — but I wanted to avoid something like Up's completely unfair sucker punch. I suggested a scale from Finding Nemo (bittersweet sniffling) to the finale of Toy Story 3 (complete emotional breakdown).
As it turns out, Inside Out affected me like the end of Toy Story 3, stretched out to feature length. It was too potent. It just left me feeling drained, exhausted, and pretty miserable for the next day.
It didn’t even feel like a cathartic “let it all out” venting, because there wasn’t a devastating but optimistic thanks for the adventure, or even the implied promise of new adventures with a new child and ongoing specials on ABC Family. It’s not that I think Inside Out was poorly structured or manipulative, but just the opposite. The “problem” is that I think it insists on being honest. The actual tear-jerking moments felt earned because they were an inevitable and integral part of the story. Which means that an uplifting “here’s how everything turned out great forever” would’ve felt artificial, too.
So instead, I interpreted it as a celebration of sadness as necessary and inevitable. Which may be true, and surprisingly mature, and exactly what I’ve been asking for as an alternative to what usually tries to substitute for a profound statement in “family movies.” But instead of a promise of adventure, the promise is… life as a relatively well-adjusted adult. I’ve seen how that turns out, more or less, and it’s not that great. There’s even the gag about the looming specter of puberty and the repeated question of “what could go wrong?” that seem — if not dark, exactly, then a little sardonic and defeatist.
“You’re going to be sad. A lot. It’s part of growing up.” It’s entirely possible that it’s just because my own headquarters functions better when Anger and Sadness are kept in check by the happy sprite of Wellbutrin, but I left the movie wishing it had been a more explicit, obvious, and artificial celebration of the grand triumph than an acknowledgement of the irreversible tragedy. That it’d let me keep on enjoying my already ridiculously overextended arrested development, instead of reminding me that “Growing up means that joy and optimism need to learn their place.”
Don’t Spoil Titanic For Me
Instead, they introduced (among other things) the character of Bing Bong, and as soon as it was clear that he was Riley's imaginary friend, we all knew exactly what was going to happen. Because I'm sitting in the audience, realizing that it's not just nostalgia for toys that I've put away or happy memories from childhood, but that I can't even remember the name of my imaginary friend. It played out less like an abstraction of a growing child's mind and more like a primary-colored version of Final Destination.
There's more subtle foreshadowing throughout. When we first get a glimpse into the headquarters of Riley's mom and dad, it's played for gags but has an undercurrent that felt like a slow-motion punch to the gut as all the implications sunk in. Dad's mind is run like a submarine at war, dominated by Anger keeping a tight check on any outbursts of emotion. And while the movie is still in the process of answering the question "what is the purpose of having Sadness?" we see inside Mom's head, where the emotions are sitting around like the hosts of The View, pining over a long-lost potential romantic adventure, and we have to notice that Sadness is clearly in charge of the show.
“Here’s what you have to look forward to, kids! Now let’s get back to the action and find out what could possibly be in store for this little girl’s brightly colored imaginary friend!”
As it turns out, there’s a good bit more to it than that. Using colorful abstractions to tell the story doesn’t just make it universal beyond the experiences of one little girl, but it also allows the movie to make some pretty profound observations without stating them explicitly. So I’m going to do exactly what I’ve resolved not to do, which is to be reductive about the “message” of the movie. Simply because it took me a while to parse through everything I think it says and think it implies.
I also just want to call out some of the decisions that make Inside Out astonishing, since the movie doesn’t draw that much attention to them.
On the technical side, Pixar has progressed to the point where I'm too much of a layman to even identify what's remarkable. It seems like every feature has required at least one big technical breakthrough, but usually they exploit the hell out of it — if not showing off, then at least making sure they got their money's worth. So if they're going to set a movie underwater, you're going to get a lot of sequences that just show how beautiful the ocean is. Or if they're going to simulate every hair on Sulley's body, you're going to see it in close-up. I wouldn't have noticed the natural lighting effects developed for Monsters University if they hadn't been pointed out to me, but it makes perfect sense for a story that's set over the course of a year.
With Inside Out, I initially had a minor mental criticism that Pixar’s gone all-in on its House Style for human characters — they’re fine, but ultimately inoffensive at best, too cartoonish to be realistic but not cartoonish enough to be interesting. I quickly realized that that criticism is missing the point when the “stars” of a movie are toys, fish, bugs, robots, and emotions. In Inside Out, the emotions need to be expressive (obviously), but the humans need to be universal enough that every human in the audience can project herself onto them.
And with the emotions, the character design goes all-in on modernism. That’s possibly not the “correct” term, but it’s referring to the style from the 50s that was more graphic and abstract. So you get the character of Fear, who should only be able to work in two dimensions, and yet he coexists with the others with no obvious cheats. And then we get a sequence that drives the idea home, where the characters are rendered in more and more abstract forms until they’re reduced to a single line.
It’s even more apparent with Joy, who looks like someone took a piece of concept art done in pastels or crayons and said, “We want this, exactly, to be the main character in a feature-length piece of 3D animation.” I can remember only a couple of scenes where the camera’s allowed to linger on them up close, to show off the effect. But much like the animated paintings in Ratatouille, it takes what is steadfastly a static, two-dimensional art style and gives it depth and movement. It insists that the rough speckles aren’t just an artifact of Joy’s concept art, but an integral part of the character.
It seems like a confident decision that could’ve been sacrificed in the name of convenience. The movie’s full of confident decisions that could’ve been sacrificed in the name of “accessibility.” Most obviously, it’s a movie driven by female characters. It’s worth pointing out, even though it’s a shame that it’s worth pointing out, and even though it goes so far into the realm of universally accessible story that it makes the entire question seem irrelevant. Maybe its success will finally put the stupid “debate” — which is itself a modern invention, as a simple scan of centuries of female protagonists would illustrate — to rest.
What interests me a lot more is that there’s no villain. It’s especially astonishing considering that both Up and Frozen were brilliant movies that also took on more subtle and sophisticated themes than usual, and yet each one still suffered from a third act that required a Disney Villain to pop up and cause conflict. Again, maybe it’s optimistic, but I’d hope that the success of Inside Out will finally convince people that you can have a story based entirely on emotional conflict and it’s still completely accessible.
Sunny-Side Up, or, Happy Together
Which gets back to the last confident decision I’ll mention: the one that took me a while to get. Because it’s a question that’s asked at the beginning of the movie but isn’t explicitly answered. (At least, not explicitly enough that I picked up on it).
I read a review of Inside Out that made the minor complaint that the beginning of the movie, where Joy introduces herself and the other characters, was regrettably necessary exposition in an otherwise subtly-told story. But I don’t think it was just exposition. I think it was setting up the central conflict that Joy (and the audience) would spend the rest of the movie — and in my case, the weekend after — trying to figure out.
When Fear, Disgust, and Anger are introduced, we get an illustration of what they do and why they’re there to protect Riley in one way or another. In fact, that assertion that they’re not just manifestations of personality, but deeply invested in making sure she’s okay, is one of the subtle ways that Inside Out makes the complaint “this idea’s been done before!” seem laughably irrelevant. Tasha Robinson’s review on The Dissolve lists more examples of films and TV series that started from the same concept, but in comparison, they feel like gags riffing on a premise instead of a genuine attempt to explore all the deeper implications of a premise.
But instead of just an introduction to the “rules” of how all this stuff works, it asks the movie’s important question: why is Sadness there? For as much as I talk about Pixar being universal instead of just for kids, and how it tackles some mature and sophisticated themes, it could seem like “Why do we feel sad?” is an insipidly childish question. But it’s clearly one we struggle with as adults. Anyone who’s tried to figure out what’s “normal” vs what’s a breakdown in brain chemistry has had to ask it. Anyone who’s been frustrated to be told “stop trying to fix things, I just want to feel sad,” has had to ask it. If you use Facebook, you likely see people struggling with it every day, with self-actualization aphorisms like “Today I Choose Happiness.” How is sadness productive? What practical purpose does it serve?
On the surface, Inside Out seems to suggest an acceptance more than an answer. “Being grown-up is complex, yo.” The age of “pure” emotions doesn’t last long, and our memories are really tinged with a bunch of different emotions. Sadness is just there, and being an adult means learning how to deal with it. At best, it seemed to say, sadness made the joyful memories stronger. The explicit “moral” seemed to be that you can’t suppress it and contain it. You can’t expect to be happy all the time.
That was the part that hit me hardest, because it seemed to be reaching directly into my subconscious and calling me out. Cripes! They’re onto me! They know that I feel like I’m constantly trying to stay content and optimistic and put a positive spin on things when I’d rather just lie on a couch and moan.
And just like the jackasses who call me a “grouch” or “curmudgeon,” or tell me to “smile more” (as if I were a woman in corporate management or running for office!), they’re calling me a charlatan! They’re saying I’m doing a lousy job of it, and they can see right through me.
And if that weren’t bad enough, they’re saying it’s a futile effort in the first place! I just came here to see some bullshit about believing in my dreams; I didn’t come to see a Disney/Pixar movie whose uplifting message was “You are fated to a life of sadness so Deal With It.”
(Ever since I heard multiple men say that The Little Mermaid was exactly what they needed to deal with coming out in the 90s, I’ve made it a point not to under-interpret family movies or resist taking them too personally).
But then: movie studios don’t stay profitable with an audience of one. And if I were the only person feeling like that, then they wouldn’t have made a movie about it. Maybe the message is that everybody feels the same way, that they’re struggling to stay happy and keep sadness tightly controlled and prevented from leaking out. And it’s not necessarily that I’m doing a bad job of it, but that people can recognize it because they do it themselves.
Which brings back to mind the scene where Sadness helps the imaginary friend* get back on his feet by being able to relate to him, while Joy doesn’t know what to do. [*It’s hard to insist that these are adult, sophisticated concepts that it’s perfectly normal for a 44-year-old not to grasp immediately while talking about Sadness and Bing Bong]. Or the scene where Joy figures it all out, where the revelation isn’t simply that happy memories have an element of sadness to them, but that sadness has a purpose, too. It was sadness that brought the family together and turned the memory into a happy one.
Or the finale, which isn’t the scene showing Riley at hockey practice with all her personality islands back in place. It’s the one just before that, where Angry Dad and Sad Mom tell Riley that they’re sad too. Maybe I would’ve picked up on it faster if they’d included a sequence in which Sadness begins sparkling and magically transforms into Empathy.
But of course they didn’t, and of course the movie is a billion times better for not making it completely explicit. And the peek inside Mom’s mind magically transforms from quietly defeatist foreshadowing of a life dominated by sadness, to one where they’re all cooperating and sharing a happy memory together.