The story in brief was that Volkswagen did a beginning-of-April marketing stunt announcing that they were changing their US branding to “Voltswagen,” to reaffirm their commitment to electric vehicles. The Verge chomped on that like a starving bass, running it as a top story on the site. Now, after finding out that the obvious marketing stunt was, in fact, a marketing stunt, they edited their story from press release regurgitation into a long-form tantrum.
Normally, I’d do the Nelson Muntz point-and-laugh and then move on, but the Verge writers’ histrionics have actually made me kind of angry. First, instead of being good-natured — or even taking the wet-blanket-but-appropriately-skeptical approach that Ars Technica did — they changed the headline to say that Volkswagen lied about its rebranding, showing the same understanding of “lying” as the aliens in Galaxy Quest.
Worse, they made repeated references — in the byline of the rewritten article, and on Twitter — to “Dieselgate.” Because, obviously, fooling a couple of gullible and clickbait-seeking internet writers is equivalent to a multi-billion dollar, years-long, massive environmental scandal.
But now we know the rebrand was nothing more than another lie from a company that’s become known for something else: lying.
A butt-hurt, insufferably whiny baby
The reason this makes me so irrationally angry — apart from putting me in a position where I’m not just defending Volkswagen, but defending an April Fools prank — is that it’s another reminder of how embarrassingly low journalistic standards are in 2021. Actually, that’s a third strike against it: it makes me want to put “journalist” in sarcastic quotes, but I can’t do that, because that’s the province of all the knuckle-dragging losers on the internet complaining about Brie Larson and Kathleen Kennedy.
The writers can clutch their pearls and stand aghast at VW all they want, but the truth is that they simply didn’t do due diligence for their non-story. They valued page views over newsworthiness. “They published a press release!” insists the article, ignoring the obvious fact that you’re not obliged to run every press release as front-page news.
The undeniable fact of all this is that this stunt was not news. Even if it had been 100% real. Even for a company as gigantic as the Volkswagen Group. It was so obviously as much a non-story as the results of the Puppy Bowl or the war between Left Twix and Right Twix. It’s depressing that they think the problem is a company fooling them with a lame (and clearly publicity-grabbing) stunt, instead of how eager they were to “report” on the stunt in the first place.
I’m still comparison-shopping EVs, and I’ve got some questions.
I’ve been “researching” (read: watching YouTube videos about) electric vehicles for several weeks now, and a lot of the same ideas keep recurring: tips to speed up fast-charging time, maximizing battery life, maximizing range, etc. But never having owned an EV or spent a long time looking into them, there are a few things I can’t figure out.
I’ve had an entirely too charitable impression of car reviewers
One thing I’ve learned from watching lots of car reviews is that car reviewers mostly suck. There are obvious exceptions, but as someone who’s never been particularly interested in cars, I’ve always just assumed that reviewers are well familiar with all the myriad details about cars that are lost on me. But I’ve been surprised by how many reviews get the basic details wrong, ignore aspects of the car that are obviously specific to a review situation, or go on about aspects of the car that are irrelevant to drivers who aren’t reviewers. Is it all Top Gear’s fault?
What’s the deal with the front trunk?
Speaking of terrible reviews: what the hell is this garbage review of the ID.4? The reviewer was biased against the car from the start, but that’s okay because I was biased against the review for being from a Gawker site. (Yes, I know that Gawker Media doesn’t exist anymore, but the taint is inescapable). What’s odd to me, though, is that this isn’t the only review I’ve seen to waste so much time talking about the lack of a front trunk.
It’s an absurd complaint. The closest I’ve seen to a reasonable explanation is that it’s convenient to keep the charging cable in there, but I’m not buying it. Is this supposed to be a real complaint?
How do Elon Musk’s fanboys justify a proprietary Supercharger network?
I’ve been in the SF Bay Area enough to see a depressing number of men go glassy-eyed and speak in reverent tones about how Musk’s visionary work is going to save our fragile planet. I’ve always been so eager to get out of those conversations that I never got to ask the obvious question: how do they justify making the Supercharger network proprietary and exclusive to Tesla owners? Obviously, the ubiquity of the network is a selling point for the cars, but wouldn’t it be best for everyone to encourage more EV purchases in the US, while at the same time charging non-Tesla drivers for the convenience?
Are crossover SUVs really as popular as people keep saying?
The thing I found most surprising when I started comparing cars: there are almost no affordable options for 200+ mile range in a sedan, coupe, or hatchback. As far as I can tell, there’s just the Chevy Bolt or the Tesla Model 3. I understand that bigger batteries give better range, but I’m stunned that more manufacturers haven’t gone the ID.3 route, and that Volkswagen hasn’t made the ID.3 available in the US. The explanation was “Americans want SUVs.” I can’t tell if that’s a real thing or just a self-fulfilling prophecy.
An update on the search for an electric car, with the surprising introduction of a new contender.
(For the record: the title of this post is a reference to Randy Candy’s part in this Saturday Night Live sketch, which I disappointingly found out recently was actual product placement).
When last we checked into my car search, I’d decided to forget the fun mid-life crisis convertible I’d been coveting, in favor of something that felt more environmentally responsible. I’ve been reading articles and watching tons of videos about the current state of electric vehicles, and I’ve been getting myself comfortable with the idea of a crossover SUV, since that’s apparently the body style America has declared it wants.
So far, the front-runner has been the Volkswagen ID.4, which seems unlikely to blow anybody away, but which strikes me as comfortable. I like their tech system, I like the sunroof, I like the interior lighting, I like the estimated range, I like the “free” charging, and it seems like they’ve filled it with just enough conveniences to hit their target: a comfortable, moderately-priced electric vehicle.
It might not be “fun,” exactly, but I can at least geek out over the technology while patting myself on the back for “zero emissions.” (In quotes because I think Alex Dykes makes a reasonable argument in this video that it’s disingenuous not to include the emissions it takes to charge the car’s battery).
I’ve been thinking about electric vehicles, and I want the internet to check my work
I’m turning 50 this year (whether I want to or not), and I had big plans for a year-long banger of a mid-life crisis. Grow a wiry, dingy-graying ponytail. Get more age-inappropriate earrings. Pick up a new, ridiculous hobby. And pointedly: get a convertible.
Not a muscle-car convertible, because I may be a soon-to-be-50-year-old man, but I’ve got the heart of a sophomore sorority pledge. I wanted a convertible VW Beetle. I’m a big fan of the 2011 redesign, and I rented a convertible in Florida for a work trip, and it was a ton of fun. Plus I’ve spent the last 20+ years driving practical, fuel-efficient sedans — two of them hybrids — and I just wanted something dumb, fun, and completely impractical.
But getting an internal combustion engine in 2021 just feels a little too irresponsible. Assuming you’re in a position to do otherwise, of course: a lot of very rich people have spent an awful long time and an awful lot of money making sure that electric vehicles were prohibitively expensive for most people. Even now, they’re eye-wateringly expensive. But when even fuel-efficient cars are putting out tons of emissions per year, it feels gross to keep doing it just for fun.
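For a rough sense of scale, here’s a back-of-the-envelope sketch of what “tons of emissions per year” means. The mileage figures are my own assumptions for illustration; the constant is the widely cited EPA estimate of roughly 8.9 kg of CO2 per gallon of gasoline burned:

```python
# Back-of-the-envelope annual tailpipe CO2 for a gasoline car.
# Assumed numbers for illustration: 12,000 miles driven per year,
# ~8.9 kg CO2 per gallon of gasoline (approximate EPA figure).

KG_CO2_PER_GALLON = 8.9

def annual_co2_tons(miles_per_year, mpg):
    """Metric tons of tailpipe CO2 per year at a given fuel economy."""
    gallons = miles_per_year / mpg
    return gallons * KG_CO2_PER_GALLON / 1000  # kg -> metric tons

for mpg in (25, 40, 50):
    print(f"{mpg} mpg -> {annual_co2_tons(12_000, mpg):.1f} tons CO2/year")
```

Even a very efficient gas car comes out to a couple of metric tons a year, which is the point: “fuel-efficient” still isn’t “clean.”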
So I’ve got the extremely privileged “problem” of having to decide what car I want to get when my current lease runs out. Some requirements:
Raspberry Pi has announced a new $100 computer that I hope can be as significant for 21st century kids as the Commodore 64 was for me
The Raspberry Pi 400 (link is to a review on Ars Technica) is a faster version of the famously affordable computer, now embedded in a keyboard with a suite of USB and display ports. The computer on its own is only $70, but a kit for $100 makes it an affordable desktop PC with everything you need except for the monitor.
For those of us who grew up in the 80s (or 90s, probably), I can’t imagine seeing this and not getting excited about the potential. I can still remember my mom taking me to K-Mart to get my first computer, a Commodore 64. More vividly than most things I can still remember from the 80s: I remember the dim fluorescent lighting, I remember the stacks of boxes and the excitement of getting to take one off the shelf, I remember getting a spiral-bound introduction to BASIC programming book to go with it, and I remember sitting in the kitchen, hooking it up to a TV to try it out and get our first ?SYNTAX ERROR.
Two unexpected side-effects of this extended shelter-in-place order: there’s more time for playing board games, and 3D printing is more practical since I’ve been at home to keep an eye on long-running prints. Taken together, it’s been the perfect opportunity for a project to re-learn Blender and get more experience with 3D printing. (Which, up until now, has seemed like more of a time investment than it was worth, unless it was for a very special project).
One pleasant surprise of the past couple of months has been discovering the game Godzilla: Tokyo Clash, published by Funko and designed by Prospero Hall. We first heard about it via a Watch It Played video, and before we even got to the ending, we’d already decided it was a must-buy. After some initial confusion over the rules — almost entirely the result of my assuming the game was more complicated than it actually is — we were able to enjoy it as a light-to-medium-weight beat-em-up game of kaiju flinging tanks and buildings into each other, and flinging each other into buildings. Giving each kaiju a mostly-individualized deck of cards with special powers adds just enough complexity and varies the pacing. A game really does play out like the last 20 minutes of a Godzilla movie, with monsters maneuvering into place and then unleashing a barrage of wrestling moves combined with atomic breath and then clubbing their opponent with a train car.
(Incidentally: Prospero Hall has been killing it with board game designs lately. They’re a Seattle-based design house that seems to focus on making licensed games that don’t feel like uninspired cash grabs. Disney Villainous is more interesting than a Disney-licensed game needs to be, their Choose Your Own Adventure games are a nostalgic take on escape room games, and the result is a ton of light-to-medium-weight games that are mass market enough to sell at Target, but interesting enough to actually get more people into the hobby. Plus their graphic design is flawless throughout. Anybody still just publishing yet another re-skinned version of Clue or Monopoly should be embarrassed).
Tokyo Clash has a 1960s Japanese movie poster aesthetic that is just perfect, and it comes with detailed well-painted miniatures of the four playable kaiju. There are also some simple but well-themed miniatures for the “large buildings” you can fling your opponents into. However, the game uses cardboard tokens for everything else. They’re fine, but they kind of undercut the atmosphere of seeing these monsters marching around a city, tossing things at each other. I decided to use it as an excuse to re-re-re-learn Blender — every time I dive back into the software to model something, I forget everything about how to use it within a month — and make 3D-printed replacements.
I bought an Oculus Rift S, and now you don’t have to. You’re welcome.
Back in 2016, I became a convert and likely insufferable evangelist for virtual reality after someone let me try out the Oculus Rift and the HTC Vive. At the time, I was completely enamored with Valve’s The Lab and the seemingly endless potential for immersive experiences made possible by dropping you into a world that completely surrounds you. I wasn’t one of those super-early adopters who bought the Rift development kits, but what I lacked in timing, I made up for in enthusiasm.
I took to VR headsets like Mr. Toad took to motor cars. Which means that over the last few years, I’ve tried all of the major commercially-available ones, and I’ve wasted disposable income on several of them. So I’ve got opinions, and I think they’re reasonably well-informed. Here’s my take on the current state of things:
VR isn’t just a fad that’s already gone the way of 3D televisions. For about as long as I’ve been interested in VR, people have been declaring that VR was “dead” or, at best, that it had no future in gaming and entertainment. The most common comparison that people made was to 3D televisions, which TV manufacturers tried to convince us were an essential part of the home theater of the future, but which just about completely disappeared within a few years. Even though interest has cooled a lot, I think it’s impossible for home VR to go away completely, simply because it still suggests so much potential for new experiences every time you put on a headset.
VR will remain a niche entertainment platform. That said, home VR as we know it today is never going to take over as The Next Big Thing, either. A few years ago, a lot of people were suggesting that VR headsets would become the new video game consoles, and therefore the bar for success would be an HMD to achieve PS4 or Xbox-level sales. That’s not going to happen. I’ve been pretty disappointed in the PSVR overall, but I think in terms of market positioning and ease of use and overall philosophy, it’s the one that most got it right — it’s an easy to use accessory for specialized experiences.
VR needs experiences designed for VR, and not just different presentation of existing games. For a while, I was starting to become convinced that VR had “flopped” since I almost never went through the effort of setting up and putting on the Vive or PSVR again, so they just sat collecting dust. When I was in the mood to play a game, I almost always went to the Switch, suggesting that The Future of Games Is Mobile and Accessible. But I think the real conclusion is that there are different experiences for different platforms, and the one-size-fits-all mentality of video games is a relic of the “console wars.” Not every type of game is going to work well in VR, and IMO the ones that do work exceptionally well in VR can only work well in VR. The comparison to 3D TVs is apt, since it shows that people thought of VR as a different way of presenting familiar content, when it’s actually an entirely new type of content altogether.
Stop trying to make “epic” VR happen. Related to that, I think a lot of people (including myself) assumed that the tipping point for VR adoption would come as soon as one of the big publishers made the VR equivalent of Skyrim or Halo: the huge, big-budget game that will incontrovertibly prove the viability of VR as an entertainment platform. But actually playing Skyrim or Fallout in VR turns out to be a drag, in some part because you can’t just lose hours to a game in VR without noticing. The fact that most VR experiences have been brief isn’t a bug; it’s a feature. The success of Beat Saber doesn’t mean that VR is a baby platform for stupid casuals, unless you’re a teenager on a message board. Instead, it means that we’re getting closer to finding out what kinds of short, dense experiences work inside a VR headset.
The biggest obstacle to VR is that it’s isolating and anti-social. I think it’s kind of ironic that one of the biggest investors in VR — and in fact the greatest chance for VR to reach wide adoption — is a social media company, since putting on a VR headset is about as anti-social as you can get. Sony had the right approach with their initial PSVR push, emphasizing it as the center of a social experience, but I think it ultimately came across as gimmicky and limited, like Wii Sports. Sometimes you want to shut the rest of the world out — I was surprised to see so many people touting the Oculus Go as perfect for media consumption, since I can’t imagine anything I’d want to do less than watch a movie with my sweaty face stuck against a computer screen. But I think the real key to longevity and wider adoption with VR will be a way to have that sense of immersion and isolation but still have a lifeline to the outside.
Ease of use and convenience are always preferable to “better” technology. Back in 2016, I was 100% on Team Vive, because it had the better tracking technology, and better technology meant better immersion, right? I’ve done an almost complete reversal on that. In practice, an easier experience beats a “better” experience every single time. I think the PSVR tracking is throw-the-controllers-across-the-room-in-frustration abysmal, and the display is disappointingly fuzzy and pixelated, but it still ended up getting more overall use than the HTC Vive, simply because it was more comfortable and easier to jump into. And I suspect I played more with the Oculus Quest in the first week after I owned it, than I’d spent over the entire past year with the Vive. I wouldn’t have thought it would be a huge difference being able to set up a play space in seconds as opposed to minutes, but just that one change made VR something I looked forward to again, instead of feeling like a burden. All the videos about haptic gloves or force feedback vests or two-way treadmills to guarantee a more immersive experience seem not just silly now, but almost counter-productive in how much they miss the point.
At the moment, the best headset is the Oculus Quest. It’s still a mobile processor, so it sacrifices a lot of the graphical flourishes that can make even “smaller” VR experiences cool. But being able to just pick the thing up and be playing a game within a minute is more significant than any other development. I have to say that Facebook/Oculus’s efforts to make it easier to jump in, and more social once you’re in, are just more appealing to me than anything else happening in VR.
Facebook has been holding its Oculus Connect event this week, and in my opinion the biggest announcement by far was that the Oculus Quest —their wireless, standalone headset with a mobile processor — would soon be able to connect to a PC via a USB-C cable. That would essentially turn it into an Oculus Rift S, their wired, PC-based headset.
Full disclosure: I was instrumental in bringing about the change that made the Oculus Rift S functionally obsolete, since about a month ago, I bought an Oculus Rift S. I never expected Facebook to add a feature to one of its hardware platforms that would invalidate another of its hardware platforms, but then I’ve never really understood Facebook’s business model. And honestly, I’m kind of happy that I don’t.
But the end result is that if the technology works as described, it’ll be the best of both worlds for the Oculus Quest. You’ll still be able to have the just-pick-up-the-headset-and-start-playing experience for a lot of games. But on the occasions where you want to play a larger-scale game like No Man’s Sky, or if you’re just playing Moss and are sad at how bad the downgraded water looks when it’s so evocative on the PSVR, you can sacrifice mobility and ease of setup for higher fidelity and a bigger library.
And the other announcements — in particular, hand recognition so that there are some experiences that won’t require controllers at all; and the “Horizon” social platform that may finally make VR feel less isolating, if they get it right — are encouraging to me. I feel like the way towards wide adoption isn’t going to come from taking the most advanced technology and gradually making it more accessible, but from taking the most accessible technology and gradually making it more advanced.
And while I’m predicting the future (almost certainly incorrectly, since I think I was completely off in my predictions just three years ago): I think all the efforts that see AR and VR as competing or even different-but-complementary technologies are missing the point. I believe that the future isn’t going to look like VR or AR as they’re pitched today — putting on a headset that blinds you and has you start swinging wildly at imaginary monsters only you can see, or just projecting an existing type of mobile game onto a real-world table or showing a Pokemon on your living room table — but is going to be more like the immersive AR shown in the movie Her. People will need to be able to treat it as a continuum that goes from private to social, where they can shut out as much or as little of the outside world as they choose to at any given moment. And whether that’s an isolating dystopian future, or a magical one-world-united future, depends less on the technology itself and more on how we decide to use it.
My take on Walt Disney World’s “magic bands,” which will probably be misinterpreted as a defense of the NSA.
My friend Michael sent me a link to “You don’t want your privacy: Disney and the meat space data race,” an article by John Foreman on GigaOm, and made the mistake of asking my opinion on it. I think it’s a somewhat shallow essay, frankly, but it raises some interesting topics, so in the interest of spreading my private data everywhere on the internet, I’m copy-and-pasting my response from Facebook. Overall, it reads like one of those mass-market-newspaper-talks-about-technology pieces, the kind that breathlessly describes video games as “virtual worlds” in which your “avatar” has the freedom to do “anything he or she chooses.”
For starters, I’m immediately suspicious of anyone who says something like “Never will we take our children to Disney World.” (Assuming they can afford it, of course; considering that the author had just talked about vacationing in Europe and enjoying the stunningly blue waters off crumbling-economy Greece, that’s a safe assumption). Granted, I’m both childless and Disney theme park-obsessed, so my opinion will be instantly and summarily dismissed. But all the paranoia about Disney in general and princesses in particular strikes me less as conscientious parenting and more as fear-based pop-cultural Skinner-boxing. It seems a lot healthier to encourage kids to be smarter than marketing, than to assume that they’re inescapably helpless victims of it. Peaceful co-existence with the multi-billion dollar entertainment conglomerate.
Which is both none of my business and a digression, except for one thing: I really do think that that mindset is what causes a lot of shallow takes on the Disney phenomenon, which are based in the assumption that people can’t see past the artificiality and enforced whimsy, so an edgier, “counter-culture” take on Disney is showing them something they haven’t seen before. It also causes the kind of paranoia about Disney that describes it as if it were an oppressive government, and not a corporation whose survival depends on mutually beneficial business transactions.
There’s no doubt that Disney wants to get more data on park guests, but that essay’s extrapolations of what they’ll actually DO with that data are implausibly silly. They’re all based on the idea that Disney would spend a ton of money to more efficiently collect a ton of data aggregated for weeks across tens of thousands of customers, and then devote all that money and effort to develop creepily specific experiences for individuals.
It’s telling that Foreman compares Disney’s magic bands to the NSA, since I think the complaints miss the point in the same way. People freak out that the government has all kinds of data on them, when the reality is that the government has all kinds of data on millions of people. The value of your anonymity isn’t that your information is private; it’s that your information is boring. All your private stuff is out there, but it’s still a burden to collate all of it into something meaningful to anyone.
This absolutely is not an attempt to excuse the NSA, by any stretch. The NSA’s breaches are a gross violation, but the violation isn’t that they’re collecting the data, so much as that they’re collecting the data against our will and without our knowledge.
Anything Disney does with the Magic Band data, at least in the next ten years or so, is going to be 1) trend-based instead of individual-based, and 2) opt-in. For instance, they’ve already announced that characters can know your name and about special events like birthdays, but they’re only going to use something like that at a character meet-and-greet: you’ve specifically gone to see Mickey Mouse, and he’ll be able to greet you by name and wish you a Happy Anniversary or whatever. Characters seeking you out specifically is just impractical; the park has already had enough trouble figuring out how to manage the fact that tens of thousands of people all want to get some individual time with the characters. The same goes for the bit about “modifying” a billion-dollar roller coaster based on the data they get from magic bands; it’s just as silly as assuming that you could remove floors from a skyscraper that weren’t getting frequented enough by the elevators.
It’s absolutely going to be marketing driven; anybody who says otherwise doesn’t get how Disney works. But I think it’s going to be more benign. Walt Disney World as a whole just doesn’t care about a single guest or a single family when they’ve got millions of people to worry about every day. So they can make more detailed correlations like “people who stay at the All Star resorts don’t spend time at the water parks” and adjust their advertising campaigns accordingly, or “adults 25-40 with no children spend x amount of time in Epcot.” But the most custom-tailored experience — at least, without your opting in by spending extra — is going to be something like, at most, coming back to your hotel room to find a birthday card waiting for you.
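The kind of trend-based correlation I’m describing is also trivially cheap to compute once the band data is aggregated, which is part of why it’s so much more plausible than the creepy individualized scenarios. A toy sketch (every record and resort rate here is invented for illustration):

```python
from collections import defaultdict

# Hypothetical aggregated band data: (resort, visited_water_park) per stay.
# All records are invented for illustration.
stays = [
    ("All Star", False), ("All Star", False), ("All Star", True),
    ("Polynesian", True), ("Polynesian", True), ("Polynesian", False),
]

def water_park_rate_by_resort(stays):
    """Fraction of stays at each resort that included a water-park visit."""
    counts = defaultdict(lambda: [0, 0])  # resort -> [visits, total stays]
    for resort, visited in stays:
        counts[resort][0] += int(visited)
        counts[resort][1] += 1
    return {resort: v / t for resort, (v, t) in counts.items()}

print(water_park_rate_by_resort(stays))
# In this made-up data, All Star guests hit the water parks less often.
```

One groupby over millions of anonymized stays answers the marketing question; tailing one specific family around the park answers nothing Disney can sell.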
The creepier and more intrusive ideas aren’t going to happen. Not because the company’s averse to profiting from them, but because they’re too impractical to make a profit.
A review, more or less, of the Samsung Galaxy Note 8.0, plus a bit of marveling on the current state of tablet computers.
Back in March of 2009, Kindles still had keyboards, and we were still a year away from enjoying all the feminine hygiene jokes that came with the release of the iPad. I took advantage of the release of the Kindle 2 to describe what would be my ideal tablet computer.
Reading that blog post now, what stands out the most is what a fundamental shift in thinking the iPad was. Looking back, it’d be easy to say that the iPad was inevitable — of course they’d just make a bigger iPhone! But that’s definitely not where the speculation was before the iPad’s announcement. People were still thinking that there was a clear distinction between computer and media device. It’s why there’ve been so many “hybrid” laptops with removable screens that become “tablets,” that invariably have tech journalists swooning and declaring them the perfect solution right up until the point that the thing is released and fails to make a dent. It’s why people still insist on making a distinction between devices for “consumption” vs. ones for “creation.” If you’d asked most people in 2009 to describe what the iPad was going to be like, they’d have described something basically like the Microsoft Surface Pro.
That includes me; what I had in mind was essentially a thinner, lighter Tablet PC (in other words, the Surface Pro). The iPad undercut that, not just in size and in price, but in function. It made good on the promise of a “personal computer:” portable enough for media consumption, but multi-purpose enough not to be dismissed as just an evolution of the e-book reader or PDA. It’s clear now that that was absolutely transformative, and anyone who suggests otherwise is not to be trusted with your technology prognostication.
I’m not claiming to be prescient; at the end of that blog post, I gave a spec list for my perfect tablet computer, and it’s not an iPad. However, it is eerily close to a tablet computer that exists today, with one major difference: it wasn’t made by Apple, and it doesn’t run OS X. It’s the Samsung Galaxy Note 8.0.
Why Would You Even…?!
I’ve already been completely converted to the form factor of the iPad mini, and this one reportedly had all of that plus removable storage and a Wacom digitizer. The existence of refurbished models, some left-over gift certificates, and “reward” points, meant that I could get one for about $250. (They retail for about $400, and not to spoil the review, but I really can’t recommend it at that price). If you don’t think I chomped on that like the star attraction at Gatorland, then you just don’t know me at all.
Most of the reviews for the Note 8 that I’d read acknowledged that it’s a fairly good — but soon to be outdated — tablet whose main draw was stylus input, and that unless you need the stylus, either the iPad mini or the Nexus 7 is a better value. They still treated the digitizer as an extra feature though, as opposed to the whole reason for the tablet’s existence. (Which is fair, since Samsung’s treating it basically the same way by selling a Galaxy and Galaxy Note line in parallel). I hadn’t seen any that reviewed it mainly for the strength of its digitizer and its appeal to digital artists; the closest I could find was Ray Frenden’s review of the Galaxy Note II smartphone.
Over the years, I’ve tried out various graphics tablets, tablet PCs, styluses, and art software with the hope that I’d find the magic bullet that suddenly turned me into a better artist. I’ve finally given up on that idea and resigned myself to the fact that only practice is going to turn me into a better artist. By that measure, anything that reduces “friction” and encourages me to practice more often is a worthwhile investment. I’ve got several Moleskines that were going to do exactly the same thing, but instead just frustrated me with their analogness. Without an “erase everything” button, they’re like tiny Islands of Dr. Moreau, the misshapen forms of my previous failures staring back at me and discouraging me before I’ve even begun a new drawing.
A tablet that I use as often as the iPad mini, on the other hand, but that has a pressure-sensitive stylus and palm rejection and layers, that simulates different media and colors, and that can download reference material directly into my art program? And for less than 300 bucks? How could that not be ideal?
As a Digital Notebook
The sketch above is the most effort I was willing to put into a drawing for this blog post. Obviously better artists could show more of the capabilities of the device; even I have generated better drawings on the Note 8 when I’ve put more time into it.
But sample images for these things are always deceptive. I know I’ve gotten in the habit of looking at Frenden’s reviews and thinking, if I buy this thing, I’ll be able to draw like that!, which is of course a lie. And you can find amazing pieces of work done on just about any device, from a cell phone to a Cintiq, by artists who already know what they’re doing. What I wanted to see was what kind of results you’d get if an average person interested in being a better artist sat down and tried to use it.
That drawing was done in Autodesk Sketchbook Pro for Android, and it’s intended to show off the basic advantages of using a digital notebook. Reference art from the web, multiple layers for sketching & inking, brushes with variable line weight, and a tool that makes it easy to add simple color.
Pressure Sensitivity: You can tell that it’s a pressure-sensitive pen, but you’re not going to see dramatic differences in line weight unless you’re willing to do a lot of fiddling with brush settings. There is a way to increase the sensitivity of the S-Pen, apparently (instructions are in that Frenden review of the Note 2), but I had no luck getting it to work.
Palm Rejection: Even though there are now pressure-sensitive styluses for the iPad, one of the biggest remaining annoyances is that none of the software (at least that I’m aware of) supports any sort of palm rejection. As a result, you have to hold the stylus out as if you were using charcoal or pastels, which to me kind of defeats the purpose of having a stylus. On the Galaxy Note 8, of all the apps I’ve tried — Sketchbook Pro, Sketchbook Ink, Photoshop Touch, ArtFlow, and Infinite Painter — palm rejection only worked reliably in Sketchbook Pro. The others would either leave a smudge at the bottom of the screen, resize the view, or interrupt the current drawing stroke. Even in Sketchbook Pro in “Pen Only” mode, it seemed eager to interpret my palm as an attempt to resize the canvas. I get the impression that both pressure sensitivity and palm rejection have to be implemented by each app for itself, although it seems like it’d make far more sense to implement them at the OS level.
Accuracy: The other big problem with drawing on the iPad is that you need a blunt tip to register on a capacitive display. The S-Pen is much, much better at this, as you’d expect. The other thing that helps is that the tablet detects proximity of the pen to the surface, not just an actual touch, so you get a cursor showing where you’re about to draw. (It also means you get tooltips throughout the entire system when using the pen. Which is nice, I suppose, but I’d prefer just to have simple clarity of the UI, and I’d been hoping that touch screens meant tooltips were dying off for good).
Drawing on Glass: Earlier I said I wanted something that would reduce the “friction” of drawing so I’d practice more often; drawing on the Note 8 takes that a little bit too literally. I’ve gotten used to drawing on graphics tablets, and with rubber-tipped styluses on the iPad. That’s entirely different from drawing with a plastic nib on a glass screen, enough to make me wish they’d sacrificed a bit of display brightness in favor of a more matte surface on the screen. That would never have happened, since Samsung’s trying to position the tablet as a superset competitor to the iPad mini and is going to make a big deal out of the slightly better pixel density. But I think it would’ve been a good way to further differentiate this as a stylus-based tablet, instead of a tablet that happens to also have a stylus.
Responsiveness: It varied from app to app, with Sketchbook Ink being the worst. When I turned off the “Smooth Brush” option in Sketchbook Pro, the lag was all but imperceptible to me, unless I was drawing with a particularly large or complex brush.
Bezel: Unlike the iPad mini, the Note 8 has a bezel that’s as wide on the sides as it is on the top and bottom. While I think it does actually contribute a bit to the overall “cheap and plastic” look of the device, it’s absolutely essential for a tablet with a stylus. If you were to simply slap a Wacom digitizer onto an iPad mini, there’d be no good place to hold it.
Software: If it’s not obvious by now, Sketchbook Pro is the clear winner of all the apps I’ve tried. That’s no big surprise, since it’s been around for years and was designed specifically for tablet computers. I’ve bought a version of it for every operating system and every computer I own, and they’re all excellent; it’s nice to finally be able to use it as it was designed to be used. I do wish that it were possible to import brushes on the tablet version as you can on the desktop versions; if there is a way to do that, I have yet to find it.
Overall, I’d say that even though our skill levels are vastly different, my take on the Note 8 isn’t all that different from Frenden’s take on the Note II. (Much of that’s intentional on Samsung’s part, as they want consistency between the phone, 8-inch, and 10-inch tablet devices in their line). Don’t expect to use it for finished art, and don’t expect it to function like a $300 Cintiq tablet. But as a sketch book with a complete set of art tools that you always have with you, it’s fine. Whether you have the Note II or the Note 8 — every review I’ve read of the Note 10 says that it’s underpowered, so I’d avoid it — just depends on which one you’re more likely to have with you everywhere you go.
For my part, I can definitely see myself practicing more often on this thing.
As a Tablet Computer
Practicing art was only part of the thing; even I can’t justify spending a couple hundred bucks to replace a $10 Moleskine. The idea was that I’d have something that would do everything the iPad mini can, and function as a digital notebook. In that regard, I’d say that it’s not quite there, but it’s pretty close.
When I had my semi-religious experience in an Apple Store, I said that the iPad mini seems absolutely silly until you actually hold one. I still think that’s the case, and I think that the build quality of the Note 8 really drives that home. It’s got a white plastic back and a silver border that makes it seem 1) like a prop from 2001 or Space: 1999, 2) thicker than it actually is, and 3) kind of cheap. The iPad mini feels like a solid block of metal and glass; the Galaxy Note 8 just feels like a plastic consumer product.
According to the specs, the Note 8 has a slightly higher pixel density than the mini. It shouldn’t be enough to be perceptible, but whether it’s more clarity, better use of fonts, or just placebo effect, the picture does look better than the mini’s, especially with text and line drawings (by which I mean comics, of course). The colors also seem brighter than on the mini.
Battery life is middling. I haven’t stress tested it (and I’m unlikely to), but it has been completely drained of power just sitting idle for three days, which has never been the case with any iPad I’ve used. I suspect that if I took it on the road, I’d be having to charge it every night.
It does support micro SD cards up to 64 GB for external storage — one of the items on my “ideal tablet computer” list from 2009 — but for documents only, not apps. (Since it’s Android, there are instructions online on how to root the tablet so you can use the SD card for apps, but I’ve always considered rooting or jailbreaking these things to be more trouble than it’s worth). Since the tablet is limited to 16 GB of internal storage, and you’re left with around 9.7 GB after all the pre-installed software, the extra space is definitely nice to have. It could store my entire library of Kindle books and comic books, and have enough space to actually store a significant chunk of my music library, which is something I’ve never been able to do with the iPad. Consistent with the build quality of the rest of the tablet, the door for the SD card is one of those tiny plastic covers that always seems in danger of breaking off.
The stylus is definitely closer to a Palm Pilot stylus than a Wacom pen, but it’s perfectly adequate for drawing. It’s more white plastic, it fits snugly in the underside of the tablet, and pulling it out automatically brings up a page of Samsung’s pen-enabled “S Note” app. Unlike the bloatware I would’ve expected, that’s actually a pretty solid app. It’s got a set of templates of questionable usefulness, but the technology underneath is impressive. Handwriting recognition is flawless enough to be eerie, and it’s got additional modes that recognize mathematical formulae and shapes for diagrams. The latter was the biggest surprise for me, since I’ve never seen a tablet computer pull off the potential of OmniGraffle very well, even though it seems like it’d be a natural. (It’s possible that OmniGraffle for iPad is an excellent program, but at $50 I’m never going to find out).
Handwriting recognition is available throughout the system as a “keyboard” mode; the others are a traditional keyboard and voice input. (Somewhat surprisingly, handwriting is faster and more accurate for me than voice input. Could Star Trek have gotten the future wrong?)
Samsung has re-skinned the entire OS and included its own apps, but I didn’t think either one was particularly obtrusive. All the apps other than S Note were quickly relegated to a different page. Having a “Samsung Cares Video” icon that can never be deleted from the system is kind of an annoyance, but at least I never have to look at it. And I tried using a different launcher for a bit, but soon went back to the default one.
Considering how often I read comments online from people demanding that this app or that service be released on Android, I’d expected the Google Play store to be filled with nothing but fart apps and tumbleweeds. But I quickly found and downloaded every one of the apps I use most often on iOS. I’d be more disappointed if I had any intention of giving up iOS completely, but there’s a respectable amount of software out there.
It’s also got two cameras, and they’re both terrible. Which is as it should be, because if tablets had good cameras, you’d have even more people taking pictures with them in public.
Android vs. iOS
Believe it or not, I did go into Android with a completely open mind. As long as it’s functionally equivalent to iOS, then there’s no point in getting butthurt over all the differences.
And at least with the version of Android that’s installed on this thing — I don’t know, it’s Peanut Buster Parfait or some shit — it is pretty much functionally equivalent. On a task-by-task basis, there’s little that’s inherently better about one way of doing things than the other. Widgets and Google Now seem better in theory than in practice, and the only thing that’s outright worse about Android is the lack of a gesture to immediately scroll to the top of a screen.
What’s surprised me is just how much the cliches about each OS are true. Overall, Android seems like an OS that was made by programmers, while iOS seems like an OS that was made by designers. iOS tends to have a consistent aesthetic, while Android has that weird combination of sparseness and excess that you see on Linux desktops: there’s only an icon for a Terminal window and an open source MS Office clone, but they glow and rotate in 3D space with The Matrix constantly scrolling on top of an anime girl in the background.
I’ve certainly got my own preferences. The lack of options and settings common to iOS apps is often, bafflingly, described as a failing, but it’s really an acknowledgement that a consistent experience that just works is preferable to fiddling with a billion different settings. I often read people complaining about Apple’s “walled garden” and its arrogant insistence on one way of doing things as opposed to giving the user choice; what I see in Android is a ton of meaningless, inconsequential choices that I’m simply not interested in making.
One of the “features” of the Note 8 that I didn’t mention above is that it supports multiple windows. You can open a little task bar and drag a separate app onto the screen to have two apps running at the same time. A lot of reviews that I’ve seen for the tablet list this as a major advantage of the system. I say that it’s a clear sign the developers have learned nothing from the failure of the Tablet PC. They’re still trying to cram a desktop OS onto a tablet with a touchscreen, when even Microsoft has learned to stop emphasizing windows in Windows. The iOS limitation of having only one app running concurrently isn’t just some technical limitation; it’s one of those constraints that makes the design of the entire system stronger. It means the designer can’t just lazily port a desktop interface to a tablet, but has to put real thought into how to optimize the app for the new device and how it will be used.
(There are definitely, absolutely, major inconveniences to having only one app running at a time on iOS, as anyone who uses 1Password will tell you. But I’m convinced that the best way to solve it won’t look anything like what works on a desktop OS).
I think the best example of the whole divide between Android and iOS is in the file system. iOS is notoriously closed; each app has its own sandbox of files that only it can touch, and transferring documents between apps is cumbersome. Android is notoriously free and open; you have access to the entire file system of the device, with a file-and-folder-based GUI that should be familiar to you because it’s the exact same one you’ve been using for 30 years.
Some people will say this is a perfect example of each person being able to choose the operating system philosophy that works best for him. I say it’s an example of how stubbornly sticking to one way of doing things results in something that’s best for nobody. I’m perpetually frustrated by the file handling in iOS, where I just want to use this app to open that document but can’t find any flow of import or export that’ll make it work. But I’ve been just as frustrated with Android, where I keep creating files and then am completely unable to find them in any of the dozens of folders and subfolders on the system. (Sketchbook, for instance, doesn’t save pictures you’ve exported in Pictures. Nor in Documents. It saves them in Autodesk/SketchbookPro/Export).
I’m hoping that Android will eventually get over its problems with market fragmentation, let go of the desktop, and finally embrace a post-PC world. And I’m hoping that iOS will eventually let go of Steve Jobs’s pathological fear of multiple buttons and develop a scheme for cross-app communication that doesn’t depend on clipboards or exposing the file system. Concentrating on how to use touch as a completely new way of interacting with a computer could lead to a dramatically improved method of working with computers; we’ve already seen that kind of cross-pollination happening between iOS and OS X. I don’t see that kind of innovation coming from Android, though, since it still seems to be doing little more than iterating on stuff that’s as old as X Window.
And one of the cliches that’s hilariously not true is the one about Android being all about functionality and practicality with Apple being all about flash and gimmickry. Because I’m now the owner of a tablet that has no less than eight different ways to unlock it (most using a rippling water effect), and which keeps warning me that it can’t see my eyes, because it has a “feature” (optional, of course!) that won’t let it go to sleep if it detects my face looking at it. Unlike the iPad, which, you know, turns off when I close the case.
And Finally, the Verdict
I’m way too invested in iOS at this point to ever switch over completely, so that was never an issue. And I think I’ve gotten most of the making-fun-of-Android out of my system, so I’m not going to be starting any campaigns against it. (I’d even like to try writing an app for it, at some point).
The questions for me were whether the Galaxy Note 8 could replace my iPad mini as the “everyday workhorse” tablet, and whether it’d help me practice drawing more often by having a ubiquitous digital notebook. The answers, so far: almost definitely not, and maybe.
If I were actually writing for one of the tech blogs, I’d be laughed out of my job if I based my entire verdict on “how the computer feels.” But for me, that’s what it comes down to with the iPad mini. It’s like Kirk Cameron’s banana: it just fits the hand perfectly (and doesn’t squirt in your face, either). It just feels more fun to use, for some indefinable value of “fun.” When Apple inevitably releases one with a higher resolution display, it’s going to be all but impossible for me to avoid getting one. I bought the first one thinking it was a ridiculously excessive extravagance, and it almost immediately became indispensable; I use it every day.
Still, I’m happy to have the Galaxy Note 8, although I’m glad I didn’t pay full price for it. It’s a solid (if not exceptional) drawing tablet that didn’t require me to shell out for a Cintiq or even a Surface Pro. If it helps me get to the level where I could actually make art for a game, then it was a good investment.
As for normal people, without my weird affliction when it comes to gadgets?
If you don’t care that much about drawing and just want the best tablet: get an iPad mini.
If you want a good tablet for an unbeatable price: get the Nexus 7.
If you’ve got the money, and you’re looking for a laptop replacement or the best drawing experience you can currently get on a tablet: get the Surface Pro. (I haven’t used it myself, but I’ve never seen a review of one that could find fault with the digitizer on it).
If you want a mid-sized tablet and think you’ll ever want to use a stylus with it: get the Galaxy Note 8. Preferably on sale.
Making sense of the iPad mini in a world that doesn’t need it.
After my previous unfortunate episode in an Apple store, it should come as little surprise that I didn't last very long before I broke down and bought an iPad mini. No, it doesn't make sense for me to be throwing my credit card around as if I were the CEO of Papa John's or something. I've already got a perfectly fancy tablet computer that's not just functional, but really quite terrific. It's not like I'm getting paid to write reviews of these things, and even my typical “I need it for application development testing” is sounding increasingly hollow.
What helps is a new metric I've devised, which measures how long it takes me after a purchase before the appeal of the thing overwhelms my feeling of conspicuous consumption guilt over buying it. It's measured in a new unit: the Hal (named after Green Lantern Hal Jordan, the Jordan who does have willpower).
By that standard, the iPad mini clocks in with a new record of 0.03 Hals, or about 30 minutes after I opened the box. Because this thing is sweet, and I pretty much never want to stop holding it. I'm writing this post on it, as a matter of fact, even though a much more functional laptop with keyboard is sitting about three feet away from me at this very moment. But to use it would mean putting the iPad down.
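For anyone who wants to compute their own scores, the conversion is trivial. Here's a throwaway sketch; the 1 Hal = 1,000 minutes figure is just back-solved from the 0.03 Hals / 30 minutes example above, not any kind of official unit definition:

```python
# Convert guilt-decay time into Hals. Back-solving from the example above:
# 0.03 Hals is about 30 minutes, so 1 Hal is roughly 1,000 minutes of
# conspicuous-consumption guilt before the gadget's appeal wins out.
MINUTES_PER_HAL = 1000

def minutes_to_hals(minutes: float) -> float:
    """Return the guilt duration expressed in Hals."""
    return minutes / MINUTES_PER_HAL

print(minutes_to_hals(30))  # the iPad mini's record: 0.03 Hals
```

Lower is better, obviously; a gadget that scores a full Hal was probably a mistake.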
The “finish” of the iPad mini, with its beveled edge and rounded matte aluminum back, is more like the iPhone 5 than the existing iPads. It makes such a difference in the feel of the thing that I can talk about beveled edges and matte aluminum backs without feeling self-conscious, as if I were a tech blogger desperately seeking a new way to describe another piece of consumer electronics.
It’s about as thin as the iPhone 5, and almost as light. With the new Apple cover wrapped around the back, it's perfect for holding in one hand. There have been several times that I've made fun of Apple, or Apple fanatics, for making a big deal about a few millimeters difference in thickness, or a few ounces in weight. And I joked about the appeal of the iPad mini, as if the existing iPad was unreasonably bulky and heavy.
But then something awful happened: I had to fly cross country four times within two weeks. And reading a book on the iPad required me to prop the thing up on the tray table and catch it as the person in front of me kept readjusting his seat. All my mocking comments were flying back in my face (along with the iPad, my drink, and the in-flight magazine), in the form of the firstest of first-world problems.
“Version 1 of the iPad mini is for chumps,” I said. “Check back with me once you’ve put in a higher resolution display, Apple.” In practice, though, the display is perfectly sharp, and “Retina” isn’t the make-or-break feature I thought it would be. You can certainly tell the difference when comparing the two; I’d assumed that squabbling over pixel density was something best left to the comments sections of tech blogs, but the difference in sharpness really is visible side by side. It’s only an issue for very small text, though. Books, Flipboard, and web pages are all clear and legible.
And speaking of Flipboard, it and Tweetbot are the two apps that get me giddy enough to own up to making another unnecessary tech purchase. Browsing through articles and status updates on a tablet that thin is probably the closest I'll ever come to being on board the Enterprise.
The phrase I've seen recurring the most in reviews of the iPad mini is some variation on “this is the size the iPad is supposed to be.” And really, there's something to that. I'm not going to give up my other one; the larger size really is better for some stuff, like drawing, Garage Band, and reading comics or magazines. But overall, I haven't been this impressed with the “feel” of a piece of consumer electronics since I saw the original iPhone. Realizing that this is just version 1.0 is actually a little creepy — apart from the higher resolution display, I honestly can't conceive of how they'll improve on the design of the iPad mini.