Two unexpected side-effects of this extended shelter-in-place order: there’s more time for playing board games, and 3D printing is more practical since I’ve been at home to keep an eye on long-running prints. Taken together, it’s been the perfect opportunity for a project to re-learn Blender and get more experience with 3D printing. (Which, up until now, has seemed like more of a time investment than it was worth, unless it was for a very special project).
One pleasant surprise of the past couple of months has been discovering the game Godzilla: Tokyo Clash, published by Funko and designed by Prospero Hall. We first heard about it via a Watch It Played video, and before we even got to the ending, we’d already decided it was a must-buy. After some initial confusion over the rules — almost entirely the result of my assuming the game was more complicated than it actually is — we were able to enjoy it as a light-to-medium-weight beat-em-up game of kaiju flinging tanks and buildings into each other, and flinging each other into buildings. Giving each kaiju a mostly-individualized deck of cards with special powers adds just enough complexity and varies the pacing. A game really does play out like the last 20 minutes of a Godzilla movie, with monsters maneuvering into place and then unleashing a barrage of wrestling moves combined with atomic breath and then clubbing their opponent with a train car.
(Incidentally: Prospero Hall has been killing it with board game designs lately. They’re a Seattle-based design house that seems to focus on making licensed games that don’t feel like uninspired cash grabs. Disney Villainous is more interesting than a Disney-licensed game needs to be, their Choose Your Own Adventure games are a nostalgic take on escape room games, and the result is a ton of light-to-medium-weight games that are mass market enough to sell at Target, but interesting enough to actually get more people into the hobby. Plus their graphic design is flawless throughout. Anybody still just publishing yet another re-skinned version of Clue or Monopoly should be embarrassed).
Tokyo Clash has a 1960s Japanese movie poster aesthetic that is just perfect, and it comes with detailed well-painted miniatures of the four playable kaiju. There are also some simple but well-themed miniatures for the “large buildings” you can fling your opponents into. However, the game uses cardboard tokens for everything else. They’re fine, but they kind of undercut the atmosphere of seeing these monsters marching around a city, tossing things at each other. I decided to use it as an excuse to re-re-re-learn Blender — every time I dive back into the software to model something, I forget everything about how to use it within a month — and make 3D-printed replacements.
I bought an Oculus Rift S, and now you don’t have to. You’re welcome.
Back in 2016, I became a convert and likely insufferable evangelist for virtual reality after someone let me try out the Oculus Rift and the HTC Vive. At the time, I was completely enamored with Valve’s The Lab and the seemingly endless potential for immersive experiences made possible by dropping you into a world that completely surrounds you. I wasn’t one of those super-early adopters who bought the Rift development kits, but what I lacked in timing, I made up for in enthusiasm.
I took to VR headsets like Mr. Toad took to motor cars. Which means that over the last few years, I’ve tried all of the major commercially-available ones, and I’ve wasted disposable income on several of them. So I’ve got opinions, and I think they’re reasonably well-informed. Here’s my take on the current state of things:
VR isn’t just a fad that’s already gone the way of 3D televisions. For about as long as I’ve been interested in VR, people have been declaring that VR was “dead” or at best, that it had no future in gaming and entertainment. The most common comparison that people made was to 3D televisions, which TV manufacturers tried to convince us were an essential part of the home theater of the future, but which just about completely disappeared within a few years. Even though interest has cooled a lot, I think it’s impossible for home VR to go away completely, simply because it still suggests so much potential for new experiences every time you put on a headset.
VR will remain a niche entertainment platform. That said, home VR as we know it today is never going to take over as The Next Big Thing, either. A few years ago, a lot of people were suggesting that VR headsets would become the new video game consoles, and therefore the bar for success would be for an HMD to achieve PS4- or Xbox-level sales. That’s not going to happen. I’ve been pretty disappointed in the PSVR overall, but I think in terms of market positioning and ease of use and overall philosophy, it’s the one that most got it right — it’s an easy-to-use accessory for specialized experiences.
VR needs experiences designed for VR, and not just a different presentation of existing games. For a while, I was starting to become convinced that VR had “flopped” since I almost never went through the effort of setting up and putting on the Vive or PSVR again, so they just sat collecting dust. When I was in the mood to play a game, I almost always went to the Switch, suggesting that The Future of Games Is Mobile and Accessible. But I think the real conclusion is that there are different experiences for different platforms, and the one-size-fits-all mentality of video games is a relic of the “console wars.” Not every type of game is going to work well in VR, and IMO the ones that do work exceptionally well in VR can only work well in VR. The comparison to 3D TVs is apt, since it shows that people thought of VR as a different way of presenting familiar content, when it’s actually an entirely new type of content altogether.
Stop trying to make “epic” VR happen. Related to that, I think a lot of people (including myself) assumed that the tipping point for VR adoption would come as soon as one of the big publishers made the VR equivalent of Skyrim or Halo: the huge, big-budget game that would incontrovertibly prove the viability of VR as an entertainment platform. But actually playing Skyrim or Fallout in VR turns out to be a drag, in some part because you can’t just lose hours to a game in VR without noticing. The fact that most VR experiences have been brief isn’t a bug; it’s a feature. The success of Beat Saber doesn’t mean that VR is a baby platform for stupid casuals, unless you’re a teenager on a message board. Instead, it means that we’re getting closer to finding out what kinds of short, dense experiences work inside a VR headset.
The biggest obstacle to VR is that it’s isolating and anti-social. I think it’s kind of ironic that one of the biggest investors in VR — and in fact the greatest chance for VR to reach wide adoption — is a social media company, since putting on a VR headset is about as anti-social as you can get. Sony had the right approach with their initial PSVR push, emphasizing it as the center of a social experience, but I think it ultimately came across as gimmicky and limited, like Wii Sports. Sometimes you want to shut the rest of the world out — I was surprised to see so many people touting the Oculus Go as perfect for media consumption, since I can’t imagine anything I’d want to do less than watch a movie with my sweaty face stuck against a computer screen. But I think the real key to longevity and wider adoption with VR will be a way to have that sense of immersion and isolation but still have a lifeline to the outside.
Ease of use and convenience are always preferable to “better” technology. Back in 2016, I was 100% on Team Vive, because it had the better tracking technology, and better technology meant better immersion, right? I’ve done an almost complete reversal on that. In practice, an easier experience beats a “better” experience every single time. I think the PSVR tracking is throw-the-controllers-across-the-room-in-frustration abysmal, and the display is disappointingly fuzzy and pixelated, but it still ended up getting more overall use than the HTC Vive, simply because it was more comfortable and easier to jump into. And I suspect I played more with the Oculus Quest in the first week after I owned it than I’d spent over the entire past year with the Vive. I wouldn’t have thought it would be a huge difference being able to set up a play space in seconds as opposed to minutes, but just that one change made VR something I looked forward to again, instead of feeling like a burden. All the videos about haptic gloves or force feedback vests or omnidirectional treadmills to guarantee a more immersive experience seem not just silly now, but almost counter-productive in how much they miss the point.
At the moment, the best headset is the Oculus Quest. It’s still a mobile processor, so it sacrifices a lot of the graphical flourishes that can make even “smaller” VR experiences cool. But being able to just pick the thing up and be playing a game within a minute is more significant than any other development. I have to say that Facebook/Oculus’s efforts to make it easier to jump in, and more social once you’re in, are just more appealing to me than anything else happening in VR.
Facebook has been holding its Oculus Connect event this week, and in my opinion the biggest announcement by far was that the Oculus Quest — their wireless, standalone headset with a mobile processor — would soon be able to connect to a PC via a USB-C cable. That would essentially turn it into an Oculus Rift S, their wired, PC-based headset.
Full disclosure: I have to say that I was instrumental in bringing about this change that made the Oculus Rift S functionally obsolete, since about a month ago, I bought an Oculus Rift S. I never expected Facebook to add a feature to one of its hardware platforms that would invalidate another of its hardware platforms, but then I’ve never really understood Facebook’s business model. And honestly, I’m kind of happy that I don’t.
But the end result is that if the technology works as described, it’ll be the best of both worlds for the Oculus Quest. You’ll still be able to have the just-pick-up-the-headset-and-start-playing experience for a lot of games. But on the occasions where you want to play a larger-scale game like No Man’s Sky, or if you’re just playing Moss and are sad at how bad the downgraded water looks when it’s so evocative on the PSVR, you can sacrifice mobility and ease of setup for higher fidelity and a bigger library.
And the other announcements — in particular, hand recognition so that there are some experiences that won’t require controllers at all; and the “Horizon” social platform that may finally make VR feel less isolating, if they get it right — are encouraging to me. I feel like the way towards wide adoption isn’t going to come from taking the most advanced technology and gradually making it more accessible, but from taking the most accessible technology and gradually making it more advanced.
And while I’m predicting the future (almost certainly incorrectly, since I think I was completely off in my predictions just three years ago): I think all the efforts that see AR and VR as competing or even different-but-complementary technologies are missing the point. I believe that the future isn’t going to look like VR or AR as they’re pitched today — putting on a headset that blinds you and has you start swinging wildly at imaginary monsters only you can see, or just projecting an existing type of mobile game into the real world and showing a Pokemon on your living room table — but is going to be more like the immersive AR shown in the movie Her. People will need to be able to treat it as a continuum that goes from private to social, where they can shut out as much or as little of the outside world as they choose to at any given moment. And whether that’s an isolating dystopian future, or a magical one-world-united future, depends less on the technology itself and more on how we decide to use it.
My take on Walt Disney World’s “magic bands,” which will probably be misinterpreted as a defense of the NSA.
My friend Michael sent me a link to “You don’t want your privacy: Disney and the meat space data race,” an article by John Foreman on GigaOm, and made the mistake of asking my opinion on it. I think it’s a somewhat shallow essay, frankly, but it raises some interesting topics, so in the interest of spreading my private data everywhere on the internet, I’m copy-and-pasting my response from Facebook. Overall, it seems like one of those mass-market-newspaper-talks-about-technology pieces, the kind that breathlessly describes video games as “virtual worlds” in which your “avatar” has the freedom to do “anything he or she chooses.”
For starters, I’m immediately suspicious of anyone who says something like “Never will we take our children to Disney World.” (Assuming they can afford it, of course; considering that the author had just talked about vacationing in Europe and enjoying the stunningly blue waters off crumbling-economy Greece, that’s a safe assumption). Granted, I’m both childless and Disney theme park-obsessed, so my opinion will be instantly and summarily dismissed. But all the paranoia about Disney in general and princesses in particular strikes me less as conscientious parenting and more as fear-based pop-cultural Skinner-boxing. It seems a lot healthier to encourage kids to be smarter than marketing, than to assume that they’re inescapably helpless victims of it. Peaceful co-existence with the multi-billion dollar entertainment conglomerate.
Which is both none of my business and a digression, except for one thing: I really do think that that mindset is what causes a lot of shallow takes on the Disney phenomenon, which are based in the assumption that people can’t see past the artificiality and enforced whimsy, so an edgier, “counter-culture” take on Disney is showing them something they haven’t seen before. It also causes the kind of paranoia about Disney that describes it as if it were an oppressive government, and not a corporation whose survival depends on mutually beneficial business transactions.
There’s no doubt that Disney wants to get more data on park guests, but that essay’s extrapolations of what they’ll actually DO with that data are implausibly silly. They’re all based on the idea that Disney would spend a ton of money to more efficiently collect a ton of data aggregated for weeks across tens of thousands of customers, and then devote all that money and effort to develop creepily specific experiences for individuals.
It’s telling that Foreman compares Disney’s magic bands to the NSA, since I think the complaints miss the point in the same way. People freak out that the government has all kinds of data on them, when the reality is that the government has all kinds of data on millions of people. The value of your anonymity isn’t that your information is private; it’s that your information is boring. All your private stuff is out there, but it’s still a burden to collate all of it into something meaningful to anyone.
This absolutely is not an attempt to excuse the NSA, by any stretch. The NSA’s breaches are a gross violation, but the violation isn’t that they’re collecting the data, so much as that they’re collecting the data against our will and without our knowledge.
Anything Disney does with the Magic Band data, at least in the next ten years or so, is going to be 1) trend-based instead of individual-based, and 2) opt-in. For instance, they’ve already announced that characters can know your name and about special events like birthdays, but they’re only going to use something like that at a character meet-and-greet: you’ve specifically gone to see Mickey Mouse, and he’ll be able to greet you by name and wish you a Happy Anniversary or whatever. Characters seeking you out specifically is just impractical; the park has already had enough trouble figuring out how to manage the fact that tens of thousands of people all want to get some individual time with the characters. The same goes for the bit about “modifying” a billion-dollar roller coaster based on the data they get from magic bands; it’s just as silly as assuming that you could remove floors from a skyscraper that weren’t getting frequented enough by the elevators.
It’s absolutely going to be marketing driven; anybody who says otherwise doesn’t get how Disney works. But I think it’s going to be more benign. Walt Disney World as a whole just doesn’t care about a single guest or a single family when they’ve got millions of people to worry about every day. So they can make more detailed correlations like “people who stay at the All Star resorts don’t spend time at the water parks” and adjust their advertising campaigns accordingly, or “adults 25-40 with no children spend x amount of time in Epcot.” But the most custom-tailored experience — at least, without your opting in by spending extra — is going to be something like, at most, coming back to your hotel room to find a birthday card waiting for you.
The creepier and more intrusive ideas aren’t going to happen. Not because the company’s averse to profiting from them, but because they’re too impractical to make a profit.
A review, more or less, of the Samsung Galaxy Note 8.0, plus a bit of marveling at the current state of tablet computers.
Back in March of 2009, Kindles still had keyboards, and we were still a year away from enjoying all the feminine hygiene jokes that came with the release of the iPad. I took advantage of the release of the Kindle 2 to describe what would be my ideal tablet computer.
Reading that blog post now, what stands out the most is what a fundamental shift in thinking the iPad was. Looking back, it’d be easy to say that the iPad was inevitable — of course they’d just make a bigger iPhone! But that’s definitely not where the speculation was before the iPad’s announcement. People were still thinking that there was a clear distinction between computer and media device. It’s why there’ve been so many “hybrid” laptops with removable screens that become “tablets,” that invariably have tech journalists swooning and declaring them the perfect solution right up until the point that the thing is released and fails to make a dent. It’s why people still insist on making a distinction between devices for “consumption” vs. ones for “creation.” If you’d asked most people in 2009 to describe what the iPad was going to be like, they’d have described something basically like the Microsoft Surface Pro.
That includes me; what I had in mind was essentially a thinner, lighter Tablet PC (in other words, the Surface Pro). The iPad undercut that, not just in size and in price, but in function. It made good on the promise of a “personal computer”: portable enough for media consumption, but multi-purpose enough not to be dismissed as just an evolution of the e-book reader or PDA. It’s clear now that that was absolutely transformative, and anyone who suggests otherwise is not to be trusted with your technology prognostication.
I’m not claiming to be prescient; at the end of that blog post, I gave a spec list for my perfect tablet computer, and it’s not an iPad. However, it is eerily close to a tablet computer that exists today, with one major difference: it wasn’t made by Apple, and it doesn’t run OS X. It’s the Samsung Galaxy Note 8.0.
Why Would You Even…?!
I’ve already been completely converted to the form factor of the iPad mini, and this one reportedly had all of that plus removable storage and a Wacom digitizer. The existence of refurbished models, some left-over gift certificates, and “reward” points, meant that I could get one for about $250. (They retail for about $400, and not to spoil the review, but I really can’t recommend it at that price). If you don’t think I chomped on that like the star attraction at Gatorland, then you just don’t know me at all.
Most of the reviews for the Note 8 that I’d read acknowledged that it’s a fairly good — but soon to be outdated — tablet whose main draw was stylus input, and that unless you need the stylus, either the iPad mini or the Nexus 7 is a better value. They still treated the digitizer as an extra feature though, as opposed to the whole reason for the tablet’s existence. (Which is fair, since Samsung’s treating it basically the same way by selling a Galaxy and Galaxy Note line in parallel). I hadn’t seen any that reviewed it mainly for the strength of its digitizer and its appeal to digital artists; the closest I could find was Ray Frenden’s review of the Galaxy Note II smartphone.
Over the years, I’ve tried out various graphics tablets, tablet PCs, styluses, and art software with the hope that I’d find the magic bullet that suddenly turned me into a better artist. I’ve finally given up on that idea and resigned myself to the fact that only practice is going to turn me into a better artist. By that measure, anything that reduces “friction” and encourages me to practice more often is a worthwhile investment. I’ve got several Moleskines that were going to do exactly the same thing, but instead just frustrated me with their analogness. Without an “erase everything” button, they’re like tiny Islands of Dr. Moreau, the misshapen forms of my previous failures staring back at me and discouraging me before I’ve even begun a new drawing.
A tablet that I use as often as the iPad mini, on the other hand, but that has a pressure-sensitive stylus and palm rejection and layers and simulates different media and colors and can download reference material directly into my art program? And for less than 300 bucks? How could that not be ideal?
As a Digital Notebook
The sketch above is the most effort I was willing to put into a drawing for this blog post. Obviously better artists could show more of the capabilities of the device; even I have generated better drawings on the Note 8 when I’ve put more time into it.
But sample images for these things are always deceptive. I know I’ve gotten in the habit of looking at Frenden’s reviews and thinking, if I buy this thing, I’ll be able to draw like that!, which is of course a lie. And you can find amazing pieces of work done on just about any device, from a cell phone to a Cintiq, by artists who already know what they’re doing. What I wanted to see was what kind of results you’d get if an average person interested in being a better artist sat down and tried to use it.
That drawing was done in Autodesk Sketchbook Pro for Android, and it’s intended to show off the basic advantages of using a digital notebook. Reference art from the web, multiple layers for sketching & inking, brushes with variable line weight, and a tool that makes it easy to add simple color.
Pressure Sensitivity: You can tell that it’s a pressure-sensitive pen, but you’re not going to see dramatic differences in line weight unless you’re willing to do a lot of fiddling with brush settings. There is a way to increase the sensitivity of the S-Pen, apparently (instructions are in that Frenden review of the Note 2), but I had no luck getting it to work.
Palm Rejection: Even though there are now pressure-sensitive styluses for the iPad, one of the biggest annoyances remaining was that none of the software (at least that I’m aware of) supported any sort of palm rejection. As a result, you had to hold the stylus out as if you were using charcoal or pastels, which to me kind of defeats the purpose of having a stylus. On the Galaxy Note 8, of all the apps I’ve tried — Sketchbook Pro, Sketchbook Ink, Photoshop Touch, ArtFlow, and Infinite Painter — it only worked reliably in Sketchbook Pro. The others would either leave a smudge at the bottom of the screen, resize the view, or interrupt the current drawing stroke. Even in Sketchbook Pro in “Pen Only” mode, it seemed eager to interpret my palm as an attempt to resize the canvas. I get the impression that both pressure sensitivity and palm rejection have to be implemented by each app for itself, although it seems like it’d make far more sense to have it implemented at the OS level.
Accuracy: The other big problem with drawing on the iPad is that you need a blunt tip to register on a capacitive display. The S-Pen is much, much better at this, as you’d expect. The other thing that helps is that the tablet detects proximity of the pen to the surface, not just an actual touch, so you get a cursor showing where you’re about to draw. (It also means you get tooltips throughout the entire system when using the pen. Which is nice, I suppose, but I’d prefer just to have simple clarity of the UI, and I’d been hoping that touch screens meant tooltips were dying off for good).
Drawing on Glass: Earlier I said I wanted something that would reduce the “friction” of drawing so I’d practice more often; drawing on the Note 8 takes that a little bit too literally. I’ve gotten used to drawing on graphics tablets, and with rubber-tipped styluses on the iPad. That’s entirely different from drawing with a plastic nib on a glass screen, enough to make me wish they’d sacrificed the display brightness a bit in favor of a more matte surface on the screen. That would never have happened, since Samsung’s trying to position the tablet as a superset competitor to the iPad mini and is going to make a big deal out of the slightly better pixel density. But I think it would’ve been a good way to further differentiate this as a stylus-based tablet, instead of a tablet that happens to also have a stylus.
Responsiveness: It varied from app to app, with Sketchbook Ink being the worst. When I turned off the “Smooth Brush” option in Sketchbook Pro, the lag was all but imperceptible to me, unless I was drawing with a particularly large or complex brush.
Bezel: Unlike the iPad mini, the Note 8 has a bezel that’s as wide on the sides as it is on the top and bottom. While I think it does actually contribute a bit to the overall “cheap and plastic” look of the device, it’s absolutely essential for a tablet with a stylus. If you were to simply slap a Wacom digitizer onto an iPad mini, there’d be no good place to hold it.
Software: If it’s not obvious by now, Sketchbook Pro is the clear winner of all the apps I’ve tried. That’s no big surprise, since it’s been around for years and was designed specifically for tablet computers. I’ve bought a version of it for every operating system and every computer I own, and they’re all excellent; it’s nice to finally be able to use it as it was designed to be used. I do wish that it were possible to import brushes on the tablet version as you can on the desktop versions; if there is a way to do that, I have yet to find it.
Overall, I’d say that even though our skill levels are vastly different, my take on the Note 8 isn’t all that different from Frenden’s take on the Note II. (Much of that’s intentional on Samsung’s part, as they want consistency between the phone, 8-inch, and 10-inch tablet devices in their line). Don’t expect to use it for finished art, and don’t expect it to function like a $300 Cintiq tablet. But as a sketch book with a complete set of art tools that you always have with you, it’s fine. Whether you choose the Note II or the Note 8 — every review I’ve read of the Note 10 says that it’s underpowered, so I’d avoid it — just depends on which one you’re more likely to have with you everywhere you go.
For my part, I can definitely see myself practicing more often on this thing.
As a Tablet Computer
Practicing art was only part of the thing; even I can’t justify spending a couple hundred bucks to replace a $10 Moleskine. The idea was that I’d have something that would do everything the iPad mini can, and function as a digital notebook. In that regard, I’d say that it’s not quite there, but it’s pretty close.
When I had my semi-religious experience in an Apple Store, I said that the iPad mini seems absolutely silly until you actually hold one. I still think that’s the case, and I think that the build quality of the Note 8 really drives that home. It’s got a white plastic back and a silver border that makes it seem 1) like a prop from 2001 or Space: 1999, 2) thicker than it actually is, and 3) kind of cheap. The iPad mini feels like a solid block of metal and glass; the Galaxy Note 8 just feels like a plastic consumer product.
According to the specs, the Note 8 has a slightly higher pixel density than the mini. It shouldn’t be enough to be perceptible, but whether it’s more clarity, better use of fonts, brighter colors, or just placebo effect, the picture does look better than the mini’s. Especially with text and line drawings (by which I mean comics, of course). The colors also seem brighter than on the mini.
Battery life is middling. I haven’t stress tested it (and I’m unlikely to), but it has been completely drained of power just sitting idle for three days, which has never been the case with any iPad I’ve used. I suspect that if I took it on the road, I’d be having to charge it every night.
It does support micro SD cards up to 64 GB for external storage — one of the items on my “ideal tablet computer” list from 2009 — but for documents only, not apps. (Since it’s Android, there are instructions online on how to root the tablet so you can use the SD card for apps, but I’ve always considered rooting or jailbreaking these things to be more trouble than it’s worth). Since the tablet is limited to 16 GB of internal storage, and you’re left with around 9.7 GB after all the pre-installed software, the extra space is definitely nice to have. It could store my entire library of Kindle books and comic books, and have enough space to actually store a significant chunk of my music library, which is something I’ve never been able to do with the iPad. Consistent with the build quality of the rest of the tablet, the door for the SD card is one of those tiny plastic covers that always seems in danger of breaking off.
The stylus is definitely closer to a Palm Pilot stylus than a Wacom pen, but it’s perfectly adequate for drawing. It’s more white plastic, it fits snugly in the underside of the tablet, and pulling it out automatically brings up a page of Samsung’s pen-enabled “S Note” app. Unlike the bloatware I would’ve expected, that’s actually a pretty solid app. It’s got a set of templates of questionable usefulness, but the technology underneath is solid. Handwriting recognition is flawless enough to be eerie, and it’s got additional modes that recognize mathematical formulae and shapes for diagrams. The latter was the biggest surprise for me, since I haven’t seen any tablet computers pull off the potential of OmniGraffle very well, when it seems like it’d be a natural fit. (It’s possible that OmniGraffle for iPad is an excellent program, but at $50 I’m never going to find out).
Handwriting recognition is available throughout the system as a “keyboard” mode; the others are a traditional keyboard and voice input. (Somewhat surprisingly, handwriting is faster and more accurate for me than voice input. Could Star Trek have gotten the future wrong?)
Samsung has re-skinned the entire OS and included its own apps, but I didn’t think either one was particularly obtrusive. All the apps other than S Note were quickly relegated to a different page. Having a “Samsung Cares Video” icon that can never be deleted from the system is kind of an annoyance, but at least I never have to look at it. And I tried using a different launcher for a bit, but soon went back to the default one.
Considering how often I read comments online from people demanding that this app or that service be released on Android, I’d expected the Google Play store to be filled with nothing but fart apps and tumbleweeds. But I quickly found and downloaded every one of the apps I use most often on iOS. I’d be more disappointed if I had any intention of giving up iOS completely, but there’s a respectable amount of software out there.
It’s also got two cameras, and they’re both terrible. Which is as it should be, because if tablets had good cameras, you’d have even more people taking pictures with them in public.
Android vs. iOS
Believe it or not, I did go into Android with a completely open mind. As long as it’s functionally equivalent to iOS, then there’s no point in getting butthurt over all the differences.
And at least with the version of Android that’s installed on this thing — I don’t know, it’s Peanut Buster Parfait or some shit — it is pretty much functionally equivalent. On a task-by-task basis, there’s little that’s inherently better about one way of doing things than the other. Widgets and Google Now seem better in theory than in practice, and the only thing that’s outright worse about Android is the lack of a gesture to immediately scroll to the top of a screen.
What’s surprised me is just how much the cliches about each OS are true. Overall, Android seems like an OS that was made by programmers, while iOS seems like an OS that was made by designers. iOS tends to have a consistent aesthetic, while Android has that weird combination of sparseness and excess that you see on Linux desktops: there’s only an icon for a Terminal window and an open source MS Office clone, but they glow and rotate in 3D space with The Matrix constantly scrolling on top of an anime girl in the background.
I’ve certainly got my own preferences. The lack of options and settings common to iOS apps is often, bafflingly, described as a failing, but it’s really an acknowledgement that a consistent experience that just works is preferable to fiddling with a billion different settings. I often read people complaining about Apple’s “walled garden” and its arrogant insistence on one way of doing things as opposed to giving the user choice; what I see in Android is a ton of meaningless, inconsequential choices that I’m simply not interested in making.
One of the “features” of the Note 8 that I didn’t mention above is that it supports multiple windows. You can open a little task bar and drag a separate app onto the screen to have two apps running at the same time. A lot of reviews that I’ve seen for the tablet list this as a major advantage of the system. I say that it’s a clear sign the developers have learned nothing from the failure of the Tablet PC. They’re still trying to cram a desktop OS onto a tablet with a touchscreen, when even Microsoft has learned to stop emphasizing windows in Windows. The iOS limitation of having only one app running concurrently isn’t just some technical limitation; it’s one of those constraints that makes the design of the entire system stronger. It means the designer can’t just lazily port a desktop interface to a tablet, but has to put real thought into how to optimize the app for the new device and how it will be used.
(There are definitely, absolutely, major inconveniences to having only one app running at a time on iOS, as anyone who uses 1Password will tell you. But I’m convinced that the best way to solve it won’t look anything like what works on a desktop OS).
I think the best example of the whole divide between Android and iOS is in the file system. iOS is notoriously closed; each app has its own sandbox of files that only it can touch, and transferring documents between apps is cumbersome. Android is notoriously free and open; you have access to the entire file system of the device, with a file-and-folder-based GUI that should be familiar to you because it’s the exact same one you’ve been using for 30 years.
Some people will say this is a perfect example of each person being able to choose the operating system philosophy that works best for him. I say it’s an example of how stubbornly sticking to one way of doing things results in something that’s best for nobody. I’m perpetually frustrated by the file handling in iOS, where I just want to use this app to open that document but can’t find any flow of import or export that’ll make it work. But I’ve been just as frustrated with Android, where I keep creating files and then am completely unable to find them in any of the dozens of folders and subfolders on the system. (Sketchbook, for instance, doesn’t save pictures you’ve exported in Pictures. Nor in Documents. It saves them in Autodesk/SketchbookPro/Export).
I’m hoping that Android will eventually get over its problems with market fragmentation, let go of the desktop, and finally embrace a post-PC world. And I’m hoping that iOS will eventually let go of Steve Jobs’s pathological fear of multiple buttons and develop a scheme for cross-app communication that doesn’t depend on clipboards or exposing the file system. Treating touch as a completely new way of interacting with a computer could lead to dramatically improved ways of working; we’ve already seen that kind of cross-pollination happening between iOS and OS X. I don’t see that kind of innovation coming from Android, though, since it still seems to be doing little more than iterating on stuff that’s as old as X Window.
And one of the cliches that’s hilariously not true is the one about Android being all about functionality and practicality with Apple being all about flash and gimmickry. Because I’m now the owner of a tablet that has no less than eight different ways to unlock it (most using a rippling water effect), and which keeps warning me that it can’t see my eyes, because it has a “feature” (optional, of course!) that won’t let it go to sleep if it detects my face looking at it. Unlike the iPad, which, you know, turns off when I close the case.
And Finally, the Verdict
I’m way too invested in iOS at this point to ever switch over completely, so that was never an issue. And I think I’ve gotten most of the making-fun-of-Android out of my system, so I’m not going to be starting any campaigns against it. (I’d even like to try writing an app for it, at some point).
The questions for me were whether the Galaxy Note 8 could replace my iPad mini as the “everyday workhorse” tablet, and whether it’d help me practice drawing more often by having a ubiquitous digital notebook. The answers, so far: almost definitely not, and maybe.
If I were actually writing for one of the tech blogs, I’d be laughed out of my job if I based my entire verdict on “how the computer feels.” But for me, that’s what it comes down to with the iPad mini. It’s like Kirk Cameron’s banana: it just fits the hand perfectly (and doesn’t squirt in your face, either). It just feels more fun to use, for some indefinable value of “fun.” When Apple inevitably releases one with a higher resolution display, it’s going to be all but impossible for me to avoid getting one. I bought the first one thinking it was a ridiculously excessive extravagance, and it almost immediately became indispensable; I use it every day.
Still, I’m happy to have the Galaxy Note 8, although I’m glad I didn’t pay full price for it. It’s a solid (if not exceptional) drawing tablet that didn’t require me to shell out for a Cintiq or even a Surface Pro. If it helps me get to the level where I could actually make art for a game, then it was a good investment.
As for normal people, without my weird affliction when it comes to gadgets?
If you don’t care that much about drawing and just want the best tablet: get an iPad mini.
If you want a good tablet for an unbeatable price: get the Nexus 7.
If you’ve got the money, and you’re looking for a laptop replacement or the best drawing experience you can currently get on a tablet: get the Surface Pro. (I haven’t used it myself, but I’ve never seen a review of one that could find fault with the digitizer on it).
If you want a mid-sized tablet and think you’ll ever want to use a stylus with it: get the Galaxy Note 8. Preferably on sale.
Making sense of the iPad mini in a world that doesn’t need it.
After my previous unfortunate episode in an Apple store, it should come as little surprise that I didn't last very long before I broke down and bought an iPad mini. No, it doesn't make sense for me to be throwing my credit card around as if I were the CEO of Papa John's or something. I've already got a perfectly fancy tablet computer that's not just functional, but really quite terrific. It's not like I'm getting paid to write reviews of these things, and even my typical “I need it for application development testing” is sounding increasingly hollow.
What helps is a new metric I've devised, which measures how long it takes me after a purchase before the appeal of the thing overwhelms my feeling of conspicuous consumption guilt over buying it. It's measured in a new unit: the Hal (named after Green Lantern Hal Jordan, the Jordan who does have willpower).
By that standard, the iPad mini clocks in with a new record of 0.03 Hals, or about 30 minutes after I opened the box. Because this thing is sweet, and I pretty much never want to stop holding it. I'm writing this post on it, as a matter of fact, even though a much more functional laptop with keyboard is sitting about three feet away from me at this very moment. But to use it would mean putting the iPad down.
The “finish” of the iPad mini, with its beveled edge and rounded matte aluminum back, is more like the iPhone 5 than the existing iPads. It makes such a difference in the feel of the thing that I can talk about beveled edges and matte aluminum backs without feeling self conscious, as if I were a tech blogger desperately seeking a new way to describe another piece of consumer electronics.
It’s about as thin as the iPhone 5, and almost as light. With the new Apple cover wrapped around the back, it's perfect for holding in one hand. There have been several times that I've made fun of Apple, or Apple fanatics, for making a big deal about a few millimeters difference in thickness, or a few ounces in weight. And I joked about the appeal of the iPad mini, as if the existing iPad was unreasonably bulky and heavy.
But then something awful happened: I had to fly cross country four times within two weeks. And reading a book on the iPad required me to prop the thing up on the tray table and catch it as the person in front of me kept readjusting his seat. All my mocking comments were flying back in my face (along with the iPad, my drink, and the in-flight magazine), in the form of the firstest of first-world problems.
“Version 1 of the iPad mini is for chumps,” I said. “Check back with me once you've put in a higher resolution display, Apple.” In practice, though, the display is perfectly sharp. “Retina” isn't the make-or-break feature I thought it would be. I'd assumed that squabbling over pixel density was something best left to the comments sections of tech blogs, but when you compare the two side by side, the difference in sharpness is definitely visible. It's really only an issue for very small text, though. Books, Flipboard, and web pages are all clear and legible.
And speaking of Flipboard, it and Tweetbot are the two apps that get me giddy enough to own up to making another unnecessary tech purchase. Browsing through articles and status updates on a tablet that thin is probably the closest I'll ever come to being on board the Enterprise.
The phrase I've seen reoccurring the most in reviews of the iPad mini is some variation on “this is the size the iPad is supposed to be.” And really, there's something to that. I'm not going to give up my other one; the larger size really is better for some stuff, like drawing, Garage Band, and reading comics or magazines. But overall, I haven't been this impressed with the “feel” of a piece of consumer electronics since I saw the original iPhone. Realizing that this is just version 1.0 is actually a little creepy — apart from the higher resolution display, I honestly can't conceive of how they'll improve on the design of the iPad mini.
Impressions of the iPad mini and Microsoft Surface, and the realization that I just don’t know what the hell is going on anymore.
It started out innocently enough; I thought it’d be good to get out of the apartment for once and take an aimless drive in Marin County. And hey, there’s an Apple Store in Corte Madera, so what could it hurt to stop by and take a quick look around?
It ended with a descent into craven tablet-computing madness. In a shockingly short time, I’ve gone from thinking that tablet computers were a vulgar, unnecessary display of conspicuous consumption; to thinking that tablet computers were a vulgar, unnecessary display of conspicuous consumption and I want all of them.
I’ve already conceded that the iPad for me was a case of blowing a wad of cash on a gadget, in the hope that it would create a use for itself at some point down the road. And it did; even if I weren’t still calling myself an iOS developer, I’d still have gotten enough use out of the iPad to justify the expense. But what happened today is much worse.
iPhone Gigante Más Pequeño
I should explain something first: my reaction to the Apple “event” where they announced the iPad mini. Now, I’m about as shameless a whore for Apple products as anybody you’ll find outside of Daring Fireball, but I still saw nothing in that batch of announcements that interested me in the slightest. New MacBook Pro with “retina” display? Please; they hit exactly the sweet spot of size vs. power with the Air. New iPad with a thinner “lightning” connector and faster processor? I don’t think even Apple seriously expected anyone to consider an “upgrade” after just a few months.
And the iPad mini was just baffling. Who could it possibly appeal to? It’s smaller than the iPad, but not enough to be a recognizably different product, and not quite small enough to be really called “handheld.” It’s not as if the iPad is uncomfortably big and bulky anyway; in fact, it’s the perfect size for comics and magazines, and is actually a little bit too small for playing board games. And the price of the iPad mini is well out of the “impulse purchase” range of the Kindle or Nexus 7, and too expensive for an e-reader. It’s been getting stellar reviews pretty much across the board, but those are just tech reviewers buried under such a mountain of gadgetry that anything with Apple’s fit and finish is going to make a great first impression. It couldn’t possibly pass the real-world, practical test. It seemed like Apple had once again done the impossible: creating an Apple product that I had absolutely no desire to buy.
Then I went into the Apple store and tried out the iPad mini, just for yuks. I don’t know what kind of gas they were pumping into the store, but whatever it is has the same effect on my brain chemistry as a nightclub singer has on a Tex Avery wolf. This is the perfect size. How was I ever able to imagine reading in bed with such a freakishly enormous iPad? I would totally read more with a tablet this size. And the display is so sharp and the text so clear that it doesn’t even need a higher-resolution display. We have always been at war with the 10-inch diagonal. Go away, old woman, I don’t care that you’ve been waiting in line; I just want to keep holding this and never stop.
I’m not a complete idiot; I left the store without buying anything. And even though I see an Apple Store the same way Angelina Jolie sees African orphanages, I don’t plan on buying another iOS device anytime soon. When they release version 2 of the mini, though, it’s going to be a tough time for me. I’ll admit to spending a few minutes in the car trying to envision scenarios where my bulky, cumbersome iPad 3 met with an unforeseen “accident.”
In what I’m sure was a completely coincidental bit of retail zoning, a new Microsoft Store happens to have opened up just a few doors down from the Apple Store in Corte Madera. I checked in to take a look at the new Surface tablet, and came out almost as surprised as I was by the iPad mini.
Another thing I should clear up: even though I’ve gone all-in on Apple, I’m actually kind of excited by Windows 8. The last time I considered Windows to be anything more than a necessary evil was back in 1995. (Yeah, I genuinely got excited over Windows 95. Come on, you could put spaces in filenames. Just try and tell me that wasn’t cool). But Windows 8 seems to be a case of Microsoft getting it right to such a degree that I’m surprised it’s from Microsoft.
I installed the upgrade on my Windows machine, and I’ve spent the time since trying to come up with excuses to use it more often. (Until now, it’s just been The Machine That Runs Steam). As a desktop OS, it’s reassuringly Vista-like in its clunkiness: stuff takes more clicks than it should, basic system settings are confusingly split into several different places, it’s hard to tell what’s clickable vs what’s decorative, and the UI on the whole requires a tutorial or cheat sheet to be usable at all since there’s absolutely no discoverability. Basically, there’s enough wrong with it to remind me that I haven’t fallen into some alternate dimension where Microsoft is no longer Microsoft.
But as a consistent, cross-device UI, it’s outstanding. Text is big, legible, and just plain looks cool. All the hassle of writing WPF applications that required mark-up and screen resolution-independence finally seems to have paid off. The “Live Tiles” are such an ingenious way of combining widgets, icons, and full-blown apps that it seems obvious in retrospect, and it makes it look like other OS developers have been dragging their feet for not coming up with it sooner. It’s functional and actually is genuinely fun to use, from desktop to touch screen to tablet to phone to video game console.
And it’s instantly recognizable — even if, without “Metro,” there’s no good name for it — and unique, which is something that Android still hasn’t been able to accomplish. I was in a Best Buy the other day, and I legitimately couldn’t tell the difference between a bunch of Android phones and the iPhone 5 at the end. And speaking of phones: I like the iPhone 5, but I got one more as an obligation than anything else; it was as much to get rid of AT&T as it was to get any of the new stuff in iOS. But I’ve looked at the new HTC Windows Phone and for the first time have legitimately considered jumping ship.
So I think that Microsoft has been doing a remarkably good job with the new Windows push, overall. And the Surface fits in with that. The commercial still offends me deeply on a philosophical level, considering what a fan I am of basic human dignity. But still, it does what it’s supposed to do: above everything else, it sets the Surface apart from the iPad and Android tablets. It shows you how you’re supposed to use it — not tossing it to your pals or dancing with it in a fountain, but then again, why not? — propped up and attached to the keyboard cover, like a super-portable laptop. It’s still Windows, and the implication is that you can do the stuff you use Windows for, instead of just putting your feet up on a table and reading the New York Times.
Finally seeing it in person, I was surprised by how well it works. It’s much lighter and less bulky than it looks in pictures. The display is bright and clear. The OS functions pretty much the same with a touch screen as it does with a mouse and keyboard. Having already learned the gestures on the desktop, I didn’t have any problem figuring out how to use the tablet, and there was absolutely no sense of translation between desktop/laptop OS and tablet OS as there is between OS X and iOS.
I had very little success getting efficient with the touch cover keyboard, but I think the touch cover is as much branding as it is technology. It’s undeniably neat that it’s functional at all, but I think its bigger purpose is to remind people that they have a keyboard and trackpad with them wherever they go — This is a Windows machine. Look, you can run Office, it whispers in your choice of colors. The “type cover” adds an imperceptibly small amount of height and is infinitely more functional; it’s a super-slim laptop keyboard.
It feels ridiculous to hold the Surface vertically, but again, I get the impression that’s an essential part of the branding. They want you thinking, This is not a magazine or newspaper. This is like a laptop computer. I know this. I can be productive with this, as I laugh at the iPad users lounging on their couches swiping through vacation photos.
I’m not actually tempted to get one, but I will say that it’s the first product that’s gotten me to look up from my iPad and pay any attention to it at all. There’s absolutely no way I’d actually buy one.
And yet… this is the Windows RT version. The “Surface Pro” is coming out sometime next year, and it’s even more shameless about being a full-fledged notebook computer in tablet form. It’ll be thicker and heavier. I’ve heard it’ll actually come with a fan (other than Steve Ballmer, obviously). And it’ll undoubtedly be considerably more expensive than the current Surface tablet, which is already on the higher end of the iPad price range. But: it’ll come with a pressure-sensitive stylus. And I’ve wanted a tablet PC for as long as I can remember, as soon as I learned that they were even a thing. In fact, three years ago I described my ideal tablet computer, and it’s pretty much point-by-point the feature list of the Surface Pro.
Except for the “Apple makes it” part. And that’s the part that’s got me wondering what’s happened to the world. I just spent a considerable amount of time praising Microsoft. I defended an ad that has an old couple kissing and a tablet-covered dude doing the robot. I actually considered getting a third iOS device, for a not-insignificant number of seconds. The Apple Store seemed austere, black-and-white, business-like; while the Microsoft Store was colorful and fun. I’m actually envisioning a world where I might have a Windows device as my main computer. A time will come — but I must not and cannot think! Let me pray that, if I do not survive this blog post, my executors may put caution before audacity and see that it meets no other eye.
Maps in iOS 6 bring about the downfall of western civilization, and my disillusionment with tech commentary continues.
Just days after the entire developed world sank into a depressive ennui due to Apple’s boring new smart phone, society was rocked to its foundations by the unmitigated disaster that is iOS 6’s new Google-free Maps app. Drivers unwittingly plunged their cars into the sea. Planes over Ireland crashed into nearby farms due to mislabeled icons. College students, long dependent on their iPhones to find their way around campus from day to day, were faced with a featureless blob of unlabeled buildings and had no option but to lie down on the grass and sob. Huge swaths of Scotland were covered with an impenetrable fog, and the Brooklyn Bridge collapsed.
Throughout the entire ordeal, Tim Cook only stopped giving the world the middle finger long enough to go on Saturday Night Live and rip up a picture of Steve Jobs. Jobs’s only recourse was to haunt the homes and workplaces of thousands of bloggers, commenters, and Twitter users, moaning “You’re the only one who truly understands what I wanted. Avenge me!”
At least, that’s the way I heard it. You want proof? It’s right there in the video, the one where they say “The Statue of Liberty? Gone!” while showing a picture of the Statue of Liberty. (Psst… hey, The Verge people — it’s that green thing in the middle of that star-shaped island). You think just because it’s “journalism” they have to have sources to show that it’s a serious, widespread problem? Check it, Jack: a tumblr full of wacky Apple maps mishaps.
And no, it doesn’t matter that the vast majority of those are complaints about the 3D Flyover feature, which was universally acknowledged as being a “neat but practically useless” feature of the maps app as soon as it was released, because shut up that’s why.
Of course, since I’m a relentless Apple apologist, I’m focused, Zapruder-like, on one tiny six-second segment of that three-minute long video: the part that says “For walking and driving, the app is pretty problem free.” And I’m completely ignoring the bulk of the video, which shows incontrovertible evidence that not everything is 3D modeled and lots of things end up looking kind of wavy.
Sarcasm (mostly) aside, my problem with this isn’t “oh no, people are picking on Apple!” My problem is that the people who are supposed to be authorities on tech — and to be clear, it’s not just The Verge, by a long shot — keep spinning the most shallow observations into sweeping, over-arching narratives. (And no, I haven’t seen a single Verge post about Apple in the past week that’s neglected to find a way to link to that 73-degrees-Apple-is-timid post).
The tech journalists are the ones who are shaping public opinion, so I don’t think it’s unreasonable to expect an attention span of longer than a week and a short term memory of longer than a year. And as a result, I’m going to hold them responsible every time I read something dumb on the internet.
To be fair, even though the video just says “Apple’s supposedly been working on its version of Maps for 5 years, and it’s resulted in an app that’s inferior to what was there before,” and leaves it at that, the article does mention that Google’s data has been public for 7 years. And points out that the data gets refined with the help of location data from millions of iPhones and Android devices.
But why make it sound as if breaking free from dependence on Google was an arbitrary decision on Apple’s part? By all accounts, Jobs had a preternatural grudge against Google for releasing Android. But it’s not as if the Maps app on iOS 5 and earlier was feature-equivalent to Google Maps on Android, and Apple’s deciding to roll their own was a completely petty and spiteful decision. Android’s had turn-by-turn directions for a while now, and there were no signs that it was ever coming to the Google-driven app on iOS.
Was that a case of Google holding out, or Apple not bothering with it because they knew they had their own version in the works? I certainly don’t know — it’s the kind of thing it’d be neat for actual tech journalists to explain — but it ultimately doesn’t matter. The licensing deal with Google ran out, so Apple’s options were to reassert their dependency on their largest competitor, or to launch with what they had.
And incidentally, whenever someone says “Steve Jobs would never have allowed something this half-assed to be released!” it’s the tech journalists’ responsibility to remind them that the iPhone launched without an SDK, with nothing but Apple’s assurance that Web apps were The Future. Or that Jobs had no problem releasing the original iPhone without support for Flash video, even though there was an outcry that Flash was crucial to the user experience.
I installed iOS 6 on the iPad and tried out a few practical searches. It found everything I needed, and it actually gave me more relevant information than I remember the Google version giving me, since I was looking for businesses, and Yelp automatically came up with business hours. Of course, my experience means very little, since I happen to live in the one part of the world that’s going to be most aggressively tested by Silicon Valley companies. I have little doubt that Europe and Asia are going to have a harder time of it, and obviously they’re not markets to be ignored. But it’s not a one-size-fits-all problem, so it’s silly to treat it like one.
Apple has no problem calling Siri a beta, so they probably should’ve called Maps a beta as well. It’s a huge part of why people use smart phones, so it’d be foolish to imply that serious inaccuracies are no big deal. Regardless, it’ll work well enough in a lot of cases for most Americans, and in the cases where it doesn’t work, the web version of Google Maps is still available (and you can set up a link on the home screen with its own icon, even). Maybe Google and Apple will reach enough of a détente for a third-party Google Maps app to get released. Maybe it’ll even finally bring turn-by-turn directions, or Apple will even allow third-party apps to be default handlers for links!
Until then, maybe we can stop with the melodrama and the hyperbole, and just appreciate Apple Maps as version 1.0 mapping software with a neat extra feature of peeking into an alternate reality where Japantown in San Francisco has been overtaken by giant spiders.
Tech writers are disillusioned with the iPhone 5, and I’m getting disillusioned with tech writers.
I understand that if you’re writing about technology and/or gadgetry, a significant part of your job is taking a bunch of product announcements and reviews, and then fitting them into an easily-digestible, all-encompassing narrative. Usually, though, there’s at least an attempt to base that narrative off of reality, “ripped from today’s headlines” as it were. Lately, it seems like writers are content with a story that’s loosely based on actual events.
For the week or so building up to the iPhone 5 launch and the days after, the narrative has been simple: “The iPhone 5 is boring.” Writing for Wired, Mat Honan says, essentially, that it’s boring by design. And really, fair enough. Take Apple’s own marketing, remove the talking heads on white backgrounds, remove the hyperbole, and give it a critical spin, and you’ll end up with basically that: they’ve taken the same cell phone that people have already been going apeshit over for the past five years, and they’ve made it incrementally better.
Take that to its trollish extreme, and you have Dan Lyons, the creator of the “Fake Steve” blog. In a “provocative” take on Apple’s past year, including the iPhone 5 announcement, written for BBC News, Lyons spends a couple of paragraphs reminding us why we should care about his opinion (it’s because he wrote a blog in which he was “Fake Steve”), then mentions that he dropped the blog out of respect for Jobs’s failing health, and then invokes Jobs’s memory several times. The “analysis” of Apple itself is as shallow and tired as you can possibly get without actually typing Micro$oft; he actually uses the word “fanboys.”
We can all give him the attention he needs and then move on; there’s absolutely nothing there that wouldn’t get him laughed off of an internet message board. Lyons doesn’t even have the “I speak for Steve Jobs” thing going for him, since everybody has an opinion on how things would be different if Jobs were still in charge.
What’s more troubling to me is seeing writers who are usually worth reading take a similar approach: building a story that’s driven mostly by what other people are writing. Any idiot can regurgitate “news” items and rearrange them into a cursory non-analysis. (And that’s worth exactly as much as I get paid for writing this blog, which is zero, in case you couldn’t tell already). Is it too much to ask for insight? Or at least, to wait and see what the actual popular consensus is before making a declaration of what popular opinion should be?
If It Ain’t Broke, Fix It Anyway Because it Has Grown Tiresome to Me
On The Verge, Dieter Bohn found a perfect analogy for the iPhone in its own Weather app icon. He turned Honan’s piece from a blog post into the “prevailing opinion” about the announcement. But then he takes Honan’s reasonable but back-handed compliment and turns it into an accusation: the iPhone isn’t boring but timid. Sure, the hardware’s fine, but whatever: where Apple has failed is by showing no willingness to experiment with the core operating system or UI.
The mind-numbingly tedious drudgery of having to close multiple notifications under iOS (which, incidentally, I’ve never once noticed as a problem) proves that Apple’s entire philosophy is a lie. You’ve got a reputation for “sweating the details,” Apple? Really? Then how can you possibly explain this?! — as he thrusts a phone full of email notifications into the face of a sheepish Tim Cook, while Jobs’s ghost shakes his head sadly.
I honestly don’t want to be too harsh with any of the writers on The Verge, since I visit the site multiple times daily, and I really like their approach a lot. But I don’t think it’s particularly insightful to be aware that interface consistency isn’t just the dominant driving factor of UI design, but of Apple’s philosophy since the introduction of the Mac. We’ve been using “version 10” of the Mac OS for 10 years now, and while the appearance and the underlying hardware have changed dramatically, the core functionality is largely the same. Intentionally. It’s only with the still-new Mountain Lion that Apple’s made changes to how things work on a fundamental level — and, in my opinion, it hasn’t been all that successful. (I don’t understand all the people whining about faux leather while saying relatively little about changing the entire structure of document management for the worse).
On top of that, though, there’s the fact that The Verge has spent at least a month covering the Apple v. Samsung trial, in which Apple was spending a ton of time, effort, and presumably money to defend the UI that Bohn claims needs a dramatic overhaul. Yes, Microsoft has done considerable work to dramatically rethink the smart phone UI. That’s how Apple wants it. They spent a month saying, “this is ours, we made it, we like it, stop copying it.” Could it use some refinement? Of course. It always can, and some of those good ideas will come from other attempts at inventing a new cell phone OS. Does it need a dramatic re-imagining? No, unless you’re irrationally obsessed with novelty for its own sake, as a side effect of being steeped in cell phone coverage 24/7 to the point of exhaustion.
The Audacity of Swatch
Speaking of that, there’s Nilay Patel’s write-up of the new iPod Nano. Again, I think Patel’s stuff is great in general. But here, he carries on the “boring but actually it’s timid” idea by linking to Bohn’s article, and then goes on to build a story about what it says about Apple in general. Essentially, they’ve become The Establishment, too afraid of change to take any risks. With a product that’s changed dramatically in design in just about every iteration since the original.
“But that’s the old market, and the old way.” Apple isn’t about profiting from the planned obsolescence and impulse purchase cycle — which is news to all of us who have become conditioned to buy a new cell phone every 2 years and a new laptop every 3 or 4 — but about pioneering new markets. The last iteration of the nano heralded a future of wearable computing; it could’ve been the harbinger of a bold new market for a more forward-thinking Apple: wristwatches.
Let’s ignore the fact that Patel himself acknowledges that the nano wasn’t a very good watch in the first place. What about the fact that smart phones pretty much killed the entire wristwatch business? I’m about as far from being fashionably hip as you can get, but I still get considerable exposure to what’s actually popular just by virtue of living in San Francisco. And I don’t know anyone who still wears a watch. It’s been at least five years since anyone’s been able to ask for the time and not have to wait for everyone to pull their phones out of their pockets or handbags. Why would Apple go all-in on a market that they themselves helped destroy?
(Incidentally, Patel quotes watch band designer Scott Wilson as saying “The nano in that form factor gave me a reason to have three iOS devices on my body.” I can think of the iPhone and the iPod Nano-with-watchband; neither Wilson nor Patel makes it explicit what the third device is. And now I’m afraid to ask, because I’m not sure I want to know what this third device is exactly, or where a person would enjoy sticking it).
Saying that the MP3 market is dead fails to acknowledge what killed it: that functionality, along with that of the point-and-shoot camera, has moved away from a dedicated device and towards the smartphone. Smartphones are expensive, even with a contract, and the more info we put onto them, the more they become irreplaceable. There’s still a market for a smaller, simpler, and relatively inexpensive MP3 player. There’s a clue to that market on Apple’s own marketing page, and the most prominent icon on its home screen: “Fitness.” Joggers and people who work out — at least from what I’ve heard, since I have even less familiarity with the world of exercise than I do with people who still wear wristwatches — want a music player they can take for a run or take to the gym without worrying too much if it gets lost or broken. They’ll get more use out of that than from a too-large wristwatch that has to be constantly woken from sleep and needs a headphone wire running to your wrist if you want to listen to music.
That’s where the new market is, ripe for Apple to come in and dominate: stuff like the fitbit. I don’t have the actual numbers, of course, and I don’t even have any way of getting them, but I can all but guarantee that Apple sold more of the iPod nano armbands than it ever did with watchbands. And I imagine it’s the same philosophy that made them put a place for carrying straps on the new iPod touch: it’s not even that Apple doesn’t want to take risks with its flagship product, it’s that customers don’t want to take risks with the one device that handles all their communication with the world. For them, the iPod is an accessory.
Speaking of Consistency
But if you’re going to be making up stories, you should at least try to be consistent with them. On Slate, Farhad Manjoo has some serious issues with the new dock connector. He repeats the idea that the new iPhone is boring, but he uses the magic of sarcasm to make his point extra super clear. The problem is that so many details about the new phone leaked out weeks before release. By the time of the actual announcement, the world had already seen everything and stopped caring.
Got that? Apple’s problem is that it’s got to keep a tighter lid on its plans. Classic Apple, going blabbing about everything all over the press, flooding the market with product announcements. It’s boring everyone!
He’s bored by all the leaked information and the lack of any big bombshells in Apple’s announcement. Except for the big bombshell of the new dock connector. It’s pretty boring but also very impressive. It doesn’t take any risks but completely and unnecessarily changes the dock connector, destroying an entire industry of accessories. Apple has a long history of invalidating what it deems outdated technology, but this is different. Manjoo has to get a new alarm clock.
Also, the new phone is remarkably thin and light, “the thinnest and lightest iPhone ever made, and the difference is palpable.” But how could Apple possibly justify changing everything just for the sake of this new, thinner dock connector?
Back on The Verge, Vlad Savov describes all the leaks that bored Manjoo, and he mentions how embarrassing they must be for Tim Cook. He says that the problem is all the points of potential leaks in the supply chain, which have “drilled holes in Apple’s mystique.” In the same article, Savov links to The Verge’s own post about every detail that was leaked ahead of time.
Isn’t it a little disingenuous to be on a site that publishes front-page posts of pictures of new iPhone cases, a detailed report of the new dock connector, and a compilation of all the rumors and leaks to date, and then comment on the unprecedented demand for leaked information? It seems a little like prom committee chairman Edward Scissorhands lecturing everyone on their failure at putting up all the balloons.
And of course, on the first night of pre-orders for the boring new iPhone that nobody’s interested in, Apple’s online store was overloaded, and the phone sold out of its initial order within two hours.
On the topic of pots, kettles, and their relative blackness: I wasn’t that interested in the new iPhone, and I still chomped on the pre-order like a starving bass. I was much, much, much more excited about finally ditching AT&T than I was about the device itself. (So eager to get rid of AT&T that I’m willing to run into the arms of a company that’s no doubt almost as bad). Now that Apple’s not talking about magic, I can take their advertising at face value: I’m pretty confident that it is the best iPhone they’ve ever made. I’ve got an iPhone, and I like it, so I’ll get a better one.
What does that say about the state of gadget obsession that “I’m going to buy a new expensive device every two years even though I don’t technically need it” comes across as the most pragmatic view?
I can still remember when I first saw Engadget, and I thought the concept was absurd. A whole ongoing blog devoted solely to gadgets and tech? Is that really necessary? Then, of course, I got hooked on it, and started following it and now The Verge and quite a few Apple-centric sites. If I’ve reached a point of gadget saturation just reading the stuff, what does it do to the folks having to write about it? It seems like it’s creating a self-perpetuating cycle of novelty for its own sake, which then drives commentary about this bizarre fixation on novelty for its own sake. You can’t even say “maybe just step away from it all for a bit;” it has to be a stunt, completely giving up the internet for an entire year while still writing for a technology-oriented site.
Whatever the case, it’d probably be a good idea to step back a bit. I’ll start doing that just as soon as I get my sweet new extra-long phone next week.
Dozens of over-privileged gadget hounds have suddenly found themselves with outdated electronic equipment. Won’t you please help?
As an electronic gadget obsessive with more disposable income than common sense, I’m well aware of the trials of being an “early adopter.” (That’s a euphemism for the older term, “impatient doof.”) We buy overpriced things, we watch them go down in price and up in specs and features, we sell them or donate them once they’re four or five years old, we buy a new one. It’s all part of the Great Circle of Life.
Rarely, though, is this delicate ecosystem hit with such a widespread cataclysm as the one we’ve seen this week. In just a few short days, I went from being blessed with pristine examples of consumerism at its finest to being burdened with obsolete relics. It disgusts me even to look at them.
What’s worse is that each of the new models fixes the most annoying thing about its predecessor. For instance:
We all knew that the iPhone 4 was coming out, so it wasn’t a surprise. (Well, it wasn’t a surprise to most of us — apparently Apple and AT&T didn’t get the memo). This version has a faster processor and a much improved camera, which were my biggest complaints about the old version: now I don’t feel compelled to take a point-and-shoot everywhere. So I set aside some money — that I would’ve wasted on charity or something frivolous like that — and was prepared to make an informed purchase.
But then E3 happened! A new Xbox 360! Styled after the PS3, with special dust-collecting coating and barely-sensitive touch-activated not-buttons! And it, theoretically, fixes the two biggest problems with the old Xbox 360: catastrophic system failure from poor ventilation, and the fact that turning on the console is like having a leaf blower pressed against your head while standing on a runway at LAX.
And then: A Nintendo 3DS! Which fixes the biggest problem with the older Nintendo DS: that, err, it didn’t have 3D. Okay, that one is kind of weak, but I still want one after hearing everybody on the Twitter going nuts over it.
But then out of nowhere: A new Mac Mini! I’ve spent the past couple of years trying to piece together a decent home theater PC using the enormous, Brezhnev-era Mac Mini; an external drive; a USB TV tuner; an assortment of remote control apps; a Microsoft IR receiver; various DVI-to-HDMI adapters; and snot. Now Apple has said, “Oh right! HDMI has existed for several years now!” and upped the hard drive size and built an HDMI port right into the back, making it an HTPC right out of the box. (And by the sound of it, fixing the problem Mac has with overscan/underscan on my TV). It’s still overpriced to use as just a home theater PC, but it’s the best version of the mini that Apple has made yet.
And of course, the Microsoft Kinect business, which solves the problem of “I don’t look stupid enough while playing videogames.”
Now looking on eBay at all the listings of used Mac minis and Xbox 360s is positively heartbreaking; you can almost hear Sarah McLachlan wailing in the background as you scroll past one “0 bids” after another. And now I actually feel kind of gross for writing all this, so I’ll start browsing elsewhere.
All the angles on the issue have been covered extensively on tech blogs, in particular Daring Fireball, so you won’t see any particularly novel insight here. But I haven’t yet seen them all gathered in one place. Apple has gotten a lot of criticism across the internet — much of it entirely deserved — for its App Store approval policies and the closed system approach it’s taking with the iPhone OS. And it bugs me to see Adobe employees — whether representing the company as with Chambers’s post, or not as with Brimelow’s — getting so much traction by taking advantage of that ill will, when Adobe doesn’t have a leg to stand on.
After citing various stories about Apple’s rejection policies and an even-handed piece on Slate called Apple Wants to Own You, Chambers goes on to say that Adobe will be shifting its focus with Flash CS5 onto the Android platform. Here are a few helpful tips for Adobe that might make this next choice of platform go more smoothly:
1. Don’t promise something to your customers unless you’re sure you can deliver.
Adobe’s claiming that Apple suddenly introduced a new clause to the developer program license that blew all their hard work out of the water. How could they possibly have predicted that Apple would so cruelly impose a last-minute ban of Flash on the iPhone suddenly out of nowhere? I mean, sure, the iPhone has been out for three years now and it’s never allowed a Flash player, but with Apple’s draconian secrecy, who knows why that is? Okay, fine, the CEO of the company has repeatedly said that it’s for performance reasons and battery life, but that’s just spin. Adobe had no way of predicting that the company that’s refused to allow Flash on their devices would suddenly decide not to allow a program that runs Flash. It’s all Apple’s fault!
2. Have a chat with Google and Motorola first.
There’s no way that a small start-up like Adobe could’ve communicated with industry giant Apple, either. Who even knows those guys’ email addresses? Plus, they’re scary: Gizmodo made a whole post about the guy whose book-reading app was rejected for containing adult material. Who’s to say that the exact same thing wouldn’t happen to a multinational corporation proposing to create a new development environment for the platform? Unlike Apple, Motorola and Google are pledged to complete openness, and they won’t have any qualms about performance or security on their branded Android OS devices. You probably don’t even need to ask first.
3. Try running your software on the device in question.
Apple’s reasons for refusing Flash are so arcane and mysterious that nobody can figure them out. Even though it’s been said repeatedly from multiple sources both inside and outside Apple that Flash is a hit on performance and battery life, that’s just idle speculation. Better to try to sneak something in than to actually dig into the problems with interpreted code and non-standard video playback and get it running acceptably.
4. Don’t use a Mac for development.
Because if you want to get anything done, you’ll have to use Adobe software, since Adobe has near-total market dominance in every area of production. And Adobe software runs like shit on a Mac. Mr. Brimelow, I suggest that your talk about the long relationship Apple and Adobe have had with each other would be more convincing if you had a dramatic backdrop, or a YouTube video playing in the background. For the backdrop, you’ll want to use Photoshop CS4, the first version that supports a 64-bit OS, which came out a year and a half after OS X converted to 64-bit. And for the YouTube video, be sure to speak up, because playing anything with the Flash video player on a Mac will cause your computer’s fan to kick into overdrive from the increased processor load.
5. Consider what “cross-platform” means for a platform built entirely around its unique identity.
If the blog posts from employees weren’t enough to convince you that Adobe’s committed to cross-platform development, then running any piece of Adobe software — especially any AIR app — should do the trick. Using Photoshop or Flash on a Mac means that you get to give up everything that made you choose the Mac OS in the first place. The closest they’ll come to “Mac look and feel” is begrudgingly including a “Use OS Dialog” button on their file dialog boxes. But the iPhone, even more than the Mac, is specifically branded as a device that wins not on features, but on the OS.
6. Stop pretending Apple is singling you out.
Chambers makes a point of saying “While it appears that Apple may selectively enforce the terms, it is our belief that Apple will enforce those terms as they apply to content created with Flash CS5.” Or in other words, Apple will allow Unity, .NET, et al., but is singling out Flash/Adobe to screw them over specifically. Adobe’s complaining about Apple not giving them fair treatment is a lot like a polygamist accusing one of his wives of cheating.
This is the most galling part of the whole thing, to me: Adobe’s desperately grabbing on to Cory Doctorow’s coat tails and waving the flag of intellectual freedom, while simultaneously suggesting that the iPhone OS is gimped because Flash has something like 98% market saturation with internet video.
The best explanation I’ve seen is from Louis Gerbarg on his blog: allowing Flash, or even iPhone-targeted Flash, onto the iPhone would mean Apple effectively turning its OS development cycles over to Adobe’s engineers. It’s the same reason they’re so uptight about developers using private frameworks: if they change something with an OS update, the app breaks, and customers complain to Apple. Not the developers.
Adobe’s essentially going into a store, handing the owner a big black box, refusing to let the owner see what’s inside, and then complaining about freedom and openness when the owner refuses to sell it.
7. Learn to appreciate the monopoly you’ve already got.
It’s not particularly insightful to point out that the environment Apple’s created with the iPhone OS is very similar to the environment that game developers have had to deal with for years. Game console manufacturers have very tight restrictions on what they will and won’t allow to run on their devices — if you think that Apple’s approval process is complicated and draconian, you should go through Nintendo’s technical certification process sometime. (Note that this isn’t a complaint: the certification process means it’s much, much harder to find a buggy game that crashes your system or runs poorly on your particular console configuration).
And the lesson in game development is that content is more of an investment than code. (At least, code written in a particular language. And that’s partly my programmer bias showing through, where it’s a point of pride that once you’ve learned how to do something in one development environment, it’s much easier to do the same thing in another). Art assets will port from platform to platform, even if the code base doesn’t. (More on this in point number 10). I have yet to see a game company that didn’t use Photoshop to generate art assets, and most also use some combination of Illustrator, After Effects, and any number of other Adobe products.
8. Come out and acknowledge who multi-platform development benefits, exactly.
There is an ease-of-use and familiarity benefit to using Flash. But Adobe reps hardly ever mention that. (As someone who’s developed games using Flash and using Cocoa, I can kind of understand why Adobe wouldn’t push the “ease-of-use” or “predictability” claims where Flash is concerned). Instead they talk about cross-platform capability. An independent developer might be drawn to Flash for, say, making a game because it’s an environment he already knows. A publisher would be drawn to Flash for being able to make the same game for the iPhone, Android, Web, and anything else.
And this makes it a little bit like trying to explain to poor people why they shouldn’t vote Republican: they don’t care about you. Adobe isn’t going to make such a big stink, or for that matter build a campaign around a new feature in one of their flagship products, for the indie developer who’s going to blow a thousand bucks on the new Creative Suite. Adobe wants to get publishers to buy site licenses. And publishers want to make something once and then get it onto as many platforms as possible, because for a publisher, development time is more expensive than hardware purchases, testing, and customer support. Smaller developers will quickly reach the point where having their product on multiple platforms is costing more than the revenue it’s generating.
So when Chambers says:
The cool web game that you build can easily be targeted and deployed to multiple platforms and devices. However, this is the exact opposite of what Apple wants. They want to tie developers down to their platform, and restrict their options to make it difficult for developers to target other platforms.
what he means is: Apple includes a free development environment on their machines, to encourage people to buy their hardware. It comes complete with documentation, visual design tools, and built-in animation and layering libraries that make it relatively easy to achieve Flash-like results using native code. However, this is the exact opposite of what Adobe wants. They want to tie developers to Flash, to ensure that they have a proprietary development environment that’s most appealing to larger publishers, and restrict their options to optimize the runtime to target any particular platform, guaranteeing that it runs equally badly everywhere.
The “cool web game” bit is there to make it sound like the guy sitting in his bedroom who’s just finished his cool Bejeweled clone-with-RPG-elements can just hit a button in Flash CS5 and suddenly be rolling in heaps of money from App Store sales. And to the smaller, independent developers who would like to try releasing their games for multiple platforms: learn Objective-C. It’s not that difficult, and you’ll have developed another skill. That to me seems more valuable than getting upset that a game designed for a mouse and keyboard on the internet won’t port to a touch-based cell phone without your intervention and a little bit of effort.
9. Make a genuine attempt at an open system.
If Adobe really is all about content creation, and if they’re going to insist on jumping on the anti-Apple “closed system” bandwagon, why do it for an inherently closed system? They’ve got one development kit that requires a plug-in and forces all its content into a window on a webpage, and they’ve got another development kit that works with HTML and PHP but nobody uses it. Why not put their content creation software expertise to work creating stuff that’s genuinely based on open standards?
10. Stop the damn whining already.
Brimelow closed comments to his post to avoid “the Cupertino Spam bots,” and Chambers warned that non-constructive comments such as “Flash SUXXORS!” would be deleted. Because, as everyone on the internet knows, anyone who defends Apple for any reason, ever, is automatically a drooling Apple fanboy who believes Steve Jobs can do no wrong.
Which means, I guess, that everyone in the tech industry is 12 years old.
What these guys need to understand is that complaining about Adobe’s closed, proprietary system doesn’t automatically make Apple’s good, and vice versa. (Although it’s a big point in Apple’s favor that they don’t try to claim that their system isn’t closed). There are definitely problems with iPhone development.
(I’d be remiss if I didn’t mention again that an iPad-native equivalent of HyperCard would be sweet. It could even use the Keynote front-end and run everything with WebKit. If you need a consultant on the project, Apple, let me know).
Another problem is the lack of transparency in the approval process. I mentioned that the certification requirements for consoles are a lot more complicated than those for the App Store; the advantage, though, is that they’re very explicit. You can and will still get surprised by a rejection, but a lot of the more obscure problems are solved when there’s a huge list of requirements and developers are forced to test everything.
As for the other objections that are so often brought up, they seem reasonable enough to me. Yes, the state of file management on the iPad is really terrible right now, but I’m confident it’ll improve. Sure, Apple can reject an app for “duplicating functionality” of one of its built-in apps, but that situation is fluctuating (witness their support for VOIP apps like Skype, and browsers like Opera Mini) and the core apps are functional enough anyway. (Rejecting an app for the “pinch to preview a stack of pictures” functionality is pure bullshit, though).
And Apple can and does reject apps based on content alone. But as John Gruber pointed out, Apple’s still selling a brand as much as a platform. That’s the fundamental philosophical difference between the Android model (and Adobe’s whining) and the iPhone model: Android is selling you on the idea that you can run anything, Apple is selling you on the idea that you can run anything good. That’s why it’s a good thing that both platforms are available to both developers and customers. If you want a general-purpose phone that can run anything you throw at it, including ports of web games, then get an Android. If you want only the stuff that’s been specifically tailored to run well under the iPhone OS, then get the iPhone.