building artificial minds is going to be the most important thing our species ever does

And you shouldn’t let anyone tell you otherwise!

I’m prompted to write this by my friend Tim Lee’s new piece on Vox: Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. It is characteristically smart, but I disagree with most of it.

Tim’s first and second points concern the difficulty of interfacing artificial minds with the physical world. This is accurate, but decreasingly so. The internet now provides programmatic means by which I can command a huge variety of commercial activity (Amazon, Uber, Push for Pizza); puts most of the people on Earth within easy communication range (email, SMS, POTS); and, in rich countries, is increasingly connected to ubiquitous telemetry (traffic cams, fitbit, mobile phone location trackers).

Progress in robotics seems to be accelerating, but is still temporarily constrained by the mismatch between the field’s capabilities and the size of its markets. There are only so many buyers for automotive welding robots and creepy robot dogs, after all. The consumer market is currently mostly about robot vacuum cleaners that sort of work. But we’re on the cusp of ubiquitous robot cars, and it seems plausible that geriatric caregiver bots will be viable in my lifetime. If a machine intelligence has a strong desire to interact with the real world (which it might not), it’s hard to imagine the physical interface remaining a substantial obstacle for much longer.

The third bullet is the meatiest, but also runs into the most problems:

Digital computers are capable of emulating the behavior of other digital computers because computers function in a precisely-defined, deterministic way. To simulate a computer, you just have to carry out the sequence of instructions that the computer being modeled would perform.

The human brain isn’t like this at all. Neurons are complex analog systems whose behavior can’t be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole.

Yes, neurons are complex. But their behavior seems to be computable in a Church-Turing sort of way. You can consider digital music playback as an analogy. Music exists as a continuous and extremely complex transformation of air pressure. It is very dissimilar to how digital circuits work. But those circuits can operate so quickly that trains of on/off pulses can recreate an arbitrary piece of music perfectly. So it is, plausibly, with neurons.
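
A toy version of the sampling idea, just to make the analogy concrete (this is a generic sketch, not a claim about how anyone actually models neurons): reduce a continuous waveform to a list of discrete samples, and the approximation gets as faithful as you like as the sampling rate climbs.

import math

def sample(signal, seconds, rate):
    # Reduce a continuous function of time to a plain list of numbers.
    return [signal(i / float(rate)) for i in range(int(seconds * rate))]

def tone(t):
    # A 440 Hz tone: a continuous pressure wave.
    return math.sin(2 * math.pi * 440 * t)

# Sampled densely enough (44,100 times a second is the CD standard),
# the discrete version is indistinguishable to the ear.
samples = sample(tone, 0.01, 44100)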

Although brains are very complex mechanisms, it is overwhelmingly likely that you can strip out much of their functionality without any impact on their computational capacity. Roughly half the cells in the brain are glia, responsible for things like immune function, garbage collection and building myelin sheaths. As far as anyone knows they’re just there for biological support. How abstract can you make your model’s neurons before they lose any hope of spawning a mind? Nobody knows. Neurons actually are weirdly computerlike, in that an action potential firing down an axon is an all-or-nothing event. But the threshold excitation that triggers firing is modulated in lots of subtle ways (both transiently and over longer time periods), and no one knows how many of those mechanisms will have to be simulated, or how accurately. Still, you can certainly perform recognition tasks with highly stylized approximations of neurons.
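
For a sense of how stylized those approximations can get, here is the textbook toy neuron: a weighted sum pushed through a hard threshold. (Again, a generic sketch; real artificial networks dress this up considerably.)

def toy_neuron(inputs, weights, threshold):
    # All the electrochemistry reduced to: sum the weighted inputs,
    # fire (1) if the total clears the threshold, stay silent (0) if not.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Even this cartoon can do simple recognition, e.g. act as an AND gate:
print(toy_neuron([1, 1], [0.6, 0.6], 1.0))  # 1: both inputs present
print(toy_neuron([1, 0], [0.6, 0.6], 1.0))  # 0: one input missing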

It’s also not clear that we need a particularly accurate simulation of the brain to create a mind. Tim:

A good analogy here is weather simulation. Physicists have an excellent understanding of the behavior of individual air molecules. So you might think we could build a model of the earth’s atmosphere that predicts the weather far into the future. But so far, weather simulation has proven to be a computationally intractable problem. Small errors in early steps of the simulation snowball into large errors in later steps. Despite huge increases in computing power over the last couple of decades, we’ve only made modest progress in being able to predict future weather patterns.

Simulating a brain precisely enough to produce intelligence is a much harder problem than simulating a planet’s weather patterns. There’s no reason to think scientists will be able to do it in the foreseeable future.

It’s really hard to predict the exact sequence of a particular weather pattern. But modeling a plausible weather pattern is pretty easy. And neural systems seem to be able to operate in a really huge variety of configurations. Not only is every person’s (presumably) conscious brain different, but they keep operating in mindlike ways after suffering severe alterations to their performance characteristics. Drugs! ALS! Concussions and lesions! Lobectomies, for pete’s sake! Not to mention the seeming likelihood of many or most animals having substantial phenomenal experience despite wildly varying biologies. Once we figure out how to do it, there will probably be a considerable fudge factor in building minds.

Tim’s fourth argument concerns the importance of human relationships. This is fair: there’s good reason to think human social behavior is one of our most evolved and convoluted systems, and one that a machine might have a hard time figuring out quickly. But although our behavior is complex, it’s also fairly predictable — we have already systematized a surprisingly large amount of this knowledge in fields like marketing and political campaigning. There’s every reason to think that a machine intelligence that’s immune to fatigue, moodiness, territoriality, jealousy and other human social impairments could master relationship-building.

Tim’s final point is an argument about the falling value of intelligence in a world where superintelligent machines proliferate. I’m not sure it makes a ton of sense to treat cognition as a simple commodity, but even if it does, this ignores the potentially trivial relative value of human minds in such a world.

It’s important to remember just how lousy our neural hardware is. When a neuron fires, it does so by opening channels along its axon, which allows an uneven gradient of sodium and potassium ions (maintained by a ceaseless cellular pump) to equalize between the inside and outside of the cell. This opens adjacent channels in turn, and the signal travels down the length of the axon, stimulating the release of neurotransmitters at its synapses. The whole thing takes about a millisecond, which is several million times slower than a transistor. That our brains work despite this sluggish mechanism is a testament to the power of parallel computation, of course. And neurons perform analog operations (summing excitation, for instance) that would require many transistor switchings to simulate. And there are something like 86 billion neurons in the human brain.

So simulation isn’t easy, exactly. But if a workable hardware configuration can be found, one can imagine scaling scenarios that transcend biological limits on sentience very quickly indeed. If your neurons had the switching performance of contemporary transistors, you could plausibly experience two lifetimes in an hour. You’d also be able to throw away a bunch of subsystems devoted to autonomic processes and other unnecessary biological and social functions, simplifying the problem further.
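
That claim is just arithmetic. A quick back-of-envelope version, assuming an 80-year life and only a millionfold speedup (millisecond neurons versus nanosecond-or-better transistors):

SPEEDUP = 1.0e6                    # assumed: a conservative millionfold gain
lifetime_hours = 80 * 365.25 * 24  # an 80-year life is roughly 700,000 hours

hours_per_sped_up_lifetime = lifetime_hours / SPEEDUP
print(hours_per_sped_up_lifetime)      # ~0.7 wall-clock hours per lifetime
print(1 / hours_per_sped_up_lifetime)  # ~1.4 subjective lifetimes per hour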


I have no idea if we’ll build machine intelligences. I think it’s pretty likely that consciousness is an epiphenomenon free-riding on top of a powerful neural network, and that some aspect of causally isolated panpsychism is a basic component of the universe. But there’s a mystic in me that wants the real source of our minds to retreat away from our plausible guesses.

I think he’ll be disappointed, though. If we do create a thinking machine, it’s hard to imagine what it will want or do. It will be designed by our hands, not by evolutionary processes. So I don’t think there’s any particular reason to expect it to want to reproduce or grow or consolidate power or even avoid death. Perhaps it will have no volition at all.

But if it does constitute a conscious being in a way that we can relate to, I think we should expect to be surpassed by it pretty quickly. Whether that presages extinction, irrelevance or transcendence, I couldn’t say. But it’s certainly going to be a big deal.

arduino class notes

For the last four weeks I’ve been teaching an Intro to Arduino class at Sunlight. It’s been fun! I’m hopeful that the participants have gotten a new hobby out of it. Being able to translate your software skills into the physical world isn’t exactly sorcery, but it’s the next best thing.

The notes are available at the links below. And the class Github repository can be found here.

It’s safe to say that this curriculum isn’t too different from other Arduino classes. The extent to which it relies on the sample code that ships with the Arduino IDE is proof enough of that. But in my experience the hardest-won pieces of knowledge in any technical hobby are the bits of folk knowledge that don’t rise to the level of Timeless Principle. What vendors have the best deals? What’s the name of that kind of connector? Which stuff do I really need to know, and which stuff is just there because the instructor thinks it’s good for me?

I tried to focus on these questions in the notes attached to these slides. Hopefully you’ll find them useful! Based on student response, I think that lesson 3 needs some touch-up work for non-Python users, but otherwise they’re probably in pretty okay shape.

advice for an aspiring programmer

Last week we interviewed a candidate who we really liked but who was much too green. He asked for some advice, so here’s what I wrote — might as well put it online. Hopefully it’s a little more specific and opinionated than these things tend to be.

—–

It was a real pleasure to meet you, but your instincts are right: at this point we have to invest in people with a bit more experience under their belts. I do want to stress, though, that your enthusiasm and interest in software engineering came through clearly, and made us all enthusiastic about the developer you will no doubt become.

Toward that end, let me offer a little more advice than I usually put into these sorts of emails:

  • Pick a technology and invest time in it. There is tremendous value to understanding the repetition of patterns across engineering domains, but you need to gain deep expertise in one before you can do so effectively.
  • I’ll be more specific: pick one of these technologies — Ruby, Python, Node/Javascript. All have vibrant open source communities from which you can learn a lot for free. All have bustling job markets. All have bindings in a huge variety of domains. All are abstract and widely supported and will spare you many of lower-level languages’ headaches. All have robust web frameworks. Personally, I’d suggest Python, because it is the most stable and widely supported. It’s everywhere — it is Google’s noncompiled language of choice, for instance, and widely used in scientific computing and a huge number of other areas. But its community is less fun and accessible than the others, and it’s more sedate. The others will take you on a wilder ride, but you will probably have to learn things a few times as the community changes its mind about how to solve a problem. This is extra true for Node and less so for Ruby — which reflects each community’s age.
  • There is a premium for mobile dev work, but I wouldn’t invest in that right now because it’s too specialized to be a great way to learn. Also iOS will be in turmoil thanks to Swift, and Java dev is a drag outside the genuinely-exciting opportunities of Android.
  • Focus on the web and the key tasks associated with it. Skim the topics that other languages’ web frameworks cover — they all solve the same problems in slightly different ways. Invest a little time in learning jQuery — being able to build out web templates is a very plausible starter job, and one you can get good at fast. Also, make a point of learning regular expressions and the network libraries and functions necessary for using APIs. (There’s a small sketch of what that looks like at the end of this list.)
  • You do not need to know much about data structures, compiler design, sorting algorithms, recursion or most of the other things that they teach you in a CS program.
  • Microsoft technologies can earn you money but will never fully integrate with the world of open source software, which is where the best engineers and most exciting projects exist. I have written Visual Basic for a living; I don’t think you should write any more of it. The .NET frameworks are okay but basically a less-open version of Java. Everyone hates Java.
  • I wrote PHP for many years professionally and still think it is a cheap, useful tool. It gets zero respect in programming circles, though — I would not suggest spending more time learning it until/unless you have mastered something more prestigious and just want it for quick personal projects.
  • You should probably learn with a good text editor (but not an IDE) and the command line as your primary tools. On OS X I like Sublime Text 2. Speaking of which: you should be developing on OS X or Linux (people around here tend to favor Ubuntu or Mint). If you’re on Windows now this will be painful, but you will never fully connect with the open source world and its idioms unless you get used to the *nix command line interface.
  • There is no substitute for working with engineers who are better than you are. This is tough until you get yourself hired somewhere, though! On the far end there are code bootcamps, but those cost money. On the near end there are technical meetups — shop around and find one that seems technical enough to teach you things. Contributing to open source projects is a good idea, too – writing an IG scraper for Sunlight might be an approachable task (he said selfishly). Online tutorials can take you a long way if you put in the time.
  • Get active on Github! Follow how people like Eric Mill (@konklone) and Tom Macwright (@tmcw) and Josh Tauberer (@govtrack) do their work. Recognize that filing tickets is a valid way to contribute, as long as they are well-informed. It doesn’t have to all be pull requests.
  • Master the art of googling for error messages. Using search engines, Stack Exchange, mailing lists and IRC properly to uncover unknown answers is maybe the most important skill in real-life programming.
  • Once you identify superstar programmers, follow them on Twitter or their blogs. The writing of people like Ian Bicking will get you familiar with the cultural context surrounding your programming language of choice. Speaking of which: conferences can be pricey but once you’re ready they can be a really good way to learn — if you pick the right one. Pycon is excellent. I know less about the other languages’ marquee cons.
  • Spend some time reading about diversity in technology. The situation is not good, and a lot of people are working very hard to change it. This is a huge topic of discussion right now and you need to be able to talk about it intelligently.
  • If someone mentions linked data or the semantic web and they have never held a job at Google, assume they are about to waste your time.
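
To make that point about APIs and regular expressions concrete, here is roughly the shape of a starter task in Python. The endpoint and field names below are made up, just to show the pattern; any JSON API you care about will look broadly similar.

import re
import requests  # third-party: pip install requests

# Hypothetical endpoint and parameters, purely illustrative.
response = requests.get("https://api.example.com/bills",
                        params={"q": "transparency"})
response.raise_for_status()

for bill in response.json().get("results", []):
    title = bill.get("title", "")
    # Regular expressions earn their keep on exactly this kind of cleanup:
    match = re.search(r"\b[HS]\.? ?R?\.? ?\d+\b", title)
    print("%s -> %s" % (title, match.group(0) if match else "no bill number"))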

There! I think that’s all the advice I can come up with for someone in your shoes. Ask me questions when you have them. And good luck.

the thing about the Internet of Things

Wired makes a yeoman’s effort at turning a basically boring Pew report about the Internet of Things into something worth wringing your hands over. If you actually read the report, the experts seem much less worried (and quite a bit less compelling) than Wired wants us to think.

Partly this is because only a few of them seem to know much about it. There are a lot of very impressive people on the list of respondents, but at a glance they seem to mostly be drawn from the Internet’s Elder Statesperson class. And this IoT business has less to do with the internet than the name implies — it’s really about hardware, sensors and microcontrollers. So we wind up with some warmed-over and implausible futurism from the guy who runs the Webbys.

I think the milquetoast ambivalence flows from this: we understand what we’re facing. We’ve been at this industrial revolution business for a while now, and it’s mostly apparent how it works. We’ve all lived through the advent and democratization of various manufactured technological conveniences, and we are confident both of their steady pace and their limited capacity for delivering transcendence. Consumerism: we get it.

This was not the case with software! Infinite abundance, communication and human potential — you could tell a really amazing (and, alas, often overblown) story about what this would mean for all sorts of social institutions. Something truly new was happening, emergent forces were emerging, and nobody could tell how it was going to end. It was unclear why your boss was paying for you to get drunk at SXSWi but he was and it was awesome and everything was surely about to change.

This is not the case with the Internet of Things. With the exceptions of miniaturized-yet-affordable PCB manufacturing and solid state accelerometers, most of the central technologies have been achievable for a while. They just haven’t been used. For example, the idea of a home thermostat you can set from your office is sort of neat, but such products have existed for decades. Why are we excited about this now? Well, prices have dropped, the gadget-purchasing habit has been solidified, and control interfaces have improved (thanks, smartphones). Ubiquity is newly practical.

But we still don’t have many really compelling stories about what it’s all going to do for us. The benefits to these use cases are known, or at least can be imagined. It’s nice to have a door open itself for you or an alarm clock that knows when you’re sleepy, but how much is it really worth? We’ve been able to network appliances for quite a while. We did it a long time ago for cardiac monitors in hospitals, because in that application it’s worth the money. Giving your fridge an IPv6 address? We can certainly do it, and we probably will. But don’t kid yourself about the scale of the benefits that will flow from this innovation.

(One exception: the quantified self movement *does* have a bunch of compelling stories about gigantic improvements to health that careful self-measurement can deliver. Given the enormous amounts of money we invest in not-very-effective healthcare interventions, it seems safe to say that if this idea could deliver a fraction of what it’s being used to promise, our failure to implement it already would represent one of the greatest market failures in history.)

I love playing around with hardware, so don’t mistake my skepticism about IoT futurism for a lack of enthusiasm. Filling the objects around us with dancing grains of sand that we’ve etched with runes and whispers of ions, so that they might ceaselessly observe and manipulate the environment for our convenience: I think that’s a lovely thing for a species to do, and often a pretty fun art project. And I suppose emergent network effects are always possible. Seems a little far-fetched to me, though, at least so long as we’re mostly talking about thermostats and pedometers. But my imagination is admittedly terrible.

I’ll boil it down to a few things, I guess:

  • The adoption of ubiquitous computing is a function of physical technology’s ever-falling price versus the benefit it confers. There are many applications enabled by lower prices that are just now achieving market viability. But that’s because their benefit is meager, not because the tech was impossibly pricey. This may not be universally true, but it’s probably true for the applications currently being used to sell the phenomenon: quantified self and home automation.
  • Concerns about maintaining the software in a zillion different devices seem legit (though people are underestimating just how awful embedded tech can get away with being, and overestimating both the incentives facing bad actors and the threat surface present on devices that are designed to be *extremely* limited). Partly for this reason, functions will continue to accrue to your phone whenever possible (we’re running low on compelling sensors at the moment, but IR photography and laser rangefinding might sell some iPhones). Some will try to achieve a profitable, lock-in-driven business through proprietary solutions to this headache, but I doubt they’ll succeed.
  • The most interesting questions surrounding these issues concern transhumanism.

UPDATE: You know, I did leave off one huge thing — the sharing economy (with apologies to Tom Slee). Uber, Bixi, AirBnB — using technology for access control really is only recently possible, thanks to the evolution of IT payment and identity systems. And it really can make our collective use of property hugely different and better.

a man, a plan

Panama: pretty great. The Panama City aesthetic is the first thing that strikes you on the drive from the airport: chrome and colorful and BIG, with absurdly distracting animated LED brake lights sprinkled throughout. Optimus Prime was designed by a Panamanian, I’m sure of it.

They are mostly not kidding around about the whole not-speaking-English thing, but otherwise I think you can safely count Panama as an absurdly American-friendly travel destination. This is probably pretty obvious — after all, we’re responsible for spurring Panamanian independence from Colombia, they use American currency, and the School of the Americas boasted both Panamanian facilities (now a resort!) and graduates.

Honestly, what’s most striking is how benign this history of meddling currently seems. At the moment, at least, the country is prosperous, proud and happy. Panama City is an impressive, cosmopolitan place. A tourist’s perspective can’t be trusted, and we didn’t venture toward the more dangerous Colombian border, but driving through a large chunk of the country without seeing any real human suffering must count for at least something. The experience made me feel uneasily comfortable with American hegemony — though it was well timed for our burgeoning Cold War resumption, I suppose. Probably I’ll eventually be deeply embarrassed to have thought this, but for now: things seem like they’ve worked out.

Otherwise? The canal is pretty cool. Santa Catalina is a lovely little surf town. The coffee is sadly not as good as Panamanians think (mostly because they don’t brew it strongly enough), but the hats seem legit. Panama City is very impressive, and Casco Viejo is particularly lovely. Boquete was a lush respite from the heat (though its animals failed to cooperate with our hiking plans). We fucked our rental car up pretty good. All in all, a great vacation.

bracketography

Reentering an NCAA bracket across multiple sites drives me nuts — it’s an obvious data format problem that could be solved very simply.

I used to think the incompatibility was deliberate, designed to capture audiences and keep them staring at a given sports site. Now I’m not so sure. The bracket functionality doesn’t try to extract all that much value from us, to be honest — these things are sponsored, sure. But there’s a definite whiff of sports fan developers taking advantage of principal agent dynamics to simply build sportsy things.

But even if the incentives for compatibility aren’t completely backward, the mayfly lifespan of bracket sites makes coordination difficult. Last year, after the tournament ended, I spent a few minutes emailing and tweeting at developers who seemed to have worked on the highest-profile bracket sites, but I received no responses.

So for now, bracket compatibility remains a pipe dream. It’s a shame, though, because the problem is a simple one. I used to think about this in terms of JSON data formats, files that you would download and upload between sites. But it can be handled much more efficiently. There are only 32 + 16 + 8 + 4 + 2 + 1 = 63 games, after all: a 64-team field means 32 first-round games, and each later round halves the count (let’s ignore the play-ins for a moment, since most bracket sites do). Each game has a binary outcome. That’s 63 bits of data.

Decisions about encoding that data can be made arbitrarily; they just have to be agreed upon. Getting the order of games correct, from 0 to 62, is essential. It doesn’t really matter how you do it, but here’s one scheme that would work.

For each region (ordered alphabetically, A-Z); then for each round (low to high); assume the highest-ranked seed wins — no upsets — and assign games consecutive numbers, from highest seed to lowest. Tiebreakers fall back to the alphabetical region name ordering.

You now have 63 ordered slots to fill with ones and zeros. 1 encodes a win for the better-seeded team; 0 an upset. In cases of identical seeding, 1 encodes a win for the team from the region with the alphabetically-first name.

Here’s some Python that demonstrates how the resulting sequence of bits could be assembled and encoded into an easily transportable string:

import random, base64

def retrieve_winner(game_number):
    # Stand-in for real picks: 1 means the better seed wins, 0 means an upset.
    return random.choice((0, 1))

picks = 0
pick_bytes = []
for i in xrange(64):                 # 63 games, plus one padding bit to fill 8 bytes
    bit = retrieve_winner(i) if i < 63 else 0
    picks = (picks << 1) | bit
    if i % 8 == 7:                   # every eighth bit, bank a byte
        pick_bytes.append(picks & 255)
        picks = 0

print base64.b64encode(''.join(chr(b) for b in pick_bytes))

This just makes random picks, but you could easily connect retrieve_winner() to a web interface. The output is something like “IXNcAyp72iE=” (that trailing equal sign can be dispensed with), which is easily portable through email or twitter or copying and pasting. If you want it to be easily readable over the phone, you could change that “b64encode” to “b32encode” and get an all-caps string like “EFZVYAZKPPNCC===” — that’s only two meaningful characters longer (again, you can chop off the =’s). Bracket tiebreakers — usually the total score of the championship game — could be added for a cost of a couple more characters.
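
Going the other direction is just as short. Here is a sketch of a matching decoder, in the same Python 2 vintage as the snippet above, which a participating site could use to import a pasted-in bracket string:

import base64

def decode_picks(encoded, num_games=63):
    # Restore any stripped '=' padding, then unpack the base64 string back
    # into an ordered list of 0/1 outcomes, dropping the final padding bit.
    encoded += "=" * (-len(encoded) % 4)
    raw = base64.b64decode(encoded)
    bits = []
    for ch in raw:
        byte = ord(ch)
        for shift in range(7, -1, -1):
            bits.append((byte >> shift) & 1)
    return bits[:num_games]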

In conclusion, I hate CBSSports.com

quis custodiet ipsos chief analytics officers?

Via my former colleague Luigi Montanez, now of Upworthy, here’s an interesting look at how the media industry is reexamining its use of analytics. The search for more meaningful measures of media efficacy is interesting in its own right. But I think the structural incentives that surround it deserve some attention, too.

It’s worth reflecting on the metrics that have fallen out of style: most notably conversions (how often an online message leads directly to a measurable action) and impressions (how often an online message is seen, perhaps affecting the viewer unconsciously). For the past few years there has been an outsize focus on the level of social activity spurred by a message — though this kind of result is increasingly viewed as overvalued. In the past few months, there has been growing enthusiasm, including at Sunlight, for measures of how thoroughly a message is considered by its viewer. Upworthy’s attention minutes metric is leading the charge — unsurprisingly, given the organization’s undeniable sophistication at measuring and driving traffic.

Although it would be hard to completely deny a fad dynamic to these successive waves of focus, I think efforts to find better analytics have been driven by good intentions. But I also think these efforts may be leading us to a different future than many imagine. For instance, I wonder if my colleague Eric is mistaking a feature for a bug here:

I can see why Eric thinks black-box metrics would be bad. But the bespoke nature of the new, increasingly in-house analytics trends carries advantages for those creating them. And this isn’t the first time that content creators have evolved toward capturing the mechanisms by which their own success is measured. A little over three years ago, before many of the aforementioned analytics trends occurred, I wrote this:

[B]y all accounts online advertising doesn’t work very well. You can measure whether someone clicks on an ad, and often whether they buy something after that click. But it turns out they rarely do those things. So businesses aren’t willing to pay very much for ad space on websites.

Is it really a coincidence that the advertising medium with the best instrumentation also appears to be the least effective? I suspect it’s not. It may be that ads never worked as well as the industry had told us; or it may be that the eyeballs/clicks/conversions funnel is a naive conceptualization of how the system works. Either way, Google has succeeded by giving advertisers what they think they want, which is analytic tools that seem to reveal that the whole enterprise is horribly ineffective.

I think the push for better tools and more efficient ads is basically a race to the bottom. In fact, less perfect instrumentation might allow the ad industry to capture a bit more revenue from business thanks to decreased efficiency.

The lure of incomparability is very strong. Forget Google AdWords for a moment. Big ad buys are still largely arranged by salespeople, working on commission, making phone calls. And how could it be otherwise, for people in the business of convincing other people? Having objective, universal measures of efficacy is not helpful to that kind of endeavor. Much better to have a measure that works for you, that people are excited about, and that you control. It could just as easily be manipulable circulation numbers as a boutique web metric.

This case can be overstated. As I’ve said, I think people are largely working on these problems in good faith — particularly at outfits like Upworthy, which focus on a social mission; or the journalists I know who have left comfortable jobs because they care about whether their work affects the world.

But the incentives for a metrics Tower of Babel are real. To some degree, they’re even admirable, insofar as they’re driven by varying conceptions of success. Is my goal to make my audience think deeply, talk loudly, or spend freely? All of these can change the world for the better; ad-buyers’ temptation to ask which is best can reasonably be resisted, even if I do a little cherry-picking to make my case.

Besides, if one is prepared, for a moment, to disregard the capacity for world-improvement that widely-viewed and ethically correct publications represent, there’s no real problem here. Advertising often has strongly positional aspects, determining who will come out on top but not the overall level of welfare (a world in which Pepsi is the number one cola may strike some as more dystopian than it does me). Not only that, advertising is in some ways a force that directly opposes human agency — it’s designed, quite explicitly, to turn dollars into altered desires and behavior. I have limited enthusiasm for the kind of improved instrumentation that might let us hone that weapon’s edge even further.

In its most benign form, advertising is a tax on industry that flows according to influences too numerous to understand. There’s an appeal to the idea of rationalizing this process, of making it measurable, quantitative and objective — making it legible, as James Scott might put it. It seems like the result would be more fair. And admittedly, the human systems that make up the alternative are not fair: they’re sexist, racist and elitist.

But I suppose I’m optimistic that those systems don’t have to stay that way. And keeping advertisers confused about what they’re buying might preserve room for some wonderful things. So three cheers for analytic innovation — even if the innovators aren’t wholly aware of what they’re doing.

Flappy Bird and the case for fads

In a tab not far from this one, a small bird orbits an 8-bit Earth on a ceaseless elliptical. I put him there, and feel a certain pride about it. You should launch some birds, too, if only to remind yourself that physics is pretty weird.

Too much has been written about Flappy Bird, but I’m going to pile on anyway: it reminds me of a conversation I’ve had with Kriston (and more recently John Bergmayer). Kriston was complaining about some not-that-great book that was sucking up a ton of public attention. These people could be reading the classics! he said. Or just last year’s much-better crop of novels!

I agreed with him about the objective merits of whatever book it was, but I stuck up for fad-chasing. There’s something great about having everyone settle on a single conversation for a week or two, applying all their capacity for inventive criticism, clever jokes and feedback loops of enthusiasm. Faced with exile on a desert island, I could assemble a media library that was very self-edifying. Faced with participation in culture, I’m happy enough to watch the new season of House of Cards even though it’s sort of garbage. I keep an eye on new album releases for the same reason, even as experience makes each band’s influences and lack of invention clearer.

Flappy Bird is compelling for a number of reasons, foremost among them the narrative surrounding its author and the ineffable appeal of a game with neurologically agreeable physics. But I’m also really enjoying it as a cultural rallying point: the aforementioned orbit game, the MMO, the essays.

Admittedly, this is because, so far, the conversation is mostly among people who enjoy essays and indie games — for me, this is a comfortably skintight demographic. One doesn’t have to look far to find other, grosser avian videogame phenomena.

But for now, and maybe for the rest of its run, it’s something everyone can talk about.

less horrible still!

I’m almost done fiddling with it, I think. Please excuse the infinite scroll effect on the photos. I know it’s tacky.

I will mention one other thing: if you scroll allllll the way down, there is now both a search box and an email subscription option. Since even fewer people use RSS these days than before Google Reader died, it might be of interest to this blog’s profoundly modest readership.

a less horrible theme

Though still quite horrible. I just couldn’t stand the old one anymore.

I’ve only just begun messing around with customizing this one, so apologies for its work-in-progress nature. I suspect nobody really feels too strongly about it, though.