I still don’t like Sleigh Bells

Kriston wants me to believe that Sleigh Bells is good. To wit, he shares this video:

So! They’ve dropped the irritating audio clipping, which I found physically unpleasant. But this is just as gimmicky. Have a look at the waveform:

sleigh bells waveform

The Sleigh Bells audio has been compressed to hell and back. By way of comparison, here’s the waveform captured from a YouTube video of Back In Black, which we can hopefully agree is a not particularly sedate song (in order to be more than fair, I’ve tried to crop this to include both more time, and portions of the song that extend beyond the admittedly stuttery intro):

Back in Black

Overcompression is standard operating procedure for recorded music these days. But it’s also a cheap trick that makes things seem louder. That’s fine for grabbing attention, but in the case of Sleigh Bells it’s the second time they’ve made a play at the same gimmick — and this is a far less daring move than the audacious/unpleasant clipping of the last LP.
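For the curious, the effect is easy to quantify: a heavily compressed track has a low crest factor, the ratio of its peaks to its average level. Here’s a toy sketch in Python, with a sine tone standing in for dynamic audio and a tanh curve standing in for the limiter (none of this is the actual Sleigh Bells signal path, obviously):

```python
import math

def crest_factor(samples):
    """Peak-to-RMS ratio: high for dynamic audio, close to 1 for a brickwalled waveform."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

# A pure 440 Hz tone stands in for dynamic, uncompressed audio.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]

# Driving the signal into tanh and renormalizing mimics heavy limiting:
# the peaks stay put while the average level climbs.
limited = [math.tanh(10.0 * s) for s in sine]
peak = max(abs(s) for s in limited)
limited = [s / peak for s in limited]

print(round(crest_factor(sine), 2))     # a sine sits near 1.41
print(round(crest_factor(limited), 2))  # the limited version is much closer to 1
```

The solid slab in that Sleigh Bells waveform is just what a crest factor near 1 looks like on screen.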

I think the way to understand this is by way of analogy to food. There are certain reliable levers that chefs can lean on to stimulate our pathetic simian brains. More salt/more fat/more sweetness/more umami. Any of these will endow food with a greater valence. That’s not a good or a bad thing, necessarily; it can be deployed artfully by skilled gastronomists, or it can be hammered home via a bag of Sweet Chili Doritos that’s been crammed full of sodium, MSG and corn syrup solids.

Loudness is just the same. The louder it is, the more fervent it is; the more raucous the party, the more urgent the emotion, the more outrageous the rebellion. And Sleigh Bells is indisputably making their name by being the loudest band around.

So I leave it to you: is Sleigh Bells a Ferran Adrià, or are they a Frito Lay? Tedious pop historians will adjudicate this, but based on the simplistic lyrics, repetitive hooks and increasingly well-worn playbook, I know where I’m placing my bet.

Artomatic 2012

I participated in Artomatic in 2009, making a hurried tech piece that ill-advisedly involved audience interaction, and which consequently spent most of the show broken. But the experience was a good one! For one thing, my project never caught fire. And for another, I learned some lessons about what makes a viable piece.

But mostly I just liked spending time at Artomatic. If you want to be a part of the show you have to volunteer for a few shifts — I vividly remember sitting on a loading dock and reading about Dr. Hilarius as I spent eight hours directing fitful traffic to the parking garage. Being a part of the event provided me with a clear view of the operation and the people behind it. There are impressive strains of bohemianism, tenacity and mutual goodwill that make the whole huge structure of the thing possible. And I like that it’s just for DC: there’s no ambition, no pretension, just an insistence on pragmatism and (often cringe-inducingly) democratic ideals. I like it a lot.

The current installment of Artomatic opened on Friday, and though I’m not participating this time (no time; no ideas), I suspect I’ll be making a number of trips to see what’s on offer. Today was the first such foray. I only made it through two floors before closing time, but I probably wouldn’t have made it through too many more anyway; the experience can be overwhelming.

I took a bunch of photos, but I have to admit I concentrated on the most absurd pieces (or those that reminded me of something else and which I wanted to share with friends). This isn’t to say that there isn’t anything good on offer (though it is safe to say that such pieces are in the minority). For instance, I enjoyed paintings by Bob Aldrich, sculpture by Julia Bloom and some CNC-fabricated furniture by Ryan McKibbin (all of this was on the seventh floor, if you’re curious to seek it out).

But you don’t want to see my crappy iPhone photography of decent-to-good art, do you? What would be the point? Much more interesting are the pieces that seem strange or inexplicable. Am I a jerk for thinking so? Probably. I don’t just mean to gape, though, or to belittle the artists. In part, I just like being reminded how different people are; how much their tastes and interests vary from my own; and how courageous so many of them are in exposing their idiosyncratic passions to the world. Say what you will about a crystal-covered painting of Oprah: it’s a braver thing to display than the bloodless wood and reception bells I chose to exhibit.

Anyway, here are some photos. I’m sure I’ll be adding to them soon.

unsolicited advice

Will Wilkinson is too kind to me, but too cruel in general:

[The] hyperventilating false drama about never-delivered transformative change is by no means unique to the tech beat. Here on the politics blogs, we’re only too happy to remind our readers that every coming election is the most important election in a generation, that the fate of our civilisation depends upon which of two barely discernible politicians’ cronies get paid. If we can’t generate a narrative with live-or-die stakes out of meaningless developments in public-opinion polls, then we’ve got nothing worthwhile to offer. Reflecting too often upon the ultimate triviality of almost everything we write about does no good for technology or politics writers, or for their readers. The illusion that the next thing will be truly meaningful has always meant more to us than the reality of the next thing. I agree with Mr Lee that there is something quite sad in the way Mr Madrigal, after having discovered that he has been reporting on nothing of significance, does not then go on to draw the well-warranted conclusion that he has wasted some of the best years of his youth foolishly yammering on about ephemera, but instead doubles down and declares “we all better hope that the iPhone 5 has some crazy surprises in store for us later this year”. But it’s only sad because life is sad. Really, why not roll the rock back up the hill?

I am rarely out-gloomed, but I think this is one such instance. So let me present a case for technology being meaningful. I think it’s possible! Anyone who knows me can tell you that, contra my somewhat embittered bloggy pronouncements, I love technology. I mess around with Arduino on weekends; I obsessively amass, modchip and then fail to actually play game consoles; I spent my holiday building a programmable array of Christmas lights; and I can put my hand on a Digikey packing slip without leaving my bed (though this last credential is perhaps as much about messiness as it is about geekiness).

The point is that I believe in this stuff. Information technology, in particular, is incredibly powerful and democratically accessible, and I genuinely think it can improve our society. When you see me getting upset about the tech industry, it’s because I feel that others have lost sight of this. They’re making this inspiring thing I love into a silly business school game, or they’re making ignorant promises — on my behalf, it feels like! — about things they don’t understand and which won’t ever come to pass. Loudmouths are distracting from good work done humbly. Fuck those guys; I hate ‘em. I wish they would shut up and go away. But since they won’t, we might as well get on with things.

If you’re someone with technical skills, hopefully you will prove to be better at ignoring those people than I have. Aside from that, I’d like to talk about the ways that I feel a career making technology can be meaningful. Because I really do believe it’s possible; I would hate the people who know me, who work with me, to read this blog and conclude that I feel otherwise.

Not, mind you, that your job has to define you. There’s nothing wrong with doing an honest day’s work and coming home to enjoy your family, or partner, or dog. Pick up a hobby. Enjoy your vacations. In a few short decades you will only exist as the memories of your loved ones. A few more and you’ll be nothing more than a couple of kilobytes in the Mormons’ genealogical databases. I wish I had a better deal to offer, but by all accounts history is relentless, and it seems assured that rocking back and forth muttering/tweeting about “innovation” and “disruption” will be no charm against it. The important thing is to try not to waste the time you have on stupid bullshit.

I should warn you: this will be grandiose and sappy. To wit:

Improve the World

Yes, the hi-tech, still-quite-expensive things that you build will mostly be used by rich people. That’s just a for-now thing, though. Smartphone adoption is already better than home broadband penetration. Speaking very conservatively, in two generations, everyone in America will be using this technology. In four, I’d bet on everyone in the world using it. And in the meantime, you can push on the decisionmakers. Correcting asymmetries of information can ameliorate asymmetries of power, despite the occasional troublingly counterintuitive result. Look at what Public Laboratory is doing: democratizing technology to make it possible for ordinary people to monitor and — hopefully — legally defend the quality of their environment. I’m admittedly biased, but I find their work incredibly inspiring.

A lot of efficiency gains are made possible by better information — dynamic energy pricing systems, car- and bike-sharing fleets, programmable thermostats. Information technology has a real role to play in keeping the earth habitable.

But you don’t have to know anything about IR filters or weather balloons or Arduino to make a difference. Designing a webform that serves the needs of some fraction of a social worker’s clients, freeing resources for others: that’s work that isn’t flashy, but is truly important. By way of example, my friend Chris helped build a clearinghouse of performance data for the microfinance sector, and though I know the day-to-day development experience was nearly indistinguishable from any other CMS-project-hell, it still seems to me a very fine thing to have done. I’m sure that toiling on the EHR problem is even more mind-numbing, and yet it’s unquestionably of huge potential importance. Writing a line of code can feel very distant from the act of directly alleviating human suffering, but that distance is shrinking, and will continue to.

Create Knowledge

The callowness and innumeracy of those promoting the Big Data brand almost defies belief, but (I should remind myself more frequently) it’s important not to let this distort your perspective. Yes, there are dopes who don’t understand that a properly selected sample of their (inevitably clickstream or social media) data could get them the same “insights” (always insights) as their massive Hadoop infrastructure. Plus it would let them use scientific-looking error bars, which I bet they would enjoy.
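To make that concrete, here’s a minimal sketch: a random sample of ten thousand rows recovers the same headline number as the full million, error bars included. (The exponential “clickstream” values are invented for illustration; no Hadoop was harmed.)

```python
import random
import statistics

random.seed(0)

# A million fabricated "clickstream" values stand in for the massive dataset.
population = [random.expovariate(1 / 40) for _ in range(1_000_000)]
full_mean = statistics.fmean(population)

# A modest random sample recovers the same headline number...
sample = random.sample(population, 10_000)
sample_mean = statistics.fmean(sample)

# ...and comes with an honest standard error attached.
stderr = statistics.stdev(sample) / len(sample) ** 0.5
print(f"full data: {full_mean:.1f}   sample: {sample_mean:.1f} +/- {1.96 * stderr:.1f}")
```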

But there really are problems in need of solving which are bigger than human cognition. The gulf between the people who think their FitBits will extend their lifespan and the people working on actual computational biology problems is vast, but those willing to traverse it should be celebrated. There are archives to be digitized, regressions to be run, extraterrestrial radio signals to be processed. There are more disciplines than I can imagine that could make use of our skills if only they were introduced to them.

Make Art

All of this stuff is changing us, and we’re going to need to spend some time figuring out how — particularly as the energies, quantities and general magnitudes of the things we can manipulate grow ever more threateningly huge. Somehow we’re going to have to give this old monkey brain the slip.

That would be the pragmatic case, but maybe it’s foolish to try to mount one. What better thing could there be to spend your time on than making beauty? Besides, you’d be hard-pressed to read much of rhizome.org or the (now-defunct) New Aesthetic Tumblr or the increasingly philosophically-minded indie game scene and not come away convinced that a bunch of exciting, fast-moving (and yes, somewhat insufferable) conversations are reaching a crescendo right now. It’s getting to be the part of the party where you have to shout to be heard, and either everyone will start to dance or there’ll be a fight or we’ll get up on the roof. Something interesting is sure to happen — it probably already is, in fact.

Try, At The Very Least, Not To Hurt Anyone

There are a few subdisciplines that you should probably stay away from. “Neuromarketing,” Zynga-style games, Klout scores and other algorithmic approaches to eliminating human agency, dignity and/or equality strike me as basically evil, and though the trend they represent is probably unstoppable, I sure wouldn’t want to be associated with it. Ditto becoming one of the quants designing the HFT engines of tomorrow, or one of the parasites that make their living off of SEO.

On the more benign/less high-skill end of the spectrum, coupon sites are starting to look less like a positive-sum marketing interaction and more like a system for skimming small businesses’ revenue. This model has been deployed to arguably good effect in the past (newspapers! Gmail!), but this latter phase seems to merely be subsidizing my fellow yuppies’ lifestyles in a sort of bizarrely regressive retail sales tax scheme. If you have the economic freedom to choose, I’m confident that you’ll be able to find something more productive to do with your time and talents.

If You Absolutely Must Play The Startup Game

Understand that you’re unlikely to come up with a million-dollar idea solely by sticking together free software like so many Legos, hoping that lightning will strike and you’ll wake up to a valuable population of users who are now pleasantly locked into your product by network effects and/or transition costs. Sure, it happens — for now, Instagram still counts as an example rather than a punchline — but a lotto ticket offers only slightly worse odds, and requires you to spend much less time fiddling with Keynote. It’s simply too easy for other, smarter people to have the same idea and build it. Competitive markets are good for consumers and bad for entrepreneurs.

But if the startup dream compels you, I would suggest two things.

First, realize that ICT makes information cheaper. That’s it, really. If you want to earn money with this technology, you should look for tractable problem areas where information is still expensive.

Second, connect your project to the logistical nightmare that is the real world. Ship physical goods, install a bikesharing fleet, go meet with the bureaucracy to get the data you need for your business intelligence site. These things are hard to do without leaving the house (or at least picking up the phone), and consequently fewer of them are being done. Another handy heuristic bucket: pursue ideas that require capital for things other than loft space, foosball tables and your bar tab. The low-hanging fruit has been plucked, in other words. Reach higher. It’ll certainly be more interesting, and you might even improve your odds.

I’m a Lucky Guy

I’ve already copped to not being a startup guy myself. I guess I should probably acknowledge that I’m not a particularly cheerful person, either. But while I have admittedly made some terrible decisions, my professional choices haven’t been half bad, if I do say so myself. I’m extremely grateful to have the opportunity I do: one that affords me the chance to do work that I count as meaningful across a couple of the above dimensions.

I can’t guarantee you’ll have the good fortune I’ve had in finding a fulfilling way to spend your workdays, but I do wish you luck at not wasting your time.

but I *am* super into the internet!

Yglesias is right to point out that not everyone is like us: specifically, most people are not technologically-literate yuppies with motivations for avoiding a cable TV subscription (e.g. self-betterment; vague anti-corporate resentment) that go beyond a simple cost/benefit calculation.

But! I still think he’s too skeptical about the prospects for widespread cord-cutting. Matt doesn’t link to any figures, but, contra the headline, this chart sure looks like it’s showing a downward trend to me, albeit one that’s noisy and surely confounded by other effects like slowed household formation. And there’s some reason to think that cord-cutting is still a nascent idea but one that, like abandoning landlines, will catch on once people see their peers doing it without ruining their lives.

I’ll add that my own experience doing without cable TV has been a positive one. Netflix reliably has stuff I want to watch; iTunes has delivered season passes to the new Avatar series and Deadliest Catch that, while not a steal, are reasonably priced and, aside from the slightly delayed delivery and regrettable absence of Cap’n Phil (RIP), totally acceptable. Live sports remain the real problem, of course. A few years ago, when the leagues did their business with broadcast networks that made their money on ads, this would’ve been totally solvable. Recent years’ shenanigans with cable networks and proprietary distribution channels make this situation seem a bit less hopeful, but I’m optimistic that growing consumer impatience will eventually spur lawmakers to get involved with these legally-granted monopolies and deliver the digital bread and circuses that the public rightly demands.

Finally, Matt makes a technical point:

The problem for people who do want to watch all their TV over the internet is that to provide enough video content to everyone for that to be the standard way of doing things, you’d need much more broadband capacity. And we could build much more broadband capacity, but people would have to want to buy it. And at the moment, it seems like people don’t really want to. Of course they would want to if cable television stopped existing, but all the infrastructure is already there. Now maybe aggregate population preferences will change over time. There’s certainly some evidence that they’re shifting a bit. But hard as it is for web junkies to remember, lots of people seem perfectly happy checking Facebook on their phone.

First: I do think preferences will change over time. Cohort replacement!

Second, there’s more capacity than might be apparent. The basic problem Matt’s gesturing toward is broadcast versus video-on-demand (VOD). With broadcast, any number of viewers can watch a single, synchronous stream of programming — the same amount of bandwidth is needed regardless of how many people tune in. With VOD, everyone typically needs their own stream of data, which makes scalability a problem.

But of course most cable providers now offer substantial VOD services without choking their systems. I believe that AT&T’s U-verse system actually delivers everything in a VOD-like manner, though it’s a bit hard to suss out the details. Regardless, one can easily imagine various solutions to this that take advantage of consumer predictability and caching technology. When you tune into The Voice, you could be offered the option of watching immediately for a surcharge or waiting X minutes to hop on the next every-X-minutes scheduled broadcast stream. Or your DVR could download the week’s ads every Sunday night, then stagger their distribution between pre-roll and mid-broadcast placement in order to line you up with the next broadcast stream. One can even imagine dynamic schemes where content is priced according to current network conditions and your subnet-neighbors’ current viewing habits. That would probably be economically fascinating enough that even Matt would be in favor.
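To make the arithmetic of the every-X-minutes idea concrete, here’s a toy sketch; the 15-minute interval and the arrival times are invented, not drawn from any real system:

```python
import math

def streams_needed(arrivals_min, interval_min=15):
    """Round each viewer up to the next scheduled slot; viewers who share a
    slot share one broadcast stream, so streams = number of distinct slots."""
    return len({math.ceil(t / interval_min) for t in arrivals_min})

# Ten hypothetical viewers tuning in over the course of an hour.
arrivals = [3, 5, 9, 14, 16, 22, 40, 41, 44, 59]

# Pure VOD needs one stream per viewer; a 15-minute stagger collapses that.
print(len(arrivals), "unicast streams vs", streams_needed(arrivals), "staggered streams")
```

With these made-up arrivals, ten private VOD streams collapse into four shared broadcasts, and the saving only grows as more viewers pile into each slot.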

liner notes by H. Turtledove

I’ve recently rediscovered this album, and have really been enjoying it:

Somewhere, in a better universe, the Weakerthans became America’s preferred purveyor of pretentious steampop. Colin Meloy is doing fine, almost certainly writing Objective-C at a tidy standing desk, atop which sits a coffee cup, Moleskine and carefully-chosen pencil.

‘noncommercial’ and ‘good’ aren’t the same

Matt makes a point that more people in the free software space should learn to appreciate:

Another issue raised in comments is the idea that a “fair use” by definition can’t be commercial. I was glad to see someone raise this point if only because I do wish we could re-inject more life into the commerce/non-commercial distinction for broad copyright purposes. But my goal would be to use the distinction to raise the scope of tolerated non-commercial copying, not to narrow the scope of allowable commerce. Commerce is a legitimate and important human undertaking, and the goal of copyright law should be to facilitate useful commerce. That includes preventing large-scale commercialized digital copying, but I think also means allowing commercialized sampling, quoting, and repurposing of existing material.

I made a somewhat similar point recently when talking about open data:

[...] I think it’s flatly wrong to consider private actors’ interest in public data to be uniformly problematic. We should be clear: we won’t tolerate those interests’ occasional attempts to lock public data into exclusive monopolies. I think our community has done a pretty good job lately of identifying such situations and stopping them, and of course people like Carl Malamud have been doing important work on this question since well before most of us ever heard of “open data.” But if commercial activity is enabled by data, that’s all to the good—the great thing about digital information is that scarcity doesn’t have to be a concern. Google Maps’ use of Census TIGER data, for instance, is proprietary, motivated by profit, and unquestionably a huge boon to human welfare. And the source data remains free for anyone else to use! Cutting off those kinds of uses with noncommercial licensing would be nothing more than a destructive act of pique.

This really came into focus for me when I was in Berlin for a conference run by the good folks at the Open Knowledge Foundation. I admit that before I stopped to think about it, I never found noncommercial licenses that problematic, and would casually throw them on material I produced on the web. That way our vaguely-defined communal web society (so pure and untainted by the profit motive!) could use it, but “they” wouldn’t benefit from my hard(?) work tagging photos on Flickr. Honestly, this was dumb. I was never going to put in the time to try to make money off of those photos. If someone else could do the work to make them useful to others, why begrudge them that opportunity?*

I wouldn’t go as far as Matt about the goal of copyright law being to facilitate commerce (this formulation seems to ignore the kind of deadweight losses that Matt’s writing on IP usually emphasizes). But he’s right about commerce being a “legitimate and important human undertaking.” Certainly the private sector is capable of excesses, but it’s also an incredible tool for identifying and satisfying human needs. We shouldn’t resent it out of some sort of ideological tribalism — particularly when we’re discussing digital goods, where things are rarely zero-sum and where (with apologies to Julian and Kash) the negative externalities (Mark Zuckerberg can infer your sexual preferences) are less severe than those found in the physical world (the chemical plant next door means your baby was born with fins).

* I should acknowledge that this is just a for-instance. As there’s very occasionally a market demand for unflattering photos of some of my friends by ideological press outlets, I’ve elected to keep somewhat restrictive licensing on my meager photographic output. But in less problematic cases — the code I put on GitHub, for instance — I’ve moved to open, nonviral licensing.