The Best Atlantic Technology Writing 2012
EDITORS: MEGAN GARBER, ALEXIS C. MADRIGAL, AND REBECCA J. ROSEN
The Atlantic • Washington, DC
Contents
Introduction: Shock, Stock, and the Gibson Corollary to the
Faulkner Principle
part one. Nuggets
1. What Space Smells Like
2. The Meaning of That Ol’ Dial-Up Modem Sound
3. The Phone That Wasn’t There
4. Interview: XKCD
5. eTalmud
6. Q: Why Do We Wear Pants? A: Horses
7. The Only American Not on Earth on 9/11
8. On the Internet, Porn Has Seasons, Too
9. Lana Del Rey, Internet Meme
10. Copy, Paste, Remix
11. The Sonic Fiction of the Olympics
12. Reel Faith
13. What It’s Like to Hear Music for the First Time
part two. Provocations
14. It’s Technology All the Way Down
15. The Landline Telephone Was the Perfect Tool
16. Privacy and Power
17. The Jig Is Up
18. In The Times, Sheryl Sandberg Is Lucky, Men Are Good
19. Moondoggle
20. How TED Makes Ideas Smaller
21. Dark Social
22. Interview: Drone War
23. Plug In Better
part three. Reportage
24. The Tao of Shutterstock
25. I’m Being Followed
26. Did Facebook Give Democrats the Upper Hand?
27. The Philosopher and the FTC
28. The Rise and Fall of the WELL
29. The Professor Who Fooled Wikipedia
30. The Perfect Milk Machine
31. Machine Vision and the Future of Manufacturing
32. Google Maps and the Future of Everything
33. The Never-Ending Drive of Tony Cha
34. How the Algorithm Won Over the News Industry
35. What Will the “Phone” of Look Like?
36. How Do You Publish a City?
37. Pivoting a City
38. Voyager at the Edge of the Solar System
39. When the Nerds Go Marching In
part four. Essays
40. Earth Station
41. Consider the Coat Hanger
42. Interview: Automated Surveillance
43. Communion on the Moon
44. The Animated GIF, Zombie
45. The IVF Panic
46. The Broken Beyond
47. Interview: What Happened Before the Big Bang?
48. A Portrait of the Artist as a Game Studio
49. A Fort in Time
Introduction: Shock, Stock, and the
Gibson Corollary to the Faulkner
Principle
Thank you for downloading The Atlantic’s Technology Channel anthology
for 2012. We’re proud of the work in here and hope you find stories that
you love.
But I have to admit that this project began selfishly.
I wanted to see what we’d done on a daily basis assembled into one
(semi-)coherent whole; I wanted to see how, over the course of the year,
we’d shared our obsessions with readers and continued to grope toward a
new understanding of technology.
That process really began when I launched the Technology Channel
at The Atlantic in 2010. Back then, I knew that I wanted to build a
different kind of tech site. I wanted to write things that would last. My
friend Robin Sloan, who you see pop up on the site now and again, has
a way of talking about this. He says that “stock and flow” is the “master
metaphor” for media today. “Flow is the feed. It’s the posts and the tweets.
It’s the stream of daily and sub-daily updates that remind people that
you exist,” he’s written. “Stock is the durable stuff. It’s the content you
produce that’s as interesting in two months (or two years) as it is today.
It’s what people discover via search. It’s what spreads slowly but surely,
building fans over time.”
To my mind, even the best tech blogs focus on “flow” for both constitutional and corporate reasons. They’re fast, fun, smart, argumentative,
hyperactive. Some of them do flow very well. And I knew we were not
going to beat them at that game. But stock, that was something else. Stock
was our game.
Looking at The Atlantic as a brand or an organizing platform or a
mission, I see a possibility that verges on a responsibility to do things
that resound over time. Resound. Things that keep echoing in your head
long after the initial buzzing stops. Things that resonate beyond the news
cycle. After all, we’re old! Born in 1857 out of the fires of abolitionism,
The Atlantic has survived because it’s been willing to change just the
right amount. It’s responded to the demands of the market, but never let
them fully hold sway. And in the best of cases, we changed the way our
readers thought, challenged our own convictions, and laid down some of
the essential reporting in American history. This may all sound like big
talk, but we have to own this history, regardless of what the information
marketplace looks like right now. Recognizing this history gives us a duty
to provide journalism that stands up over time—no matter how it gets
consumed.
But how to create stock in a blogging environment? It may sound
crazy as a content strategy, but we developed a worldview: habits of mind,
ways of researching, types of writing. Then, we used the news to challenge
ourselves, to test what we thought we knew about how technology worked.
Embedded in many stories in this volume, you can see us going back and
forth with ourselves over the biggest issues in technology. How much
can humans shape the tools they use? What is the relationship between
our minds and the tools we think with, from spreadsheets to drones?
What is the potential for manipulating biology? How do communications
technologies structure the way ideas spread?
Delve into almost any technological system, and you’ll see the complex
networks of ideas, people, money, laws, and technical realities that come
together to produce what we call Twitter or the cellphone or in vitro
fertilization or the gun. This book is an attempt to document our forays
into these networks, looking for the important nodes. This is a first-person
enterprise, but we couldn’t do it alone.
So, I’d like to thank the wide variety of people who have shaped the
way we think. These are some of the ideas that we’ve been trying to
synthesize, although obviously not the only ones.
To Jezebel’s Lindy West, we owe thanks for this remarkable
distillation of the technological condition: “Humanity
isn’t static—you can’t just be okay with all development up until the
invention of the sarong, and then declare all post-sarong technology to be
‘unnatural,’” she wrote this year. “Sure, cavemen didn’t have shoes. Until
they invented fucking shoes!”
To Evgeny Morozov, we owe gratitude for the enumeration of the dangers
of technocentrism. “Should [we] banish the Internet—and technology—
from our account of how the world works?” he asked in October 2012.
“Of course not. Material artifacts—and especially the products of their
interplay with humans, ideas, and other artifacts—are rarely given the
thoughtful attention that they deserve. But the mere presence of such
technological artifacts in a given setting does not make that setting
reducible to purely technological explanations.” As Morozov prodded us
consider, if Twitter was used in a revolution, is that a Twitter revolution?
If Twitter was used in your most recent relationship, is that a Twitter
relationship? Technology may be the problem or the solution, but that
shouldn’t and can’t be our assumption. In fact, one of the most valuable
kinds of technology reporting we can do is to show the precise ways that
technology did or did not play the roles casually assigned to it.
We are indebted to the historian David Edgerton for providing proof,
in The Shock of the Old, that technologies are intricately layered and
mashed together. High and low technology mix. Old and new technology
mix. The German army had 1.2 million horses in February of 1945.
Fax machines are still popular in Japan. The QWERTY keyboard appears
on the newest tablet computer. We need simple HVAC technology to make
the most-advanced silicon devices. William Gibson’s famous quote about
the future, “The future is already here—it’s just not very evenly distributed,”
should be seen as the Gibson Corollary to the Faulkner Principle, “The past
is never dead. It’s not even past.”
To Stewart Brand and his buddies like J. Baldwin, we are thankful for
cracking open outdated and industrial ways of thinking about technology,
allowing themselves to imagine a “soft tech.” They asked what it might
mean to create technologies that were “alive, resilient, adaptive, maybe
even loveable.” They did not just say technology was bad, but tried to
imagine how tech could be good. And in so doing, they opened up a narrow
channel between technological and anti-technological excesses.
From the museum curator Suzanne Fischer and the philosopher
Ivan Illich, we found ways of thinking about the importance
of technology beyond market value. What if the point of tools
is not to increase the efficiency of our world but, in Illich’s phrase, its
conviviality? “Tools,” he wrote, “foster conviviality to the extent to which
they can be easily used, by anybody, as often or as seldom as desired, for
the accomplishment of a purpose chosen by the user.” Of course, these are
ideals to aspire to. Ideals we may not even agree with or find too broad for
our taste. But isn’t it nice to have some ideals? We need measuring sticks
not denominated in dollars.
We owe Matt Novak thanks for his detailed dismantling of our nostalgic
visions of the past. His work at Paleofuture lets us imagine decades’
worth of solutions to today’s still-pressing problems. His point is not that
things are the same as they ever were; it’s the details of these past
visions that allow us to see how we’ve changed, not just technologically,
but culturally.
What does all this add up to? A project to place people in the center
of the story of technology. People as creators. People as users. People as
pieces of cyborg systems. People as citizens. People make new technologies,
and then people do novel things with them. But what happens then? That’s
what keeps us writing, and we hope what keeps you reading.
Thank you,
Alexis C. Madrigal
Oakland, California
part one.
Nuggets
1.
What Space Smells Like
MEAT, METAL, RASPBERRIES, RUM
by Megan Garber
When astronauts return from space walks and remove their helmets, they
are welcomed back with a peculiar smell. An odor that is distinct and
weird—something, astronauts have described, like “seared steak.” And also:
“hot metal.” And also: “welding fumes.”
Our extraterrestrial explorers are remarkably consistent in describing
Space Scent in meaty/metallic terms. “Space,” the astronaut Tony Antonelli has said, “definitely has a smell that’s different than anything else.”
Space, as the three-time spacewalker Thomas Jones has put it, “carries
a distinct odor of ozone, a faint acrid smell.”
“Sulfurous,” Jones elaborated. Space smells a little like gunpowder.
Add to all those anecdotal assessments the recent discovery, in a
vast dust cloud at the center of our galaxy, of ethyl formate—and the
fact that the ester is, among other things, the chemical responsible for the
flavor of raspberries. Add to that the fact that ethyl formate itself smells
like rum. Put all that together, and one thing becomes clear: the final
frontier sort of stinks.
But . . . how does it stink, exactly? It turns out that we, and more
specifically our atmosphere, are the ones who give space its special spice.
According to one researcher, the aroma astronauts inhale as they move
their mass from space to station is the result of “high-energy vibrations in
particles brought back inside, which mix with the air.”
In the past, NASA has been interested in reproducing that smell
for training purposes—the better to help preemptively acclimate astronauts to the odors of the extra-atmospheric environment. And the better
to help minimize the sensory surprises they’ll encounter once they’re
there. The agency, in 2008, talked with the scent chemist Steve
Pearce about the possibility of recreating space stench, as much as
possible, here on earth.
Pearce came to NASA’s attention after he re-created, for an art
installation on “impossible smells,” the scents of the Mir space station.
(This was, he noted, a feat made more complicated by the fact that
Russian cosmonauts tend to bring vodka with them into space—
which affects not only the scent of their breath, but also that of their
perspiration.) The result of Pearce’s efforts? “Just imagine sweaty feet and
stale body odor, mix that odor with nail polish remover and gasoline . . .
then you get close!”
Those efforts, alas, did not move forward. But had Pearce continued in
creating a NASA-commissioned eau de vacuum, he would have had the aid
of remarkably poetic descriptions provided by astronauts themselves.
Such as this, for example, from the wonder-astronaut Don Pettit:
“Each time, when I repressed the airlock, opened the hatch and
welcomed two tired workers inside, a peculiar odor tickled my olfactory
senses,” Pettit recalled. “At first I couldn’t quite place it. It must have come
from the air ducts that re-pressed the compartment. Then I noticed that this
smell was on their suit, helmet, gloves, and tools. It was more pronounced
on fabrics than on metal or plastic surfaces.”
He concluded:
It is hard to describe this smell; it is definitely not the olfactory
equivalent to describing the palate sensations of some new
food as “tastes like chicken.” The best description I can come up
with is metallic; a rather pleasant sweet metallic sensation. It
reminded me of my college summers where I labored for many
hours with an arc welding torch repairing heavy equipment
for a small logging outfit. It reminded me of pleasant sweet
smelling welding fumes. That is the smell of space.
2.
The Meaning of That Ol’ Dial-Up
Modem Sound
PSHHHKKKKKKRRRRKAKING KAKING KAKING TSHCHCHCHCHCHCHCHCCH *DING*DING*DING
by Alexis C. Madrigal
Of all the noises that my children will not understand, the one that is
nearest to my heart is not from a song or a television show or a jingle.
It’s the sound of a modem connecting with another modem across the
repurposed telephone infrastructure. It was the noise of being part of the
beginning of the Internet.
I heard that sound again this week on Brendan Chilcutt’s simple and
wondrous site, The Museum of Endangered Sounds. It takes technological objects and lets you relive the noises they made: Tetris, the Windows
start-up chime, that Nokia ringtone, television static. The site archives not
just the intentional sounds—ringtones, etc.—but also the incidental ones,
like the mechanical noise a VHS tape made when it entered a VCR, or
the way a portable CD player sounded when it skipped. If you grew up
at a certain time, these sounds are like techno-aural nostalgia portals. One
minute, you’re browsing the Internet in 2012, the next you’re on a bus
headed up I-5 to an eighth grade football game against Castle Rock in
1995.
The noises our technologies make, as much as any music, are the
soundtrack to an era. Soundscapes are not static; completely new sets of
frequencies arrive, old ones go. Locomotives rumbled their way through
the landscapes of 19th-century New England, interrupting the reveries
of Nathaniel Hawthorne–types in Sleepy Hollows. A city used to be
synonymous with the sound of horse hooves and clattering carriages on
stone streets. Imagine the people who first heard the clicks of a bike wheel
or the vroom of a car engine. It’s no accident that early films featuring
industrial work often include shots of steam whistles, even though in many
(say, Metropolis) we can’t hear that whistle.
When I think of 2012, I will think of the whir of my laptop’s overworked
fan and the ding announcing a text message on my iPhone. I will think of
the beep of the FasTrak as it charges my credit card so my car can pass
through a tollbooth, onto the Golden Gate Bridge. I will think of Siri’s
“uncanny valley” voice.
But to me, all of those sounds—symbols of the era in which I’ve
come up—remain secondary to the hissing and crackling of the modem’s
handshake. I first heard that sound as a nine-year-old. To this day, I can’t
remember how I figured out how to dial the modem of our old Zenith.
Even more mysterious is how I found the BBS number to call or even knew
what a BBS was (a “bulletin board system”). But I did. BBSs were dial-in
communities, kind of like a local AOL. You could post messages and
play games, even chat with people on the bigger BBSs. It was personal:
Sometimes, you’d be the only person connected to that community. Other
times, there’d be one other person, who was almost definitely within your
local prefix.
When we moved to Ridgefield, which sits outside Portland, Oregon, I
had a summer with no friends and no school; the telephone wire became
a lifeline. I discovered Country Computing, a BBS I’ve eulogized
before, located in a town a few miles from mine. The rural-Washington
BBS world was weird and fun, filled with old ham-radio operators and
computer nerds. After my parents closed up shop for the workday, their fax
line became my modem line, and I called across I-5 to play games and then,
slowly, to participate in the nascent community.
In the beginning of those sessions, there was the sound, and the sound
was data.
Fascinatingly, there’s no good guide to what the beeps and hisses
represent that I could find on the Internet. For one thing, few people care
about the technical details of the 1990s’ hottest 56k modems. And for another,
whatever good information exists out there predates the popular explosion
of the Web and the all-knowing Google.
So, I asked on Twitter and was rewarded with an accessible and elegant
explanation from another user whose nom de plume is Miso Susanowa.
(Susanowa used to run a BBS.) I transformed it into the annotated graphic
below, which explains the modem sound, part by part.
[Figure: annotated graphic explaining the modem handshake sound, part by part]
This is a choreographed sequence that allowed these digital devices
to piggyback on an analog telephone network. “A phone line carries
only the small range of frequencies in which most human conversation
takes place: about 300 to 3,300 hertz,” Glenn Fleishman explained in The
New York Times back in 1998. “The modem works within these limits
in creating sound waves to carry data across phone lines.” What you’re
hearing is the way 20th-century technology tunneled through
a 19th-century network; what you’re hearing is how a network
designed to send the noises made by your muscles as they pushed around
air came to transmit anything, or the almost-anything that can be coded in
1s and 0s.
The frequencies of the modem’s sounds represent parameters for
further communication. In the early going, for example, the modem
that’s been dialed up will play a note that says, “I can go this fast.” As
a wonderful old (1997) Web site explained, “Depending on the
speed the modem is trying to talk at, this tone will have a different pitch.”
That is to say, the sounds weren’t a sign that data was being transferred;
they were the data being transferred. This noise was the analog world being
bridged by the digital. If you are old enough to remember that sound, you
knew a world that was analog-first.
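The handshake varied by modem standard, and nothing here reproduces a real 56k negotiation; what follows is a minimal, hypothetical sketch of the underlying trick. In the simplest scheme, frequency-shift keying (used by early standards such as Bell 103), the 1s and 0s are literally sent as two different tones:

```python
# A minimal sketch of frequency-shift keying (FSK), the scheme early
# modems like the Bell 103 used: each bit becomes a burst of one of two
# tones. Real 56k modems layered far more elaborate modulation on top,
# but the principle stands: the noise is the data.
import numpy as np

SAMPLE_RATE = 8000        # samples per second; ample for phone-band audio
BAUD = 300                # Bell 103 signaled at 300 bits per second
MARK, SPACE = 1270, 1070  # Hz: the originating modem's tones for 1 and 0

def bits_to_audio(bits):
    """Encode a bit sequence as a waveform of alternating tones."""
    samples_per_bit = SAMPLE_RATE // BAUD   # about 26 samples per bit
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = [np.sin(2 * np.pi * (MARK if b else SPACE) * t) for b in bits]
    return np.concatenate(chunks)

# "HI" as bits, least significant bit first, as a serial line would send it
message = [int(b) for ch in "HI" for b in f"{ord(ch):08b}"[::-1]]
waveform = bits_to_audio(message)
# Played through a sound device, `waveform` is a brief burst of dial-up warble.
```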
Long before I actually had this answer in hand, I could sense that the
patterns of the beats and noise meant something. The sound would move
me, my head nodding to the beeps that followed the initial connection.
You could feel two things trying to come into sync: Were they a duo of
computers, or me and my version of the world?
As I learned again today, as I learn every day, the answer is both.
3.
The Phone That Wasn’t There
THINGS YOU NEED TO KNOW ABOUT PHANTOM VIBRATIONS
by Robinson Meyer
You’re sitting at work. Your phone vibrates in your pocket. As you reach
for it, you look up . . . and see your phone sitting on the table.
You just experienced a phantom vibration.
A new study on the phenomenon was released in July 2012. Led by IUPU
Fort Wayne’s Michelle Drouin, it was published in the journal Computers
in Human Behavior. It’s only the third study on this new phenomenon
of the mobile age, so we can fairly say that these are the things we know
about phantom vibrations:
1. Many, many people experience phantom vibrations. Eighty-nine
percent of the undergrad participants in Drouin’s study had felt phantom
vibrations. In the two other studies on this in the literature—a 2007 doctoral
thesis, which surveyed the general population, and a 2010 survey of staff
at a Massachusetts hospital—majorities of the participants experienced
phantom vibrations.
2. They happen pretty often. Undergrads and medical professionals
agree: a consistent minority of those surveyed experience phantom vibrations
every day. Eighty-eight percent of the doctors, specifically, felt vibrations
on a basis between weekly and monthly.
3. If you use your phone more, you’re more likely to feel phantom
vibrations. The 2007 doctoral study found that people who heard phantom
rings used roughly twice as many minutes and sent five times as many
texts as those who didn’t.
4. No one’s really bothered by them. Ninety-one percent of the
kids in this new study said the vibrations bothered them “a little” to
“not at all.” Ninety-three percent of the hospital workers felt similarly,
reporting themselves “slightly” to “not at all” bothered. But this is where
age differences start kicking in, because:
5. Among those surveyed, working adults try to end the vibrations
much more often than undergrads. Most of the undergrads made no attempt
to stop phantom vibrations. This doesn’t match
with the hospital workers at all: almost two-thirds of them tried to get the
vibrations to stop (and a majority of that set succeeded, though the sample
gets so small, lessons become unclear).
6. If you react strongly and emotionally to texts, you’re more likely
to experience phantom vibrations. Drouin’s study also found that a
strong “emotional reaction” predicted how bothersome one finds phantom
vibrations. Emotional reactions to texts have been researched before: in an
earlier study of Japanese high school students, they were found to be a key
factor in text-message dependence.
7. And that strong emotional reaction means personality traits
given to emotional reactions correlate with increased phantom vibrations.
People who react more emotionally to social stimuli of any type
will react more emotionally to social texts. And people who react more
emotionally to social stimuli can be sorted into two large groups (with
the usual attached caveats about the usefulness of psychological groups):
extroverts and neurotics. But they can be sorted into these two huge groups
for two totally different reasons.
Extroverts have many friends and work hard to stay in touch with them.
Social information carries more import for them because they care deeply
about it, they’re directed to it, and their regular emotional reaction to social
stimuli carries over into texts. And since a strong emotional reaction to
texts predicts increased phantom vibrations, it makes sense—and indeed, it
correlates—that extroverts experience more phantom vibrations.
But what correlates more strongly, across the board, are neurotic traits.
Neurotics fret about their social relationships; they worry about texts, and
fear each might signal social doom. Drouin’s study found that neurotic
traits strongly correlated—even more strongly than extroversion—with an
emotional reaction to texts.
8. You can luck into experiencing fewer phantom vibrations. In this
study, conscientious undergrads, capable of greater focus, reported
fewer phantom vibrations than the rest of the undergrad population.
9. We don’t have great ways to study this yet. The three main studies
all depend on people self-reporting their own phantom vibrations when
they’re taking surveys. In all three cases, the researchers just gave surveys
to a few hundred people apiece and asked them to remember. “[A]t present,”
write the new study’s authors, “the technology does not exist to measure
individuals’ perceptions of phantom vibrations in ‘real time.”’ They hope
to apply brain scanning techniques in the future, and also that better
technology will come along that will make phantom vibration reporting
possible—perhaps this technology will rely on mobile technology itself.
10. Scientists don’t seem to know whether this is a disease. One of the
earlier surveys goes out of its way to declare “phantom text syndrome” a “Holy
Roman Empire” involving neither phantoms nor syndromes. The newer
study, though, classifies the perception of a vibration without the sensation
of it as a hallucination, and notes, “typically hallucinations are associated with
pathology.” The study’s authors wonder aloud if the doctors and nurses at
the hospital were more eager to train themselves out of phantom vibrations
because they worried about disease and abnormal symptoms, or because
they were just old. And throughout the rest of the literature, scientists have
protested recently that aural hallucinations aren’t a big deal, that they’re
not associated with a disease. The
survey’s authors compare phantom
vibrations with hearing your name called when it wasn’t.
Brains hiccup, they parse sense wrong, and the result is a phantom
vibration. Wrote the authors:
Presumably, if individuals considered these imagined vibrations
‘pathological tactile hallucinations,’ they would feel bothered
that they had them. Instead, it is likely that individuals consider
these phantom vibrations a normal part of the human–mobile
phone interactive experience.
4.
Interview: XKCD
RANDALL MUNROE ON WEB COMICS AND PHYSICS
by Megan Garber
Randall Munroe began his career in physics working with robots at NASA’s
Langley Research Center. He is famous, however, for engineering a creation
of a different kind: the iconic Web comic that is xkcd. Last week, Munroe
won the Web’s wonder for approximately the thousandth time when
he published comic No. 1110, “Click and Drag,” a soaring, spanning,
surprising work that encouraged users to explore a fanciful world through
their computer screens.
It was pointed out that if you printed the comic at 300 dpi, the
resulting image would be about 46 feet wide.
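That figure is easy to sanity-check. In the sketch below, the 165,888-pixel width is an assumption on my part (the size readers who stitched the full map together reported), not a number from this piece:

```python
# Back-of-envelope check of the "46 feet at 300 dpi" claim.
PIXELS_WIDE = 165_888  # assumed width of the fully stitched map
DPI = 300              # print resolution in the claim
feet = PIXELS_WIDE / DPI / 12  # pixels -> inches -> feet
print(round(feet, 1))          # 46.1
```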
But Munroe, work-wise, is no longer dedicated exclusively to xkcd.
He recently launched What If?, a collection of infographic essays that
answers questions about physics. Published each Tuesday, the feature—a
blog extension of the xkcd site—aims to analyze the kind of wonderful
and fanciful hypotheticals that might arise when the nerdily inclined get
together in bars: “What would happen if the Moon went away?” “How
much wood would a woodchuck chuck . . . ?” Some of What If’s recently
explored questions include: “What if everyone actually had only one soul
mate, a random person somewhere in the world?,” and “If you went outside
and lay down on your back with your mouth open, how long would you
have to wait until a bird pooped in it?,” and “If every person on Earth
aimed a laser pointer at the Moon at the same time, would it change color?”
I spoke with Munroe in September 2012 about What If, xkcd, creativity,
baseballs pitched at 90 percent of the speed of light, and how, for him, The
Lord of the Rings helped lead to it all. The conversation below is lightly
edited and condensed.
First things first: Why did you create What If?
It actually started with a class. MIT has a weekend program where
volunteers can teach classes to groups of high school students on any
subject you want. I had a friend who was doing it, and it sounded really
cool—so I signed up to teach a class about energy, which I always thought
was interesting, but which is a slippery idea to define. I was really getting
into the nuts and bolts of what energy is, and it was a lot of fun—but when I
started to get into the normal lecture part of the class, it felt kind of dry, and
I could tell the kids weren’t super into it. And then we got to a part where I
brought up an example—I think it was Yoda in Star Wars. And they got really
excited about that. And then they started throwing out more questions
about different movies—like, “When the Eye of Sauron exploded at the
end of The Lord of the Rings and knocked people over from however far
away, can we tell how big a blast that was?” They got really excited
about that—and I had a lot more fun than I did just teaching the regular
material.
So I spent the second half of the class just solving problems like that
in front of them. And then I was like, “That was really fun. I want to keep
doing it.”
So What If was basically a spin-off of the class?
That was where the idea came from. I actually wrote the first couple
entries quite some time ago, based on questions students asked me in that
class—and then on another couple questions that my friends had asked. It
was only recently that I finally managed to get around to starting it up as a
blog.
The variety of the topics you tackle is incredibly broad. How do
you figure out the best way to explain all these different, complicated
subjects to people?
Part of what I’m doing is selectively looking for questions that I already
know something interesting about. Or I’ve stumbled across a paper recently
that was really cool, and now I’ll keep an eye out for a question that will let
me bring it up—something I can use as a springboard. So in a conversation,
someone might say, “Money doesn’t grow on trees.” Okay, well, what if
money did grow on trees? Our economy would collapse. On the other hand,
we would switch to a new currency. It’s complicated.
What I like doing is finding the places in those questions where normal
people—or, people who have less spare time than I do—think, “This is
stupid,” and stop. I think the really cool and compelling thing about math
and physics is that it opens up entry to all these hypotheticals—or, at least,
it gives you the language to talk about them. But at the same time, if a
scenario is completely disconnected from reality, it’s not all that interesting.
So I like the questions that come back around to something in real life.
And the great thing with this is that once someone asks me something
good, I can’t not figure out the answer, you know? I get really serious, and
I’ll drop whatever I’m doing and work on that. One of the questions I
recently answered was, “What if, when it rains, the rain came down in
one drop?” And I was like, “Well, how big would that drop be?” I know a
little bit about meteorology, and then, before I knew it, I had spent four
hours working out the answer.
Why that need to answer? Is it because people are asking you—
because you want to help them out by answering the questions for
them?
Oh, no, no, no, there’s nothing altruistic about it! It’s just like, once it
gets in my brain, it keeps bugging me, and I don’t know the answer, but I’m
really curious. What really happens is: I have an idea for what the answer
is, but then I want to figure out if I’m right or not. So I have to keep working
to find out. And oftentimes, in the process of learning I’m wrong, I’ll run
into something even cooler. And then, once I find that, I just want to tell
everyone about it.
So I basically set up this blog to flatter all of my random impulses. And
it’s been a lot of fun so far.
And how do you decide which questions you ultimately commit to
answering?
For the first few entries I wrote, I just wanted to make sure this format
made sense. So I wrote a couple entries with questions just from my friends.
And then when we put up the blog, we included an “Ask a Question” link.
And since then, the volume of questions has been high enough that I don’t
think any set of two or three people could read them all. So I pretty much
just sit down whenever I have a few spare hours and go through them and
answer the questions that come in and try to see if there’s anything that
would make a good article.
Of the ones you’ve done, do you have a favorite so far?
The one that I recently put up: “What would happen if the
land masses of the world were rotated 90 degrees?” I’m kind of a geography
and map geek, so I started drawing out the maps for that, and I think that
ended up being, by a large margin, my longest entry. And I kept on thinking,
“Oh, what about this thing?” and, “Oh, I should really go back and read
more about trade winds and figure this out.” So I had a lot of fun with that.
But I also really enjoyed the first one that I posted, and that’s
been one of a lot of people’s favorites: the one about the baseball thrown at
90 percent of the speed of light.
That one I have a soft spot for because it was the first one I put up.
And I heard from people who know a lot more about these things than I
do—I got e-mail from a bunch of physicists at MIT saying, “Hey, I saw your
relativistic baseball scenario, and I simulated it out on our machine, and I’ve
got some corrections for you.” And I thought that was just the coolest thing.
It showed there were some effects that I hadn’t even thought about. I’m
probably going to do a follow-up at some point using all the material they
sent me.
I imagine, given all that, that the posts are incredibly labor-intensive.
How long would an average one take you?
I’m still deciding how long to make them—and part of that is just
figuring out how long it takes to answer the average question. When I
started out, I didn’t really know what to expect from the questions, so I
wasn’t sure if I’d be able to answer them quickly or what. But I think, now,
it’s about a day of work in which I don’t do much else.
That’s it? I was figuring it’d take much longer!
Well, that’s a day of solid work—I mean, most people don’t actually
work through a whole day. I certainly don’t. So in practice, it’s a few days,
because there’s a lot of e-mail checking, or having to go run an errand.
Makes sense. And, design-wise, I love how each article stands on its
own—a single product on a single page. Since that’s the same structure
you use for xkcd, I’m wondering: Why did you choose to repeat that
format?
Especially because I was so delayed in actually getting
the site up, I had a lot of time to think about how I wanted it to look.
Did I want to have individual entries, or did I want to do more of a blog
format, or did I want to have a bunch of questions answered as they came
in? So we settled on the current format, and it seems to work pretty well.
One of the things I’ve learned with doing xkcd is that you sort of give
people, “Here’s the thing, and here’s the button you can press to get another
thing.” Sometimes that can be easier to digest than, “Here’s a long page of
things.” You can read through it, and you get to the end and think, “Wow, I
just read a whole bunch. Do I really want another page like this?”
I’m not a huge fan of some of the infinite scrolling things that are
happening now. I think it’s really annoying to want to read partway
through, and then you navigate away, and can’t get back.
xkcd led to sales of posters, books, and other items. Does What
If, at this point, have a business model?
When I first started xkcd, it was all stuff I drew during classes—because
I wasn’t paying attention to the lecture. And then when I started drawing
them from home, I found that I’d have a lot of trouble coming up with
ideas. And then I’d get a project and start working on that—and I found
that, instead of it taking up more of my time, I had more comic ideas per
day and was drawing more of them. So they all reinforced each other.
The format would definitely make sense as a book. But for the moment,
it’s just been so much fun to write and answer. My experience of the
Internet has been that if you make something really cool, the neatness
speaks for itself. And that’s much more important than trying to make
something marketable, trying to make something into a product. So I just
found that if I’m steadily trying to make cool things and putting them up,
some of them, in some way or another, have a business opportunity.
Is there a direct relationship between xkcd and What If? Do they
inspire each other?
Mostly, I just think it’s helped me because it’s given me all this cool
stuff to read through. I’ll sometimes be researching a question and then be
like, “I don’t think I can turn that into a thing.” But I did find this paper
while I was doing it, which led me to this other paper, which led me to this
blog, which led me to this comic.
And it probably made me annoying! I read this book once about this
guy, A.J. Jacobs, who read the entire Encyclopedia Britannica and wrote
about the experience. He said he had the problem where someone would
be like, “Pass the salt,” and he’d be like, “Oh, did you know that salt was
originally used in medicine for this kind of thing, and then we learned it
causes this?” And people would be like, “Just pass the salt.”
I had something similar. When I was doing the money chart, someone
would say, “Oh, I can’t afford to move into a new apartment,” and I’d say,
“Oh, I know, a lot of people are in that situation because the income has
changed like this, whereas the rents in this state have shifted more than in
other states, because blahblahblahblahblah—all these economics.”
It was like, “Okay, wait. Pull back. This is not interesting.”
I learned very early on in life that not everyone wants to hear every
fact in the world, even if you want to tell them everything you’ve ever
read. Which is why it’s probably good that I have the comic schedule that I
do—because I would figure out something to say every few minutes if I were
forced to by my schedule. But it would not be the most interesting.
How does the weekly schedule of xkcd and What If play into that?
Is it a way of forcing yourself to create with regularity?
If there’s one thing I’ve learned from drawing xkcd, it’s that I need a
strict schedule. Some people who publish comics will just write whenever
they have a good idea and put things up, without a regular update schedule.
If I did that, I would never post anything. I have to have that deadline
pressure to make me pick something.
And that was part of why I hesitated with the question-answering site,
and part of why I picked it out of all the things I could have done a blog
about—because I knew that the questions were going to make me want to
answer them anyway.
What about your work environment? Where do you actually do
your research and your drawing?
For a long time, I was working from home. Once I got married, I started
working from an office. I found that having somewhere to go that isn’t my
house is mentally helpful: “This is the place where I answer email and write
blog posts,” and “over there is the place where I do the dishes.” Because
otherwise, if I’m sitting around, I can go, “You know, I haven’t cleaned the
floor of the bathroom for a while.” If you ever come to my house and the
bathroom is clean, it means that I have some project I’m supposed to be
doing that I can’t get started on.
There are a lot of people who have written books about creativity that
I haven’t read, so I’m not by any means an expert on it, but my impression
is that being creative is just a combination of getting new stimulus, and
also having periods where you’re not getting any stimulus. I had to stop
reading Reddit a few years ago, because I found that whenever I’d bump
into a problem that was going to take a little time to solve, I’d just switch
over and refresh Reddit and distract myself. Depriving myself of that has
definitely been an important part of actually getting anything done, ever.
But at the same time, if you’re just staring at your room with no
Internet or no connection, you just go crazy and don’t have anything to
give you any ideas at all. Comedians, once they start touring, all their jokes
are about airplanes. So that’s something I try to remember: Don’t do all of
your jokes about running a Web site.
But the jokes—not just visually, but in tone—feel consistent at the
same time. I was going to ask you how the comic has evolved, but I
guess the better question is: Has it evolved? Or would you say it’s kept
a steadiness throughout the years?
I don’t know. I try not to spend too much time interpreting my comics
for people, because I try to put out there whatever I can, and people can
draw whatever conclusions they want. And if they like something and
laugh, it affects how I think about it—and maybe I do more of those, and
understand what sort of jokes make people laugh, and so on.
The one thing that I didn’t anticipate at the beginning was how much
fun I had doing infographics. I think the things I’ve had the most fun
creating—and that have been the most invigorating and rewarding—have
been things like the chart of the movement of all the characters
in Lord of the Rings. That was one of those things where I’d always
thought, “You know, it’d be so cool if someone could make that and see
what that looked like.” And since I’d been doing xkcd, I realized, I had a
platform where I could do it.
And last year, I did a chart of money: all these different amounts
of money in the world and how they compare to each other. It was about
a month and a half of long days of research. I easily had more academic
journal articles and sources on that one comic than on anything I did in
the course of my physics degree. And the comic was so large, it wound up
being a huge, sprawling grid. It was sort of disorganized—I was going for
this Where’s Waldo? feel, where you could look through it, and find all this
different stuff here and there. We printed up a version that was normal
poster size, and then we got a billboard printer to make a double-size
version that was like six feet high, just so you could read all the little text.
That was a fun one. I don’t really know about the “what is and what
isn’t a comic?” debate, but it’s quite a stretch to call that a comic. I think it
was comic No. 980, which I know because we had to do a lot of custom
work for that comic to make it so people could drag and scroll and zoom
in on it—because there was no way that we could fit the whole thing on
one screen. So I remember the number from having to load it up and test
features on it.
So it lives in your heart.
Yeah. A lot of people will refer to comics by number to me, and I’ll
realize they’re expecting me to remember all the comics by number. And I
can’t even remember what I ate this morning, let alone which comic was
which number! What will also happen is that I’ll have a joke that I’ll make to
friends, and then eventually I’ll think, “Hey, that could be good as a comic.”
But then the next time I go to make that joke, with a new person, they’ll be
like, “Why are you quoting your comic to me?”
“Nooo! I’m not being pretentious, just forgetful!”
Exactly. I’m just one of those people who can always tell the same story
twice, forgetting that I’ve told it already.
So, given the challenges of creating fresh content—and I know you
get this all the time, but I have to ask—how do you actually come up
with new ideas? Especially over such a long stretch of time?
I think, if anything, it’s noticing the things that make me laugh, and
grabbing onto them and figuring out how to write them down. There are
definitely times—and I think this is pretty common among cartoonists—
where you spend an entire day trying to think of an idea, and you’re like, “I
give up.” And then you go and take a shower or run an errand, and halfway
there you get an idea.
5.
eTalmud
THE iPAD FUTURE OF THE ANCIENT TEXT
by Rebecca J. Rosen
On the night of August 1, 2012, in the marshes of East Rutherford, in
a stadium that is normally home to cheering football fans or screaming
teenyboppers, some 90,000 Jews—a sellout crowd—gathered for Siyyum
HaShas, a celebration that marks the completion, after seven and a half
years, of the page-a-day (front and back) Talmud program known as Daf
Yomi (literally “page of the day”). When the cycle began anew later that
week, some Talmud-reading Jews became the first ever to take on the
challenge, from start to finish, not on the written page but with an app.
The Talmud, specifically the Babylonian Talmud, is the printed work of
rabbinic discussions and legal interpretations collected and transcribed in
Aramaic in the fourth and fifth centuries. All told, its 2,711 pages continue
to be the basis of many Jews’ religious practice. In 1923, Rabbi Meir
Shapiro introduced the idea of the Daf Yomi, and the first cycle began on
Rosh Hashanah (the Jewish New Year) of that year. The August ceremony
marked the end of the 12th such cycle.
For Jews who want to participate in Daf Yomi, there’s really only
one thing you need (besides, obviously, some serious Jewish literacy), and
that, no surprise, is a Talmud. A Talmud set—73 volumes for a mainstream
edition with English translation—is going to run you some
$2,000 (though simpler versions without the translation can be less). It
is also seriously heavy (and this is the small size, intended
for Daf Yomi practice). All this to say, an iPad Talmud app could be much
more convenient, affordable, and usable (despite it being off-limits on the
Sabbath for many Shabbat-observant Jews).
Daf Yomi was created to bring Talmudic study to more people.
An edition known as the Schottenstein Talmud, published over a
15-year period from 1990 to 2004, continued that trajectory of
popularization, by providing in-depth English translations. Now
ArtScroll, the leading Orthodox publishing imprint, has released the
Schottenstein edition in a long-awaited app, and in doing
so takes another step in that process of making the Talmud ever more
accessible. The app costs about half as much as the printed version, though
the exact price comparison depends on whether you opt for a subscription
or a package, or buy the volumes piecemeal. Any way you look at it, it’s
still not cheap. It’s also not the first Talmud app, but the depth
of the tools available (floating translations, pop-up commentaries, and
multiple view options for different layouts and translations) set a new
standard for Jewish text apps.
But here’s what the app doesn’t do: The app is a closed work, much
like a book, and doesn’t take advantage of the openness made possible
with networked tablet technology. It’s not repurposable (it’s copyrighted);
it doesn’t allow for inline contributions or conversations; it’s not social.
It’s a book made digitally navigable, but it’s not a book made digitally
interactive.
A video demonstrating the app’s functioning notes, “We’re taking
interactive learning to the next level,” and then proceeds to show a
drop-down menu in which a user can reset the font and layout, and turn different
commentaries on and off. So it’s interactive, in the sense that you can
manipulate the content that’s available, but not in the sense that a user
can contribute or build upon the work in any way. I suppose that is literally
“the next level” from paper, but it certainly does not live up to what is
possible on a tablet.
As Brett Lockspeiser, one of the people behind Sefaria, an open-source
Jewish text resource, said to me, “It seems like it’s basically the
same content that you have in their printed [Talmud] right now. And so
they have a fixed set of commentaries and elucidations that they already
have, and now they’re making them available on the iPad. It seems like an
exciting app. It would be great to learn with, if you wanted to learn with an
ArtScroll [Talmud], but it doesn’t seem like it’s opening it up in any sort of
new way. It doesn’t seem like they are actually bringing any new content
to it; they’re just giving a new form to the content they’ve already got.”
The Talmud is an essentially social document—one that evolved in
dialogue and is often studied in pairs known as chevrutah. Its entire
layout on a page evolved as a model for how commentary can be
better integrated with a text. But the sad irony is that the app, which could
have used modern technology to offer the most social of Talmuds, ends the
Talmud’s social evolution for its users instead of pushing it forward.
Such an approach stands in contrast to efforts such as Aharon
Varady’s Open Siddur Project, part of a movement to create more
open-source texts for Jews worldwide. “Where I might expect an eagerness
to provide channels for the public to adopt, adapt, remix, and redistribute
their ideas, [publishing houses] see themselves as responsible stewards
of their intellectual property,” Varady explained to Alan Jacobs
in The Atlantic in 2012. “Are religious communities synonymous
with a passive marketplace of consumers whose experience of religion
is divorced and alienated from their essential creative spirit, or are they
creatively engaged participants in a visionary movement? It really comes
down to how one sees religion itself: Is it a collaborative project or is it
some sort of passive observed performance art?”
But building that collaborative ecosystem is, as the serial Jewish social
entrepreneur Daniel Sieradski explained to me via e-mail, challenging.
Speaking of Lockspeiser and his Sefaria project, Sieradski said, “His
problem though, like all Jewish app developers, is that there are no free,
open-source, and, most importantly, high-quality versions of these texts,
particularly in translation, and particularly that are pre-XML encoded. That
means that to develop any free, open-source alternative that can compete
with ArtScroll’s Schottenstein [app], an inordinate amount of work must
go into just getting the texts together, let alone building the functionality to
unlock their full potential on a tablet device.”
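To make “pre-XML encoded” concrete, here is a hypothetical sketch of the kind of structured, bilingual markup an app needs before it can lay out, link, or annotate a text. The element names are invented for illustration; this is not Sefaria’s or ArtScroll’s actual schema:

```python
# A hypothetical sketch of structured Talmud markup. Element names and
# attributes are invented for illustration; real projects define their own
# schemas. The point: pairing each source line with its translation in a
# machine-readable way is the unglamorous prerequisite for every app feature.
import xml.etree.ElementTree as ET

page = ET.Element("daf", tractate="Berakhot", page="2a")
line = ET.SubElement(page, "line", n="1")
ET.SubElement(line, "source").text = "מאימתי קורין את שמע בערבין"
ET.SubElement(line, "translation", lang="en").text = (
    "From when may one recite the Shema in the evening?"
)

print(ET.tostring(page, encoding="unicode"))
```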
Moreover, beyond what, exactly, is possible technologically with the
ArtScroll app, there is the issue of ArtScroll’s translations themselves,
which will be the lens through which many Talmud-app-studying
Jews access the Talmud. ArtScroll, particularly in its siddurim (prayer
books), is known for presenting a very clear, unambiguous interpretation
of Jewish law and practice. (As an essay on the release of an
ArtScroll siddur for women noted, one old joke goes, “Why did
the chicken cross the road? Because ArtScroll told it to.” The essay goes on
to say that ArtScroll’s “clear explanations of practice are heavily biased
toward certain viewpoints, and reductionist to the max” and that as a
result, those viewpoints come to be seen as “correct,” although a range
of interpretations is often possible, particularly concerning the role of
women in Jewish ritual.) Many Talmud readers have similar concerns
about ArtScroll’s presentation of Talmud, and this goes hand-in-hand
with the ArtScroll app’s unsocial, uninteractive presentation. As Shai
Secunda at The Talmud Blog has written, “Yet, with all the bells
and whistles the new app looks like it will continue to perpetuate the sense
one has while reading from a Schottenstein Talmud; that this is a fully
coherent canon, even if every generation adds new insights channeled
through a group of bearded men with positions of rabbinic authority,
whose thoughts are collected in footnotes at the bottom of the page.”
So much more could be possible on an iPad—say, for example, that
Talmud students could follow their favorite contemporary scholars’ commentaries and teachings, in addition to those of the Babylonian era—but
we aren’t there yet, and we aren’t going to get there with ArtScroll, which
tends toward a more prescriptive bent on Jewish theology and scholarship.
Instead, over time, projects like Varady’s and Lockspeiser’s will help, but a
lot of work lies ahead.
And even if those options do develop as their progenitors hope, the
functionality and depth of ArtScroll’s app all but ensure that it will become
the go-to app for many students of Talmud—a growing circle (in part thanks
to the increased accessibility brought by apps) that surely includes many
people without the knowledge necessary to be critical of the translations
ArtScroll offers. And so ArtScroll it will be, at least for the near future,
with all the good and bad that that entails.
6.
Q: Why Do We Wear Pants? A:
Horses
THE SURPRISINGLY DEEP HISTORY OF TROUSER TECHNOLOGY
by Alexis C. Madrigal
Whence came pants? I’m wearing pants right now. There’s a better-than-even
chance that you, too, are wearing pants. And probably neither of
us has asked ourselves a simple question: Why?
It turns out the answer is inextricably bound up with the Roman Empire,
the unification of China, gender studies, and the rather uncomfortable
positioning of man atop horse, at least according to the University of
Connecticut evolutionary biologist Peter Turchin.
“Historically there is a very strong correlation between horse-riding
and pants,” Turchin wrote in a blog post. “In Japan, for example, the
traditional dress is kimono, but the warrior class (samurai) wore baggy
pants (sometimes characterized as a divided skirt), hakama. Before the
introduction of horses by Europeans (actually, re-introduction—horses
were native to North America, but were hunted to extinction when humans
first arrived there), civilized Amerindians wore kilts.”
The reason why pants are advantageous when one is mounted atop
a horse should be obvious; nonetheless, many cultures struggled to adapt,
even when they were threatened by superior, trouser-clad riders.
Turchin details how the Romans eventually adopted braccae (known
to you now as breeches), and documents the troubles a third-century
BC Chinese statesman, King Wuling, had getting his warriors to switch
to pants from the traditional robes. “It is not that I have any doubt
concerning the dress of the Hu,” Wuling told an adviser. “I am afraid
that everybody will laugh at me.” Eventually, a different people, the Qin,
conquered and unified China. They just so happened to be the closest to
mounted barbarians and thus were early to the whole cavalry-and-pants
thing.
Turchin speculates that because mounted warriors were generally men
of relatively high status, the culture of pants could spread easily throughout
male society.
I’d add one more example from history: the rise of the rational dress
movement in conjunction with the widespread availability of the bicycle.
Here’s a University of Virginia gloss on that phenomenon: “The advent
and the ensuing popularity of the safety bicycle, with its appeal to both
sexes, mandated that women cast off their corsets and figure out some way
around their long, billowy skirts. The answer to the skirt question was to
be found in the form of bloomers, which were little more than very baggy
trousers, cinched at the knee.”
What all these examples suggest is that technological systems—
cavalries, bicycling—sometimes require massive alterations to a society’s
culture before they can truly become functional. And once it’s locked-in,
the cultural solution (pants) to an era’s big problem (chafing?) can be more
durable than the activity (horse-mounted combat) that prompted it.
7.
The Only American Not on Earth on 9/11
ASTRONAUT FRANK CULBERTSON WATCHED FROM THE SPACE STATION AS THE ATTACKS UNFOLDED ON THE GROUND.
by Rebecca J. Rosen
When astronauts describe the feeling of sailing around space, looking at
our planet from hundreds of miles above, they often invoke the phrase
orbital perspective, a shorthand for the emotional, psychological,
and intellectual effects of seeing “the Earth hanging in the blackness of
space.” This experience is characterized by not merely awe but, as the
astronaut Ron Garan put it, “a sobering contradiction. On the one
hand, I saw this incredibly beautiful, fragile oasis—the Earth. On the other,
I was faced with the unfortunate realities of life on our planet for many of
its inhabitants.”
[Figure: photograph of the smoke plume over Manhattan on September 11, 2001, seen from the International Space Station]
This contradiction was particularly poignant on 9/11, when the effects
of violence on earth were actually visible from space, as captured in the
photograph above. At the time, three people were in space: the Russian
cosmonauts Mikhail Tyurin and Vladimir Dezhurov, and the American Frank
Culbertson, making Culbertson the only American not on earth during the
9/11 attacks.
Over the course of that night and the following few days, Culbertson
wrote a letter to those at home, and his words echo that orbital
perspective Garan describes. “It’s horrible to see smoke pouring from
wounds in your own country from such a fantastic vantage point,” he wrote.
“The dichotomy of being on a spacecraft dedicated to improving life on the
earth and watching life being destroyed by such willful, terrible acts is
jolting to the psyche.”
Culbertson told of how the day had unfolded on the space station:
I had just finished a number of tasks this morning, the
most time-consuming being the physical exams of all crew
members. In a private conversation following that, the flight
surgeon told me they were having a very bad day on the
ground. I had no idea . . .
He described the situation to me as best he knew it
at ~
CDT. I was flabbergasted, then horrified. My first
thought was that this wasn’t a real conversation, that I was still
listening to one of my Tom Clancy tapes. It just didn’t seem
possible on this scale in our country. I couldn’t even imagine
the particulars, even before the news of further destruction
began coming in.
Vladimir came over pretty quickly, sensing that something
very serious was being discussed. I waved Michael into the
module as well. They were also amazed and stunned. After
we signed off, I tried to explain to Vladimir and Michael
as best I could the potential magnitude of this act of terror
in downtown Manhattan and at the Pentagon. They clearly
understood and were very sympathetic.
I glanced at the World Map on the computer to see where
over the world we were and noticed that we were coming
southeast out of Canada and would be passing over New
England in a few minutes. I zipped around the station until
I found a window that would give me a view of NYC and
grabbed the nearest camera. It happened to be a video camera,
and I was looking south from the window of Michael’s cabin.
The smoke seemed to have an odd bloom to it at the base of
the column that was streaming south of the city. After reading
one of the news articles we just received, I believe we were
looking at NY around the time of, or shortly after, the collapse
of the second tower.
As he signed off, Culbertson reflected, “I miss all of you very much.”
8.
On the Internet, Porn Has
Seasons, Too
NOT SURE WHAT TIME OF YEAR IT IS? MIGHT WANT TO CHECK YOUR SEARCH HISTORY.
by Megan Garber
To everything (turn, turn, turn) . . . there is a season (turn, turn, turn) . . .
and a time . . . to every purpose . . . under heaven. So true, Bible. So true,
Byrds. What the truisms neglect to tell us, however, is that “every purpose
under heaven” includes not only planting and reaping and laughing and
weeping but also, apparently, performing porny Google searches.
We know this, now, because of the plantings and reapings of science.
According to a paper published in the Archives of Sexual
Behavior, Internet pornography—like so many storms, like so much
kale—is seasonal. Porn’s peak seasons? Winter and late summer.
Researchers at Villanova examined the Google trends for such
commonly-searched-for terms as porn, xxx, xxvideos—and other, more-descriptive phrases that, because I am looking at a portrait of James
Russell Lowell as I write this, I will let you read in the paper
itself. And what they found was a defined cycle featuring clear peaks
and valleys—recurring at discernible six-month intervals. The cycle, as you
can see in the chart above, maps surprisingly well to our calendar seasons.
This is . . . unique. The researchers also examined a control group
consisting of Google searches for nonsexual terms—and those terms demonstrated no such cyclical pattern.
So there’s something about sex, it seems. Porn is periodical. This is borne
out by another (semi-)control in the Villanova experiment: researchers determined search terms associated with a relatively purpose-driven category
of sexytime—prostitution and dating Web sites—and found that, for those
terms . . . the six-month cycle showed up again.
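A reader who wants to poke at this personally can get surprisingly far with a few lines of code. Here is a minimal sketch, in Python, of how one might hunt for a six-month cycle in weekly search-volume data; the series below is synthetic, and the Villanova team’s actual data and methods are in the paper itself.

```python
# A toy seasonality check: generate six years of weekly "search volume"
# with a 26-week cycle, then recover the cycle's period with an FFT.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(52 * 6)  # six years of weekly observations

# Synthetic stand-in for normalized search volume: a cycle peaking
# every 26 weeks (six months), plus noise.
volume = 100 + 10 * np.cos(2 * np.pi * weeks / 26) + rng.normal(0, 3, weeks.size)

# Subtract the mean, then look for the dominant frequency.
spectrum = np.abs(np.fft.rfft(volume - volume.mean()))
freqs = np.fft.rfftfreq(weeks.size, d=1.0)  # in cycles per week

dominant = freqs[spectrum.argmax()]
print(f"Dominant period: {1 / dominant:.1f} weeks")  # about 26 weeks
```

Run against a real Google Trends export instead of the synthetic series, the same spectrum would show whether peaks like the ones the researchers describe stand out from the noise.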
Which: fascinating. The seasonal Internet! The sex-seasonal Internet!
Now, sure, this is one study, and, sure—as always—there is more work
On the Internet, Porn Has Seasons, Too
to be done. (For example, what would happen if the terms were changed
or broadened? And, per this research alone: What happened in the waning
weeks of one year and the opening weeks of the next to make porn-related
searches plummet and rise so sharply?
Was the world reacting, a bit belatedly, to the global financial meltdown?
Did people suddenly get better broadband connections?)
As they stand, though, the findings are striking. And they suggest,
above all, the power of the Internet to reveal the patterns of human
emotion in a new scope, from a new angle. The Internet knows what
we want. It knows what we do when we are alone, or think we are.
And it knows all of us with the same totality of intimacy. As the blog
Neuroskeptic points out, the researchers’ findings could “reflect a more
primitive biological cycle”—something profound in the complex dynamic
among biology, technology, and the world of flesh that mediates the two.
9.
Lana Del Rey, Internet Meme
THE ECSTASY AND THE AGONY OF THE INTERNET CELEBRITY
by Megan Garber
Lana Del Rey’s postponement of her planned album tour ended
an experiment that had brought Internet culture into the mainstream and
back out again.
Even before her infamously awkward appearance on Saturday
Night Live in January 2012, Del Rey (aka Lizzy Grant) had become “one
of the most controversial figures in indie.” That’s largely because the singer—she of YouTube mega-celebrity, and thus she who
often in press accounts gets the term Internet sensation appended to her
name—has, it turns out, a dad who is a wealthy Internet entrepreneur. She
has, it turns out, close ties to the music-industrial complex. She has, it turns
out, hair that used to be blonde.
Lana Del Rey, in other words, is a pop musician who has been
manufactured as a pop musician. In that, she is no different from Beyoncé
or Gaga or Madonna or any other musical act that has existed ever. Music
part one. Nuggets
is manufacturing. Music is performance. Music is spectacle. It lives and dies
on its ability to combine sincerity and falsity in appropriate ratios. And so,
inevitably, it has introduced many an artist to the business end of the hype
cycle.
Lana-née-Lizzie, however, is different from her counterparts in one
particular way: she found her current fame, such as it is, on YouTube. She
is not a celebrity so much as she is an Internet celebrity.
And as an Internet celebrity, Lana/Lizzie is not just a product; she is
a possession. She is, in a very real sense, ours. We, the Internet—we, the
buzzing democracy of views and virality—created her. We have made her
both what she is and more than what she is, aura and reproduction in
one, a celebrity forged in the fire of 26 million YouTube views. We have,
link by link, converted Lana Del Rey the performer into Lana Del Rey the
meme.
Which would be terrific for all involved—the singer gets her audience,
the audience gets its singer—were it not for the fact that Lana Del Rey
is also, inconveniently, a person. She isn’t a LOLcat; she isn’t an image
that can be fractured and amended and replicated and transmitted at
will. She is an artistic and aesthetic product, but one that comes with
an unavoidable narrative arc—an arc whose plot line, archived on the
Internet, involves past forays into the music business that were much less
“gangsta Nancy Sinatra” and much more “knockoff Britney Spears.” As
the blogger Carles put it when he “exposed” the indie singer for her non-indie past: “Meet Lizzy Grant. She had blonde hair, didn’t look very ‘alt
sexy.’ Sorta like a girl from my high school who was a part-time hostess at
Chili’s.”
The community that created Lana Del Rey, in other words, turned
on Lana Del Rey. She became the digital version of the electrified
Dylan, with an invented name and an incorrect instrument. Even as a folk
singer, as Greil Marcus put it, Dylan-né-Zimmerman was “always suspect.”
He “moved too fast, learned too quickly, made the old new too easily.”
And Del Rey made her old new too easily. She whirled too quickly, and
on a crooked axis. Her voice, my colleague Spencer Kornhaber has
written, ranges “from gloomy sigh to Betty Boop–ish squeal”; her persona,
it seems, is equally variable. And so the singer broke the compact that was
implied in her viral celebrity, which was that she would play to the platform
that created her in her current form. She used the world of the Internet—
a world that prides itself on its meritocracy, that enables and ennobles
organic self-expression—for the very analog purposes of album sales and
album tours and Saturday Night Live spots and Vogue covers.
It worked for her. Her album debuted at No. 2 on the
Billboard 200, right above Dylan’s old friend Leonard Cohen. But
her fame was, in its very fame-iness, also a violation—a breach of
contract. “People seem to feel that Del Rey is trying to trick us,” as
Sasha Frere-Jones put it.
You’d think, in this era of the Facebook Timeline—an era that finds
us archiving our own evolutions in the vast scrapbook of the Internet—
we would be more tolerant of evolution in our celebrities. You’d think,
actually, that we would celebrate it. And, to some extent, we do. Madonna
has made her history part of her current story. Lady Gaga’s transformation
from cabaret-style pianist into egg-dweller is there, on YouTube,
for all to see.
As an Internet celebrity, though—a celebrity made by the Internet,
rather than simply put upon it—Lana Del Rey lacks the privilege to
transform. We, the Web, are responsible for her. And we are also culpable
for her. We have given her one of the most precious things we have to
bestow in a world of socialized commerce: our endorsement. And that
means that we are invested in her authenticity—or, more specifically, her
ability to produce authenticity—more than we are in celebrities whose
origin stories are of the analog world. She is us; we are our tastes. What
my colleagues are sharing on Twitter, what my friends are listening to
on Spotify, what my neighbors are rating on Yelp—these, increasingly, are
determining the products I choose to consume, as well. Their choices are
influencing mine, explicitly.
Which means that our choices, collectively, take on the freight of
artistic implication. Taste making becomes not just a byproduct of social
interaction; it becomes, in some sense, the point. My friends are not just
my friends, but my filters. They play a curatorial role in my online life, just
as I play one in theirs. And we are all increasingly conscious of that fact.
This invests us much more personally in the things and the people we
endorse commercially. And it means that authenticity becomes a matter
of not just subjective value, but of collective value. We get to decide, we
tell ourselves, whether Lana Del Rey is sincere in the art she produces for
us. Because being true to herself means also being true to us, her audience
and her enablers. It means, apparently, giving herself—her self—over to the
community that propelled her rise. The title of the singer’s album brought
forth by the Internet is Born to Die. It seems appropriate.
10.
Copy, Paste, Remix
THE ANALOG ROOTS OF DIGITAL TRICKS
by Rebecca J. Rosen
Jacob Samuel insists that he is not an artist. “I do not think like an artist,”
he says. “I do not have creative thoughts. I have technical thoughts.” And
yet this technologist has had tremendous success as an artist. Early in 2012,
three pieces of his work were featured in a major exhibition at the Museum
of Modern Art in New York. How is that possible?
The answer begins with a peculiar contraption that Samuel designed:
a portable printing studio, built of bellows from two view cameras
and surgical tubing, and powered by Samuel’s own breath. With
his portable gadget, he travels the world making aquatints, a variety of
printmaking that involves etching a copper plate with an acid wash, and
so called because the resulting prints resemble watercolors. The portable
studio, Samuel told me, “gave me the freedom to be able to work with artists
in their own studios.” He’s not the artist but the instigator, the technician
whose magic box can draw an artist into a medium he or she has never
explored before.
The exhibit, Print/Out, was a major retrospective of the past two
decades of print art—a medium that is at its core about replication, the
duplication of a work, whether on an inked-up letterpress or a laser-jet
printer. The works of the exhibit serve to remind us that replication is
not a child of the digital age. While “copy” and “paste” may be the most
basic of computer tricks, their analog names did not come out of nowhere.
Printmaking has always been about art that can be replicated—and the
manipulation that makes each iteration new.
All three of Samuel’s Print/Out pieces are collaborations: “Spirit Cooking,” with the performance artist Marina Abramović; an untitled work with
Gert and Uwe Tobias; and “Coyote Stories,” a project with the painter Chris
Burden featuring delicate prints of objects—knives, potato chips, a wallet—
that illustrate a series of handwritten stories.
In some ways, Samuel’s collaboration with Burden, “Coyote Stories,”
perfectly encapsulates the centuries-deep reach of Samuel’s work. The piece
is a series of stories about Burden’s encounters with coyotes in Southern
California. Samuel initially thought they would print the stories from
letterpress, so he had one of the stories typeset. Burden thought the print
looked “too mechanical.” He wanted to write it out by hand.
The trick was that for Samuel to print the stories from an etching,
Burden would have to write the stories by hand on an etching plate
backward, because the process flips the impression. Burden was unsatisfied
with his backward handwriting—it didn’t look sufficiently natural. He
wanted to write it on a legal pad and use that. How would Samuel make
it look like a print? His solution was to take Burden’s handwritten stories,
scan them into Photoshop, and print them out onto a Japanese paper known
as surface Gampi.
But something was still missing: when an artist prints from an etching
plate, the paper takes on the imprint of the plate’s beveled border. So once
Samuel had printed the scans, he took a blank, legal pad–size copper plate
and ran each print through the press with the plate, giving them the beveled
marks of an etching.
The resulting prints look like etchings—a centuries-old technique—but
they are printouts of digital scans of stories handwritten on a legal notepad.
In a similar project with the glass artist Josiah McElheny (not featured
in Print/Out), Samuel traced glass-making templates, printed aquatint
etchings from those, xeroxed the images, scanned those, manipulated them
using Adobe Illustrator, turned them into photo-engraving plates (a
technology, he notes, from many decades back), and then printed the
results as etchings. “We went from these tracings, to these xeroxes, to
Photoshop, . . . and then all of the sudden bringing it all the way back
in terms of plate making, and then printing them as standard
etchings.” The final print was on Japanese paper; McElheny was working
from Sweden; Samuel was in Los Angeles.
All of this—moving information and images from medium to medium,
remixing them at every step along the way, collaborating across international boundaries—sounds so very familiar. They are actions we tend to
think of as belonging more to the digital world than the physical one. But
digital tools have not fundamentally altered the basic principles of print—
replication and remixing—so much as amplified them.
MoMA’s show highlighted these principles over and over again. A
piece by the German artist Martin Kippenberger was perhaps the most
extreme example: Kippenberger had an assistant make reproductions of his
own work. Unsatisfied with the assistant’s reproductions (Kippenberger
deemed them too good), he destroyed them and disposed of them in a
dumpster. He took a photograph of the paintings as they lay broken there,
screenprinted the photograph, and then, in the last stage, took
a circular saw to the print to manipulate the physical object so that it would
be unique. It’s not a digital process; it’s an analog one, but nevertheless one
whose basic pathology is copy, manipulate, repeat.
Print/Out captured this process in its very design. A spread of dots
appeared several times on the exhibit’s walls, each time with the same
set of prints displayed with slight variations in arrangement and edition.
Through this motif, the exhibit itself became a print, replicated, edited, and
replicated again.
The MoMA exhibit is a reminder that the power of print—its ability to
spread ideas has put it at the center of many social and political
movements—is enhanced, not supplanted, by new techniques. Sometimes
modern innovations can breathe life back into older techniques, not so
much rendering them obsolete as reviving them. For printmaking, the
results are works whose creation can be quite complex—remix after remix
after remix—but whose beauty, in the end, is really quite simple.
11.
The Sonic Fiction of the
Olympics
THE AUDIO FROM YOUR FAVORITE EVENTS ISN’T REAL. IT’S MUCH BETTER.
by Alexis C. Madrigal
When the London 2012 Olympics begin in a couple of weeks, a menagerie
of sports will take over the world’s TV screens. Tens of millions of people
will watch archery, diving, and rowing.
Or at least we call it watching.
Really, there are two channels of information emanating from your
flatscreen: the pictures and the sound. What you see depends, in part, on
what you hear. To be immersed in a performance on the uneven bars, we
need to hear the slap of hands on wood, and the bar’s flexing as the athlete
twirls. Watching sports on mute is like eating an orange when you have a
stuffy nose.
A massive sporting extravaganza like the Olympics requires massive
media production. The television broadcasts from the Olympics aren’t
merely acts of capturing reality, but acts of creation. TV sporting events are
something we make, and they have a tension at their core: on the one hand,
we want to feel as if we watched from the stands, but on the other, we want
a fidelity and intimacy that is better than any in-person spectating could
be. Our desire is for the presentation of real life to actually be better than
real life.
This is most apparent on the soundtrack, where dozens of just barely
detectable decisions are made to manipulate your experience. Behind those
decisions is the audio engineer Dennis Baxter, who has been working on
the Olympics for years.
“I am not a purist whatsoever in sound production,” Baxter says
in the BBC documentary The Sound of Sport, produced by Peregrine
Andrews. “I truly believe that whatever tool it takes to deliver a high-quality, entertaining soundscape, it’s all fair game.”
For the London Olympics, Baxter will deploy a small army of mixers,
sound technicians, and microphones. Using all the modern sound technology
they can get their hands on, his team will shape your experience to
sound like a lucid dream, a movie, of the real thing.
Let’s take archery. “After hearing the coverage in Barcelona at the ’92
Olympics, there were things that were missing. The easy things were there:
the thud and the impact of the target—that’s a no brainer—and a little bit
of the athlete as they’re getting ready,” Baxter says.
“But—it probably goes back to the movie Robin Hood—I have a memory
of the sound, and I have an expectation. So I was going, ‘What would be
really really cool in archery to take it up a notch?’ And the obvious thing
was the sound of the arrow going through the air to the target. The pfft-pfft-pfft type of sound. So we looked at this little thing, a boundary microphone,
that would lay flat; it was flatter than a pack of cigarettes, and I put a little
windshield on it, and I put it on the ground between the athlete and the
target, and it completely opened up the sound to something completely
different.”
Just to walk through the logic: based on the sound of arrows in a
fictional Kevin Costner movie, Baxter created the sonic experience of sitting
between the archer and the target, something no live spectator could have.
On the gradient between recording and creating, diving requires even
more production. “You can really separate the ‘above’ sounds in the
swimming hall and the ‘below’ sounds, the underwater sound. It really
conveys the sense of focus and the sense of isolation of the athlete,” Baxter
continues. “We have microphones on the handrails as the divers walk up.
You can hear their hands. You can hear their feet. You can hear them
breathing.”
Then, as they reach the water, a producer remixes the audio track to pull
from the underwater hydrophone at the bottom of the pool. Now you are
literally pulling audio from in the pool. “You can hear the bubbles. You get
the complete sense of isolation, of the athlete all alone,” Baxter concluded.
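To make the mechanics concrete, here is a minimal sketch of the kind of handoff a producer performs at the moment of entry: fade the deck microphones out and the hydrophone in. The signals and timings are invented for illustration; this is not Baxter’s actual console workflow.

```python
# A toy crossfade from "above water" microphones to an underwater
# hydrophone, as described for the diving coverage.
import numpy as np

SR = 48_000                           # samples per second
n = 4 * SR                            # a four-second clip

rng = np.random.default_rng(1)
above = 0.1 * rng.standard_normal(n)  # stand-in for the deck-mic mix
below = 0.1 * rng.standard_normal(n)  # stand-in for the hydrophone feed

entry = int(2.5 * SR)                 # assumed moment the diver hits water
fade = int(0.2 * SR)                  # a 200-millisecond crossfade

gain = np.ones(n)                     # 1.0 means all "above" signal
gain[entry:entry + fade] = np.linspace(1.0, 0.0, fade)
gain[entry + fade:] = 0.0             # after entry: all underwater sound

mix = gain * above + (1.0 - gain) * below  # the track viewers hear
```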
But sometimes, there is no way to capture the sound of being there.
There are vast differences in the physics of recording light and sound waves.
Camera lenses can provide extreme zoom with near-perfect fidelity.
Microphones are not lenses; sound waves are not photons. Moving air has
to strike a diaphragm, and move it, in order to generate the electrical signals
we play back as sound. You have to get close, and you can’t block out all
background noise.
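The arithmetic behind that limitation is simple. For a point source in open air, sound pressure level falls off with distance, roughly six decibels for every doubling, which is why no microphone can “zoom.” A quick sketch, assuming a level of 80 dB measured at one meter:

```python
# Inverse-distance law for sound: SPL drops 20*log10(d) dB at distance d.
import math

def spl_at(distance_m, spl_at_1m=80.0):
    """Sound pressure level in dB at a given distance from a point source."""
    return spl_at_1m - 20 * math.log10(distance_m)

for d in (1, 2, 10, 50):
    print(f"{d:>3} m: {spl_at(d):5.1f} dB")  # 2 m is ~6 dB down; 50 m is ~34 dB down
```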
Baxter’s answer to this problem appears on Roman Mars’ spectacular 99% Invisible podcast, which excerpted the BBC program.
“In Atlanta, one of my biggest problems was rowing. Rowing is a two-kilometer course. They have four chaseboats following the rowers and they
have a helicopter. That’s what they need to deliver the visual coverage of
it,” Baxter explains. “But the chaseboats and the helicopter just completely
wash out the sound. No matter how good the microphones are, you cannot
capture and reach and isolate sound the way you do visually. But people
have expectations. If you see the rowers, they have a sound they are
expecting. So what do we do?”
Well, they made up the rowing noises and played them during the
broadcast of the event, like a particularly strange electronic music show.
“That afternoon we went out on a canoe with a couple of rowers
[and] recorded stereo samples of the different type of effects that would be
somewhat typical of an event,” Baxter recalls. “And then we loaded those
recordings into a sampler and played them back to cover the shots of the
boats.”
The real sound, of course, would have included engine noises and a
helicopter whirring overhead. The fake sound seemed normal, just oars
sliding into water. In a sense, the real sound was as much a human creation
as was the fake sound, and probably a lot less pleasant to listen to.
So, in order to make a broadcast appear real, the soundtrack has to be
faked, or to put it perhaps more accurately, synthesized. We have a word
for what they’re doing: This is sonic fiction. They are making up the sound
to get at the truth of a sport.
There is clearly a difference between making up the audio for the
rowing competition and using a bunch of different mics, mixes, and
production techniques to achieve the most exciting effect in diving. But
how bright is the line that separates factual audio from fictional audio? A
bit thinner and more porous than we’d like? If we want hyperreality as an
end, can we really quibble about the means?
12.
Reel Faith
HOW THE DRIVE-IN MOVIE THEATER HELPED CREATE THE MEGACHURCH.
by Megan Garber
The Crystal Cathedral rises, like a spaceship, out of an impossibly green
lawn in Garden Grove, in Orange County, California. The structure is
neither crystal nor, technically, a cathedral, but it’s acted as an archetype
for a 20th-century phenomenon: the Christian megachurch. From the
church’s soaring, sunlit pulpit, the charismatic preacher Rev. Robert
Schuller would speak to a sea of worshippers—not just to congregants
in the cavernous room itself, his image amplified by a Jumbotron, but also
to a much wider audience via the church’s iconic Hour of Power show.
If Christianity exists to be spread, the Crystal Cathedral has existed to do
that spreading. It’s been at once a place of worship and a TV studio.
The Crystal Cathedral has been in the news most recently for its financial troubles—culminating in bankruptcy, a controversial shift in
the church’s leadership structure, and, finally, the sale of the
cathedral itself to a neighboring (Catholic) diocese. In June 2012, the
church ministry announced that the congregation would be moving its
services to a smaller building, a currently Catholic church a mile down
the road from the cathedral’s compound. In that, the cathedral also seems
symbolic of its times.
But if a church is a kind of technology—of media, of communication, of
community—then it’s fitting that even a megachurch would have a humble
start-up story. And the origins of the Crystal Cathedral, for all its shine and
swagger, are more garage than skyscraper.
In 1955, the Reformed Church in America gave a grant of $500
to Reverend Schuller and his wife, Arvella. The young couple were to
start a ministry in California; for that, they needed to find a
venue that would host their notional congregation. While making the trip
from Illinois, on Route 66, the reverend took a napkin and listed sites that
could host his budding ministry. Researching the matter further, however,
Schuller discovered that the first nine of those options were already in use
for other purposes. So he set his sights on the last: the Orange Drive-In
Theatre.
The efficiencies of the venue were obvious: For cinematic purposes,
the drive-in was useful only in the darkness, which meant that it could
play an effortlessly dual role, theater by night and church by day. The
architecture and technological system built for entertainment could be
repurposed, hacked even, to deliver a religious ceremony for the golden
age of the car. An early advertisement announced the new ministry’s
appeal: “The Orange Church meets in the Orange Drive-In Theater where
even the handicapped, hard of hearing, aged and infirm can see and hear
the entire service without leaving their family car.”
As the Schullers embraced their makeshift new venue, a congregation,
in turn, came to embrace them. Disneyland had just opened in Anaheim, a
couple miles down the road from the Orange; the entire area was developing rapidly. Each week, Reverend Schuller led a growing group of car-confined congregants from the tar-paper roof of the theater’s snack bar.
Arvella provided music for each service from an electronic organ mounted
on a trailer. Worshippers listened to them via portable speakerboxes.
Church rubrics, the guidebooks for services, included instructions not
only about when to sing, speak, and stay silent but also for mounting the
speakers onto car windows.
The Schullers, and their contemporary entrepreneurs of
religiosity, had happened onto an idea that made particular cultural
sense at its particular cultural moment: In the mid-1950s, Americans
found themselves in the honeymoon stage of their romances with both the
automobile and the television. And they found themselves seeking forms
of fellowship that mirrored both the community and individuality that
those technologies encouraged. As one former congregant put it: “Smoke
and be in church at the same time, at a drive-in during the daytime. What
a trip!”
Drive-in theaters, the historian Erica Robles-Anderson says, were
a kind of stop-gap technology: a fusion of the privacy and publicness
that cars and TVs engendered. (And they’d long had unique by-day
identities—as makeshift amusement parks, as venues for traveling flea
markets, as theaters for traveling vaudeville acts.) We tend to think of
suburbs, Robles-Anderson told me, as symbols of the collapse of civic life;
drive-ins, however, represented a certain reclaiming of it. And a drive-in church service was an extension of that reclamation. It was,
with its peculiar yet practical combination of openness and enclosure, an
improvised idea that happened to fit its time. The Schullers’ motto? “Come
as you are in the family car.”
As the size of the Orange Drive-In congregation grew, Schuller set
his sights toward expansion, purchasing a plot of land in nearby
Garden Grove. He began construction on a venue that would feature
not only a classically physical church, but also a large parking lot
to continue the drive-in tradition. The compound was designed by the
architect Richard Neutra, who was known for open-air structures that
emphasized the fluidity between interior and exterior. The building—the
Garden Grove Community Drive-In Church—would combine the appeal
of the car-friendly service with the tradition of indoor worship: it would
employ a “walk-in, drive-in” approach that would allow congregants either
to sit indoors, in pews, or to park their cars outside of the church, listening
in remotely. In either case, the ceremony would center around Schuller,
who preached on a stage-like balcony built into the corner of the church’s
exterior wall.
Later, as the congregation—and Orange County—continued to grow,
Schuller purchased another plot: a walnut grove that bordered the
north side of the Garden Grove church. The new church would be designed
by the architect Philip Johnson. It would be soaring and striking. It would
be constructed entirely of glass.
Today, as the congregation of the Crystal Cathedral prepares to move
away from the church it’s called home, and as a new congregation
prepares to move in, the building is a reminder of what’s both reassuring
and problematic about solid structures: they don’t move. But people do.
Today, the cathedral’s current congregation is reluctantly doing what its
predecessors pioneered half a century ago: driving away.
13.
What It’s Like to Hear Music for
the First Time
AUSTIN CHAPMAN IS LISTENING TO MUSIC FOR THE FIRST TIME IN HIS LIFE, AND IT SOUNDS GLORIOUS.
by Rebecca J. Rosen
Austin Chapman was born profoundly deaf. Hearing aids helped some, but
music—its full range of pitches and tones—remained indecipherable. As
Chapman explains, “I’ve never understood it. My whole life I’ve seen
hearing people make a fool of themselves singing their favorite song or
gyrating on the dance floor. I’ve also seen hearing people moved to tears by
a single song. That was the hardest thing for me to wrap my head around.”
But earlier this month, that changed when Chapman got new hearing
aids (Phonak’s Naída S Premium). Suddenly:
The first thing I heard was my shoe scraping across the carpet;
it startled me. I have never heard that before and out of
ignorance, I assumed it was too quiet for anyone to hear.
I sat in the doctor’s office frozen as a cacophony of sounds
attacked me. The whir of the computer, the hum of the AC, the
clacking of the keyboard, and when my best friend walked in I
couldn’t believe that he had a slight rasp to his voice. He joked
that it was time to cut back on the cigarettes.
That night, a group of close friends jump-started my
musical education by playing Mozart, Rolling Stones, Michael
Jackson, Sigur Rós, Radiohead, Elvis, and several other popular
legends of music.
Being able to hear the music for the first time ever was
unreal.
When Mozart’s Lacrimosa came on, I was blown away by
the beauty of it. At one point of the song, it sounded like angels
singing and I suddenly realized that this was the first time I
was able to appreciate music. Tears rolled down my face and I
tried to hide it. But when I looked over I saw that there wasn’t
a dry eye in the car.
Go find a version of Mozart’s “Lacrimosa” right now—we’ll wait as you
listen. Now think: What would this sound like, what would this feel like, if
you had not been listening to music your whole life? I’m pretty sure you’d
have tears rolling down your face too.
Following that experience, Chapman did what any smart, Internet-connected young man with a question for a crowd would do: he turned
to Reddit, asking, “I can hear music for the first time ever, what should I
listen to?”
The response was tremendous, running to thousands of comments
and garnering the attention of Spotify, which gave him six months of free
membership and an hours-long playlist that covers a huge range of music. In
the Reddit conversation, bands like The Beatles and Led Zeppelin figure
prominently, as do classical composers such as Beethoven (sidenote: Can
you imagine listening to this for the first time?). Overall, Chapman said
to me over e-mail, Beethoven’s Ninth was the most recommended piece.
There’s more music suggested in those comments than it seems possible
to consume in a lifetime. Chapman says he’s going with the recommendation of one GiraffeKiller: “This is like introducing an Alien to the music of
Earth. I wouldn’t know where to start. Once you’re through your kick on
Classical, I might start with music from the 50s and progress through each
decade. You can really see the growth of modern music like that.” Except
he’s going to start earlier—way earlier—with Guillaume de Machaut’s
“Agnus Dei,” which dates to the 14th century.
But first, as Chapman writes, “I did the only sensible thing and went on
a binge of music.” From that binge, he composed his top-five list:
1. Mozart’s Lacrimosa . . . I know it’s a depressing song
but to me it represents the first time I could appreciate and
experience music.
2. The soundtrack to Eleven Eleven . . . I can see how
this comes off as narcissistic, it being my own film and all but
it’s such a personal work that when I listened to it for the first
time I broke down. I felt like I was truly seeing the film for
the first time ever. I’m grateful that Cazz was able to capture
the tone perfectly. We discussed the film and specific scenes
with essay-sized reasoning/deliberations on what should be
conveyed. The critical response to the film surprised me and
I still didn’t quite get it until seeing the visual images coupled
with the soundtrack.
3. Sigur Rós’s Staralfur . . . The first song I had to listen
to again, over and over.
4. Il Postino, Luis Bacalov
5. Minnesota’s A Bad Place [original by Shotgun Radio
featuring Mimi Page, remixed by Minnesota]
I wanted to better understand what this experience was like for Chapman. How much of our experience of music is the cultural connotations
we have absorbed, and how much of it can be conveyed to someone who
is hearing everything for the first time? How do you develop preferences
when everything is all so unmoored from the taxonomy of genre and the
nostalgia a song can evoke?
I exchanged e-mails with Chapman to get more of a sense of what
music he is enjoying and what he hasn’t quite warmed to. The first
and clearest thing that comes across: taste does not take long to develop.
Right from the get-go, Chapman had a very strong (and, in my personal
estimation, very good) sense of what he liked and did not. Top of the “like”
list? Classical music, which he said was “the most beautiful genre to listen
to.” Country was, so far, his least favorite. “It’s very heavy on vocals and
since I can’t clearly understand the words, the story is lost on me. Instead
it just sounds like a man or woman crying for a couple minutes.”
Chapman is careful to point out that he doesn’t mean to belittle the
genre (he tells me that he “absolutely loved” Johnny Cash, “Folsom Prison
Blues” being a new favorite). Much of what music sounds like to him, and
his reaction to it, may be more the result of how the music sounds coming
through the hearing aids, and of his own gradual learning of how to hear
and listen. “I’m not a huge fan of vocals or extreme overlapping of sounds
because my brain is still adjusting to the world of sound,” he told me.
In general, his preferences tend toward what he terms “melodic
or soothing.” In particular, the Icelandic band Sigur Rós has become his
favorite. “Every song [of theirs] haunts me and I’m not even
percent
done listening to everything by them.” He’s also enjoyed Radiohead, Pink
Floyd, Rolling Stones, Queen, and some occasional dubstep too. Dubstep
has really benefited from the new hearing aids, Chapman says. Before them,
“I could feel the bass, but because I couldn’t hear the higher tones, it was
like listening to half of the song, so I never really dug it. Now . . . being able
to hear and understand almost the full spectrum of sound has given me a
whole different view on bass. To put it simply, I’m head over heels in love
with bass.”
With so much more to listen to, Chapman says that “ironically
enough, I’m turning my hearing aids off more often than before.” There
are too many annoying sounds.
“Silence is still my favorite sound,” he writes. “When I turn my aids off
my thoughts become more clear and it’s absolutely peaceful.”
part two.
Provocations
14.
It’s Technology All the Way
Down
TO BE HUMAN IS TO BE A USER (AND MAKER, AND REMAKER) OF TECHNOLOGY.
by Alexis C. Madrigal
Jezebel’s Lindy West makes a profound point about the desire to fix
a certain technological era as the “natural” one. She also makes it in the
funniest way possible:
Humanity isn’t static—you can’t just be okay with all development up until the invention of the sarong, and then
declare all post-sarong technology to be “unnatural.” Sure,
cavemen didn’t have shoes. Until they invented fucking shoes!
(You know what else they didn’t have? Discarded hypodermic
needles. Broken glass. Used condoms.) They also didn’t have
antibiotics, refrigeration, written languages, wheels, patchouli,
the internet, and NOT LIVING IN A ROCK WITH A HOLE IN
IT. But I don’t see anyone giving any of that up in the name
of “health.” Hey, why not install a live gibbon in your fridge so
you have to fight it to get to your bison jerky? Just like nature!
I think people should (and cannot help but) shape the technologies that
they use. But I despise the easy recourse to “natural” that some technology
opponents want to make. Because the alternative to 21st-century technology is not “nature,” but some other technologically mediated reality. Tired
of staring at a computer all day? Well, go back a century and a half and
you’d have been working in a grueling factory all day. Go back another
century and you’d have been tending a field all day. Go back further
still, and more than half of your children would have died before the
age of five. And yet you’d
still be using all kinds of human-made objects and systems. Humans may
have been deploying fire for 1 MILLION YEARS. No matter how far back
you go, you’ll find us shaping our environment. It’s technology all the way
down.
But that realization is a good thing. Rather than misdirecting our efforts
to pursue an impossible and hazy dream of the natural, we can fight to
reshape our tools to align with our social goals, realizing that part of what
it is to be human is to use, make, and remake tools.
15.
The Landline Telephone Was the
Perfect Tool
AN IDEA ABOUT HOW WE WANT OUR TECH TO BE
by Suzanne Fischer
Ivan Illich would have approved of the Internet.
Born in Vienna between the two world wars, the public intellectual and
radical priest Ivan Illich had by his mid-30s set out to rethink the world. In
1966, having arrived in Mexico by way of New York City and Puerto Rico,
he started the Center for Intercultural Documentation learning center in
Cuernavaca, an unlikely cross between a language school for missionaries,
a free school, and a radical think tank, where he gathered thinkers and
resources to conduct research on how to create a world that empowered
the oppressed and fostered justice.
Illich reframed people’s relationships to systems and society, in
everyday, accessible language. He advocated for the reintegration of
community decision making and personal autonomy into all the systems
that had become oppressive: school, work, law, religion, technology,
medicine, economics. His ideas influenced 1970s technologists and
the appropriate-technology movement. Can they be useful today?
In 1971, Illich published what is still his most famous
book, Deschooling Society. He argued that the commodification
and specialization of learning had created a harmful education system
that had become an end in itself. In other words, “the right to learn is
curtailed by the obligation to attend school.” For Illich, language often
illustrated the way toxic ideas poison the ways we relate to each other. I
want to learn, he said, had been transmuted by industrial capitalism into
I want to get an education, transforming a basic human need for learning
into something transactional and coercive. He proposed a restructuring
of schooling that would replace the manipulative system of qualifications
with self-determined, community-supported, hands-on learning. One of
his suggestions was “learning webs,” whereby a computer could help
match learners with people who had knowledge to share. This skillshare
model was popular in many radical communities.
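At bottom, a learning web is a matching problem, the sort of thing even a very small program can express. Here is a minimal sketch; the names and skills are invented, and Illich, of course, specified no such software.

```python
# A toy "learning web": match people seeking a skill with people
# offering to share it.
offers = {
    "welding": ["Ana"],
    "bookkeeping": ["Luis", "Mei"],
    "guitar": ["Sam"],
}
requests = [("Priya", "guitar"), ("Tomás", "welding"), ("Kim", "baking")]

for learner, skill in requests:
    teachers = offers.get(skill, [])
    if teachers:
        print(f"{learner} can learn {skill} from {teachers[0]}")
    else:
        print(f"No one is offering {skill} yet; {learner} waits for a match")
```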
With Tools for Conviviality (1973), Illich extended his analysis of
education to a broader critique of the technologies of Western capitalism.
The major inflection point in the history of technology, he asserted, is
when, in the life of each tool or system, the means overtake the end. “Tools
can rule men sooner than they expect; the plow makes man the lord of
the garden but also the refugee from the dust bowl.” Often this effect is
accompanied by the rise in power of a managerial class of experts; Illich
saw technocracy as a step toward fascism. Tools for Conviviality points out
the ways in which a helpful tool can evolve into a destructive one, and
offers suggestions for how communities can escape the trap.
So what makes a tool “convivial”? For Illich, “tools foster conviviality
to the extent to which they can be easily used, by anybody, as often or as
seldom as desired, for the accomplishment of a purpose chosen by the user.”
That is, convivial technologies are accessible, flexible, and noncoercive.
Many tools are neutral, but some promote conviviality and some choke
it off. Hand tools, for Illich, are neutral. Illich offered the telephone as an
example of a tool that is “structurally convivial”—remember, this is in the
days of the ubiquitous public pay phone. “The telephone lets anybody say
what he wants to the person of his choice; he can conduct business, express
love, or pick a quarrel,” he pointed out. “It is impossible for bureaucrats to
define what people say to each other on the phone, even though they can
interfere with—or protect—the privacy of their exchange.”
A “manipulatory” tool, on the other hand, blocks other choices. The
automobile and the highway system it spawned are, for Illich, prime
examples of this process. Licensure systems that devalue people who have
not participated, such as compulsory schooling, are another example. But
these kinds of tools—that is, large-scale industrial production—would not
be prohibited in a convivial society: “What is fundamental to a convivial
society is not the total absence of manipulative institutions and addictive
goods and services, but the balance between those tools which create the
specific demands they are specialized to satisfy and those complementary,
enabling tools which foster self-realization.”
To foster convivial tools, Illich proposed a research program with “two
major tasks: to provide guidelines for detecting the incipient stages of
murderous logic in a tool; and to devise tools and tool systems that optimize
the balance of life, thereby maximizing liberty for all.” He also suggested
that pioneers of a convivial society work through the legal and political
systems and reclaim them for justice. We cannot abdicate our right to self-determination, and our right to decide how far is far enough. “The crisis I
have described confronts people with a choice between convivial tools and
being crushed by machines.”
Illich’s ideas on technology, like his ideas on schooling, were influential
among those who spent the 1970s thinking that we might be on the cusp of
another world. Some of those utopians included early computer innovators,
who saw their culture of sharing, self-determination, and DIY as something
that should be baked into tools.
The computing pioneer Lee Felsenstein has spoken about the
direct influence of Tools for Conviviality on his work.
For him, Illich’s description of the radio as a convivial tool for Central
Americans was a model for computer development: “The technology itself
was sufficiently inviting and accessible to them that it catalyzed their
inherent tendencies to learn. In other words, if you tried to mess around
with it, it didn’t just burn out right away. The tube might overheat, but it
would survive and give you some warning that you had done something
wrong. The possible set of interactions, between the person who was trying
to discover the secrets of the technology and the technology itself, was
quite different from the standard industrial interactive model, which could
be summed up as ‘If you do the wrong thing, this will break, and God help
you.’ . . . And this showed me the direction to go in. You could do the same
thing with computers as far as I was concerned.” Felsenstein described the
first meeting of the legendary Homebrew Computer Club, where 30 or so
people tried to understand the Altair together, as “the moment at which
the personal computer became a convivial technology.”
In 1979, Valentina Borremans, of the Center for Intercultural
Documentation, prepared a reference guide to convivial tools.
It listed many of the new ideas in 1970s appropriate technology—food self-sufficiency, Earth-friendly home construction, new energy sources. But our
contemporary convivial tools are mostly in the realm of communications.
At their best, personal computers, the Web, mobile technology, the open-source movement, and the maker movement are contemporary convivial
tools. What other convivial technologies do we use today? What tools
should we make more convivial? Ivan Illich would exhort us to think
carefully about the tools we use, and what kind of world they are making.
16.
Privacy and Power
AGGREGATING DATA GIVES COMPANIES IMMENSE PERSUASIVE POWER.
by Alexander Furnas
Jonathan Zittrain noted last summer, “If what you are getting online is for
free, you are not the customer, you are the product.” This is just a fact:
the Internet of free platforms, free services, and free content is wholly
subsidized by targeted advertising, the efficacy (and thus profitability)
of which relies on collecting and mining user data. We experience this
commodification of our attention every day in virtually everything we do
online, whether it’s searching, checking e-mail, using Facebook, or reading
The Atlantic’s Technology Channel. That is to say, right now you are a
product.
Most of us, myself included, have not come to terms with what it
means to “be the product.” In searching for a framework to make sense of
this new dynamic, we often rely on well-established pre-digital notions of
privacy. The privacy discourse frames the issue in an egocentric manner, as
a bargain between consumers and companies: the companies will know x, y,
and z about me, and in exchange I get free e-mail, good recommendations,
and a plethora of convenient services. But the bargain we are making is a
collective one, and the costs will be felt on a societal scale. When we think
in terms of power, we can clearly see that we’re getting a raw deal: we
grant private entities—with no interest in the public good and no public
accountability—greater powers of persuasion than anyone has ever had
before, and in exchange we get free e-mail.
The privacy discourse is propelled by the “creepy” feeling of being
under the gaze of an omniscient observer that comes over us when we see
targeted ads based on data about our behavior. Charles Duhigg recently
highlighted a prime example of this data-driven creepiness when he
revealed that Target is able to mine purchasing-behavior data to determine whether a woman is pregnant, sometimes before she has even told
her family. Fundamentally, people are uncomfortable with the idea that an
entity knows things about them that they didn’t tell it, or at least that they
didn’t know they told it.
For many people, the data-for-free-stuff deal is a bargain worth
making. Proponents of this hyper-targeted world tell us to “learn to love”
the targeting—after all, we are merely being provided with ads for “stuff
you would probably like to buy.” Oh, I was just thinking I needed a new
widget, and here is a link to a store that sells widgets! It’s great, right?
The problem is that, in aggregate, this knowledge is powerful, and we are
granting those who gather our data far more power than we realize. These
data-vores are doing more than trying to ensure that everyone looking for
a widget buys it from them. No, they want to increase demand. Of course,
increasing demand has always been one of the goals of advertising, but
now they have even more tools to do it.
Privacy critics worry about what Facebook, Google, and Amazon know
about them, whether they will share that information or leak it, and maybe
whether the government can get that information without a court order.
While these concerns are legitimate, I think they miss the broader point.
Rather than caring about what they know about me, we should care
about what they know about us. Detailed knowledge of individuals and
their behavior coupled with the aggregate data on human behavior now
available at unprecedented scale grants incredible power. Knowing about
all of us—how we behave, how our behavior has changed over time, under
what conditions our behavior is subject to change, and what factors are
likely to impact our decision making under various conditions—provides
a roadmap for designing persuasive technologies. For the most part, the
ethical implications of these technologies’ widespread deployment remain
unexamined.
Using all the trace data we leave in our digital wakes in order to
target ads is known as “behavioral advertising.” This is what Target was
doing to identify pregnant women, and what Amazon does with every
user and every purchase. But behavioral advertisers do more than just
use your past behavior to guess what you want. Their goal is actually to alter user behavior. Companies use extensive knowledge gleaned
from innumerable micro-experiments and massive amounts of user-behavior data to design their systems to elicit the monetizable behavior
that their business models demand. At levels as granular as Google testing click-through rates on 41 different shades of blue, data-driven
companies have learned how to channel your attention, initiate behavior,
and keep you coming back.
Keen awareness of human behavior has taught them to harness
fundamental desires and needs, short-circuiting feedback mechanisms
with instant rewards. Think of the “gamification” that now proliferates
online—nearly every platform has some sort of reward or reputation point
system encouraging you to tell them more about yourself. Facebook, of
course, leverages our innate desires—autobiographical identity construction and interpersonal social connection—as a means of encouraging the
self-disclosure from which it profits.
The persuasive power of these technologies is not overt. Indeed, the
subtlety of the persuasion is part of their strength. People often react
negatively if they get a sense of being “handled” or manipulated. (This
sense is where the “creepiness” backlash comes from.) But the power is very
real. Target, for instance, now sends coupon books with a subtle but very
intentional emphasis on baby products to women it thinks are pregnant,
instead of more explicitly tailored offers that reveal how much the company
knows.
The tech theorist Bruno Latour tells us that human action is
mediated and “co-shaped” by artifacts and material conditions. Artifacts
present “scripts” that suggest behavior. The power to design these artifacts
is, then, necessarily the power to influence action. The mundane example
of Amazon.com illustrates this well:
The goal of this Web site is to persuade people to buy products
again and again from Amazon.com. Everything on the Web
site contributes to this result: user registration, tailored information, limited-time offers, third-party product reviews, one-click shopping, confirmation messages, and more. Dozens of
persuasion strategies are integrated into the overall experience.
Although the Amazon online experience may appear to be
focused on providing mere information and seamless service,
it is really about persuasion—buy things now and come back
for more.
In some ways, this is just an update to the longstanding discussion
in business-ethics circles about the implications of persuasive advertising.
Behavioral economics has shown that humans’ cognitive biases can be
exploited, and Roger Crisp has noted that subliminal and persuasive
advertising undermines the autonomy of the consumer. The advent of big data and user-centered design has provided those who would persuade with
a new and more powerful arsenal. This has led tech-design ethicists to
call for the explicit “moralization of technology,” wherein designers
would have to confront the ethical implications of the actions they shape.
There is another significant layer, which complicates the ethics of data
and power. The data all of these firms collect are proprietary and closed.
Analysis of human behavior from the greatest trove of data ever collected is
limited to questions of how best to harvest clicks and turn a profit. Not that
there is no merit to this, but only these private companies and the select few
researchers they bless can study these phenomena at scale. Thus, industry
outpaces academia, and the people building and implementing persuasive
technologies know much more than the critics. The result is a fundamental
information asymmetry. The data collectors have more information than
those they are collecting the data from; the persuaders have more power
than the persuaded.
Judging whether this is good or bad depends on your framework for
evaluating corporate behavior and the extent to which you trust the market
as a force to prevent abuse. To be sure, there is a desire for the services
that these companies offer, and they are meeting a legitimate market
demand. However, in a sector filled with large oligopolistic firms bolstered
by network effects and opaque terms-of-service agreements laden with fine
print, there are legitimate reasons to question the efficacy of the market as
a regulator of these issues.
A few things are certain, however. One is that the goals of the
companies collecting the data are not necessarily the same as the goals
of the people they are tracking. Another is that as we establish norms for
dealing with personal and behavioral data, we should approach the issue
with a full understanding of the scope of what’s at stake. To understand
the stakes, our critiques of ad-tracking (and the fundamental asymmetries
it creates) need to focus more on power and less on privacy.
The privacy framework tells us that we should feel violated by what
these companies know about us. Understanding these issues in the context
of power tells us that we should feel manipulated and controlled.
This piece was informed by discussions with James Williams, a
doctoral candidate at the Oxford Internet Institute researching the ethical
implications of persuasive technologies.
17.
The Jig Is Up
IT’S TIME TO GET PAST FACEBOOK AND INVENT A NEW FUTURE.
by Alexis C. Madrigal
We’re there. The future that visionaries imagined in the late 1990s of phones
in our pockets and high-speed Internet in the air: well, we’re living in
it.
“The third generation of data and voice communications—the convergence of mobile phones and the Internet, high-speed wireless data access,
intelligent networks, and pervasive computing—will shape how we work,
shop, pay bills, flirt, keep appointments, conduct wars, keep up with our
children, and write poetry in the next century.”
That’s Steve Silberman reporting for Wired in 1999, which was 13
years ago, if you’re keeping count. He was right, and his prediction
proved correct before this century even reached its teens. Indeed, half of
tech media is devoted to precisely how these devices and their always-on
connectivity let us do new things, help us forget old things, and otherwise
provide humans with as much change as we can handle.
I can take a photo of a check and deposit it in my bank account, then
turn around and find a new book through a Twitter link and buy it, all
while being surveilled by a drone in Afghanistan and keeping track of how
many steps I’ve walked.
The question is, as it has always been: Now what?
Decades ago, the answer was “Build the Internet.” Fifteen years ago, it
was “Build the Web.” Five years ago, the answer was probably “Build the
social network and build the mobile Web.” And it was around that time,
in 2007, that Facebook emerged as the social-networking leader,
Twitter got known at SXSW, and we saw the release of the first Kindle and
the first iPhone. There are a lot of new phones that look like the iPhone,
plenty of e-readers that look like the Kindle, and countless social networks
that look like Facebook and Twitter. In other words, we can cross that task
off the list. It happened.
What we’ve seen since have been evolutionary improvements on the
patterns established five years ago. The platforms that have seemed hot in
the past couple years—Tumblr, Instagram, Pinterest—add a bit of design or
mobile intelligence to the established ways of thinking. The most exciting
thing to come along in the consumer space between 2007 and now is the
iPad. But despite its glorious screen and extended battery life, it really
is a scaled-up iPhone that offers developers more space and speed to do
roughly the same things they were doing before. The top apps for the iPad
look startlingly similar to the top apps for the iPhone: casual games, social networking, light productivity software.
For at least five years, we’ve been working with the same operating
logic in the consumer-technology game. This is what it looks like:
There will be ratings and photos and a network of friends imported,
borrowed, or stolen from one of the big social networks. There will be an
emphasis on connections between people, things, and places. That is to say,
the software you run on your phone will try to get you to help it understand
what and who you care about out there in the world. Because all of that
stuff can be transmuted into valuable information for advertisers.
That paradigm has run its course. It’s not quite over yet, but I think
we’re into the mobile social fin de siècle.
PARADIGM LOST
It slipped into parody late last year with the hypothetical app Jotly,
which allowed you to “rate everything” from the ice cubes in your
drink to the fire hydrant you saw on the street. The fake promo video
perfectly nailed everything about the herd mentality among start-ups. Its
creator told me to watch for “the color blue, rounded corners, SoLoMo
[SocialLocalMobile], ratings, points, free iPads, ridiculous name (complete
with random adverbing via ly), overpromising, private beta, giant buttons,
‘frictionless’ sign-up, no clear purpose, and of course a promo video.”
And then the hilarious parody ate itself and my tears of laughter turned
to sadness when the people behind the joke actually released Jotly as
a real, live app.
That’s the micro version of the state of affairs. Here’s the macro
version. Thousands of start-ups are doing almost exactly the same thing,
minor variations on a theme. Tech journalists report endlessly on the same
handful of well-established companies—Apple, Amazon, Google, Facebook,
and Microsoft dominate pieces of the Web, and they don’t appear to be
in shaky positions. Good, long-time tech journalists like Om Malik are
exhausted. He recently posted this to his blog after much ink was
spilled over whom Twitter hired as a public-relations person:
Sure, these are some great people and everyone including me is
happy for their new gigs and future success. But when I read
these posts [I] often wonder to myself, have we run out of
things to say and write that actually are about technology and
the companies behind them? Or do we feel compelled to fill
the white space between what matters? Sort of like talk radio?
Three big innovation narratives in the past few years complicate,
but don’t invalidate, my thesis. The first—The Rise of the Cloud—was
essentially a rebranding of having data on the Internet, which is, well . . .
what the Internet has always been about. Though I think it has made the
lives of some IT managers easier and I do like Rdio. The second, Big
Data, has lots of potential applications. But, as Tim Berners-Lee noted,
the people benefiting from more-sophisticated machine learning techniques
are the people buying consumer data, not the consumers themselves. How
many Big Data start-ups might help people see their lives in different
ways? Perhaps the personal-genomics companies, but so far, they’ve
kept their efforts focused quite narrowly. And third, we have the Daily
Deal phenomenon. Groupon and its 600 clones may or may not be good
companies, but they are barely technology companies. Really, they look
like retail-sales operations with tons of sales people and marketing
expenses.
I also want to note that there are plenty of ambitious start-ups in
energy, health care, and education—areas that sorely need innovation. But
fascinating technology start-ups, companies that want to allow regular
people to do new stuff in their daily lives? Few and far between. Take
a look at Paul Graham’s ideas for frighteningly ambitious
start-ups. Now take a look at the last 30 or so start-ups on
TechCrunch. Where are the people who think big? What I see is people
filling ever smaller niches in this “ecosystem” or that “ecosystem.”
FROM FACEBOOK TO FACEBOOK CLONES
Certainly, some of the blame for tech-start-up me-tooism lies with start-ups’ tendency to cluster around ideas that seem to be working. Social networks? Here’s ! Mobile social plays? Here’s another ! Social-discovery apps? Behold, ! Perhaps that’s inevitable, as dumb money chases smart money chasing some Russian kid who just made a
site on which men tend to flash their genitals at Web cameras.
But I think the problems go deeper. I don’t think Silicon Valley and all
the other alleys and silicon places are out of ideas. But I do think we’ve
reached a point in this technology cycle where the old thing has run its
course. I think the hardware, cellular bandwidth, and business model of
this tottering tower of technology are pushing companies to play on one
small corner of a huge field.
We’ve maxed out our hardware. No one even tries to buy the fastest
computer anymore, because we don’t give a computer any tasks (except
video editing, I suppose) that require that level of horsepower. I remember
breathlessly waiting for the next-generation processor so that my computer
would be capable of a whole new galaxy of activities. Some of it, sure, is
that we’re dumping the computation on the servers on the Internet. But the
other part is that we mostly do a lot of the things that we used to do years
ago—stare at Web pages, write documents, upload photos—just at higher
resolutions.
On the mobile side, we’re working with almost the exact same toolset
that we had on the 2007 iPhone—audio inputs, audio outputs, a camera,
a GPS, an accelerometer, Bluetooth, and a touchscreen. That’s the palette
that everyone has been working with, and I hate to say it, but we’re at the
end of the line. The screen’s gotten better, but when’s the last time you saw
an iPhone app do something that made you go, Whoa! I didn’t know that
was possible!?
Meanwhile, despite the efforts of telecom carriers, cellular bandwidth
remains limited, especially in the hotbeds of innovation that need it most.
It turns out that building a superfast, ultrareliable cellular network that’s as
fast as a wired connection is really, really hard. It’s difficult to say precisely
what role this limiting factor plays, but if you start to think about what
you could do if you had a 100MB/s connection everywhere you went, your
imagination starts to run wild.
LESS MONEY, MO PROBLEMS
But more than the bandwidth or the stagnant hardware, I think the
blame should fall squarely on the shoulders of the business model. The
dominant idea has been to gather users and get them to pour their friends,
photos, writing, information, clicks, and locations into your app. Then you
sell them stuff (Amazon.com, One King’s Lane) or you take that data and
sell it in one way or another to someone who will sell them stuff (everyone).
I return to Jeff Hammerbacher’s awesome line about developers these days:
“The best minds of my generation are thinking about how to make people
click ads.”
Worse yet, all of this stuff is dependent on machine learning algorithms
that are crude and incredibly difficult to improve. You pour vast amounts
of data in to eke out a bit more efficiency. That’s great and all, but let’s not
look at that kind of behavior and call it “disruptive.” That is the opposite of
disruptive.
The thing about the advertising model is that it gets people thinking
small, lean. Get four college kids in a room, fuel them with pizza, and see
what thing they can crank out that their friends might like. Yay! Great! But
you know what? They keep tossing out products that look pretty much like
what you’d get if you gathered a homogenous group of young guys for
any other endeavor: cheap, fun, and about as world-changing as creating a
new variation on beer pong.
Now, there are obviously exceptions to what I’m laying out. What I’m
talking about here is the start-up culture that I’ve seen in literally dozens of
cities. This culture has a certain logic. There are organizing principles for
what is considered a “good” idea. These ideas are supposed to be the right
size and shape. There is a default spreadsheet that we expect ideas to fit
onto.
But maybe it’s time that changed.
So what does the future hold, then? I have a couple ideas, even if I’m
not sure they’re the right ones. One basic premise is this: more money has
to change hands. Free is great. Free is awesome. Halloween, for example, is
my favorite holiday. I love free stuff. But note this chart from the Pinboard
blog, comparing what happens to free sites and paid-for sites/services
when they experience growth.
             Free                   Paid
Stagnant     losing money           making money
Growing      losing more money      making more money
Exploding    losing lots of money   making lots of money
The point is that every user of a free service costs the service money,
whereas every user of a paid-for service generates money. What that means
is that a growing free site is an acquisition waiting to happen, because its
developers are burning through ever more cash.
Free applications and services get driven to do other things, too. They
must grow quickly and they must collect vast amounts of data and they
must acquire your social graph somehow. Even if those things were all
good, they would still inhibit the variety of start-ups that seem possible.
The only metric that seems to matter to a start-up is the number of users it
has been able to skim from the masses. (Partially because so many can’t get
anyone to visit them, and partially because so few of them make money.)
It’s not that I think paid software and services will necessarily be
better, but I think they’ll be different.
Speaking of hardware, I think we’d all better hope that the iPhone 5
has some crazy surprises in store for us. Maybe it’s a user-interface thing.
Maybe it’s a whole line of hardware extensions that allow for new kinds
of inputs and outputs. I’m not sure what it is, but a decently radical shift
in hardware capabilities on par with phone → smartphone or smartphone → iPhone would be enough, I think, to provide a springboard for some new
ideas.
I have some of my own, too. The cost of a lumen of light is dropping
precipitously; there must be more things than lightbulbs that can benefit
from that. Vast amounts of databases, real-world data, and video remain
unindexed. Who knows what a billion Chinese Internet users will come
up with? The quantified self is just getting going on its path to the
programmable self. And no one has figured out how to do augmented
reality in an elegant way.
The truth is, though, I’m a journalist, not an entrepreneur. I know that
my contribution is more likely to be distilling a feeling that is already
broadly felt, rather than inventing the future. Still, I want us to get back
to those exciting days when people were making predictions about the
affordances of the future that seemed wonderful and impossible. No doubt
the future remains unevenly distributed, but now, when you get your bit, it
seems as likely to include worse cell reception as it does seemingly magical
superpowers.
This isn’t about start-up incubators or policy positions. It’s not about
“innovation in America” or which tech blog loves start-ups the most. This
is about how Internet technology used to feel like it was really going to
change so many things about our lives. Now it has, and we’re all too
stunned to figure out what’s next. So we watch Lana Del Rey turn
circles in a thousand animated GIFs.
18.
In The Times, Sheryl Sandberg Is
Lucky, Men Are Good
A RARE INSISTENCE ON THE IMPORTANCE OF CHANCE IN THE PAPER
by Rebecca J. Rosen
What does it take for someone to rise to the top—the very top—of a field?
Hard work, mentors, connections, brains, charm, ambition . . . it’s a long
list. But certainly one thing that helps very successful people get where
they are is the mystical quantity we call “luck.”
Luck, it must be said, is a loaded word. Sometimes someone’s luck
is nothing of their own doing, a result of the privilege of their birth.
Sometimes we mean luck in the purest way, the happenstance of sitting
next to the right person on a bus. (This sort of luck seems practically
mythical; one wonders if it ever happens.) But for many people, luck is
something of their own making: they work hard, network, self-promote,
and so on, and one day, they end up in the right place at the right time and
catch their break. Is that luck? Not in the same way, and telling someone
who has worked hard that their success is a result of luck is insulting.
This is why a passage in a New York Times profile of Facebook’s
Sheryl Sandberg just sounds terribly off. Sandberg is
one of the most successful women in the entire world. She is known not
just as a talented Facebook leader but as an inspiration
and a role model. YouTube videos of her commencement address
at Barnard and her TED speech have gone viral, reaching hundreds of
thousands of people.
In short, she has made a sideline career for herself exhorting young
women to “lean in”—to compete and strive in the workforce with a no-holds-barred attitude. For some people, this implies that Sandberg believes
women can get whatever they want if they just work harder and believe
in themselves more—that, somehow, the combination of ambition and
confidence will melt away the barriers created by years of sexism in the
workplace. These people say that Sandberg has lived a charmed life, and
doesn’t give enough credit to the extent that sexism can hold women back,
regardless of their attitude. The New York Times summarizes this
criticism by saying:
Some say her aim-high message is a bit out of tune. Everyone
agrees she is wickedly smart. But she has also been lucky, and
has had powerful mentors along the way.
This may just be a “some say” gloss on criticisms of Sandberg, but
it’s an unfortunately messy summary of her detractors. Luck, after all, is
what too many women chalk up their success to, Sandberg has argued.
Their male peers, in contrast, believe themselves to be “awesome”—
fully deserving of their success.
The problem with the way The Times framed Sandberg’s success begins
with the use of the word but: She’s smart, but she’s lucky—as though this
somehow trumps her smarts. Success comes from being smart and lucky.
I searched through the past five years of New York Times archives to
find other instances when luck has been employed to explain someone’s
success. I found an obituary for an inventor who had “marshaled luck,
spunk and inventiveness to fashion an entrepreneurial career that included
developing the first commercial frozen French fry”; a Russian billionaire
turned alleged criminal whose meteoric rise left you wondering whether
it was a result of “genius, luck, ruthlessness or connections”; and an
inventor of medical devices who had luck “like all billionaires.”
The only two instances I saw of luck gaining a fuller explanatory power
were in the case of George Steinbrenner, who the author thought was
quite lucky to have bought the Yankees just before the era of free agency,
and in the case of Kirsten Gillibrand, the New York senator about whom Al
Sharpton said, “I think Gillibrand either has mystical powers or the best
luck I have ever seen in politics.” It’s rather unfortunate that the one time
the paper inadvertently discredited someone’s success on account of luck,
that someone was the woman known for telling women not to chalk their
success up to luck.
Let it be clear: Sheryl Sandberg is no fool. She does not believe that
luck plays no role in women’s achievements. In fact, she began her TED
speech by saying, “For any of us in this room today, let’s start out by
admitting we’re lucky.” Sandberg’s message isn’t that women can get
whatever they want merely by being ambitious. Her message to women—
privileged, highly educated, “lucky” women—is that once you have all those
things, you have to stop explaining away your success by crediting luck or
the beneficence of a mentor. It’s not that these things haven’t contributed
to success. It’s that when you focus on them, you implicitly understate your
own abilities.
It may be true that Sandberg gives short shrift to the challenges many
women face. But she is not talking about women as a class, women everywhere. Sandberg is talking to a specific audience: women at TED or women
graduating from Barnard. These are women who by all means should be
running major companies and organizations, if not now then in the next
few decades. Sandberg may have in mind global gender equality, but her
speeches are aimed only at the very elite. Women already made up half of
all college students decades ago, but still hold only a small percentage of top
corporate positions—that is the narrow problem Sandberg is focused on.
To make matters worse, The Times says that not only has Sandberg
been lucky, but she’s also had powerful mentors. Again, this plays directly
into one of the pitfalls Sandberg has been warning women to avoid:
believing that your success is the result of other people helping you out.
As with luck, it’s difficult to imagine many people advancing as far as
Sandberg has without some help from someone powerful.
That Sandberg’s “powerful mentors” should account for her success is
an interesting and odd idea, since it is thought that one reason women
have a hard time advancing is that they have a hard time finding mentors.
Powerful men find young men who remind them of themselves, and mentor
them. For Sandberg to have found a powerful mentor—Larry Summers—is
a testament to her abilities and savvy.
Sandberg’s achievements speak for themselves. As The Times says, “If
all goes well, she will soon become the $1.6 billion woman.” We should all
be so lucky.
19.
Moondoggle
THE FORGOTTEN OPPOSITION TO THE APOLLO PROGRAM
by Alexis C. Madrigal
We recall the speech John F. Kennedy made 50 years ago as the
beginning of a glorious and inexorable process in which the nation
united behind the goal of a manned lunar landing even as the presidency
transferred from one party to another. Time has tidied things up.
Polls by both USA Today and Gallup have shown that support for the
moon landing has increased the farther we’ve gotten away from it. Seventyseven percent of people in
thought the moon landing was worthwhile;
only percent felt that way in
.
When Neil Armstrong and Buzz Aldrin landed on the moon, a process
began that has all but eradicated any reference to the substantial opposition
by scientists, scholars, and regular people to spending money on sending
humans to the moon. Part jobs program, part science cash cow, the
American space program in the 1960s placed the funding halo of military
action on the heads of civilians. It bent the whole research apparatus of the
United States to a symbolic goal in the Cold War.
The Congressional Research Service has shown that the Apollo
program received far, far more money than any other government R&D
program, including the Manhattan Project and the brief fluorescence of
energy R&D after the OPEC oil embargo of 1973.
Given that this outlay came in the 1960s, a time of great social unrest,
you can bet many people protested. Many more people quietly opposed
the missions. Roger Launius, a historian at the National Air and Space
Museum, has called attention to public-opinion polls conducted
during the Apollo missions. Here is his conclusion:
Many people believe that Project Apollo was popular, probably
because it garnered significant media attention, but the polls
do not support a contention that Americans embraced the
lunar landing mission. Consistently throughout the 1960s a majority of Americans did not believe Apollo was worth the cost, with the one exception to this a poll taken at the time of the Apollo 11 lunar landing in July 1969. And consistently throughout the decade 45–60 percent of Americans believed that the government was spending too much on space, indicative of a lack of commitment to the spaceflight agenda. These
data do not support a contention that most people approved of
Apollo and thought it important to explore space.
We’ve told ourselves a convenient story about the moon landing and
national unity, but there’s almost no evidence that our astronauts united
America, let alone the world. Yes, there was a brief, shining moment
right around the moon landing when everyone applauded, but four years
later, the Apollo program was cut short, and humans have never seriously
attempted to get back to the moon ever again.
I can’t pretend to trace how the powerful images of men on the moon
combined with a sense of nostalgia for a bygone era of heroes to create the
notion that the Apollo missions were overwhelmingly popular. That’d be a
book. But what I can do is tell you about two individuals who, in their own
ways, opposed the government and tried to direct funds to more earthly
pursuits: the poet and musician Gil Scott-Heron and the sociologist Amitai
Etzioni, then at Columbia University.
Scott-Heron performed a song called “Whitey on the Moon,” which
mocked “our” achievements in space.
The song had a very powerful effect on my historical imagination, and
led me to seek out much of the other evidence in this post. The opening
line creates a dyad that’s hard to forget: “A rat done bit my sister Nell
/ With Whitey on the moon.” I wrote about this song last year
when Scott-Heron died, reflecting on what it meant for “our” achievements
in space.
Though I still think the hunger for the technological sublime
crosses racial boundaries, [the song] destabilized the ease with
which people could use “our” in that kind of sentence. To
which America went the glory of the moon landing? And what
did it cost our nation to put Whitey on the moon?
Many
black papers questioned the use of American
funds for space research at a time when many African Americans
were struggling at the margins of the working class. An editorial in the Los
Angeles Sentinel, for example, argued against Apollo in no uncertain
terms, saying, “It would appear that the fathers of our nation would allow
a few thousand hungry people to die for the lack of a few thousand dollars
while they would contaminate the moon and its sterility for the sake of
‘progress’ and spend billions of dollars in the process, while people are
hungry, ill-clothed, poorly educated (if at all).”
This is, of course, a complicated story. When 200 black protesters
marched on Cape Canaveral to protest the launch of Apollo 14, one
Southern Christian Leadership Conference leader claimed, “America is
sending lazy white boys to the moon because all they’re doing is looking
for moon rocks. If there was work to be done, they’d send a nigger.”
But another SCLC leader, Hosea Williams, made a softer claim, saying
simply that they were “protesting our nation’s inability to choose humane
priorities.” Williams admitted to an AP reporter, “I thought the launch was
beautiful. The most magnificent thing I’ve seen in my whole life.”
Perhaps the most comprehensive attempt to lay out the case against the
space program came from the sociologist Etzioni, in his nearly-impossible-to-find 1964 book, The Moon-Doggle: Domestic and International Implications of the Space Race. Luckily for you, I happen to have
a copy sitting right here.
Etzioni attacked the manned space program by pointing out that many
scientists opposed both the mission and the “cash-and-crash approach to
science” it represented. He cites a
report to the president by his Science
Advisory Committee in which “some of the most eminent scientists in
this country” bagged on our space ambitions. “Research in outer space
affords new opportunities in science but does not diminish the importance
of science on earth,” he quotes. It concludes, “It would not be in the national
interest to exploit space science at the cost of weakening our efforts in other
scientific endeavors. This need not happen if we plan our national program
for space science and technology as part of a balanced effort in all science
and technology.”
Etzioni goes on to note that this “balanced effort” never materialized.
The space budget was increased in the five years that followed
by more than tenfold while the total American expenditure on
research and development did not even double. Of every three
dollars spent on research and development in the United States
in 1963, one went for defense, one for space, and the remaining
one for all other research purposes, including private industry
and medical research.
He keeps piling up the evidence that scientists opposed or, at best,
tepidly supported the space program. A Science poll of
scientists not
associated with NASA found that all but three of them “believed that
the present lunar program is rushing the manned stage.” Etzioni’s final
assessment—”Most scientists agree that from the viewpoint of science there
is no reason to rush a man to the moon”—seems accurate.
But that’s just the beginning of the book. He has many other arguments
against the Apollo program: It sucked up not just available dollars, but
our best and brightest minds. Robots could do our exploration better than
humans, anyway. We would fall behind in other sciences because of our
dedication to putting men on the moon. There were special problems with
fighting the Cold War in space. And even as a status symbol, the moon was
pretty lousy.
But the space program was great in one way: politically. Etzioni notes
that President Kennedy sought to help the poor and underprivileged, but
Congress blocked him. So, as Etzioni tells it, he got a massive public-works
program cleverly disguised in conservative, flag-waving garb:
Put before Congress a mission involving the nation, not the
poor; tie it to competing with Russia, not slashing unemployment. Economically the important thing was to spend a few
billion—on anything; the effect would be to put the economy
into high gear and to provide a higher income for all, including
the poor.
But the space program didn’t really work out that way. “NASA does
make work, but in the wrong sector; it employs highly scarce professional
manpower, which will continue to be in high demand and short supply for
years to come,” he argues.
Etzioni lays out an alternative plan with long-term, science-based goals
for research funding, a rational peace with the Soviets, and the creation of
palatable social programs to develop rural America and help out the poor.
But his cause was lost, and in his last few pages, he may have even predicted
why.
In an age that worships technology, when man is lost among
the instruments he has created, the space race erects new
pyramids of gadgetry; in an age of materialism, it piles on more
investments in things when what is needed is investment in
people; in an age of extrovert activism, it lends glory to rocket-powered jumps, when critical self-examination and reflection
ought to be stressed; in an age of international conflicts, which
approach doomsday dimensions, it provides a new focus for
emotional divisions among men, when tasks to be shared and
to bind them are needed. Above all, the space race is used as
an escape, by focusing on the moon we delay facing ourselves,
as Americans and as citizens of the earth.
The race to the moon may not have been wildly popular among
scientists, random Americans, or black political activists, but it was hard
to deny the power of the imagery returning from space. Our attention
kept getting directed to the heavens—and our technology’s ability to propel
humans there. It was pure there, and sublime, even if our rational selves
could see that we might be better off spending the money on urban
infrastructure or cancer research or vocational training. Americans might
not have supported the space program in real life, but they loved the one
they saw on TV.
20.
How TED Makes Ideas Smaller
IDEAS END UP REPACKAGED AS PERSONAS.
by Megan Garber
In 1874, the inventor Lewis Miller and the Methodist bishop John Heyl
Vincent founded a camp for Sunday-school teachers near Chautauqua, New York. Two years later, Vincent reinstated the camp, training
a collection of teachers in an outdoor summer school. Soon, what had
started as an ad hoc instructional course had become a movement: secular
versions of the outdoor schools, colloquially known as “Chautauquas,”
began springing up throughout the country, giving rise to an educational
circuit featuring lectures and other performances by the intellectuals of the
day.
William Jennings Bryan, a frequent presenter at the
Chautauquas, called the circuit a “potent human factor in molding
the mind of the nation.” Teddy Roosevelt deemed it “the most American
thing in America.”
At the same time, Sinclair Lewis argued, the Chautauqua was “nothing
but wind and chaff and . . . the laughter of yokels”—an event, Gregory
Mason held, that was “infinitely easier than trying to think” and that was,
in the words of William James, “depressing from its mediocrity.”
The more things change, I guess. Compare those conflicted responses
to the Chautauqua to the ones leveled at our current incarnation of
the highbrow-yet-democratized lecture circuit: TED, the “Technology,
Entertainment, and Design” conference. TED is, per contemporary
commentators, both “an aspirational peak for the thinking
set” and “a McDonald’s dishing out servings of Chicken Soup
for the Soul.” It is both “the official event of digitization”
and “a parody of itself.”
One matter on which there seems to be no disagreement: TED, today,
can make you a star.
TED is a private event that faces the public through its 18-minute-long
TED talks—viewed more than 500 million times since they were
first put online, wonderfully free of charge, in the summer of 2006.
It has pioneered the return of the lecture format in an age that would seem
to make that format obsolete. And in converting itself from an exclusive
conference to an open platform, TED has become something else, too: one
of the most institutionalized systems we have for idea dissemination in the
digital age. To express an idea in the form of a TED talk (and to sell an
idea in the form of a TED talk) is one of the ultimate validations
that the bustling, chaotic marketplace of ideas can bestow upon one of
its denizens. A TED-talked idea is a validated idea. It is, in its way, peer-reviewed.
But the ideas spread via TED, of course, aren’t just ideas; they’re
branded ideas. Packaged ideas. They are ideas stamped not just with the
imprimatur of the TED conference and all (the good! the bad! the magical!
the miraculous!) that it represents; they’re defined as well—and more
directly—by the person, which is to say the persona, of the speaker who
presents them. It’s not just “the filter bubble”; it’s Eli Pariser on the
filter bubble. It’s not just the power of introversion in an extrovert-optimized world; it’s Susan Cain on the power of introversion.
And Seth Godin on digital tribes. And Malcolm Gladwell on
spaghetti-sauce marketing. And Chris Anderson on the long
tail.
For a platform that sells itself as a manifestation of digital possibility,
this approach is surprisingly anachronistic. (Even, you might say, Chautauquan.) In the past, sure, we insistently associated ideas with the people
who first articulated them. Darwin’s theory of evolution. Einstein’s theory
of relativity. Cartesian dualism. Jungian psychology. And on and on and on.
(Möbius’ strip!) Big ideas have their origin myths, and, historically, those
myths have involved the assumption of singular epiphany and individual
enlightenment.
But: we live in a world of increasingly networked knowledge. And
it’s a world that allows us to appreciate what has always been true: that new
ideas are never sprung, fully formed, from the heads of the inventors
who articulate them, but are always—always—the result of discourse and
interaction and, in the broadest sense, conversation. The author-ized idea,
claimed and owned and bought and sold, was, it’s worth remembering, an
accident of technology. Before print came along, ideas were conversational
and freewheeling and collective and, in a very real sense, “spreadable.”
It wasn’t until Gutenberg that ideas could be both contained and mass-produced—and then converted, through that paradox, into commodities.
TED’s notion of “ideas worth spreading”—the implication being that
spreading is itself a work of hierarchy and curation—has its origins in a
print-based world of bylines and copyrights. It insists that ideas are, in the
digital world, what they have been in the analog: packagable and ownable
and claimable.
A TED talk, at this point, is the cultural equivalent of a patent: a private
claim to a public concept. With the speaker himself (or herself) becoming
the manifestation of the idea. In the name of spreading a concept, the talk
ends up narrowing it. Pariser’s filter bubble. Anderson’s long tail. We talk
often about the need for narrative in making abstract concepts relatable
to mass audiences; what TED has done so elegantly, though, is replace
narrative in that equation with personality. The relatable idea, TED insists,
is the personal idea. It is the performative idea. It is the idea that strides
onstage and into a spotlight, ready to become a star.
21.
Dark Social
WE HAVE THE WHOLE HISTORY OF THE WEB WRONG.
by Alexis C. Madrigal
Here’s a pocket history of the Web, according to many people. In the early
days, the Web was just pages of information linked to each other. Then
along came Web crawlers, which helped you find what you wanted among
all that information. Sometime around 2003 or maybe 2004, the social Web really kicked into gear, and thereafter the Web’s users began to connect with each other more and more often. Hence Web 2.0, Wikipedia, MySpace,
Facebook, Twitter, etc. I’m not straw-manning here. This is the dominant
history of the Web, as seen, for example, in this Wikipedia entry on the
“Social Web.”
But it’s never felt quite right to me. For one, I spent most of the ’90s
as a teenager in rural Washington, and my Web was highly, highly social.
We had instant messenger and chat rooms and ICQ and USENET forums
and e-mail. My whole Internet life involved sharing links with local and
Internet friends. How was I supposed to believe that somehow Friendster
and Facebook created a social Web out of what had previously been a lonely
journey in cyberspace when this had not been my experience? True, my
Web social life used tools that ran parallel to, not on, the Web, but it existed
nonetheless.
To be honest, this was a very difficult thing to measure. One dirty secret
of Web analytics is that the information we get is limited. If you want to see
how someone came to your site, it’s usually pretty easy. When you follow
a link from Facebook to TheAtlantic.com, a little piece of metadata hitches
a ride that tells our servers, “Yo, I’m here from Facebook.com.” We can then
aggregate those numbers and say, “Whoa, a million people came here from
Facebook last month,” or whatever.
In some circumstances, however, we have no referrer data. You show
up at our doorstep, and we have no idea how you got here. The main
situations in which this happens are e-mail programs, instant messages,
some mobile applications,* and whenever someone is moving from a
secure site (“https://mail.google.com/blahblahblah”) to a nonsecure site
(http://www.theatlantic.com).
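To make that concrete, here is a minimal sketch, in Python, of what a site’s server sees on each visit; the handler and log format are my own illustration, not The Atlantic’s or Chartbeat’s actual analytics code.

    # A toy endpoint that logs where each visitor came from. Browsers
    # following an ordinary link send a "Referer" header; clicks from
    # e-mail programs, IM windows, some mobile apps, and https->http
    # hops arrive with no referrer at all.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ReferrerLogger(BaseHTTPRequestHandler):
        def do_GET(self):
            referrer = self.headers.get("Referer")  # HTTP's canonical misspelling
            if referrer:
                print(f"{self.path} <- {referrer}")
            else:
                # No metadata hitched a ride: most analytics packages call
                # this "direct," but it is often a shared link in disguise.
                print(f"{self.path} <- no referrer")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ReferrerLogger).serve_forever()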
This means that the vast trove of social traffic is essentially invisible to
most analytics programs. I call it “dark social.” It shows up variously in programs as “direct” or “typed/bookmarked” traffic, which implies to many site
owners that you actually have a bookmark or typed www.theatlantic.com
into your browser. But that’s not actually what’s happening a lot of the
time. Most of the time, someone Gchatted you a link, or it came in a big
e-mail distribution list, or your dad sent it to you.
Nonetheless, the idea that “social networks” and “social media” sites
created a social Web is pervasive. Everyone behaves as if the traffic
your stories receive from the social networks (Facebook, Reddit, Twitter,
StumbleUpon) is the same as all of your social traffic. I began to wonder
whether I was wrong. Or at least whether what I had experienced was
a niche phenomenon, and most people’s Web time was not filled with
Gchatted and e-mailed links. I began to think that perhaps Facebook and
Twitter had dramatically expanded the volume of—at the very least—link-sharing that takes place.
Everyone else had data to back them up. I had my experience as a
teenage nerd in the ’90s. I was not about to shake social-media marketing
firms with my tales of ICQ friends and the analogy of dark social to dark
energy. (“You can’t see it, dude, but it’s what keeps the universe expanding.
No dark social, no Internet universe, man! Just a big crunch.”)
And then one day, I had a meeting at the real-time-Web-analytics
firm Chartbeat. Like many media nerds, I love Chartbeat. It lets
you know exactly what’s happening with your stories, especially
where your readers are coming from. Recently, the company made an
accounting change. It took visitors who showed up without referrer
data and split them into two categories. The first was people who
were going to a home page (theatlantic.com) or a subject landing
page (theatlantic.com/politics). The second was people going to any
other page—that is to say, all of our articles. These people, they
figured, were following some sort of link, because no one actually
types “http://www.theatlantic.com/technology/archive/…/at-last-the-gargantuan-telescope-designed-to-find-life-on-other-planets/…/.”
They started counting these people as what they call “direct social.”
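For the analytics-minded, the heuristic reduces to a few lines of code. This is a sketch of the logic as described above; the function and the list of landing pages are hypothetical stand-ins of mine, not Chartbeat’s actual implementation.

    from urllib.parse import urlparse

    # Pages a person might plausibly type or bookmark (illustrative).
    LANDING_PAGES = {"", "/politics", "/technology"}

    def classify(url, referrer):
        """Bucket a pageview by how the reader plausibly arrived."""
        if referrer:
            return "referred"     # search, social networks, other sites
        path = urlparse(url).path.rstrip("/")
        if path in LANDING_PAGES:
            return "direct"       # the home page or a section front
        return "dark social"      # nobody types a long article URL by hand

    print(classify("http://www.theatlantic.com/", None))           # direct
    print(classify("http://www.theatlantic.com/politics/", None))  # direct
    print(classify("http://www.theatlantic.com/technology/archive/a-long-story-slug/12345/", None))  # dark social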
The second I saw this measure, my heart leapt (yes, I am that much of a
data nerd). This was it! Chartbeat had found a way to quantify dark social,
even if it had given the concept a lamer name!
[Figure: Chartbeat’s referral breakdown for TheAtlantic.com.]
On the first day I saw it, this is how big of an impact dark social was
having on TheAtlantic.com.
Just look at that graph. On the one hand, you have all the social
networks that you know. They’re about 43.5 percent of our social traffic. On the other, you have this previously unmeasured darknet that’s delivering 56.5 percent of people to individual stories. This is not a niche phenomenon! It’s more than 2.5 times Facebook’s impact on the site.
Day after day, this continues to be true, though the individual numbers
vary a lot, say, during a Reddit spike, or if one of our stories gets sent out on
a very big e-mail list or what-have-you. Day after day, though, dark social
is nearly always our top referral source.
Perhaps, though, this was true only for TheAtlantic.com, for whatever
reason. We do really well in the social world, so maybe we were outliers.
So, I went back to Chartbeat and asked it to run aggregate numbers across
its media sites.
Get this: dark social is even more important across this broader set of
sites. Almost 69 percent of social referrals were dark! Facebook came in second at 20 percent. Twitter was down at 6 percent.
All in all, direct/dark social was . percent of total referrals; only
search, at . percent, drove more visitors to this basket of sites. (For what
it’s worth, at TheAtlantic.com, social referrers far outstrip search. I’d guess
the same is true at all of the more magazine-y sites.)
These data have a couple really interesting ramifications. First, on the
operational side, if you think optimizing your Facebook page and Tweets is
“optimizing for social,” you’re only halfway (or maybe 43.5 percent) correct.
The only real way to optimize for social spread is in the nature of the
content itself. There’s no way to game e-mail or people’s instant messages.
There are no power users you can contact. There are no algorithms to
understand. This is pure social, uncut.
Second, the social sites that arrived in the 2000s did not create the
social Web, but they did structure it. This is really, really significant. In
large part, they made sharing on the Internet an act of publishing(!), with
all the attendant changes that come with that switch. Publishing social
interactions makes them more visible and searchable, and adds a lot of
metadata to your simple link or photo post. There are some great things
about this, but social networks also give a novel, permanent identity to
your online persona. Your taste can be monetized, by you or (much more
likely) by the service itself.
Third, I think there are some philosophical changes that we should
consider in light of this new data. While it’s true that sharing came to
the Web’s technical infrastructure in the 2000s, the behaviors that we’re
now all familiar with on the large social networks were present long before
those networks existed, and persist despite Facebook’s eight years on the
Web. The history of the Web, as we generally conceive it, needs to consider
technologies that were outside the technical envelope of “Webness.” People
layered communication technologies easily and built functioning social
networks with most of the capabilities of the Web 2.0 sites in semiprivate
and without the structure of the current sites.
If what I’m saying is true, then the trade-off we make on social
networks is not the one we’re told we’re making. We’re not giving our
personal data in exchange for the ability to share links with friends. Massive
numbers of people—a larger set than exists on any social network—already
do that outside the social networks. Rather, we’re exchanging our personal
data for the ability to publish and archive a record of our sharing. That may
be a transaction you want to make, but it might not be the one you’ve been
told you made.
* Chartbeat data wiz Josh Schwartz said it was unlikely that the mobile
referral data was throwing off our numbers here. “Only about percent of
total traffic is on mobile at all, so, at least as a percentage of total referrals,
app referrals must be a tiny percentage,” Schwartz wrote to me in an e-mail. “To put some more context there, only . percent of total traffic has
the Facebook mobile site as a referrer, and less than . percent has the
Facebook mobile app.”
22.
Interview: Drone War
MARIAROSARIA TADDEO ON THE NEW ETHICS OF BATTLE
by Ross Andersen
From state-sponsored cyber attacks to autonomous robotic weapons, 21st-century war is increasingly disembodied. Our wars are being fought in the
ether and by machines. And yet our ethics of war are stuck in the predigital
age.
We’re accustomed to thinking of war as a physical phenomenon,
as an outbreak of destructive violence that takes place in the physical
world. Bullets fly, bombs explode, tanks roll, people collapse. Despite the
tremendous changes in the technology of warfare, war itself has remained
a contest of human bodies. But as the drone wars have shown, that’s no
longer true, at least for one side of the battle.
Technological asymmetry has always been a feature of warfare, but
no nation has ever been able to prosecute a war without any physical
risk to its citizens. What might the ability to launch casualty-free wars do
to the political barriers that stand between peace and conflict? In today’s
democracies, politicians are obligated to explain, at regular intervals, why
a military action requires the blood of a nation’s young people. Wars
waged by machines might not encounter much skepticism in the public
sphere. We just don’t know what moral constraints should apply to these
new kinds of warfare. Take the ancient, but still influential, doctrine of Just
War theory, which requires that war’s destructive forces be unleashed only
when absolutely necessary; war is to be pursued only as a last resort and
only against combatants, never against civilians.
But information warfare, warfare pursued with information technologies, distorts concepts like “necessity” and “civilian” in ways that challenge
these ethical frameworks. An attack on another nation’s information
infrastructure, for instance, would surely count as an act of war. But what
if it reduced the risk of future bloodshed? Should we really consider it only
as a last resort? The use of robots further complicates things. It’s not yet
clear who should be held responsible if and when an autonomous military
robot kills a civilian.
These are the questions that haunt the philosophers and ethicists
who think deeply about information warfare, and they will only become
more pertinent as our information technologies become more sophisticated. Mariarosaria Taddeo, a Marie Curie Fellow at the University
of Hertfordshire, recently published an article in Philosophy & Technology,
“Information Warfare: A Philosophical Perspective,” that addresses these questions and more. What follows is my conversation with
Taddeo about how information technology is changing the way we wage
war, and what philosophy is doing to catch up.
How do you define information warfare?
The definition of information warfare is hotly debated. From my perspective, for the purposes of philosophical analysis, it’s best to define
information warfare in terms of concrete forms, and then see if there is
a commonality between those forms. One example would be cyber attacks
or hacker attacks, which we consider to be information warfare; another
example would be the use of drones or semiautonomous machines. From
those instances, to me, a good definition of information warfare is “the
use of information-communication technologies within a military strategy
that is endorsed by a state.” And if you go to the Pentagon, they will speak
about this in different ways, they put it under different headings, in terms
of information operations or cyber warfare, cyber attacks, that sort of thing.
Was Russia’s attack on Estonia in 2007 the first broad-based state
example of this?
The attack on Estonia is certainly one example of it, but it’s only one
instance, and it’s not the first. You could, for example, point to the
SWORDS robots that were used in Iraq several years prior to the attack on
Estonia, or the use of predator drones, etc. Remember, information warfare
encompasses more than only information-communication technologies
used through the Web; these technologies can be used in several different
domains and in several different ways.
But it’s hard to point to a definitive first example of this. It goes back
quite a ways, and these technologies have been evolving for some time
now; remember that the first Internet protocols were developed by DARPA.
In some sense, these technologies were born in the military sphere. Turing
himself, the father of computer science, worked mainly within military
programs during the Second World War.
Interesting, but do I understand you correctly that you distinguish
this new kind of information warfare from pre-Internet information
technologies like the radio and the telegraph?
Well, those are certainly information technologies, and to some extent,
information has always been an important part of warfare, because we have
always wanted to communicate and to destroy our enemies’ information
structures and communication capabilities. What we want to distinguish
here is the use of these new kinds of information-communication technologies, because they have proved to be much more revolutionary in their
effects on warfare than previous technologies like telegraphs or telephones
or radios or walkie-talkies.
What’s revolutionary about them is that they have restructured the
very reality in which we perceive ourselves as living, and the way in which
we think about the concepts of warfare or the state. Take for example the
concept of the state: We currently define a state as a political unit that
exercises power over a certain physical territory. But when you consider
that states are now trying to also dominate certain parts of cyberspace,
our definition becomes problematic, because cyberspace doesn’t have a
defined territory. The information revolution is shuffling these concepts
around in really interesting ways, from a philosophical perspective, and
more specifically from an ethical perspective.
In your paper, you mention the use of robotic weapons like drones
as one example of the rapid development of information warfare. You
note that the U.S. government deployed only 150 robotic weapons in Iraq in 2004, but that this number had grown to 12,000 by 2008. Is this
a trend you expect to continue?
I expect so. There are several ways that the political decisions to
endorse or deploy these machines are encouraged by the nature of these
technologies. For one, they are quite a bit cheaper than traditional weapons.
But more importantly, they bypass the need for political actors to confront
media and public opinion about sending young men and women abroad
to risk their lives. These machines enable the contemplation of military
operations that would have previously been considered too dangerous
for humans to undertake. From a political and military perspective, the
advantages of these weapons outweigh the disadvantages quite heavily.
But there are interesting problems that surface when you use them; for
instance, when you have robots fighting a war in a foreign country, the
population of that country is going to be slow to gain trust, which can
make occupation or even just persuasion quite difficult. You can see this
in Iraq or Afghanistan, where the populations have been slower to develop
empathy for American forces, because they see them as people who send
machines to fight a war. But these shortcomings aren’t weighty enough to
convince politicians or generals to forgo the use of these technologies, and
because of that, I expect this trend toward the use of robotic weapons will
continue.
You note the development of a new kind of robotic weapon, the
SGR-A1, which is now being used by South Korea to patrol its border with North Korea. What distinguishes the SGR-A1 from previous
weapons of information warfare?
The main difference is that this machine doesn’t necessarily have a
human operator, or a “man in the loop,” as some have phrased it. It can
autonomously decide to fire on a target without having to wait for a signal
from a remote operator. In the past, drones have been tele-operated, or if
not, they didn’t possess firing ability, and so there was no immediate risk
that one of these machines could autonomously harm a human being. The
fact that weapons like the SGR-A1 now exist tells us that there are questions
that we need to confront. It’s wonderful that we’re able to save human lives
on one side, our side, of a conflict, but the issues of responsibility, the issue
of who is responsible for the actions of these semiautonomous machines
remain to be addressed.
Of course, it’s hard to develop a general rule for these situations where
you have human nature filtered through the actions of these machines; it’s
more likely we’re going to need a case-by-case approach. But whatever we
do, we want to push as much of the responsibility as we can into the human
sphere.
In your paper, you say that information warfare is a compelling
case of a larger shift toward the nonphysical domain brought about by
the Information Revolution. What do you mean by that?
It might make things more clear to start with the Information Revolution.
The phrase Information Revolution is meant to convey the extraordinary
ways that information-communication technologies have changed our
lives. There are, of course, plenty of examples of this, including Facebook
and Twitter and that sort of thing, but what these technologies have
really done is introduce a new nonphysical space that we exist in, and
increasingly, it’s becoming just as important as the offline or physical
space—in fact, events in this nonphysical domain often affect events in the
physical world.
Information warfare is one way that you can see the increasing
importance of this nonphysical domain. For example, we are now using
this nonphysical space to prove the power of our states; we are no longer concerned with demonstrating the authority of our states only in the
physical world.
In what ways might information warfare increase the risk of
conflicts and human casualties?
It’s a tricky question, because the risks aren’t yet clear, but there is a worry
that the number of conflicts around the world could increase because it
will be easier for those who direct military attacks with the use of these
technologies to do so, because they will not have to endanger the lives of
their citizens to do so. As I mentioned before, information warfare is, in
this sense, easier to wage from a political perspective.
It’s more difficult to determine the effect on casualties. Information
warfare has the potential to be blood-free, but that’s only one potentiality;
this technology could just as easily be used to produce the kind of damage
caused by a bomb or any other traditional weapon—just imagine what
would happen if a cyber attack was launched against a flight-control system
or a subway system. These dangerous aspects of information warfare
shouldn’t be underestimated; the deployment of information technology
in warfare scenarios can be highly dangerous and destructive, and so
there’s no way to properly quantify the casualties that could result. This
is one reason we so badly need a philosophical and ethical analysis of this
phenomenon, so that we can properly evaluate the risks.
You draw on the work of Luciano Floridi, who has said that the
Information Revolution is the fourth revolution, coming after the
Copernican, Darwinian, and Freudian revolutions, which all changed
the way humans perceive themselves in the universe. Did those
revolutions change warfare in interesting ways?
That’s an interesting question. I don’t think those revolutions had
the kind of impact on warfare that we’re seeing with the Information
Revolution. Intellectual and technological revolutions seem to go hand in
hand, historically, but I don’t, to use one example, think that the Freudian
Revolution had a dramatic effect on warfare. The First World War was
waged much like the wars of the 19th century, and to the extent that it
wasn’t, those changes did not come about because of Freud.
What you find when you study those revolutions is that while they may
have resulted in new technologies like the machine gun or the airplane,
none of them changed the concept of war. Even the Copernican Revolution,
which was similar to the Information Revolution in the sense that it
dislocated our sense of ourselves as existing in a particular space and time,
didn’t have this effect. The concept of war remained intact in the wake of
those revolutions, whereas we are finding that the concept of war itself is
changing as a result of the Information Revolution.
How has the Information Revolution changed the concept of war?
It goes back to the shift to the nonphysical domain; war has always been
perceived as something distinctly physical, involving bloodshed and destruction and violence, all of which are very physical types of phenomena.
If you talk to people who have participated in warfare, historically, they
will describe the visceral effects of it—seeing blood, hearing loud noises,
shooting a gun, etc. Warfare was, in the past, always something very
concrete.
This new kind of warfare is nonphysical; of course it can still cause
violence, but it can also be computer-to-computer, or it can be an attack
on certain types of information infrastructure and still be an act of war.
Consider the Estonian cyber attack, where you had a group of actors
launching an attack on institutional Web sites in Estonia: there were no
physical casualties; there was no physical violence involved. Traditional
war was all about violence; the entire point of it was to physically
overpower your enemy. That’s a major change. It shifts the ethical analysis,
which was previously focused only on minimizing bloodshed. But when
you have warfare that doesn’t lead to any bloodshed, what sort of ethical
framework are you going to apply?
For some time now, Just War theory has been one of the main
ethical frameworks for examining warfare. You seem to argue that its
modes of analysis break down when applied to information warfare.
For instance, you note that the principle that war ought to be pursued
only “as a last resort” may not apply to information warfare. Why is
that?
Well, first I would say that as an ethical framework, Just War theory has
served us well up to this point. It was first developed by the Romans, and
from Aquinas on, many of the West’s brightest minds have contributed
to it. It’s not that it needs to be discarded. Quite the contrary: there are
some aspects of it that need to be kept as guiding principles going forward.
Still, it’s a theory that addresses warfare as it was known historically, as
something very physical.
The problem with the principle of last resort is that while, yes, we want physical warfare to be the last choice after everything else, it may not make sense for information warfare to be a last resort, because it might actually prevent bloodshed in the long run. Suppose that a cyber attack could prevent traditional warfare from breaking out between two nations; by the criteria of Just War theory, that attack would itself be an act of war, and thus justifiable only as a last resort, even though deploying it early is precisely what would save lives. And so you might not want to apply the Just War framework to warfare that is not physically violent.
You also note that the distinction between combatants and civilians
is blurred in information warfare, and that this also has consequences
for Just War Theory, which makes liberal use of that distinction. How
so?
Well, until a century ago, there was a clear-cut distinction between the military and civilians—you either wore a uniform or you didn’t, and if you did, you were a justifiable military target. This distinction has been
eroded over time, even prior to the Information Revolution; civilians took
part in a number of 20th-century conflicts. But with information warfare,
the distinction is completely gone; not only can a regular person wage
information warfare with a laptop, but also a computer engineer working
for the U.S. government or the Russian government can participate in
information warfare all day long and then go home and have dinner with
his or her family, or have a beer at the pub.
The problem is, if we don’t have any criteria, any way of judging
who is involved in a war and who is not, then how do we respond? Who
do we target? The risk is that our list of targets could expand to include
people who we would now consider civilians, and that means targeting
them with physical warfare, but also with surveillance, and that could be
very problematic. Surveillance is a particularly thorny issue here, because
if we don’t know who we have to observe, we may end up scaling up our
surveillance efforts to encompass entire populations, and that could have
very serious effects in the realm of individual rights.
You have identified the prevention of information entropy as a
kind of first principle in an ethical framework that can be applied to
information warfare—is that right, and if so, does that supplant the
saving of human life as our usual first principle for thinking about
these things?
I think they are complementary. First of all, a clarification is in order.
Information entropy has nothing to do with physics or information theory;
it’s not a physical or mathematical concept. Entropy here refers to the
destruction of informational entities, which is something we don’t want.
It could be anything from destroying a beautiful painting to launching a virus that damages information infrastructure, and it can also be killing a human being. Informational entities are not only computers; the term describes all existing things, seen from an informational perspective. In this sense, an action that generates entropy in the universe is one that destroys, damages, or corrupts any such entity. Any action that makes the information environment worse off generates entropy and is therefore immoral. The prevention of information entropy is thus consistent with the saving of human life, because human beings contribute a great deal to the infosphere—killing a human being would generate a lot of information entropy.
This is all part of a wider ethical framework called Information Ethics,
mainly developed by Luciano Floridi. Information Ethics ascribes a moral
stance to all existing things. It does not have an ontological bias—that is
to say, it doesn’t privilege certain sorts of beings. This does not mean that
according to information ethics, all things have the same moral value, but
rather that they share some common minimal rights and deserve some
minimal respect. Here, the moral value of a particular entity would be
proportional to its contributions to the information environment. So a
blank sheet of paper with a single dot on it would have less moral value than, say, a
book of poems, or a human being. That’s one way of thinking about this.
23.
Plug In Better
A MANIFESTO
by Alexandra Samuel
A world ruled by spin, in which political influence lasts only until the next
embarrassing YouTube video. Industries starved of creative, deep thinking,
as focused effort gives way to incessant multitasking. Households made up
of neglectful, distracted parents and vacant, screen-glazed children. Human
beings reduced to fast-clicking thumbs, their attention spans no longer than 140 characters. That’s the future we hear about from Nicholas Carr, Sherry Turkle, and The New York Times’ Your Brain on Computers series,
which tell us that our growing time online is diminishing both our
individual intellects and our collective capacity for connection.
If this dystopian vision drives the call to unplug, there’s something
more personal motivating those who heed that call. I’ve been tracking the
urge to unplug for the past few years by aggregating and reading blog
posts from people who use phrases like Give up Facebook, Go offline, or
Without the Internet. When I read accounts of those who’ve gone offline for
a weekend, a holiday, or the 40 days of Lent, they often seem wistful about
how their brains, bodies, and relationships feel when they aren’t constantly
engaged with life online. “Disconnecting from the virtual world allowed me space to connect to the present one,” writes one blogger. Blogs one mom, “it’s been sorta nice NOT hearing the Cuteness say, ‘Mommy, pay attention to me, not your phone.’” “For the first time in a long time, I experienced silence,” echoes a third.
Unplugging may feel like the most obvious way to access these
experiences of intimacy and quiet in a noisy digital world, but the very fact
that it’s so obvious should make us suspicious. As usual, we’re going for the
quick fix: the binary solution that lets us avoid the much more complicated
challenge of figuring out how to live online. It’s easier to imagine flipping
the Off switch than to engage with and work through the various human
failings that the Internet has brought to the fore.
And it’s easier to avoid what is, to many, a very painful truth: going
offline is no longer a realistic option. Sure, we can unplug for an hour, a day, or even a week, but we can’t permanently shut off the challenges of our online existence. The offline world is now utterly defined
by networks too, from the pace of our work to the flow of our money. You
can look up from the screen, but there is no way to escape the digital.
What you can do is find those qualities of presence, focus, and even
solitude in your networked existence. Call it the new unplugging: a way
to step back from the rush and din of the Internet, and approach our time
online with the same kind of intention and integrity we bring to our best
offline interactions.
The new unplugging doesn’t require you to quit Facebook or throw
out your iPhone. What it requires is careful attention to the sources of our
discomfort, to the challenging qualities of online interaction or of simply
living in a networked world. Looking at those pain points, and finding a
way to switch them off, is the new unplugging.
Unplug from distraction. If you’re routinely using three screens at
once, distraction can feel like a way of life. But going online is only
synonymous with distraction if you assume that what you need to pay
attention to is necessarily offline. Sometimes the screen—with that crucial
e-mail, inspiring video, or enlightening blog post—is exactly what you
need to focus on. So unplug from distraction by giving that one onscreen
item your full attention: turn off your phone, shut your door, close all the
windows and apps that are competing for your attention in the background.
Commit to a single task on your computer or mobile device, the same way
you might commit to an important face-to-face conversation. You can find
freedom from distraction onscreen as well as off.
Unplug from FOMO. Fear of Missing Out, or FOMO, is one of
those human neuroses that have been dramatically amplified by social networking. Even before Facebook, Twitter, and LinkedIn, there were
probably conferences you missed attending, concerts you couldn’t get into,
or parties you didn’t get invited to. You just didn’t know about all of them.
Nowadays, your social-networking feeds provide constant reminders of
all the amazing, inspiring, and delightful activities other people are doing
without you. Ouch! Opting out of social networks may feel like the cure for
FOMO, but it’s the equivalent of standing in the middle of a crowded room
with your eyes and ears covered so you can pretend you’re all alone. The
real solution to FOMO is to accept the fact that you can’t be everywhere
and do everything. But if that’s more than your inner Buddha is ready for,
here’s my cheat: Click the Hide button on Facebook updates from friends
who are always bragging about their latest cool activities, and use a Twitter client that lets you filter out the enviable stream of tweets
from whatever conference you’re not attending. That way you
can unplug from FOMO without actually unplugging.
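The mechanics of that kind of muting are simple keyword filtering. Here is a toy sketch in Python; the hashtags are invented stand-ins for whatever conference you’d rather not hear about:

```python
# A toy version of what a "mute this topic" filter does under the hood.
MUTED_TERMS = {"#BigIdeasConf", "#DreamJobSummit"}  # hypothetical hashtags

def visible(tweet: str) -> bool:
    """Show a tweet only if it mentions none of the muted terms."""
    lowered = tweet.lower()
    return not any(term.lower() in lowered for term in MUTED_TERMS)

feed = [
    "Mind-blowing keynote here at #BigIdeasConf!",
    "My cat learned to open the fridge.",
]
print([t for t in feed if visible(t)])  # ['My cat learned to open the fridge.']
```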
Unplug from disconnection. One of the ironies of our always-connected lives is that they can leave us feeling less connected to the
people who matter most. The urge to unplug often comes from the image
of a teen texting her way through family dinner, a dad on his BlackBerry
at the playground, or a family that sits in the same room but engages
with four different screens. But the Off switch is not that family’s best
friend. If anything, the idea of unplugging for family time just sets up
an unwinnable war between family intimacy and online connectivity.
Instead, families need to embrace the potential of online tools as a way
of reconnecting. In The Happiness Project, Gretchen Rubin passes
along the advice that every couple should have one indoor game and one
outdoor game. In the Internet era, every couple or family should also
have one video game they play together (we like Bomberman), one online
presence they create together (like a blog or YouTube channel), and one
social network where they stay connected (so even if your kids don’t want
to be your Facebook friend, you can reach them on Google+). If you’re
using online tools to foster family connection and togetherness, you’ll
recognize that unplugging is a wishful-thinking alternative to the real
work of making Internet time into family time.
Unplug from information overload. Whether it’s an overflowing
inbox, a backlog of unread articles, or a Twitter feed that moves faster than
we can read, most of us are suffering from information overload. It’s one of
the most frequent motivations for “digital fasts”: the desire to take a break
from a stream of information and work that feels overwhelming. That’s
a useful instinct, since it reflects a desire to assess or even challenge the
steadily accelerating pace and volume of work. Unfortunately, when you
attempt to address overload by completely unplugging, you also unplug
from a lot of the resources that could help you set thoughtful boundaries
around your working life. Better to use the Internet to support your work-life balance by building an online support network of friends or trusted colleagues, setting up RSS subscriptions to blogs that bring you insight and inspiration, or starting an online log that helps you track your most meaningful accomplishments each day. Use the Internet to reinforce your
resolve to focus on what matters most, and unplug from the overload that
comes from the sense you’ve got to do it all.
Unplug from the shallows. If many people feel the urge to unplug,
it’s partly because they’re worried about the results of staying plugged
in. What Nick Carr terms “the shallows” encapsulates what many of us
experience online: a sense that we’re numbing out and dumbing down. But
you don’t have to unplug from the Net in order to find meaning. You can
create meaning in the way you use your time online. I’ve spent a surprising
amount of time on evangelical Christian blogs, simply because that’s where
I find a lot of active inquiry into the challenge of living online with integrity.
“When you enter the world of social media, bring Jesus with you,” advises
one not-atypical blogger. “It’s not what we say about Jesus, it’s how
our lives are being transformed by His message,” writes another blogger
on the Christian use of social media. “By sharing the details of
our lives, we offer people a window into what it means to have faith in our
time.” As Elizabeth Drescher, a scholar of Christian social
media, observes of what she terms the “digital reformation,” those who
harness social media to their faith “are, in effect, recreating church as an
enduring and evolving movable feast—the very kind of which Jesus seemed
so fond.” Even if you don’t frame your social-media usage in terms of a
formal religious or spiritual practice, you can unplug from the enfeebling
qualities of the Internet, simply by focusing your own time online on the
activities and contributions that create meaning for yourself and others.
If this seems like a grab bag of practices and tools that merely slap a Band-Aid on the various symptoms that online life can trigger, that’s precisely the point. The Internet is an incurable condition—but we can’t
recognize that as good news until we find a way to treat the various aches
and pains of life online.
Once we get our Band-Aids in place, we can get over our preoccupation
with unplugging. The reason there are so many blog posts chronicling the
results of unplugging is that almost everyone who unplugs, whether for a
day or a month, eventually plugs back in. We can interpret that as addiction,
or as simple necessity.
Or we can consider a more encouraging possibility: We plug back in
because we like it. We plug back in because this new online world offers
extraordinary opportunities for creation, discovery, and connection. We
plug back in because we don’t actually want to escape the online world;
we want to help create it.
part three.
Reportage
24.
The Tao of Shutterstock
WHAT MAKES A STOCK PHOTO A STOCK PHOTO?
by Megan Garber
There are not many occasions when one will find oneself seeking an
image of a cat in smart clothes with money and red caviar on
a white background. But there may well be one occasion when one will
find oneself seeking an image of a cat in smart clothes with money and red
caviar on a white background. This being the era of the Internet, actually,
there will probably be two or three.
For such occasions, when they arise, your best bet is to turn directly
to an image service like Shutterstock. The site, as the documentation
for its October 2012 IPO makes clear, is a Web community in the
manner of a Facebook or a Twitter or a Pinterest, with its value relying
almost entirely on the enthusiasm of its contributors. But it’s a community,
of course, with an explicitly commercial purpose: Shutterstock pioneered
the subscription approach to stock photo sales, allowing customers to
download images in bulk rather than à la carte. Shutterstock is e-commerce
with a twist, and its success depends on its contributors’ ability to predict,
and then provide, the products that its subscribers will want to buy. The
site is pretty much the Demand Media of imagery—and its revenues, for
both the company and its community, depend on volume.
Shutterstock launched in 2003 and has grown steadily since then, bolstered by the explosion of Web publishing. On the Internet, there is always text in need of decoration—and the site now offers a library of some 20 million images to do that decorating. (Per Alexa’s somewhat
reliable demographic stats, Shutterstock’s visitors are disproportionately women—women who live in the U.S., who browse the site from
work, who don’t have children, and who do have master’s degrees. Which is
to say, probably, they’re members of the media.) As its own kind of inside-out media organization, Shutterstock leverages the same kind of market-prediction strategy that Demand does—but it does that without Demand’s
infamous algorithms. Instead, says Scott Braut, Shutterstock’s VP of
content, it provides its contributors with tools like keyword trends and
popular searches so they “can find out what people are looking for and
when.”
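At its core, such a tool can be as simple as tallying a search log and surfacing the most frequent queries. A minimal sketch in Python, with an invented log standing in for Shutterstock’s actual data:

```python
from collections import Counter

# A hypothetical slice of a search log; the real thing would be vastly larger.
search_log = [
    "business handshake", "cat in suit", "sunset over ocean",
    "business handshake", "cat in suit", "business handshake",
]

# "Popular searches" is, at heart, just a frequency count.
for query, count in Counter(search_log).most_common(3):
    print(f"{query}: {count} searches")
```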
The site also hosts multiple forums intended to guide people through
the photo-submission process—and that process, Shutterstock contributors
have told me, is exceptionally user-friendly compared with other microstock photo sites.
It’s also, however, fairly exclusive: Shutterstock has a team of reviewers
charged with ensuring editorial consistency and quality. In recent years, Braut says, only a small percentage of applicants were approved to become Shutterstock contributors. And only a fraction of all the images uploaded by those approved contributors were ultimately put up on the site. For each download their photos receive, photographers get a small royalty—more if they’re oft-downloaded contributors or the purchaser has a high-level subscription.
In some cases, Braut says, Shutterstock’s content team will do
direct outreach to the site’s top videographers, photographers,
and illustrators “to help fill specific content needs.” For the most
part, though, Shutterstock contributors figure out for themselves what
subscribers are looking for. There’s very little “Hey, can you dress up a
cat and pose it with some Benjamins and Beluga? Because that would be
awesome.” If there’s a need for an image of that particular scene—or of,
say, a cheeseburger, or a German shepherd lying in the grass
with a laptop, or a shadowed man gazing contemplatively at
the sea during a colorful sunset—it’s pretty much up to the
photographers to identify that need. And, then, to fill it.
And fill it they do. To browse Shutterstock—as I often do, since we
sometimes use their images here at TheAtlantic.com—is to go on a weird
and often wacky and occasionally totally wondrous journey through the
visual zeitgeist. The site’s contributors have covered almost everything,
topic-wise—to the extent that, even with my occasionally zany image searches, it’s extremely rare to have a query come up blank. (I did a
search for zeitgeist, just to see, and was rewarded with three packed pages’
worth of images: clocks, scantily clad ladies carrying clocks, cartooned
gentlemen carrying clocks, youths flashing peace signs, stylized clinking
glasses, more cartoons, more clocks.)
The images Shutterstock serves up may not always be classy or fully
clothed or even 100 percent relevant—but there they are nonetheless,
courtesy of photographers from around the world. It’s all very of, by, and
for the Internet: the site’s images focus heavily on sitcomic poses, colorful
cartoons, plates of food, ponderous abstractions, and cats. And while the
images a query returns can occasionally be painful in their posery—see, for
example, the horrific/hilarious Awkward Stock Photos—they can also be
awkward in a sadder sense: as vehicles of a kind of preemptive nostalgia,
insisting stubbornly on a world that exists only in the minds of the
microstockers. (See: “Women Laughing Alone With Salad.”)
For all that, though, there’s a communal power to the stock image.
Some guy, maybe in Russia, his identity masked by a coy username
(igor_zh) and his impact amplified by his canny use of keywords, figured
I would eventually come to search for poetic pictures of sun-pierced
clouds that would in their way represent hope. And he was right. Stock
photographers specialize not just in imagery, but in sentiment prediction:
they anticipate people’s needs before they become needs in the first place.
***
One of those photographers is Ben Goode. A graphic
designer based near Adelaide, Australia, Goode moonlights as a
stock-photo shooter. He’s become one of the most popular contributors
on Shutterstock. Goode has sold many thousands of images during his seven years as a site member, he told me, netting him a substantial sum in U.S. currency. Goode describes the stock-image creation process as a
combination of strategy and serendipity—with a healthy dose of market
anticipation thrown in. “Over the years I’ve learned the type of image
that sells well on Shutterstock and definitely try to create images to
that end,” he says. Now, when he goes out on a shoot, he’ll take both
shots for his landscape prints Web site and images that may have
broader relevance. “Stock-friendly” shots, he calls them.
To these latter, Goode will add a little Photoshop magic—a crucial
step, he notes, and one that involves much more than simply filtering
images à la Instagram. Goode edits each image “on its own merit,” he says,
which sometimes involves slight adjustments or selective color changes,
and sometimes involves adding dozens of layers to the original image
until the thing is fairly bursting with color and exuberance.
As Shutterstock grows as a service, and as the market for stock
images both broadens and saturates, photo-stocking is becoming its own
specialized skill—a cottage industry built on Adobe. “To just take an image
and bump up the saturation does not cut it in stock these days,” Goode
says. “It takes a lot more to give them some real punch and appeal with
designers.”
So what he strives for, Goode says, is “maximum stock impact.”
Which leads to a question: What, exactly, is “stock impact”? One of
the more wacky/wondrous elements of stock photos is the manner in
which, as a genre, they’ve developed a unifying editorial sensibility. To
see a stock image is, Potter Stewart–style, to know you’re seeing a
stock image. And while stock images’ stockiness may be in part due to the
common visual tropes that give them their easy, cheesy impact—prettiness,
preciousness, pose-iness—part of it is more ephemeral, too. Though they
have little else in common, shots of a German shepherd typing on a laptop
or a man contemplating the sunset can be, in their special way, stocky.
One thing that unites stock images, says Jason Winter, a Virginia-based teacher, Web designer, and part-time Shutterstocker, is the
“unique perspective” presented in the composition of the photos themselves.
When shooting a stock photo, Winter told me, you want to think not
just about capturing an image, but also about creating a product that will
visually pop, particularly against the white backdrop of a Web page. By
“twisting a photo just a bit and making it appear three-dimensional,” he
says—by, for example, ensuring that your shot contains a well-defined
foreground and background—you can create an image that will embody
stock’s otherworldly appeal.
But there’s also the cultural sensibility of the stock photo: the cycle—
virtuous or vicious—that occurs when people start thinking of stock as its
own aesthetic category. Life as told through the stock image is beautified
and sanitized and occasionally dominated by camisole-clad ladies holding
things. It is posed; it is weird; it is fraught. But it is also unapologetic,
because it knows that it has its own particular style—one that, meme-like, is incredibly easy to replicate. Dress up your cat, point, click, edit,
upload, and wait for the Internet to reward you for your efforts. As the
Shutterstocker Emily Goodwin told me, what sells well on the site isn’t
necessarily the traditionally “pretty” stuff. No, it’s the utilitarian content—
the images that capture the banalities and absurdities of everyday life—that
prove popular. Stock begets stock begets stock, until suddenly, when you
request an image of a canine in a top hat that eerily resembles Snoop
Dogg, the Internet is able to answer.
25.
I’m Being Followed
HOW GOOGLE—AND OTHER COMPANIES—ARE TRACKING ME ON THE WEB
by Alexis C. Madrigal
On any given morning, if you open your browser and go to NYTimes.com,
an amazing thing happens in the milliseconds between your click and
when that day’s top story appears on your screen. Data from this single
visit is sent to a long list of companies, including Microsoft and Google
subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms.
Almost instantaneously, these companies can log your visit, place ads
tailored for your eyes specifically, and add to the ever-growing online file
about you.
There’s nothing necessarily sinister about this subterranean data exchange: this is, after all, the advertising ecosystem that supports free
online content. All the data lets advertisers tune their ads, and the rest
of the information logging lets them measure how well things are actually
working. And I do not mean to pick on The New York Times. While visiting
the Huffington Post or The Atlantic or Business Insider, the same process
happens to a greater or lesser degree. Every move you make on the Internet
is worth some tiny amount to someone, and a panoply of companies want
to make sure that no step along your Internet journey goes unmonetized.
Even if you’re generally familiar with the idea of targeted advertising,
the number and variety of these data collectors will probably astonish you.
Allow me to introduce the list of companies that tracked my movements on
the Internet in one recent 36-hour period of standard web surfing: Acerno.
Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld.
Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas.
Audience Science.
And that’s just the A’s. My complete list includes 105 companies, and
there are dozens more than that in existence. You, too, could compile your
own list using Mozilla’s tool, Collusion, which records the companies
that are capturing data about you, or more precisely, your digital self.
While the big names—Google, Microsoft, Facebook, Yahoo, etc.—appear
in this catalog, the bulk of it is composed of smaller data and advertising
businesses that form a shadow web of companies, each wanting to help
show you advertising that you’re likely to click on and products that you’re
likely to purchase.
To be clear, these companies gather data without attaching it to your
name; they use that data to show you the ads you’re statistically most likely
to click. That’s the game, and there is substantial money in it.
As users, we move through our Internet experiences unaware of the
churning machines powering our Web pages with their cookies and pixels,
their tracking codes and databases. We shop for wedding caterers and
suddenly see ring ads appear on random Web pages we’re visiting. We
sometimes think the ads following us around the Internet are “creepy.” We
sometimes feel watched. Does it matter? We don’t really know what to
think.
The issues this data-tracking industry raises did not exist when Ronald
Reagan was president and were only in nascent form when the Twin
Towers fell. These are phenomena of our time, and while there are many
antecedent forms of advertising, never before in the history of human
existence has so much data been gathered about so many people for the
sole purpose of selling them ads.
“The best minds of my generation are thinking about how to make
people click ads,” my old friend and the early Facebook employee Jeff
Hammerbacher once said. “That sucks,” he added. But increasingly I
think these issues—how we move “freely” online, or more properly, how
we pay one way or another—are actually the leading edge of a much
bigger discussion about the relationship between our digital and physical
selves. I don’t mean theoretically or psychologically. I mean that the
norms established to improve how often people click ads may end up
determining who you are when viewed by a bank or a romantic partner
or a retailer who sells shoes.
Already, the Web sites you visit reshape themselves before you like a
carnivorous school of fish, and this is only the beginning. Right now, a huge
chunk of what you’ve ever looked at on the Internet is sitting in databases
all across the world. The barrier surrounding what that chunk might
say about you, good or bad, is as thin as the typed letters of your name. If
and when that wall breaks down, the numbers may overwhelm the name.
The unconsciously created profile may mean more than the examined self
I’ve sought to build.
Most privacy debates have been couched in technical language. We
read about how Google bypassed Safari’s privacy settings, whatever those
were. Or we read the details about how Facebook tracks you with those
friendly Like buttons. Behind the details, however, is a tangle of philosophical issues at the heart of the struggle between privacy advocates
and online advertising companies: What is anonymity? What is identity?
How similar are humans and machines? This essay is an attempt to think
through those questions.
The bad news is that people haven’t taken control of the data that are
being collected and traded about them. The good news is that—in a quite
literal sense—simply thinking differently about this advertising business
can change the way that it works. After all, if you take these companies
at their word, they exist to serve users as much as to serve their clients.
***
Before we get too deep, let’s talk about the reality of the online display
advertising industry. (That means, essentially, all the ads not associated
with a Web search.) There is a dizzying array of companies and services that can all make a buck by helping advertisers target you a teensy,
weensy bit better than the next guy. These are companies that must
prove themselves quite narrowly in measurable revenue and profit; the
competition is fierce, the prize is large, and the strategies are ever-changing.
There’s a coral-reef-level diversity of corporate life in display advertising,
and it can all be very confusing.
Don’t get too caught up in all of that, though. There are three basic
categories: Essentially, there are people who help the buyers, people who
help the sellers, and a whole lot of people who assist either side with more
data or faster service or better measurement. Let’s zoom in on three of
them—just from the A’s—to give you an idea of the kinds of outfits we’re
talking about.
Adnetik is a standard targeting company that uses real-time bidding.
It can offer targeted ads based on how users act (behavioral), who they are
(demographic), where they live (geographic), and who they seem like online
(lookalike), as well as something it calls “social proximity.” The company
also gives advertisers the ability to choose the types of sites on which their
ads will run based on “parameters like publisher brand equity, contextual
relevance to the advertiser, brand safety, level of ad clutter and content
quality.”
It’s worth noting how different this practice is from traditional advertising. The social contract between advertisers and publications used to be
that publications gathered particular types of people into something called
an audience, then advertisers purchased ads in that publication to reach
that audience. There was an art to it, and some publications had cachet
while others didn’t. Online advertising upends all that: now you can buy
the audience without the publication. You want an Atlantic reader? Great!
Some ad network can sell you someone who has been to TheAtlantic.com
but is now reading about hand lotion at KnowYourHandLotions.com. And
it’ll sell you that set of eyeballs for a fifth of the price. You can bid in real time on a set of those eyeballs across millions of sites without ever talking
to an advertising salesperson. (Of course, such a trade-off has costs, which
we’ll see soon.)
Adnetik also offers a service called “retargeting” that another A company, AdRoll, specializes in. Here’s how it works. Let’s say you’re an online
shoe merchant. Someone comes to your Web store but doesn’t purchase
anything. While they’re there, you drop a cookie on them. Thereafter you
can target ads to them, knowing that they’re at least mildly interested.
Even better, you can drop cookies on everyone who comes to look at shoes
and then watch to see who comes back to buy. Those people become your
training data, and soon you’re only “retargeting” those people with a data
profile that indicates that they’re likely to purchase something from you
eventually. It’s slick, especially if people don’t notice that the pairs of shoes
they found the willpower not to purchase just happen to be showing up on
their favorite gardening sites.
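To make the mechanics concrete, here is a minimal sketch of that retargeting logic in Python. Everything in it—the cookie name, the event format, the decision rule—is an illustrative assumption, not any vendor’s actual system:

```python
import uuid

profiles = {}  # visitor_id -> list of (action, item) events

def drop_cookie(response_headers):
    """Assign a persistent pseudonymous ID the first time a browser appears."""
    visitor_id = str(uuid.uuid4())
    response_headers["Set-Cookie"] = f"visitor_id={visitor_id}; Max-Age=31536000"
    profiles[visitor_id] = []
    return visitor_id

def log_event(visitor_id, action, item):
    """Record what the visitor did, e.g. viewed or purchased an item."""
    profiles.setdefault(visitor_id, []).append((action, item))

def should_retarget(visitor_id):
    """Bid on this visitor elsewhere only if they browsed but never bought."""
    events = profiles.get(visitor_id, [])
    viewed = any(action == "viewed" for action, _ in events)
    bought = any(action == "purchased" for action, _ in events)
    return viewed and not bought

# A shopper eyes some shoes, leaves without buying, then surfs on.
headers = {}
shopper = drop_cookie(headers)
log_event(shopper, "viewed", "red sneakers")
print(should_retarget(shopper))  # True -> the sneakers follow them around
```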
There are many powerful things you can do once you’ve got data on a
user, so the big worries for online advertisers shift to the inventory itself.
Purchasing a page in a magazine is a process that provides advertisers
significant control. But these retargeted online ads could conceivably run
anywhere. After all, many ad networks need all the inventory they can get,
so they sign up all kinds of content providers. And that’s where our third
company comes into play.
AdExpose, now a comScore company, watches where and how ads are
run to determine if their purchasers got their money’s worth. A large share of interactive ads are sold and resold through third parties, the company states on its Web site. “This daisychaining brings down the value of online ads and advertisers don’t always know where their ads have run.” To solve that
problem, AdExpose claims to provide independent verification of an ad’s
placement.
All three companies want to know as much about me and what’s on
my screen as they possibly can, although they have different reasons for
their interest. None of them seem like evil companies, nor are they unique
companies. Like much of this industry, they seem to believe in what they’re
doing. They deliver more relevant advertising to consumers, and that makes
more money for companies. They are simply tools to improve the grip
strength of the invisible hand.
***
And yet, the revelation that 105 different outfits were collecting and
presumably selling data about me on the Internet gives me pause. It’s not
just Google or Facebook or Yahoo. There are literally dozens and dozens
of these companies, and the average user has no idea what they do or how
they work. We just know that for some reason, at one point or another, an
organization dropped a cookie on us and created a file, on some server, that
steadily accumulates clicks and habits that will eventually be mined and
marketed.
The online advertising industry argues that technology is changing so
rapidly that regulation is not the answer to my queasiness about all that
data going off to who-knows-where. The problem, however, is that the
industry’s version of self-regulation is not one that most people would
expect or agree with, as I found out myself.
After running Collusion for a few days, I wanted to see if there
was an easy method to stop data collection. Naively, I went to
the
self-regulatory site run by the Network Advertising
Initiative and completed their “Opt Out” form. I did so for the dozens
of companies listed, and I would say that it was a simple and nominally
effective process. That said, after filling out the form, I wasn’t sure if data
would stop being collected on me or not. The site itself does not say that
data collection will stop, but it’s also not clear that data collection will
continue. In fact, the overview of NAI’s principles freely mixes talk
about how the organization’s code “limits the types of data that member
companies can use” with information about the opt-out process.
After opting out, I went back to Collusion to see if companies were
still tracking me. Many, many companies appeared to be logging data
for me. According to Mozilla, the current version of Collusion does not allow me to see precisely which companies are still tracking me, but Stanford
researchers using Collusion found that at least some companies
continue to collect data. All that I had “opted out” of was receiving targeted
ads, not data collection. There is no way, through the companies’ own self-regulatory apparatus, to stop being tracked online. None.
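A schematic of why opting out changed so little, sketched in Python: a hypothetical tracker that honors the opt-out cookie for ad targeting while logging the visit regardless. The cookie name and logic are my own illustration, not the NAI’s specification:

```python
def handle_visit(cookies, page, profile):
    """A hypothetical tracker: collection happens no matter what."""
    profile.append(page)  # the file on you keeps growing either way
    if cookies.get("opt_out") == "true":
        return "generic, untargeted ad"  # only the ad choice changes
    return f"ad targeted using {len(profile)} logged page views"

profile = []
print(handle_visit({"opt_out": "true"}, "shoes.example.com", profile))
print(profile)  # ['shoes.example.com'] -- opted out, still tracked
```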
After those Stanford researchers posted their results to a university
blog, they received a sharp response from the NAI’s then-chief,
Chuck Curran.
In essence, Curran argued that users do not have the right to not
be tracked. “We’ve long recognized that consumers should be provided
a choice about whether data about their likely interests can be used to
make their ads more relevant,” he wrote. “But the NAI code also recognizes
that companies sometimes need to continue to collect data for operational
reasons that are separate from ad targeting based on a user’s online
behavior.”
Companies “need to continue to collect data,” but that contrasts directly
with users’ desire not to be tracked. The only right that online advertisers
are willing to give users is the ability not to have ads served to them
based on their Web histories. Curran himself admits this: “There is a vital
distinction between limiting the use of online data for ad targeting, and
banning data collection outright.”
But based on the scant survey and anecdotal data that we have
available, when users opt out, preventing data collection is precisely what they are after.
In preliminary results from a survey conducted last year, Aleecia
McDonald, a fellow at the Stanford Center for Internet and Society, found that users expected a lot more from the current set of tools than those tools deliver. The largest percentage of her survey group who looked
at the NAI’s opt-out page thought that it was “a website that lets you tell
companies not to collect data about you.” For browser-based “Do Not Track”
tools, a majority of respondents expected that if they clicked such a
button, no data would be collected about them.
Do Not Track tools have become a major point of contention. The idea
is that if you enable one in your browser, when you arrive at The New York
Times online, you send a herald out ahead of you that says, “Do not collect
data about me.” Members of the NAI have agreed, in principle, to follow
the DNT provisions, but now the debate has shifted to the details.
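Technically, that herald is a single HTTP header, DNT: 1, attached to every request; whether anything changes is entirely up to the receiving site. A minimal sketch in Python (the server-side logic describes a hypothetical cooperating site, not anything the standard enforces):

```python
# Client side: enabling Do Not Track adds one header to every request.
request_headers = {"DNT": "1", "User-Agent": "ExampleBrowser/1.0"}

# Server side: honoring the signal is voluntary; this server chooses to.
def handle_request(headers):
    if headers.get("DNT") == "1":
        return {"body": "<html>...</html>", "set_cookies": []}  # no trackers
    return {"body": "<html>...</html>",
            "set_cookies": ["visitor_id=abc123"]}  # business as usual

response = handle_request(request_headers)
print(response["set_cookies"])  # [] -- but only because this site cooperates
```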
There is a fascinating scrum over what Do Not Track tools should do
and what orders Web sites will have to respect from users. The Digital
Advertising Alliance (of which the NAI is a part), the Federal Trade Commission, the W3C, the Internet Advertising Bureau (also part of the DAA), and
privacy researchers at academic institutions are all involved. In November,
the DAA put out a new set of principles that contain some good
ideas like the prohibition of “collection, use or transfer of Internet surfing
data across Websites for determination of a consumer’s eligibility for
employment, credit standing, healthcare treatment and insurance.”
In February 2012, the White House seemed to side with privacy advocates who want to limit collection, not just uses. Its Consumer Privacy
Bill of Rights pushes companies to allow users to “exercise control
over what personal data companies collect from them and how they use
it.” The DAA heralded its own participation in the White House process,
though even it noted that this is the beginning of a long journey.
There has been a clear and real philosophical difference between the
advertisers and regulators representing web users. On the one hand, as the
Stanford privacy researcher Jonathan Mayer put it, “Many stakeholders
on online privacy, including U.S. and EU regulators, have repeatedly
emphasized that effective consumer control necessitates restrictions on
the collection of information, not just prohibitions on specific uses of
information.” But advertisers want to keep collecting as much data as they
can as long as they promise not to use it to target advertising. That’s why
the NAI opt-out program works like it does.
Let’s not linger too long on the technical implementation here: there
may be some topics around which compromises can be found. Some
definition of Do Not Track that suits industry and privacy people may
be crafted. Various issues related to differences between first- and third-party cookies may be resolved. But the battle over data collection and ad
targeting goes much deeper than the tactical, technical issues
that dominate the discussion.
Let’s assume good faith on behalf of advertising companies and
confront the core issue head-on: Should users be able to stop data collection,
even if companies aren’t doing anything “bad” with it? Should that be a
right, as the White House contends, and more importantly, why?
***
Companies’ ability to track people online has significantly outpaced
cultural norms and expectations of privacy. This is not because online
companies are worse than their offline counterparts, but rather because
what they can do is so, so different. We don’t have a language for talking
about how these companies function or how our society should deal with
them.
The word you hear over and over and over about targeted ads is
creepy. It even crops up in the academic literature, despite its vague
meaning in this context. My intuition is that we use the word creepy
precisely because it is an indeterminate word. It connotes that back-of-the-neck-tingling feeling, but not necessarily more than that. The creepy
feeling is a sign to pay attention to a possibly harmful phenomenon. But
we can’t sort our feelings into categories—dangerous or harmless—because
we don’t actually know what’s going to happen with all the data that’s
being collected.
Not only are there more than 100 companies that are collecting data
on us, making it practically impossible to sort good from bad, but there
are key unresolved issues about how we relate to our digital selves and the
machines through which they are expressed.
At the heart of the problem is that we increasingly live two lives: a
physical one in which your name, social security number, passport number,
and driver’s license are your main identity markers; and a digital one in
which you have dozens of identity markers, known to you and me as
cookies. These markers allow data gatherers to keep tabs on you without
your name. Those cookie numbers, which are known only to the entities
that assigned them to you, are persistent markers of who you are, but they
remain unattached to your physical identity through your name. There is
a (thin) wall between the self that buys health insurance and the self that
searches for health-related information online.
For real-time advertising bidding, in which audiences are being served
ads that were purchased milliseconds after users arrive at a Web page, ad
services “match cookies,” so that both sides know who a user is. While that
information may not be stored by both companies—that is, it’s not added
to a user’s persistent file—its existence means that the walls between online
data selves are falling away quickly. Everyone can know who you are, even
if they call you by different numbers.
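Schematically, “cookie matching” amounts to two companies building a translation table between their separate pseudonyms for the same browser. A toy sketch in Python, with invented IDs:

```python
# Each company knows the same browser by a different number.
exchange_ids = {"browser_x": "EX-4821", "browser_y": "EX-5535"}
advertiser_ids = {"browser_x": "ADV-9177", "browser_y": "ADV-0042"}

# A cookie sync builds the mapping between the two namespaces.
sync_table = {ex_id: advertiser_ids[browser]
              for browser, ex_id in exchange_ids.items()}

# When the exchange announces EX-4821 in a bid request, the advertiser
# knows exactly which of its own profiles to consult.
print(sync_table["EX-4821"])  # ADV-9177
```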
Furthermore, many companies are just out there collecting data to
sell to other companies. Anyone can combine multiple databases together
into a fully fleshed-out digital portrait. As a Wall Street Journal
investigative report put it, data companies are “transforming the
Internet into a place where people are becoming anonymous in name only.”
Joe Turow, who recently published a book on online privacy, had even
stronger words:
If a company can follow your behavior in the digital
environment—an environment that potentially includes
your mobile phone and television set—its claim that you
are “anonymous” is meaningless. That is particularly true
when firms intermittently add off-line information such as
shopping patterns and the value of your house to their online
data and then simply strip the name and address to make it
“anonymous.” It matters little if your name is John Smith, Yesh
Mispar, or just a number. The persistence of information about you
will lead firms to act based on what they know, share, and
care about you, whether you know it is happening or not.
Militating against this collapse of privacy is a protection embedded in
the very nature of the online advertising system. No person could ever
actually look over the world’s Web tracks. It would be too expensive, and
even if you had all the human laborers in the world, they couldn’t do the
math fast enough to constantly recalculate Web surfers’ value to advertisers.
So, machines are the ones that do all of the work.
When new technologies come up against our expectations of privacy,
I think it’s helpful to make a real-world analogy. But we just do not
have an adequate understanding of anonymity in a world where machines
can parse all of our behavior without human oversight. Most obviously,
with the machine, you have more privacy than if a person were watching
your clickstreams, picking up collateral knowledge. A human could easily
apply analytical reasoning skills to figure out who you were. And any
human could use this data for unauthorized purposes. With our data-driven
advertising world, we are relying on machines’ current dumbness and
inability to “know too much.”
This is a double-edged sword. The current levels of machine intelligence insulate us from privacy catastrophe, so we let data be collected about
us. But we know that this data is not going away, and machine intelligence
is growing rapidly. The results of this process are ineluctable. Left to their
own devices, ad tracking firms will eventually be able to connect your
various data selves. And then they will break down the name wall, if they
are allowed to.
***
Your visit to this story probably generated data for dozens of companies
through our Web site. The great downside to this beautiful, free Web that
we have is that you have to sell your digital self in order to access it. If you’d
like to stop data collection, take a look at Do Not Track Plus. It goes
beyond Collusion and browser-based controls in blocking data collection
outright.
But I am ultimately unclear what I think about using these tools.
Rhetorically, they imply that there will be technological solutions to these
data-collection problems. Undoubtedly, tech elites will use them. The
problem is the vast majority of Internet users will never know what’s
churning beneath their browsers. And the advertising lobby is explicitly
opposed to setting browser defaults for higher levels of “Do Not Track”
privacy. There will be nothing to protect users from unwittingly giving
away vast amounts of data about who they are.
On the other hand, these are the tools that allow Web sites to eke out a
tiny bit more money than they otherwise would. I am all too aware of how
difficult it is for media businesses to survive in this new environment. Sure,
we could all throw up paywalls and try to make a lot more money from a
lot fewer readers. But that would destroy what makes the Web the unique
resource in human history that it is. I want to keep the Internet healthy,
which really does mean keeping money flowing from advertising.
I wish there were more-obvious villains in this story. The saving grace
may end up being that, as companies go to more-obtrusive and higher-production-value ads, targeting may become ineffective. Avi Goldfarb of the Rotman School of Management and Catherine Tucker of MIT’s Sloan
School found last year that the big, obtrusive ads that marketers love do
not work better with targeting, but worse.
“Ads that match both website content and are obtrusive do worse at
increasing purchase intent than ads that do only one or the other,” they
wrote in a 2011 Marketing Science journal paper. “This failure
appears to be related to privacy concerns: the negative effect of combining
targeting with obtrusiveness is strongest for people who refuse to give their
income and for categories where privacy matters most.”
Perhaps there are natural limits to what data targeting can do for
advertisers. Perhaps, when we look back years from now at why data-collection
practices changed, it will not be because of regulation or self-regulation or
a user uprising. No, it will be because the best ads could not be targeted. It
will be because the whole idea did not work, and the best minds of the next
generation will turn their attention to something else.
26.
Did Facebook Give Democrats
the Upper Hand?
THE MASSIVE SOCIAL EXPERIMENT TO PROVE FACEBOOK CAN GET PEOPLE TO VOTE.
by Rebecca J. Rosen
You didn’t know it at the time, but when you logged into Facebook last
Election Day, you became a subject in a mass social experiment. You
clicked around Facebook, then went about your day. You may have voted.
Now your behavior is data that social scientists will scrutinize in the
months ahead, asking one of the core questions of democracy: What makes
people vote? If patterns from earlier research hold true, the experiment’s
designer, James Fowler, says that it is “absolutely plausible that Facebook
drove a lot of the increase” in young-voter participation (thought
to be up one percentage point from 2008 as a share of
the electorate). It is, he continued, “even possible that Facebook is
completely responsible.”
Assuming you were over the age of 18 and using a computer in the
United States, you probably saw a message at the top of your Facebook
page advising you that, surprise!, it was Election Day. There was a link
where you could find your polling place, a button that said either “I’m voting” or “I’m a voter,” and pictures of the faces of friends who had already
declared that they’d voted, which also appeared in your news feed. If you
saw something like that, you were in good company: 94 percent of 18-and-older U.S. Facebook users got that treatment, assigned randomly, of
course. Though it’s not yet known how many people that is, in a similar experiment performed in 2010, the number was 61 million. Presumably it was even more on Election Tuesday 2012, given that Facebook has grown substantially in the past two years.
But here’s the catch: six percent of people didn’t get the intervention.
Two percent saw nothing—no message, no button, no news stories. Another two percent saw the message, but no stories of friends’ voting behavior populated their feeds, and a final two percent saw only the social content
but no message at the top. By splitting up the population into these
experimental and control groups, researchers will be able to see if the
messages had any effect on voting behavior when they begin matching
the Facebook users to the voter rolls (whom a person voted for is private
information, but whether they voted is public). If those who got the
experimental treatment voted in greater numbers, as is expected, Fowler
and his team will be able to have a pretty good sense of just how many
votes in the 2012 election came directly as a result of Facebook.
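As a rough illustration of how this kind of random assignment might be implemented, here is a sketch in Python. The bucket sizes follow the percentages described above; the hashing scheme is my own assumption, not Facebook’s published method:

```python
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministically sort each user into a treatment or control bucket."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < 94:
        return "full treatment"   # banner, button, and friends' faces
    elif bucket < 96:
        return "message only"     # banner and button, no social content
    elif bucket < 98:
        return "social only"      # friends' faces, no banner
    else:
        return "control"          # nothing at all

counts = {}
for i in range(100_000):
    group = assign_group(f"user-{i}")
    counts[group] = counts.get(group, 0) + 1
print(counts)  # roughly 94% / 2% / 2% / 2%
```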
They’ve done a very similar experiment before, and the results were
significant. In a paper published earlier this year in Nature, Fowler and
his colleagues announced that a Facebook message and behavior-sharing communication increased the probability that a person votes by a fraction of a percentage point. That may not seem like a huge effect, but
when you have a huge population, as Facebook does, a small uptick in
probability means substantial changes in voting behavior.
“Our results suggest,” the team wrote, “that the Facebook social message
increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes.” This finding—remarkable and novel as it may be—is in concert
with earlier research that has shown that social pressure strongly influences
voting activity, such as in this 2008 study, which found that people
were significantly more likely to vote if they received mailings promising
to later report who from the neighborhood had voted and who had stayed
at home.
Although months of door knocking, phone calls, and other traditional
campaign tactics surely bring more people to the polls, those measures
are expensive and labor-intensive. Nothing seems to come even close to
a Facebook message’s efficacy in increasing voter turnout. “When we
were trying to get published,” Fowler, a professor at UC San Diego, told
me, “We had reviewers who said, ‘These results are so small that they’re
meaningless,’ and other reviewers who said, ‘These results are implausibly
large. There’s no way this can be true.’” In a country where elections can
turn on just a couple hundred votes, it’s not far-fetched to say that, down
the road, Facebook’s efforts to improve voter participation could swing an
election, if they haven’t already.
Now, it must be said that of course Facebook is not trying to elect
Democrats. Facebook has an admirable civic virtue and has long tried to
increase democratic participation in a strictly nonpartisan way. “Facebook,”
Fowler said to me, “wants everyone to be more likely to vote. Facebook
wants everyone to participate in the fact of democracy.”
But that doesn’t mean the effects of Facebook’s efforts are not lopsided.
Outside of Facebook’s demographic particularities, there are reasons to
believe that improved voter turnout in general helps Democrats, though
there is a debate about this within political science.
In practice, though, there is no such thing as a pure get-out-the-vote
effort, one whose tide raises all votes, and Facebook is no exception. It
skews toward both women and younger voters, two groups which
tended to prefer Democrats in this election. Sixty percent of 18-to-29-year-olds voted for Obama, compared with 37 percent for Romney. The next age group, 30-to-44-year-olds, gave Obama 52 percent of their support. Among Americans older than 44, Romney won. The implication is clear: If
Facebook provides a cheap and effective way to get more people to the polls,
and it seems that it does, that is good news for Democrats. For Republicans,
well, it’s an uncomfortable situation when increasing voter participation is
a losing strategy.
***
Facebook’s effect—however big it is—is at the margins, and in a country
where elections are so close, the margins can matter a lot. But there are long-term trends underfoot that, for Republicans, mean their troubles go beyond
Facebook. This year was the third presidential election in a row where
young-voter participation hovered around 50 percent (meaning that half
of eligible young people actually voted), according to Peter Levine of the
Center for Information and Research on Civic Learning and Engagement at
Tufts University. This is up from a low of just 37 percent in 1996. Obviously,
that sort of shift is not solely the result of a Facebook message. “This seems
to be a significant generational change,” he told me.
The reasons for this generational change are complex and not totally
understood. One certain factor is simply that recent campaigns have tried
harder to reach out to young people. “In the 1990s the conventional wisdom
was that young people don’t vote,” Levine said. “So they would literally look
through a contact list of potential voters to reach out to, and just delete
the young people—not to discriminate but because they were trying to be
efficient.”
But that ended with what Levine calls the “50/50 nation situation”—
elections so close that campaigns could no longer afford to write off huge
parts of the population. “It had already begun to change, but Obama won on the strength of young votes in ’08—not only in the general election
but especially in the primaries. They gave him Iowa. Without Iowa, no
President Obama.”
For years, it was thought that one reason young people voted in such
low numbers was that they are so mobile: If you don’t live in a place
for years, voting is, as political scientists term it, “costly.” Every time you
move, you have to re-register, find your polling place, and, if you plan to
vote in local races, spend time getting up to speed on complicated municipal
politics. But this calculus of inaction may be changing: with the Internet,
it’s much easier to find where your polling place is. And our online media
environment privileges national politics over local politics so much that
national politics alone may entice more people to the polls.
Additionally, Constance Flanagan of the University of Wisconsin
argues, there’s been a backlash on college campuses to voter-suppression
efforts. “The voter-suppression thing did make people more aware,” she
said. “Our university newspaper had a front-page story about what are your
rights, do you have to produce an ID . . . It was a conversation topic among
young people and something they passed on to one another.” Particularly,
she said, minority groups who felt targeted responded by organizing themselves and making sure people voted. (Ta-Nehisi Coates and Andrew
Cohen have both written about this backlash here at The Atlantic.)
Even with these recent improvements, Levine reminds us to bear
in mind that we’re still only at 50 percent, and the other 50 percent, those who are not voting, are not a random sample of the population—they typically have lower incomes and less education, and are not into politics.
With social media becoming an increasingly important part of political
communications, campaigns and activists should ask whether there’s any
way to use social media to get to them. “I think we do incrementally,
because there’s some serendipity where your old high-school friend gets
into politics and draws you in, but does it happen at a large scale? I don’t
think we know,” he said.
All of this adds up to a shifting electoral environment, as young people
come out to the polls in greater numbers, and are more easily reached
online. “In terms of good news for the Democrats,” Donald Green of
Columbia University told me, “the fact that you have this age cohort that
has been socialized into very strong presidential support for Democrats is
in some ways the countervailing force to the age cohort that was socialized
into strong Republican support under Reagan.” Voting once is known to
be habit-forming. Those who were brought to the polls by the wave of enthusiasm for Obama in ’08 will likely vote with greater frequency for the rest of their lives, even in a more humdrum election, as 2012’s felt.
And that’s perhaps the worst news of all for the Republicans. A wave
of enthusiasm is one thing, an anomaly in an otherwise 50/50 nation. But
now the tide has gone out, and the seashore looks changed.
27.
The Philosopher and the FTC
HELEN NISSENBAUM’S OUTSIZE INFLUENCE ON PRIVACY POLICY
by Alexis C. Madrigal
A mile or two away from Facebook’s headquarters in Silicon Valley, Helen
Nissenbaum of New York University was standing in a basement on
Stanford’s campus explaining that the entire way that we’ve thought about
privacy on the Internet is wrong. It was not a glorious setting. The lighting
was bad. The room was half empty. Evgeny Morozov was challenging her
from the back of the room with strings of tough, almost ludicrously detailed
questions.
Nissenbaum’s March presentation was part of Stanford’s Program on
Liberation Technology and relied heavily on her influential recent
research, which culminated in the 2010 book Privacy in Context,
and subsequent papers like “A Contextual Approach to Privacy
Online.”
But the most important product of Nissenbaum’s work does not
have her byline. She’s played a vital role in reshaping the way
our country’s top regulators think about consumer data. As
one measure of her success, the recent Federal Trade Commission report
“Protecting Consumer Privacy in an Era of Rapid Change,”
which purports to lay out a long-term privacy framework for legislators,
businesses, and citizens, uses the word context an astounding number of times.
Given the intellectual influence she’s had, it’s important to understand
how what she’s saying is different from the conclusions of other privacy
theorists. The standard explanation for privacy freak-outs is that people get
upset because they’ve “lost control” of data about themselves or there is
simply too much data available. Nissenbaum argues that the real problem
“is the inappropriateness of the flow of information due to the mediation
of technology.” In her scheme, messages have senders and receivers, who
communicate different types of information with very specific expectations
of how it will be used. Privacy violations occur not when too much data
accumulates or people can’t direct it, but when one of the receivers or transmission principles changes. The key academic term is “context-relative
informational norms.” Bust a norm and people get upset.
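For readers who think in code, the framework reduces to a checkable structure. The sketch below is our own toy model, not Nissenbaum’s formalism: a flow of information is a tuple of sender, receiver, information type, and transmission principle, judged against a table of entrenched norms. The norms listed are hypothetical stand-ins for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One transmission of personal information, in Nissenbaum's terms."""
    sender: str        # who discloses, e.g., "patient"
    receiver: str      # who obtains, e.g., "doctor"
    info_type: str     # e.g., "medical history"
    principle: str     # e.g., "confidentiality", "reciprocity"

# Hypothetical entrenched norms, keyed by (sender, receiver, info type).
# Real contexts involve many more parameters; this is just the skeleton.
NORMS = {
    ("patient", "doctor", "medical history"): "confidentiality",
    ("pedestrian", "fellow pedestrian", "street appearance"): "reciprocity",
}

def violates_norm(flow: Flow) -> bool:
    """A privacy violation: the flow departs from the context's norm."""
    expected = NORMS.get((flow.sender, flow.receiver, flow.info_type))
    return expected is None or expected != flow.principle

# Google's camera car captures the same information a passerby would see,
# but the receiver and the transmission principle change -- a norm is busted.
street_view = Flow("pedestrian", "Street View car",
                   "street appearance", "one-way capture")
print(violates_norm(street_view))  # True
```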
This may sound simple, but it actually leads to different analyses
of current privacy dilemmas and may suggest better ways of dealing
with data on the Internet. A quick example: Remember the hubbub over
Google Street View in Europe? Germans, in particular, objected to the
photo-taking cars. Many people, using the standard privacy paradigm,
thought, What’s the problem? You’re standing out in the street. It’s public!
But Nissenbaum argues that the reason some people were upset is that
reciprocity is a key part of the informational arrangement. If I’m out in the
street, I can see who can see me, and know what’s happening. If Google’s
car buzzes by, I haven’t agreed to that encounter. Ergo, privacy violation.
Nissenbaum gets us past thinking about privacy as a binary: either
something is private or it is public. Nissenbaum puts the context—or social
situation—back into the equation. What you tell your bank, you might
not tell your doctor. What you tell your friend, you might not tell your
father-in-law. What you allow a next-door neighbor to know, you might
not allow Google’s Street View car to know. Furthermore, these differences
in information sharing are not bad or good; they are just the norms.
Perhaps most importantly, Nissenbaum’s paradigm lays out ways in
which sharing can be a good thing. That is to say, more information
becoming available about you may not automatically be a bad thing. In fact,
laying out the transmission principles for given situations may encourage
people, both as individuals and collectively, to share more and attain greater
good. When a House of Representatives committee is holding a hearing
on privacy titled “Balancing Privacy and Innovation,” which really
should be titled “Balancing Privacy and Corporate Interests,” any privacy
regulation that’s going to make it through Congress has to provide clear
ways for companies to continue profiting from data tracking. The key
is coming up with an ethical framework in which they can do so, and
Nissenbaum may have done just that.
Right now, people are willing to share data for the free stuff they get on
the Web. Partly, that’s because the stuff on the Web is awesome. And partly,
that’s because people don’t know what’s happening on the Web. When they
visit a Web site, they don’t really understand that a few dozen companies
may collect data on that visit.
The traditional model of how this works says that your information is
something like a currency, and when you visit a Web site that collects data
on you for one reason or another, you enter into a contract with that site.
As long as the site gives you “notice” that data collection occurs—usually
via a privacy policy located through a link at the bottom of the page—and
you give “consent” by continuing to use the site, then no harm has been
done. No matter how much data a site collects, if all they do is use it to
show you advertising they hope is more relevant to you, then they’ve done
nothing wrong.
It’s a free-market kind of thing. You are a consumer of Internet pages,
and you are free to go from one place to another, picking and choosing
among the purveyors of information. Never mind that if you actually read all the privacy policies you encounter in a year, it would take 76 work days.
And that calculation doesn’t even account for all the third parties that drain
data from your visits to other Web sites.
Even more to the point: there is no obvious way to discriminate
between two separate Web pages on the basis of their data-collection
policies. While tools have emerged to tell you how many data trackers
are being deployed at any site at a given moment, the dynamic nature of
Internet advertising means that it is nearly impossible to know the story
through time. Advertising space can be sold and resold many times. At
each juncture, the new buyer has to have some information about the visit.
Ads can be sold by geography or probable demographic indicators, too, so
many, many companies may be involved with the data on an individual
site.
I asked Evidon, the makers of a track-the-trackers tool called Ghostery,
to see how many data trackers ran during the past month on four news Web
sites and on TheAtlantic.com. The numbers were astonishing. The Drudge
Report and Huffington Post both ran more than
trackers. The New York
Times ran
and The Wall Street Journal . The Atlantic deployed .
Of course, these are just the numbers: data-tracking firms are invasive in different ways, so it could be possible that our tracking tools collect just as much data as Drudge’s. Even if the sheer numbers seem to indicate that something different in degree is happening at Drudge and Huffington Post than at our site, I couldn’t tell you for sure that was the case.
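Counting of this kind is not magic, though. The sketch below is emphatically not Evidon’s method—Ghostery matches network requests against a curated library of known trackers—but it conveys the flavor of the measurement with nothing more than Python’s standard library: tally the distinct outside domains whose scripts a page loads directly.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptFinder(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def third_party_script_domains(page_url):
    """A crude lower bound on trackers: the distinct outside domains
    whose scripts a page loads directly. Real tools also watch cookies,
    tracking pixels, and the further requests those scripts trigger."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    finder = ScriptFinder()
    finder.feed(html)
    site = urlparse(page_url).netloc
    domains = {urlparse(src).netloc for src in finder.srcs}
    return {d for d in domains if d and d != site}

print(sorted(third_party_script_domains("https://www.theatlantic.com/")))
```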
How can anyone make a reasonable determination of how their information might be used when there are dozens upon dozens of tools in play on a single Web site in a single month? “I think the biggest
challenge we have right now is figuring out a way to educate the average
user in a way that’s reasonable,” Evidon’s Andy Kahl told me. Some people
talk about something like a nutritional label for data policies. Others, like
Stanford’s Ryan Calo, talk about “visceral notice.”
Nissenbaum doesn’t think it’s possible to explain the current online
advertising ecosystem in a useful way without resorting to a lot of detail.
She calls this the “transparency paradox,” and considers it insoluble. What,
then, is her answer, if she’s thinking about chucking basically the only
privacy protections that we have on the Internet?
Well, she wants to import the norms from the offline world into the
online world. When you go to a bank, she says, you have expectations of
what might happen to your communications with that bank. That should
be true whether you’re online, on the phone, or at the teller. Companies can
use your data to do bank stuff, but they can’t sell your data to car dealers
looking for people with a lot of cash on hand.
That answer, as applied by the FTC in their new framework, is to let
companies do standard data collection but require them to tell people when
they are doing things with data that are inconsistent with the “context of
the interaction” between a company and a person.
I’ve spent this entire story extolling the virtues of Nissenbaum’s privacy
paradigm. But here’s the big downside: it rests on the “norms” that people
expect. While that may be socially optimal, it’s actually quite difficult to
figure out what the norms for a given situation might be. After all, consider
who else depends on norms for his thinking about privacy.
“People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people,” Mark
Zuckerberg told an audience in 2010. “That social norm is just
something that has evolved over time.”
28.
The Rise and Fall of the WELL
AN EARLY WELL ORGANIZER REFLECTS ON ITS INFLUENCE.
by Howard Rheingold
In the late 1980s, decades before the term social media existed, in a
now legendary and miraculously still-living virtual community called “the
WELL,” a fellow who used the handle Philcat logged on one night in a panic:
his son Gabe had been diagnosed with leukemia, and in the middle of the
night, he had no one else to turn to but the friends he knew primarily by
the text we typed to each other via primitive personal computers and slow
modems.
By the next morning, an online support group had coalesced, including
an MD, an RN, a leukemia survivor, and several dozen concerned parents
and friends. Over the next couple years, we contributed more than $ ,
to Philcat and his family. We hardly saw each other in person until we met
in the last two pews at Gabe’s memorial service.
Flash forward nearly three decades. I have not been active in the WELL for many years. But when the word got around that I had
been diagnosed with cancer (I’m healthy now), people from the WELL
joined my other friends in driving me to my daily radiation treatments.
Philcat was one of them. Like many who harbor a special attachment to
their hometown long after they leave for the wider world, I’ve continued
to take an interest—at a distance—in the place where I learned that strangers
connected only through words on computer screens could legitimately be
called a “community.”
I got the word that the WELL was for sale via Twitter, which seemed
either ironic or appropriate, or both. Salon, which has owned the WELL
since 1999, has put the database of conversations, the list of subscribers,
and the domain name on the market, I learned.
I was part of the WELL almost from the very beginning. The Whole
Earth ’Lectronic Link was founded in the spring of 1985—before Mark
Zuckerberg’s first birthday. I joined in August of that first year.
I can’t remember how many WELL parties, chili cook-offs, trips to the
circus, and other events—somewhat repellingly called “fleshmeets” at the
time—I attended. My baby daughter and my -year-old mother joined me
on many of those occasions. I danced at three weddings of WELLbeings, as
we called ourselves, attended four funerals, on one occasion brought food
and companionship to the bedside of a dying WELLbeing. WELL people
babysat for my daughter, and I watched their children.
Don’t tell me that “real communities” can’t happen online.
In the early 1980s, I had messed around with BBSs, CompuServe, and
the Source for a couple of years, but the WELL, founded by Stewart Brand
of Whole Earth Catalog fame and Larry Brilliant (who more recently
was the first director of Google’s philanthropic arm, Google.org), was a
whole new level of online socializing. The text-only and often maddeningly
slow-to-load conversations included a collection of people who were more
diverse than the computer enthusiasts, engineers, and college students to
be found on Usenet or in MUDs: the hackers (when hacking meant creative
programming rather than online breaking and entering), political activists,
journalists, actual females, educators, a few people who weren’t white and
middle class.
PLATO, Usenet, and BBSs all pre-dated the WELL. But what happened
in this one particular online enclave in the 1980s had repercussions we
would hardly have dreamed of when we started mixing online and face-to-face lives at WELL gatherings. Steve Case lurked on the WELL before
he founded AOL and so did Craig Newmark, a decade before he started
Craigslist. Wired did a cover story titled “The Epic Saga of the WELL” by The New York Times reporter Katie Hafner in 1997 (expanded into a book in 2001), and in 2006, Stanford’s Fred Turner published From
Counterculture to Cyberculture: Stewart Brand, the Whole
Earth Network, and the Rise of Digital Utopianism,
which
traced the roots of much of today’s Web culture to the experiments we
performed in the WELL more than a quarter century ago. The WELL was
also the subject of my Whole Earth Review article that apparently put the
term virtual community in the public vocabulary, and a key chapter in my
1993 book, The Virtual Community.
Yet, despite the historic importance of the WELL, I’ve grown accustomed to an online world in which the overwhelming majority of Facebook
users have no idea that a thriving online culture existed in the 1980s.
In 1994, the WELL was purchased from its then-owners, the Point
Foundation (the successor to the Whole Earth organization) and NETI,
Larry Brilliant’s defunct computer conferencing-software business. The
buyer, the Rockport shoe heir Bruce Katz, was well-meaning. He upgraded
all the infrastructure and hired a staff. But his intention to franchise the
WELL didn’t meet with the warmest reception from the WELL community.
Let’s just say that there was a communication mismatch between the
community and the new owner.
Panicked that our beloved cyber-home was going to mutate into
McWell, a group of WELLbeings organized to form The River, which was
going to be the business and technical infrastructure for a user-owned
version of the WELL. Since the people who talk to each other online
are both the customers and the producers of the WELL’s product, a co-op seemed the way to go. But the panic exacerbated existing
animosities—hey, it isn’t a community without feudin’ and fightin’!—and
the River turned into an endless shareholders meeting that never achieved
a critical mass. Katz sold the WELL to Salon. Why and how Salon kept
the WELL alive but didn’t grow it is another story. After the founders and
Katz, Salon was the third benevolent, absentee landlord since the WELL’s
founding. It’s healthy for the WELLbeings who remain—it looks like about
a thousand check in regularly, a couple of hundred more are highly active
users, and a few dozen are in the conversation about buying the WELL—to
finally figure out how to fly this thing on their own.
In 1985, the hardware (a VAX 11/750, with less memory than today’s
smartphones) cost a quarter of a million dollars and required a closet full
of telephone lines and modems. Today, the software that structures WELL
discussions resides in the cloud and expenses for running the community
infrastructure include a bookkeeper, a system administrator, and a support
person. It appears that discussions of the WELL’s future today are far less
contentious and meta than they were years ago. A trusted old-timer—one
of the people who drove me to cancer treatments—is handling negotiations
with Salon. Many, many people have pledged $ ,
and more—several
have pledged $ ,
or $ ,
—toward the purchase.
The WELL has never been an entirely mellow place. It’s possible to
get thrown out for being obnoxious, but only after weeks of “thrash,”
as WELLbeings call long, drawn-out, repetitive, and often nasty meta-conversations about how to go about deciding how to make decisions. As a
consequence of a lack of marketing budget, of the proliferation of so many
other places to socialize online, and (in my opinion) as a consequence of
this tolerance for free speech at the price of civility (which I would never
want the WELL to excise; it’s part of what makes the WELL the WELL),
the WELL population topped out at about 10,000 users in the mid-1990s.
It’s been declining ever since. If modest growth of new people becomes
economically necessary, perhaps the atmosphere will change. In any case,
I have little doubt that the WELL community will survive in some form.
Once they achieve a critical mass, and when they’ve survived for 25-odd years, virtual communities can be harder to kill than you’d think.
Recent years have seen critical books and media commentary on our
alienating fascination with texting, social media, and mediated socializing
in general. The University of Toronto sociologist Barry Wellman calls
this “the community question,” and conducted empirical research
that demonstrated how people indeed can find “sociability, social support,
and social capital” in online social networks as well as in geographic
neighborhoods. With so much armchair-psychology tut-tutting over our
social media habits these days, empirical evidence is a welcome addition to
this important dialogue about sociality and technology. But neither Philcat
nor I need such evidence to prove that the beating heart of community can
thrive among people connected by keyboards and screens as well as those
connected by backyard fences and neighborhood encounters.
29.
The Professor Who Fooled
Wikipedia
A SCANDALOUS CLASS ASSIGNMENT, AND TRUTH ON THE INTERNET
by Yoni Appelbaum
A woman opens an old steamer trunk and discovers tantalizing clues that a
long-dead relative may actually have been a serial killer, stalking the streets
of New York in the closing years of the 19th century. A beer enthusiast
is presented by his neighbor with the original recipe for Brown’s Ale,
salvaged decades before from the wreckage of the old brewery—the very
building where the Star-Spangled Banner was sewn in 1813. A student buys
a sandwich called the Last American Pirate and unearths the long-forgotten
tale of Edward Owens, who terrorized the Chesapeake Bay in the 1870s.
These stories have two things in common. They are all tailor-made for
viral success on the internet. And they are all lies.
Each tale was carefully fabricated by undergraduates at George Mason
University who were enrolled in T. Mills Kelly’s course Lying About
the Past. Their escapades not only went unpunished, they were actually
encouraged by the students’ professor. Four years ago, students created a
Wikipedia page detailing the exploits of Edward Owens and successfully
fooled Wikipedia’s community of editors. This year, though, one group
of students made the mistake of launching its hoax on Reddit. What the
students learned in the process provides a valuable lesson for anyone who
turns to the Internet for information.
The first time Kelly taught the course, in 2008, his students confected
the life of Edward Owens, mixing together actual lives and events with
brazen fabrications. They created YouTube videos, interviewed experts,
scanned and transcribed primary documents, and built a Wikipedia page
to honor Owens’s memory. The romantic tale of a pirate plying his trade
in the Chesapeake struck a chord, and quickly landed on USA Today’s pop-culture blog. When Kelly announced the hoax at the end of the semester,
some were amused, applauding his pedagogical innovations. Many others
were livid.
Critics decried the creation of a fake Wikipedia page as digital vandalism. “Things like that really, really, really annoy me,” fumed founder Jimmy
Wales, comparing it to dumping trash in the streets to test the willingness
of a community to keep it clean. But the indignation may, in part, have been
compounded by the weaknesses the project exposed. Wikipedia operates on
a presumption of good will. Determined contributors, from public-relations
firms to activists to pranksters, often exploit that, inserting information
they would like displayed. The sprawling scale of Wikipedia, with nearly
four million English-language entries, ensures that even if overall quality
remains high, many such efforts will prove successful.
Last January, as he prepared to offer the class again, Kelly put the
Internet on notice. He posted his syllabus and announced that his new,
larger class was likely to create two separate hoaxes. He told members
of the public to “consider yourself warned—twice.”
This time, the class decided not to create false Wikipedia entries.
Instead, it used a slightly more insidious stratagem, creating or expanding
strictly factual Wikipedia articles, and then using its own Web sites to stitch
together these truthful claims into elaborate hoaxes.
One group took its inspiration from the fact that the
original Star-Spangled Banner had been sewn on the floor of Brown’s
Brewery in Baltimore. The group decided that a story that good deserved
a beer of its own. The students crafted a tale of discovering the old recipe
used by Brown’s to make its brews, registered BeerOf1812.com, built
a Wikipedia page for the brewery, and tweeted out the tale on their
Twitter feed. No one suspected a thing. In fact, hardly anyone even noticed.
They did manage to fool one well-meaning DJ in Washington, DC, but
the hoax was otherwise a dud.
The second group settled on the story of the serial killer Joe Scafe. Using
newspaper databases, the students identified four actual women murdered
in New York City from
to
, victims of broadly similar crimes. They
created Wikipedia articles for the victims, carefully following the rules of
the site. They concocted an elaborate story of discovery, and fabricated
images of the trunk’s contents. Then, the class prepared to spring its surprise on an unsuspecting world. A student posing as Lisa Quinn logged into
Reddit, the popular social-news Web site, and posed an eye-catching question: “Opinions please, Reddit. Do you think my ‘Uncle’ Joe
was just weird or possibly a serial killer?”
The post quickly gained an audience. Redditors dug up the victims’
Wikipedia articles, one of which recorded contemporary newspaper speculation that the murderer was the same man who had gone on a killing
spree through London. “The day reddit caught Jack the Ripper,” a redditor exulted. “I want to see these cases busted wide open!” wrote another.
“Yeah! Take that, Digg!” wrote a third.
But it took just
minutes for a redditor to call foul, noting the
Wikipedia entries’ recent vintage. Others were quick to pile on, deconstructing the entire tale. The faded newspaper pages looked artificially aged. The
Wikipedia articles had been posted and edited by a small group of new
users. Finding documents in an old steamer trunk sounded too convenient.
And why had Lisa been savvy enough to ask Reddit, but not enough to
Google the names and find the Wikipedia entries on her own? The hoax
took months to plan but just minutes to fail.
Why did the hoaxes succeed in 2008 and not in 2012? If thousands
of Internet users can be persuaded that Abraham Lincoln invented
Facebook, surely the potential for viral hoaxes remains undiminished.
One answer lies in the structure of the Internet’s various communities.
Wikipedia has a weak community, but centralizes the exchange of information. It has a small number of extremely active editors, but participation
is declining, and most users feel little ownership of the content. And
although everyone views the same information, edits take place on a
separate page, and discussions of reliability on another, insulating ordinary
users from any doubts that might be expressed. Facebook, where the
Lincoln hoax took flight, has strong communities but decentralizes the
exchange of information. Friends are quite likely to share content and to
correct mistakes, but those corrections won’t reach other users sharing or
viewing the same content. Reddit, by contrast, builds its strong community
around the centralized exchange of information. Discussion isn’t a separate
activity but the sine qua non of the site. When one user voiced doubts,
others saw the comment and quickly piled on.
But another, more compelling answer has to do with trust. Kelly’s
students, like all good con artists, built their stories out of small, compelling
details to give them a veneer of veracity. Ultimately, though, they aimed
to succeed less by assembling convincing stories than by exploiting the
trust of their marks, inducing them to lower their guard. Most of us assess
arguments, at least initially, by assessing those who make them. Kelly’s
students built blogs with strong first-person voices, and hit back hard at
skeptics. Those inclined to doubt the stories were forced to doubt their
authors. They inserted articles into Wikipedia, trading on the credibility
of that site. And they aimed at very specific communities: the “beer lovers
of Baltimore” and Reddit.
That was where things went awry. If the beer lovers of Baltimore form
a cohesive community, the class failed to reach it. And although most
communities treat their members with gentle regard, Reddit prides itself on
winnowing the wheat from the chaff. It relies on the collective judgment of
its members, who click on arrows next to contributions, elevating insightful
or interesting content, and demoting less worthy contributions. Even Kelly says he was impressed by the way in which Redditors “marshaled their
collective bits of expert knowledge to arrive at a conclusion that was largely
correct.” It’s tough to con Reddit.
The loose thread, of course, was the Wikipedia articles. The Redditors
didn’t initially clue in on their content, or identify any errors; they focused
on their recent vintage. The whole thing started to look as if someone was
trying too hard to garner attention. Kelly’s class used the imaginary Lisa
Quinn to put a believable face on their fabrications. When Quinn herself
started to seem suspicious, it didn’t take long for the whole con to unravel.
If there’s a simple lesson in all of this, it’s that hoaxes tend to thrive
in communities that exhibit high levels of trust. But on the Internet, where
identities are malleable and uncertain, we all might be well advised to err
on the side of skepticism.
Sometimes even an apparent failure can mask an underlying success.
The students may have failed to pull off a spectacular hoax, but they
surely learned a tremendous amount in the process. “Why would I design
a course,” Kelly asks on his syllabus, “that is both a study of historical
hoaxes and then has the specific aim of promoting a lie (or two) about
the past?” Kelly explains that he hopes to mold his students into “much
better consumers of historical information,” and at the same time, “to
lighten up a little” in contrast to “overly stuffy” approaches to the subject.
He defends his creative approach to teaching the mechanics of the
historian’s craft, and plans to convert the class from an experimental course
into a regular offering.
There’s also an interesting coda to this convoluted tale. The group
researching Brown’s Brewery discovered that the placard in front of the
Star-Spangled Banner at the National Museum of American History lists
an anachronistic name for the building in which it was sewn. The group
has written to the museum to correct the mistake. For those students, at
least, falsifying the historical record may prove less rewarding than setting
it straight.
30.
The Perfect Milk Machine
HOW BIG DATA TRANSFORMED THE DAIRY INDUSTRY
by Alexis C. Madrigal
Although there are more than million Holstein dairy cows in the United
States, there is exactly one bull that has been scientifically calculated to
be the very best in the land. He goes by the name of Badger-Bluff Fanny
Freddie.
Already, Badger-Bluff Fanny Freddie has
daughters who are on the
books, and thousands more that will be added to his progeny count when
they start producing milk. This is quite a career for a young animal: He was
only born in
.
There is a reason, of course, that the semen that Badger-Bluff
Fanny Freddie produces has become such a hot commodity in what one
artificial-insemination company calls “today’s fast paced cattle
semen market.” In January of 2009, before Badger-Bluff Fanny Freddie had a single daughter producing milk, the United States Department of Agriculture took a look at his lineage and more than 50,000 markers on his
genome, and declared him the best bull in the land. Three years and
milk- and data-providing daughters later, it turns out that they were right.
“When Freddie [as he is known] had no daughter records, our equations
predicted from his DNA that he would be the best bull,” the USDA research
geneticist Paul VanRaden e-mailed me with a detectable hint of pride.
“Now he is the best progeny-tested bull (as predicted).”
Data-driven predictions are responsible for a massive transformation
of America’s dairy cows. While other industries are just catching on to
this whole “big data” thing, the animal sciences—and dairy breeding in
particular—have been using large amounts of data since long before
VanRaden was calculating the outsize genetic impact of the most-sought-after bulls with a pencil and paper in the 1980s.
Dairy breeding is perfect for quantitative analysis. Pedigree
records have been assiduously kept; relatively easy artificial
insemination has helped centralize genetic information in a small
number of key bulls since the
s; there are a relatively small
number of easily measurable traits—milk production, fat in the
milk, protein in the milk, longevity, udder quality—that breeders want to
optimize; each cow works for three or four years and farmers invest
thousands of dollars into each animal, so it’s worth it to get the best
semen money can buy. The economics push breeders to use the genetics.
The bull market (heh) can be reduced to one key statistic, lifetime
net merit, though there are many nuances that the single number cannot
capture. Net merit denotes the likely additive value of a bull’s genetics. The
number is actually denominated in dollars because it is an estimate of how
much a bull’s genetic material will likely improve the revenue from a given
cow. A very complicated equation weights all of the factors that go
into dairy breeding and—voilà—you come out with this single number. For
example, a bull that could help a cow make an extra ,
pounds of milk
over her lifetime only gets an increase of $ in net merit while a bull who
will help that same cow produce a pound more protein will get $ . more
in net merit. An increase of a single month in predicted productive life
yields $ more.
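The arithmetic of such an index is simple to sketch, even if the real equation is not. The toy version below uses invented weights—the USDA publishes, and periodically revises, the actual ones—just to show how many trait predictions collapse into a single dollar figure.

```python
# Per-unit dollar weights for a few traits. These numbers are invented
# for illustration; they are not the USDA's published values.
WEIGHTS = {
    "milk_lbs": 0.001,               # an extra pound of milk is worth little
    "protein_lbs": 3.50,             # an extra pound of protein, far more
    "fat_lbs": 2.00,
    "productive_life_months": 30.0,  # each extra month of working life
    "daughter_pregnancy_rate": 25.0,
}

def net_merit(ptas):
    """Collapse a bull's predicted transmitting abilities into dollars."""
    return sum(WEIGHTS[trait] * value for trait, value in ptas.items())

# A hypothetical bull: modest on milk volume, strong on protein and longevity.
bull = {"milk_lbs": 500, "protein_lbs": 40, "fat_lbs": 30,
        "productive_life_months": 4, "daughter_pregnancy_rate": 2}
print(f"net merit: ${net_merit(bull):,.2f}")
```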
When you add it all up, Badger-Bluff Fanny Freddie has a net merit of
$ . No other proven sire scores more than $
and only seven bulls
in the country score more than $700. One might assume that this
is largely because the bull can help cows make more milk, but it’s not!
While breeders used to select for greater milk production, that’s no longer
considered the most important trait. For example, the No. 2 bull in America
is named Ensenada Taboo Planet-Et. His predicted transmitting ability for
milk production is +
, more than ,
pounds greater than Freddie’s.
His offspring’s milk will likely contain more protein and fat as well. But
his daughters’ productive life would be shorter and their pregnancy rate
lower. And these factors, as well as some traits related to the hypothetical
daughters’ size and udder quality, trump Planet’s impressive production
stats.
One reason for the change in breeding emphasis is that our cows
already produce tremendous amounts of milk relative to their forebears.
In
, when my father was born, the average dairy cow produced less
than 5,000 pounds of milk in its lifetime. Now, the average cow
produces more than 21,000 pounds of milk. At the same time, the
number of dairy cows has decreased from a high of 25 million around
the end of World War II to fewer than nine million today. This is an
indisputable environmental win, as fewer cows create less methane, a
potent greenhouse gas, and require less land.
At the same time, it turns out that cow genomes are more complex than
we thought: as milk production amps up, fertility drops. There’s an art to
balancing all the traits that go into optimizing a herd.
While we may worry about the use of antibiotics to
stimulate animal growth or the use of hormones to increase
milk production by up to 25 percent, most of the increase in the pounds of milk an animal produces since the pastoral days of yore comes from the genetic changes that we’ve wrought within these animals. It
doesn’t matter how the cow is raised—in an idyllic pasture or a feedlot—
either way, the animal of today is not the animal of decades past. A group of USDA and University of Minnesota scientists calculated that 22 percent of the genome of Holstein cattle has been altered by human selection over the past several decades.
In a very real sense, information itself has transformed these animals.
The information did not accomplish this feat on its own, of course.
All of this technological and scientific change is occurring within the
social context of American capitalism. Over the past few decades, the
number of dairies has collapsed and the size of herds has
increased. These larger operations are factory farms that are built to
squeeze inefficiencies out of the system to generate profits. They benefit
from economies of scale that allow them to bring in genomic specialists
and to use more expensive bull semen.
No matter how you apportion the praise or blame, the net effect is the
same. Thousands of years of qualitative breeding on family-run farms begat
cows producing a few thousand pounds of milk in their lifetimes; a few decades of quantitative breeding optimized to suit corporate imperatives
quadrupled what all previous civilization had accomplished. And the crazy
thing is, we’re at the cusp of a new era in which genomic data starts to
compress the cycle of trait improvement, accelerating our path toward the
perfect milk-production machine, also known as the Holstein dairy cow.
***
No experiments in genetics are more famous than the ones undertaken
by the Austrian monk Gregor Mendel on five acres in what is now
the Czech Republic from 1856 to 1863. Mendel bred some 28,000 pea plants and discovered the most basic rules of genetics without any knowledge of the underlying biochemical mechanics.
Smack dab in the middle of Mendel’s experiments, Charles Darwin’s Origin of Species was published, but we don’t have any record of
intellectual mingling between the two men. Even the idea of a gene as an
irreducible unit of inheritance wasn’t presented until decades after Mendel
began his experiments. The term and field of genetics would not be fleshed
out until William Bateson and company came along in the early 1900s.
And its form, DNA, would not be proposed by James Watson and Francis
Crick with indispensable help from Rosalind Franklin until 90 years after
his last pea plant died. All this to say: Mendel was ahead of his time.
Here’s the simple version of what he did. Mendel took pea plants that
reliably produced purple or white flowers when they self-pollinated. Then
he crossbred them, carefully controlling how the plants reproduced. Now,
you might expect that if you breed a pea plant with a purple flower and a
pea plant with a white flower, you’d get progeny that were sort of mauve, a
mix of the two colors. But what Mendel found instead is that you either got
purple flowers or white flowers. Even more amazingly, sometimes breeding
two purple flowers would yield a white flower. Among the first generation
of crossbreeds, the mix of flower colors occurred at a roughly constant ratio
of about 3:1, purple to white. If the traits of two plants were being mixed to
generate the next generation, how could two purple flowers yield a white
flower? And why would this ratio arise?
Mendel took a conceptual leap and hypothesized that each plant had two possible copies of its plans (i.e., genes) to make flower color (or any of
the six other traits he analyzed). If the plant received two of the dominant
plan (purple), the flowers would, of course, be purple. If it received one of
each, the dominant plan would still reign. But if the plant received two
recessive plans, then the flowers of that pea would be white.
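Mendel’s 3:1 ratio falls out of that hypothesis directly, and a few lines of simulation make the point. This sketch simply assumes each parent is a purple-flowered hybrid carrying one dominant and one recessive copy, passing one of the two at random.

```python
import random

def offspring_color():
    """Each parent is a purple-flowered F1 hybrid carrying one dominant
    (P) and one recessive (w) copy, and passes one at random."""
    genotype = (random.choice("Pw"), random.choice("Pw"))
    return "white" if genotype == ("w", "w") else "purple"

trials = 100_000
purple = sum(offspring_color() == "purple" for _ in range(trials))
print(f"purple : white = {purple / (trials - purple):.2f} : 1")
# Hovers around 3 : 1, the ratio Mendel counted by hand.
```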
The monk turned out to be right. For traits controlled by a single gene,
things really do work as he predicted. Mendel’s insights became part of the
central dogma of genetics. You can use his statistical method to calculate
how likely someone is to get sickle cell anemia from her parents. In most
genetics classes, Mendel is where it all starts, and for good reason.
But it turns out that Mendel’s version of things doesn’t give a very
clear picture of the kinds of things we care about most. “Mendel studied
a few traits that happened to be controlled by a single gene, making the
probabilities easier to figure out,” the USDA’s VanRaden said. “Animal
breeders for many decades have used models that assume most traits are
influenced by thousands of genes with very small effects. Some [individual]
genes do have detectable effects, but many studies of plant and animal traits
conclude that most of the genetic variation is from many little effects.”
For dairy cows—or humans, for that matter—it’s not as simple as
the dominant/recessive single-gene paradigm that Mendel created. In fact,
Mendel picked his model organism well. Its simplicity allowed him to focus
in on the smallest possible genetic model and figure it out. He could easily
manipulate the plant breeding; he could observe key traits of the plant; and
these traits happened to be controlled by a single gene, so the math lay
within human computational range. Pea plants were perfect for studying
the basics of genetics.
With that in mind, allow me to suggest, then, that the dairy farmers
of America, and the geneticists who work with them, are the Mendels
of the genomic age. That makes the dairy cow the pea plant of this
exciting new time in biology. In late April, in the Proceedings of the National Academy of Sciences, two of the most successful bulls of all time had their genomes published.
This is a landmark in dairy-herd genomics, but it’s most significant
as a sign that while genomics remains mostly a curiosity for humans,
it’s already coming of age when it comes to cattle. It’s telling that the
cutting-edge genomics company Illumina has precisely one applied market:
animal science. They make a chip that measures 50,000 markers on the cow
genome for attributes that control the economically important functions of
those animals.
***
Mendel may have worked with plants, but the rules he revealed turned
out to be universal for all living things. The same could be true of the
statistical rules that dairy scientists are learning about how to match up
genomic data with the physical attributes they generate. The rules that
reflect the way dozens or hundreds of genes come together to make a
cow likely to develop mastitis, say, may be formally similar to the rules
that govern what makes people susceptible to schizophrenia or prone to
living for a long time. Researchers like the University of Queensland’s
Peter Visscher are bringing the lessons of animal science to bear on our
favorite animal, ourselves.
Want to live to be 100? Well, scientists hope to discover the group
of genes that are responsible for longevity. The problem is that you
have genomic data over here and you have phenotypic data (i.e., how
things actually are) over there. What you need, then, is some way of
translating between these two realms. And it’s that matrix, that series of
transformations, that animal scientists have been working on for the past
decade.
It turned out they were in the perfect spot to look for statistical
rules. They had databases of old and new bull semen. They had
old and new production data. In essence, it wasn’t that difficult to
generate rules for transforming genomic data into real-world
predictions. Despite—or because of—the effectiveness of traditional
breeding techniques, molecular biology has been applied in the field
for years in different ways. Given that breeders were trying to discover
bulls’ hidden genetic profiles by evaluating the traits in their offspring
that could be measured, it just made sense to start generating direct data
about the animals’ genomes.
“Each of the bulls on the sire list, we have 50,000 genetic markers. Most
of those, we have
,
,” the USDA’s VanRaden said. “Every month, we
get another ,
new calves, the DNA readings come in, and we send the
predictions out. We have a total of
,
animals with DNA analysis.
That’s why it’s been so easy. We had such a good phenotype file, and we
had DNA stored on all these bulls.”
They had all that information because, for decades, scientists have been
taking data from cows to figure out which bulls produced the best offspring.
Typically, once a bull with a promising pedigree reached sexual maturity,
his semen would be used to impregnate a selection of about
test cows.
Those daughters would grow up and start producing milk a few years later.
The data from those cows would be used to calculate the value of that
now “proven” bull. People called the process “progeny testing”; it did not
require that breeders know the exact genetic makeup of a bull. Instead,
scientists and breeders could simply say: “We do not know the underlying
constellations of genes that make this bull so valuable, but we do know how
much milk his kids will produce.” They learned to use that data to predict
which were the best bulls.
That meant that some bulls became incredibly sought after.
The No. 1 bull of the last century, Pawnee Farm Arlinda Chief, had more than 16,000 daughters, 500,000 granddaughters, and 2 million great-granddaughters. He’s responsible for about 14 percent of all the
genetic material in all Holsteins, USDA scientists estimate.
“[In the past], we combined performance data—milk yield, protein
yield, conformation data—with pedigree information, and ran it through a fairly sophisticated computing gobbledygook,” another USDA scientist, Curt Van Tassell, told a group of dairy farmers.
“It spit out at the other end predicted transmitting ability, predicted genetic
values of whatever sort. Now what we’re trying to do is tweak that black
box by introducing genomic data.”
There are many different ways you could model the mapping of 50,000 genetic markers onto a dozen performance traits, especially when you have
to consider all kinds of environmental factors. So the dairy breeders have
been developing and testing statistical models to take all this stuff into
account and spit out good predictions of which bulls herd managers should
ultimately select. The real promise is not that genomic data will actually be better than the ground-truth information generated from real offspring (though it might be), but rather that the estimates will be close enough to reality while saving three to four years per generation. If you don’t have to wait
for daughters to start cranking out milk, then you can shave those years off
the improvement cycle, speeding it up several times.
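For the statistically curious, a stripped-down version of the idea fits on a page. This is not the USDA’s actual model—real evaluations handle pedigrees, herds, and much else—but a minimal sketch, on synthetic data, of the shrinkage-regression approach animal breeders favor for traits built from thousands of tiny effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic stand-in for the breeders' files: rows are progeny-tested
# bulls, columns are SNP markers coded 0/1/2 copies of one allele.
n_bulls, n_markers = 500, 2000
G = rng.integers(0, 3, size=(n_bulls, n_markers)).astype(float)

# Assume, as the breeders' models do, that the trait is the sum of
# thousands of tiny marker effects plus environmental noise.
true_effects = rng.normal(0.0, 0.05, n_markers)
y = G @ true_effects + rng.normal(0.0, 5.0, n_bulls)

# Ridge regression (SNP-BLUP-flavored): shrink every marker's estimated
# effect toward zero, since no single gene explains much on its own.
lam = 100.0
Gc, yc = G - G.mean(axis=0), y - y.mean()
effects_hat = np.linalg.solve(Gc.T @ Gc + lam * np.eye(n_markers), Gc.T @ yc)

# A young bull can now be valued from DNA alone -- no daughters required.
calf = rng.integers(0, 3, n_markers).astype(float)
print("predicted breeding value:",
      float((calf - G.mean(axis=0)) @ effects_hat))
```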
Nowadays breeders can choose between “genomic bulls,” which have
been evaluated based purely on their genes, and “proven bulls,” for which
real-world data is available. Discussions among dairy breeders show that
many are beginning to mix younger bulls with good-looking genomic data
into the breeding regimens. How well has it gone? The first of the bulls
who were bred for their genetic profiles alone are receiving their initial
production data. So far, it seems as if the genomic estimates were a little
high, but more accurate than traditional methods alone.
The unique dataset and success of dairy breeders now have other scientists sniffing around their findings. Leonid Kruglyak, a genomics
professor at Princeton, told me that “a lot of the statistical techniques and
methodology” that connect phenotype and genotype were developed by
animal breeders. In a sense, they are like codebreakers. If you know the
rules of encoding, it’s not difficult to put information in one end and have it pop out the other as a code. But if you’re starting with the code, that’s a brutally difficult problem. And it’s the one that dairy geneticists have been
working on.
Their work could reach outside the medical realm to help us understand
human evolution as well. For example, Kruglyak said, human-population
geneticists want to figure out how to explain the remarkable lack of genetic
variance between human beings. “The typical [genetic] variation among
humans is one change in a thousand,” he said. “Chimps, though they
obviously have a much smaller population now, have several-fold higher
genetic diversity.” How could this be? Researchers hypothesize that human
beings once went through a bottleneck where there were very few humans
relative both to the current human population and the chimp population.
Few humans meant that the gene pool was limited at some point in the
prehistoric but fairly recent past. We’ve never recovered the diversity we
might have had.
***
It might seem that Badger-Bluff Fanny Freddie is the pinnacle of the
Holstein bull. He’s been the top bull since the day his genetic markers
showed up in the USDA database, and his real-world performance has
backed up his genome’s claims. But he’s far from the best bull that science
can imagine.
John Cole, yet another USDA animal-improvement scientist, generated an estimate of the perfect bull by choosing the optimal observed genetic sequences and hypothetically combining them. He found that the optimal bull would have a net merit
value of $7,515, which absolutely blows any current bull out of the water.
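Cole’s thought experiment is easy to mimic in miniature: take the estimated effect of every marker and give an imaginary bull the best possible copies at every locus. The numbers below are invented; only the logic matches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimated dollar value per copy of the favorable allele
# at each locus; some alleles hurt, most matter very little.
n_markers = 3000
per_copy_value = rng.normal(0.0, 0.15, n_markers)

def net_merit(genotype):
    return float(genotype @ per_copy_value)

# Real bulls carry a random mix of alleles; the "perfect" bull takes the
# best possible copy number (0 or 2) at every single locus.
best_real = max(net_merit(rng.integers(0, 3, n_markers).astype(float))
                for _ in range(100))
perfect = net_merit(np.where(per_copy_value > 0, 2.0, 0.0))
print(f"best real bull: ${best_real:,.0f}  perfect bull: ${perfect:,.0f}")
```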
In other words, we’re nowhere near creating the perfect milk machine.
The problem, of course, is that genomes cannot really be cut and pasted
together from the best bits. “When you go extremely far for one trait, you’re
going to upset some of the other traits,” VanRaden said. Breeding is a messy
(i.e., biological) process, no matter how technologically sophisticated the
front end. After decades of breeding cows for milk production, people
realized—to their dismay—that the ability to generate milk and the ability
to have babies were negatively correlated. The more milk you tried to order
up, the fewer babies your herd was likely to have. While we’re nowhere
near the hypothetical limit for Holstein bull value, we do now know that
nature is not so easily transformed without some deleterious effects. We
may have factory farms, but these machines are still flesh and blood.
Except for Badger-Bluff Fanny Freddie and his fellow bulls, that is.
Freddie is a disembodied creature, an animal that is more important as data
than as meat or muscle. Though he’s been mentioned in thousands of Web
pages and dozens of industry trade articles, no one mentions where he was
born or where the animal currently lives. He is, for all intents and purposes
except for his own, genetic material that comes in the handy form of semen.
His thousands of daughters will never smell him, and his physical location
doesn’t matter to anyone. He will be replaced very soon by the next top
bull, as subject to the pressures of our economic system as the last version
of the iPhone.
31.
Machine Vision and the Future
of Manufacturing
ROBOTS WITH CAMERAS MEET THE INTERNET.
by Sarah C. Rich and Alexis C. Madrigal
On the outskirts of the Ann Arbor Municipal Airport, where Airport Drive
divides tarmac and asphalt, there is a gigantic office park. This is not
surprising. But it may be surprising to learn who some of its tenants are. In
several of the long, low buildings, the world-famous Zingerman’s empire
produces cheese, gelato, bread, pastries, and happiness.
And in another—if you can pick it out among the indistinguishable
rectangles—there is Maker Works, a new, collaborative micro-manufacturing facility.
Opened in May of this year, Maker Works consists of ,
square feet
of workshop rooms and thrilling collections of heavy machines, including
a 3-D printer, laser cutter, metal lathe, circuit engraver, spot welder, and
Shopbot. Maker Works offers different levels of membership to people who
want regular access to the tools, and a variety of classes to those who just
want to try their hand at a new kind of craft. The team also offers a nine-week entrepreneurship course geared toward launching small businesses “that cater to the ‘handmade marketplace.’”
While Maker Works clearly has fun baked into its DNA—a place where
you can learn to precision-cut a complete metal replica of a T. rex skeleton—
it also has a serious mission: The company wants to catalyze the creation
of more independent, financially sustainable businesses in Michigan. It’s
an economic-growth play disguised as high-tech shop class.
***
Sight Machine is a very promising current resident of the Maker
Works building. We wish we could devote a month to researching and
telling the company’s story because it’s got all the elements of a great
American narrative. It’s at the forefront of trying to renovate manufacturing by dragging it into the Internet age, and at the same time, it’s
applying new mobile technology to the heavy industries that still power
most of the economy that isn’t banking, insurance, or real estate. They are
taking technologies we use mostly as toys and putting them to work in that F-150 kind of way. And they’re doing it on a shoestring budget, hoping
that someone will take a chance on a fantastic idea that seems a little too
Michigan for Silicon Valley and a little too Silicon Valley for Michigan.
A little explanation about what they actually do: Many manufacturing
companies use special “ruggedized” cameras to look at how their processes
are working. These are not ordinary cameras, though. They’ve been
engineered by companies like Cognex to work in factories on existing
manufacturing lines.
This is not a perfect system. While the guys working the lines might
pull out their iPhones when they leave the gates of the plant, mobile and
Web technology is not heavily used at many factories. Here’s an example:
One of Sight Machine’s customers makes a part that goes into engines.
When it’s sliced in half, subtle marks on the metal inside tell the story
of the process that went into forming it. Each of the little lines and the
distances between those lines are significant. This metal contains data, in
other words, that its manufacturer would like to track.
So, the manufacturer scans the product into a computer, and Sight Machine takes it from there.
“We do a bunch of machine vision. We analyze the image, Photoshop-style, and pull out all the different features and measurements they want,”
said Katherine Scott, the director of R&D. “We give them numbers. We can
show them historically how they are doing with different data sets.”
To do all these tasks, the Sight Machine team created “an open framework for creating computer vision applications.” Before, most machine
vision work—the kind of stuff Scott used to do at a Department of Defense
contractor—was built from scratch with tools that were akin to assembly
language. Their framework abstracts away a lot of that close-to-the-metal
stuff so that they can build new applications quickly. The team even wrote
a book, Practical Computer Vision with SimpleCV, that O’Reilly
published.
To show us one of the simple computer-vision applications that many
companies still aren’t taking advantage of, they brought out their portable
demo machine. (Sidenote: they built the whole thing in the Maker Works
shop.) The demo is charming. They load different color Gobstoppers into
a holding pen, and then you select a color to sort out of the group. We hit
orange and, sure enough, within a few seconds, an orange Gobstopper had been sorted based on color alone.
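The sorting step itself is the kind of vision task that now fits on a page. The sketch below is not Sight Machine’s implementation—their SimpleCV framework wraps similar calls—but a plausible version using OpenCV, with color thresholds that are guesses tuned for bright orange candy.

```python
import cv2
import numpy as np

def find_orange_blobs(frame):
    """Return bounding boxes of orange-ish regions in a BGR frame.
    The HSV thresholds here are guesses, not calibrated values."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (10, 120, 120), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

cam = cv2.VideoCapture(0)  # any webcam stands in for the demo rig
ok, frame = cam.read()
if ok:
    for (x, y, w, h) in find_orange_blobs(frame):
        print(f"orange blob at ({x}, {y}), {w}x{h} px")  # cue the sorter
cam.release()
```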
The founder, Nate Oostendorp (the co-founder of Slashdot, by the way),
would be the first to tell you that a lot of what they do is not rocket science.
“We went and visited different plants and said, ‘Hey, what are the things
you want from your machine vision technology?’ And all of it seems super
easy. Like, we want to have logins so when different people do different
things on the camera, we can tell. We want to know when people make
changes to the software on the camera,” Oostendorp said. “But because it’s
been a completely separate field for so long, it’s grown up sheltered from
what’s been happening in the rest of the world.”
“People either build stuff for the Internet or they build robots with
cameras on them. I came from building robots with cameras on them,” Scott
added. “Nate, more or less, came from building stuff that builds the Internet.
How do you make the robots with the cameras talk with the Internet?
Nobody really does that yet. Nobody does it well.”
We’ve been talking and writing a lot about how the physical world
and the digital world are merging. This machine vision stuff is one of the
deepest and most important ways in which that will eventually be true,
either here or in some other country. Perhaps the most impressive demo of
Sight Machine’s technology was its application to an aquaculture research
company. They rigged up a hacked Kinect to see through the
water, allowing the fish farmers to document and measure the movement
of the fish, and feed them far more efficiently.
It all sounds good and interesting, right? This seems like a way to
increase the competitiveness of our manufacturing, which (as everyone
knows) has been under a lot of pressure from lower-cost factories overseas.
“So, what are the hangups?” we asked CEO Jon Sobel, a veteran of Tesla
Motors, SourceForge, and Yahoo.
“One is cultural. There’s a lot of venture money and appetite on the
West Coast for Web technology. Until very recently, the idea has been,
‘let’s do stuff like social media,”’ he said. “At a deeper level, the way
manufacturing people think and like to do business and the way West Coast
people think and like to do business are totally different.”
That’s “totally solvable,” Sobel maintains, but only by having people on
his team who are “fluent” in both cultures. “A lot of our time is spent on
explaining, showing, telling, and convincing,” he said.
After seeing dozens and dozens of Web start-ups raise millions of dollars to build failed social networks, it’s depressing to realize that Sight Machine may have trouble raising even the comparatively modest sum they’ve spent building the business so far.
“This is a really good example of applying Internet technology in industry, and IBM and GE are making big bets that’s gonna happen,”
Sobel concluded. “The roadblocks to this have been that the component
technologies weren’t ready and the mindset wasn’t there. We know the
component technologies are ready; we can see that. Now, it’s a mindset
journey.”
32.
Google Maps and the Future of
Everything
INSIDE GROUND TRUTH, THE
SECRETIVE PROGRAM TO BUILD
THE WORLD’S BEST ACCURATE
MAPS
by Alexis C. Madrigal
Behind every Google Map, there is a much more complex map that’s the
key to answering your queries but hidden from your view. This deeper map
contains the logic of places: their no-left-turns and freeway on-ramps, their
speed limits and traffic conditions. These are the data that you’re drawing
from when you ask Google to guide you from Point A to Point B. Google
showed me the internal map and demonstrated how it was built. It’s the
first time the company has let anyone watch how the project it calls GT, or
“Ground Truth,” actually works.
Google opened up at a key moment in its evolution. The company
began as an online search company that made money almost exclusively
from selling ads based on what you were querying. But then the mobile
world exploded. Where you’re searching from has become almost as
important as what you’re searching for. Google responded by creating an
operating system, brand, and ecosystem in Android that has become the
only significant rival to Apple’s iOS.
And for good reason. If Google’s mission is to organize all the world’s
information, the most important challenge—a far larger task than indexing
the Web—is to take the world’s physical information and make it accessible
and useful.
“If you look at the offline world, the real world in which we live,
that information is not entirely online,” Manik Gupta, the senior product
manager for Google Maps, told me. “Increasingly, as we go about our lives,
we are trying to bridge that gap between what we see in the real world and
[the online world], and Maps really plays that part.”
This is not just a theoretical concern. Mapping systems matter on
phones precisely because they are the interface between the offline and
online worlds. If you’re at all like me, you use mapping more than any
other application except for the communications suite (phone, e-mail, social
networks, and text-messaging).
Google is locked in a battle with the world’s largest company, Apple,
about who will control the future of mobile phones. Whereas Apple’s
strengths are in product design, supply-chain management, and retail
marketing, Google’s most obvious realm of competitive advantage is in
information. Geo data—and the apps built to use it—are where Google can
win just by being Google. That didn’t matter on previous generations of
iPhones because they used Google Maps, but now Apple has created its own
service. How the two operating systems incorporate geo data and present
it to users could become a key battleground in the phone wars.
But that would entail actually building a better map.
***
The office where Google has been building the best representation of
the world is not a remarkable place. It has all the free food, ping-pong,
and Google Maps–inspired Christoph Niemann cartoons that you’d expect, but it’s still a low-slung office building just off the 101 in the ’burb of Mountain View.
I was slated to meet with Gupta and the engineering ringleader on
his team, the former NASA engineer Michael Weiss-Malik, who’d spent his
“20 percent time” working on Google Maps, and Nick Volmar, an “operator”
who actually massages map data.
“So you want to make a map,” Weiss-Malik tells me as we sit down in
front of a massive monitor. “There are a couple of steps. You acquire data
through partners. You do a bunch of engineering on that data to get it into
the right format and conflate it with other sources of data, and then you do
a bunch of operations, which is what this tool is about, to hand-massage
the data. And out the other end pops something that is higher quality than
the sum of its parts.”
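To make “conflate” concrete, here is a cartoon of that step in Python. It is entirely illustrative, since Google hasn't published Ground Truth's internals: two sources describe the same road, and the merge prefers whichever source actually has data for each attribute.

    # Hypothetical records for one road from two different sources.
    old_source = {"name": "Main St",     "lanes": None, "one_way": False}
    new_source = {"name": "Main Street", "lanes": 2,    "one_way": False}

    def conflate(base, overlay):
        # The overlay source wins wherever it has data; the base fills gaps.
        return {k: overlay[k] if overlay.get(k) is not None else v
                for k, v in base.items()}

    print(conflate(old_source, new_source))
    # {'name': 'Main Street', 'lanes': 2, 'one_way': False}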
This is what they started out with, the TIGER data from the U.S.
Census Bureau (though the base layer could and does come from a
variety of sources in different countries).
[Figure: the raw TIGER base map.]
On first inspection, these data look great. The roads look like they are
all there and the freeways are differentiated. This is a good map to the untrained eye. But let’s look closer. There are issues where the digital data does not match the physical world. I’ve circled a few obvious ones below.
[Figure: the same base data overlaid on satellite imagery, with mismatches circled.]
And that’s just from comparing the map with the satellite imagery.
Google also has a variety of other tools at its disposal. One is bringing
in data from other sources, say the U.S. Geological Survey. But Google’s
Ground Truthers can also bring another exclusive asset to bear on the
map’s problem: the Street View cars’ tracks and imagery. In keeping with
Google’s more-data-is-better-data mantra, the maps team, largely driven
by Street View, is publishing more imagery data every two weeks than
Google possessed in total in 2006. Let’s step back a tiny bit to recall with wonderment the idea that a single company decided to drive cars with custom cameras over every road it could access. Google is up to five million miles driven now. Each drive generates two kinds of really useful data for mapping. One is the actual tracks the cars have taken; these are proof positive that certain routes can be taken. The other is all the photos. And
what’s significant about the photographs in Street View is that Google
can run algorithms that extract the traffic signs and can even paste them
onto the deep map within their Atlas tool. For a particularly complicated
intersection like this one in downtown San Francisco, that could look like this:
[Figure: extracted street signs pasted onto the Atlas deep map at a complex downtown San Francisco intersection.]
Google Street View wasn’t built to create maps like this, but the geo
team quickly realized that computer vision could get it incredible data for
ground-truthing its maps. Not to detour too much, but what you see above
is just the beginning of how Google is going to use Street View imagery.
Think of it like the early Web crawlers (remember those?) going out in
the world, looking for the words on pages. That’s what Street View is
doing. One of its first uses is finding street signs—and addresses—so that
Google’s maps can better understand the logic of human-transportation
systems. But as computer vision and OCR (shorthand for “optical character
recognition”) improve, any word that is visible from a road will become a
part of Google’s index of the physical world. Later in the day, the Google
Maps head Brian McClendon put it like this: “We can actually organize
the world’s physical written information if we can OCR it and place it,”
McClendon said. “We use that to create our maps right now by extracting
street names and addresses, but there is a lot more there.”
More like what? “We already have what we call ‘view codes’ for millions of businesses and millions of addresses, where we know exactly what
we’re looking at,” McClendon continued. “We’re able to use logo matching
and find out, where are the Kentucky Fried Chicken signs . . . We’re able
to identify and make a semantic understanding of all the pixels we’ve
acquired. That’s fundamental to what we do.”
For now, though, computer vision that will transform Street View
images directly into geo-understanding remains in the future. The best way
to figure out if you can make a left turn at a particular intersection is still
to have a person look at a sign—whether that human is driving, or seeing
an image generated by a Street View car.
There is an analogy to be made to one of Google’s other impressive
projects: Google Translate. What looks like machine intelligence is actually
only a recombination of human intelligence. Translate relies on massive
bodies of text that have been translated into different languages by humans;
it then is able to extract words and phrases that match up. The algorithms
are not actually that complex, but they work because of the massive
amounts of data—human intelligence—that go into the task on the front
end.
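You can see the principle in miniature with a toy phrase table. This is nothing like Google's actual models, just the co-occurrence idea at its smallest scale:

    from collections import Counter

    # A toy parallel corpus: sentence pairs translated by humans.
    pairs = [
        ("the map is new",  "la carte est nouvelle"),
        ("the road is new", "la route est nouvelle"),
        ("the map is old",  "la carte est vieille"),
        ("the road is old", "la route est vieille"),
    ]

    co, en_freq, fr_freq = Counter(), Counter(), Counter()
    for en, fr in pairs:
        en_words, fr_words = set(en.split()), set(fr.split())
        en_freq.update(en_words)
        fr_freq.update(fr_words)
        co.update((e, f) for e in en_words for f in fr_words)

    def translate(word):
        # Favor words that co-occur often relative to their raw frequency,
        # so filler words ("la", "est") don't win just by being everywhere.
        scores = {f: co[(e, f)] / (en_freq[e] * fr_freq[f])
                  for (e, f) in co if e == word}
        return max(scores, key=scores.get)

    print(translate("map"))   # -> "carte"

Feed it four sentences and it's a parlor trick; feed it the Web's worth of human translation and it starts to look like intelligence.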
Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so
that computers can simply duplicate (infinitely, instantly) the judgments
that a person already made.
This reality is incarnated in Nick Volmar, the operator who has been
showing off Atlas while Weiss-Malik and Gupta explain it. He probably uses dozens of keyboard shortcuts, switching between types of data on the map, and he shows the kind of twitchy speed that I associate with long-time designers working with Adobe products, or professional StarCraft players.
Volmar has clearly spent thousands of hours working with these data.
Weiss-Malik told me that it takes hundreds of operators to map a country.
(Rumor has it many of these people work in the Bangalore office, out
of which Gupta was promoted.)
The sheer amount of human effort that goes into Google’s maps is just
mind-boggling. Every road that you see slightly askew in the top image
has been hand-massaged by a human. The most telling moment for me
came when we looked at a couple of the several thousand user reports of
problems with Google Maps that come in every day. The Geo team tries
to address the majority of fixable problems within minutes. One complaint
reported that Google did not show a new roundabout that had been built in
a rural part of the country. The satellite imagery did not show the change,
but a Street View car had recently driven down the street, and its tracks
showed the new road perfectly.
Volmar began to fix the map, quickly drawing the new road and
connecting it to the existing infrastructure. In his haste (and perhaps with
the added pressure of three people watching his every move), he did not
draw a perfect circle of points. Weiss-Malik and I detoured into another
conversation for a couple of minutes. By the time I looked back at the
screen, Volmar had redrawn the circle with perfect precision and upgraded
a few other things while he was at it. The actions were impressively
automatic. This is an operation that promotes perfectionism.
And that’s how you get your maps to look like this:
[Figure: the finished, hand-massaged map of the same area.]
Some details are worth pointing out. At the top-center, trails have been
mapped out and coded as places for walking. All the parking lots have
been mapped out. All the little roads, say, to the left of the small dirt patch
on the right, have also been coded. Several of the actual buildings have
been outlined. Down at the bottom left, a road has been marked as a no-go.
At each and every intersection, there are arrows that delineate precisely
where cars can and cannot turn. Now imagine doing this for every tile on
Google’s map in the United States and dozens of other countries over the past four years. Every roundabout perfectly circular, every intersection with
the correct logic. Every new development. Every one-way street. This is
a task of a nearly unimaginable scale. This is not something you can put
together with a few dozen smart engineers. I came away convinced that
the geographic data Google has assembled is not likely to be matched by
any other company. The secret to this success isn’t, as you might expect,
Google’s facility with data, but rather its willingness to commit humans
to combining and cleaning data about the physical world. Google’s map
offerings build in the human intelligence on the front end, and that’s what
allows its computers to tell you the best route from San Francisco to Boston.
***
It’s probably better not to think of Google Maps as a thing like a paper map. Geographic information systems represent a jump from paper maps, like jumping to the computer from the abacus. “I honestly think we’re seeing a more profound change, for map-making, than the switch from manuscript to print in the Renaissance,” the University of London cartographic historian Jerry Brotton told The Sydney Morning Herald.
“That was huge. But this is bigger.”
The maps we used to keep folded in our glove compartments were a
collection of lines and shapes that we overlaid with human intelligence.
Now, as we’ve seen, a map is a collection of lines and shapes with Nick
Volmar’s (and hundreds of others’) intelligence encoded within.
It’s common, when we discuss the future of maps, to reference the Borgesian dream of a 1:1 map of the entire world. It seems like a ridiculous
notion that we would need a complete representation of the world when
we already have the world itself. But to take the scholar Nathan Jurgenson’s
conception of augmented reality seriously, we would have to believe that
every physical space is, in his words, “interpenetrated” with information.
All physical spaces are already also informational spaces. We humans all
hold a Borgesian map in our heads of the places we know, and we use it to
navigate and compute physical space. Google’s strategy is to bring all our
mental maps together and process them into accessible, useful forms.
Google’s MapMaker product makes that ambition clear. Project-managed by Gupta during his time in India, it’s the “bottom up” version
of Ground Truth. It’s a publicly accessible way to edit Google Maps by
adding landmarks and data about your piece of the world. It’s a way of
sucking data out of human brains and onto the Internet. And it’s a lot like
Google’s open competitor, OpenStreetMap, which has proven that it,
too, can harness a crowd’s intelligence.
As we slip and slide into a world where our augmented reality is
increasingly visible to us both off- and online, Google’s geographic data
set may become its most valuable asset. Not solely because of these data
alone, but because location data make everything else Google does and
knows more valuable.
Or as my friend, the sci-fi novelist Robin Sloan, put it to me, “I
maintain that this is Google’s core asset. In years to come, Google will be the
self-driving car company, powered by this deep map of the world, and oh,
PS, they still have a search engine somewhere.”
Of course, Google will always need one more piece of geographic
information to make all this effort worthwhile: you. Where you are, that
is. Your location is the current that makes its giant geodata machine run.
It’s built this whole playground as an elaborate lure for you. As good and
smart and useful as the bait is, good luck resisting.
33.
The Never-Ending Drive of Tony
Cha
NOKIA HAS BETTER MAPS THAN
APPLE, AND MAYBE EVEN
GOOGLE.
by Alexis C. Madrigal
Apple’s maps are bad. Even Tim Cook knows this and has apologized
for them. Google’s maps are good, thanks to years of work, massive
computing resources, and thousands of people hand-correcting map data.
But there are more than two horses in the race to create an index of the
physical world. There’s a third company that’s invested billions of dollars,
employs thousands of mapmakers, and even drives around its own version
of Google’s mythic “Street View” cars.
That company is Nokia, the still-giant but oft-maligned Finnish mobile-phone maker, which acquired the geographic information systems company Navteq back in 2007 for $8.1 billion. That’s not far from Nokia’s entire current market value of a bit less than $10 billion, which has fallen roughly 90 percent since 2007. This might be bad news for the company’s
shareholders, but if a certain tech giant with a massive interest in mobile
content (Microsoft, Apple, Yahoo) were looking to catch up or stay even
with Google, the company’s Location & Commerce unit might look like a
nice acquisition that could be gotten on the cheap (especially given that the
segment lost 1.5 billion euros last year). Microsoft and Yahoo
are already thick as thieves with Nokia’s mapping crew, but Apple is the
company that needs the most help.
Business considerations aside, I’m fascinated by the process of mapping.
What seems like a rather conventional exercise turns out to be the very
cutting edge of mixed reality, of translating the human world and human
logic into databases that computers can use. And the best part is, unlike
Web crawlers, which were totally automated, indexing the physical world
still requires that people head out on the road, and stare at imagery on
computers back at the home office.
Google has spent literally tens of thousands of person-hours creating its maps. In September, I argued that no other company could beat Google at this game, which turned out to be my most controversial assertion. People pointed out that while Google’s Street View cars have driven five million miles, UPS drives 3.3 billion miles a year, not to mention all the miles covered by other logistics companies. Whoever had access to these other data sets
might be in the mapping (cough) driver’s seat.
Well, it turns out that Nokia is the company that receives data from
many commercial fleets including FedEx, the company’s senior VP of
location content, Cliff Fox, told me. “We get over a billion probe data points per month coming into the organization,” Fox said from his office
in Chicago. “We get probe data not only from commercial vehicles like
FedEx, but we also get it from consumers through navigation applications.”
Depending on the device type, the data that stream into Nokia can
have slight variations. ”The system that they have for tracking the trucks
is different from the way the maps application works on the Nokia device.
You’ll have differences on the amount of times per minute they ping their
location, though typically it’s every few seconds,” Fox said. “It’ll give you a location, a direction, and a speed as well.”
Nokia can then use those data to identify new or changed roads; the GPS traces it receives have already flagged a great many road segments. (A road segment is defined as the strip of surface between intersecting roads.) The GPS data also come in handy when the company is building traffic maps, because it knows the velocity of the vehicles.
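The basic trick is easy to sketch. This is my illustration, not Nokia's pipeline; the coordinates, the threshold, and the flat-earth geometry are all stand-ins. Probe points that sit far from every known road segment, again and again, suggest a road the map doesn't have.

    import math

    # One known north-south road segment, and a handful of probe points.
    segments = [((0.000, 0.0), (0.000, 1.0))]
    probes   = [(0.0001, 0.4), (0.005, 0.4), (0.005, 0.5), (0.005, 0.6)]

    def dist_to_segment(p, seg):
        # Point-to-segment distance, treating lon/lat as a flat plane.
        (x, y), ((x1, y1), (x2, y2)) = p, seg
        dx, dy = x2 - x1, y2 - y1
        t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)))
        return math.hypot(x - (x1 + t * dx), y - (y1 + t * dy))

    # Points farther than ~0.001 degrees (roughly 100 meters) from every
    # known segment are off-network; a cluster of them hints at a new road.
    off_network = [p for p in probes
                   if all(dist_to_segment(p, s) > 0.001 for s in segments)]
    print(off_network)   # the three points along longitude 0.005 survive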
(Fascinatingly, one of Nokia’s consumer-privacy protections is to black out every other chunk of tracking information, a stretch of seconds at a time, so that the company can’t say “that a particular individual traveled a particular route.”)
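In code, a privacy scheme of that flavor is almost embarrassingly simple. Here is a sketch, with the caveat that Nokia hasn't published its exact windowing, and the 30-second chunk length is my assumption:

    WINDOW = 30  # seconds per chunk; the length is an assumption, not Nokia's

    def redact(trace):
        # trace: a list of (timestamp_seconds, lat, lon) tuples.
        # Keep even-numbered windows and drop the odd ones, so no
        # continuous route can be reconstructed for any individual.
        return [pt for pt in trace if (int(pt[0]) // WINDOW) % 2 == 0]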
In the future, Nokia may be able to extract many more
attributes from the GPS probe data alone. “That kind of data can help us
keep the map more current without having people go out and drive,” Fox
said.
But for now, as with Google’s map data “operators,” there is still no
better option for gathering data about the physical world than putting a
human being in a car and sending him or her through the streets of the
world.
***
Meet Tony Cha. He’s a world crawler, a driver of one of Nokia’s “True”
vehicles. He’s spent roughly three years on the road, moving from city to
city in a modded car that Chief Technology Officer Henry Tirri called “a
data collection miracle.” Cha tells me, “You could be gone for months at a
time if you’re mapping a big city back east. I live from a hotel. It’s good and
bad. You don’t see your friends and family, but the company is expensing
your travel.” He’s lucky right now. His hometown of Fresno is his latest
collection area. It’ll take a month of driving eight or nine hours a day to
complete that city. And then it’ll be on to the next location in Tony Cha’s
Never-Ending Drive.
“It’s kind of surreal. You think, You drove every single street in this city,”
Cha told me. “There are multiple cities where I drove every single street.”
The camera cars work best in dry, temperate conditions, so the job even
has a seasonality. The drivers do northern cities in the summer, and then the
fleet moves south during the winter. Cha gets to follow the good weather.
Within cities, his route is planned by an algorithm that has determined
the most efficient way to cover every road on the map. It’s flexible and
can accommodate mistakes, but the system “has a specific route it wants
you to drive,” he said. He drives for the duration of a work day, then heads
to his hotel. The next day, he returns to where he left off and starts again.
“I’m almost robotic, in a sense,” he said.
Along the way, he’s seen his fair share of weird reactions to the car.
Some people see it and come rushing out of their businesses, so they can
be photographed outside. Others flip him the bird. “It doesn’t really bother
me,” Cha said.
The car is not inconspicuous. Rising out of the roof, there is a tower
of components stacked on top of one another. It folds down for travel, but
deployed, it’s probably six feet tall.
The Volkswagen is stocked with six figures’ worth of equipment (the number is Fox’s) including six cameras for capturing street signs, a panoramic
Fox’s number) including six cameras for capturing street signs, a panoramic
camera for doing Bing Street View imagery, two GPS antennae (one on the
wheel, the other on the roof), three laptops—and the crown jewel, a LIDAR
system that shoots 64 lasers all 360 degrees around the car to create 3-D images of the landscape the car passes through.
There is so much data feeding from the roof of the car into its interior
that the bundle of cables alone looks like a tank tread. The LIDAR, when
it’s switched on, rotates around and emits a high-pitched whine that would
probably drive you crazy if someone piped it into your headphones.
The LIDAR is useful for a whole bunch of things, especially when used
in combination with the other imagery data. Nokia takes the 1.3 million point measurements per second that the LIDAR outputs and combines
them into what amounts to a wireframe of the street. Then, it drapes the
imagery taken with the other cameras on top of that, to create a digital
representation of a place.
Once the company gets that digital representation put together, it can
do all kinds of data extractions from the images. It can determine the
height and length of a bridge, reading the physical world itself. And it
can decipher human signs that provide valuable clues about the road
network. Nokia can automatically extract many different types of road signs in different countries. That said, the process is still only partially automated; the company is working to push the automated share higher, but it doesn’t think it’ll ever be possible to eliminate human review entirely.
***
The biggest mapping problem—the thing that makes Tony Cha’s Never-Ending Drive necessary—is not dimensions one through three, but the
fourth. The world changes in time.
“To build it the first time is, relatively, the smaller task compared to
maintaining that map,” Nokia’s Fox said.
It’s like that semi-true urban legend about painting the
Golden Gate Bridge. It’s said that they start at one end, and by the
time they’re done, the spot where they started needs to be repainted. It’s
a maintenance loop from which you can’t escape. And it means there is
no perfect map. Most maps aren’t even 99 percent accurate, the number
that Apple was tossing around.
“I’d be shocked [if Apple had 99% accuracy],” Fox commented. “You have real-world change. From the time you collect to the time it ends up in a consumer’s hands, there will be more than one percent change.”
In fact, there might be much more than one percent change, depending
on the region. In Jacksonville, which isn’t changing much, Nokia “added, modified, or deleted geometry” for a small share of the road network. In Houston, which is growing more quickly, the share was noticeably larger. And if we’re talking about Delhi, the edits touched a far bigger slice of the database.
Now, lived experience tells you that the paths of roads don’t change
at anything like those rates every year. Many of the changes in the map database
are behind the scenes, in the logic of how a road network works, or in
more-subtle data.
“We capture a huge range of pieces of information for every road segment. It
could be information about all of the signage, whether or not it is divided
road, the number of lanes, where the lanes constrict or expand,” Fox said.
“There is just an enormous amount of information. When I’m talking about
percentage change, it could be the speed limit or the names that change on
specific roads. It’s a never-ending process of understanding the dynamic
nature of changes on these road networks.”
Nokia has a concept it calls “The Living Map.” The idea is that as
people use the maps—and related location services—the map starts to
know what you’re looking for. Searched for Blue Bottle Coffee in San
Francisco? Perhaps you’d like to know which Oakland places serve Blue
Bottle, too. And bit by bit, you wear tiny little grooves into this giant 3-D representation of the entire world, connecting the places that you love
together into a new layer that sits on top of all the road network data and
Tony Cha’s drives and the LIDAR wireframes. The map will come to life,
and you will be in it.
While Cha and I were talking, an unshaven guy in a V-neck T-shirt and
sweats walked by and started peppering us with questions: “Do you only
come out when it’s a blue sky day?” “Does Apple have street view?” “Can
the camera articulate on your rack?” “So you’re directed by software?” This
is not uncommon.
Then, out of nowhere, the passerby said, “We’re so close to the day
when you can put on VR goggles and literally just walk through the world,
anywhere in the world.”
Yes, basically, that is the idea: a map of the world that is also a copy of
the world.
34.
How the Algorithm Won Over
the News Industry
GOOGLE’S NEWS
by Megan Garber
In April of 2010, Eric Schmidt delivered the keynote address at a conference of the American Society of News Editors in Washington,
D.C. During the talk, the then-CEO of Google went out of his way
to articulate—and then reiterate—his conviction that “the survival of
high-quality journalism” was “essential to the functioning of modern
democracy.”
This was a strange thing. This was the leader of the most powerful
company in the world informing a roomful of professionals how earnestly
he would prefer that their profession not die. And yet the speech itself—
I heard it live—felt oddly appropriate in its strangeness. Particularly in
light of surrounding events, which would find Bob Woodward accusing
Google of killing newspapers. And Les Hinton, then the publisher of
The Wall Street Journal, referring to Google’s news-aggregation service
as a “digital vampire.” Which would mesh well, of course, with the
similarly vampiric accusations that would come from Hinton’s boss, Rupert
Murdoch—accusations against not just Google News, but Google as a media
platform. A platform that was, Murdoch declared in January 2012,
the “piracy leader.”
What a difference a couple years make. Murdoch’s 20th Century Fox
is now in business, officially, with Captain Google, cutting a deal to
sell and rent the studio’s movies and TV shows through YouTube and
Google Play. It’s hard not to see Murdoch’s grudging acceptance of Google
as symbolic of a broader transition: content producers’ own grudging
acceptance of a media environment in which they are no longer the primary
distributors of their own work. This week’s Pax Murdochiana suggests an
ecosystem that will find producers and amplifiers working collaboratively,
rather than competitively. And working, intentionally or not, toward the
earnest goal that Schmidt articulated two years ago: “the survival of high-quality journalism.”
“100,000 Business Opportunities”
There is, on the one hand, an incredibly simple explanation for the
shift in news organizations’ attitude toward Google: clicks. Google News
was founded 10 years ago, on September 22, 2002, and has since functioned not merely as an aggregator of news, but also as a source of traffic to news sites. Google News, its executives tell me, now “algorithmically harvests” articles from more than 50,000 news sources across 72 editions and 30 languages. And Google News–powered results, Google says, are viewed by about 1 billion unique users a week. (Yep, that’s billion, with a b.) Which translates, for news outlets overall, to more than 4 billion clicks each month: 1 billion from Google News itself and an additional 3 billion from Web search.
As a Google representative put it, “That’s about 100,000 business opportunities we provide publishers every minute.”
Google emphasizes numbers like these not just because they are fairly
staggering in the context of a numbers-challenged news industry, but
also because they help the company to make its case to that industry.
(For more on this, see James Fallows’s masterful piece from the June 2010 issue of The Atlantic.) Talking to Google News executives and team members myself in 2010—at the height of the industry’s aggregatory
backlash—I often got a sense of veiled frustration. And of just a bit of
bafflement. When you believe that you’re working to amplify the impact
of good journalism, it can be strange to find yourself publicly resented by
journalists. It can be even stranger to find yourself referred to as a vampire.
Or a pirate. Or whatever.
And that was particularly true given that, as an argument to news
publishers, Google News’s claim for itself can be distilled to this: We bring
you traffic. Which brings you money. Which is hard to argue with. As an
addendum to this line of logic, Google staffers will often mention the fact
that participation in Google News is voluntary; publishers who don’t want
their content crawled by Google’s bot can simply append a short line of
code to make themselves invisible. Staffers will mention, as well, the fact
that Google News has been and remains headline-focused—meaning that
its design itself encourages users to follow its links to news publishers’ sites.
This is not aggregation proving its worth in an attention economy, those
staffers suggest. It is aggregation proving its worth in a market economy.
Google News, founder Krishna Bharat told me, is fundamentally “a
gateway—a pathway—to information elsewhere.”
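That “short line of code,” for the record, lives in a site’s robots.txt file. Google News’s crawler identifies itself as Googlebot-News, so a publisher who wants out entirely can add a standard stanza like this:

    # robots.txt: turn away Google News's crawler, leaving Web search alone
    User-agent: Googlebot-News
    Disallow: /

Delete those two lines and the articles flow back into the index, which is rather the point of the “voluntary” argument.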
Publishers, as familiar with their referral numbers as Google is, are
coming around to that view. In fact, Murdoch’s transition suggests, they
have pretty much finished the coming-around. In the broad sense of the
long game, Google News is very much a product of its parent company:
The service saw where things were going. It built tools that reflected that
direction. And then it waited, patiently, for everyone else to catch up.
Concession Stands
As far as the Google/news relationship goes, though, numbers are only
half the story. Google has reiterated its stats—did we mention billions, with
a b?—to, yes, pretty much anyone who will listen. But it has also tackled
its industry publicity problem more strategically, in a way that even more
explicitly emphasizes the “Google” component of “Google News”: it has
ingratiated itself into the news industry iteratively, experimentally, and
incrementally.
Google added to its team of engineers staff members with backgrounds
in journalism, people whose jobs were to interact—or, in Google-ese,
to “interface”—with news producers. It experimented with new ways of
processing and presenting journalism—Fast Flip, Living Stories—
and framed them as tools that could help journalists do their jobs better.
It introduced sitemaps meant to give publishers greater control over how
their articles get included on the Google News home page. Responding to
outlets’ frustrations that their original work was getting lost among the
work of aggregators, Google created a new tag that publishers could use
to flag standout stories for Google News’s crawlers. Responding to a new
cultural emphasis on the role of individual writers, Google integrated
authors’ social profiles into their displayed bylines. And, nodding
to a news industry that values curation, it implemented Editors’ Picks,
which allows news organizations themselves, independently of the
Google News algorithm, to curate content to be displayed on the
Google News homepage. (The Atlantic is included in the Editors’ Picks
feature.)
All of those developments, on some level, have been concessions to
an indignant industry. Which is also to say, they have been concessions
to an industry that is not populated by engineers. When Google News
launched in 2002, it’s worth remembering, it did so with the following
delightfully Google-y declaration: “This page was generated entirely by computer algorithms without human editors. No humans were
harmed or even used in the creation of this page.” Since then, as news
publishers have emphasized to Google how human a process news production actually is, the company’s news platform has—carefully, incrementally,
strategically—found ways to balance its core algorithmic approach with
more-human concerns.
There have been the product-level innovations. There have been the
public declarations. (Schmidt, in addition to his pro-journalism speeches,
wrote op-eds re-professing his love of the news. Bharat spent a
year as a professional-in-residence at Columbia’s Graduate School
of Journalism.) But, less obviously and less visibly, there has also been
the infrastructural effort Google has put into making the news industry
a colleague rather than a competitor. “There’s a reporting aspect to it,”
says David Smydra, Google News’s manager of strategic partnerships,
who is himself a former reporter. Google tries to figure out what would
help news producers produce better content, he told me, and responds accordingly. With that in mind, Google News staffers have made themselves a
friendly and patient and constant presence at journalism conferences
and industry events. They have offered tutorials on making use of
Google News and other Google tools. They have written explainers on
becoming a Google News source in the first place. They have visited
individual newsrooms to meet with publishers and other news producers,
listening to their concerns and imagining innovations that might prove
useful to outlets as well as users. They have reiterated, in ways both subtle
and explicit, their good intentions. If Google News is a vampire, it is an
incredibly perky one.
Harvesting the News
Part of Google’s pitch to news organizations, though, has also been
a pitch to users. And it has made its case to both groups at the same
time, through the same vehicle. When it came to the Google News
homepage—the primary user interface—Google iterated. It updated. It
redesigned. It introduced real-time coverage of breaking news events.
It introduced geotargeted local news. It introduced customization.
It integrated a social layer. It introduced video results. It introduced expandable stories. It emphasized contextual news consumption, presenting Wikipedia links along with the top news stories it
displayed for its users.
And it has been, all along, tweaking—and tweaking, and tweaking—its
algorithm. While Google News is notoriously reticent about the particular
elements included in its algorithm, some of its general signals, engineers
have said, include the commonality of a particular story arc; the site an
individual story appears on; and a story’s freshness, location, and relevance.
The algorithm’s main point, a representative told me, is to sort stories
“without regard to political viewpoint or ideology”—and to allow users to
choose among “a wide variety of perspectives on any given story.”
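Just to fix ideas, a toy ranker might combine those signals as a weighted sum. The caveat, again: the real algorithm's signals and weights are unpublished and certainly far subtler than this sketch.

    import time

    def score(story, now=None):
        # All weights and signal forms here are invented for illustration.
        now = now or time.time()
        age_hours = (now - story["published"]) / 3600.0
        freshness = 1.0 / (1.0 + age_hours)           # decays with age
        return (0.4 * freshness
                + 0.3 * story["source_weight"]         # the site it appears on
                + 0.2 * story["cluster_size"]          # commonality of the arc
                + 0.1 * story["locality"])             # geographic relevance

    # Rank a big wire story from two hours ago against a fresh local item.
    stories = [
        {"published": time.time() - 7200, "source_weight": 0.9,
         "cluster_size": 0.8, "locality": 0.1},
        {"published": time.time() - 600,  "source_weight": 0.5,
         "cluster_size": 0.3, "locality": 0.9},
    ]
    stories.sort(key=score, reverse=True)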
Achieving all this through an algorithm is, of course, several orders of magnitude more complicated than it sounds. For one thing, there’s
the tricky balance of temporality and authority. How do you deal, for
example, with a piece of news analysis that is incredibly authoritative about
a particular story without being, in the algorithmic sense, “fresh”? How
do you balance personal relevance with universal? How do you determine
what counts as a “news site” in the first place? How do you account for
irony and cheekiness in a headline? How do you accommodate news coverage’s increasing emphasis on the update as its own form of news
narrative? Andre Rohe, Google News’s head of engineering, summed up
the challenge: “How do I take a story that has thousands of articles, potentially,
and showcase all of its variety and breadth to the user?”
And then there’s the Tiger Woods problem. During Woods’s cheating
scandal, Rohe points out, many, many people were following the story of
the golfer’s affairs and their aftermath. “They weren’t necessarily following
that story because they were particularly interested in Tiger Woods,” Rohe
notes; “they weren’t necessarily following that story because they were
particularly interested in golf.” They were following that story, he says,
because of its place within a different kind of news category: a “fall from
grace.”
But, then: How do you quantify that category? How do you work one of
the oldest narratives there is—the plummet, the pathos—into an algorithm?
And how do you translate all of that into the user experience—the content
placed on a page?
“One has to be, actually, rather subtle when doing these things,” Rohe
says.
And those subtleties, he and his colleagues say, are the things that will
continue to challenge Google’s journalism arm as it moves into its teenage
years. Google News, says Richard Gingras, its head of product, is the
result of “continued evolution”—not just in terms of design improvement,
but also in terms of the news system that underscores it. As journalism
changes, Google News will change with it—strategically, yes, but also
inevitably. And it will do so because of the thing that has been at the heart
of Google’s journalism pitch from the beginning: We’re in this together.
For Google News’s next phase, Gingras says, we can expect to see the
“continued evolution of the algorithmic approach to address the changing
ecosystem itself—in some ways subtle, and in other ways, going forward,
likely more profound.”
35.
What Will the "Phone" of
Look Like?
A R O M P T H R O U G H T H E W E I R D,
S C A R Y, AW E S O M E F U T U R E O F
M O B I L E C O M M U N I C AT I O N S
by Alexis C. Madrigal
The near-term future of phones is fairly well established. The iPhone 5 was released yesterday, and its similarity to every Apple phone since 2007 serves as a reminder that our current mobile devices have been sitting on
the same plateau for years.
Reflecting on Apple’s recent product launches, Clay Shirky, an author
and a professor at NYU’s Interactive Telecommunications Program, told
me, “They’re selling transformation and shipping incrementalism.”
The screens, cameras, and chips have gotten better, the app ecosystems
have grown, the network speeds have increased, and the prices have come
down slightly. But the fundamental capabilities of these phones haven’t
changed much. The way you interact with phones hasn’t changed much
either, unless you count the mild success of Siri and other voice-command
interfaces.
“Is the iPhone the last phone?” Shirky said. “Not the last phone in a
literal sense, but this is the apotheosis of this device we would call a phone.”
Danny Stillion, of the legendary design consultancy IDEO, calls our current technological moment the “phone-on-glass paradigm,” and it’s proved
remarkably successful over the past half decade, essentially conquering the
entire smartphone market in the United States and around the world. It
seems like this Pax Cupertino could last forever. But if we know a single
thing about the mobile-phone industry, it’s that it has been subject to
disruptions.
No one has tracked these market shifts better than Horace Dediu at
Asymco. He’s documented what he calls “a tale of two disruptions,” one
from above by Apple and one from below by cheap Chinese and Indian
manufacturers. In just the past five years, Nokia, Motorola, LG, and RIM
have seen their market shares and profits collapse due to this pincer
movement. Our conceit is that change will come again to the smartphone
market, and that the phones and market leaders of 2022 will not be the
same as they are today.
What might their input methods be? How might the software work?
What are we going to call these things that we only occasionally use to
make telephone calls?
“It’s not clear to me that there is any such device as the phone in 2022.
Already, telephony has become a feature, and not even a frequently used
feature, of those things we put in our pockets. Telephony as a purpose-built
device is going away, as it’s been going away for the TV and the radio,”
Shirky said, when I asked him to speculate. “So what are the devices we
have in our pockets?”
(For the record, I tried to get Apple, Google, Microsoft, Samsung, HTC,
and Nokia to talk about what they think the future of phones looks like, but
none of them responded to me by my deadline. Don’t worry, the people I did
get to talk with me were probably more interesting and forthright anyway.)
INPUT
Let’s start with Horace Dediu and how we interact with our machines.
What Will the "Phone" of
Look Like?
“A change in input methods is the main innovation that I expect will happen
in the next decade. It’s only a question of when,” Dediu wrote to me in an e-mail. Looking at his data, he makes a simple, if ominous, observation: “I note
that when there is a change in input method, there is usually a disruption
in the market, as the incumbents find it difficult to accept the new input
method as ‘good enough.”’
So when touch screens arrived on the scene, other phone makers didn’t
quite believe that it was Apple’s way or the highway. After all, hadn’t
touchscreens been tried before, and hadn’t they failed? And besides, typing
e-mails was so hard on those things! And people loved their CrackBerrys!
And. And. And then all their customers were gone.
Do we have any reason to expect that the touch screen will continue
to be the way we interact with our mobile devices for the next decade?
Not really. They have proved effective, but there are clear limitations to
interacting with our devices via a glass panel.
One critic of the touch screen is Michael Buckwald, CEO of the
(kind of mind-blowing) gesture-interface company Leap Motion. “The
capacitive display was a great innovation, but it’s extremely limiting,”
Buckwald told me. “Even though there are hundreds of thousands of
apps, you can kind of break them down into about a dozen categories. It
seems like the screen is holding back so many possibilities and innovation,
because we have these powerful processors, and the thing that’s limiting us
is the 2-D flat display and that touch is limited.”
One big problem is that if you want to move something on a touch
screen from point A to point B, you actually have to drag it all the way there. That is, there is a 1:1 relationship between your movement and the movement “in” the device. “To move something a certain number of pixels, you have to move your fingers that same number of pixels, and they end up blocking the thing you’re interacting with,” he told me.
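Put in code, the complaint is about a gain that touch screens pin at 1.0, and that a gesture system is free to choose. The numbers below are mine, not Leap Motion's.

    def cursor_delta(finger_delta_px, gain=1.0):
        # On glass, gain is locked at 1.0: drag 300 px to move 300 px.
        # A mouse-style or in-air system can crank the gain up.
        return gain * finger_delta_px

    print(cursor_delta(100, gain=1.0))   # touch screen: 100 px of travel
    print(cursor_delta(100, gain=3.0))   # gesture system: same motion, 300 px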
Buckwald, of course, has a solution to this problem. His company makes
a gesture-control system that allows you to move your fingers to control
computers and gadgets. That could mean what Leap Motion calls “natural
input,” which is showing you a 3-D environment and letting you reach into
it and touch stuff, or it could be a more abstract system that would allow
for controlling the menus and files and channels that we already know.
“You can imagine someone sitting in front of a TV, [controlling it] with
the intuitiveness of a touch screen and the efficiency of a mouse,” he said.
But the technology has already been miniaturized, so it could be used for controlling phones, too.
“What we envision of the future is a world where the phone is the only
computer that you use. It’s become very small and miniaturized, and it has
a lot of storage, and you carry it around in a pocket or attached to you,
and then it wirelessly connects on different displays based on what you’re
trying to do,” Buckwald said. “If you sit down at a desk, it connects to that
monitor, and Leap would control that. If you’re out on a street, it connects
to a head-mounted display, and Leap would control that.”
Others, like Dediu and Stillion, see voice as the transformative input.
Siri has not been the unabashed success that Apple’s commercials set us
up for. It’s buggy, and it seems to fall into some uncanny valley of human-computer interaction. Its argot is human, but its errors are bot. The whole
thing is kind of confusing. (Plus, my wife just viscerally hates it, which has
made it a flop in our house.) Nonetheless, Dediu and Stillion think voice
input will play a big role in the future.
“When we communicate to computers, we use tactile input and visual
output. When we communicate with people, we typically use audio for both
input and output. We almost never use tactile input, and consider visual
contact precious and rare,” Dediu wrote to me. “Our brains seem to cope
well with the audio-only interaction method. I therefore think that there is
a great opportunity to engage our other senses for computer interaction.
Mainly because I believe that computers will emerge as companions
and assistants rather than just communication tools. For companionship,
computers will need to be able to interact more like people do.”
IDEO’s Stillion converged on the same thought. He foresaw a future
where your phone sits, jewelry-like, somewhere on your body, controlled
largely via voice, but also acting semi-autonomously. In this scenario,
your phone is hardly a phone anymore, in terms of being a piece of
hardware. Rather, it’s a hyper-connected device with access to your data
from everywhere. It might even have finally lost the misnomer phone. “It’s
no longer your phone, but the feed of your life,” he said. “It’s the data you’re
encountering, either pushed on you or pulled by you. Either the things
What Will the "Phone" of
Look Like?
you’re consuming or the things you’re sharing.”
You could TiVo your life, constantly recording and occasionally sharing.
That sounds exhausting to me, but Stillion said that’s where the artificial
assistants will come in. “What does the right level of artificial intelligence,
when brought to the table, allow us to do with our day-to-day broadcasting
of our lives?” he asked. “Is it dialing in sliders of what interests we want
to share? How open we feel one day versus the next? Someone is going to
deal with that with some kind of fluid affordance.”
Think of it not as frictionless sharing, but as sharing with AI greasing
the wheels. “You’d almost have an attaché or concierge. Someone that’s
whispering in your ear,” he said.
What does this all have to do with input methods? Much of the
interacting we have to do now concerns giving a piece of software a lot
of information and context about what we want. But if it already *knows*
what we want, then we don’t have to input as much information.
What’s fascinating to me is that I think we’ll see an “all of the above”
approach to user input. It’ll be touch screens and gestures and voice and
software knowing what we want before we do, and a whole bunch of
other stuff. When I interviewed the anthropologist and Intel researcher
Genevieve Bell, she asked me to think about what it’s like to sit in a car.
They’re more than a century old, and yet there are still maybe half a dozen ways
of interacting with the machine! There’s the steering wheel to direct the
wheels, pedals for the gas and brake, some kind of gear shifting, a panel
for changing interior conditions, and levers for the windshield wipers and
turn signals. Some of the work has even been automated away, as with, say, automatic gear shifting. The car is a living testament to
the durability of multiple input methods for complex machines.
“I had an engineer tell me recently that voice is going to replace
everything. And I looked at him like, ‘In what universe?’” Bell said to me.
“Yes, people like to talk, but if everyone is talking to everything all around
them, we’ll all go mad. We’re moving to this world where it’s not about
a single mode of interaction . . . The interesting stuff is about what the
layering is going to look like, not what the single replacement is.”
FORM FACTOR
The first thing Clay Shirky says when I ask him about the future of
phones is this: “Bizarrely, I don’t even remember why we were talking
about this, but my young daughter yesterday said, ‘Oh, cellphones are eventually going to be one [biological] cell big, and you can just talk into your hand.’ She totally internalized the idea that the container is going to keep shrinking. When a kid her age picks it up, it’s not like she’s been
reading the trade press. This is in the culture.”
The Incredible Shrinking Phone is certainly one vision for form-factor
changes. “One thing you can imagine is tiny little devices that are nothing
but multinetwork stacks and a kind of personal identifying fob that lets
you make a phone call from a Bluetooth device in your ear, or embedded in
your ear, or embedded in your hand, as my daughter would say,” he said.
But Shirky presented an alternative that is equally striking.
“And then the parallel or competitive future is the slab of glass, which
gets unbelievably awesome. Rollable and retina display is the normal case,”
he said. “Everyone has this rollable piece of plastic, something that works
like an iPad but can work like a phone when it’s rolled up.”
Look at the recent trend in phone design. All the screens are getting better. More unexpected is that many are also getting bigger. Sure,
the iPhone just got bigger, but the Samsung Galaxy Note II packs a 5.5-inch screen! (One sad consequence of this future would be the permanent dominance of cargo pants.) I’ve seen only one of these in the wild,
at Incheon Airport in Seoul. It seemed like a joke. This ultra-long
iPhone 20 actually is a joke. And yet . . . Unlike Steve Jobs’s vision of
Two Sizes Fitting All, it seems like all the screen sizes from four to nine
inches (and beyond?) are going to be filled with better-than-print resolution
devices.
The other ubiquitous referent for the phone form of the future is Google
Glass. I have to give kudos to Google for creating such an inescapable piece
of technology. No one can seriously discuss what things might look like
in 10 years without at least namechecking something that looks like this.
Google Glass—or its successors—will allow you to have a kind of heads-up display (maybe?) and life-logging recorder right on your face at all times. They are one vision of a phone that pushes hard on merging digital and physical information (“augmented reality”). In some sense, they are Shirky’s or Buckwald’s tiny fob plus a transparent screen that sits directly
What Will the "Phone" of
Look Like?
in front of your eye.
THREE VERY OUT-THERE SCENARIOS
So far, we’ve run through ideas that fit most of the trends and visions
of the past few years. But what if there is a far more radical departure from
our current paradigm?
It’s obvious that the future will be full of devices that connect to your
phone wirelessly. Playing music through Bluetooth on a JamBox or printing
from your phone is just the beginning. Last night, a friend of mine, the
writer Andy Isaacson, described a Burning Man camp in which 3-D printed
objects were delivered by helicopter drone to people who’d ordered them
on the Playa and agreed to carry a GPS tracker so the drone could find
them.
This is really happening today.
Already you can get a tiny helicopter and control it with
your iPhone. Wired‘s Chris Anderson is working on bringing the cost
of full-capability drones—DIY Drones—down to consumer levels. Already
you can have cameras installed in your home and monitor them from your
device. Already you can unlock a Zipcar with your phone. Already you
can control a Roomba with your phone. And none of this mentions
all the actual work you can do with tiny motors and actuators hooked
through the open-source Arduino platform. Add it all up, and your
phone could become the information hub that allows you to monitor and
control your fleet of robot data scavengers, messengers, and servants.
We tend to think of disruptions as coming out of ever-more-capable
technology, but what if the communication devices we actually use in the
future are ultra low-cost, close-to-disposable devices? Already, according to
the wireless-industry trade group CTIA, there are many millions of pay-as-you-go subscriptions in the United States. The capabilities and prices of these phones will continue to decline. Perhaps in 10 years you will be able to buy an iPhone 5’s worth of capability for pocket change.
One can imagine that a possible response to blanket digital- and
physical-data collection by individuals, corporations, and governments
would be to go lower tech and to change phones more often. While some
people may run around with a fob that makes sure their data is with
them all the time, others might elect to carry the dumbest, cheapest phone
possible. Imagine if the 2022 equivalent of “clearing your cookies” is buying
a new phone so that you’ll no longer be followed around by targeted
advertisements.
In China, having two or even three phones is not uncommon: one
survey found that a sizable percentage of Chinese mobile users have
two or more phones. IDEO’s Stillion imagined a less dystopian version of
the ubiquitous, low-cost phone model. He imagined we might just leave
phones to act as video cameras so that we could visit places we missed.
You could check in on the redwood forest from your desk. “You can visit
these things when you like, especially when there is some mechanism for
enhanced solar power,” he said.
So, that’s the low-tech scenario. But it’s certainly possible that we
could have a disruptive high-tech scenario. My bet would be on some
kind of brain-computer interface. As we wrote earlier this year, we are
just now beginning to create devices that allow you to control machines
with thought alone. A landmark paper was published in May showing quadriplegic patients controlling a robotic arm. “We now show that
people with long-standing, profound paralysis can move complex realworld machines like robotic arms, and not just virtual devices, like a dot
on a computer,” said one of the lead researchers, the Brown University
neuroscientist John Donoghue. Despite the success, our David Ewing
Duncan explained that the technology wasn’t quite ready for prime time.
[Donoghue] and colleagues at Brown are working to eliminate
the wires and to create a wireless system. They are conducting
work on monkeys, he said, but still need FDA approval for
human testing.
The work is still years away from being ready for routine
use, said Leigh Hochberg, a neurologist at the Massachusetts
General Hospital in Boston and another principal of the
Braingate project. “It has to make a difference in people’s lives,
and be affordable,” he said. The scientists also need to replicate
the data on more people over a longer period of time.
But hey, 10 years is a long time, DARPA has long been interested, and a brain-computer interface would provide a nice route around the difficult problems of computers communicating in our complex, ever-evolving languages. Plus, we wouldn’t have to listen to everyone talk to
his or her mobile concierge.
LIMITS
There are two big limits on our dreams for the future of phones:
energy storage and bandwidth. Batteries have improved remarkably over
the past decade, and Steven Chu’s ARPA-E agency wants to create radical
breakthroughs in storing electrons. But it’s not easy, and if we want some
of the wilder scenarios to become realities, we need much better batteries.
One reason to be optimistic here is that materials science has hitched
its wagon to Moore’s Law. Experiments in the field are being carried
out in computer simulations, not the physical world, so they are much,
much faster. The MIT materials scientist Gerbrand Ceder told me long
ago, “Automation allows you to scale.” I wrote about his Materials Genome
Project in my book, Powering the Dream. Ceder said “it wasn’t the Web,
per se, that brought us the wonder of the Web. Rather it was the automation
of information collection by Web crawlers that has made the universe of
data accessible to humans.” And that’s what his team (and others) is trying
to do with the information embedded in the stuff of the physical world.
The other big hang-up is network bandwidth. We all know that cellular-network data is slow. Many of us simply work around that by using our
phones on WiFi networks. But that’s not how it is all over the world. Korea,
famously, has very fast mobile broadband.
“If you go on the subway in Seoul, there are people watching live-streaming television underground,” Shirky said. “You get on the New York subway and I can’t send a text message to my wife . . . You want to know what the American phone in 2022 is? Imagine what it’s going to be like in Seoul in 2012.”
Shirky then reconsidered. “Actually, I’m not sure I’ll be able to watch streaming television on my phone under the East River a decade from now,” he said. “I may not be able to do what they took for granted in Seoul in 2012.”
Mark this down as one area where countries with certain geographical
features and feelings about government infrastructure spending may have
a harder time realizing the possibilities that the technology allows.
The last limit is softer—a privacy backlash—even though, so far, we
have no real evidence of the tide turning here in the United States. For
all our computing devices allow us to do, what they ask for in return is a
radical loss of privacy. Every person recording a scene with Google Glass
is changing the implicit social contract with everyone in his or her field of
view. “Surprise! You’re permanently on Candid Camera.” When a guy who
got billed as the world’s first cyborg because he wears a DIY version of
Google Glass got beat up at a McDonald’s in Paris, his eye camera got a
look at the face of the guy who did it. He says the recording was a malfunction, but still, an image was recorded on a device—and now he can use it in a way that no one not wearing an eye cam could.
What if whole cities go “recording free,” like Berkeley is “nuclear free”?
If the pervasive data-logging endemic online comes to the physical world
(and it will!), how will people react to create spaces for anonymity and
impermanence? What kinds of communal norms will develop, and how
might those change the types of technology on offer? This might never
happen, but don’t say I didn’t warn you.
MY MIND ON MY PHONE, MY PHONE ON MY MIND
The last thing I want to say is that all of these technologies are
most important for how they get us to change how we think about the
world. That is to say, the big deal about social networks isn’t just that we
can communicate with the people we know from high school, but that
people start to think about organizing in different ways, imagining less
hierarchical leadership structures.
In the phone realm, I’ll use just two examples from this story, the Leap
Motion gesture controller and Google Glass, to explain what I mean.
I watched a demo of Leap Motion on The Verge featuring Buckwald’s co-founder, David Holz. On his screen is a 3-D virtual environment. Holz uses his hand to grab something on, or rather in, the screen. “Imagine
reaching into a virtual space and being able to move things around in a very
natural, physical way,” Holz says. “Here I’m able to grab space and move
it.”
It’s that prepositional change—in not on, into not onto—that signals a major shift in how we might actually come to feel about computing in general. Somehow, a 3-D environment becomes much more real when you can manipulate it like a physical space. A tactile sense of depth is the last trick we need in order to feel as if “cyberspace” is an actual space.
Meanwhile, Google Glass, no matter how Google is couching
it now, is exciting precisely because it’s about mashing the physical and
virtual realms together—in a sense, making one’s experience of the world
at large more like one’s experience of a computer.
These projects are augmented reality from two directions, one making
the digital more physical, the other making the physical more digital.
Having opened up a chasm between the informational and the material,
we’re rapidly trying to close it. And sitting right at the intersection between
the two is this object that we call a phone, but that is actually the bridge
between the offline and the online. My guess is that however the phone looks, whoever makes it, and whatever robot army it controls, its role in the years ahead will be to marry our flesh and data ever more closely.
36.
How Do You Publish a City?
GOOGLE AND THE FUTURE OF AUGMENTED REALITY
by Alexis C. Madrigal
It is The Future. You wake up at dawn and fumble on the nightstand for
your (Google) Glass. Peering out at the world through transparent screens,
what do you see?
If you pick up a book, do you see a biography of its author, an analysis
of the chemical composition of its paper, or the share price for its publisher?
Do you see a list of your friends who’ve read it or a selection of its best
passages or a map of its locations or its resale price or nothing? The problem
for Google’s brains, as it is for all brains, is choosing where to focus
attention and computational power. As a Google-structured augmented
reality gets closer to becoming a product-service combination you can buy,
the particulars of how it will actually merge the offline and online are
starting to matter.
To me, the hardware (transparent screens, cameras, batteries, etc.) and
software (machine vision, language recognition) are starting to look like
the difficult but predictable parts. The wildcard is going to be the content.
No one publishes a city; they publish a magazine or a book or a news site.
If we’ve thought about our readers reading, we’ve imagined them at the
breakfast table or curled up on the couch (always curled up! always on the
couch!) or in office cubicles running out the clock. No one knows how to
create words and pictures that are meant to be consumed out there in the
world.
This is not a small problem.
***
I’m sitting with Google’s former maps chief John Hanke in the company’s San Francisco offices looking out at the Bay’s islands and bridges,
which feel close enough to touch. We’re talking about Field Trip, the
new Android app his “internal start-up” built, when he says something that
I realize will be a major theme of my life for the next five or ten years. Yours too, probably.
But first, let me explain what Field Trip is. Field Trip is a geopublishing
tool that gently pushes information to you that its algorithms think you
might be interested in. In the ideal use case, it works like this: I go down
to the Apple Store on Fourth Street in Berkeley, and as I get back to my
car, I hear a ding. Looking at my phone, I see an entry from Atlas
Obscura, which informs me that the East Bay Vivarium—a reptilian
wonderland that’s part store, part zoo—is a couple blocks away.
That sounds neat, so I walk over and stare at pythons and horned dragons
for the next hour. Voilà. “Seamless discovery,” as Hanke calls it.
Dozens of publishers are tagging their posts with geocodes that Field
Trip can hoover up and send to users now. Hanke’s team works on finding
the right moment to insert that digital information into your physical
situation.
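Mechanically, there isn’t much magic to a geopublishing feed: entries carry latitude and longitude, and a client decides when you’re near enough to be worth pinging. Field Trip’s actual code and thresholds aren’t public, so what follows is only a minimal sketch of that nearness test, with made-up entries and coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    r = 6371.0  # mean Earth radius, km
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Made-up geocoded entries, the way a publisher might tag them.
posts = [
    {"title": "East Bay Vivarium", "lat": 37.8706, "lon": -122.2997},
    {"title": "Niantic plaque, Clay and Sansome", "lat": 37.7946, "lon": -122.4011},
]

def nearby(lat, lon, posts, radius_km=0.5):
    # Surface only entries within easy walking distance of the reader.
    return [p for p in posts
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]

print(nearby(37.8715, -122.3010, posts))  # standing near Fourth Street, Berkeley
```

The hard part, as Hanke’s team would tell you, isn’t this arithmetic; it’s deciding which of the hits deserve a ding.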
And when it works well, damn does it work well.
It’s only a slight exaggeration to say that Field Trip is invigorating. It
makes life more interesting. And since I switched back to my iPhone after
a one-week Android/Field Trip test, it’s the one thing that I really miss.
At first, I was tempted to write off this effort as a gimmick, to say that
Field Trip was a deconstructed guide book. But the app is Google’s probe
into the soft side of augmented reality. What the team behind it creates and
discovers may become the basis of your daily reality in five or ten years.
And that brings me back to Hanke’s comment, the one you could devote a
career to.
“You’ve got things like Google Glass coming. And one of the things
with Field Trip was, if you had [Google Glass], what would it be good for?,”
Hanke said. “Part of the inspiration behind Field Trip was that we’d like
to have that Terminator or Iron Man–style annotation in front of you, but
what would you annotate?”
There’s so much lurking in that word, annotate. In essence, Hanke is
saying: What parts of the digital world do you want to see appear in the
physical world?
If a Field Trip notification popped up about John Hanke, it might tell
you to look for the East Bay hipster with floppy hair almost falling over his
eyes. He looks like a start-up guy, and admits to being one, despite his eight
years at Google. He refers to its co-founders as though they’re old college
friends. (“Sergey was always big on ‘You should be able to blow stuff up [in Google Earth].’”) Not a kid anymore, Hanke sold an early massively multiplayer-online-gaming company to the legendary Trip Hawkins in the ’90s, then co-founded Keyhole, which became the seed from which
Google’s multi-thousand-person map division grew.
When maps got too big for Hanke’s taste, he “ultimately talked with
Larry [Page],” and figured out how to create an “autonomous unit” to play
with the company’s geodata and create novel, native mobile experiences.
This is Google’s Page-blessed skunkworks for this very specific problem.
Hanke’s group is Google, but it has license to be unGoogle.
“You don’t want to show everything from Google Maps. You don’t
want to show every dry cleaner and 7-Eleven in a floating bubble,” Hanke said. “I want to show that incremental information that you don’t know. What would a really knowledgeable neighborhood friend tell you about the neighborhood you’re moving through? He wouldn’t say, ‘That’s a 7-Eleven. That’s a fire hydrant.’ He would say, ‘Michael Mina is opening this new place here, and they are going to do this crazy barbecue thing.’”
Some companies, like Junaio, are working on augmented-reality
apps that crowd-source location intelligence through Facebook Places and
Foursquare check-ins. Hold up your phone to the world, and it can tell you where your friends have been. It’s a cool app, and certainly worth
trying out, but there isn’t much value in each piece of information that
you see. The information density of that augmented reality is low, even if
it is socially relevant. If you’re opting into 24/7 augmented reality through
something like Glass, that cannot be the model.
***
Google offered up a vision of how Glass might be used in a video it
released earlier this year, to pretty much universal interest. But consider
the cramped view of augmented reality you see in that video. What
information is actually overlaid on the world? The weather, the time, an
appointment, a text message, directions, interior directions (say, within a
bookstore), a location check on a friend, a check-in.
You can see why Google would put this particular vision out there.
It’s basically all the stuff it’s already done, repackaged into this new user
interface. Sure, there’s a believable(ish) voice interface and a cute narrative
and all that. But of all the information that could possibly be seamlessly
transmitted to you from/about your environment, that’s all we get?
I’m willing to bet that people are going to demand a lot more from their
augmented-reality systems, and Hanke’s team is a sign that Google might
think so too. His internal start-up at Google is called Niantic Labs, and if
you get that reference, you are a very particular kind of San Francisco nerd.
The Niantic was a ship that came to California in 1849, got converted into a store, burned in a fire, and was buried in the city. Over the decades that followed,
the ship kept getting rediscovered as buildings were built and rebuilt at
its burial site. Artifacts from the ship now sit in museums, but a piece of
the bow remains under a parking lot near the intersection of Clay
and Sansome, in downtown San Francisco.
Now, not everyone is going to want to know the story of the Niantic,
or at least not as many people as who want to know about the weather.
And the number of people who care about a story like that—or one about
a new restaurant—will be strongly influenced by the telling. The content
determines how engaging Field Trip is. But content is a game that Google,
very explicitly, does not like to play. Not even when the future prospects of
its augmented-reality business may be at stake.
The truth is, most of the alerts that Field Trip sent me weren’t right for
the moment. I’d get a Thrillist story that felt way too boostery outside its e-mail-list context. Or I’d get a historical marker from an Arcadia Publishing
book that would have been interesting, but wasn’t designed to be consumed
on my phone. Many of the alerts felt stilted, or not nearly as interesting as
you’d expect (especially for a history nerd like me). You can hand-tune the
sorts of publications that you receive, but of the updates I got, only Atlas
Obscura (and Curbed and Eater, to a lesser extent) seemed designed for this
kind of consumption. Nothing else seemed to want to explain what might
be interesting about a given block to someone walking through it; that’s just
not anyone’s business. And yet stuff that you read on a computer screen at
home has got to be different from stuff that you read in situ.
What happens when the main distribution medium for your work
is pushing it to people as they stumble through the Mission or around
Carroll Gardens? What possibilities does that open up? What others does it
foreclose?
“Most of the people that are publishing now into Field Trip are
publishing it as a secondary feed,” Hanke told me. “But some folks like Atlas
Obscura. They are not a daily site that you go to. They are information on
a map. They are an ideal publishing partner.”
They are information on a map. That’s not how most people think of
their publications. What a terrifying vision for those who grew up with
various media bundles or as Web writers. But it’s thrilling, too. You could
build a publication with a heat map of a city, working out from the most
heavily traveled blocks to the ones where people rarely stroll.
Imagine you’ve got a real-time, spatial-distribution platform. Imagine
that everyone reading about the place you’re writing about is standing right
in front of it. All that talk about search-engine and social optimization?
We’re talking geo-optimization, each story banking on the shared experience of bodies co-located in space.
***
What role will Google play in all this? Enabler and distributor, at least
according to the company’s current thinking. And on that score, it has a
few kinks to work out.
First, in an augmented-reality world, you need really good sound
mixers. Too often, Field Trip would chime in and my music would cut as I
walked through the streets. This is a tough request, but I want to be informed
without being interrupted. It’s a small thing, but the kind of thing that
makes you bestow that most hallowed of compliments: “It just works.” Take
this also as an example of how important all kinds of “mixes” are going to
be. An augmented-reality annotation will be a live event for the reader; the
production has to be correct.
Second, there is a reason that the Google Glass video was parodied with
a remix that put ads all over the place: no one believes Google will make
products that don’t create a revenue stream. Hanke’s got at least a partial
answer to this one. Certain types of cool services, like Vayable, a kind
of Airbnb for travel experiences, will be a part of the service. And that’s
good, even if I do expect that a real Google Glass will look as much like the
parody video as the original.
Even if you don’t mind the ads, Google will have to master the process
of showing them to you. That’s something that Niantic is putting a lot of
thought into. Hanke frames it as a search for the right way to do and show
things automatically on the phone. We’re used to punching at our screens,
but in this hypothetical future, you’d need a little more help.
“This seamless discovery process, doing things automatically on the
phone,” he said. “I think it’s a whole new frontier in terms of user interface.
What’s the right model there? How do you talk to your user unprompted?”
Hanke’s not the only person at Google thinking about these things, even
if he is one of the most interesting. Google Now, the personal-assistant app
unveiled in June, is traveling over some of the same territory. Its challenge is
to automatically show you the pedestrian things you might want to know.
“Google Now is probably the first example of a new generation of
intelligent software,” Hugo Barra, the director of product management
for Android, told me. “I think there will be a lot more products that are
similarly intelligent and not as demand-based.”
So if you search for a flight, Google Now will track when the flight is
due to leave. It’ll show you sports scores based on the city you’re in. It can
tell you when you need to leave for appointments based on current traffic
conditions. And it will tell you the weather. “We’ve unified all these back
ends. Things you’ve done in [search] history—the place where you are, the
time of the day, your calendar. And in the future, more things, more signals,
the people you’re with. Google can now offer you information before you
ask for it,” Barra continued. “It’s something the founders have wanted to do
for a long time.”
Google Now, in other words, is the base layer of the Glass video, or of
any Google augmented-reality future. It’s the servant that trains itself. It’s
the bot that keeps you from having to use your big clumsy thumbs.
In my week of testing, I liked Google Now, but I didn’t love it. Very
few “automagic” things happened, even after a week of very heavy use. I
rarely felt as if it was saving me all that much time. (Anyone have this
experience with Siri, too? I sure have.) And while the traffic alerts tied to
my calendar were legitimately awesome, if Google Now’s info were the
only info embedded in my heads-up display, I’d be seriously disappointed.
Again, as with Field Trip (not to mention Junaio), the problem is content.
Google’s great with structured data—flight times, baseball box scores—but
it’s not good with the soft, squishy, wordy stuff.
***
Perhaps a writer’s task has always been to translate what is most
interesting about the world into a format that people can understand
and process quickly. We perform a kind of aggregation and compression,
zipping up whole industries’ fortunes in a few short sentences. But if an
augmented-reality future comes to pass—and I think it will in one form or
another—this task will really be laid bare. Given a city block, the challenge
will be to excavate and present the information that the most people are
curious about at the precise moment they walk through it. Information on
a map at a specific time.
Some of what legacy (print and digital) media organizations produce
might work nicely: Newspapers publish (small amounts of) local news
about a city. Patch provides some relevant updates about local events. Some
city weeklies (OC Weekly!) do a fantastic job covering shows and scenes
and openings (while muckraking along the way). But everyone is still
fundamentally writing for an audience made up of people who they expect
are at their computers or curled up on the couch. Their core enterprise is
not to create a database of geo- and time-tagged pieces of text designed to
complement a walk (or drive) through a place.
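No publisher maintains such a database today, but its shape is easy to imagine. Here is a minimal sketch, assuming nothing more than SQLite and hypothetical field names: every piece of writing carries coordinates plus the window of hours when it’s worth pushing to a passerby.

```python
import sqlite3

conn = sqlite3.connect("city_notes.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS notes (
        id         INTEGER PRIMARY KEY,
        body       TEXT NOT NULL,      -- the annotation, written to be read in situ
        lat        REAL NOT NULL,      -- where it should surface
        lon        REAL NOT NULL,
        start_hour INTEGER DEFAULT 0,  -- local hours when it is relevant
        end_hour   INTEGER DEFAULT 24,
        source     TEXT                -- publisher, e.g. 'Atlas Obscura'
    )
""")
conn.commit()

def notes_near(lat, lon, hour, box=0.005):
    # Crude bounding-box query: good enough for a city block.
    cur = conn.execute(
        """SELECT body, source FROM notes
           WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
             AND start_hour <= ? AND end_hour > ?""",
        (lat - box, lat + box, lon - box, lon + box, hour, hour))
    return cur.fetchall()
```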
What you need are awesome “digital notes” out there in the physical
world. That’s what Caterina Fake’s Findery (née Pinwheel) is trying to
create. People can leave geocoded posts wherever they want, and then other
people can discover them. It, like Junaio, is very cool. But the posts lack the
kind of polish that I’d voluntarily opt into having pushed to my AR screen.
I wouldn’t want to have to sift through them to find the good stuff.
To me, in the extremely attention-limited environment of augmented
reality, you need a new kind of media. You probably need a new noun to
describe the writing. Newspapers have stories. Blogs have posts. Facebook
has updates. And AR apps have X. You need people who train and get
better and have the time to create perfect digital annotations in the physical
world.
Fascinatingly, such a scenario would require the kind of local knowledge that newspaper reporters used to accumulate, and pair it with the
unerring sense of raw interestingness that the best short-form magazine
writers, bloggers, tweeters, and Finderyers cultivate.
Back to the future.
37.
Pivoting a City
CAN STARTUPS HELP MORE THAN THEMSELVES?
by Alexis C. Madrigal
I have a tip for coastal-dwellers traveling to the brick-and-steel cities of the
Rust Belt. It is a lame trick, and I am ashamed to admit that I used it. But
it is useful and it is my duty as your faithful correspondent in the field to
share it with you.
If you find yourself in Pittsburgh, say, on a Saturday morning, and you
want to get a quick tour of the neighborhoods in which you might find some
interesting things, here’s the shortcut: Go to Yelp. Type “hipster coffee” into
the search box. Up comes a map of places that other Yelpers have helpfully
labeled “hipster,” which tends to mean places where dudes in funny t-shirts
bring their laptops to work. These are the unofficial co-working spaces of
the town.
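For the curious, you can run the same probe programmatically. This sketch uses Yelp’s present-day Fusion search endpoint, which postdates this piece, and assumes you’ve registered for an API key:

```python
import requests

API_KEY = "YOUR_YELP_API_KEY"  # issued on Yelp's developer site

resp = requests.get(
    "https://api.yelp.com/v3/businesses/search",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"term": "hipster coffee", "location": "Pittsburgh, PA", "limit": 20},
)
resp.raise_for_status()

# Each hit is a candidate "unofficial co-working space"; the addresses
# sketch the neighborhoods worth a Saturday-morning walk.
for biz in resp.json()["businesses"]:
    print(biz["name"], "-", ", ".join(biz["location"]["display_address"]))
```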
In a city like Pittsburgh, this data probe also tends to highlight the neighborhoods where such people live. This simple search pinpointed the
Strip (historic district), Lawrenceville (the New York Times’ “go-to destination”), the North Side (home of The Mattress Factory Art Museum),
the South Side (the dense home of the accelerator AlphaLab), Squirrel
Hill (east of Carnegie Mellon), and Garfield (home to sundry art galleries,
Awesome Books, and the Center for PostNatural History).
It’s important to note that you don’t even have to like hipster coffee to
deploy this glowing tracer for MacBook Airs. It is just a sign that where
startups go, a very particular kind of culture goes with them. You get
fancy coffee from individual coffee plantations. You get an Apple store. You
get Belgian beer places. You get vintage shops where you can buy many
different things made of teak.
But can start-up culture change a city?
Pittsburgh is a great place to investigate the possibilities of a startup-led urban resurgence because, of all the cities between the coast and
Chicago, it’s the one that’s farthest along the path towards techdom. It’s got
a world-leading research institution that focuses on artificial intelligence.
Because the steel mills collapsed so quickly and so thoroughly, its leaders
were forced to put together a long-term plan for the city’s future. Here’s
how the New York Times summarized the situation a few years ago:
“If people are looking for hope, it’s here,” said Sabina Deitrick,
an urban studies expert at the University of Pittsburgh. “You
can have a decent economy over a long period of restructuring.”
Pittsburgh’s transition has been proceeding for decades in
fits and starts, benefiting some areas much more than others.
A development plan begun in the 1980s successfully used the
local universities to pour state funds into technology research.
And much of this story is real. Pittsburgh is a vibrant, fun place with
cool neighborhoods, lots of young people, excellent universities, beautiful
housing stock, strong tech companies. It seems like a great place to be an
entrepreneur.
But can these entrepreneurs become the backbone of this city? Can they
own its problems, not just its advantages?
Startups all over the country tend to be very white. And Pittsburgh,
like many other major cities, has even more acute black unemployment
problems than it does general ones. Unemployment data isn’t broken out by
city and race, but nationally, black unemployment was almost twice that of
whites in 2011, peaking at 16.7 percent (!) in August, according
to the Bureau of Labor Statistics.
So, as long as we’re thinking about scale, the biggest challenge facing
Pittsburgh isn’t how to make a vibrant startup scene (though that’s not easy
either) but how to make one whose benefits extend beyond the edges of the start-up bubble.
***
I’m outside StartUptown, a sprawling co-working facility run by Dale McNutt, who lives here, too. This is ground zero for where Pittsburgh problems meet its new solutions. McNutt’s been renovating the place for years, and it shows. The brick buildings are now set in a
wonderful garden, and the whole place just sparkles with DIY flourishes.
Around me, the streets are mostly deserted. Most of the houses seem
occupied, but in poor condition. There are few businesses. Kitty-corner from
StartUptown, there is a mental health facility and the Jubilee Soup
Kitchen.
This area, which McNutt calls Uptown, might fairly be termed the
lower reaches of The Hill, which was, in essence, the Harlem of Pittsburgh. The story of The Hill is sad. The community was leveled by an ill-considered redevelopment plan in the 1950s that displaced thousands of families.
It had a “devastating” impact on the community, one from which it still
has not recovered.
On the other side of Uptown is the Bluff, where steel mills once
lined the banks of the river. There’s not a steel mill left in Pittsburgh
proper, but the old sites are now home to economic development groups
like the Pittsburgh Technology Council and Innovation Works, in
addition to companies and university facilities they’ve helped bring to town.
They even named the street running adjacent to the river “Technology
Drive.”
Downtown is precisely a mile to the west, and Oakland, home of the
University of Pittsburgh and Carnegie Mellon, lies a couple miles to the east.
The street names around here sound aspirational to an outsider: Forbes runs
east, Fifth Avenue runs west.
Inside, StartUptown is a cross between a traditional co-working space
and something with more soul. The walls are held together by clamps.
There’s a crazy chandelier made from industrial supplies hanging from
the ceiling, which has the punched tin roof of an artisan cocktail place.
Southern exposure sends daylight streaming in all day long. And McNutt’s
poodle will happily nuzzle any passersby.
It would be hard not to like this place or Dale McNutt. He clearly
used to be/is an artist at heart — he knows the sculptors in town and
recalls the art program at Carnegie Mellon with appropriate nostalgia —
but now he’s taken to tending entrepreneurs. Plenty of people move to
rough neighborhoods thinking they’re going to fix up a place; only tough
optimists like McNutt can manage to stay.
He’s arranged for a couple of companies to meet with me. Both
companies are growing and interesting.
AllPoint lets you take 3-D images and transform them into architectural renderings with its software. “We do really rapid 3-D survey and data
capture,” founder Aaron Morris tells me. “A lot of it is going into digital
capture for retrofit design for manufacturing facilities that want to put
new equipment in.” Morris got a PhD from CMU and worked on “an
autonomous robot program that mapped underground spaces
with 3D LiDAR.”
The other founder I met was Robb Meyer, whose company created
the NoWait app, which helps restaurants that don’t take reservations
to manage their wait times. It’s in use at some of the country’s hottest
restaurants, like David Chang’s Momofuku in New York.
They are two very impressive entrepreneurs, as good as I’ve seen on this
trip. We take a quick trip into the basement to meet Harold Lessure, a former CMU physics researcher who has spun out the company Lechtzer to
make ultra-sensitive natural gas detectors. His work space is a crazy lab
filled with doodads for making stuff and prototypes in various states of
assembly. With the boom in natural gas across the United States, it is a
good time to be in the methane-detection business, Lessure tells me.
Lessure is just one of many people I met in Pittsburgh who remind you
why having a premier research university is so important to a city that
wants startups. They can just do and make things that other people can’t.
They might not hit on something like Google’s “backrub” algorithm, but
they can make best-in-class products that businesses will want to buy with
American dollars, not Klout points.
***
Our next stop is Google Pittsburgh.
Google obviously isn’t a startup. But having Google put down some
roots in your city is like having Warren Buffett invest in your company. It’s
a mark of distinction. It’s a mark of value. Not only that, the offices are
growing. There are hundreds of Googlers in Pittsburgh now.
The company’s offices in Pittsburgh are stone-cold beautiful. Set up in an old Nabisco factory, the office is actually better looking than the Mountain View buildings. There are beehives on the roof, and a chicken
coop awaiting tenants. There’s a trapeze net hanging above a corner of
the office that you can have meetings in. (I jumped in. It’s harder to walk
on a net than it looks.) The cafes and event spaces are nice. As office
manager Cathy Serventi and product manager/native Pittsburgher Mike
Capsambelis showed us around, you could feel that this office was a proof
point that tech was changing Pittsburgh. For the good.
Carolina Pais-Barreto Beyers, a VP at Urban Innovation21, who is
driving me around, wants to make sure that I see the part of the city that
has not been transformed, just half a mile away. Her organization has a
fascinating mission. They want to connect startup hubs with the broader
community. “We want to make sure everybody does well,” Beyers told me.
They want to find ways for startups to create jobs not just for people with
CS degrees from Carnegie Mellon and Stanford, but people who thought
they’d work in steel mills, people who didn’t finish high school, people
who have grown up watching their communities ripped apart by dumb
development decisions, drugs, and the decline of the industrial economy.
Half a mile from Google’s gorgeous building, just across the East Busway, you find corners that look nothing like it.
Even in Silicon Valley, the hub of startup activity, you can cross the border from Stanford’s home town into East Palo Alto, a community where 96
percent of kids qualify for free or reduced lunches, a measure that indicates the high level of poverty in the community. It’s actually
hard to find the levels of poverty and misery that you’ll find in San
Francisco’s Tenderloin anywhere else in America. And along the route that
many drive, bus, or train from San Francisco to the valley, you can find the
homeless camped under the freeway and wandering along its stretches.
Perhaps it is too much to ask of a single industry that it create stability
for an entire region. And that’s fair. But in city after city, we’ve found that
entrepreneurship has become a central tenet of local economic policy. And
yet the literature on how startups can grow a local economy is skimpy. I’m
not saying that it doesn’t happen, but simply that there aren’t a lot of good
studies showing precisely how this is all supposed to work.
On the other hand, what else are you going to do? Build arenas? Recruit
companies from neighboring regions by promising them huge tax breaks? If
there is a proven strategy for lifting large numbers of people out of poverty
in an urban area, cities sure don’t seem to be deploying it at the scale of the
problem.
The main policy vehicle for Urban Innovation21’s work is the Pittsburgh Central Keystone Innovation Zone that surrounds StartUptown and
pieces of The Hill and the North Side. They describe it like this:
PCKIZ orchestrates a combination of tax incentives,
entrepreneurial resources, educational and internship
programs, networking events, and technology showcases. Its
goal is to multiply technology and economic development
activities, creating economic sustainability and transforming
central Pittsburgh into a vibrant community.
This method of development (“economic gardening,” people sometimes call it) sure seems like a better idea than the ones we’ve had in the
past about how to help underserved populations. But you know what they
say about good intentions.
The one thing you hear over and over from entrepreneurs is that the
idea hardly matters. Winning or losing is in the execution, day after day,
challenge after challenge. Perhaps there are hundreds of ways to connect
ghettos and Google. But who wants to put that effort in when there is no
exit, no acquisition, no IPO? The reward is just a thriving, safe, equitable
city. And where are the legions who are actually willing to work that hard
for the noble, capital-inefficient goal?
The mentality that dominates startup culture is all about efficiency.
Finding better, cheaper, faster ways of doing things. There’s nothing
lean, though, about providing mental health services or soup kitchens for
starving people. I wonder how well such a worldview can deal with the
legacy problems of big cities. I think the jury is out. But judging from
the sheer magnitude of startup experiments across the whole Rust Belt,
we’re going to have a lot more data soon. I’m rooting for Pittsburgh to
become that model of development. Because if they can’t do it, I’m not
sure anybody can.
38.
Voyager at the Edge of the
Solar System
WE’RE ON THE CUSP OF ONE OF THE GREATEST SCIENTIFIC ACCOMPLISHMENTS OF ALL TIME.
by Rebecca J. Rosen
Last week, in the corners of the Internet devoted to outer space, things
started to get a little, well, hot. Voyager 1, the man-made object farthest away from Earth, was encountering a sharp uptick in the number of a certain kind of energetic particle around it.
Had the spacecraft become the first human creation to “officially” leave the
solar system?
It’s hard to overstate how wild an accomplishment this would be: a
machine, built here on Earth by humans, had sailed from Florida, out
of Earth’s orbit, beyond Mars, and past the gas giants of Jupiter and
Saturn, and may now have left the heliosphere—the tiny dot in the universe
beholden to our sun. Had it really happened? How would we know?
We’re not quite there yet, Voyager’s project scientist and former head
of NASA’s Jet Propulsion Lab, Edward Stone, told me. The spacecraft is on
its way out—”it’s leaving the solar system”—but we don’t know how far it
has to go or what that transition to interstellar space will look like.
Voyager 1 launched in 1977. Today, it is about 120 astronomical units away (one astronomical unit is roughly equal to the distance between the sun and the Earth). The radio signals it transmits take about 17 hours to reach us. (Voyager 2 is approximately seven years behind.) Voyager 1 is traveling at about 17 kilometers a second (38,000 miles an hour), propelled by the slingshot effect created by flying past Jupiter and Saturn. (“It’s well above escape velocity,” Stone said.) The spacecraft’s cameras have been turned off since 1990, when they took the pictures for the famous Family Portrait mosaic capturing the planets as Voyager looked back over the solar system it had traveled across.
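The signal delay falls straight out of the distance, as a back-of-the-envelope check shows (taking roughly 120 astronomical units as the distance at the time):

```python
AU_KM = 149_597_870.7   # one astronomical unit, in kilometers
C_KM_S = 299_792.458    # speed of light, in km/s

distance_au = 120       # Voyager 1's approximate distance at the time
delay_hours = distance_au * AU_KM / C_KM_S / 3600
print(f"one-way signal delay: {delay_hours:.1f} hours")  # about 16.6 hours
```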
Now the data coming back aren’t photographs but levels of different
kinds of particles on the outer edge of the sun’s bubble (the heliosphere),
known as the heliosheath, the farthest point the solar winds reach. Voyager
entered the heliosheath in December 2004. And it was some of those
data—the levels of a certain cosmic-ray particle—that provoked the recent
speculation that Voyager had finally flown the coop.
Some cosmic-ray particles enter the heliosphere, and we can see them
here from Earth. But a slower type has a hard time entering the heliosphere.
Last month, the sum of those slower particles suddenly ticked up, “the fastest increase we’ve seen,” Stone said. But an uptick does not
mean Voyager has crossed over, though it does mean we’re getting close.
When Voyager does finally leave and enter the space “out there where all
the particles are,” the level will stop rising. The rising itself tells us that
Voyager is not out there yet. “But,” cautioned Stone, “we don’t know. I mean,
this is the first time any spacecraft has been there.” Since nothing’s ever
been beyond the heliosphere before, we don’t know what the crossover
point looks like, which makes recognizing it at all a bit difficult. “That’s the
exciting thing.”
Two other indicators that Voyager has left the heliosphere—an absence
of certain low-energy particles that don’t leave our system, and a change in
the magnetic field—have not yet happened, though there have been some
decreases in those particles’ numbers. To complicate matters even further,
beyond the heliosphere, in interstellar space, there are comets that orbit the
sun and are thus part of the solar system.
It would be satisfying if, at the edge of the heliosphere, there was, well,
an actual edge, a boundary between our bubble and the cosmos. But it’s
probably not so cut-and-dried. “The boundary,” Stone postulated, “will not
be an instantaneous thing. [Voyager] won’t suddenly be outside.” Rather,
the exit will be turbulent, “a mix of inside and outside.” Stone and the other
Voyager scientists are trying to square the different data—the particles and
the magnetic field—to try to understand what that transition from inside
to outside looks like. That turbulent region may take several months to get
through.
But even without a clean break in the offing, it’s hard not to sit on
the edge of your seat to wait for this moment—this months-long moment.
“We’re looking at our data every day—we listen to these spacecraft every
day, for a few hours every day—to keep track of what’s going on,” Stone
said. “It’s very exciting from a scientific point of view, when you’re seeing
something that nobody’s seen before.”
So perhaps Voyager won’t make its mark with a sudden, defining event
that echoes across generations as a sort of before-and-after dividing line
through human history, like the line separating the time when a human’s
voice had never traveled across a wire to an ear miles away, and when it
had; or before a human foot had left its imprint on the moon, and when it
had. But Stone is okay with that: “Voyager has had a lot of those moments
as we flew by Jupiter, Saturn, Uranus, and Neptune. One after the other, we
found something that we hadn’t realized was there to be discovered.”
39.
When the Nerds Go Marching In
HOW A DREAM TEAM OF ENGINEERS FROM FACEBOOK, TWITTER, AND GOOGLE BUILT THE SOFTWARE THAT DROVE BARACK OBAMA’S REELECTION
by Alexis C. Madrigal
The Obama campaign’s technologists were tense and tired. It was game
day, and everything was going wrong.
Josh Thayer, the lead engineer of Narwhal, had just been informed that
they’d lost another one of the services powering their software. That was
bad: Narwhal was the code name for the data platform that underpinned
the campaign and let it track voters and volunteers. If it broke, so would
everything else.
They were talking with people at Amazon Web Services, but all they
knew was that they had packet loss. Earlier that day, they’d lost their
databases, their East Coast servers, and their memcache clusters. Thayer
was ready to kill Nick Hatch, a DevOps engineer who was the official bearer
of bad news. Another of their vendors, PalominoDB, was fixing databases,
but needed to rebuild the replicas. It was going to take time, Hatch said.
They didn’t have time.
They’d been working long days, six or seven days a week, trying to
reelect the president, and now everything had broken at just the wrong
time. It was like someone had written a Murphy’s Law algorithm and
deployed it at scale.
And that was the point. “Game day” came in late October, with the election only weeks away, and it was a live-action role-playing (LARPing!) exercise that the campaign’s chief technology officer, Harper Reed, was inflicting on his team. “We worked through every possible disaster situation,”
Reed told me after the election. “We did three actual all-day sessions of
destroying everything we had built.”
Hatch was playing the role of dungeon master, calling out devilishly
complex scenarios designed to test each and every piece of their system
as they entered the exponential traffic-growth phase of the election. Mark
Trammell, an engineer whom Reed hired after he left Twitter, saw a couple
game days. He said they reminded him of his time in the Navy. “You ran
firefighting drills over and over and over, to make sure that you not just
know what you’re doing,” he said, “but you’re calm because you know you
can handle your shit.”
The team had elite and, for tech, senior talent—by which I mean that
most of them were in their 30s—from Twitter, Google, Facebook, Craigslist,
Quora, and some of Chicago’s own software companies, such as Orbitz
and Threadless, where Reed had been CTO. But even these people, maybe
especially these people, knew enough about technology not to trust it. “I
think the Republicans fucked up in the hubris department,” Reed told me.
“I know we had the best technology team I’ve ever worked with, but we
didn’t know if it would work. I was incredibly confident it would work. I
was betting a lot on it. We had time. We had resources. We had done what
we thought would work, and it still could have broken. Something could
have happened.”
In fact, the day after one October game day, Amazon Web Services—on which the whole campaign’s tech presence was built—went down.
“We didn’t have any downtime, because we had done that scenario
already,” Reed said. Hurricane Sandy hit on another game day in late October,
threatening the campaign’s entire East Coast infrastructure. “We created a
hot backup of all our applications to U.S.-West in preparation for U.S.-East
to go down hard,” Reed said.
“We knew what to do,” Reed maintained, no matter what the scenario
was. “We had a runbook that said if this happens, you do this, this, and this.
They did not do that with Orca.”
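The campaign’s runbooks were never published, but the flavor of “if this happens, you do this” is easy to sketch. Here is a hypothetical health check, with invented endpoints, of the sort a game-day script might run before repointing traffic at the hot backup:

```python
import requests

PRIMARY = "https://us-east.example-campaign.org/health"  # invented endpoints
STANDBY = "https://us-west.example-campaign.org/health"

def healthy(url, timeout=3):
    # A region counts as up if its health endpoint answers 200 quickly.
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

def choose_region():
    if healthy(PRIMARY):
        return "us-east"
    if healthy(STANDBY):
        return "us-west"  # runbook step: repoint DNS at the hot backup
    raise RuntimeError("both regions down: page the on-call engineer")

print(choose_region())
```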
THE NEW CHICAGO MACHINE vs. THE GRAND OLD PARTY
Orca was supposed to be the Republican answer to Obama’s perceived
tech advantage. It was going to allow volunteers at polling places to
update the Romney camp’s database of voters in real time, as people cast
their ballots. That would supposedly allow them to deploy resources more
efficiently and wring every last vote out of Florida, Ohio, and the other
battleground states. Orcas, a Romney spokesperson told PBS, are the
only known predator of the one-tusked narwhal.
The billing the Republicans gave the tool confused almost everyone
inside the Obama campaign. Narwhal wasn’t an app for a smartphone.
It was the architecture of the campaign’s sophisticated data operation.
Narwhal unified what Obama for America knew about voters, canvassers,
event-goers, and phone-bankers, and it did it in real time. Based on the
descriptions of the Romney camp’s software that were available then and
now, Orca was not even in the same category as Narwhal. Likening Orca to
Narhwal was like touting the iPad as a Facebook-killer, or comparing a GPS
device to an engine. And besides, in the scheme of a campaign, a digitized
strike list is cool, but it’s not, like, a game changer. It’s just a nice thing to
have.
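Narwhal’s code has likewise never been released, but the idea it embodied (one continuously updated record per person, shared by every tool) can be illustrated with a toy sketch; every name here is invented:

```python
from dataclasses import dataclass, field

@dataclass
class PersonRecord:
    # One record per voter, merged across every system that touches them.
    voter_id: str
    emails: set = field(default_factory=set)
    donations: list = field(default_factory=list)         # from the finance system
    events_attended: list = field(default_factory=list)   # from field organizers
    calls_received: int = 0                               # from the phone-bank tool

class Narwhalish:
    # A toy "unified platform": every tool reads and writes the same store.
    def __init__(self):
        self.people = {}

    def record(self, voter_id):
        return self.people.setdefault(voter_id, PersonRecord(voter_id))

    def log_donation(self, voter_id, amount):
        self.record(voter_id).donations.append(amount)

    def log_event(self, voter_id, event):
        self.record(voter_id).events_attended.append(event)

store = Narwhalish()
store.log_donation("OH-12345", 25.00)
store.log_event("OH-12345", "Columbus rally")
print(store.record("OH-12345"))
```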
So it was with more than a hint of schadenfreude that Reed’s team
heard of Orca’s crash early on Election Day. Later, reports posted
by rank-and-file Romney volunteers describe chaos descending on
polling locations, as only a fraction of the tens of thousands of volunteers
organized for the effort were able to use Orca properly to turn out the vote.
Of course, they couldn’t snicker too loudly. Obama’s campaign had
created a similar app in 2008 called Houdini. As detailed in Sasha Issenberg’s groundbreaking book, The Victory Lab, Houdini’s rollout went great, until mid-morning Eastern on the day of the election. Then it
crashed in much the same way Orca did.
In 2010, Democrats had a new version, built by the vendor NGP VAN. It was called Gordon, after the man who killed Houdini. But the 2008 failure, among other needs, drove the 2012 Obama team to bring technologists in-house.
With Election Day fast approaching, they knew they could not go down.
And yet they had to accommodate much more strain on the systems as
interest in the election picked up toward the end, like it always does. Mark
Trammell, who worked for Twitter during its period of exponential growth,
thought it would have been easy for the Obama team to stumble into many
of the pitfalls that the social network had back then. But while the problems
of quickly scaling both technology and culture might have been similar, the
stakes were much higher. A fail whale (cough) in the days leading up to or
on November would have been neither charming nor funny. In a race
that at least some people thought might be very close, it could have cost
the president the election.
And of course, the team’s only real goal was to elect the president. “We
have to elect the president. We don’t need to sell our software to Oracle,”
Reed told his team. But the secondary impact of their success or failure
would be to prove that campaigns could effectively hire and deploy top-level programming talent. If they failed, it would be evidence that this stuff
might be best left to outside political technology consultants, who had long
dominated the arena. If Reed’s team succeeded, engineers might become
as enshrined in the mechanics of campaigns as social-media teams already
were.
We now know what happened. The grand technology experiment
worked. So little went wrong that Trammell and Reed even had time to
cook up a little pin to celebrate. It said YOLO, short for “You Only Live
Once,” with the Obama O’s.
When the Obama campaign’s chief, Jim Messina, signed off on hiring
Reed, he told him, “Welcome to the team. Don’t fuck it up.” As Election
Day ended and the dust settled, it was clear: Reed had not fucked it up.
The campaign had turned out more volunteers and gotten more donors
than in 2008. Sure, the field organization was more entrenched and
more experienced, but the difference stemmed in large part from better
technology. The tech team’s key products—Dashboard, the Call Tool, the
Facebook Blaster, the PeopleMatcher, and Narwhal—made engaging with
the president’s reelection effort simpler and easier for everyone.
But it wasn’t easy. Reed’s team came in as outsiders to the campaign
and, by most accounts, remained that way. The divisions among the tech,
digital, and analytics teams were never quite resolved, even if the end
product has salved the sore spots that developed over the stressful months.
At their worst, early in the campaign, the cultural differences between tech and
everybody else threatened to derail the whole grand experiment.
By the end, the campaign produced exactly what it should have: a
hybrid of the desires of everyone on Obama’s team. Obama for America
raised hundreds of millions of dollars online, made unprecedented progress
in voter targeting, and built everything atop the most stable technical
infrastructure of any previous (or simultaneous) presidential campaign. To
go a step further, I’d even say that this clash of cultures was a good thing:
the nerds shook up an ossifying Democratic tech structure, and the politicos
taught the nerds a thing or two about stress, small-p politics, and the
significance of elections.
YOLO: MEET THE OBAMA CAMPAIGN’S CTO
If you’re a nerd, Harper Reed is an easy guy to like. He’s brash and
funny and smart. He gets you and where you came from. He, too, played
with computers when they weren’t cool, and learned to code, because he
just could not help himself. You could call out nouns, phenomena, and
he’d be right there with you: BBS, warez, self-organizing systems, Rails,
the quantified self, singularity. He wrote his first programs as a young kid, games
that his mom typed into their Apple IIC. He, too, has a memory that all
nerds share: late at night, light from a chunky monitor illuminating his
face, fingers flying across a keyboard, he figured something out.
TV-news segments about cybersecurity might look as though they’ve
been lifted straight from Reed’s memories, but the B-roll of darkened rooms
and typing hands cannot convey the sense of exhilaration he felt when
he built something that works. Harper Reed got the city of Chicago to
part three. Reportage
create an open and real-time feed of its transit data by reverse-engineering
how it served bus-location information. Why? Because this made the work
commute for his wife, Hiromi, a little easier. Because it was fun to extract
data from the bureaucracy and make that information available to anyone
who wanted it. Because he is a nerd.
Yet Reed has friends like the manager of the hip-hop club Empire, who,
when we walk into the place early on the Friday after the election, says,
“Let me grab you a shot.” Surprisingly, Harper Reed is a chilled-vodka
kind of guy. Unsurprisingly, Harper Reed read Steven Levy’s Hackers as
a kid. Surprisingly, the manager, who is tall and handsome, with rock-and-roll hair flowing from beneath a red beanie, returns to show Harper
photographs of his kids. They’ve known each other for a long while. The
kids are really growing up.
As the night rolls on, and the club starts to fill up, another friend
approaches us: DJ Hiroki, who is spinning tonight. Harper Reed knows the
DJ. Of course. Hiroki grabs us another shot. (At this point I’m thinking,
By the end of the night, either I pass out or Reed tells me something good.)
Hiroki’s been DJing at Empire for years, since Harper Reed was the crazy
guy in his public Facebook photos. In one old shot, a skinny
Reed sits in a bathtub with a beer in his hand, two thick band tattoos
running across his chest and shoulders. He is not wearing any clothes. The
caption reads, “I was cold.—with Stop staring, it’s not there i swear! in
Chicago, IL.” What makes Harper Reed different isn’t just that the photo
exists, but that he kept it public during the election.
Yet if you’ve spent a lot of time around tech people, around Burning
Man devotees, around start-ups, around San Francisco, around BBSs,
around Reddit, Harper Reed probably makes sense to you. He’s a cool
hacker. He gets profiled by Mother Jones even though he couldn’t
talk with Tim Murphy, their reporter. He supports open source. He likes
Japan. He says fuck a lot. He goes to hipster bars that serve vegan Mexican
food, and where a quarter of the staff and clientele have mustaches.
He may be like you, but he also juggles better than you, and is wilder
than you, more fun than you, cooler than you. He’s what a king of the
nerds really looks like. Sure, he might grow a beard and a little potbelly,
but he wouldn’t tuck in his T-shirt. He is not that kind of nerd. Instead, he’s
When the Nerds Go Marching In
got plugs in his ears and a shock of gloriously product-mussed hair and
hipster glasses. And he doesn’t own a long-sleeve dress shirt, in case you
were wondering.
“Harper is an easy guy to underestimate, because he looks funny. That
might be part of his brand,” said Chris Sacca, a well-known Silicon Valley
venture capitalist and major Obama bundler who brought a team of more
than a dozen technologists out for an Obama-campaign hack day.
Reed, for his part, has the kind of self-awareness that faces outward.
His self-announced flaws bristle like quills. “I always look like a fucking
idiot,” Reed told me. “And if you look like an asshole, you have to be really
good.”
This was a lesson he learned early in Greeley, Colorado, where he grew
up. “I had this experience where my dad hired someone to help him out
because his network was messed up, and he wanted me to watch. And this
was at a very unfortunate time in my life where I was wearing very baggy
pants and I had a Marilyn Manson shirt on and I looked like an asshole. And
my father took me aside and was like, ‘Why do you look like an asshole?’
And I was like, ‘I don’t know. I don’t have an answer.’ But I realized I was
just as good as the guys fixing it,” Reed recalled. “And they didn’t look like
me and I didn’t look like them. And if I’m going to do this and look like an
idiot, I have to step up. Like, if we’re all at zero, I have to be at ten, because
I have this stupid mustache.”
And in fact, he may actually be at eleven. Sacca said that with technical
people, it’s one thing to look at their résumés and another to see how they
are viewed among their peers. “And it was amazing how many incredibly
well-regarded hackers that I follow on Twitter rejoiced and celebrated
[when Reed was hired],” Sacca said. “Lots of guys who know how to spit
out code, they really bought that.”
When Sacca brought his Silicon Valley contingent out to Chicago, he
called the technical team “top notch.” After all, we’re talking about a group
of people who had Eric Schmidt sitting in with them on Election Day. You
had to be good. The tech world was watching.
Terry Howerton, the head of the Illinois Technology Association and
a frank observer of Chicago’s tech scene, had only glowing things to say
about Reed. “Harper Reed? I think he’s wicked smart. He knows how to
part three. Reportage
pull people together. I think that was probably what attracted the rest of
the people there. Harper is responsible for a huge percentage of the people
who went over there.”
Reed’s own team found their co-workers particularly impressive. One
testament to that: several start-ups might spin out of the connections people
made at the OFA headquarters, not unlike Optimizely, a tool for Web-site
A/B testing, which spun out of Obama’s
bid. (Sacca’s an investor in
that one.)
“A CTO role is a weird thing,” said Carol Davidsen, who left Microsoft
to become the product manager for Narwhal. “The primary responsibility
is getting good engineers. And there really was no one else like him that
could have assembled these people that quickly and get them to take a pay
cut and move to Chicago.”
And yet, the very things that make Reed an interesting and beloved
person are the same things that make him an unlikely pick to become
the chief technology officer for the reelection campaign of the president of
the United States. Political people wear khakis. They only own long-sleeve
dress shirts. Their old photos on Facebook show them canvassing for local
politicians and winning cross-country meets.
I asked Michael Slaby, Obama’s 2008 CTO, and the guy who hired
Harper Reed this time around, if it wasn’t risky to hire this wild guy. “It’s
funny to hear you call it risky. It seems obvious to me,” Slaby said. “It seems
crazy to hire someone like me as CTO when you could have someone like
Harper as CTO.”
THE NERDS ARE INSIDE THE BUILDING
The strange truth is that campaigns have long been low-technologist, if
not low-technology, affairs. Think of them as a weird kind of niche start-up,
and you can see why. You have very little time, maybe a year, really. You
can’t afford to pay very much. The job security, by design, is nonexistent.
And even though you need to build a massive “customer” base and develop
the infrastructure to get money and votes from them, no one gets to exit and
make a bunch of money. So campaign tech has been dominated by people
who care about the politics of the thing, not the technology of the thing.
The Web sites might have looked like solid consumer Web applications, but
under the hood, they weren’t.
For all the hoopla surrounding the digital savvy of President Obama’s 2008
campaign, and as much as everyone I spoke with loved it, it was not
as heavily digital or technological as it is now remembered to be. “Facebook
was about one-tenth of the size that it is now. Twitter was a nothingburger for the campaign. It wasn’t a core or even peripheral part of our
strategy,” said Teddy Goff, the digital director of Obama for America and
a veteran of both campaigns. Think about the killer tool of that campaign,
my.barackobama.com—it borrowed the my from MySpace.
Sure, the ’08 campaign had Facebook co-founder Chris Hughes, but
Hughes was the spokesperson, not the technical guy. The ’08 campaigners,
Slaby told me, had been “opportunistic users of technology” who “brute-forced and baling-wired” different pieces of software together. Things
worked (most of the time), but everyone, Slaby especially, knew that they
needed a more stable platform for 2012.
Campaigns, however—even Howard Dean’s famous Internet-enabled 2004 run at the Democratic nomination—did not hire a bunch of
technologists. They hired a couple, like Clay Johnson, but mostly they
bought technology from outside consultants. For 2012, Slaby wanted to
change all that. He wanted dozens of engineers in-house, and he got them.
“The real innovation in 2012 is that we had world-class technologists
inside a campaign,” Slaby told me. “The traditional technology stuff inside
campaigns had not been at the same level.” And yet the technologists, no
matter how good they were, brought a different worldview, a different set
of personalities, and different expectations.
A campaign is not just another Fortune 500 company or top-10 Web
site. It has its own culture and demands, strange rigors and schedules. The
deadlines are hard, and the pressure would be enough to press the T-shirt
of even the most battle-tested start-up veteran.
To really understand what happened behind the scenes at the Obama
campaign, you need to know a bit about its organizational structure.
Tech was Harper Reed’s domain. “Digital” was Joe Rospars’ kingdom;
his team was composed of the people who sent you all those e-mails,
designed some of the consumer-facing pieces of BarackObama.com, and
ran the campaign’s most-excellent accounts on Facebook, Twitter, Tumblr,
YouTube, and the like. Analytics was run by Dan Wagner, and his team
was responsible for coming up with ways of finding and targeting voters
they could convince or turn out. Jeremy Bird ran Field, the on-the-ground
operations of organizing voters at the community level, which many
consider Obama’s secret sauce. The tech for the campaign was
supposed to help the field, analytics, and digital teams do their jobs better.
In a campaign—at least in this campaign, but perhaps in any successful
campaign—tech has to play a supporting role. The goal was not to build
a product. The goal was to reelect the president. As Reed put it, if the
campaign were Moneyball, he wouldn’t be Billy Beane; he’d be “Google
Boy.”
There’s one other interesting component of the campaign’s structure.
And that’s the presence of two big tech vendors interfacing with the various
teams—Blue State Digital and NGP VAN. The most obvious is the firm that
Rospars, Jascha Franklin-Hodge, and Clay Johnson co-founded, Blue State
Digital. It’s the preeminent progressive digital agency, and a decent chunk
of its business comes from providing technology to
campaigns. Of course, BSD’s biggest client is the Obama campaign, and
has been for some time. BSD and Obama for America were and are so
deeply enmeshed, it would be difficult to say where one ended and the other
began. After all, both Goff and Rospars, the company’s principals, were
paid staffers of the Obama campaign. And yet between the 2008 and 2012 campaigns, BSD
was purchased by WPP, one of the largest ad agencies in the world. What
had been an obviously progressive organization was now owned by a huge
conglomerate and had clients that weren’t other Democratic politicians.
One other thing to know about Rospars, specifically: “He’s the Karl
Rove of the Internet,” someone who knows him very well told me. What
Rove was to direct mail—the undisputed king of the medium—Rospars is to
e-mail. He and Goff are the brains behind Obama’s unprecedented online
fund-raising efforts. They knew what they were doing, and had proven that
time and again.
The complex relationship between BSD and the Obama campaign adds
another dimension to the introduction of an inside team of technologists.
If all campaigns started bringing their technology in-house, perhaps BSD’s
tech business would begin to seem less attractive, particularly if many of
the tools that such an inside team created were developed as open-source
products.
So perhaps the tech team was bound to butt heads with Rospars’s
digital squad. Slaby noted, too, that the organizational styles of the
two operations were very different. “Campaigns aren’t traditionally that
collaborative,” he said. “Departments tend to be freestanding. They are
organized kind of like disaster response—siloed and super hierarchical so
that things can move very quickly.”
Looking at it all from the outside, both the digital and tech teams had
really good, mission-oriented reasons for wanting their way to carry the
day. The tech team could say, “Hey, we’ve done this kind of tech before at
larger scale and with more stability than you’ve ever had. Let us do this.”
And the digital team could say, “Yeah, well, we elected the president and
we know how to win, regardless of the technology stack. Just make what
we ask for.”
The conflict played out over things like user experience, or UX, on the
Web site. Jason Kunesh was the director of UX for the tech team. He had
many years of consulting under his belt for big and small companies alike,
including Microsoft and LeapFrog. He, too, from an industry perspective,
knew what he was doing. So he ran some user-interrupt tests on the Web
site to determine how people were experiencing www.barackobama.com.
What he found was that the Web site wasn’t even trying to make a go
at convincing voters. Rather, everyone got funneled into the fund-raising
“trap.” When he raised that issue with Goff and Rospars, he got a response
that I imagine was something like “Duh. Now STFU,” but perhaps in more
words. Think about it from the Goff/Rospars perspective: the system they’d
developed could raise millions of dollars with a single e-mail. The sorts of moves
they had learned had made a difference of tens, if not hundreds, of millions
of dollars. Why was this Kunesh guy coming around trying to tell them
how to run a campaign?
From Kunesh’s perspective, though, there was no reason to think that
you had to run this campaign the same way you ran the last one. The
outsider status that the team both adopted and had applied to them gave
them the right to question previous practices.
Tech sometimes had difficulty building what the field team, a hallowed
group within the campaign’s world, wanted. Most of that related to the
way Dashboard, the online outreach tool, was launched. If you looked at
Dashboard at the end of the campaign, you saw a beautifully polished
product that let you volunteer any way you wanted. It was secure and
intuitive and had tremendously good uptime as the campaign drew to a
close.
But that wasn’t how the first version of Dashboard looked.
The tech team’s plan was to roll out version 1 with a limited feature set,
iterate, roll out version 2, iterate, and so on and so forth until the software
was complete and bulletproof. Per Kunesh’s telling, the field people were
used to software that looked complete but that was unreliable under the
hood. It looked as if you could do the things you needed to do, but
the software would keep falling down and getting patched, falling down
and getting patched, all the way through a campaign. The tech team did
not want that. They might be slower, but they were going to build solid
products.
Reed’s team began to trickle into Chicago in May of 2011. They promised, overoptimistically, that they’d release a version of
Dashboard just a few months after their arrival. The first version was not
impressive. “August 2011, my birthday, we were supposed to have a
prototype out of Dashboard that was going to be the public launch,” Kunesh
told me. “It was freaking horrible. You couldn’t show it to anyone. But I’d
only been there a few weeks, and most of the team had been there half that
time.”
As the tech team struggled to translate what people wanted into usable
software, trust in the tech team—already shaky—kept eroding. By February
of 2012, Kunesh started to get word that people on both the digital and field
teams had agitated to pull the plug on Dashboard and replace the tech team
with somebody, anybody, else.
“A lot of the software is kind of late. It’s looking ugly, and I go out on
this field call,” Kunesh remembered. “And people are like, ‘Man, we should
fire your bosses, man . . . We gotta get the guys from the DNC. They don’t
know what the hell you’re doing.’ I’m sitting there going, ‘I’m gonna get
another margarita.’”
While the tech team was certainly responsible for the early struggles,
there were mitigating factors. For one, nobody had ever done what they
were attempting to do. Narwhal had to connect to a bunch of different
vendors’ software, some of which turned out to be surprisingly arcane and
difficult. Not only that, but there were differences between the way field
offices in some states did things and how campaign headquarters thought
they did things. Tech wasted time building things that people didn’t need
or want.
“In the movie version of the campaign, there’s probably a meeting
where I’m about to get fired and I throw myself on the table,” Slaby told
me. But what actually happened was that Obama’s campaign chief Jim
Messina would come by Slaby’s desk and tell him, “Dude, this has to work.”
And Slaby would respond, “I know. It will,” and then go back to work.
In fact, some shake-ups were necessary. Reed and Slaby sent some
product managers packing and brought in more-traditional ones, like the
former Microsoft project manager Carol Davidsen. “You very much have
to understand the campaign’s hiring strategy: ‘We’ll hire these product
managers who have campaign experience, then hire engineers who have
technical experience—and these two worlds will magically come together.’
That failed,” Davidsen said. “Those two groups of people couldn’t talk to
each other.”
Then, in the late spring, all the products that the tech team had been
promising started to show up. Dashboard got solid. You didn’t have to log
in a bunch of times if you wanted to do different things on the Web site.
Other, smaller products rolled out. “The stuff we told you about for a year
is actually happening,” Kunesh recalled telling the field team. “You’re going
to have one log-in and have all these tools and it’s all just gonna work.”
Perhaps most important, Narwhal really got on track, thanks no doubt
to Davidsen’s efforts, as well as Josh Thayer’s, the lead engineer, who
arrived in April. Narwhal fixed a problem that had long plagued campaigns.
You have all this data coming in from all these places—the voter file, various
field offices, the analytics people, the Web site, mobile stuff. In 2008, and all
previous races, the numbers changed once a day. It wasn’t real-time. And
the people looking to hit their numbers in various ways out in the field
offices—number of volunteers and dollars raised and voters convinced—
were used to seeing that update happen in that fashion.
But from an infrastructure level, how much better would it be if you
could sync that data in real time across the entire campaign? That’s what
Narwhal was supposed to do. Davidsen, in true product-manager form,
broke down precisely how it all worked. First, she said, Narwhal wasn’t
one thing, but several. Narwhal was just an initially helpful brand for the
bundle of software.
In reality, it had three components. “One is vendor integration: BSD,
NGP, VAN [the latter two companies have since merged]. Just getting all of
that data into the system and getting it in real time as soon as it goes in one
system to another,” she said. “The second part is an API portion. You don’t
want a million consumers getting data via SQL.” The API, or application
programming interface, allowed people to access parts of the data without
letting them get at the SQL database on the back end. It provided a safe
way for Dashboard, the Call Tool (which helped people make calls), and
the Twitter Blaster to pull data. And the last part was the presentation of
the data that were in the system. The dream had been for all applications
to run through Narwhal in real time, but that turned out to be infeasible.
So the tech team split things into real-time applications like the Call Tool.
And then they provided a separate way for the analytics people, who had
very specific needs, to get the data in a different form. Then whatever they
came up with was fed back into Narwhal.
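For readers who want the shape of that in code, here is a minimal sketch, in Python, of the pattern Davidsen describes: an API layer that hands each tool the slice of data it needs while keeping the SQL store itself off-limits. Every name below (VoterAPI, call_list, the toy schema) is invented for illustration; this is not the campaign’s actual code.

import sqlite3

class VoterAPI:
    """A thin read/write layer over the shared store; consumers never see SQL."""

    def __init__(self, db_path=":memory:"):
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS voters ("
            "id INTEGER PRIMARY KEY, name TEXT, state TEXT, "
            "phone TEXT, contacted INTEGER DEFAULT 0)"
        )

    def ingest(self, records):
        # The vendor-integration side: normalized records arriving from many
        # sources land in one shared store as soon as they come in.
        self._db.executemany(
            "INSERT INTO voters (name, state, phone) VALUES (?, ?, ?)", records
        )
        self._db.commit()

    def call_list(self, state, limit=10):
        # What a consumer like the Call Tool would request: only the fields
        # it needs, never a raw database handle.
        rows = self._db.execute(
            "SELECT id, name, phone FROM voters "
            "WHERE state = ? AND contacted = 0 LIMIT ?", (state, limit)
        )
        return [{"id": r[0], "name": r[1], "phone": r[2]} for r in rows]

    def mark_contacted(self, voter_id):
        # Results flow back through the same layer, keeping every consumer in sync.
        self._db.execute("UPDATE voters SET contacted = 1 WHERE id = ?", (voter_id,))
        self._db.commit()

api = VoterAPI()
api.ingest([("A. Voter", "OH", "555-0100"), ("B. Voter", "OH", "555-0101")])
for voter in api.call_list("OH"):
    api.mark_contacted(voter["id"])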
By the end, Davidsen thought all the teams’ relationships had improved,
even before Obama’s big win. She credited a weekly Wednesday drinking
and hanging-out session that brought together all the various people
working on the campaign’s technology. By the very end, some front-end
designers from the digital team had embedded with the tech squad to get
work done faster. Tech might not have been fully integrated, but it was
fully operational. High-fives were in the air.
Slaby, with typical pragmatism, put it like this: “Our supporters don’t
give a shit about our org chart. They just want to have a meaningful
experience. We promise them they can play a meaningful role in politics,
and they don’t care about the departments in the campaign. So we have to
do the work on our side to look integrated and have our shit together. That
took some time. You have to develop new trust with people.” He added, “It’s
just change management. It’s not complicated; it’s just hard.”
WHAT THEY ACTUALLY BUILT
Of course, the tech didn’t exist for its own sake. It was meant to be
used by the organizers in the field and the analysts in the lab. Let’s just
run through some of the things that actually got accomplished by the tech,
digital, and analytics teams beyond Narwhal and Dashboard.
They created the most sophisticated e-mail fund-raising program
ever. The digital team, under Rospars’s leadership, took their data-driven
strategy to a new level. Any time you received an e-mail from the Obama
campaign, it had been tested on smaller groups and the response rates
had been gauged. The campaign thought all the letters had a good chance
of succeeding, but the worst-performing letters delivered only a fraction
of what the best-performing e-mails could. So if a good performer
could bring in millions of dollars, a poor performer might net only a few hundred thousand. The
genius of the campaign was that it learned to stop sending poor performers.
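That logic is simple enough to fit in a few lines. Here is a toy version in Python: send each draft to a small random sample, measure dollars per recipient, and send only the winner to everyone else. The drafts, response rates, and numbers are all invented; the campaign’s real pipeline was far more elaborate.

import random

def simulated_response(draft_strength):
    # Stand-in for a real recipient: most people give nothing, and stronger
    # drafts pull in more money from the few who do give.
    return draft_strength * random.uniform(25, 100) if random.random() < 0.02 else 0.0

def pick_winner(drafts, sample_size=2000):
    dollars_per_recipient = {}
    for name, strength in drafts.items():
        raised = sum(simulated_response(strength) for _ in range(sample_size))
        dollars_per_recipient[name] = raised / sample_size
    # The whole trick: poor performers never get sent to the full list.
    return max(dollars_per_recipient, key=dollars_per_recipient.get), dollars_per_recipient

winner, scores = pick_winner({"Draft A": 1.0, "Draft B": 2.5, "Draft C": 1.4})
print("send to full list:", winner, scores)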
Obama became the first presidential candidate to appear on Reddit, the
massively popular social-networking site. And yes, he really did type in
his own answers during that Ask Me Anything, with Goff at his side. One
fascinating outcome of the AMA is that some 30,000 Redditors registered to vote
after the president dropped in a link to his campaign’s voter-registration
page. Oh, and the campaign also officially has the most tweeted tweet and
the most popular Facebook post. Not bad. I would also note that Laura
Olin, a former strategist at Blue State Digital who moved to the Obama
campaign, ran the best campaign Tumblr the world will probably ever see.
With Davidsen’s help, the analytics team built a tool they called the
Optimizer, which allowed the campaign to buy eyeballs on television more
cheaply. They took set-top box (that is to say, your cable or satellite box or
DVR) data from Davidsen’s old start-up, Navic Networks, and correlated it
with the campaign’s own data. This occurred through a third party called
Epsilon: the campaign sent its voter file, and the television provider sent
their billing file, and boom, a list came back of people who had done certain
things—say, watched the first presidential debate. Having that data allowed
the campaign to buy ads that it knew would get in front of the most people
at the lowest cost. It didn’t have to buy the traditional stuff, like the local
news. Instead, it could run ads targeted to specific types of voters during
reruns or off-peak hours.
According to CMAG/Kantar, the Obama campaign’s cost per ad was
lower than the Romney campaign’s, or any other major buyer’s,
in the campaign cycle. That difference may not sound impressive, but the
Obama campaign itself aired more than 500,000 ads. And it wasn’t just
about cost, either. The campaign could see that some households were
watching only a couple hours of TV a day, and might be willing to spend
more to get in front of those harder-to-reach people.
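Strip away the vendors and the match-making middleman, and the Optimizer’s core idea reduces to a join and a sort: find which ad slots your target households actually watch, then rank the slots by cost per targeted viewer. The households, programs, and prices in this sketch are invented for illustration.

# Households the campaign wants to reach (from its own data) ...
targets = {"hh-17", "hh-23", "hh-42"}
# ... joined against set-top-box viewing data: slot -> households watching.
viewing = {
    "local news, 6 p.m.": {"hh-01", "hh-17"},
    "rerun, 1 a.m.": {"hh-17", "hh-23", "hh-42"},
    "debate recap, 11 p.m.": {"hh-23", "hh-42"},
}
slot_cost = {"local news, 6 p.m.": 2000.0, "rerun, 1 a.m.": 300.0, "debate recap, 11 p.m.": 800.0}

def rank_slots(targets, viewing, slot_cost):
    ranked = []
    for slot, households in viewing.items():
        reached = len(households & targets)  # the join: targets in this audience
        if reached:
            ranked.append((slot_cost[slot] / reached, slot, reached))
    return sorted(ranked)  # cheapest cost per targeted viewer first

for cost_per, slot, reached in rank_slots(targets, viewing, slot_cost):
    print(f"{slot}: {reached} targets at ${cost_per:,.0f} each")

Run it and the late-night rerun wins, which is exactly the article’s point: the cheap, overlooked slots can reach more of the right people per dollar than the local news.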
The digital, tech, and analytics teams worked to build Twitter and
Facebook Blasters. They ran on a service that generated microtargeting
data built by Will St. Clair. It was code named Taargüs for some reason.
For Twitter, one of the company’s former employees, Mark Trammell,
helped build a tool that could send individual users direct messages. “We
built an influence score for the people following the [Obama for America]
accounts, and then cross-referenced those for specific things we were trying
to target—battleground states, that sort of stuff.” Meanwhile, the teams
also built an opt-in Facebook outreach program that sent people messages
saying, essentially, “Your friend, Dave in Ohio, hasn’t voted yet. Go tell
him to vote.” Goff described the Facebook tool as “the most significant new
addition to the voter-contact arsenal that’s come around in years, since the
phone call.”
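Here is a rough sketch of what Trammell describes, with an invented scoring formula and made-up accounts: score the followers, cross-reference them against the traits being targeted—a battleground state, say—and queue direct messages for the most influential matches.

BATTLEGROUNDS = {"OH", "FL", "VA", "CO"}

followers = [  # made-up accounts following the campaign
    {"handle": "@a", "reach": 12000, "retweets": 340, "state": "OH"},
    {"handle": "@b", "reach": 800, "retweets": 15, "state": "CA"},
    {"handle": "@c", "reach": 5000, "retweets": 900, "state": "FL"},
]

def influence(f):
    # A toy influence score: raw reach, weighted up when the account's
    # tweets actually get echoed.
    return f["reach"] * 0.001 + f["retweets"] * 0.1

dm_queue = sorted(
    (f for f in followers if f["state"] in BATTLEGROUNDS),
    key=influence, reverse=True,
)
for f in dm_queue:
    print("DM", f["handle"], round(influence(f), 1))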
Last but certainly not least, you have the digital team’s Quick Donate.
It essentially brought the ease of Amazon’s one-click purchases to political
donations. “It’s the absolute epitome of how you can make it easy for
people to give money online,” Goff said. “In terms of fundraising, that’s
as innovative as we needed to be.” Storing people’s payment information
also let the campaign receive donations via text messages long before the
Federal Election Commission approved an official way of doing so. They
could simply text people who’d opted in a simple message like “Text back
with how much money you’d like to donate.” Sometimes people texted
much larger dollar amounts than the small fixed increments that mobile carriers
allow.
It’s an impressive array of accomplishments. What you can do with
data and code just keeps advancing. “After the last campaign, I got
introduced as the CTO of the most technically advanced campaign ever,”
Slaby said. “But that’s true of every CTO of every campaign every time.”
Or, rather, it’s true of one campaign CTO every time.
EXIT MUSIC
That next most technically advanced CTO, in 2016, will not be Harper
Reed. A few days after the election, sitting with his wife at Wicker Park’s
Handlebar, eating fish tacos and drinking a Daisy Cutter pale ale, Reed
looks happy. He told me earlier in the day that he’d never experienced
stress until the Obama campaign, and I believe him.
He regales us with stories about his old performance troupe, Jugglers
Against Homophobia; wild clubbing; and DJs. “It was this whole world of
having fun and living in the moment,” Reed says. “And there was a lot of
doing that on the Internet.”
“I spent a lot of time hacking, doing all this stuff, building Web sites,
building communities, working all the time,” Reed says, “and then a lot of
time drinking, partying, and hanging out. And I had to choose when to do
which.”
We leave Handlebar and make a quick pit stop at the coffee shop,
Wormhole, where he first met Slaby. Reed cracks that it’s like Reddit come
to life. Both of them remember the meeting the same way: Slaby playing the
role of square, Reed playing the role of hipster. And two minutes later, they
were ready to work together. What began many months ago in that very spot
is finally coming to an end. Reed can stop being Obama for America’s CTO
and return to being “Harper Reed, probably one of the coolest
guys ever,” as his personal Web page is titled.
But of course, he and his whole team of nerds were changed by the
experience. They learned what it was like to have—and work with people
who have—a higher purpose than building cool stuff. “Teddy [Goff] would
tear up talking about the president. I would be like, ‘Yeah, that guy’s cool,’”
Reed says. “It was only toward the end, the middle of 2012, when we
realized the gravity of what we were doing.”
Part of that process involved Reed, a technologist’s technologist, learning the limits of his own power. “I remember at one point basically breaking
down during the campaign, because I was losing control. The success of it
was out of my hands,” he tells me. “I felt like the people I hired were right,
the resources we argued for were right. And because of a stupid mistake,
or people were scared and they didn’t adopt the technology or whatever,
something could go awry. We could lose.”
And losing, Reed and his team felt more and more deeply as the
campaign went on, would mean horrible things for the country. They
started to worry about the next Supreme Court justices while they coded.
“There is the egoism of technologists. We do it because we can create. I
can handle all of the parameters going into the machine, and I know what
is going to come out of it,” Reed says. “In this, the control we all enjoyed
about technology was gone.”
We finish our drinks, ready for what is almost certainly going to be a
long night, and head to our first club. The last thing my recorder picks up
over the bass is me saying to Harper, “I just saw someone buy Hennessy.
I’ve never seen someone buy Hennessy.” Then, all I can hear is music.
part four. Essays
40.
Earth Station
THE AFTERLIFE OF TECHNOLOGY AT THE END OF THE WORLD
by Alexis C. Madrigal
Let me tell you about Jensen Camp, California. The homes of the trailer
park house the camp’s residents in a variety of ramshackle arrangements. By
most accounts, drugs and alcohol are a problem, but many who live there
are simply independent souls without much money. The water at the camp
has too much fluoride, so people’s teeth fall out, and kids’ bones break and
don’t heal. People buy bottled water when they can, but drink from the
tap when they have to. A few miles up the road is the Zen monastery of
Tassajara, where a sign has to remind visitors, “Life is transient.” Jensen
Camp is a few miles from Carmel-by-the-Sea, one of those coastal towns
where the average home price is more than $1 million.
Jensen Camp may be “wracked with drug and alcohol
problems, domestic abuse and unsafe living conditions,”
but it is more than its problems. A chef named Mike Jones set up shop next
to the Cachagua general store, and has kept a blog about the Camp’s
characters and his organic catering business since 2005. His stories are
full of food and family, guns and drugs, drinking and fighting, helping out
and being helped.
The Cachagua Valley is wild and beautiful: lichen hanging off trees
and wild turkeys running around doing whatever they do. Even radio
signals have a hard time penetrating the valley, which is one reason that,
less than a quarter mile from Jensen Camp, the Communications Satellite
Corporation and AT&T built the Jamesburg Earth Station. The Earth
Station is a massive dish-shaped receiver that was used for communication
with satellites perched over the Pacific Ocean for more than three decades.
It was thanks to Jamesburg that people saw the Apollo moon landing,
Richard Nixon’s trip to China, Vietnam War reporting, and
the Tiananmen Square demonstrations, not to mention tens of
thousands of more ordinary events. A Chinese delegation sent by the first
prime minister of China even visited Jamesburg, a milestone in the effort
to plug the world’s most populous country into the global communications
grid.
When we talk about the space program, we think of rockets and
command modules and astronauts and blinking satellites in the night sky.
But every piece of hardware in orbit requires far more infrastructure down
on the ground. Satellites are simple. Their only jobs are to stay in space
and bounce signals from one place to another; the real magic of satellite
communications occurs on the ground, in the detection, decoding, and
transmission of those electronic signals from orbit.
Yet while every NASA scrap and tin can is prized by collectors and
archived in museums, the history of people like John P. Scroggs, the
manager of the Jamesburg Earth Station, is almost unknown and
on the verge of being lost for good.
In fact, aside from a few references in old newspapers and a stack
of photos buried in the archives at Johns Hopkins University, the only
person who possesses interesting stuff from Jamesburg’s glory days turns
out to be Eric Lancaster, who I met underneath the canopy of oak trees
outside the Cachagua general store. He’d told me that he had “some
real documentation of Apollo trips: notes, signatures, serious dated stuff.”
Lancaster hinted that the documents might be very valuable, and they were
certainly the kind of thing I was looking for. He hadn’t scanned anything
and didn’t use the Internet, so we arranged a meeting, and my fiancee and
I drove to Cachagua.
Lancaster wore a black leather coat and a white shirt unbuttoned to
below his chest. He seemed nervous as he rose to shake hands with me.
Next to a small backpack, on top of a plastic chair, was a stack of mildewed
manila folders held together by rusting metal clips.
“I was thinking that we might be able to make a few bucks, maybe even
sell these to you guys,” Lancaster said.
I knew I wasn’t going to buy them, but I wanted to see what was inside
anyway.
***
The first trip to the moon was considered a technological triumph, and
rightly so. Traveling 240,000 miles, landing on another celestial body, and
returning to the Earth is no small feat. But the Apollo 11 mission might
have been the single most successful media event in history. Not only did
Neil Armstrong say, “One small step for man, one giant leap for mankind,”
but people across the globe saw him do so live. In the moments before
Armstrong actually stepped on the moon, the chatter between Buzz
Aldrin and Earth was not only about the moon itself, but about lunar
media production.
“You’ve got a good picture, huh?,” Aldrin asked as Armstrong began
his descent down the ladder. “There’s a great deal of contrast in it, and
currently, it’s upside-down on our monitor, but we can make out a fair
amount of detail,” Bruce McCandless confirmed from NASA’s command
post in Houston, before pointing out the correct aperture settings for the
camera. “Okay,” Aldrin replied, and Armstrong reached the foot of the
ladder.
At this moment, something unexpected happened. Apollo 11’s transmission was being captured by multiple tracking stations simultaneously.
Goldstone, in the Mojave Desert, had been expected to capture the
broadcast and send it on to Houston and the rest of the world. But the
best picture was actually coming from a tracking station in Australia called
Honeysuckle Creek, via the Moree Earth Station on that continent. So,
seconds before Armstrong touched the moon’s surface, NASA made an
on-the-fly switch to the Australian feed, which sent the broadcast up to a satellite and down to the Earth Station at Jamesburg, across
the street from the Cachagua general store, which at the time was also a
saloon. A few years ago, a local character named Grandma DeeDee told
a Monterey County Weekly reporter that in the old days, locals would
“ride horses in the bar and shoot pistols at the bartender’s feet.” Another
local, ne’er-do-well Grant Risdon, also reminisced about the hijinx
at the bar, fondly recalling a time “when the cops were afraid to come out
here, because their radios didn’t work on this side of the mountain. It was
the last stand for the outlaws.”
When The Christian Science Monitor visited the station the day before
the Apollo 11 broadcast, the reporter and his photographer would have
passed the store on their right, and then hung a left less than a quarter
of a mile down the road leading to the Earth Station. “It has taken man
thousands of years to reach the Moon, but it takes mere seconds for
a picture from the Moon to be distributed to millions of television viewers
on earth,” the story concluded.
So it was that the most glorious moment of the space program and
the most fantastic television broadcast ever ended up routing through
this corner of Central California. Countless other satellite broadcasts
from Asia would soon pass through here as well. The science writer Lee
Dye summarized Jamesburg’s role in a 1972 feature for the Los
Angeles Times: “Earth Station Jamesburg is the principal earth facility
that has permitted a worldwide audience to participate in history in the
making.”
The Jamesburg Earth Station was co-owned by the Communications
Satellite Corporation (Comsat) and AT&T. Dozens of similar ones
were built around the world to communicate with newly launched satellites.
The Earth stations were part of a system initiated by John F. Kennedy,
which created the International Telecommunications Satellite
Organization. Intelsat, as it was known, was co-owned by dozens of
nations, though it was basically controlled by the United States and its
Cold War allies.
Although NASA launched the satellites, Intelsat paid for them, passing
the costs on to the national corporations (like Comsat) that controlled the
Earth stations and finally on to the customers who wanted to use the
satellite links.
In the early 1970s, most of the traffic on the system was telephone
conversations; the Earth station could handle thousands of simultaneous conversations, along with television hookups. The satellite signal was faint, which
meant that it had to be amplified a lot. That, in turn, meant that Jamesburg
needed a massive HVAC system in order to keep the satellite receiver at
the ridiculously low temperature of –451 degrees Fahrenheit—just nine
degrees away from absolute zero. “If the temperatures were not kept low,”
Dye explained, “molecular activity would be so great that it would compete
with the weak signal.”
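Dye’s point can be made precise with the standard thermal-noise relation P = kTB—noise power equals Boltzmann’s constant times temperature times bandwidth. The formula and the transponder bandwidth below are textbook assumptions, not figures from the article:

k = 1.380649e-23   # Boltzmann's constant, joules per kelvin
bandwidth = 36e6   # a typical satellite-transponder bandwidth in hertz (assumed)

for label, kelvin in [("room-temperature receiver", 290.0), ("cryogenic receiver", 4.8)]:
    noise_watts = k * kelvin * bandwidth
    print(f"{label}: {noise_watts:.2e} W of thermal noise")

# Cooling from ~290 K to ~5 K cuts the receiver's own noise by a factor of
# about 60--the difference between burying a faint satellite signal and hearing it.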
Satellite communication is a triumph of 20th-century progress. It is
the connection point between NASA’s glorious space exploration and the
important but less dramatic telecommunications research at places like
AT&T’s Bell Labs. When Dye was writing, both of those tremendous
projects were bundled together in the 98-foot dish in a last stand for
outlaws. “The system is so complex and so futuristic that it boggles the
mind,” Dye wrote, “but nowhere is that more apparent than here in the
Cachagua Valley.”
***
In the early 1970s, dozens of people worked in the 21,000-square-foot main
building at the Jamesburg Earth Station. By the time I visited the place, it
was empty except for its owner, Jeffrey Bullis. When he bought the station,
Bullis did not intend to use it as a communications facility. He already had
a business to run, a contract electronics-manufacturing firm
in San Jose that had made him a considerable amount of money. Before
that, he’d worked for the Otis Elevator Company, and had been a welder, a
fleet manager, and a heavy equipment operator. This was supposed to be
his place to relax. “I just bought it for the land, really,” Bullis said. “It was
that kind of thing.”
The plan had been to build the Earth station into a wild house for Bullis
and his family, especially his son Adam, who loved to box, play guitar, ride
ATVs, and shoot guns. He seemed at home in the valley and liked to spend
time up there. Bullis had an architect draw up plans to redo the whole thing,
busting through some of the walls and dropping a big fireplace right in the
middle of the old operations room.
But then Adam was diagnosed with cancer, and succumbed to it. Suddenly, Jamesburg was not a happy place for
Jeffrey Bullis. Since then, he has been pondering selling it; he finally put it
on the market in January 2012. He’s asking $3 million for the Earth station
and all 160 acres surrounding it. A local TV station picked up the
story, and soon every nerdy corner of the Internet was talking
about it. For the first time since the 1970s, Jamesburg was famous! I
searched the Internet for more information. But almost everything that you
could find about Jamesburg had been created in the previous six weeks.
I wanted to tell the story of what this place actually was, so I called up
Bullis, and we met at Jamesburg on a Saturday morning.
There was a small, unassuming sign at the entrance to the property, and
a gate that looked like it had been left unlocked for my fiancee and me. We
drove through it under the eye of a surveillance camera. Bullis was waiting
for us at the small caretaker’s house at the bottom of the property. We all
shook hands.
Tall and solidly built, Bullis looked like the cross between an Idaho-born kid and an electronics millionaire that he is. He’s got big hands and
wore a fleece with southwestern-patterned epaulettes. When I hopped into
his car for the brief ride up the road to the satellite receiver, I instinctively
reached for my seat belt. “You’re not putting that on,” he informed me. No,
men do not wear seat belts on the playground that Bullis purchased from
AT&T in 2002.
We drove past some scattered cattle, just a few head that keep the
grass down, then curved up a hill. I took stock of what I knew about the
building. It is 21,000-plus square feet. The dish is 98 feet across and housed
in a building several stories tall. There is a massive HVAC system, backup
batteries, and room for generators. If the satellite were restored to working
order, it could receive communications from all over the place. Fourteen T1 lines run into the place. The concrete walls are two feet thick. Add in the
bucolic setting, the cows and orchard and river, and you think, This place
was designed for the post-apocalypse. Because it was.
Security, in these not-quite end times, was mostly incidental. “We shoot
our guns off often enough to where people don’t want to come up here,”
Bullis told me.
And suddenly, there it was, gleaming white. It cast a long shadow, just
like it does on Google Earth. Nothing about the shape or nature of the
satellite receiver would surprise anyone who has seen a DirecTV dish. But
the scale, the size: it’s inhuman. I ran around it and up the metal stairs,
looking out at the valley, thinking about the people who’d stood there
before, and how they were doing their part for the free world and science
and progress. In the photos my fiancee took of me at its base, I am almost
too small to see.
The rest of the Earth station seemed low-slung in comparison with the
massive dish, but it is not. The ceilings must be 20 feet high. Bullis led us
in, exhorting, “Grab a flashlight”—he kept the lights and heat off to avoid
astronomical energy bills. The first things I saw were the lockers of the men
who had worked there. They were empty, but I ran my flashlight inside
them anyway, thinking I’d find some traces of the workers. Nothing.
We walked down a corridor. My flashlight illuminated racks of lead
batteries to our left. Then the lights powered on with a sound I’d heard only
in empty gymnasiums—ka-chunk, and a hum. Fluorescents shone above us,
revealing the spareness of the space.
Bullis led us into the old break room. On the wall was a huge map that
plotted the various satellite–Earth station connections with pushpins and
yarn: blue yarn for the Pacific routes, yellow for the Atlantic information
trade, green atop the Indian Ocean. The rest of the room could have been
found in any office park in America.
Our next stop was the main operations room, which was as big as a
roller-skating rink. When Bullis bought it, the room had been “filled with
rack after rack after rack of electronics. Of course, it was all obsolete.”
Now the room is nearly empty, save for a pool table on a dirty patch of
carpet and a podium that looks perfect for giving a military briefing. Cords
dangle from the ceiling. Against the wall, a large whiteboard with beautiful
mid-century lettering says JAMESBURG OPERATIONS STATUS. Nothing
is written on the board except for a few inscrutable acronyms and a date in April. Behind the podium, a poster with a waving American flag on it
reads UNITED WE STAND. A chalkboard next to it features a drawing of
Beavis from Beavis and Butthead, as well as the score of a long-forgotten
darts game. Our voices echo on my recording.
“The kids come in here and rollerblade and have all kinds of fun,” Bullis
said, stooping to pick up a cigarette dropped by a careless pool player. We
pass by an exercise room with weights and multiple cardio machines. “My
son used to work out here all the time.”
A few offices were converted into bedrooms, but the rest of the building
is one huge, empty room after another. One housed all the old landline
telephone equipment. “We converted it into a shooting range, as you can
see,” Bullis said, gesturing toward a target in front of some hay bales down
at the end of the room. A basketball hoop hung to its left.
Passing through, we reached the famous system for chilling the satellite
receiver, which enabled broadcasts from the moon. It powered up with a
sound like a spaceship. The chillers still worked, as did an ancient laptop
that the last AT&T employee left behind. It was running DOS.
Finally, we reached a room filled with filing cabinets. The historian
in me lit up. Here we’d find records of how the place ran. I imagined
schedules with employees’ names and rosters with amounts of food and
fuel consumed. There’d be lists of the broadcasts that ran through the
station and photographs of important events, diaries even. Coffee rings
would show that humans once labored here, proudly. I would find all the
little details to transport me back to the time when this place was part of
our national project, and maybe in the smell of the carbon paper and the
blue ink of the signatures, I could sense what that time was like.
“What’s in there?,” I asked Bullis, pointing to the cabinets.
“Old stuff,” he said, and he was right.
I started pulling open cabinets and digging through files. But the more I
looked, the more I realized that I was looking through manuals for long-lost
electronics, directories of parts suppliers, and schematics of the building.
Much as I wanted them to exist, there were no people in these documents.
Their stories had been leached out.
Bullis drove us back down to the caretaker’s house at the bottom of
the property. That’s where he stays most of the time. I asked to use the
bathroom in the house, and as I came back out, I saw a photo of his son
on a small wooden table by the front door. When I shook his hand to say
goodbye, I said, “I’m sorry about your son.” He said, “Life goes on.”
All told, while Bullis has owned the station, whole dumpsters’ worth of
stuff have been pulled out of the Jamesburg Earth Station and sent to a
recycler. A local guy named Eric Lancaster was hired to do the demolition.
***
After our trip through the station itself, I realized that I wanted to talk
with someone who knew the Earth station when it was up and running.
The Internet, I already knew, returned nothing for this search. That left the
general store. We made the short drive from Bullis’s place, pulled into
the parking lot, and went inside. I found a woman named Liz behind the
counter of the small, surprisingly well-stocked store.
Liz is a mountain person. When I asked for her last name, she said,
“You don’t need my last name,” but not in an unfriendly way. She is strong and spry, country like an oak tree. We
started talking, and I said something about how strange it was that this
tiny little place at the end of the world had been a major node in the global
telecommunications grid.
She thought for a minute and then told me a remarkable story about
her relationship with technology during her years living up the
mountain a bit east of where we stood.
“I pretty much stayed on the mountain. There are no phone lines. There
is no electricity,” she said. “I have my iPhone and I can get 3G and I can get
what I want, and I have a little solar panel and propane and candles. I’ve
been off the grid forever. Now I have the small solar panel, and I can turn
on the light and charge my cellphone. I’m not used to it. My daughter tells
me, ‘You can plug things in!’ And I say, ‘I don’t have anything to plug in.’
Blow out the lights, not turn out the lights, is my thing.”
Her boss, the chef Michael Jones, filled in the rest of Liz’s story on his
blog (punctuation all his). “Liz lives in a trailer on the mountain with no
power and no water. . . two horses, a goat and two dogs. Cats don’t count.
She carries water in plastic buckets to the critters. . . .and to her own self,” he
wrote. “She pays child support to a scumbag in Missouri or one of those
other M states or square states. . . ..Her daughter that I know is an honor
student at Davis. . . . . . .Because she has no power or water, Liz hangs with
us after working her shift at The Store. We are her TV.”
For this couple of minutes, I was her TV, and she was mine. Did she
know anyone who worked at the Jamesburg Earth Station? “I knew a couple
of the people who worked there for a long time, and then some of them
have passed away,” she responded. “Gosh, and some of them are retired
and moved away. It was a good job to have if you were out here, because it
was close to home.”
How’d she end up living with no power? She and her man were
nomads, living out of their cars and taking in the natural beauty of the place.
People heard about them and kept asking them to take care of different
properties. So they did, and then she did it alone.
I could have talked a long time, but I didn’t want to overstay my
welcome. She said I should leave a note about my story, seeing as most
people who live around there pass through the store and sit out on the porch
chatting. She gave me a lined card and a pen and a pushpin, and we said
goodbye. Before I left, I saw that there were coffee mugs with “Jamesburg
Earth Station” written on them. I tried to buy one from her, but instead, she
gave it to me.
A few days later, I got a voicemail message.
“Hi, my name is Eric Lancaster. I found your note down at the general
store in Cachagua. I have some real documentation of Apollo trips, notes,
signatures—serious dated stuff. I worked up there and I have some stuff that
I kept. I have photos . . . the Apollo trips. Like, straight
notes from NASA, original stuff. Call me back.”
He gave his phone number with the last seven digits, then the area code.
He concluded, “I’m serious. I’m not fooling around. It’s the real-deal stuff
dating back all the way to ’69, with lots of information. Call me back.”
Lancaster sounded so young in his message. I’d been expecting an old
man. How could he have worked at Jamesburg with a voice so young? I
called him back.
As it turns out, Lancaster spent his whole life around Cachagua. As a
kid, he had heard the dish moving to keep fixed on its satellite. “We used
to climb up on it,” he said. “We used to feel it move.” Some people at Jensen
Camp thought that the satellite was nefarious. “One guy thought it caused
him not to be able to sleep in his house, so he put metal siding on,” he said.
“But it’s the water out here [that] affects the people, not the satellite.”
What I wondered was why—out of all the stuff that had gone in those
dumpsters—he’d decided to keep these few pieces of paper.
“They are neat. I thought, I better keep those. There are communications
between here and NASA,” Lancaster told me. “I have photos of when the
Republic of China came over here to visit in 1972. It talked about them
staying at the Holiday Inn.” Lancaster thought they might be important.
“There might be stuff in there that’s not supposed to get out to the public,”
he said.
And so it came to pass that we were standing under the huge oak
trees outside the general store, staring at four mildewed folders sitting on a
plastic chair right before that eventide moment when the golden yellow
light retreats and everything goes gray. I had my camera with me and
was hoping to photograph whatever was in the folders, so I was anxiously
watching the color bleed from the world.
There Lancaster and his girlfriend were, facing my fiancee and me, and
they had just asked us to buy the documents, and suddenly it felt like some
illicit deal, this meeting out in the middle of nowhere, as if we were being
handed some files we were going to pass off to the Russians before fleeing
to Czechoslovakia.
My fiancee, also a journalist, quickly explained that the standard ethics
of our profession prevented us from buying anything, but perhaps the story
we wrote would end up sparking interest in the documents from potential
buyers. I explained that I knew a space-memorabilia collector who might
be able to help them find a purchaser, too. Interest in Jamesburg might be
high, thanks to all the stories about the place floating around the Internet,
I said. They stared at me blankly, perhaps because they were disappointed,
perhaps because they did not know that millions of people had read about
the very bizarre home for sale only a few hundred yards from where we were standing.
Seizing the moment, I said I’d better take some photos quickly, before
the light got bad, and I opened up the first file folder.
***
Lancaster was right.
The files did contain several references to the Holiday Inn down
in Monterey. That’s where the Chinese Satellite Communications Study
Group stayed when it came to visit Jamesburg in July of 1972. The group
rode around the backcountry in a limousine, followed by John P. Scroggs
in a station wagon.
During the day, the Americans showed the Chinese delegation how the
Earth station operated, and at night, everyone had dinner together. They
ate fruit salad with honey dressing as well as salmon and abalone. One
afternoon, they had a BBQ. They drank Monterey Riesling and Coors. The
visit, along with the rest of the Chinese delegation’s trip, was a key event in
the opening-up of the Chinese communications system. George Sampson,
a former general and vice president at Comsat who coordinated the trip,
detailed how it all happened in a 1985 oral history.
It began with Nixon’s 1972 visit to China, which is widely considered a
landmark in global international relations. The visit was broadcast all over
the world—a Nixon assistant had sent Sampson over to set up the technical
infrastructure. While he was there, he built a relationship with Chinese
technologists and talked up joining Intelsat, the global satellite network.
He described how Earth stations worked and how the Chinese could set
up their own to communicate throughout their large country and with the
rest of the world. Satellite communication was of sufficient interest to the
Chinese that Chou En Lai, the first prime minister of the country, met with
Sampson in Washington, D.C.
Eventually, the Chinese sent a few people over to the U.S. to check
things out for themselves, and it was this group, led by the government’s
top long-distance-communications official, Liu Yuan, who arrived in Jamesburg that July and stayed at the Holiday Inn.
The letters and 8-by-10 photographs that serve as the record of that
visit are sitting in one of the folders on the plastic chair. Included is a letter
Sampson sent Scroggs, reminding him, among other things, not to “make
any reference to the opposite sex,” because “such remarks which might be
humorous to us are quite offensive to the Chinese.”
The decaying, overexposed photos are beautiful. I have two favorites. In
one, a young Chinese man talks into a telephone, presumably with someone
in China awaiting his call. The grin on his face looks so genuine: we are
seeing someone make a transoceanic call for the first time. Two smiling
Americans look on.
The other is more meta. It is nominally a photo of the combined group
of American and Chinese engineers posing in front of the Earth station.
But a Chinese photographer stepped into the shot, so it actually records the
recording of the event, and the satellite pointing west toward China.
As I frantically took photos of the old pictures, Lancaster’s girlfriend
read aloud from the Apollo documents that formed the rest of their
collection. Most of it consists of testing procedures and operations for the
various Apollo missions. These are work documents without much flavor.
But among the technical bits, we found a letter that Scroggs sent his staff.
It was a letter meant to be saved, a commemorative memo (!) from Apollo 11, given to employees as a souvenir.
If I had to guess, I’d say Lancaster has the last copy of that “souvenier”
left on Earth. I would also say that it is pretty much the only human trace
of what it was like to work at Jamesburg before it was demoted from our
national dreams, before the site and the people who worked there became
subject to the logic of a market that was immune to its sublime project.
Scroggs, I later found out, has since died and is buried at the El Carmelo
Cemetery, in ritzy Pacific Grove.
I finished taking photographs. There wasn’t much else to say. Lancaster
seemed a bit out of sorts, but also excited that I’d be writing about him. I
promised to mail him a printout of the story. I think saving the documents
he did and holding onto them for years was a kind of heroism, a tribute to
his country. He knew that these documents should not be thrown away, for
one reason or another. And if he can convert his act of preservation into a
few bucks, more power to him.
Lancaster and his girlfriend packed the files into the backpack and
walked back across the road and over the creek, the one that often floods
this whole area. In years past, when the water got too high, the Jamesburg
Earth Station had been Jensen’s emergency shelter.
***
A few years after the Chinese Satellite Communications Study Group
left Jamesburg, the counterculture icon Stewart Brand published a piece in
CoEvolution Quarterly by the physicist and space promoter Gerard O’Neill,
which proposed the idea of a self-sustaining space colony.
As O’Neill described it, the space colony would have been a utopia with
nice homes and beautiful flora and fauna. The colony could be modeled
on the most “desirable” places on Earth. “A section of the California coast
like Carmel could be easily fit within one of the ‘valleys’ of a Model III
Colony,” O’Neill explained. Paintings were even made of what that might
look like.
Many of Brand’s friends and colleagues derided the idea as an abandonment of the values of the counterculture. But one critique, by the solar
inventor Steve Baer, was more subtle and more damning. It got at the way
O’Neill tried to leave behind the inevitable grit of human life.
The project is spoken of as if it were as direct as . . . flinging
people into space. But I know that instead it consists of order forms, typewriters, carpets, offices, and bookkeepers; a frontier
for PhD’s, technicians and other obedient personnel.
Once on board, in my mind’s eye I don’t see the landscape
of Carmel-by-the-Sea as Gerard O’Neill suggests . . . Instead, I
see acres of air-conditioned Greyhound bus interior, glinting
slightly greasy railings, old rivet heads needing paint—I don’t
hear the surf at Carmel and smell the ocean—I hear piped
music and smell chewing gum. I anticipate a continuous vague
low-key “airplane fear.”
A space colony would not be like Carmel-by-the-Sea, but like Cachagua.
It would take a lot of Jensen Camps and Jamesburg Earth Stations to make
anything as grand as a space colony work. The area above the Earth might
be known as the heavens, but there would be no escaping being human.
No matter how glorious the triumph, humans have to grind through all
of it, scheduling meetings and making coffee, documenting and processing,
trimming and forgetting. No technology stands outside society.
In our technological narratives, progress advances like the tide, lifting
up everyone and everything. But we rarely look closely to see the unevenness of the diffusion of our inventions. In a poor valley somewhere a few
miles from Carmel, a satellite receiver took in pictures from the moon
during a time when locals still rode horses to the saloon. Technology may
move onward and upward, but everything retains its links to the old and
weird and human.
Jamesburg Earth Station is now known on the Internet as a “great
place for Armageddon” and also appears on my favorite coffee mug.
The building and the last remaining documents that testify to its importance are both for sale. This is what 20th-century dreams look like in the
21st century.
41.
Consider the Coat Hanger
A TWISTED PIECE OF WIRE ISN’T JUST A SYMBOL OF DANGEROUS ABORTIONS; IT’S A SYMBOL OF INEQUALITY.
by Rebecca J. Rosen
In the middle of the last century, a woman went to an abortionist. She had been raped and
now, pregnant, she sought his help.
As he prepared to perform the procedure, he said to her, “You can take
your pants down now, but you shoulda’—ha! ha!—kept ’em on before.”
For the service, he charged her a steep fee but, as Leslie Reagan recounts in
her essential book, When Abortion Was a Crime, offered to return part of it if
she would give him a “quick blow job.”
Degrading? Yes. Humiliating? Certainly. And also: expensive—very.
Contrast that scenario with some of the at-home remedies undertaken
by another woman, seemingly lacking the spare cash. “One woman,”
Reagan writes, “described taking ergotrate, then castor oil, then squatting
in scalding hot water, then drinking Everclear alcohol. When these methods
failed, she hammered at her stomach with a meat pulverizer before going
to an illegal abortionist.”
This was in the era when abortion was illegal in America. If you are
one of the roughly 160 million Americans born after 1973 (the
majority of the population), abortion has been legal all of your life, though,
depending on where you live and your resources, actually getting one may
not always be easy or even possible.
Earlier this week, Republican Party leaders, drafting the GOP’s official
position on abortion, proposed language that would make history of
the 39-year period since Roe v. Wade. They are calling for a “human-life amendment,” which, by extending the Fourteenth Amendment to
fetuses, would prohibit abortions entirely, even in cases of rape
or incest, or to save the life of the mother.
Within short order, the homepage of The Huffington Post
featured an image of a wire hanger. Soon, the Tumblr belonging to
Newsweek followed suit, a bit less elegantly, converting the cursor
of your mouse on their page into an image of a tiny coat
hanger (which, as many people pointed out, was not even the right kind
of hanger).
This simple tool is our shorthand for that earlier time, the time of illegal
abortions. If we’re going to pull it out of the closet—and, even more to
the point, if the Republicans are going to have a platform that earnestly
seeks to pull that legal regime out of its grave—we can’t do it flippantly.
I’m sympathetic to those who believe that abortion is legalized murder,
but to ban it outright would create victims too (especially if, as would in
all likelihood be the case, you do not simultaneously increase and ease
access to contraceptives and sex education). Who would those victims be?
We need to know what the hanger means.
We all think we know what the hanger means: dangerous, illegal
abortions. It is a tool of last resort, a hack of a household object, conjured
out of desperation when nothing else worked. That alone is significant,
because the most basic point, as Reagan and other historians have shown
over and over again, is that even in the age of illegal abortions, women still
had abortions—many, many abortions. Making something illegal doesn’t
make it disappear. Abortion, during the century of its criminalization, was
common, though its prevalence varied with the generations.
No official statistics were kept, but Reagan cites some late-19th-century
doctors who estimated that abortions numbered in the millions each year. One study,
of thousands of working-class women who visited birth-control clinics in
the late 1920s, found that a sizable share had had abortions. A smaller
study at a clinic in the Bronx in the early 1930s found that a substantial percentage
of women—Catholics, Protestants, and Jews alike—had had at least one
abortion. And of course, because abortions occurred mostly on the black
market, they were very dangerous: one estimate placed the annual
death toll in the thousands of women.
The numbers point to another lesson that can be drawn from the
period: criminalizing abortion did not convince Americans that abortion
was morally wrong. A physician at the time noted a “matter of fact attitude
[among] women of all ages and nationalities and every social status.”
Reagan writes, “The illegality of abortion has hidden the existence of an
unarticulated, alternative, popular morality, which supported women who
had abortions. This popular ethic contradicted the law, the official attitude
of the medical profession, and the teachings of some religions.”
So despite the law, abortion persisted. Public policy exists in words—on
the books, so to speak. But where it matters is where it is carried out: in city
apartments, doctor’s offices, women’s-health clinics, and, proverbially, back
alleys. To seriously consider the meaning of the hanger or, less abstractly,
the outcome of the Republican platform if realized, is to concern yourself
with that reality, with the lives of women who had unwanted pregnancies
during the century before Roe v. Wade.
That’s where the hanger comes in, because that’s what the hanger is
meant to stand for: unsafe back-alley abortions that left women dead. But
is that an accurate picture of the period?
Yes and no. Here’s another portrait of an abortion, this one taken from
an article written by a Mrs. X for the August 1965 Atlantic. Mrs. X
wrote:
My visit did a good deal to quell the panic which had been
building steadily in spite of my efforts at self-control. The office seemed orderly, the tools of the trade were neatly arrayed
in the glass cases dear to the hearts of the medical fraternity;
the doctor’s examination was brief and businesslike, and as
far as I could tell identical with those performed on me over
the years by obstetricians and gynecologists under different
circumstances. He explained in simple and understandable
terms exactly how he would perform the operation, how long
it would take, that it would be painful, but not intolerably so,
for a few minutes. (I gather that except for abortions done
in hospitals, anesthetics are almost never used. For obvious
reasons, these physicians work without assistance of any kind.
They are thus not equipped to deal with the possible ill effects
of anesthesia; nor can they keep patients in their offices for any
great length of time without arousing suspicion about their
practices.) The doctor I was consulting described precisely the
minimal aftereffects I might expect. We fixed a date at mutual
convenience a couple of days off for the operation.
The operation was successfully concluded as scheduled.
Forty-five minutes after I entered the doctor’s office for the
second time, I walked out, flagged a passing cab, and went
home. Admirably relaxed for the first time in two weeks, I
dozed over dinner, left the children to wash the dishes, and
dove into bed to sleep for twelve hours. The operation and
its aftereffects were exactly as described by the physician. For
some five minutes I suffered “discomfort” closely approximating the contractions of advanced labor. Within ten minutes
this pain subsided, and returned in the next four or five days
only as the sort of mild twinge which sometimes accompanies
a normal menstrual period. Bleeding was minimal.
No meat pulverizers, no hangers, minimal blood. And that’s because of
the crucial idea that the hanger symbolizes: the brunt of an abortion ban
is borne unevenly. The hanger does not merely symbolize the dangers of
illegal abortions; it symbolizes inequality.
That twisted piece of wire—like the meat pulverizer, Everclear alcohol, and God knows what else—was a hack, a tool repurposed because
the proper one was not accessible. Safe abortions were there for those
with the means to get them. But those with less privilege, less money,
fewer connections—blacks, Latinas, and lower-class whites, including many
Catholics—had the hacks.
Part of this was for the obvious reasons: the illegality of abortions drove
up costs, and those with more means could pay for better quality. But
other reasons were more subtle: Women with access to psychiatric care
could mimic symptoms to receive diagnoses that would pave the way for
“therapeutic” abortions (legal abortions provided in some states for health
reasons). Other times, as in the case of Mrs. X, privilege manifested itself
in a knowledgeable network of well-off friends, friends who were able to
recommend their own high-quality abortion providers.
Unfortunately for poorer women, sometimes their needs for abortions
were even more desperate than those who had better access to health care.
Reagan writes:
Poor women sought abortions because they were already
overburdened with household work and child care and each
additional child meant more work. A baby had to be nursed,
cuddled, and watched. A baby generated more laundry. Young
children required the preparation of special foods. Mothers
shouldered all of this additional work, though they expected
older children to pick up some of it. A new child represented
new household expenses for food and clothing. One twenty-two-year-old mother of three despaired when she
suspected another pregnancy. Her husband had tuberculosis
and could barely work. They had taken in his five orphaned
brothers and sisters, and she now cared for a family of ten. She
did “all the cooking, housework and sewing for all” and cared
for her baby too. The thought of one more made her “crazy,”
and she took drugs to bring on her “monthly sickness.”
Moreover, poorer women had worse access to birth control, meaning
that pregnancy was difficult to avoid. Middle-class couples, according to
Reagan, “could afford douches and condoms and had family physicians
who more readily provided middle-class women with diaphragms . . .
Even if poor women obtained contraceptives, the conditions in which
they lived made using those contraceptives difficult. For women living in
crowded tenements that lacked the privacy they might want when inserting
diaphragms and the running water they needed to clean the devices, using a
diaphragm would have meant another chore that only the most determined
could manage. For the poor, withdrawal was certainly a cheaper and more
accessible method, if the husband chose to use it.”
This illustrates an important point: just as access to the illegal
service of abortion was unequal, so too was access to perfectly legal
resources, such as birth control, sex ed, and health care. This continues
to be true today, a fact highlighted by recent Republican
efforts to allow health insurers and employers to exempt
contraceptives from their plans. Legally, women may have a right
to choose whether to abort an early unwanted pregnancy, and a right to
take birth control to prevent one, but for many women those choices are
elusive, constrained by the limits of their resources, social, financial, or
geographic. The bright line that runs between the twin spheres of legal and
illegal is not what makes something available or keeps it out of reach.
This is not to say that the future the Republican platform heralds is
in fact the past. Medical technology, record-keeping, and regulation are
all dramatically different now than they were even at the time of Roe.
Who knows how the changes of the past four decades would manifest in a
revived, and even more extreme, legal regime? But the basic lesson of this
sad history, the lesson of the hanger, remains unchanged: those with more
power suffer less, and those with less suffer more.
42.
Interview: Automated
Surveillance
KEVIN MACNISH ON SEEING TOO MUCH
by Ross Andersen
Over the past decade, video surveillance has exploded. In many cities,
we might as well have drones hovering overhead, given how closely
we’re being watched, perpetually, by the thousands of cameras perched
on buildings. So far, people’s inability to watch the millions of hours of
video has limited the practical uses of surveillance. But video is data, and
computers are being set to work mining that information on behalf of
governments and anyone else who can afford the software. And this kind
of automated surveillance is only going to get more sophisticated, thanks
to new technologies like iris scanners and gait analysis.
Yet little thought has been given to the ethics of perpetually recording
vast swaths of the world. What, exactly, are we getting ourselves into?
The New Aesthetic isn’t just a cool art project; machines really are
watching us, and they have their own way of seeing—they make mistakes
that humans don’t. Before automated surveillance reaches a critical mass,
we are going to have to think carefully about whether its security benefits
are worth the human costs it imposes. The ethical issues go beyond just
video; think about data surveillance, about algorithms that can mine your
financial history or your Internet searches for patterns that could suggest
you’re an aspiring terrorist. You’d want to be sure that a technology like
that was accurate.
Fortunately, our British friends are slightly ahead of the curve when
it comes to thinking through the dilemmas posed by ubiquitous electronic
surveillance. As a result of an interesting and contingent set of historical
circumstances, the British now live under the watchful eye of a massive
video surveillance system. British philosophers are starting to gaze back
at the CCTV cameras watching them, and they’re starting to demand
justification for the existence of those cameras. In a new paper called “The
Unblinking Eye: The Ethics of Automating Surveillance,” one
such philosopher, Kevin Macnish, argues that the political and cultural
costs of excessive surveillance could be so great that we ought to be as
hesitant about using it as we are about warfare. That is to say, we ought
to limit automated surveillance to those circumstances where we know it
to be extremely effective. I spoke to Macnish about his theory, and about
how technology is changing surveillance, for better and for worse.
I was thinking the other day that it’s curious that CCTV should
have bloomed in Britain, whose population we think of as being less
security-crazed than the population of the United States. Britain is more urban than America, but it can’t just be that, can it?
One interesting historical point—and I don’t think this clarifies the whole
thing, but it helps—is that most other Western countries have a recent
history of some form of dictatorship, the U.S. exempted. Most of Europe
was under a dictator or occupied by a dictatorship within living memory,
and so I think there is an awareness there about the dangers of government.
It’s possible that Britain might be a little bit more laissez-faire about
surveillance because we haven’t had that level of autocratic control since the 17th century. I think in America, while the history is a little bit different,
you have a very strong social consciousness about separation of powers
within the state, and between the state and the people. I think there is a
general suspicion of the state in America, which we often just don’t have
in the U.K.
Then you have to couple that with some very powerful images. In 1993, there was an infamous case of a two-year-old named James Bulger, who was kidnapped by two other children, who were themselves about 10 or 11. They
kidnapped him and then killed him in a very horrible way that mimicked a
murder from one of the Child’s Play films, which led to a massive reaction
against horror films and whatever else. At the time, there was a CCTV
image taken of the two boys picking up this toddler and walking off with
him while holding his hand. Ironically, the CCTV didn’t actually help with
solving the case. The police had already heard about the case of these two
boys and were already investigating them, but the image came across on
our TV screens and came into our newspapers, and it was really powerful.
That helped to sway people toward CCTV here. It hadn’t been thoroughly
researched at the time, and it was sort of suspected at a commonsense level
that it would help deter crime, and that it would detect and catch criminals,
and that it would be able to help to find lost children. So, the government
poured hundreds of millions of pounds into CCTV cameras all around
the country, and then retailers and businesses bought CCTV cameras for
their own security—it just took off. As a sociological study, it’s fascinating.
A lot of my American friends who come here feel really freaked out by the
amount of cameras we have, and with good reason.
What is automated surveillance? Where and how is it most commonly used? I know the Chinese have been developing a kind of gait
analysis, a way to identify people on video based on the length and
speed of their stride. In what other ways is this technology gathering
steam?
There are things like iris recognition; there are areas where people are
looking at parts of the face for identification purposes. There are all of
these ways that you can now automate the recognition of individuals, or the
intentions of individuals. You have a ton of research on these capabilities, in
the U.S. and China, especially, and as a result, these techniques are catching
on in a way that they weren’t five or ten years ago, when we didn’t yet have
the technology to implement them. We’ve had the artificial-intelligence
capabilities for a while—since the late ’90s, we’ve been able to write
programs that could recognize when a bag had been left by a particular
person in a public place. But we didn’t have the camera technology or
processing technology to roll it out.
Now you have digital cameras, and increased storage and processing capacity, and so you’re starting to see these really startling things happening
in automated surveillance.
What advantages does automated surveillance have over traditional, human-directed surveillance?
The problem with human surveillance is the humans. People get bored; they
look away. In many operation centers, there will be one person monitoring dozens of cameras, and that’s not a recipe for accuracy. Science has demonstrated that it’s possible for a person to be watching one screen and miss what’s happening on it, and so you can imagine watching a busy scene in a mall, with hundreds of people in it, or a field of many different screens—
you’re not going to be able to see what every single person does. You might
very well miss the person who puts their bag down and walks off, and
that bag might be the one containing the bomb. If you can automate that
process, then, in theory, you’re removing the weakest link in the chain, and
you’re saving a human being a lot of effort. The other problem with us
humans is that we tend to be subject to prejudices. As a result, we may
focus our attention on people we find attractive, or on people we think are
more likely to be terrorists or are more likely to be up to no good, and in
the meantime we might miss the target we’re supposed to be looking for.
And this doesn’t just happen with terrorists—it can happen with shoplifters
too.
On the other hand, we humans have common sense, which is something that computers lack and will probably always lack. For instance,
there are computer surveillance programs designed to recognize a person
bending down next to a car for a certain period of time, because this
is behavior associated with stealing cars. At the moment, the processing
capacity is such that a computer can recognize a person bending down by
a car and staying bent by a car for five seconds, at which point it will send
an alert. Now, if a human is watching a person bending down next to a car,
they will look to see if they’re bending down to pet their dog, or to tie a
shoelace, or because they’ve dropped their keys. The computer isn’t going
to know that.
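What the computer is doing in that car-park example is, at heart, a dwell-time threshold. Here is a minimal sketch of that rule in Python; the five-second figure echoes the one Macnish mentions, but the names and structure are invented for illustration, not drawn from any real deployed system:

```python
from dataclasses import dataclass

# Illustrative dwell-time rule: flag anyone who stays crouched near a
# car longer than a threshold. The five seconds echoes the figure above.
DWELL_THRESHOLD_S = 5.0

@dataclass
class TrackedPerson:
    person_id: int
    crouch_started_at: float | None = None  # when the current crouch began

def update(person: TrackedPerson, is_crouching: bool, now: float) -> bool:
    """Return True if this observation should raise an alert."""
    if not is_crouching:
        person.crouch_started_at = None  # posture reset; nothing to flag
        return False
    if person.crouch_started_at is None:
        person.crouch_started_at = now   # crouch just began
        return False
    # Note what is missing: any notion of *why* someone is crouching.
    # A dropped key ring and a car thief look identical to this rule,
    # which is exactly the "common sense" gap discussed above.
    return now - person.crouch_started_at >= DWELL_THRESHOLD_S
```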
In your paper, you describe how cultural standards can dictate
the way that people move through crowds. For instance, in Saudi
Arabia, people walk much slower than they do in London. And in some
cultures, people require less personal space than in others. Why are
those differences problematic for automated surveillance?
The particular automated surveillance I was looking at was designed to
measure the distance between people to determine whether or not they
were walking together. The theory behind it was that if you and I are
walking together through a train station and I put my bag down next to
you so that I could go off and get a newspaper or something like that,
then clearly the bag is not unattended. This is one of those cases where
a human being would instantly recognize that we are walking together
and that we are friends, and that the bag isn’t a danger. But the computer
wouldn’t recognize that we were friends. Instead, the computer would see
an unattended bag and it would send out an alert, and so when I come back
from getting my coffee, or my newspaper, I might find you swarmed by
security guards, guns drawn. The programmers behind this project were
trying to write software that could determine whether two people walking
in public are associated with each other in some way, and the way that they
did this was to use an algorithm called a “social-force model,” which looks
at how closely people are walking together, how far apart they are, how
they interact with nearby objects, and how people walking toward them
react to them. Those data points, together, can give you a determination
of whether or not people are associated in some way. But problems appear
when you consider that different cultural groups have different norms and
habits, and that the social and spatial parameters of middle-class white guys
in the West might be totally different from the social and spatial parameters
of two Indian women. There are all these subtle aspects and differences in
the ways people from different cultures interact, and there is very little
data on how people of different cultures, different sexes, and different ages
walk and act in public. Most of our data is drawn from Western, middle-class scenarios, things like universities or whatever. It’s not the deliberate
prejudice that you might see with a camera operator, who might focus on
Somalis or Arabs, or some other particular group, but its effects can be just
as bad.
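To make concrete where those cultural parameters enter, here is a toy sketch in the spirit of the social-force approach Macnish describes. It is not his model; the distance and heading thresholds are invented placeholders, and that is precisely the point: someone has to pick them, and the picking encodes one culture’s walking habits.

```python
import math

# Toy "are these two people walking together?" test. The thresholds
# below are invented placeholders; choosing them is where assumptions
# about personal space and walking pace sneak in.
COMFORT_DISTANCE_M = 1.2       # "typical" companion spacing -- of whom?
HEADING_TOLERANCE_RAD = 0.3    # how parallel two paths must be

def mean_distance(track_a, track_b):
    """Average separation between two equal-length (x, y) trajectories."""
    return sum(math.dist(p, q) for p, q in zip(track_a, track_b)) / len(track_a)

def heading(track):
    """Overall direction of travel, from first position to last."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.atan2(y1 - y0, x1 - x0)

def walking_together(track_a, track_b):
    """True if two tracks stay close and point the same way."""
    close = mean_distance(track_a, track_b) < COMFORT_DISTANCE_M
    # Wrap the heading difference into [-pi, pi] before comparing.
    diff = (heading(track_a) - heading(track_b) + math.pi) % (2 * math.pi) - math.pi
    return close and abs(diff) < HEADING_TOLERANCE_RAD

# A test calibrated on London commuters might fail in Riyadh, and vice
# versa: the model is only as universal as the crowds it was tuned on.
```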
Your paper argues for a theory of efficacy when it comes to
surveillance. You seem to say that this can only be ethical if we do
it very well.
Yes, but it goes deeper than that. My overall project is to argue that
the questions that are typically raised in the Just War tradition are the
questions that we should be asking about surveillance, in order to see
whether surveillance is justified. One way of doing that is to question
these technologies’ chances of success. In Just War theory, we have this
notion that a war is unethical if you are unlikely to succeed when you
enter into it, because it means sending soldiers to die in vain. That was the
perspective that I was coming from with the argument about efficacy—if
there isn’t a considerable chance of success, then we shouldn’t be pursuing
these techniques.
But that rationale, Just War theory, is specific to war, and it’s
specific to war for a very important reason. If we embark on ineffective
wars, we run into disastrous consequences with enormous human
costs. It’s not clear that surveillance ought to have a precautionary
principle as strong as the one governing warfare. Why do you think
that it should?
You have to look at the counterfactual: if we have arbitrary surveillance—
which you could argue is what we have in the U.K., where we have virtually
no regulation of CCTV cameras—there is an extent to which you start to
wonder why we’re being surveilled. Why are we being watched? And the
surveillance can have quite an impact on society—it can shape society in
ways that we may not want. If you notice all of this surveillance, and
you also notice that it’s ineffective, you start to wonder if there’s an ulterior
motive for it. Heavy surveillance, of which CCTV is only one variety, can
create a lot of fear in a population; it creates a sense of vulnerability,
a fear of being open to blackmail or other forms of manipulation as a
result of what’s being recorded by surveillance. And these can, together,
create what are typically called chilling effects, where people cease to
engage in democratic speech or democratic traditions, because they’re
concerned about what might be discovered about them or said about
them. For instance, you might think twice about attending a political
demonstration or political meeting if you know you’re going to be watched.
In the U.K., there is a special police unit called FIT [Forward Intelligence Team] that watches demonstrations, looking for certain troublemakers
within political demonstrations. And that might dissuade people from
going to demonstrate. There is now a response protest group called FIT
Watch, which is going out to watch the FIT officers who are watching
the demonstrators, to try to ameliorate this problem, which is viewed as
potentially damaging democratic engagement.
On balance, what about Britain’s CCTV system? How does it score
in your efficacy framework?
I think it probably fails on most counts. I was thinking about this last
night. I’ve been kind of getting into probes and automated warfare more
recently. Boeing is currently working on a drone that can stay in the air
for five years without refueling. One that can stay up for four days was
just successfully tested a couple of days ago. Think about a drone flying
above you for five years. If you’re in occupied Afghanistan, that is going to
be very, very intimidating, and it would be just as intimidating if that were
happening in our own country, if there were surveillance drones constantly
flying above us. That could feel very intimidating. Ultimately, there is very
little difference between a drone flying above a city and the sort of CCTV
surveillance that we have here all the time. It’s just that one is more out of
the ordinary because we’re kind of used to the other.
You argue that in some ways, automated surveillance is less likely
to trigger privacy concerns than manual surveillance. Why is that?
Say you are taking a shower and a person walks in while you’re in the
bathroom. You might feel an invasion of privacy, especially if you don’t
know that person. If a dog walks in, are you going to feel an invasion of
privacy? Probably not. I mean, there might be some sense of Hey, I don’t
want this dog looking at me, but it’s only a dog. It might be that being
watched by a computer is like being watched by the dog; you aren’t entirely
comfortable with it, but it’s better than a human being, a stranger. Now, if it
recorded the images it saw and then allowed a human to see those images,
then, yes, that would be an invasion of privacy. If it had some automated
process where instead of just seeing what you do in private, it took some
action, that would likewise be an invasion of privacy. But yes, one benefit
of automated surveillance is that it can take the human out of the equation,
and that can be a net positive for privacy under certain circumstances.
In your paper, you argue for a middle ground between manual
surveillance and automated surveillance. What does that ideal middle
ground look like in the context of something like the CCTV system in
the U.K.?
One reason that I argue for a middle ground goes back to the fact
that computers don’t have much common sense, which can lead to false
positives, as we saw with the unattended bag or the person who drops their
keys in a parking garage. A computer could be very helpful for filtering
out some obvious false positives, but ideally, a human should come in to
look at what’s left. A computer can provide a good filtering mechanism, for
purposes of privacy. For instance, a computer could blur out people’s faces,
or their entire bodies, so that a human operator sees only the action in
question. At that point, if the action is still deemed suspicious, the operator
can specifically request that the image be unblurred so he can see who the
person is and see how he needs to respond to them.
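As a rough illustration of that middle ground, the review flow might be structured like the sketch below, in which the machine hands the human only redacted footage and unblurring is a deliberate, logged act. Every name and threshold here is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the blur-first review pipeline described above.
# The 0.8 threshold and all names are illustrative, not a real API.

@dataclass
class Event:
    event_id: int
    suspicion_score: float            # from an assumed upstream detector
    frames: list = field(default_factory=list)

audit_log = []                        # unblur requests are always recorded

def redact(frame):
    """Stand-in for blurring faces and bodies in a video frame."""
    return f"[blurred] {frame}"

def review(event, operator_id, still_suspicious):
    # Stage 1: the machine screens out obvious false positives.
    if event.suspicion_score < 0.8:
        return "dismissed by machine"
    # Stage 2: the human reviewer sees only redacted footage by default.
    if not still_suspicious([redact(f) for f in event.frames]):
        return "dismissed by operator"
    # Stage 3: revealing identity is a deliberate, auditable step,
    # never the default view.
    audit_log.append(f"operator {operator_id} unblurred event {event.event_id}")
    return "escalated with identities revealed"
```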
In the context of automated surveillance, does privatization worry
you?
That’s a really interesting question. I think the privatization of creating
the software and the hardware in and of itself doesn’t necessarily bother
me; what concerns me more is the privatization of the operation of the
surveillance. So: privatizing the people who are watching the cameras,
privatizing what is done with the information from the cameras—when
private companies hold that sort of information, especially if they’re not
regulated, there are all sorts of abuses that could flow from that. There’s a
second thing that might be worth saying about that as well, and it ties back
in with the Arab Spring. After Mubarak fell, when we went into his secret-police headquarters, we found all sorts of British, French, and American
spying equipment, which people like Boeing and whoever else sold to the
Libyans and Egyptians knowing very well what would happen with it. Of
course, there are companies right now that are either still doing, or recently
stopped doing, the same for Syria. I think that’s a legitimate concern as well.
Video surveillance like CCTV is only one kind of automated
surveillance; automated data surveillance is another. I’m thinking
about intelligence organizations looking for patterns in millions of
financial transactions and Internet searches. Are there overlaps in the
ethical issues presented by data surveillance and camera surveillance?
Definitely. The same questions that we’re asking about CCTV should
be asked about data surveillance. Potentially, I think that could be very
concerning. And that’s not just true of intelligence organizations, but of
commercial organizations as well. The New York Times recently ran an
article about Target and the lengths it would go to know that a teenage girl was pregnant—so much so that they knew before her dad did.
Those are the kinds of questions commercial organizations are looking to
answer. And you have to ask what they do with that information—are they
offering better deals to the sort of customers they would rather have as their
clientele? Are they trying to put people off who they would rather not have
as their clientele? For instance, frequent fliers get all sorts of deals on their
flights because frequent fliers spend a lot of money on the airline. Are you
creating a situation where the rich, successful people are the ones who get
offered better deals to fly on the planes, whereas poorer people don’t get
those same offers? The questions raised by Big Data are very interesting.
It’s actually a very rich area for research; we haven’t even scratched the
surface of it.
43.
Communion on the Moon
THE RELIGIOUS EXPERIENCE IN SPACE
by Rebecca J. Rosen
Before a launch in June
of three human beings into the ether of space
around the earth, before they boarded their Soyuz spacecraft, and before
the rockets were fired, precautions were taken. Not the humdrum checklists
and redundancies of space exploration—assessing the weather, the equipment, the math—but a preparation with a more mystical dimension: the
blessing, by a Russian Orthodox priest, of the spacecraft, as it sat on the
launchpad in the Kazakh steppe.
The scene presents a tableau that seems incongruent but may just be
fitting.
The discordance is obvious: there we were, on the brink of a new
expedition to space, a frontier of human exploration and research that is
the capstone of our scientific achievement. “The idea of traveling to other
celestial bodies reflects to the highest degree the independence and agility
of the human mind. It lends ultimate dignity to man’s technical and scientific endeavors,” the rocket scientist Krafft Arnold Ehricke
once said. “Above all, it touches on the philosophy of his very existence.”
His secular existence.
And yet, there was a priest, outfitted in the finery of a centuries-old
church, shaking holy water over the engines, invoking God’s protection for
a journey to near-earth orbit. That these two spheres of human creation coexist is remarkable. That they interact, space agencies courting the sanction
of Russian Orthodox Christianity, is strange.
For reasons both straightforward and opaque, the secular, scientific
work of space exploration cannot shake religion, and over the past few
decades of human space travel, astronauts of Christian, Jewish, and Muslim
faith have taken their religious beliefs into orbit, praying out of duty, in awe,
and for their safe return.
That last reason—risk—is perhaps the most basic explanation for
the religious appeals of space explorers. On the ground, people led
by popes, presidents, and their own instincts pray for astronauts’
safe deliverance. Is there any supplication more succinct than what
the astronaut Scott Carpenter radioed to John Glenn as the
rockets powered him off the ground? “Godspeed, John Glenn.” The Book
of Common Prayer includes astronauts in an optional line in its Prayer
for Travelers: “For those who travel on land, on water, or in the air [or
through outer space], let us pray to the Lord.”
Of course astronauts pray for their own safety. It’s hard to imagine
atheists in foxholes; it is at least as hard to imagine them in space shuttles.
In his memoir, the astronaut Mike Mullane recalled the night before launch,
when he lay in bed wracked by fears. He checked his nightstand for a
Bible and found that there wasn’t one. But, he writes, “I didn’t need
a Bible to talk to God. I prayed for my family. I prayed for myself. I prayed
I wouldn’t blow up and then I prayed harder that I wouldn’t screw up.”
But prayers for safety are basic. Astronauts’ religious practice in space
has played out in more beautiful and complicated ways. There is no more
moving example of this than when the astronauts of Apollo 8—the first
humans to orbit the moon and see the Earth appear over the moon’s
horizon—read the first verses of Genesis.
Here’s the scene: It’s Christmas Eve, 1968. The spaceship with three
men onboard had hurtled toward the moon for three days, and it has now
finally entered the moon’s orbit, a move requiring a maneuver so dicey that
just a tiny mistake could have sent the men off into an unwieldy elliptical
orbit or crashing to the moon’s surface. But all went smoothly, and they
are orbiting the moon. On their fourth pass (of 10), the astronaut William
Anders snaps the famous Earthrise shot that will appear in Life magazine.
On their ninth orbit, they begin a broadcast down to Earth. Astronaut Frank
Borman introduces the men of the mission, and then reads this:
“And the earth was without form, and void; and darkness was upon the
face of the deep. And the spirit of God moved upon the face of the waters
and God said, ‘Let there be light.’ ”
And it was so.
Through this broadcast and that photograph, I think we can begin to
taste the kind of spiritual experience astronauts must have as they travel
to distances, and perspectives, so few have known. As John Glenn said,
“To look out at this kind of creation out here and not believe in God is to
me impossible . . . It just strengthens my faith. I wish there were words to
describe what it’s like.”
This ultimate scientific endeavor does not answer the questions religion
seeks to answer; it brings humans into a close encounter with their own
smallness, the Earth’s beauty, and the vastness of the cosmos. Faced with
these truths, is it any wonder that some astronauts turn to religion?
Some surely find comfort in the words of secular philosopher-scientists
like Carl Sagan and Neil deGrasse Tyson. But others will find that the
ancient religions of Earth have some greater power, some deeper resonance,
when they have traveled safely so far from their homes. The astronaut
James Irwin put it this way: “As we got farther and farther away it
diminished in size. Finally it shrank to the size of a marble, the most
beautiful marble you can imagine. That beautiful, warm, living object
looked so fragile, so delicate, that if you touched it with a finger it would
crumble and fall apart. Seeing this has to change a man, has to make a man
appreciate the creation of God and the love of God.”
This is in part the sentiment Buzz Aldrin relays in his memoir
as he recounts how he took communion in the minutes between when he
and Neil Armstrong became the first humans on the moon’s surface, and
when Armstrong set his foot down on the dust. Aldrin says he had planned
the ceremony as “an expression of gratitude and hope.” The ceremony was
kept quiet (unaired) because of NASA’s caution following a lawsuit over
the Apollo 8 Genesis reading, but Aldrin proceeded with a tiny vial of wine
and a wafer he had transported to the moon in anticipation of the moment
(personal items were rigidly restricted by weight, so everything had to be
small). He writes:
During those first hours on the moon, before the planned
eating and rest periods, I reached into my personal preference
kit and pulled out the communion elements along with a three-by-five card on which I had written the words of Jesus: “I
am the vine, you are the branches. Whoever remains in me,
and I in him, will bear much fruit; for you can do nothing
without me.” I poured a thimbleful of wine from a sealed
plastic container into a small chalice, and waited for the wine
to settle down as it swirled in the one-sixth Earth gravity of
the moon. My comments to the world were inclusive: “I would
like to request a few moments of silence . . . and to invite
each person listening in, wherever and whomever they may
be, to pause for a moment and contemplate the events of the
past few hours, and to give thanks in his or her own way.” I
silently read the Bible passages as I partook of the wafer and
the wine, and offered a private prayer for the task at hand and
the opportunity I had been given.
Neil watched respectfully, but made no comment to me at
the time.
He continued, reflecting:
Perhaps, if I had it to do over again, I would not choose to
celebrate communion. Although it was a deeply meaningful
experience for me, it was a Christian sacrament, and we
had come to the moon in the name of all mankind—be they
Christians, Jews, Muslims, animists, agnostics, or atheists. But
at the time I could think of no better way to acknowledge the
enormity of the Apollo experience than by giving thanks to
God. It was my hope that people would keep the whole event
in their minds and see, beyond minor details and technical
achievements, a deeper meaning—a challenge, and the human
need to explore whatever is above us, below us, or out there.
Aldrin gets at the heart of religious experience in space: this achievement is so momentous, so otherworldly (almost literally), that the rituals
and words of one’s own religion become, as he says, “deeply meaningful.”
Other astronauts of other faiths—Jewish and Muslim—have also brought
their religious practices into orbit, resulting in some thorny questions
at the intersection of theology and practicality. For example, how often should a Jew who experiences 16 sunrises and 16 sunsets every 24-hour period observe the Sabbath? Every seventh “day”—which means every 10.5 hours or so—for just 90-ish minutes? When the Israeli astronaut Ilan Ramon was in orbit, rabbis decided he could just
follow Cape Canaveral time. Ramon was killed during the space shuttle
Columbia’s re-entry, so we don’t have his post-mission reflections on what
that experience was like. In anticipation of his journey, he said that though
he was not particularly religious, observing the Sabbath in space was
important because, as a representative of Jewish people everywhere and
the son of a Holocaust survivor, bringing those traditions into space, into
the 21st century, represented a spirit of continuity. “I’m kind of the proof
for my parents and their generation that whatever we’ve been fighting for
in the last century is becoming true,” he told the BBC.
Similarly, the Muslim astronaut Sheikh Muszaphar Shukor had to figure
out how, exactly, one faces Mecca during prayers when you are moving at
about 17,500 miles per hour and its location relative to you is changing minute to minute, sometimes by many degrees in the course of one
prayer. It was decided that Shukor, who was on the International Space
Station during Ramadan, could do no more than the best of his abilities
in trying to face Mecca, kneel, and perform ritual washing. A video from
the space station showed how this wound up working and just how hard
and how odd it can be to bring religion into space exploration, in a way
not unlike that of the Russian Orthodox priest preparing a spaceship for
launch.
For many people, space represents its own religion, a spiritual experience on its own, secular terms, with no help from the divine
or ancient rituals. But for those who believe, and travel into space, the
experience can endow their faith with greater significance. There is awe in
science because, simply, there is awe in reality. We use science to discover
that reality, and some use religion to understand it, to feel it deeply.
There is perhaps nothing more human than the curiosity that compels
exploration. But paired with that curiosity is a search for meaning—we
don’t want to know just what is out there, we want to turn it into something
with a story, something with sense. We turn to the gods for that meaning,
and we turn to them for our safety as we go. Same as it’s always been, same
as it ever was. As President Kennedy concluded his 1962 speech at Rice
University on our mission to the moon, “Space is there and we’re going to
climb it, and the moon and planets are there, and new hopes for knowledge
and peace are there. And, therefore, as we set sail, we ask God’s blessing on
the most hazardous and dangerous and greatest adventure on which man
has ever embarked.”
44.
The Animated GIF, Zombie
WHAT HAPPENS WHEN AN ANIMATED GIF SPRINGS TO LIFE.
by Alexis C. Madrigal
To learn about the Internet, we tend to turn to . . . the Internet. Blogs
comment on blogs commenting on blogs. Tweets build on tweets. It can
be one hell of a beautiful snow globe.
Yet the most vital investigation I’ve seen in years of our complex,
befuddling, empowering, discouraging relationship with technology came
courtesy of Cara DeFabio’s dance/performance show, She Was a Computer.
Now, DeFabio’s a friend, and I heard about the genesis of the production
from her, so I am far from a disinterested observer. Nonetheless, I was
shocked at how effective the medium of dance-heavy experimental theater
was at lending depth to the everyday questions we have about technology.
Dance, with its embodied rhythmic repetitions—human algorithms—
strikes me as an art form that suddenly works on a whole new level now
that we live Internet-mediated lives. Dance, unlike art made for paper or
screen, cannot be turned into bits and sent hurtling along fiber-optic cables.
Dance is atoms moving.
There is something about incarnating these ideas and debates. Adding
bodies makes some things ludicrous and other things suddenly sensible.
We live in a time of loops (the Animated GIF Era) and tightly coupled
systems, where weather data cause cascading effects inside algorithmic
trading computers, which send corn prices spiraling up, driving changes
in farmers’ GPS-driven tractors.
These feedback loops were originally studied through that strange mid-century field of cybernetics, which initially concerned itself with biological-technological hybrids. This is where we got the word cyborg, after all.
People spent time thinking about these things because they realized how
stable our bodies are under most conditions, and they realized that doing
certain things (say, blasting people into space) might disrupt how we
operate. Larger systems, whole ecologies, were stable with some human
influence, but might be sensitive to big perturbations.
Along with these fears, though, some of the earliest cyberneticians
thought that perhaps by extending ourselves through technology, we might
find a new freedom. Think of Donna Haraway’s promotion of a cyborg
feminism with the motto “I’d rather be a cyborg than a goddess” at its core.
Or check out Manfred Clynes and Nathan Kline in the 1960 paper that coined the term cyborg: “The purpose of the Cyborg, as well as his
own homeostatic systems, is to provide an organizational system in which
such robot-like problems are taken care of automatically and unconsciously,
leaving man free to explore, to create, to think, and to feel.”
Dance aims to do the exact opposite of this version of the cyborg. It
takes the things we all do robotically—walk, run, move—and makes them a
focus of exploration, creation, thinking, and feeling. Maybe the only way to
study our increasingly cyborg selves is to go to the absolute opposite end of
the mediation spectrum, forcing the inspection of our experience through
human bodies alone.
The show opened with what I would call an animated GIF come to
life. A woman in a dress (the dancer Sara Yassky) spins in a circle at the
front of the stage, to the sound of Lana Del Rey’s “Video Games,” which is
playing on a phonograph on the stage. At first, she’s moving like Del Rey
on her hipster-infamous Saturday Night Live performance. This move, this
lackadaisical spin, has been memed into oblivion. Any place there is a clear
foreground or background, you can find Lana Del Rey spinning: on famous
men’s shoulders, in bird’s nests, at the Arc de Triomphe, in toilets, in old
music videos, inside videogames, at the scene of a bloody killing.
While the meme remixed from her show was wildly popular, her
actual performance was panned. Musician Juliette Lewis tweeted, “Wow,
watching this ‘singer’ on SNL is like watching a 12-year-old in their
bedroom when they’re pretending to sing and perform.” But I would tell
Lewis: Exactly! People loved her spin because it was girlish and scared and
oddly unsavvy. What was she doing on national television?!
Back in the real life of DeFabio’s show, the dancer is still spinning.
It’s been a minute, at least. The woman is caught in a loop, a live
animated GIF. It’s as if she was wandering along in her life and suddenly,
these few frames snatched from the flow of time became what she was.
Remember the title of the show: She Was a Computer. That was drawn from
DeFabio’s grandmother’s experience working a calculating machine back
before microchips, when “a computer” was a person, generally a woman,
who performed repetitive calculations over and over and over. Almost as if
caught in a strange loop, or trapped inside an animated GIF.
But the thing is, a computer can run a loop perfectly, indefinitely. That
very attribute is what makes algorithms work. Computers can run the same
set of instructions a million times in a row exactly the same way. A human,
even a human computer, cannot. As we watch the dancer spin, we realize:
a meme incarnated is a dangerous thing; a live animated GIF cannot be as
predictable as its digital avatar. Slowly, dozens of turns in, we see her face start to change. What had been a dull look, almost
as if she couldn’t see us, morphs into a kind of knowing grin. It’s not nice
though. It’s nasty. She smiles. She snarls. She’s superior.
But then the tables turn. (By now, this slow spinning has been going on
for two minutes.) She starts to look worried. Her eyes move quickly as she
turns. We may be paying to watch her, but she’s the one stuck performing
the same motion over and over and over. (“Play Freebird!”) After another
minute, she breaks into full mammalian panic without being able to break
out of her spin. She claws at the fabric clinging to her thighs. But she can’t
grab on to anything that would pull her from her loop. The sheer need to
repeat and repeat ensures that her rage remains unfulfilled. The turning
must go on, and no one, least of all her, knows why. Her mouth screams
but no words come out. The actions that used to work no longer do. The
only reason she exists is to turn, and her emotions about that turning are
unfortunate side effects taking on the airs of consciousness. Her feelings are
just pretenders to the throne of meaning. Because, no matter what, she has
to keep turning. That’s the logic of the system.
All of which to say: to make a GIF human, to pull this fundamental
Internet thing outside of its box, is an ingenious piece of stagecraft. For
every GIF meme, there is a person in there somewhere. Susan Sontag
identified photography with death. She put it like this in On Photography:
All photographs are memento mori. To take a photograph
is to participate in another person’s (or thing’s) mortality,
vulnerability, mutability. Precisely by slicing out this moment
and freezing it, all photographs testify to time’s relentless melt.
The memento mori says, “Remember you must die.” The animated GIF
says, “Remember you will live on in media.” That is to say, if photography
is a reaper, the animated GIF is its zombie cousin.
In the recent spate of high-end zombie books and movies, there is often
an intimation that at some point in the process, the zombies could have a
moment of self-awareness before or after they eat your brains. Certainly,
there could be no greater despair than knowing you could live forever but
only in this horrific, extruded, rotting form. Perhaps despair itself could
be described by the score that DeFabio gave her dancer Yassky: APATHY,
DISDAIN, MASKED FEAR, PANIC, APATHY. Each emotion got roughly a
minute of the allotted four minutes and 42 seconds of the track.
From that stunning beginning, DeFabio went on to consider many
other aspects of our relationship with technology. During one particularly
memorable sequence, a woman (played brilliantly by Stacey Swan) is
seated in the center of the stage with only a telephone by her side. She lip
syncs a Dorothy Parker short story about waiting for a man to call, which
had been read for radio as a monologue by Tallulah Bankhead. Hope and
malice mingle and combine in the very object of the phone. It is as if the
phone is controlling whether or not the man calls, a rather improbable but
somewhat comforting proposition.
The longest sequence of She Was a Computer was a high-concept skit,
almost vintage Monty Python. A female doctor (portrayed by Niki Selken)
receives a stream of patients who are suffering from physical ailments. She
diagnoses their diseases as stemming from the digital side of their hybrid
reality, and her prescriptions for her patients’ cramps and headaches are
gadget-related.
The gag, essentially, is that our devices can make us hurt and sick. And
therefore, perhaps our technologies can heal us, too. Lymph node swollen?
Try resetting your browser cache. Sprained knee? You need a new hard
drive. Can’t sleep? You may have Restless Touchpad Syndrome.
This is absurd! Yes, of course it is. But it’s just a reversal of one
of the most common complaints of the chattering classes. Our machines
are making us stupid and crazy, ruining our health and fitness. If the
technological fears of the last century were about what technology would do to
the world, nowadays we’re more worried about what they’re doing to us,
our insides, our biology, our brains. When we first touched the screen, we
did not agree to become cyborgs, and yet, here we are!
Toward the end of the show, a hair dryer takes on a life of its own in
a pas de deux with the ridiculously athletic Pearl Marill as she prepares
for an Instagram photoshoot. Marill may be pushing the button to start the
dryer, but it, in turn, drives the movement of her body; she dances when it’s
on, stops when it’s off. It moves her around the stage even as she seems to
direct it. The complex interplay between the control she exerts—and in so
doing, paradoxically gives away—is one of the most mesmerizing, funniest,
and heaviest commentaries on technological determinism that you’re likely
to see anywhere. And here it was on the small CounterPulse stage in a
kind of rough part of San Francisco.
Is she enhanced? Is she controlled? A body can give answers that words
cannot.
45.
The IVF Panic
"ALL HELL WILL BREAK LOOSE,
P O L I T I C A L LY A N D M O R A L LY, A L L
OV E R T H E WO R L D. "
by Megan Garber
Lesley Brown, the mother of the world’s first baby conceived
through in vitro fertilization, has died. She was 64. By the time she turned 30, Brown and her husband, John, had been
trying for nine years to conceive. As they tried, the doctors Patrick Steptoe
and Robert Edwards were making strides in in vitro fertilization—
the process that brings the egg and sperm together in a lab setting, with
the embryo implanted after fertilization. The procedure—as one doctor put
it, “an incredible leap into the unknown”—had never led to a full-term pregnancy. By the late 1970s, however, Steptoe, a gynecologist, and
Edwards, a biologist, were getting close. When Brown and her husband
volunteered for in vitro, the process was—finally—successful. Brown delivered a daughter, Louise, on July 25, 1978.
Given the number of babies that have now been conceived through
IVF—more than 4 million of them at last count—it’s easy to forget
how controversial the procedure was during the time when, medically
and culturally, it was new. We weren’t quite sure what to make of this
process that, on the one hand, offered hope to infertile women and, on
the other, seemed to carry shades of Aldous Huxley. People feared that
babies might be born with cognitive or developmental abnormalities. They
weren’t entirely sure how IVF was different from cloning, or from the
“ethereal conception” that was artificial insemination. They balked
at the notion of “assembly-line fetuses grown in test tubes.” In 1978
press coverage of Brown’s pregnancy, “test-tube baby”—a phrase that
reflects both dismissal and fear, and which we now use mostly ironically—
was pretty much a default descriptor. (That’s especially noteworthy because
Louise Brown was conceived not in a test tube, but a petri dish: in
vitro simply means ”in glass.”)
For many, IVF smacked of a moral overstep—or at least of a potential
one. In a 1974 article headlined “The Embryo Sweepstakes,” The New
York Times considered the ethical implications of what it called “the brave
new baby”: the child “conceived in a test tube and then planted in a womb.”
(The scare phrase in that being not “test tube” so much as “a womb” and its
menacingly indefinite article.) And no less a luminary than James Watson—
yes, that James Watson—publicly decried the procedure, telling a
Congressional committee in 1974 that a successful embryo transplant would
lead to “all sorts of bad scenarios.”
Specifically, he predicted: “All hell will break loose, politically and
morally, all over the world.”
Despite those warnings, though, IVF development moved forward,
until the procedure took on an aura of inevitability. “No one doubts that
well-documented embryo implants and transplants will occur, probably
within a year or two,” the Times noted, also in 1974.
Indeed, test-tube production is advancing at such a pace
that Dr. Bentley Glass, former president of the American Association for the Advancement of Science, has predicted that by
the end of this century a fully formed baby could be ‘decanted’
from an artificial womb.
But whether such a procedure should be performed at
all, and under what circumstances, is currently the subject
of an unusually wide-ranging and sometimes bitter debate.
It is a debate that rages in the background as the potential
‘predestinators’—the researchers themselves—go forward, proceeding ‘Russianlike’ in their operating theaters, their critics
say, oblivious to the hue and cry rising behind them.
For many of those critics, the fear wasn’t of in vitro itself, but of the
slippery slope it suggested. “First came artificial insemination, then the test-tube baby,” Anne Taylor Fleming wrote in 1980; “now researchers are
experimenting with transplanting embryos from woman to woman. Such
scientific breakthroughs raise fears of a brave new world where parents
can select their child’s gender and traits, where babies will gestate in
laboratories and where the question of abortion ethics pales in the face of
an even more complicated question—the ethics of manufacturing human
life.”
And the complexity of that question meant that Lesley Brown faced
the danger not just of a brand-new medical procedure, but also of a
morally indignant public. While on bed rest in a public hospital during
her pregnancy, Brown had to be moved from her room in response to
a bomb threat (later proved a hoax). She, John, and Louise had to move
to a new home—one with a private backyard—so that she could take
Louise outside without encountering camped-out reporters. As Louise, now a mother to her own son, put it: “Mum was a very quiet and
private person who ended up in the world spotlight because she wanted a
family so much.”
Today, more than 30 years after Lesley Brown got her family (she
would go on to have another daughter, Natalie, also via IVF), the
procedure that got it for her isn’t without controversies. The Catholic
Church teaches that “IVF violates the rights of the child: it deprives
him of his filial relationship with his parental origins and can hinder the
maturing of his personality. It objectively deprives conjugal fruitfulness
of its unity and integrity, it brings about and manifests a rupture
between genetic parenthood, gestational parenthood, and responsibility
for upbringing.” Catholic bishops in Poland have branded IVF “the younger sister of eugenics.”
In the broader culture, though, IVF has won the best thing that a
controversial technology can: widespread acceptance. Just a year after
Lesley Brown gave birth to her first daughter, cultural normalization
seemed a foregone conclusion. In a 1979 year-in-review edition
of its paper, the Sarasota Herald-Tribune reprinted James Watson’s “all
hell will break loose” exhortations against IVF. It then remarked, simply:
“He was mistaken.” And in 2010, Robert Edwards won the Nobel Prize
in medicine for his role in pioneering the IVF procedure. The Nobel
committee cited achievements that have “made it possible to treat infertility,
a medical condition affecting a large proportion of humanity.”
In recognizing those achievements—achievements that have also led to,
among other things, advances in stem cell research—the committee made
a point of stating the obvious: “Today,” it declared, “IVF is an established
therapy throughout the world.”
46.
The Broken Beyond
HOW SPACE TURNED INTO AN OFFICE PARK
by Ian Bogost
I am a Space Shuttle child. I ogled big, exploded-view posters of the
spaceship in classrooms. I built models of it out of plastic and assembled
gliders in its shape out of foam. I sat silent with my classmates watching
television news on a VCR cart after Challenger exploded on January 28, 1986. Six years later, I worked as an instructor at the New Mexico Museum
of Space History’s summer “Shuttle Camp,” a name that will soon seem
retrograde, if it doesn’t already.
The last space shuttle took its last space flight, then in September 2012 it took its last worldly one. It ended my generation’s era of space
marvel, which turned out to take a very different path from that of our
parents. During the 1950s and 1960s, space exploration was primarily a
proxy for geopolitical combat. It was largely symbolic, even if set against
a background of earnest frontiersmanship. First satellite, first man in space,
first spacewalk, first manned moon mission, and so on. Space as a frontier
was a thing for science-fiction fantasy, although we dipped our toes far
enough across that border to make it clear that such exploration was
possible, even if not yet feasible.
By the 1970s, space had become a laboratory rather than a frontier.
Despite its status as a “space station,” Skylab was first called Orbital
Workshop, making it sound more like Dad’s vision for his garage than
like Kubrick’s vision of 2001. The fact that Skylab was permanently
disfigured during launch only concretized the program’s ennui. Space
exploration became self-referential: missions were sent to Skylab in order
to repair Skylab. The space shuttle turned the workaday space lab into a
suburban delivery and odd-jobs service. Satellites were deployed, space
labs serviced, probes released, crystals grown. Meanwhile, the aspects of
space travel that really interest people—such as the fact that it’s travel in
motherfucking outer space—were downplayed or eliminated.
For one of our Shuttle Camp classroom gimmicks, we’d have a kid
hold a real high-temperature reusable surface insulation tile, one of the thousands of such tiles that line the orbiter's underbelly to facilitate reentry.
After finishing her freeze-dried space spaghetti and Tang, this unassuming
third-grader would clasp at the edges of the impossibly light tile, which
seemed like little more than styrofoam. We’d heat its surface with a propane
torch until the heat made it glow red with hazard, only to dissipate a few
moments later. The danger was real, and the kids knew it. A decade later, a
chunk of foam insulation would break free of Columbia’s external fuel tank
on launch and damage part of this thermal protection system, dooming the
orbiter to destruction.
The very idea of a reusable space vehicle is contrary to everything that
space travel had previously represented–wealth and power for one, but also
enormity and smallness and risk and brazenness and uncertainty and dark,
dark darkness–expedition rather than experimentation. It’s no wonder the
space spaghetti and the thermal protection tiles were so interesting to those
kids. They represented the experience of space (the frontier) rather than
its taming as laboratory (the settlement). Look at the Saturn V. It’s a
badass rocket. Now look at the Space Shuttle. It’s a humble tractor.
* * *
After retiring, the shuttle began its funeral procession. Endeavour meandered around California on a tour of the state's monuments. Awkward
like a big RV, Endeavour was hoisted atop its shuttle carrier aircraft (SCA),
a modified version of the Boeing 747, a once-miraculous jumbo jet
that’s itself reached middle age. It was like watching an adult man taking
an elderly father on a final tour: the Golden Gate, the state Capitol, the
Hollywood sign. To mount Endeavour on the SCA, NASA uses a custom-built mate-demate device (MDD) that lifts the orbiter on and off the jet's back. A NASA file photo of the MDD mating process, taken after an STS mission one December, shows the proud orbiter taking its position on a shiny 747, evening light pouring over both. Today, the image reads differently:
a cripple hoisted uncomfortably by machine for an easy-does-it orbit
measured in feet rather than miles. Sunsetting.
Watching Californians watch their once-starbound vessel sing its silent
swan song, one can't help but think of another 1980s icon who came home to live forever in Los Angeles: Ronald Reagan. Reagan neither
initiated nor retired the space-shuttle program, but as president during
its zenith, he is forever inseparable from it. That January evening after
Challenger‘s destruction, Reagan addressed the nation: “We’ve grown
used to wonders in this century. It's hard to dazzle us. But for 25 years the United States space program has been doing just that. We've grown used to
the idea of space, and perhaps we forget that we’ve only just begun. We’re
still pioneers.”
Flying low before California’s landmarks, the orbiter had an easier
time dazzling us this month. Crowds gathered, pointing skyward with glee,
apparently unaware they were watching a wake instead of a parade. Tweets,
Instagrams, and Facebook posts followed. A generation of millennial high-tech start-up employees young enough to be my former campgoers took a break from setting up their new iPhone 5s to point at a spaceship flying
lower than a biplane. So little have we come to expect from space travel
that near-earth travel is now sufficient spectacle. As the idea of a product is
sufficient implementation, the idea of a spaceship has become sufficient
thrill. "Nothing ends here," Reagan told us in 1986. But things do end,
eventually. Even the iPhone has come down to earth, having removed the
longstanding default lock-screen image of Earth from space in favor
of a more humble pond ripple.
Reagan’s cortège was more involved and protracted than Endeavour‘s,
starting at the Ronald Reagan Presidential Library in Simi Valley and
taking a detour through D.C. for the president to lie in state in the Capitol
Rotunda, before making a somber return to Southern California that punctuated the end of an era, for good and for ill. Endeavour arrived
at LAX with more fanfare. As it landed, Smithsonian magazine proudly
announced that Endeavour’s trip wasn’t quite finished: “Right now, it’s
being prepared for a cross-town move from the airport to the [California]
Science Center,” marking “the first, last and only time a space shuttle will
travel through urban, public city streets.” Not even a biplane’s flight will be
necessary to impress us anymore: on its way to interment at the center’s
air and space exhibits in Exposition Park, Endeavour will be content as a
commuter car, or a parade float. The stars come down to earth.
* * *
In one of hundreds of images of Endeavour atop the SCA, employees
at SpaceX clambered to the roof of their headquarters in Hawthorne,
near LAX. They are the shuttle program’s accidental legacy. Created
by the PayPal co-founder Elon Musk in 2002, the company produces
the Falcon 9 two-stage-to-orbit launch vehicle and the Dragon capsule,
the first commercial spacecraft to be recovered successfully from orbit. This
fall, Falcon 9/Dragon will commence deliveries to the International Space
Station (ISS) under what remains of NASA’s low-Earth space efforts, going
by the workaday name of Commercial Orbital Transportation Services
(COTS). A cosmic UPS service.
Musk is a hero of the entrepreneurs and venture capitalists who have
themselves taken over the role of hero from Yuri Gagarin and John Glenn
and Neil Armstrong and Sally Ride. He's also perhaps the closest real-world counterpart to Tony Stark, the fictional playboy and industrialist who
becomes Iron Man in Stan Lee’s comic books. Musk started SpaceX shortly
before selling PayPal in 2002. Like Stark, he's a modest man, taking only
the titles of CEO and CTO at SpaceX, in addition to his role as chairman
and CEO at Tesla Motors, the electric-car manufacturer he founded a year
later. SpaceX’s contract under the NASA COTS program is worth up to $ .
billion, more than twice what eBay shelled out for PayPal.
Musk is in the space freight business, hauling materials and equipment
from earth to sky, a kind of 21st-century Cornelius Vanderbilt in the making.
Elsewhere, rich men lust jealously for space now that Earth’s challenges
have proven tiresome. John Carmack, the co-founder of id Software and co-creator of Doom, started Armadillo Aerospace in 2000, eyeing space
tourism via a suborbital commercial craft. The Amazon founder Jeff Bezos
helped found another private spaceflight company, Blue Origin, in the same
year. And of course, the Virgin Group founder Richard Branson established
Virgin Galactic in 2004, to provide sub-orbital space tourism as well as orbital satellite launch. In 2008, Richard Garriott, the role-playing-game creator and son of the American Skylab astronaut Owen K. Garriott, paid Space Adventures a reported $30 million to be flown via a Russian Soyuz spacecraft to the ISS. Just four years later, Branson's Virgin Galactic was selling tickets for suborbital rides on SpaceShipTwo for a mere $200,000.
Ashton Kutcher and Katy Perry have already signed up. TMZ Galactic can’t
be far behind.
In grade school during the early days of the shuttle program, I
remember writing and illustrating “astronaut” as a response to the dreaded
“What do you want to be when you grow up?” prompt. I didn’t really want
to be an astronaut, but I knew that unlike my first inclination, garbage
collector, it would be accepted as a suitably ambitious aspiration.
Space, once a place for governments and dreamers who would really
just be civil servants, has become a playground for the hyper-affluent.
Owen Garriott was an engineer from Oklahoma and a U.S. Naval Officer
selected for life-science service in space. Richard Garriott was a lucky rich
guy with connections. We don’t have flying cars, but we have a billionaire
who sells electric cars to millionaires. We don’t have space vacations, but
we have another billionaire who will take you on a magic carpet ride for 200 large. Today, a kid who says "I want to be an astronaut" is really just
saying “I want to be rich.” Isn’t that what everyone wants? All of today’s
dreams are dreams of wealth.
The official mission of the final space shuttle, STS-135, reads more
like a joke from The Office than a science-fiction fantasy: “Space Shuttle
Atlantis is carrying the Raffaello multipurpose logistics module to deliver
supplies, logistics and spare parts to the International Space Station.” Among
its tasks: the delivery of a new tank for a urine recycling system, and
the removal of a malfunctioning space sewage pump. If only I'd known back then that astronaut and garbage collector would turn out to be such similar jobs.
Despite what you read in comic books, even Stark Industries has to
bend metal and mold plastic. Elon Musk will take over the task of shipping
sewage pumps and waste-processing units and air-filtration systems to the
ISS. Richard Branson will sell Justin Bieber and Mitt Romney tickets past
the Kármán line. Eventually, inevitably, Mark Zuckerberg will slip a bill to
the surly bonds of earth and start his own space enterprise, just to keep up
with the Rothschilds. The quiet maybe-billionaire Craig Newmark will
expand his eponymous service to taxi unwanted minibikes and toasters and
other worldly junk into space, the Final Landfill.
It’s not so much that the space program is broken in the sense
of inoperative. Space is alive and well, for the wealthy at least, where it’s
become like the air and the land and the sea: a substrate for commerce,
for generating even more wealth. Instead, the space program is broken in
the sense of tamed, domesticated, housebroken. It happens to all frontiers:
they get settled. How many nights can one man dance the skies? Better
to rent out laughter-silvered wings by the hour so you can focus on
your asteroid-mining start-up.
In the 1960s we went to the moon not because it was easy but because it was hard. In the 1990s we went to low-Earth orbit because, you know,
somebody got a grant to study polymers in zero gravity, or because a
high-priced pharmaceutical could be more readily synthesized, or because a
communications satellite had to be deployed, or because a space telescope
had to be repaired. The space shuttle program strove to make space
exploration repeatable and predictable, and it succeeded. It turned space
into an office park. Now the tenants are filing in. Space: Earth’s suburbs.
Office space available.
47.
Interview: What Happened
Before the Big Bang?
TIM MAUDLIN ON THE NEW PHILOSOPHY OF COSMOLOGY.
by Ross Andersen
Last May, Stephen Hawking gave a talk at Google’s Zeitgeist Conference
in which he declared philosophy to be dead. In his book The Grand Design,
Hawking went even further. “How can we understand the world in which
we find ourselves? How does the universe behave? What is the nature of
reality? Where did all this come from? Traditionally these were questions
for philosophy, but philosophy is dead,” Hawking wrote. “Philosophy has
not kept up with modern developments in science, particularly physics.”
In December, a group of professors from America's top philosophy departments, including those of Rutgers, Columbia, Yale, and NYU, set out
to establish the philosophy of cosmology as a new field of study within the
philosophy of physics. The group aims to bring a philosophical approach
to the basic questions at the heart of physics, including those concerning
the nature, age, and fate of the universe. This past week, a second group of
scholars from Oxford and Cambridge announced their intention to launch
a similar project in the United Kingdom.
One of the founding members of the American group, Tim Maudlin,
was recently hired by New York University, which has the top-ranked
philosophy department in the English-speaking world. Maudlin is a philosopher of physics whose interests range from the foundations of physics to
topics more firmly within the domain of philosophy, like metaphysics and
logic.
Yesterday I spoke with Maudlin by phone about cosmology, multiple
universes, the nature of time, the odds of extraterrestrial life, and why
Stephen Hawking is wrong about philosophy.
Your group has identified the central goal of the philosophy of
cosmology to be the pursuit of outstanding conceptual problems at the
foundation of cosmology. As you see it, what are the most striking of
those problems?
Maudlin: So, I guess I would divide that into two classes. There
are foundational problems and interpretational problems in physics,
generally—say, in quantum theory, or in space-time theory, or in trying to
come up with a quantum theory of gravity—that people will worry about
even if they’re not doing what you would call the philosophy of cosmology.
But sometimes those problems manifest themselves in striking ways when
you look at them on a cosmological scale. So some of this is just a different
window on what we would think of as foundational problems in physics,
generally.
Then there are problems that are fairly specific to cosmology. Standard
cosmology, or what was considered standard cosmology some decades ago, led people to conclude that the universe that we see around us began in
a big bang, or put another way, in some very hot, very dense state. And
if you think about the characteristics of that state, in order to explain the
evolution of the universe, that state had to be a very low-entropy state,
and there’s a line of thought that says anything that is very low entropy
is in some sense very improbable or unlikely. And if you carry that line of
thought forward, you then say, “Well gee, you’re telling me the universe
began in some extremely unlikely or improbable state,” and you wonder if
there’s any explanation for that. Is there any principle that you can use to
account for the big bang state?
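The step compressed in that last move is Boltzmann's statistical reading of entropy; the formula below is the standard one, supplied as a gloss rather than quoted from the interview:

$$ S = k_B \ln W $$

where S is the entropy of a macrostate, k_B is Boltzmann's constant, and W counts the microstates compatible with that macrostate. A very low-entropy state corresponds to a tiny W, and so, on a uniform measure over microstates, it is "improbable" in exactly the sense Maudlin describes.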
This question of accounting for what we call the “big bang state”—the
search for a physical explanation of it—is probably the most important
question within the philosophy of cosmology, and there are a couple
different lines of thought about it. One that’s becoming more and more
prevalent in the physics community is the idea that the big bang state itself
arose out of some previous condition, and that, therefore, there might be an
explanation of it in terms of the previously existing dynamics by which it
came about. There are other ideas, for instance that maybe there might be
special sorts of laws, or special sorts of explanatory principles, that would
apply uniquely to the initial state of the universe.
One common strategy for thinking about this is to suggest that what
we used to call the whole universe is just a small part of everything there is,
and that we live in a kind of bubble universe, a small region of something
much larger. And the beginning of this region, what we call the big bang,
came about by some physical process, from something before it, and that
we happen to find ourselves in this region because this is a region that can
support life. The idea being that there are lots of these bubble universes,
maybe an infinite number of bubble universes, all very different from one
another. Part of the explanation of what’s called the anthropic principle
says, “Well now, if that’s the case, we as living beings will certainly find
ourselves in one of those bubbles that happens to support living beings.”
That gives you a kind of account for why the universe we see around us
has certain properties.
Is the philosophy of cosmology, as a project, a kind of translating,
then, of existing physics into a more common language of meaning,
or into discrete, recognizable concepts? Or do you expect that it will
contribute directly to physics, whether that means suggesting new
experiments or participating directly in theoretical physics?
I don’t think this is a translation project. This is a branch of the
philosophy of physics, in which you happen to be treating the entire
universe—which is one huge physical object—as a subject of study, rather
than, say, studying just electrons by themselves, or studying only the solar
system. There are particular physical problems, problems of explanation,
that arise in thinking about the entire universe, which don’t arise when
you consider only its smaller systems. I see this as trying to articulate what
those particular problems are, and what the avenues are for solving them,
rather than trying to translate from physics into some other language. This
is all within the purview of a scientific attempt to come to grips with the
physical world.
There’s a story about scientific discovery that we all learn in school,
the story of Isaac Newton discovering gravity after being struck by an
apple. That story is now thought by some to have been a myth, but
suppose that it were true, or that it was a substitute for some similar,
or analogous, eureka moment. Do you consider a breakthrough like
that, which isn’t contingent on any new or specialized observations,
to be philosophical in nature?
What occurred to Newton was that there was a force of gravity,
which of course everybody knew about, it’s not like he actually discovered gravity—everybody knew there was such a thing as gravity. But if you
go back into antiquity, the way that the celestial objects—the moon, the
sun, and the planets—were treated by astronomy had nothing to do with
the way things on earth were treated. These were entirely different realms,
and what Newton realized was that there had to be a force holding the
moon in orbit around the earth. This is not something that Aristotle or his
predecessors thought, because they were treating the planets and the moon
as though they just naturally went around in circles. Newton realized some
force had to be holding the moon in its orbit around the earth, to keep it
from wandering off, and he knew also that a force was pulling the apple
down to the earth. And so what suddenly struck him was that those forces
could be one and the same.
That was a physical discovery, a physical discovery of momentous
importance, as important as anything you could ever imagine because it
knit together the terrestrial realm and the celestial realm into one common
physical picture. It was also a philosophical discovery in the sense that
philosophy is interested in the fundamental natures of things.
Newton would call what he was doing “natural philosophy.” That’s
actually the name of his book: Mathematical Principles of Natural Philosophy. Philosophy, traditionally, is what everybody thought they were
doing. It’s what Aristotle thought he was doing when he wrote his book
called Physics. So it’s not as if there’s this big gap between physical
inquiry and philosophical inquiry. They’re both interested in the world
on a very general scale, and the group that works on the foundations of
physics is about equally divided among people who live in philosophy
departments, people who live in physics departments, and people who live
in mathematics departments.
In May of last year, Stephen Hawking gave a talk for Google in
which he said that philosophy was dead, and that it was dead because
it had failed to keep up with science, and in particular with physics. Is
he wrong, or is he describing a failure of philosophy that your project
hopes to address?
Hawking is a brilliant man, but he’s not an expert in what’s going on
in philosophy, evidently. In recent decades, the philosophy of physics has
become seamlessly integrated with the foundations of physics work done
by actual physicists, so the situation is actually the exact opposite of what
he describes. I think he just doesn’t know what he’s talking about. There’s
no reason why he should. Why should he spend a lot of time reading the
philosophy of physics? I’m sure it’s very difficult for him to do. But I think
he’s just . . . uninformed.
Do you think that physics has neglected some of these foundational questions as it has become, increasingly, a kind of engine for
the applied sciences, focusing on the manipulation, rather than the
explanation, say, of the physical world?
Look, physics has definitely avoided what were traditionally considered
to be foundational physical questions, but the reason for that goes back
to the foundation of quantum mechanics. The problem is that quantum
mechanics was developed as a mathematical tool. Physicists understood
how to use it as a tool for making predictions, but without an agreement
or understanding about what it was telling us about the physical world.
And that’s very clear when you look at any of the foundational discussions.
This is what Einstein was upset about; this is what Schrödinger was upset
about. Quantum mechanics was merely a calculational technique that was
not well-understood as a physical theory. Bohr and Heisenberg tried to
argue that asking for a clear physical theory was something you shouldn’t
do anymore. That it was something outmoded. And Bohr and Heisenberg
were wrong about that. But the effect of their argument was to shut down
perfectly legitimate physics questions within the physics community for
about half a century. And now we’re coming out of that, fortunately.
And what’s driving the renaissance?
Well, the questions never went away. There were always people who
were willing to ask them. Probably the greatest physicist in the last half
of the 20th century was John Stewart Bell, who pressed very hard on these
questions. So you can’t suppress it forever; it will always bubble up. It came
back because people became less and less willing to simply say, “Well, Bohr
told us not to ask those questions,” which is sort of a ridiculous thing to say.
Are the topics that have scientists completely flustered especially
fertile ground for philosophers? For example, I’ve been doing a ton
of research for a piece about the James Webb Space Telescope, the
successor to the Hubble Space Telescope, and none of the astronomers
I’ve talked to seem to have a clue as to how to use it to solve the
mystery of dark energy. Is there, or will there be, a philosophy of
dark energy in the same way that a body of philosophy seems to have
flowered around the mysteries of quantum mechanics?
There will be. There can be a philosophy of anything really, but it’s
perhaps not as fancy as you’re making it out. The basic philosophical
question, going back to Plato, is “What is x?” What is virtue? What is
justice? What is matter? What is time? You can ask that about dark energy:
What is it? And it’s a perfectly good question.
There are different ways of thinking about the phenomena we attribute
to dark energy. Some ways of thinking about them say that what you’re
really doing is adjusting the laws of nature themselves. Some other ways
of thinking about them suggest that you’ve discovered a component
or constituent of nature that we need to understand better, and seek
the source of. So the question—What is this thing fundamentally?—is a
philosophical question and is a fundamental physical question, and will
lead to interesting avenues of inquiry.
One example of philosophy of cosmology that seems to have
trickled out to the layman is the idea of fine-tuning—the notion that
in the set of all possible physics, the subset that permits the evolution
of life is very small, and that from this it is possible to conclude
that the universe is either one of a large number of universes, a
multiverse, or that perhaps some agent has fine-tuned the universe
with the expectation that it generate life. Do you expect that idea
to have staying power, and if not, what are some of the compelling
arguments against it?
A lot of attention has been given to the fine-tuning argument. Let me
just say first of all, that the fine-tuning argument as you state it, which is
a perfectly correct statement of it, depends upon making judgments about
the likelihood, or probability, of something. For example, “How likely is it
that the mass of the electron would be related to the mass of the proton in
a certain way?” Now, one can first be a little puzzled by what you mean by
how “likely” or “probable” something like that is. You can ask how likely
it is that I’ll roll double sixes when I throw dice, but we understand the
way you get a handle on the use of probabilities in that instance. It’s not as
clear how you even make judgments like that about the likelihood of the
various constants of nature (and so on) that are usually referred to in the
fine-tuning argument.
Now, let me say one more thing about fine-tuning. I talk to physicists
a lot, and none of the physicists I talk to want to rely on the fine-tuning
argument to argue for a cosmology that has lots of bubble universes, or lots
of worlds. What they want to argue is that this arises naturally from an
analysis of the fundamental physics; that the fundamental physics, quite
apart from any cosmological considerations, will give you a mechanism by
which these worlds will be produced, and a mechanism by which different
worlds will have different constants, or different laws, and so on. If that’s
true, then if there are enough of these worlds, likely some of them have the
right combination of constants to permit life. But their arguments tend to
be not, “We have to believe in these many worlds to solve the fine-tuning
problem,” they tend to be, “These many worlds are generated by physics we
have other reasons for believing in.”
If we give up on that, and it turns out there aren’t these many worlds,
that physics is unable to generate them, then it’s not that the only option
is that there was some intelligent designer. It would be a terrible mistake to
think that those are the only two ways things could go. You would have to
again think hard about what you mean by probability, and about what sorts
of explanations there might be. Part of the problem is that right now there
are just way too many freely adjustable parameters in physics. Everybody
agrees about that. There seem to be many things we call constants of nature
that you could imagine setting at different values, and most physicists think
there shouldn’t be that many, that many of them are related to one another.
Physicists think that at the end of the day there should be one complete
equation to describe all physics, because any two physical systems interact
and physics has to tell them what to do. And physicists generally like to
have only a few constants, or parameters of nature. This is what Einstein
meant when he famously said he wanted to understand what kind of
choices God had—using his metaphor, how free his choices were in creating
the universe—which is just asking how many freely adjustable parameters
there are. Physicists tend to prefer theories that reduce that number, and as
you reduce it, the problem of fine-tuning tends to go away. But, again, this
is just stuff we don’t understand well enough yet.
I know that the nature of time is considered to be an especially
tricky problem for physics, one that physicists seem prepared, or even
eager, to hand over to philosophers. Why is that?
That’s a very interesting question, and we could have a long conversation about that. I’m not sure it’s accurate to say that physicists want
to hand time over to philosophers. Some physicists are very adamant
about wanting to say things about it; Sean Carroll, for example, is very
adamant about saying that time is real. You have others saying that time
is just an illusion, that there isn’t really a direction of time, and so forth.
I myself think that all of the reasons that lead people to say things like
that have very little merit, and that people have just been misled, largely
by mistaking the mathematics they use to describe reality for reality itself.
If you think that mathematical objects are not in time, and mathematical
objects don’t change—which is perfectly true—and then you’re always
using mathematical objects to describe the world, you could easily fall into
the idea that the world itself doesn’t change, because your representations
of it don’t.
There are other, technical reasons why people have thought that
you don’t need a direction of time, or that physics doesn’t postulate a
direction of time. My own view is that none of those arguments are very
good. To the question as to why a physicist would want to hand time
over to philosophers, the answer would be that physicists for almost a
hundred years have been dissuaded from trying to think about fundamental
questions. I think most physicists would quite rightly say, “I don’t have the
tools to answer a question like ‘what is time?’—I have the tools to solve a
differential equation.” The asking of fundamental physical questions is just
not part of the training of a physicist anymore.
I recently came across a paper about Fermi's Paradox and Self-Replicating Probes, and while it had kind of a science-fiction tone
to it, it occurred to me as I was reading it that philosophers might
be uniquely suited to speculating about, or at least evaluating the
probabilistic arguments for, the existence of life elsewhere in the
universe. Do you expect philosophers of cosmology to enter into those
debates, or will the discipline confine itself to issues that emerge
directly from physics?
This is really a physical question. If you think of life, of intelligent
life, it is, among other things, a physical phenomenon—it occurs when the
physical conditions are right. And so the questions of how likely it is that life will emerge, and how frequently it will emerge, do connect up to
physics and to cosmology, because when you’re asking how likely it is
that somewhere there’s life, you’re talking about the broad scope of the
physical universe. And philosophers do tend to be pretty well schooled in
certain kinds of probabilistic analysis, and so it may come up. I wouldn’t
rule it in or rule it out.
I will make one comment about these kinds of arguments which
seems to me to somehow have eluded everyone. When people make these
probabilistic equations—like the Drake Equation, which you’re familiar
with—they introduce variables for the frequency of earth-like planets, for
the evolution of life on those planets, and so on. The question remains
as to how often, after life evolves, you’ll have intelligent life capable of
making technology. What people haven’t seemed to notice is that on earth,
of all the billions of species that have evolved, only one has developed
intelligence to the level of producing technology. Which means that kind
of intelligence is really not very useful. It’s not actually, in the general case,
of much evolutionary value. We tend to think—because we love to think of
ourselves—that human beings are the top of the evolutionary ladder; that
the intelligence we have, which makes us human beings, is the thing that
all of evolution is striving toward. But we know that’s not true. Obviously
it doesn’t matter that much, if you’re a beetle, that you be really smart.
If it were, evolution would have produced much more intelligent beetles.
We have no empirical data to suggest that there’s a high probability that
evolution on another planet would lead to technological intelligence. There
is just too much we don’t know.
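For readers without it to hand, the Drake Equation Maudlin alludes to is conventionally written as a product of estimated factors; this is the standard textbook form, supplied for reference rather than quoted from the interview:

$$ N = R_{*} \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L $$

Here N is the number of detectable civilizations in the galaxy, R* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those on which life appears, f_i the fraction of those that develop intelligence, f_c the fraction of those that produce detectable technology, and L the lifetime of a detectable civilization. Maudlin's observation bears on f_i: a single technological species out of billions on earth gives us no empirical basis for setting it high.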
48.
A Portrait of the Artist as a
Game Studio
IN THATGAMECOMPANY'S PATHBREAKING AND GORGEOUS GAMES, WE GET THE RARE CHANCE TO WATCH THESE ARTISTS AT WORK AGAINST A FIXED TECHNOLOGICAL BACKDROP.
by Ian Bogost
Artists’ aesthetics evolve and deepen over time. You can see it in their work,
as immaturity and coarseness give way to sophistication and polish. In most
media, an audience witnesses this aesthetic evolution take place within the
most mature form of that medium.
Between the 1940s and the 1960s, for example, the abstract expressionist painter Mark Rothko's work evolved from mythical surrealism to multiform abstractions to his signature style of rectilinear forms. Different
motivations and inspirations moved Rothko during these decades, but at
every stage of his artistic career, the painter’s work could be experienced as
painting, as medium on canvas. As flatness and pigment on linen.
Likewise, the contemporary American novelist Ben Marcus has explored his unique brand of fiction in three novels, and his style and effect
have changed and deepened as his writing career has progressed. Marcus's 1995 novel, The Age of Wire and String, uses a technical perversion of English that the author coerces into fantastic and nearly inscrutable tales of rural life. The 2002 follow-up Notable American Women refines his semantic surrealism into a more legible narrative, but one in which language
itself remains untrustworthy. And with this year’s Flame Alphabet, Marcus
reaches a new summit, a book in which language kills from the inside out.
Once more, an artist births and refines experimental style, but carries out
that evolution within the standard form of the art in question: the offset-printed hardback book.
Aesthetic evolution need not move from lesser to greater effect. Since 1999, M. Night Shyamalan has practiced his signature brand of filmmaking, in which supernatural situations end in dramatic plot twists. But between The Sixth Sense (1999) and The Last Airbender (2010), Shyamalan's
artistic success faltered even as his films continued to perform well at
the box office. Decline notwithstanding, all his films were still printed on
celluloid and projected onto anamorphic cinema screens.
In painting, literature, and film, the public can see an artist’s work
evolve (or devolve), because that work is accessible to audiences in its
native form. Archivists or scholars might dig into a creator’s sketchbooks
or retrieve early works, but such museum work is not required for the
ordinary viewer or reader to grasp the changes and refinements of work
over time. This perception of creative progress is part of the pleasure of art,
whether through the joy of growth or the schadenfreude of decay.
In video games, it’s far less common to see a creator’s work evolve in
this way. In part, this is because game makers tend to have less longevity
than other sorts of artists. In part, it’s because games are more highly
industrialized even than film, and aesthetic headway is often curtailed by
commercial necessity. And in part, it’s because games are so tightly coupled
to consumer electronics that technical progress outstrips aesthetic progress
in the public imagination.
When game makers do have a style, it likely has evolved over a
long duration. Consider Will Wright’s discovery and later mastery of the
“software toy” simulation, from SimCity to SimEarth to The Sims; or John
Carmack and John Romero’s revolutionary exploitation of new powers in
real-time 2-D and 3-D graphics in Commander Keen, Doom, and Quake; or
Hideo Kojima’s development and refinement of the stealth-action games of
the Metal Gear series, characterized by solitude, initial weakness, cinematic
cut-scenes, and self-referential commentary.
These styles evolved over decades, and they did so in the arms of
financial success and corporate underwriting. Structurally speaking, these
game makers are more like Shyamalan than like Rothko and Marcus, the
latter two artists having struggled to find their respective styles outside of
the certainty of commercial success.
In independent games, wherein we must hope that aesthetics, more
than commercialism, drive creators, creative evolution often takes place
tentatively, in forms far less refined and mature than the video-game
console that serves as the medium's equivalent to the cinema or the first-run hardback. Experimental titles may humbly take their first form on a PC
or a mobile device. If their makers are very fortunate, as have been Jonathan
Blow (Braid), Jonathan Mak (Everyday Shooter), and Kyle Gabler and Ron
Carmel (World of Goo), those games might find their way to the Nintendo
Wii or the Xbox 360 or the PlayStation 3. But today, the artists who work
in game development for its beauty before its profitability typically don’t
get to experiment and come of age artistically in the most-public venues.
Thatgamecompany’s new title Journey is an exception. It’s the third
in a three-game exclusive deal with Sony that the studio’s principals signed
right out of grad school at the University of Southern California. Thanks
to the Sony exclusive and the oversight of Sony’s Santa Monica studio, all
three of the games the studio has produced have targeted the PlayStation 3 from the beginning. This is not a remarkable feat for a Rothko or a Marcus—
such artists simply pick up the generic media of canvas or page and work
with them directly. But the PS3 is tightly controlled and its development
kits are expensive. The machine sets a high bar, too—a complex multicore
architecture with streamlined coprocessors meant to enhance speed and
throughput for specialized tasks, especially vector processing for graphical
rendering.
Thatgamecompany’s work thus offers us an unusual window into the
creative evolution of a game maker, one in which the transition from green
students to venerable artists took place before our very eyes over a short
half-decade on a single and very public videogame platform.
Flow, Flower, Flowest
During graduate school, thatgamecompany’s creative director, Jenova
Chen, became obsessed with the psychologist Mihaly Csikszentmihalyi’s
concept of “flow,” the psychological feeling of being fully involved in
an experience. Csikszentmihalyi's book on the subject was published in 1990, but a definition for the phenomenon is often cribbed from a
1996 Wired interview: “Being completely involved in an activity for
its own sake. The ego falls away. Time flies. Every action, movement, and
thought follows inevitably from the previous one, like playing jazz. Your
whole being is involved, and you’re using your skills to the utmost.” In
musical terms, flow means being “in the groove”; in athletic terms, we call it
being “in the zone.” Flow is a state of being, one in which a task’s difficulty
is perfectly balanced against a performer’s skill, resulting in a feeling of
intense, focused attention.
Chen devoted his MFA thesis to the application of flow in games.
In his interpretation, flow can be graphed in two dimensions,
with challenge on the horizontal axis and ability on the vertical. He
then identifies a space surrounding the line that extends from low to
high challenge and ability, which he calls the “flow zone.” This zone is
nestled between anxiety above (too much challenge, insufficient ability)
and boredom below (not enough challenge, too much ability). Different
players, Chen argues, have different flow zones, representing higher or
lower capacities for each characteristic.
Chen contends that to reach broader audiences, games need to fit
a wider variety of flow zones, either by expanding those zones, or by
adjusting the game to better match a specific player’s zone. The latter could
be done implicitly through automated adjustment, or explicitly via player
choice.
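Read as a system, Chen's model reduces to a band around the challenge-equals-ability diagonal, with "implicit" adjustment nudging the challenge back toward the band. Here is a minimal sketch of that reading in Python; the function names, band width, and step size are illustrative assumptions, not drawn from Chen's thesis:

```python
def zone(challenge: float, ability: float, width: float = 0.2) -> str:
    """Classify a (challenge, ability) point against a diagonal flow band.

    Follows the verbal definitions above: anxiety is too much challenge
    for the player's ability; boredom is too little.
    """
    gap = challenge - ability
    if gap > width:
        return "anxiety"
    if gap < -width:
        return "boredom"
    return "flow"


def adjust(challenge: float, ability: float, step: float = 0.05) -> float:
    """Implicit dynamic adjustment: nudge the challenge toward the band."""
    z = zone(challenge, ability)
    if z == "anxiety":
        return challenge - step
    if z == "boredom":
        return challenge + step
    return challenge
```

Explicit adjustment, by contrast, hands the nudge to the player, which is what flOw attempts by letting the player choose when to dive deeper.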
To illustrate this principle, Chen and several USC colleagues made a
slick, abstract online game aptly titled flOw. In the game, the player controls
a microorganism in a pool of water. The player’s creature grows by eating
loose bits (or the bits of other, smaller creatures). Two types of orbs allow
the player to dive deeper into the murk, where the enemies are slightly
more threatening, or to rise to a level above.
flOw was the game that led Sony to sign Chen and his collaborators,
including fellow USC students Kellee Santiago, Nick Clark, and John
Edwards. The PS3 version, released in 2007, is really just a fancier and
more beautiful version of the Flash original.
You can see what Chen was aiming for: flOw was meant to allow
players to move through the game at their own pace, either adjusting
challenge by diving deeper, or adjusting ability by devouring more creature
bits. But there was a problem.
Even though the game ticked the boxes Chen had theorized, the player
controlled the creatures by manipulating the pitch and yaw axes of the
gyroscopic Sixaxis controller. This awkward interface couldn’t be tuned
by player or by machine. The strange and surprising exertion the game
demanded was further amplified by its mildly hallucinogenic, throbbing
visuals. Chen’s theory of flow in games hadn’t taken into account the
interface and environmental elements, only the game’s system.
Another factor contributed to a dissonance between flOw in practice and flow in theory. In creating his model of flow zones in games,
Chen didn’t adopt Csikszentmihalyi’s approach, but rather simplified it
significantly. For Csikszentmihalyi, flow does not exist between anxiety
and boredom; those states correspond with high challenge/low skill and
low challenge/medium skill, respectively. True flow does not exist all
along the line bisecting the two axes, but only at its top-rightmost corner,
where both challenge and skill are highest.
The combination of these two factors reveals the game's flaw: being "in the zone" or "in the groove" may seem like a type of hallucinatory, out-of-body experience, but it's really a practice of awareness so deep that it moves
beyond conscious decision. flOw externalized the quietude and smoothness
of flow into the game’s visual aesthetics, which are truly striking. But the
experience itself suggests a misinterpretation rather than an embrace of
Csikszentmihalyi’s concept. Flow is not a matter of planning and comfort,
but one of deep, durable expertise.
Thatgamecompany's 2009 follow-up, Flower, could be called a three-dimensional, representational version of flOw. Instead of a multicellular
creature, the player controls the wind, blowing flower petals through the
air with the pitch and roll axes of the Sixaxis controller. By flying near
flowers, the player’s wind gust can pick up additional petals, and the groups
of petals can be used to unlock or “enliven” dead zones—restoring life and
color in a world dark with industry.
If flOw erred on the side of behavior, Flower steered too far in
the direction of environment. The game is so lush and beautiful, with its wafting grasses and rosy sunsets, that the repetitive petal-collecting
experience detracts from an otherwise idyllic experience of visitation.
Where flOw proved violent and delirious, Flower became overdemanding
and distracting, a nuisance of a game getting in the way of the experience
of its gorgeous computer scenery.
Like Goldilocks’s porridge, Journey finally reconciles these two poles:
neither too anxious nor too distracting. The game finally admits that the
application of flow in games is best left to those that allow mastery at
the highest levels of skill and challenge—games like basketball and Street
Fighter and chess and go and Starcraft. Journey forgoes abstract, dynamically adjusted gameplay in favor of simple exploration, which allows
the player to enjoy the haunting desert civilization the game erects from
invented, abstract myth.
As it turns out, the appealing aspects of flOw and Flower are found less
in their openness to new players through tunable gameplay and more in
the unique and striking worlds they created for players to explore.
flOw’s modest environment was already enough; the turquoise, shallow
murk giving way to threatening dark blue as the player descends into the
ocean that is the game’s setting. The undiscovered creatures darken the
shadows below, previewed in a deft visual portent. The game’s relative
difficulty or facility never had anything on the tiny intrigue of a droplet.
For its part, Flower offered a world rather than a microcosm, but it forced
the player to focus on its flora, and eventually on the tenuous couplings
between the manmade world and the natural one. These settings were the
stars of the games.
Journey finally learns this lesson. Set in a mysterious, mythical desert
civilization, the game abandons the cloying framework of Flower’s levels,
which claimed to offer the dreams of citybound buds. Instead, Journey explains nothing and apologizes for nothing. Like Star Wars or Spirited
Away, Journey makes the correct assumption that a bewitching, lived-in
world is enough.
So much goes unanswered in Journey, from the very first screen.
The creatures are humanoid but not human, or not identifiably so. They
have eyes and dark skin, or else eyes but no faces. The desert dunes
are littered with monuments—are they pathmarkers? Tombstones? Relics?
Advertisements? Sandfalls douse chasms lined with temples dressed in
patterns reminiscent of Islamic geometric art. Fabric banners flap in the
breeze, awaiting the player’s touch. Pixel shaders push synesthesia: the
yellow sands feel hot somehow, and the pink sands cool. One environment—
level seems too prosaic a word here—is cast entirely in shadow, and the blue
sand and rising ribbons pay homage to the underwater worlds of flOw.
In Journey, thatgamecompany finally discovers that facility was never the design problem they thought it was. Its games are about the feeling of being
somewhere, not about the feeling of solving something.
Thatgamecompany’s titles are elemental, each pursuing a precise,
careful characterization of a material form. For flOw, it was water.
For Cloud (another student game that predates the studio), vapor.
For Flower, grass. And for Journey, sand. In flOw, the material surrounds
the player. In Cloud, the player ejects it. In Flower, the player passes
through it on the way elsewhere. But in Journey, the sand has texture: It
slips under the player’s nomad at times; at others, dunes force it back. It
covers the air like murk and, when pushed to its limits, flips into snow.
These materials and environments make Journey, partly because of
their conception, and partly thanks to the smooth, delightful rendering that
John Edwards and his engineers manage to squeeze out of the PS3. The
machine may have implicitly promised enormous, realistic game environments like those of Red Dead Redemption or Saints Row, but Journey shows
that a world is fashioned from its tiny details as much as its cities.
Journey also—finally—abandons the Sixaxis control in favor of the
more conventional analog stick (although the device can be tilted to look
in different directions). While I suspect that the designers feared they
might descend into the ghetto of the adventure game by making such a
compromise, instead the more-traditional controls finally allow the serenity
and mystery that have been on the surface of each of the previous games
to embrace the reality of experience and not just the theory of design.
Zero’s Journey
Indeed, given the usual subjects of video games, players would be
forgiven for mistaking Journey’s title for an adventure. The hero’s journey
is a common theme in video games, but that formula requires a call to
adventure, an ordeal, a reward, and a return. Journey offers none of these
features, but something far more modest instead.
When the game starts, the player ascends a sand dune to view a tall
mountain with a slit peak. The destination is implied, but no reason given.
To progress, the player crosses the sands to discover and collect orbs that
extend a scarf onto his or her robes. When filled with the symbols imbued
by orbs or cloth, the player can fly, briefly, to reach new summits or to avoid
obstacles. The same symbols line the walls of the game’s temple ruins and
emanate above the player to signal others and carry out actions—a lost
language with no meaning implied nor deciphered.
As the player moves from dunes to temples to lost cities, she must
spread energy to neglected apparatuses. Just as the player’s scarf lightens
her feet, so cloth seems to be generally transformative in Journey’s universe.
Cloth portals spread bridges over chasms, and unleash fabric birds and
jellyfish.
Fantastic, yes, but not a hero’s journey. Insofar as the game has a story,
it seems impossible not to read it allegorically instead of mythically: an
individual progresses from weakness, or birth, or ignorance, or an origin
of any kind, through discovery and challenge and danger and confusion,
through to completion. It could be a coming of age, or a metaphor for
life, or an allegory of love or friendship or work or overcoming sickness
or sloughing off madness. It could mean anything at all.
Thatgamecompany should be both praised and faulted for taking such
a morally, culturally, and religiously ambiguous position; surely every sect
and creed will be able to read their favorite meaning into the game. On the
one hand, this move underscores thatgamecompany’s sophistication: in a
medium where interpretation is scorned as indulgent and pretentious, Journey gives no ground; the player must bring something to the table.
On the other hand, the careful player may find the result as barren as
it is receptive. After each environment, a white figure (A god? A mother?
The mind's mirror? The artist's muse?) incants silently to the player's red-robed humanoid. When she does, recent events are added to an inscription
of the journey thus far, rendered in symbol as if on rock or papyrus. But
not just thus far, also a bit further—the theme of the next scene revealed
in abstract, hieroglyphic form. Is the future being foretold, or is everyone’s
future always the same? In a very real way, the latter is true for Journey,
as everyone’s journey through the game will follow the same overall
progression through the same environments. With one exception.
That Journey is an online game is a mystery many players may never
discover. The game itself never makes any such claims, and when downloaded, it arrives with no manual or instructions. Save for a subtle nod at
the end of the game’s credits (which many players may overlook or miss
entirely), only reviews and interviews with the creators reveal a feature
whose extensive design and engineering becomes the silent center of the
game, the wind that moves it.
Sometimes while you play, the game will invisibly match you up with
another PS3 owner who is also playing in the same environment. There's not much you can do with your companion—speech isn't possible, but touching one another refills the energy in the cloth of both characters'
scarves. Pressing one of the controller buttons emits a ping that can help a
player find his companion and, when used improvisationally, might allow
basic signaling. Only one companion appears at a time, although a player
might encounter many over the course of the game.
These encounters with the other are both touching and disturbing.
There is no mistaking a companion for an artificial intelligence; she moves
too erratically, or speeds ahead to steal the next objective too definitively,
or falls behind too listlessly. Even given the minimal actions of Journey,
somehow these ghost players appear rounder than most of the scripted,
voice-acted characters in contemporary video games.
For another matter, you don’t really play with these other players. They
are there with you, doing what you do, helping at times and hindering at
others, plodding senselessly toward a mountain peak that has no meaning save for that imbued by a few foreboding, pregnant camera pans. You're
comforted by their presence. It’s like sitting on the couch close to someone
while watching TV.
Journey’s anonymous multiplayer interactions are touching, but they
are also tragic, like a Beckett novel with characters in red robes mumbling
“I can’t go on, I’ll go on” in inscrutable pictograms. At one point in the
deep-scarlet shadow of the caves, I swear I saw my companion crumble
to dust. If only Sartre had known that one could always just turn off the
console, Matrix-style.
If Journey’s journey is anyone’s, then it can mean anything we make
of it. But a tabula rasa carries all meaning and no meaning simultaneously.
For me, the journey was less my own than that of thatgamecompany itself,
a band of students stumbling toward improbable success and surfing it,
clumsily at first, then more certainly.
Thatgamecompany's crew is still largely composed of MFA alumni
from the interactive media division of USC’s famed cinema school. It’s
no wonder that their games are cinematic, not only in appearance and
duration (Journey lasts a little over two hours), but also in structure. Journey and Flower demonstrate a rare mastery of denouement in games. Good
filmic storytellers end their tales quickly and definitively after resolving the
main conflict. After its laborious levels, Flower erupted in the fast-paced,
colorful rebirth of a deadened, grayscale city and then concluded. Journey’s
denouement is even more dramatic and far more sentimental. Near the
mountain’s summit, in the snow, progress becomes more and more difficult.
Pace slows, then stops. My character, red hood now grey with the crust of
ice, succumbs to the cold earth. The screen goes white.
Then, suddenly, the mysterious white god-mother appears and looks
over me. What she does remains ambiguous: some will say she resurrects
me, others will claim my spirit is ejected into eternity, and still others
will interpret the last scene as a final bodily hallucination. But through
whirlwinds and cloth banners and the bright colors of sun and snow
and dawn, I rush up to the summit. Who can resist the exhilaration? It’s
invigorating, like a cold winter wind on flushed cheeks.
When they speak about their games, Jenova Chen and Kellee Santiago
often express a hope that they might explore or arouse positive emotions
in their players, emotions not felt from other sorts of games. Isn’t this
sense of delight and vitality precisely what they are after? Yes, to be sure.
But it is also the thrill of all victories, and the vertigo of all dizzinesses.
Chen and Santiago sell themselves short with this trite incantation about
emotions. For their journey has not been one of creating outcomes but of
culturing style, an aesthetic that defines the experience without need for
their aphorisms. The sand and the ruins. The wind and the fabric. The
silence of a cryptographic mythology. The vertigo of breeze, the swish of
dunes.
For my part, I plodded through the snow near the summit of Journey’s
cleft mountain with another traveler, one who entered that scene with a
regal scarf flowing far behind, easily twice as long as my own. We stumbled
up the mountain together, cowering behind stone tablets to avoid the wind.
At one point, I hobbled out foolishly before one of the game’s serpentine
flying enemies, who dove and sent us flying back. The impact eviscerated
most of my companion’s scarf, and I felt guilty.
We took our final slog through the dense snow and thick wind, and
we both collapsed together under its weight. Thinking back, I elongate the
short moment before the game interrupted me with its cloying samsaric
angel, and I imagine that this fallen other was Jenova or Kellee rather than
some stranger, that they had allowed me to join them on their journey to
journeyman. Before the screen goes white, I imagine whispering my tiny
counsel in the hope they might yet reach mastery: This. This is enough.
49.
A Fort in Time
WHAT IF OUR TECHNOLOGY ISN'T THE PROBLEM?
by Rebecca J. Rosen
The idea of a tech-specific Sabbath has been floating around for about a
decade. It is presented as an antidote to our “busyness”—a busyness that
exhausts us and inhibits deeper connections with people and places. If we
set aside our gadgets, the thinking goes, we can give ourselves a break from
this busyness and find the time to really connect.
It seems paradoxical that abstaining from networked technology
should help us connect better. As Jason Farman wrote in The
Atlantic last week, cellphones and a variety of apps can actually
foster deeper connections with the people and places around us, whether
through basic phone conversations or the ability to know more about our
surroundings.
But some people are clearly troubled by their relationships with
technology. Why are these modern-day Sabbatarians rhapsodic about the
benefits that a day away from technology can deliver?
Perhaps it’s not so much the lack of technology as the creation of some
dedicated time for connecting. With a ritual at the beginning and one at the
end, and all manner of work proscribed in between, the Sabbath carves out
an unfragmented period; Rabbi Abraham Joshua Heschel, one of the great
Jewish scholars of the 20th century, called the Sabbath a “palace in time.”
Everyone observing the Sabbath is in the same metaphorical place, primed
for human connection.
***
Technological progress has meant changes in the way we experience
time. As the historian and critic Lewis Mumford wrote, “The clock, not
the steam-engine, is the key machine of the modern industrial age.” In
the domestic sphere, an older way of measuring time still reigns—task-measured time, in which you do something for as long as it takes (feed
a baby, cook a stew). But at work we measure time with clocks. It’s no
accident that a common expression for working is being “on the clock.”
At the core of the changes isn’t merely that we can measure time more
precisely, but that we actually divide time into smaller and smaller pieces.
In his landmark cross-national time-diary study, German sociologist Erwin Scheuch found that the more industrialized a country became, the more activities its people crammed into a 24-hour period. He called
this phenomenon time-deepening, but, as Judith Shulevitz writes in her
book, The Sabbath World: Glimpses of a Different Order of Time, this phrase
is misleading “because stuffing life with more things and distractions makes
time feel shallower, not deeper.”
The Sabbath has always acted as a sort of protection from “time-deepening.” In a metaphorical sense, this is true even of the prototypical
Sabbath, during which God rested from the work of the first fragmentation
of time—the day from the night.
Today, time-deepening has been accelerated—or, at least, we sense that
it has been accelerated—by changes in our economy requiring more Americans to work longer hours and to be more reachable when they’re at home.
Shulevitz writes, “More Americans work during the off-hours than they
did half a century ago, the heyday of the nine-to-five, Monday-through-Friday workweek. According to the sociologist Harriet B. Presser, two-fifths of American workers were working non-standard hours—
’in the evening, at night, on a rotating shift, or during the weekend’—and
she wasn’t counting those who bring their work home and do it on their
off-hours, or who are self-employed.”
When we experience time-deepening we don’t merely feel that we are
doing more in a day; we feel that time is actually moving faster. Research
into how people perceive time suggests that when people are
distracted—when their focus is divided or elsewhere—they underestimate
the passage of time, thinking that less time has passed than actually
has. It’s perhaps because of this perception that when Monika Bauerlein
and Clara Jeffery described changes in work-life patterns in a recent
issue of Mother Jones, they called them the “great speed-up.” The digital Sabbath isn’t the only strategy for dealing with this speedup: a related set of “slow” movements—slow food, slow travel, even slow science—has proliferated in recent years.
When we make a Sabbath and push back against the many claims on
our time, we are, in some ways, rebelling against this speedup and the
intrusion of work and labor into our domestic sphere. For this reason, some
strains of Sabbatarian thought have an anti-corporate cast. As Douglas
Rushkoff wrote of the Sabbath in Adbusters, “It’s our way of
disengaging from the corporate machine.” Turning off all your devices
doesn’t just disable Web browsing, e-mail, and YouTube; it makes you
invisible, for that period of time, to the companies that track you as you
click your way across the Web.
It’s for all these reasons that a Sabbath, digital or otherwise, can be
reinvigorating. When we take a day away from our tools and create a day
entirely under our own control, we create that “palace in time” where we
can meet our friends and family and, finally, connect.
***
If one concedes the point that a Sabbath for restorative reasons need
not proscribe technology, it may seem pointless to argue against the digital
Sabbath. What’s the harm?
The reason is that if we allow ourselves to blame the technology for distracting us from our children or for keeping us from connecting with our communities, then the solution is simply to put away the technology. We absolve ourselves
of the need to create social, political, and, sure, technological structures
that allow us to have the kinds of relationships we want with the people
around us. We need to realize that at the core of our desire for a Sabbath
is a need to escape not the blinking screens of our electronic world, but the
ways that work and other obligations have intruded upon our lives and our
relationships.
We can begin by mimicking the Sabbath in miniature, by recognizing
that dedicating time to one activity or one person, without interruption
from gadgets, work, or other people, will help us slow down and connect.
We can use our gadgets to do this—a long talk on the phone is the most
obvious way—or we can leave them out of it.
Such minimal steps won’t build something profound like Heschel’s
“palace in time.” They’ll result in something smaller—a little fort in time.
There, we can take shelter, replenish our resources, and gear up for the
battles of the week ahead.