Course Reader - Mark D. Pepper

Transcription

Lev Manovich
The Language of New Media
I. What is New Media?
What is new media? We may begin answering this question by listing the
categories commonly discussed under this topic in the popular press: the
Internet, Web sites, computer multimedia, computer games, CD-ROMs and DVD,
virtual reality. Is this all new media is? For instance, what about television
programs which are shot on digital video and edited on computer workstations?
Or what about feature films which use 3D animation and digital compositing?
Shall we count these as new media? In this case, what about all images and text-image compositions — photographs, illustrations, layouts, ads — which are also
created on computers and then printed on paper? Where shall we stop?
As can be seen from these examples, the popular definition of new media
identifies it with the use of a computer for distribution and exhibition, rather than
with production. Therefore, texts distributed on a computer (Web sites and
electronic books) are considered to be new media; texts distributed on paper are
not. Similarly, photographs which are put on a CD-ROM and require a computer
to view them are considered new media; the same photographs printed as a book
are not.
Shall we accept this definition? If we want to understand the effects of
computerization on culture as a whole, I think it is too limiting. There is no
reason to privilege the computer as a machine for media exhibition and distribution
over the computer used as a tool for media production or as a media
storage device. All have the same potential to change existing cultural languages.
And all have the same potential to leave culture as it is.
The last scenario is unlikely, however. What is more likely is that just as
the printing press in the fifteenth century and photography in the nineteenth
century had a revolutionary impact on the development of modern society and
culture, today we are in the middle of a new media revolution -- the shift of all of
our culture to computer-mediated forms of production, distribution and
communication. This new revolution is arguably more profound than the previous
ones and we are just beginning to sense its initial effects. Indeed, the introduction
of the printing press affected only one stage of cultural communication -- the
distribution of media. In the case of photography, its introduction affected only
one type of cultural communication -- still images. In contrast, the computer media
revolution affects all stages of communication, including acquisition,
manipulation, storage and distribution; it also affects all types of media -- text,
still images, moving images, sound, and spatial constructions.
How shall we begin to map out the effects of this fundamental shift? What
are the ways in which the use of computers to record, store, create and distribute
media makes it “new”?
In the section “How Media Became New” I show that new media represents a
convergence of two separate historical trajectories: computing and media
technologies. Both begin in the 1830's with Babbage's Analytical Engine and
Daguerre's daguerreotype. Eventually, in the middle of the twentieth century, a
modern digital computer is developed to perform calculations on numerical data
more efficiently; it takes over from numerous mechanical tabulators and
calculators already widely employed by companies and governments since the
turn of the century. In parallel, we witness the rise of modern media technologies
which allow the storage of images, image sequences, sounds and text using
different material forms: a photographic plate, a film stock, a gramophone record,
etc. The synthesis of these two histories? The translation of all existing media into
numerical data accessible for computers. The result is new media: graphics,
moving images, sounds, shapes, spaces and text which become computable, i.e.
simply another set of computer data. In “Principles of New Media” I look at the
key consequences of this new status of media. Rather than focusing on familiar
categories such as interactivity or hypermedia, I suggest a different list. This list
reduces all principles of new media to five: numerical representation, modularity,
automation, variability and cultural transcoding. In the last section, “What New
Media is Not,” I address other principles which are often attributed to new media.
I show that these principles can already be found at work in older cultural forms
and media technologies such as cinema, and therefore by themselves they are
not sufficient to distinguish new media from the old.
How Media Became New
On August 19, 1839, the Palace of the Institute in Paris was completely full of
curious Parisians who came to hear the formal description of the new
reproduction process invented by Louis Daguerre. Daguerre, already well-known
for his Diorama, called the new process daguerreotype. According to a
contemporary, "a few days later, opticians' shops were crowded with amateurs
panting for daguerreotype apparatus, and everywhere cameras were trained on
buildings. Everyone wanted to record the view from his window, and he was
lucky who at first trial got a silhouette of roof tops against the sky." The media
frenzy had begun. Within five months more than thirty different descriptions of
the technique were published all around the world: Barcelona, Edinburgh, Halle,
Naples, Philadelphia, Saint Petersburg, Stockholm. At first, daguerreotypes of
architecture and landscapes dominated the public's imagination; two years later,
after various technical improvements to the process, portrait galleries were
opened everywhere — and everybody rushed in to have their picture taken by a
new media machine.
In 1833 Charles Babbage started the design for a device he called the
Analytical Engine. The Engine contained most of the key features of the modern
digital computer. Punch cards were used to enter both data and instructions.
This information was stored in the Engine's memory. A processing unit, which
Babbage referred to as a "mill," performed operations on the data and wrote the
results to memory; final results were to be printed out on a printer. The Engine
was designed to be capable of doing any mathematical operation; not only would
it follow the program fed into it by cards, but it would also decide which
instructions to execute next, based upon intermediate results. However, in contrast
to the daguerreotype, not even a single copy of the Engine was completed. So
while the invention of this modern media tool for the reproduction of reality
impacted society right away, the impact of the computer was yet to be measured.
Interestingly, Babbage borrowed the idea of using punch cards to store
information from an earlier programmed machine. Around 1800, J.M. Jacquard
invented a loom which was automatically controlled by punched paper cards. The
loom was used to weave intricate figurative images, including Jacquard's portrait.
This specialized graphics computer, so to speak, inspired Babbage in his work on
the Analytical Engine, a general computer for numerical calculations. As Ada
Augusta, Babbage's supporter and the first computer programmer, put it, "the
Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves
flowers and leaves." Thus, a programmed machine was already synthesizing
images even before it was put to work processing numbers. The connection between the
Jacquard loom and the Analytical Engine is not something historians of
computers make much of, since for them computer image synthesis represents just
one application of the modern digital computer among thousands of others; but
for a historian of new media it is full of significance.
We should not be surprised that both trajectories — the development of
modern media, and the development of computers — begin around the same time.
Both media machines and computing machines were absolutely necessary for the
functioning of modern mass societies. The ability to disseminate the same texts,
images and sounds to millions of citizens, thus assuring that they would have the
same ideological beliefs, was as essential as the ability to keep track of their birth
records, employment records, medical records, and police records. Photography,
film, the offset printing press, radio and television made the former possible, while
computers made the latter possible. Mass media and data processing are the
complementary technologies of a modern mass society; they appear together and
develop side by side, making this society possible.
For a long time the two trajectories ran in parallel without ever crossing
paths. Throughout the nineteenth and the early twentieth century, numerous
mechanical and electrical tabulators and calculators were developed; they were
gradually getting faster and their use became more widespread. In parallel,
we witness the rise of modern media which allows the storage of images, image
sequences, sounds and text in different material forms: a photographic plate, film
stock, a gramophone record, etc.
Let us continue tracing this joint history. In the 1890s modern media took
another step forward as still photographs were put in motion. In January of 1893,
the first movie studio — Edison's "Black Maria" — started producing twenty-second
shorts which were shown in special Kinetoscope parlors. Two years later
the Lumière brothers showed their new Cinématographe camera/projector
hybrid first to a scientific audience, and, later, in December of 1895, to the paying
public. Within a year, the audiences in Johannesburg, Bombay, Rio de Janeiro,
Melbourne, Mexico City, and Osaka were subjected to the new media machine,
and they found it irresistible. Gradually the scenes grew longer, the staging of
reality before the camera and the subsequent editing of its samples became more
intricate, and the copies multiplied. They would be sent to Chicago and Calcutta,
to London and St. Petersburg, to Tokyo and Berlin and thousands and thousands
of smaller places. Film images would soothe movie audiences, who were too
eager to escape the reality outside, the reality which no longer could be
adequately handled by their own sampling and data processing systems (i.e., their
brains). Periodic trips into the dark relaxation chambers of movie theaters became
a routine survival technique for the subjects of modern society.
The 1890s was the crucial decade, not only for the development of media,
but also for computing. If individuals' brains were overwhelmed by the amounts
of information they had to process, the same was true of corporations and of
government. In 1887, the U.S. Census office was still interpreting the figures from
the 1880 census. For the next 1890 census, the Census Office adopted electric
tabulating machines designed by Herman Hollerith. The data collected for every
person was punched into cards; 46,804 enumerators completed forms for a total
population of 62,979,766. The Hollerith tabulator opened the door for the
adoption of calculating machines by business; during the next decade electric
tabulators became standard equipment in insurance companies, public utilities
companies, railroads and accounting departments. In 1911, Hollerith's Tabulating
Machine Company was merged with three other companies to form the
Computing-Tabulating-Recording Company; in 1914 Thomas J. Watson was
chosen as its head. Ten years later its business had tripled, and Watson renamed the
company the International Business Machines Corporation, or IBM.
We are now in the new century. The year is 1936. This year the British
mathematician Alan Turing wrote a seminal paper entitled "On Computable
Numbers." In it he provided a theoretical description of a general-purpose
computer later named after its inventor: the Universal Turing Machine. Even
though it was only capable of four operations, the machine could perform any
calculation which could be done by a human and could also imitate any other
computing machine. The machine operated by reading and writing numbers on an
endless tape. At every step the tape would be advanced to retrieve the next
command, to read the data or to write the result. Its diagram looks suspiciously
like a film projector. Is this a coincidence?
If we believe the word cinematograph, which means "writing movement,"
the essence of cinema is recording and storing visible data in a material form. A
film camera records data on film; a film projector reads it off. This cinematic
apparatus is similar to a computer in one key respect: a computer's program and
data also have to be stored in some medium. This is why the Universal Turing
Machine looks like a film projector. It is a kind of film camera and film projector
at once: reading instructions and data stored on endless tape and writing them in
other locations on this tape. In fact, the development of a suitable storage medium
and a method for coding data represent important parts of both cinema and
computer pre-histories. As we know, the inventors of cinema eventually settled on
using discrete images recorded on a strip of celluloid; the inventors of a computer
— which needed much greater speed of access as well as the ability to quickly
read and write data — came to store it electronically in a binary code.
In the same year, 1936, the two trajectories came even closer together.
Starting this year, and continuing into the Second World War, German engineer
Konrad Zuse had been building a computer in the living room of his parents'
apartment in Berlin. Zuse's computer was the first working digital computer. One
of his innovations was program control by punched tape. The tape Zuse used was
actually discarded 35 mm movie film.
One of the surviving pieces of this film shows binary code punched over
the original frames of an interior shot. A typical movie scene — two people in a
room involved in some action — becomes a support for a set of computer
commands. Whatever meaning and emotion was contained in this movie scene
has been wiped out by its new function as a data carrier. The pretense of modern
media to create a simulation of sensible reality is similarly canceled; media is
reduced to its original condition as an information carrier, nothing else, nothing
more. In a technological remake of the Oedipal complex, a son murders his father.
The iconic code of cinema is discarded in favor of the more efficient binary one.
Cinema becomes a slave to the computer.
But this is not yet the end of the story. Our story has a new twist — a
happy one. Zuse's film, with its strange superimposition of the binary code over
the iconic code, anticipates the convergence which gets underway half a century
later. The two separate historical trajectories finally meet. Media and computer —
Daguerre's daguerreotype and Babbage's Analytical Engine, the Lumière
Cinématographe and Hollerith's tabulator — merge into one. All existing media
are translated into numerical data accessible for computers. The result:
graphics, moving images, sounds, shapes, spaces and text become computable,
i.e. simply another set of computer data. In short, media becomes new media.
This meeting changes both the identity of media and of the computer
itself. No longer just a calculator, a control mechanism or a communication
device, a computer becomes a media processor. Before, the computer could read a
row of numbers, outputting a statistical result or a gun trajectory. Now it can read
pixel values, blurring the image, adjusting its contrast or checking whether it
contains an outline of an object. Building upon these lower-level operations, it can
also perform more ambitious ones: searching image databases for images similar
in composition or content to an input image; detecting shot changes in a movie; or
synthesizing the movie shot itself, complete with setting and actors. In a
historical loop, a computer returned to its origins. No longer just an Analytical
Engine, suitable only to crunch numbers, the computer became Jacquard's loom —
a media synthesizer and manipulator.
Principles of New Media
The identity of media has changed even more dramatically. Below I summarize
some of the key differences between old and new media. In compiling this list of
differences I tried to arrange them in a logical order. That is, principles 3-5 are
dependent on principles 1-2. This is not dissimilar to axiomatic logic, where
certain axioms are taken as starting points and further theorems are proved on their
basis.
Not every new media object obeys these principles. They should be
considered not as some absolute laws but rather as general tendencies of a culture
undergoing computerization. As computerization affects deeper and deeper
layers of culture, these tendencies will manifest themselves more and more.
1. Numerical Representation
All new media objects, whether they are created from scratch on computers or
converted from analog media sources, are composed of digital code; they are
numerical representations. This has two key consequences:
1.1. A new media object can be described formally (mathematically). For
instance, an image or a shape can be described using a mathematical function.
1.2. A new media object is subject to algorithmic manipulation. For
instance, by applying appropriate algorithms, we can automatically remove
"noise" from a photograph, improve its contrast, locate the edges of the shapes, or
change its proportions. In short, media becomes programmable.
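To make the idea of programmability concrete, here is a minimal sketch in Python; the tiny "image" and its pixel values are invented for illustration, and the contrast-stretching routine stands in for the kind of algorithm described above, not for any particular program's method.

    # A tiny invented greyscale "image": each number is a pixel brightness from 0 to 255.
    image = [
        [52, 55, 61, 59],
        [79, 61, 76, 41],
        [110, 95, 88, 70],
        [130, 120, 101, 90],
    ]

    def stretch_contrast(pixels):
        """Rescale the pixel values so the darkest becomes 0 and the brightest 255."""
        flat = [p for row in pixels for p in row]
        lo, hi = min(flat), max(flat)
        if hi == lo:
            return [row[:] for row in pixels]  # a flat image has no contrast to stretch
        return [[round((p - lo) * 255 / (hi - lo)) for p in row] for row in pixels]

    print(stretch_contrast(image))

Because the photograph exists only as numbers, the same data could just as easily be passed to a noise-removal or edge-detection routine.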
When new media objects are created on computers, they originate in numerical
form. But many new media objects are converted from various forms of old
media. Although most readers understand the difference between analog and
digital media, a few notes should be added on the terminology and the conversion
process itself. This process assumes that data is originally continuous, i.e. “the axis
or dimension that is measured has no apparent indivisible unit from which it is
composed.” Converting continuous data into a numerical representation is called
digitization. Digitization consists of two steps: sampling and quantization.
First, data is sampled, most often at regular intervals, such as the grid of pixels
used to represent a digital image. Technically, a sample is defined as “a
measurement made at a particular instant in space and time, according to a
specified procedure.” The frequency of sampling is referred to as resolution.
Sampling turns continuous data into discrete data, that is, data occurring in distinct
units: people, pages of a book, pixels. Second, each sample is quantized, i.e.
assigned a numerical value drawn from a defined range (such as 0-255 in the case
of an 8-bit greyscale image).
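The two steps can be sketched in a few lines of Python; the continuous signal, the sampling rate and the 8-bit range are invented values chosen only to make the procedure visible.

    import math

    def signal(t):
        # An invented continuous signal with values between 0.0 and 1.0.
        return 0.5 + 0.5 * math.sin(2 * math.pi * t)

    # Step 1: sampling. Measure the signal at regular intervals; the number of samples
    # per unit is the resolution.
    sampling_rate = 8
    samples = [signal(n / sampling_rate) for n in range(sampling_rate)]

    # Step 2: quantization. Assign each sample a value from a defined range (0-255,
    # as in an 8-bit greyscale image).
    quantized = [round(s * 255) for s in samples]

    print(samples)    # still continuous-looking floating-point measurements
    print(quantized)  # discrete values drawn from a fixed range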
While some old media such as photography and sculpture are truly
continuous, most involve a combination of continuous and discrete coding. One
example is motion picture film: each frame is a continuous photograph, but time is
broken into a number of samples (frames). Video goes one step further by
sampling the frame along the vertical dimension (scan lines). Similarly, a
photograph printed using a halftone process combines discrete and continuous
representations. Such a photograph consists of a number of orderly dots (i.e.,
samples); however, the diameters and areas of the dots vary continuously.
As the last example demonstrates, while old media contains level(s) of
discrete representation, the samples were never quantized. This quantization of
samples is the crucial step accomplished by digitization. But why, we may ask,
were modern media technologies often in part discrete? The key assumption of
modern semiotics is that communication requires discrete units. Without discrete
units, there is no language. As Roland Barthes put it, “language is, as it were,
that which divides reality (for instance the continuous spectrum of the colors is
verbally reduced to a series of discontinuous terms).” In postulating this,
semioticians took human language as a prototypical example of a communication
system. A human language is discrete on most scales: we speak in sentences; a
sentence is made of words; a word consists of morphemes, and so on. If we
are to follow the assumption that any form of communication requires discrete
representation, we may expect that media used in cultural communication will
have discrete levels. At first this explanation seems to work. Indeed, a film
samples continuos time of human existence into discrete frames; a drawing
samples visible reality into discrete lines; and a printed photograph samples it into
discrete dots. This assumption does not universally work, however: photographs,
for instance, do not have any apparent units. (Indeed, in the 1970s semiotics was
criticized for its linguistic bias, and most semioticians came to recognize that a
language-based model of distinct units of meaning can’t be applied to many kinds
of cultural communication.) More importantly, the discrete units of modern media
are usually not units of meaning, the way morphemes are. Neither film
frames nor halftone dots have any relation to how a film or a photograph affects
the viewer (except in modern art and avant-garde film — think of paintings by
Roy Lichtenstein and films of Paul Sharits — which often make the “material”
units of media into the units of meaning.)
The more likely reason why modern media has discrete levels is that it
emerged during the Industrial Revolution. In the nineteenth century, a new
organization of production known as the factory system gradually replaced artisan
labor. It reached its classical form when Henry Ford installed the first assembly line
in his factory in 1913. The assembly line relied on two principles. The first was the
standardization of parts, already employed in the production of military uniforms
in the nineteenth century. The second, newer principle, was the separation of the
production process into a set of repetitive, sequential, and simple activities that
could be executed by workers who did not have to master the entire process and
could be easily replaced.
Not surprisingly, modern media follows the factory logic, not only in
terms of division of labor as witnessed in Hollywood film studios, animation
studios or television production, but also on the level of its material organization.
The invention of typesetting machines in the 1880s industrialized publishing
while leading to the standardization of both type design and the number and types of
fonts used. In the 1890s cinema combined automatically produced images (via
photography) with a mechanical projector. This required standardization of both
image dimensions (size, frame ratio, contrast) and of the temporal sampling rate (see
the “Digital Cinema” section for more detail). Even earlier, in the 1880s, the first
television systems already involved standardization of sampling both in time and
in space. These modern media systems also followed the factory logic in that once
a new “model” (a film, a photograph, an audio recording) was introduced,
numerous identical media copies would be produced from this master. As I will
show below, new media follows, or actually runs ahead of, a quite different
logic, that of post-industrial society: individual customization rather than
mass standardization.
2. Modularity
This principle can be called the "fractal structure of new media.” Just as a fractal has
the same structure on different scales, a new media object has the same modular
structure throughout. Media elements, be they images, sounds, shapes, or behaviors,
are represented as collections of discrete samples (pixels, polygons, voxels,
characters, scripts). These elements are assembled into larger-scale objects but
they continue to maintain their separate identity. The objects themselves can be
combined into even larger objects -- again, without losing their independence. For
example, a multimedia "movie" authored in the popular Macromedia Director
software may consist of hundreds of still images, QuickTime movies, and
sounds which are all stored separately and are loaded at run time. Because all
elements are stored independently, they can be modified at any time without
having to change the Director movie itself. These movies can be assembled into a
larger "movie," and so on. Another example of modularity is the concept of
“object” used in Microsoft Office applications. When an object is inserted into a
document (for instance, a media clip inserted into a Word document), it continues
to maintain its independence and can always be edited with the program used
originally to create it. Yet another example of modularity is the structure of an
HTML document: with the exception of text, it consists of a number of
separate objects — GIF and JPEG images, media clips, VRML scenes,
Shockwave and Flash movies -- which are all stored independently, locally
and/or on a network. In short, a new media object consists of independent parts
which, in their turn, consist of smaller independent parts, and so on, down to the
level of the smallest “atoms” such as pixels, 3D points or characters.
The World Wide Web as a whole is also completely modular. It consists of
numerous Web pages, each in its turn consisting of separate media elements.
Every element can always be accessed on its own. Normally we think of elements
as belonging to their corresponding Web sites, but this is just a convention,
reinforced by commercial Web browsers. The Netomat browser, which extracts
elements of a particular media type from different Web pages (for instance, only
images) and displays them together without identifying the Web sites they come
from, highlights for us this fundamentally discrete and non-hierarchical
organization of the Web (see the introduction to the “Interface” chapter for more on this
browser).
In addition to using the metaphor of a fractal, we can also make an
analogy between the modularity of new media and structured computer
programming. Structured computer programming, which became standard in the
1970s, involves writing small, self-sufficient modules (called in different
computer languages subroutines, functions, procedures, or scripts) which are
assembled into larger programs. Many new media objects are in fact computer
programs which follow the structured programming style. For example, most
interactive multimedia applications are programs written in Macromedia
Director’s Lingo. A Lingo program defines scripts which control various repeated
actions, such as clicking on a button; these scripts are assembled into larger
scripts. In the case of new media objects which are not computer programs, an
analogy with structured programming can still be made because their parts can be
accessed, modified or substituted without affecting the overall structure of an
object. This analogy, however, has its limits. If a particular module of a computer
program is deleted, the program will not run. In contrast, just as is the case
with traditional media, deleting parts of a new media object does not render it
meaningless. In fact, the modular structure of new media makes such deletion and
substitution of parts particularly easy. For example, since an HTML document
consists of a number of separate objects, each represented by a line of HTML
code, it is very easy to delete, substitute or add new objects. Similarly, since in
Photoshop the parts of a digital image are usually placed on separate layers, these
parts can be deleted and substituted with a click of a button.
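The analogy with structured programming can be made concrete in a short Python sketch; the module names and their contents are invented, and the point is only that each part is self-sufficient and can be substituted without disturbing the rest.

    # Each media element is an independent module; the larger object only references them.
    def title_card():
        return "TITLE: A New Media Object"

    def intro_sound():
        return "intro.wav"

    def main_scene():
        return ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]

    # The larger "movie" is an ordered collection of references to its parts.
    movie = [title_card, intro_sound, main_scene]

    # Substituting one module leaves the overall structure and the other parts intact.
    def alternate_scene():
        return ["alt_001.jpg", "alt_002.jpg"]

    movie[2] = alternate_scene

    for part in movie:
        print(part())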
3. Automation
The numerical coding of media (principle 1) and the modular structure of a media
object (principle 2) allow many of the operations involved in media creation,
manipulation and access to be automated. Thus human intentionality can be removed
from the creative process, at least in part.
The following are some examples of what can be called “low-level” automation
of media creation, in which the computer user modifies or
creates from scratch a media object using templates or simple algorithms. These
techniques are robust enough so that they are included in most commercial
software for image editing, 3D graphics, word processing, graphic layout, and so
on. Image editing programs such as Photoshop can automatically correct scanned
images, improving contrast range and removing noise. They also come with filters
which can automatically modify an image, from creating simple variations of
color to changing the whole image as though it were painted by Van Gogh, Seurat
or another brand-name artist. Other computer programs can automatically generate
3D objects such as trees, landscapes, human figures and detailed ready-to-use
animations of complex natural phenomena such as fire and waterfalls. In
Hollywood films, flocks of birds, ant colonies and crowds of people are
automatically created by AL (artificial life) software. Word processing, page
layout, presentation and Web creation programs come with "agents" which can
automatically create the layout of a document. Writing software helps the user to
create literary narratives using formalized highly conventions genre convention.
Finally, in what maybe the most familiar experience of automation of media
generation to most computer users, many Web sites automatically generate Web
pages on the fly when the user reaches the site. They assemble the information
from the databases and format it using generic templates and scripts.
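This last, most familiar case can be sketched in Python; the "database" records and the page template below are invented, but the mechanism, pouring stored data into a generic template at the moment of the request, is the one described above.

    # An invented "database" of records and a generic page template.
    database = [
        {"title": "Daguerreotype", "year": 1839},
        {"title": "Kinetoscope", "year": 1893},
    ]

    template = "<h1>{title}</h1><p>First shown to the public in {year}.</p>"

    def build_page(records):
        """Assemble a page on the fly by pouring database records into the template."""
        return "\n".join(template.format(**record) for record in records)

    print(build_page(database))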
Researchers are also working on what can be called “high-level”
automation of media creation which requires a computer to understand, to a
certain degree, the meanings embedded in the objects being generated, i.e. their
semantics. This research can be seen as part of the larger project of artificial
intelligence (AI). As is well known, the AI project has achieved only very limited
success since its beginnings in the 1950s. Correspondingly, work on media
generation which requires an understanding of semantics is also in the research stage
and is rarely included in commercial software. Beginning in the 1970s, computers
were often used to generate poetry and fiction. In the 1990s, the users of Internet
chat rooms became familiar with bots -- the computer programs which simulate
human conversation. The researchers at New York University showed a “virtual
theater” composed of a few “virtual actors” which adjust their behavior in real20
time in response to user’s actions. The MIT Media Lab developed a number of
different projects devoted to “high-level” automation of media creation and use: a
“smart camera” which can automatically follow the action and frame the shots
21
given a script; ALIVE, a virtual environment where the user interacted with
54
22
animated characters; a new kind of human-computer interface where the
computer presents itself to a user as an animated talking character. The character,
generated by a computer in real-time, communicates with user using natural
language; it also tries to guess user’s emotional state and to adjust the style of
23
interaction accordingly.
The area of new media where the average computer user encountered AI
in the 1990s was not, however, the human-computer interface, but computer games.
Almost every commercial game includes a component called an AI engine. It stands
for the part of the game’s computer code which controls its characters: car drivers in a
car race simulation, the enemy forces in a strategy game such as Command and
Conquer, the individual enemies which keep attacking the user in first-person shooters
such as Quake. AI engines use a variety of approaches to simulate human
intelligence, from rule-based systems to neural networks. Like AI expert systems,
these characters have expertise in some well-defined but narrow area such as
attacking the user. But because computer games are highly codified and
rule-based, these characters function very effectively. That is, they effectively respond
to whatever few things the user is allowed to ask them to do: run forward, shoot,
pick up an object. They can’t do anything else, but then the game does not
provide the opportunity for the user to test this. For instance, in a martial arts
fighting game, I can’t ask questions of my opponent, nor do I expect him or her to
start a conversation with me. All I can do is to “attack” my opponent by pressing
a few buttons; and within this highly codified situation the computer can “fight”
me back very effectively. In short, computer characters can display intelligence
and skills only because the programs put severe limits on our possible interactions
with them. Put differently, the computers can pretend to be intelligent only by
tricking us into using a very small part of who we are when we communicate with
them. So, to use another example, at the 1997 SIGGRAPH convention I was playing
against both human and computer-controlled characters in a VR simulation of
some non-existent sports game. All my opponents appeared as simple blobs
covering a few pixels of my VR display; at this resolution, it made absolutely no
difference who was human and who was not.
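How little such a character actually needs to know can be suggested with a minimal rule-based sketch in Python; the allowed moves and the response rules are invented, and the sketch stands for the general idea of a rule-based AI engine rather than for any particular game's code.

    # The player's possible actions are limited to a few codified moves.
    ALLOWED_ACTIONS = {"advance", "retreat", "attack", "block"}

    # A rule-based "AI engine": each rule maps the player's move to the opponent's response.
    RULES = {
        "advance": "retreat",
        "retreat": "advance",
        "attack": "block",
        "block": "attack",
    }

    def opponent_response(player_action):
        """Respond to one of the allowed moves; anything else simply cannot be expressed."""
        if player_action not in ALLOWED_ACTIONS:
            raise ValueError("the game provides no way to perform this action")
        return RULES[player_action]

    for move in ("attack", "advance", "block"):
        print(move, "->", opponent_response(move))

The opponent appears intelligent only because the interaction is confined to the handful of moves the rules anticipate.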
Along with “low-level” and “high-level” automation of media creation,
another area of media use which is being subjected to increasing automation is
media access. The switch to computers as a means to store and access enormous
amounts of media material, exemplified by the “media assets” stored in the
databases of stock agencies and global entertainment conglomerates, as well as by
the public “media assets” distributed across numerous Web sites, created the need
to find more efficient ways to classify and search media objects. Word processors
and other text management software have long provided the ability to
search for specific strings of text and to automatically index documents. The UNIX
operating system has also always included powerful commands to search and filter
text files. In the 1990s software designers started to provide media users with
similar abilities. Virage introduced the Virage VIR Image Engine, which makes it possible
to search for visually similar image content among millions of images, as well as a
set of video search tools for indexing and searching video files. By the end
of the 1990s, the key Web search engines already included options to search
the Internet by specific media types such as images, video and audio.
The Internet, which can be thought of as one huge distributed media
database, also crystallized the basic condition of the new information society:
over-abundance of information of all kinds. One response was the popular idea of
software “agents” designed to automate searching for relevant information. Some
agents act as filters which deliver small amounts of information given the user's
criteria. Others allow users to tap into the expertise of other users,
following their selections and choices. For example, the MIT Software Agents Group
developed such agents as BUZZwatch which “distills and tracks trends, themes,
and topics within collections of texts across time” such as Internet discussions and
Web pages; Letizia, “a user interface agent that assists a user browsing the World
Wide Web by… scouting ahead from the user's current position to find Web
pages of possible interest”; and Footprints, which “uses information left by other
people to help you find your way around.”
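A minimal sketch of such a filtering agent, written in Python with invented items and invented user criteria, may help; the agent simply delivers the small subset of incoming items that match the user's stated interests.

    # An invented stream of incoming items and the user's stated interests.
    incoming = [
        "Stock prices fall as markets react",
        "New VRML browser released",
        "Weather: rain expected tomorrow",
        "Macromedia Director update announced",
    ]

    user_interests = {"vrml", "director"}

    def filter_agent(items, interests):
        """Deliver only the items that mention at least one of the user's interests."""
        selected = []
        for item in items:
            words = {word.strip(":.,").lower() for word in item.split()}
            if words & interests:
                selected.append(item)
        return selected

    print(filter_agent(incoming, user_interests))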
By the end of the twentieth century, the problem became no longer how to
create a new media object such as an image; the new problem was how to find the
object which already exists somewhere. That is, if you want a particular image,
chances are it already exists -- but it may be easier to create one from scratch
than to find the existing one. Beginning in the nineteenth century, modern
society developed technologies which automated media creation: a photo camera,
a film camera, a tape recorder, a video recorder, etc. These technologies allowed
us, over the course of one hundred and fifty years, to accumulate an
unprecedented amount of media materials: photo archives, film libraries, audio
archives… This led to the next stage in media evolution: the need for new
technologies to store, organize and efficiently access these media materials. These
new technologies are all computer-based: media databases; hypermedia and other
ways of organizing media material such as the hierarchical file system itself; text
management software; programs for content-based search and retrieval. Thus
automation of media access is the next logical stage of the process which was
already put into motion when the first photograph was taken. The emergence of new
media coincides with this second stage of a media society, now concerned as
much with accessing and re-using existing media as with creating new media.
(See the “Database” section for more on databases.)
4. Variability
A new media object is not something fixed once and for all but can exist in
different, potentially infinite, versions. This is another consequence of numerical
coding of media (principle 1) and modular structure of a media object (principle
2). Other terms which are often used in relation to new media and which would be
appropriate instead of “variable” are “mutable” and “liquid.”
Old media involved a human creator who manually assembled textual,
visual and/or audio elements into a particular composition or a sequence. This
sequence was stored in some material, its order determined once and for all.
Numerous copies could be run off from the master, and, in perfect correspondence
with the logic of an industrial society, they were all identical. New media, in
contrast, is characterized by variability. Instead of identical copies a new media
object typically gives rise to many different versions. And rather than being created
completely by a human author, these versions are often in part automatically
assembled by a computer. (The already quoted example of Web pages
automatically generated from databases using the templates created by Web
designers can be invoked here as well.) Thus the principle of variability is closely
connected to automation.
Variability would also not be possible without modularity. Stored
digitally, rather than in some fixed medium, media elements maintain their
separate identity and can be assembled into numerous sequences under program
control. In addition, because the elements themselves are broken into discrete
samples (for instance, an image is represented as an array of pixels), they can
also be created and customized on the fly.
The logic of new media thus corresponds to the post-industrial logic of
"production on demand" and "just in time" delivery which themselves were made
possible by the use of computers and computer networks in all stages of
manufacturing and distribution. Here "culture industry" (the term was originally
coined by Theodor Adorno in the 1940s) is actually ahead of the rest of the
industry. The idea that a customer determines the exact features of her car at the
showroom, the data is then transmitted to the factory, and hours later the new car
is delivered, remains a dream, but in the case of computer media, it is reality.
Since the same machine is used as a showroom and a factory, i.e., the same
computer generates and displays media -- and since the media exists not as a
material object but as data which can be sent through the wires at the speed of
light, the customized version created in response to the user’s input is delivered
almost immediately. Thus, to continue with the same example, when you access a
Web site, the server immediately assembles a customized Web page.
Here are some particular cases of the variability principle (most of them
will be discussed in more detail in later chapters):
4.1. Media elements are stored in a media database; a variety of end-user
objects, which vary in resolution, form and content, can be generated,
either beforehand or on demand, from this database. At first, we may think that
this is simply a particular technological implementation of the variability principle,
but, as I will show in the “Database” section, in the computer age the database comes to
function as a cultural form of its own. It offers a particular model of the world and
of the human experience. It also affects how the user conceives of data which it
contains.
4.2. It becomes possible to separate the levels of "content" (data) and
interface. A number of different interfaces can be created to the same data. A new
media object can be defined as one or more interfaces to a multimedia database
(see the introduction to the “Interface” chapter and the “Database” section for more
discussion of this principle).
4.3. The information about the user can be used by a computer program to
automatically customize the media composition as well as to create the elements
themselves. Examples: Web sites use the information about the type of hardware
and browser or the user's network address to automatically customize the site which
the user will see; interactive computer installations use information about the
user's body movements to generate sounds, shapes, and images, or to control
behaviors of artificial creatures.
4.4. A particular case of 4.3 is branching-type interactivity (sometimes
also called menu-based interactivity.) This term refers to programs in which all
the possible objects which the user can visit form a branching tree structure.
When the user reaches a particular object, the program presents her with choices
and lets her pick. Depending on the value chosen, the user advances along a
particular branch of the tree. For instance, in Myst each screen typically contains
a left and a right button; clicking on a button retrieves a new screen, and so on.
In this case the information used by the program is the output of the user's cognitive
process, rather than the network address or body position. (See “Menus, Filters,
Plug-ins” for more discussion of this principle.)
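Branching-type interactivity can be sketched in a few lines of Python; the screens and choices are invented, and the sketch shows only the general mechanism: all possible screens form a tree, and the user's picks select one path through it.

    # Invented screens and choices forming a branching tree.
    screens = {
        "island": {"left": "library", "right": "tower"},
        "library": {"left": "map", "right": "island"},
        "tower": {"left": "island", "right": "clock"},
        "map": {},     # leaf screens offer no further choices
        "clock": {},
    }

    def advance(current, choice):
        """Return the next screen for a given choice, or stay put if the choice is invalid."""
        return screens[current].get(choice, current)

    # One possible traversal: the user's picks determine which version of the work she sees.
    path = ["left", "right", "left"]
    screen = "island"
    for pick in path:
        screen = advance(screen, pick)
        print(screen)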
4.5. Hypermedia is another popular new media structure, which
conceptually is close to branching-type interactivity (because quite often the
elements are connected using a branching tree structure). In hypermedia, the
multimedia elements making up a document are connected through hyperlinks. Thus
the elements and the structure are independent of each other -- rather than
hard-wired together, as in traditional media. The World Wide Web is a particular
implementation of hypermedia in which the elements are distributed throughout
the network. Hypertext is a particular case of hypermedia which uses only one
media type — text. How does the principle of variability work in this case? We
can conceive of all possible paths through a hypermedia document as being
different versions of it. By following the links the user retrieves a particular
version of a document.
4.6. Another way in which different versions of the same media objects
are commonly generated in computer culture is through periodic updates.
Networks allow the content of a new media object to be periodically updated
while keeping its structure intact. For instance, modern software applications can
periodically check for updates on the Internet and then download and install these
updates, sometimes without any actions from the user. Most Web sites are also
periodically updated either manually or automatically, when the data in the
databases which drive the sites changes. A particularly interesting case of this
“updateability” feature is sites which update some information, such
as stock prices or weather, continuously.
4.7. One of the most basic cases of the variability principle is scalability,
in which different versions of the same media object can be generated at various
sizes or levels of detail. The metaphor of a map is useful in thinking about the
scalability principle. If we equate a new media object with a physical territory,
different versions of this object are like maps of this territory, generated at
different scales. Depending on the scale chosen, a map provides more or less
detail about the territory. Indeed, different versions of a new media object may
vary strictly quantitatively, i.e. in the amount of detail present: for instance, a full
size image and its icon, automatically generated by Photoshop; a full text and its
shorter version, generated by “Autosummarize” command in Microsoft Word 97;
or the different versions which can be created using “Outline” command in Word.
Beginning with version 3 (1997), Apple’s QuickTime format also made it possible
to embed a number of different versions, which differ in size, within a single
QuickTime movie; when a Web user accesses the movie, a version is
automatically selected depending on the connection speed. A conceptually similar
technique, called “distancing” or “level of detail,” is used in interactive virtual
worlds such as VRML scenes. A designer creates a number of models of the
same object, each with progressively less detail. When the virtual camera is close
to the object, a highly detailed model is used; if the object is far away, a less
detailed version is automatically substituted by the program to save unnecessary
computation of detail which can’t be seen anyway.
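The "level of detail" technique just described can be sketched in Python; the model files and distance thresholds are invented, and the sketch only illustrates the substitution logic.

    # Several versions of the same object, each with progressively less detail.
    models = [
        (10.0, "high_detail.obj"),     # used when the camera is within 10 units
        (50.0, "medium_detail.obj"),   # used when the camera is within 50 units
        (200.0, "low_detail.obj"),     # used for anything farther away, up to 200 units
    ]

    def select_model(distance):
        """Substitute a less detailed version as the object recedes from the camera."""
        for max_distance, model in models:
            if distance <= max_distance:
                return model
        return None  # beyond the last threshold the object is not drawn at all

    for d in (3.0, 35.0, 150.0, 500.0):
        print(d, "->", select_model(d))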
New media also makes it possible to create versions of the same object which differ
from each other in more substantial ways. Here the comparison with maps of
different scales no longer works. Examples of commands in commonly used
software packages which allow the creation of such qualitatively different versions are
“Variations” and “Adjustment layers” in Photoshop 5 and the “writing style” option
in Word’s “Spelling and Grammar” command. More examples can be found on
the Internet where, beginning in the middle of the 1990s, it became common to
create a few different versions of a Web site. The user with a fast connection can
choose a rich multimedia version while the user with a slow connection can settle
for a more bare-bones version which loads faster.
Among new media artworks, David Blair’s WaxWeb, a Web site which is
an “adaptation” of an hour-long video narrative, offers a more radical
implementation of the scalability principle. While interacting with the narrative,
the user at any point can change the scale of representation, going from an
image-based outline of the movie to a complete script or a particular shot, or a VRML
scene based on this shot, and so on. Another example of how the use of the scalability
principle can create a dramatically new experience of an old media object is
Stephen Mamber’s database-driven representation of Hitchcock’s The Birds. Mamber’s
software generates a still for every shot of the film; it then automatically
combines all the stills into a rectangular matrix. Every cell in the matrix
corresponds to a particular shot from the film. As a result, time is spatialized,
similar to how it was done in Edison’s early Kinetoscope cylinders (see “The
Myths of New Media.”) Spatializing the film allows us to study its different
temporal structures which would be hard to observe otherwise. As in WaxWeb,
the user can at any point change the scale of representation, going from a
complete film to a particular shot.
As can be seen, the principle of variability is useful in allowing us to
connect many important characteristics of new media which at first sight may
appear unrelated. In particular, such popular new media structures as branching
(or menu) interactivity and hypermedia can be seen as particular instances of the
variability principle (4.4 and 4.5, respectively). In the case of branching
interactivity, the user plays an active role in determining the order in which the
already generated elements are accessed. This is the simplest kind of interactivity;
more complex kinds are also possible where both the elements and the structure
of the whole object are either modified or generated on the fly in response to
user's interaction with a program. We can refer to such implementations as open
interactivity to distinguish them from the closed interactivity which uses fixed
elements arranged in a fixed branching structure. Open interactivity can be
implemented using a variety of approaches, including procedural and object-oriented
computer programming, AI, AL, and neural networks.
As long as there exists some kernel, some structure, some prototype which
remains unchanged throughout the interaction, open interactivity can be thought
of as a subset of the variability principle. Here a useful analogy can be made with
Wittgenstein’s theory of family resemblance, later developed into the influential
theory of prototypes by the cognitive psychologist Eleanor Rosch. In a family, a
number of relatives will share some features, although no single family member
may possess all of the features. Similarly, according to the theory of prototypes,
the meanings of many words in a natural language derive not from a logical
definition but from proximity to a certain prototype.
Hypermedia, the other popular structure of new media, can also be seen
as a particular case of the more general principle of variability. According to the
definition by Halasz and Schwartz, hypermedia systems “provide their users with
the ability to create, manipulate and/or examine a network of information-
containing nodes interconnected by relational links.” Since in new media the
individual media elements (images, pages of text, etc.) always retain their
individual identity (the principle of modularity), they can be "wired" together into
more than one object. Hyperlinking is a particular way to achieve this wiring. A
hyperlink creates a connection between two elements, for example between two
words in two different pages or a sentence on one page and an image in another,
or two different places within the same page. The elements connected through
hyperlinks can exist on the same computer or on different computers connected
on a network, as in the case of the World Wide Web.
If in traditional media the elements are "hard-wired" into a unique structure
and no longer maintain their separate identity, in hypermedia the elements and the
structure are separate from each other. The structure of hyperlinks -- typically a
branching tree -- can be specified independently from the contents of a document.
To make an analogy with the grammar of a natural language as described in Noam
Chomsky’s early linguistic theory, we can compare a hypermedia structure
which specifies the connections between the nodes with the deep structure of a
sentence; a particular hypermedia text can then be compared with a particular
sentence in a natural language. Another useful analogy is with computer
programming. In programming, there is a clear separation between algorithms and
data. An algorithm specifies the sequence of steps to be performed on any data,
just as a hypermedia structure specifies a set of navigation paths (i.e., connections
between the nodes) which potentially can be applied to any set of media objects.
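The separation of structure from content can be sketched in Python; the node names, links and contents are invented. The same link structure, stored apart from the media elements, could be applied to an entirely different set of contents.

    # The structure: a branching tree of links between nodes, specified on its own.
    links = {
        "home": ["chapter1", "chapter2"],
        "chapter1": ["image_a"],
        "chapter2": ["image_a", "text_b"],
    }

    # The contents: media elements that keep their separate identity.
    contents = {
        "home": "Welcome page",
        "chapter1": "First essay",
        "chapter2": "Second essay",
        "image_a": "photograph.jpg",
        "text_b": "appendix.txt",
    }

    # A traversal applies the structure to the contents; swapping in a different
    # 'contents' dictionary would leave the hypermedia structure untouched.
    def follow(node, depth=0):
        print("  " * depth + contents[node])
        for target in links.get(node, []):
            follow(target, depth + 1)

    follow("home")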
The principle of variability also exemplifies how, historically, the changes
in media technologies are correlated with social change. If the logic
of old media corresponded to the logic of industrial mass society, the logic of new
media fits the logic of the post-industrial society which values individuality over
conformity. In industrial mass society everybody was supposed to enjoy the same
goods -- and to have the same beliefs. This was also the logic of media
technology. A media object was assembled in a media factory (such as a
Hollywood studio). Millions of identical copies were produced from a master and
distributed to all the citizens. Broadcasting, cinema, print media all followed this
logic.
In a post-industrial society, every citizen can construct her own custom
lifestyle and "select" her ideology from a large (but not infinite) number of
choices. Rather than pushing the same objects/information to a mass audience,
marketing now tries to target each individual separately. The logic of new media
technology reflects this new social logic. Every visitor to a Web site automatically
gets her own custom version of the site created on the fly from a database. The
language of the text, the contents, the ads displayed — all these can be
customized by interpreting the information about where on the network the user is
coming from; or, if the user previously registered with the site, her personal
profile can be used for this customization. According to a report in USA Today
(November 9, 1999), “Unlike ads in magazines or other real-world publications,
‘banner’ ads on Web pages change with every page view. And most of the
companies that place the ads on the Web site track your movements across the
Net, ‘remembering’ which ads you’ve seen, exactly when you saw them, whether
you clicked on them, where you were at the time and the site you have visited just
before.”
More generally, every hypertext reader gets her own version of the
complete text by selecting a particular path through it. Similarly, every user of an
interactive installation gets her own version of the work. And so on. In this way
new media technology acts as the most perfect realization of the utopia of an ideal
society composed of unique individuals. New media objects assure users that
their choices — and therefore, their underlying thoughts and desires — are
unique, rather than pre-programmed and shared with others. As though trying to
compensate for their earlier role in making us all the same, today the descendants of
Jacquard's loom, the Hollerith tabulator and Zuse's cinema-computer are now
working to convince us that we are all unique.
The principle of variability as it is presented here is not dissimilar to how
the artist and curator Jon Ippolito uses the same concept. I believe that we differ
in how we use the concept of variability in two key respects. First, Ippolito uses
variability to describe a characteristic shared by recent conceptual and some
digital art, while I see variability as a basic condition of all new media. Second,
Ippolito follows the tradition of conceptual art where an artist can vary any
dimension of the artwork, even its content; my use of the term aims to reflect the
logic of mainstream culture where versions of the object share some well-defined
“data.” This “data,” which can be a well-known narrative (Psycho), an icon (the
Coca-Cola sign), a character (Mickey Mouse) or a famous star (Madonna), is referred to
in the media industry as “property.” Thus all cultural projects produced by Madonna
will be automatically united by her name. Using the theory of prototypes, we can
say that the property acts as a prototype, and different versions are derived from
this prototype. Moreover, when a number of versions are being commercially
released based on some “property”, usually one of these versions is treated as the
source of the “data,” with others positioned as being derived from this source.
Typically the version which is in the same media as the original “property” is
treated as the source. For instance, when a movie studio releases a new film,
along with a computer game based on it, along with product tie-ins, along with
music written for the movie, etc., usually the film is presented as the “base” object
from which other objects are derived. So when George Lucas releases a new Star
Wars movie, it refers back to the original property — the original Star Wars
trilogy. This new movie becomes the “base” object and all other media objects
which are released along with it refer to this object. Conversely, when computer
games such as Tomb Raider are re-made into movies, the original computer game
is presented as the “base” object.
While I deduced the principle of variability from more basic principles of
new media — numerical representation (1) and modularity of information (2) —
it can also be seen as a consequence of the computer's way of representing data and
modeling the world itself: as variables rather than constants. As new media theorist
and architect Marcos Novak notes, a computer — and computer culture in its
wake — substitutes every constant with a variable. In designing all functions and
data structures, a computer programmer tries to always use variables rather than
constants. On the level of the human-computer interface, this principle means that the
user is given many options to modify the performance of a program or a media
object, be it a computer game, a Web site, a Web browser, or the operating system
itself. The user can change the profile of a game character, modify how the
folders appear on the desktop, how files are displayed, what icons are used, etc. If
we apply this principle to culture at large, it would mean that every choice
responsible for giving a cultural object a unique identity can potentially remain
always open. Size, degree of detail, format, color, shape, interactive trajectory,
trajectory through space, duration, rhythm, point of view, the presence or absence
of particular characters, the development of the plot — to name just a few
dimensions of cultural objects in different media — all these can be defined as
variables, to be freely modified by a user.
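
To make this substitution concrete, consider a minimal sketch in Python (an illustration only, not drawn from the text), in which every choice that could have been hard-coded into a hypothetical media viewer is instead exposed as a variable open to modification by the user:

    from dataclasses import dataclass

    @dataclass
    class ViewerSettings:
        # Each field could have been a constant; treating it as a variable keeps
        # the object's identity open to modification by the user.
        window_width: int = 800
        window_height: int = 600
        background_color: str = "white"
        icon_style: str = "large"
        playback_speed: float = 1.0

    default = ViewerSettings()                                          # the "prototype"
    customized = ViewerSettings(background_color="black", icon_style="small")
    print(default)
    print(customized)                                                   # one of many possible versions
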
Do we want, or need, such freedom? As the pioneer of interactive
filmmaking Grahame Weinbren argued in relation to interactive media, making a
choice involves a moral responsibility. By passing these choices to the user, the
author also passes the responsibility to represent the world and the human
condition in it. (This is paralleled by the use of phone or Web-based automated
menu systems by all big companies to handle their customers; while the
companies are doing this in the name of “choice” and “freedom,” one of the
effects of this automation is that the labor which used to be done by the company’s
employees is passed on to the customer. Where before a customer would get the information or buy
the product by interacting with a company employee, now she has to spend her
own time and energy in navigating through numerous menus to accomplish the
same result.) The moral anxiety which accompanies the shift from constants to
variables, from tradition to choices in all areas of life in a contemporary society,
and the corresponding anxiety of a writer who has to portray it, is well rendered in
this closing passage of a short story by the contemporary American writer
Rick Moody (the story is about the death of his sister):
I should fictionalize it more, I should conceal myself. I should consider the
responsibilities of characterization, I should conflate her two children into one, or
reverse their genders, or otherwise alter them, I should make her boyfriend a
husband, I should explicate all the tributaries of my extended family (its
remarriages, its internecine politics), I should novelize the whole thing, I should
make it multigenerational, I should work in my forefathers (stonemasons and
newspapermen), I should let artifice create an elegant surface, I should make the
events orderly, I should wait and write about it later, I should wait until I’m not
angry, I shouldn’t clutter a narrative with fragments, with mere recollections of
good times, or with regrets, I should make Meredith’s death shapely and
persuasive, not blunt and disjunctive, I shouldn’t have to think the unthinkable, I
shouldn’t have to suffer, I should address her here directly (these are the ways I
miss you), I should write only of affection, I should make our travels in this
earthy landscape safe and secure, I should have a better ending, I shouldn’t say
her life was short and often sad, I shouldn’t say she had demons, as I do too.
5. Transcoding
Beginning with the basic, “material” principles of new media — numeric coding
and modular organization — we moved to “deeper” and more far-reaching ones —
automation and variability. The last, fifth principle of cultural transcoding aims to
describe what in my view is the most substantial consequence of media’s
computerization. As I have suggested, computerization turns media into computer
data. While from one point of view computerized media still displays structural
organization which makes sense to its human users — images feature
recognizable objects; text files consist of grammatical sentences; virtual spaces
are defined along the familiar Cartesian coordinate system; and so on — from
another point of view, its structure now follows the established conventions of
the computer's organization of data. Examples of these conventions are different
data structures such as lists, records and arrays; the already mentioned substitution
of all constants by variables; the separation between algorithms and data
structures; and modularity.
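
These conventions can be made concrete with a short Python sketch (an illustration only; the names and values are invented): the same kind of cultural "content" held in a list, a record and an array, with the algorithm that operates on the data kept separate from the data itself:

    films = ["Psycho", "Blade Runner", "Star Wars"]                      # a list
    record = {"title": "Psycho", "year": 1960, "director": "Hitchcock"}  # a record
    ratings = [8.5, 8.1, 8.6]                                            # an array of numbers

    def sort_by_rating(titles, scores):
        # The algorithm knows nothing about what the data "means" culturally;
        # it sees only two parallel sequences to be paired and ordered.
        return [title for _, title in sorted(zip(scores, titles), reverse=True)]

    print(sort_by_rating(films, ratings))
    print(record["director"])
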
The structure of a computer image is a case in point. On the level of
representation, it belongs to the side of human culture, automatically entering into
dialog with other images, other cultural “semes” and “mythemes.” But on another
level, it is a computer file which consists of a machine-readable header,
followed by numbers representing RGB values of its pixels. On this level it enters
into a dialog with other computer files. The dimensions of this dialog are not the
image’s content, meanings or formal qualities, but file size, file type, type of
compression used, file format and so on. In short, these dimensions belong to the
computer’s own cosmogony rather than to human culture.
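
A minimal sketch may make the two levels concrete; it assumes, purely for illustration, the simple PPM image format, whose files consist of a machine-readable header followed by numbers giving the RGB values of the pixels:

    # Write a 2 x 2 image in the plain PPM format: a header, then RGB numbers.
    header = "P3\n2 2\n255\n"                              # format id, width, height, max value
    pixels = "255 0 0  0 255 0\n0 0 255  255 255 255\n"    # four pixels as RGB triples
    with open("tiny.ppm", "w") as f:
        f.write(header + pixels)

    # Read it back: on this level the program sees numbers, not "a picture".
    with open("tiny.ppm") as f:
        tokens = f.read().split()
    fmt, width, height, maxval = tokens[0], int(tokens[1]), int(tokens[2]), int(tokens[3])
    rgb_values = list(map(int, tokens[4:]))
    print(fmt, width, height, maxval)   # the machine-readable header
    print(rgb_values[:3])               # the RGB value of the first pixel
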
Similarly, new media in general can be thought of as consisting of two
distinct layers: the “cultural layer” and the “computer layer.” Examples of
categories on the cultural layer are the encyclopedia and the short story; story and plot;
composition and point of view; mimesis and catharsis; comedy and tragedy. Examples
of categories on the computer layer are process and packet (as in data
packets transmitted through the network); sorting and matching; function and
variable; a computer language and a data structure.
Since new media is created on computers, distributed via computers,
stored and archived on computers, the logic of a computer can be expected to
significantly influence the traditional cultural logic of media. That is, we may
expect that the computer layer will affect the cultural layer. The ways in which
the computer models the world, represents data and allows us to operate on it; the key
operations behind all computer programs (such as search, match, sort, filter); the
conventions of HCI — in short, what can be called the computer’s ontology,
epistemology and pragmatics — influence the cultural layer of new media: its
organization, its emerging genres, its contents.
Of course, what I have called the computer layer is not itself fixed but changes
over time. As hardware and software keep evolving and as the computer is used for
new tasks and in new ways, this layer is undergoing continuous transformation.
The new use of the computer as a media machine is a case in point. This use is
having an effect on the computer’s hardware and software, especially on the level of
the human-computer interface which looks more and more like the interfaces of
older media machines and cultural technologies: VCR, tape player, photo camera.
In summary, the computer layer and the media/culture layer influence each
other. To use another concept from new media, we can say that they are being
composited together. The result of this composite is the new computer culture: a
blend of human and computer meanings, of the traditional ways in which human culture
modeled the world and the computer’s own ways of representing it.
Throughout the book, we will encounter many examples of the principle
of transcoding at work. For instance, “The Language of Cultural Interfaces”
section will look at how the conventions of the printed page, cinema and traditional HCI
interact together in the interfaces of Web sites, CD-ROMs, virtual spaces and
computer games.
The “Database” section will discuss how a database, originally a computer technology
to organize and access data, is becoming a new cultural form of its own. But we
can also reinterpret some of the principles of new media already discussed above
as consequences of the transcoding principle. For instance, hypermedia can be
understood as one cultural effect of the separation between an algorithm and a data
structure, essential to computer programming. Just as in programming algorithms
and data structures exist independently of each other, in hypermedia data is
separated from the navigation structure. (For another example of the cultural
effect of the algorithm—data structure dichotomy, see the “Database” section.) Similarly,
the modular structure of new media can be seen as an effect of the modularity in
structural computer programming. Just as a structural computer program consists
of smaller modules which in their turn consist of even smaller modules, a
new media object has a modular structure, as I explained in my discussion of
modularity above.
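
The separation of data from navigation structure can be sketched in a few lines of Python (an invented illustration, not an example from the text): the same content modules can be traversed through two entirely different link structures:

    # Content modules stored separately from the links used to navigate them.
    content = {
        "intro": "What is new media?",
        "history": "Babbage and Daguerre, both in the 1830s.",
        "principles": "Numerical representation, modularity, automation...",
    }
    # Two alternative navigation structures over the same data.
    linear_links = {"intro": ["history"], "history": ["principles"], "principles": []}
    web_links = {"intro": ["principles"], "principles": ["history"], "history": ["intro"]}

    def follow(links, start, steps=3):
        # Walk the first link out of each node, printing the content encountered.
        node = start
        for _ in range(steps):
            print(content[node])
            if not links[node]:
                break
            node = links[node][0]

    follow(linear_links, "intro")   # intro -> history -> principles
    follow(web_links, "intro")      # intro -> principles -> history
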
In new media lingo, to “transcode” something is to translate it into another
format. The computerization of culture gradually accomplishes similar
transcoding in relation to all cultural categories and concepts. That is, cultural
categories and concepts are substituted, on the level of meaning and/or the
language, by new ones which derive from the computer’s ontology, epistemology and
pragmatics. New media thus acts as a forerunner of this more general process of
cultural re-conceptualization.
Given the process of “conceptual transfer” from the computer world to culture
at large, and given the new status of media as computer data, what theoretical
framework can we use to understand it? Since on one level new media is old
media which has been digitized, it seems appropriate to look at new media using
the perspective of media studies. We may compare new media and old media,
such as print, photography, or television. We may also ask about the conditions of
distribution and reception and the patterns of use. We may also ask about
similarities and differences in the material properties of each medium and how
these affect their aesthetic possibilities.
This perspective is important, and I am using it frequently in this book; but it is
not sufficient. It can't address the most fundamental new quality of new media
which has no historical precedent — programmability. Comparing new media to
print, photography, or television will never tell us the whole story. For while from
one point of view new media is indeed another kind of media, from another it is simply a
particular type of computer data, something which is stored in files and databases,
retrieved and sorted, run through algorithms and written to the output device. That
the data represents pixels and that this device happens to be an output screen is
beside the point. The computer may perform perfectly the role of the Jacquard
loom, but underneath it is fundamentally Babbage's Analytical Engine — after all,
this was its identity for one hundred and fifty years. New media may look like
media, but this is only the surface.
New media calls for a new stage in media theory whose beginnings can be
traced back to the revolutionary works of Harold Innis and Marshall McLuhan of
the 1950s. To understand the logic of new media we need to turn to computer
science. It is there that we may expect to find the new terms, categories and
operations which characterize media which became programmable. From media
studies, we move to something which can be called software studies; from media
theory — to software theory. The principle of transcoding is one way to start
thinking about software theory. Another way which this book experiments with is
using concepts from computer science as categories of new media theory. The
examples here are “interface” and “database.” And, last but not least, I follow the
analysis of “material” and logical principles of computer hardware and software
in this chapter with two chapters on human-computer interface and the interfaces
of software applications used to author and access new media objects.
II. The Interface
In 1984 Ridley Scott, the director of Blade Runner, was hired to create a
commercial which introduced Apple Computer’s new Macintosh. In retrospect,
this event is full of historical significance. Released within two years of each
other, Blade Runner (1982) and the Macintosh computer (1984) defined the two
aesthetics which, twenty years later, still rule contemporary culture. One was a
futuristic dystopia which combined futurism and decay, computer technology and
fetishism, retro-styling and urbanism, Los Angeles and Tokyo. Since Blade
Runner's release, its techno-noir has been replayed in countless films, computer games,
novels and other cultural objects. And while a number of strong aesthetic systems
have been articulated in the following decades, both by individual artists (Matthew
Barney, Mariko Mori) and by commercial culture at large (the 1980s “post-modern” pastiche, the 1990s techno-minimalism), none of them has been able to
challenge the hold of Blade Runner on our vision of the future.
In contrast to the dark, decayed, “post-modern” vision of Blade Runner,
Graphical User Interface (GUI), popularized by Macintosh, remained true to the
modernist values of clarity and functionality. The user’s screen was ruled by straight
lines and rectangular windows which contained smaller rectangles of individual
files arranged in a grid. The computer communicated with the user via rectangular
boxes containing clean black type rendered against a white background. Subsequent
versions of the GUI added color and made it possible for users to customize the
appearance of many interface elements, thus somewhat diluting the sterility and
boldness of the original monochrome 1984 version. Yet its original aesthetic
survived in the displays of hand-held communicators such as the Palm Pilot, cellular
telephones, car navigation systems and other consumer electronic products which
use small LCD displays comparable in quality to the 1984 Macintosh screen.
Like Blade Runner, Macintosh’s GUI articulated a vision of the future,
although a very different one. In this vision, the lines between humans and their
technological creations (computers, androids) are clearly drawn and decay is not
tolerated. In a computer, once a file is created, it never disappears except when
explicitly deleted by the user. And even then deleted items can usually be
recovered. Thus if in “meatspace” we have to work to remember, in cyberspace
we have to work to forget. (Of course, while they run, the OS and applications
constantly create, write to and erase various temporary files, as well as swap data
between RAM and virtual memory files on a hard drive, but most of this activity
remains invisible to the user.)
Like Blade Runner, the GUI vision came to influence many other
areas of culture. This influence ranges from purely graphical (for instance, use of
GUI elements by print and TV designers) to more conceptual. In the 1990s, as the
Internet progressively grew in popularity, the role of the digital computer shifted
from being a particular technology (a calculator, a symbol processor, an image
manipulator, etc.) to being a filter for all culture, a form through which all kinds of
cultural and artistic production are being mediated. As the window of a Web browser
comes to replace the cinema and television screen, the wall of an art gallery, the library and
the book, all at once, the new situation manifests itself: all culture, past and present,
is being filtered through a computer, with its particular human-computer
interface.
In semiotic terms, the computer interface acts as a code which carries
cultural messages in a variety of media. When you use the Internet, everything
you access — texts, music, video, navigable spaces — passes through the
interface of the browser and then, in its turn, the interface of the OS. In cultural
communication, a code is rarely simply a neutral transport mechanism; usually it
affects the messages transmitted with its help. For instance, it may make some
messages easy to conceive and render others unthinkable. A code may also
provide its own model of the world, its own logical system, or ideology;
subsequent cultural messages or whole languages created using this code will be
limited by this model, system or ideology. Most modern cultural theories rely on
these notions, which I will refer to together as the “non-transparency of the code”
idea. For instance, according to the Whorf-Sapir hypothesis, which enjoyed popularity
in the middle of the twentieth century, human thinking is determined by the code
of natural language; the speakers of different natural languages perceive and think
about the world differently. The Whorf-Sapir hypothesis is an extreme expression of
the “non-transparency of the code” idea; usually it is formulated in a less extreme
form. But when we think about the case of the human-computer interface, applying a
“strong” version of this idea makes sense. The interface shapes how the computer
user conceives the computer itself. It also determines how users think of any
media object accessed via a computer. Stripping different media of their original
distinctions, the interface imposes its own logic on them. Finally, by organizing
computer data in particular ways, the interface provides distinct models of the
world. For instance, a hierarchical file system assumes that the world can be
organized in a logical multi-level hierarchy. In contrast, a hypertext model of the
World Wide Web models the world as a non-hierarchical system ruled by
metonymy. In short, far from being a transparent window into the data inside a
computer, the interface brings with it strong messages of its own.
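
The contrast between the two models can be sketched as follows (an invented illustration): a hierarchical file system organizes items as a tree, while a hypertext-like structure is a flat network of lateral links:

    # A hierarchy: every item has exactly one place in a multi-level tree.
    file_system = {
        "Documents": {"essays": ["new_media.txt"], "images": ["still.jpg"]},
        "Applications": {"browser": []},
    }
    # A hypertext: items simply point to one another, with no ordering on top.
    hypertext = {
        "new_media.txt": ["still.jpg", "browser"],
        "still.jpg": ["new_media.txt"],
        "browser": ["new_media.txt", "still.jpg"],
    }
    print(file_system["Documents"]["essays"])   # access by descending the hierarchy
    print(hypertext["still.jpg"])               # access by following lateral links
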
As an example of how the interface imposes its own logic on media,
consider the “cut and paste” operation, standard in all software running under the modern
GUI. This operation renders insignificant the traditional distinction between
spatial and temporal media, since the user can cut and paste parts of images,
regions of space and parts of a temporal composition in exactly the same way. It
is also “blind” to traditional distinctions in scale: the user can cut and paste a
single pixel, an image, or a whole digital movie in the same way. And last, this
operation also renders insignificant traditional distinctions between media: “cut
and paste” can be applied to texts, still and moving images, sounds and 3D objects
in the same way.
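
A minimal sketch of this point (an illustration only, not the actual implementation of any particular GUI): a single cut-and-paste routine that is blind to media type and to scale, operating identically on text, pixel values and audio samples:

    def cut(data, start, end):
        # Remove and return a fragment, whatever the medium and whatever the scale.
        fragment = data[start:end]
        remainder = data[:start] + data[end:]
        return fragment, remainder

    def paste(data, fragment, position):
        return data[:position] + fragment + data[position:]

    text = "new media is programmable"
    pixels = [255, 0, 0, 0, 255, 0, 0, 0, 255]       # one row of RGB values
    samples = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]  # a fragment of a waveform

    for medium in (text, pixels, samples):
        piece, rest = cut(medium, 2, 5)              # the same operation for every medium
        print(paste(rest, piece, 0))
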
The interface comes to play a crucial role in the information society in yet
another way. In this society, not only do work and leisure activities increasingly
involve computer use, but they also converge around the same interfaces. Both
“work” applications (word processors, spreadsheet programs, database programs)
and “leisure” applications (computer games, informational DVDs) use the same
tools and metaphors of GUI. The best example of this convergence is a Web
browser employed both in the office and at home, both for work and for play. In
this respect the information society is quite different from industrial society, with its
clear separation between the field of work and the field of leisure. In the
nineteenth century Karl Marx imagined that a future communist state would
overcome this work-leisure divide as well as the highly specialized and piecemeal character of modern work itself. Marx's ideal citizen would be cutting wood
in the morning, gardening in the afternoon and composing music in the evening.
Now a subject of the information society is engaged in even more activities during a
typical day: inputting and analyzing data, running simulations, searching the
Internet, playing computer games, watching streaming video, listening to music
online, trading stocks, and so on. Yet in performing all these different activities
the user in essence is always using the same few tools and commands: a computer
screen and a mouse; a Web browser; a search engine; cut, paste, copy, delete and
find commands. (In the introduction to the “Forms” chapter I will discuss how the
two key new forms of new media — database and navigable space — can be also
understood in relation to the work--leisure opposition.)
If the human-computer interface becomes a key semiotic code of the
information society as well as its meta-tool, how does this affect the functioning
of cultural objects in general and art objects in particular? As I already noted
(“Principles of New Media,” 4.2), in computer culture it becomes common to
construct a number of different interfaces to the same “content.” For instance,
the same data can be represented as a 2D graph or as an interactive navigable
space. Or, a Web site may guide the user to different versions of the site
depending on the bandwidth of her Internet connection. (I will elaborate on this in
the “Database” section, where a new media object will be defined as one or more
interfaces to a multimedia database.) Given these examples, we may be tempted
to think of a new media artwork as also having two separate levels: content and
interface. Thus the old dichotomies content — form and content — medium can
be re-written as content — interface. But postulating such an opposition assumes
that the artwork’s content is independent of its medium (in an art-historical sense) or
its code (in a semiotic sense). Situated in some idealized medium-free realm,
content is assumed to exist before its material expression. These assumptions are
correct in the case of visualization of quantified data; they also apply to classical
art with its well-defined iconographic motifs and representational conventions.
But just as modern thinkers, from Whorf to Derrida, insisted on the “non-transparency of a code” idea, modern artists assumed that content and form can’t
be separated. In fact, from the 1910s “abstraction” to the 1960s “process,” artists
kept inventing concepts and procedures to ensure that they could not paint some pre-existent content.
This leaves us with an interesting paradox. Many new media artworks
have what can be called “an informational dimension,” the condition which they
share with all new media objects. Their experience includes retrieving, looking at
and thinking about quantified data. Therefore when we refer to such artworks we
are justified in separating the levels of content and interface. At the same time,
new media artworks have more traditional “experiential” or aesthetic dimensions,
which justifies their status as art rather than as information design. These
dimensions include a particular configuration of space, time, and surface
articulated in the work; a particular sequence of the user’s activities over time to
interact with the work; a particular formal, material and phenomenological user
experience. And it is the work’s interface that creates its unique materiality and
the unique user experience. To change the interface even slightly is to
dramatically change the work. From this perspective, to think of an interface as a
separate level, as something that can be arbitrarily varied, is to eliminate the status
of a new media artwork as art.
There is another way to think about the difference between new media
design and new media art in relation to the content — interface dichotomy. In
contrast to design, in art the connection between content and form (or, in the case
of new media, content and interface) is motivated. That is, the choice of a
particular interface is motivated by the work’s content to such a degree that it can no
longer be thought of as a separate level. Content and interface merge into one
entity, and can no longer be taken apart.
Finally, the idea of content pre-existing the interface is challenged in yet
another way by new media artworks which dynamically generate their data in real
time. While in a menu-based interactive multimedia application or a static Web
site all data already exists before the user accesses it, in dynamic new media
artworks the data is created on the fly, or, to use the new media lingo, at run time.
This can be accomplished in a variety of ways: procedural computer graphics,
formal language systems, Artificial Intelligence (AI) and Artificial Life (AL)
programming. All these methods share the same principle: a programmer sets up
some initial conditions, rules or procedures which control the computer program
generating the data. For the purposes of the present discussion, the most
interesting of these approaches are AL and the evolution paradigm. In the AL
approach, the interaction between a number of simple objects at run time leads to
the emergence of complex global behaviors. These behaviors can only be
obtained in the course of running the computer program; they can’t be predicted
beforehand. The evolution paradigm applies the metaphor of evolutionary theory
to the generation of images, shapes, animations and other media data. The initial
data supplied by the programmer acts as a genotype which is expanded into a full
phenotype by a computer. In either case, the content of an artwork is the result of
a collaboration between the artist/programmer and the computer program, or, if
the work is interactive, between the artist, the computer program and the user.
The new media artists who have most systematically explored the AL approach are the team of
Christa Sommerer and Laurent Mignonneau. In their installation "Life Spacies”
virtual organisms appear and evolve in response to the position, movement and
interactions of the visitors. Artist/programmer Karl Sims made the key
contribution to applying the evolution paradigm to media generation. In his
installation “Galapagos” the computer program generates twelve different virtual
organisms at every iteration; the visitors select an organism which will continue to
live, copulate, mutate and reproduce. Commercial products which use AL
and evolution approaches include computer games such as the Creatures series
(Mindscape Entertainment) and “virtual pet” toys such as the Tamagotchi.
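
A toy sketch may clarify the evolution paradigm (it is emphatically not the code behind "Galapagos" or "Life Spacies"; every detail is invented): a genotype of a few numbers is expanded into a crude phenotype, one organism is selected, and its offspring mutate:

    import random

    def phenotype(genotype):
        # Expand a genotype into a crude visible form: a string of repeated symbols.
        return "".join(symbol * count for symbol, count in genotype)

    def mutate(genotype):
        # Copy the genotype, randomly nudging one gene's count up or down.
        index = random.randrange(len(genotype))
        symbol, count = genotype[index]
        offspring = list(genotype)
        offspring[index] = (symbol, max(1, count + random.choice([-1, 1])))
        return offspring

    population = [[("*", 3), ("o", 2)], [("~", 4), ("#", 1)], [("+", 2), ("=", 3)]]
    for generation in range(3):
        for organism in population:
            print(generation, phenotype(organism))
        chosen = random.choice(population)           # the "visitor" is simulated randomly here
        population = [mutate(chosen) for _ in range(3)]
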
In organizing this book I wanted to highlight the importance of the
interface category by placing its discussion right at the beginning. The two
sections of this chapter present examples of different issues raised by this
category -- but they in no way exhaust it. In “The Language of Cultural Interfaces”
I introduce the term “cultural interfaces” to describe interfaces used by stand-alone hypermedia (CD-ROM and DVD titles), Web sites, computer games and
other cultural objects distributed via a computer. I think we need such a term
because as the role of the computer is shifting from being a tool to being a universal
media machine, we are increasingly "interfacing" to predominantly cultural data:
texts, photographs, films, music, multimedia documents, virtual environments.
Therefore, the human-computer interface is being supplemented by the human-computer-culture interface, which I abbreviate as “cultural interface.” The section then
discusses how the three cultural forms -- cinema, the printed word, and a
general-purpose human-computer interface — contributed to shaping the
appearance and functionality of cultural interfaces during the 1990s.
The second section “The Screen and the User” discusses the key element
of the modern interface — the computer screen. As in the first section, I am
interested in analyzing continuities between a computer interface and older
cultural forms, languages and conventions. The section positions the computer
screen within a longer historical tradition and traces different stages in the
development of this tradition: the static illusionistic image of Renaissance
painting; the moving image of the film screen; the real-time image of radar and
television; and the real-time interactive image of the computer screen.
The Language of Cultural Interfaces
Cultural Interfaces
The term human-computer interface (HCI) describes the ways in which the user
interacts with a computer. HCI includes physical input and output devices such as a
monitor, a keyboard, and a mouse. It also consists of metaphors used to
conceptualize the organization of computer data. For instance, the Macintosh
interface introduced by Apple in 1984 uses the metaphor of files and folders
arranged on a desktop. Finally, HCI also includes ways of manipulating this data,
i.e. a grammar of meaningful actions which the user can perform on it. Examples
of actions provided by modern HCI are copy, rename and delete a file; list
the contents of a directory; start and stop a computer program; set the computer’s date
and time.
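
This grammar of actions can be sketched with Python's standard library (an illustration added here, not part of the original text): copying, renaming and deleting a file, and listing the contents of a directory:

    import os, shutil, tempfile

    workdir = tempfile.mkdtemp()                      # a scratch "desktop"
    original = os.path.join(workdir, "notes.txt")
    with open(original, "w") as f:
        f.write("a file on the desktop")

    copied = shutil.copy(original, os.path.join(workdir, "notes_copy.txt"))  # copy a file
    renamed = os.path.join(workdir, "renamed.txt")
    os.rename(copied, renamed)                        # rename a file
    print(os.listdir(workdir))                        # list the contents of a directory
    os.remove(renamed)                                # delete a file
    print(os.listdir(workdir))
    shutil.rmtree(workdir)                            # clean up the scratch directory
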
The term HCI was coined when the computer was mostly used as a tool for
work. However, during the 1990s, the identity of the computer changed. In the
beginning of the decade, a computer was still largely thought of as a simulation of
a typewriter, a paintbrush or a drafting ruler -- in other words, as a tool used to
produce cultural content which, once created, would be stored and distributed in its
appropriate media: printed page, film, photographic print, electronic recording.
By the end of the decade, as Internet use became commonplace, the computer's
public image was no longer only that of a tool but also that of a universal media machine,
used not only to author, but also to store, distribute and access all media.
As distribution of all forms of culture becomes computer-based, we are
increasingly “interfacing” to predominantly cultural data: texts, photographs,
films, music, virtual environments. In short, we are no longer interfacing to a
computer but to culture encoded in digital form. I will use the term "cultural
interfaces" to describe the human-computer-culture interface: the ways in which
computers present cultural data and allow us to interact with it. Cultural interfaces
include the interfaces used by the designers of Web sites, CD-ROM and DVD
titles, multimedia encyclopedias, online museums and magazines, computer
games and other new media cultural objects.
If you need to remind yourself what a typical cultural interface looked like in
the second half of the 1990s, say 1997, go back in time and click on a random
Web page. You are likely to see something which graphically resembles a
magazine layout from the same decade. The page is dominated by text: headlines,
hyperlinks, blocks of copy. Within this text are a few media elements: graphics,
photographs, perhaps a QuickTime movie and a VRML scene. The page also
includes radio buttons and a pull-down menu which allows you to choose an item
from the list. Finally there is a search engine: type a word or a phrase, hit the
search button and the computer will scan through a file or a database trying to
match your entry.
For another example of a prototypical cultural interface of the 1990s, you
may load (assuming it would still run on your computer) the most well-known
CD-ROM of the 1990s — Myst (Broderbund, 1993). Its opening clearly recalls a
movie: credits slowly scroll across the screen, accompanied by a movie-like
soundtrack to set the mood. Next, the computer screen shows a book open in the
middle, waiting for your mouse click. Next, an element of a familiar Macintosh
interface makes an appearance, reminding you that along with being a new
movie/book hybrid, Myst is also a computer application: you can adjust sound
volume and graphics quality by selecting from a standard Macintosh-style menu at
the top of the screen. Finally, you are taken inside the game, where the
interplay between the printed word and cinema continues. A virtual camera frames
images of an island which dissolve between each other. At the same time, you
keep encountering books and letters, which take over the screen, providing you
with clues on how to progress in the game.
Given that computer media is simply a set of characters and numbers
stored in a computer, there are numerous ways in which it could be presented to a
user. Yet, as always happens with cultural languages, only a few of these
possibilities actually appear viable in a given historical moment. Just as early
fifteenth century Italian painters could only conceive of painting in a very
particular way — quite different from, say, sixteenth century Dutch painters —
today's digital designers and artists use a small set of action grammars and
metaphors out of a much larger set of all possibilities.
Why do cultural interfaces — Web pages, CD-ROM titles, computer
games — look the way they do? Why do designers organize computer data in
certain ways and not in others? Why do they employ some interface metaphors
and not others?
My theory is that the language of cultural interfaces is largely made up
of the elements of other, already familiar cultural forms. In the following I will
explore the contributions of three such forms to this language during its first
decade -- the 1990s. The three forms which I will focus on make their appearance in
the opening sequence of the already discussed prototypical new media object of
the 1990s — Myst. Its opening activates them before our eyes, one by one. The
first form is cinema. The second form is the printed word. The third form is a
general-purpose human-computer interface (HCI).
As should become clear from the following, I use the words "cinema" and
"printed word" as shortcuts. They stand not for particular objects, such as a film
or a novel, but rather for larger cultural traditions (we can also use such words as
cultural forms, mechanisms, languages or media). "Cinema" thus includes mobile
camera, representation of space, editing techniques, narrative conventions,
activity of a spectator -- in short, different elements of cinematic perception,
language and reception. Their presence is not limited to the twentieth-century
institution of fiction films; they can already be found in panoramas, magic lantern
slides, theater and other nineteenth-century cultural forms; similarly, since the
middle of the twentieth century, they are present not only in films but also in
television and video programs. In the case of the "printed word" I am also
referring to a set of conventions which have developed over many centuries (some
even before the invention of print) and which today are shared by numerous forms
of printed matter, from magazines to instruction manuals: a rectangular page
containing one or more columns of text; illustrations or other graphics framed by
the text; pages which follow each other sequentially; a table of contents and an index.
The modern human-computer interface has a much shorter history than the
printed word or cinema -- but it is still a history. Its principles such as direct
manipulation of objects on the screen, overlapping windows, iconic
representation, and dynamic menus were gradually developed over a few decades,
from the early 1950s to the early 1980s, when they finally appeared in
commercial systems such as the Xerox Star (1981), the Apple Lisa (1983), and most
importantly the Apple Macintosh (1984). Since then, they have become an
accepted convention for operating a computer, and a cultural language in their
own right.
Cinema, the printed word and human-computer interface: each of these
traditions has developed its own unique ways of how information is organized,
how it is presented to the user, how space and time are correlated with each other,
how human experience is being structured in the process of accessing
information. Pages of text and a table of contents; 3D spaces framed by a
rectangular frame which can be navigated using a mobile point of view;
hierarchical menus, variables, parameters, copy/paste and search/replace
operations -- these and other elements of these three traditions are shaping cultural
interfaces today. Cinema, the printed word and HCI: they are the three main
reservoirs of metaphors and strategies for organizing information which feed
cultural interfaces.
Bringing cinema, the printed word and HCI together and treating
them as occupying the same conceptual plane has an additional advantage -- a
theoretical bonus. It is only natural to think of them as belonging to two different
kinds of cultural species, so to speak. If HCI is a general-purpose tool which can be
used to manipulate any kind of data, both the printed word and cinema are less
general. They offer ways to organize particular types of data: text in the case of
print, audio-visual narrative taking place in a 3D space in the case of cinema. HCI
is a system of controls to operate a machine; the printed word and cinema are
cultural traditions, distinct ways to record human memory and human experience,
mechanisms for cultural and social exchange of information. Bringing HCI, the
printed word and cinema together allows us to see that the three have more in
common than we may anticipate at first. On the one hand, being a part of our
culture now for half a century, HCI already represents a powerful cultural
tradition, a cultural language offering its own ways to represent human memory
and human experience. This language speaks in the form of discrete objects
organized in hierarchies (hierarchical file system), or as catalogs (databases), or as
objects linked together through hyperlinks (hypermedia). On the other hand, we
begin to see that the printed word and cinema also can be thought of as interfaces,
even though historically they have been tied to particular kinds of data. Each has
its own grammar of actions, each comes with its own metaphors, each offers a
particular physical interface. A book or a magazine is a solid object consisting
of separate pages; the actions include going from page to page linearly,
marking individual pages and using the table of contents. In the case of cinema, its
physical interface is a particular architectural arrangement of a movie theater; its
metaphor is a window opening up into a virtual 3D space.
Today, as media is being "liberated" from its traditional physical storage
media — paper, film, stone, glass, magnetic tape — the elements of printed word
interface and cinema interface, which previously were hardwired to the content,
become "liberated" as well. A digital designer can freely mix pages and virtual
cameras, table of contents and screens, bookmarks and points of view. No longer
embedded within particular texts and films, these organizational strategies are
now free floating in our culture, available for use in new contexts. In this respect,
printed word and cinema have indeed become interfaces -- rich sets of metaphors,
ways of navigating through content, ways of accessing and storing data. For a
computer user, both conceptually and psychologically, their elements exist on the
same plane as radio buttons, pull-down menus, command line calls and other
elements of standard human-computer interface.
Let us now discuss some of the elements of these three cultural traditions -- cinema, the printed word and HCI -- to see how they have shaped the language
of cultural interfaces.
Printed Word
In the 1980's, as PCs and word processing software became commonplace, text
became the first cultural medium to be subjected to digitization in a massive way.
But already in the 1960's, two and a half decades before the concept of digital
media was born, researchers were thinking about having the sum total of human
written production -- books, encyclopedias, technical articles, works of fiction and
so on -- available online (Ted Nelson's Xanadu project).
Text is unique among media types. It plays a privileged role in
computer culture. On the one hand, it is one media type among others. But, on the
other hand, it is a meta-language of computer media, a code in which all other
media are represented: coordinates of 3D objects, pixel values of digital images,
the formatting of a page in HTML. It is also the primary means of communication
between a computer and a user: one types single line commands or runs computer
programs written in a subset of English; the other responds by displaying error
codes or text messages.
If a computer uses text as its meta-language, cultural interfaces in their
turn inherit the principles of text organization developed by human civilization
throughout its existence. One of these is a page: a rectangular surface containing a
limited amount of information, designed to be accessed in some order, and having
a particular relationship to other pages. In its modern form, the page was born in the
first centuries of the Christian era, when clay tablets and papyrus rolls were
replaced by the codex — a collection of written pages stitched together on one
side.
Cultural interfaces rely on our familiarity with the "page interface" while
also trying to stretch its definition to include new concepts made possible by a
computer. In 1984, Apple introduced a graphical user interface which presented
information in overlapping windows stacked behind one another — essentially, a
set of book pages. The user was given the ability to go back and forth between
these pages, as well as to scroll through individual pages. In this way, a traditional
page was redefined as a virtual page, a surface which can be much larger than the
limited surface of a computer screen. In 1987, Apple shipped the popular HyperCard
program, which extended the page concept in new ways. Now users were able
to include multimedia elements within the pages, as well as to establish links
between pages regardless of their ordering. A few years later, designers of HTML
stretched the concept of a page even more by enabling the creation of distributed
documents, where different parts of a document are located on different
computers connected through the network. With this development, a long process
of gradual "virtualization" of the page reached a new stage. Messages written on
clay tablets, which were almost indestructible, were replaced by ink on paper. Ink,
in its turn, was replaced by bits of computer memory, making characters on an
electronic screen. Now, with HTML, which allows parts of a single page to be
located on different computers, the page became even more fluid and unstable.
The conceptual development of the page in computer media can also be
read in a different way — not as a further development of a codex form, but as a
return to earlier forms such as the papyrus roll of ancient Egypt, Greece and
Rome. Scrolling through the contents of a computer window or a World Wide
Web page has more in common with unrolling a roll than with turning the pages of a modern
book. In the case of the Web of the 1990s, the similarity with a roll is even
stronger because the information is not available all at once, but arrives
sequentially, top to bottom, as though the roll is being unrolled.
A good example of how cultural interfaces stretch the definition of a page
while mixing together its different historical forms is the Web page created in
1997 by the British design collective antirom for HotWired RGB Gallery. The
designers have created a large surface containing rectangular blocks of texts in
different font sizes, arranged without any apparent order. The user is invited to
skip from one block to another moving in any direction. Here, the different
directions of reading used in different cultures are combined together in a single
page.
By the mid 1990's, Web pages included a variety of media types — but
they were still essentially traditional pages. Different media elements — graphics,
photographs, digital video, sound and 3D worlds — were embedded within
rectangular surfaces containing text. To that extent a typical Web page was
conceptually similar to a newspaper page which is also dominated by text, with
photographs, drawings, tables and graphs embedded in between, along with links
to other pages of the newspaper. VRML evangelists wanted to overturn this
hierarchy by imagining a future in which the World Wide Web is rendered as a
giant 3D space, with all the other media types, including text, existing within it.
Given that the history of a page stretches for thousands of years, I think it is
unlikely that it would disappear so quickly.
As the Web page became a new cultural convention of its own, its dominance
was challenged by two Web browsers created by artists — Web Stalker (1997) by
the I/O/D collective and Netomat (1999) by Maciej Wisniewski. Web Stalker
emphasizes the hypertextual nature of the Web. Instead of rendering standard
Web pages, it renders the networks of hyperlinks these pages embody. When a
user enters a URL for a particular page, Web Stalker displays all pages linked to
this page as a line graph. Netomat similarly refuses the page convention of the
Web. The user enters a word or a phrase, which is passed to search engines.
Netomat then extracts page titles, images, audio or any other media type, as
specified by the user, from the found pages and floats them across the computer
screen. As can be seen, both browsers refuse the page metaphor, instead
substituting their own metaphors: a graph showing the structure of links in the
case of Web Stalker, a flow of media elements in the case of Netomat.
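
A small sketch in the spirit of Web Stalker's approach (not its actual code; the sample page below is invented): instead of rendering a page, extract the network of hyperlinks it contains:

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            # Collect the target of every anchor tag instead of rendering the page.
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    page = '<html><body><a href="history.html">history</a> <a href="theory.html">theory</a></body></html>'
    parser = LinkExtractor()
    parser.feed(page)
    print("page.html ->", parser.links)   # the page reduced to its structure of links
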
While the 1990's Web browsers and other commercial cultural interfaces
have retained the modern page format, they also have come to rely on a new way
of organizing and accessing texts which has little precedent within the book tradition
— hyperlinking. We may be tempted to trace hyperlinking to earlier forms and
practices of non-sequential text organization, such as the Torah's interpretations
and footnotes, but it is actually fundamentally different from them. Both the
Torah's interpretations and footnotes imply a master-slave relationship between
one text and another. But in the case of hyperlinking as implemented by HTML
and earlier by Hypercard, no such relationship of hierarchy is assumed. The two
sources connected through a hyperlink have an equal weight; neither one
dominates the other. Thus the acceptance of hyperlinking in the 1980's can be
correlated with contemporary culture’s suspicion of all hierarchies, and preference
for the aesthetics of collage where radically different sources are brought together
within a single cultural object ("post-modernism").
Traditionally, texts encoded human knowledge and memory, instructed,
inspired, convinced and seduced their readers to adopt new ideas, new ways of
interpreting the world, new ideologies. In short, the printed word was linked to the
art of rhetoric. While it is probably possible to invent a new rhetoric of
hypermedia, which will use hyperlinking not to distract the reader from the
argument (as is often the case today), but instead to further convince her of the
argument's validity, the sheer existence and popularity of hyperlinking
exemplifies the continuing decline of the field of rhetoric in the modern era.
Ancient and medieval scholars classified hundreds of different rhetorical
figures. In the middle of the twentieth century linguist Roman Jakobson, under the
influence of the computer's binary logic, information theory and cybernetics, to which
he was exposed at MIT where he was teaching, radically reduced rhetoric to just
two figures: metaphor and metonymy. Finally, in the 1990's, hyperlinking on the World Wide
Web privileged the single figure of metonymy at the expense of
all others. The hypertext of the World Wide Web leads the reader from one text
to another, ad infinitum. Contrary to the popular image, in which computer media
collapses all human culture into a single giant library (which implies the existence
of some ordering system), or a single giant book (which implies a narrative
progression), it may be more accurate to think of the new media culture as an
infinite flat surface where individual texts are placed in no particular order, like
the Web page designed by antirom for HotWired. Expanding this comparison
further, we can note that Random Access Memory, the concept behind the group's
name, also implies the lack of hierarchy: any RAM location can be accessed as
quickly as any other. In contrast to the older storage media of book, film, and
magnetic tape, where data is organized sequentially and linearly, thus suggesting
the presence of a narrative or a rhetorical trajectory, RAM "flattens" the data.
Rather than seducing the user through the careful arrangement of arguments and
examples, points and counterpoints, changing rhythms of presentation (i.e., the
rate of data streaming, to use contemporary language), simulated false paths and
dramatically presented conceptual breakthroughs, cultural interfaces, like RAM
itself, bombard the user with all the data at once.
In the 1980's many critics described one of the key effects of "post-modernism" as that of spatialization: privileging space over time, flattening
historical time, refusing grand narratives. Computer media, which evolved
during the same decade, accomplished this spatialization quite literally. It
replaced sequential storage with random-access storage; hierarchical organization
of information with a flattened hypertext; psychological movement of narrative in
novel and cinema with physical movement through space, as witnessed by endless
computer animated fly-throughs or computer games such as Myst, Doom and
countless others (see “Navigable Space”). In short, time becomes a flat image or a
landscape, something to look at or navigate through. If there is a new rhetoric or
aesthetic which is possible here, it may have less to do with the ordering of time
by a writer or an orator, and more to do with spatial wandering. The hypertext reader is
like Robinson Crusoe, walking through the sand and water, picking up a
navigation journal, a rotten fruit, an instrument whose purpose he does not know;
leaving imprints in the sand, which, like computer hyperlinks, follow from one
found object to another.
Cinema
The printed word tradition, which initially dominated the language of cultural
interfaces, is becoming less important, while the part played by cinematic
elements is getting progressively stronger. This is consistent with a general trend
in modern society towards presenting more and more information in the form of
time-based audio-visual moving image sequences, rather than as text. As new
generations of both computer users and computer designers are growing up in a
media-rich environment dominated by television rather than by printed texts, it is
not surprising that they favor cinematic language over the language of print.
A hundred years after cinema's birth, cinematic ways of seeing the world,
of structuring time, of narrating a story, of linking one experience to the next, are
being extended to become the basic ways in which computer users access and
interact with all cultural data. In this way, the computer fulfills the promise of
cinema as a visual Esperanto which pre-occupied many film artists and critics in
the 1920s, from Griffith to Vertov. Indeed, millions of computer users
communicate with each other through the same computer interface. And, in
contrast to cinema where most of its "users" were able to "understand" cinematic
language but not "speak" it (i.e., make films), all computer users can "speak" the
language of the interface. They are active users of the interface, employing it to
perform many tasks: send email, organize their files, run various applications, and
so on.
The original Esperanto never became truly popular. But cultural interfaces
are widely used and are easily learned. We have an unprecedented situation in the
history of cultural languages: something which is designed by a rather small
group of people is immediately adopted by millions of computer users. How is it
possible that people around the world adopt today something which a 20-something programmer in Northern California has hacked together just the night
before? Shall we conclude that we are somehow biologically "wired" to the
interface language, the way we are "wired," according to the original hypothesis
of Noam Chomsky, to different natural languages?
The answer is of course no. Users are able to "acquire" new cultural
languages, be it cinema a hundred years ago, or cultural interfaces today, because
these languages are based on previous and already familiar cultural forms. In the
case of cinema, it was theater, magic lantern shows and other nineteenth century
forms of public entertainment. Cultural interfaces in their turn draw on older
cultural forms such as the printed word and cinema. I have already discussed
some ways in which the printed word tradition structures interface language; now
it is cinema's turn.
I will begin with probably the most important case of cinema's influence
on cultural interfaces — the mobile camera. Originally developed as part of 3D
computer graphics technology for such applications as computer-aided design,
flight simulators and computer movie making, during the 1980's and 1990's the
camera model became as much of an interface convention as scrollable windows
or cut and paste operations. It became an accepted way for interacting with any
data which is represented in three dimensions — which, in a computer culture,
means literally anything and everything: the results of a physical simulation, an
architectural site, design of a new molecule, statistical data, the structure of a
computer network and so on. As computer culture is gradually spatializing all
representations and experiences, they become subjected to the camera's particular
grammar of data access. Zoom, tilt, pan and track: we now use these operations to
interact with data spaces, models, objects and bodies.
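
This grammar of data access can be sketched as a virtual camera object (an invented illustration, not an actual software package) whose zoom, tilt, pan and track operations could be pointed at any data that has been given three-dimensional coordinates:

    from dataclasses import dataclass

    @dataclass
    class VirtualCamera:
        x: float = 0.0
        y: float = 0.0
        z: float = 10.0
        pan_angle: float = 0.0      # rotation around the vertical axis, in degrees
        tilt_angle: float = 0.0     # rotation up or down, in degrees
        field_of_view: float = 60.0

        def pan(self, degrees):
            self.pan_angle += degrees
        def tilt(self, degrees):
            self.tilt_angle += degrees
        def zoom(self, factor):
            self.field_of_view /= factor   # narrowing the view simulates zooming in
        def track(self, dx, dy, dz=0.0):
            self.x += dx; self.y += dy; self.z += dz

    camera = VirtualCamera()
    camera.pan(30); camera.tilt(-10); camera.zoom(2.0); camera.track(1.5, 0.0)
    print(camera)   # the same operations could frame a molecule, a statistic or a page of text
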
Abstracted from its historical temporary "imprisonment" within the
physical body of a movie camera directed at physical reality, a virtualized camera
also becomes an interface to all types of media and information besides 3D space.
As an example, consider the GUI of the leading computer animation software —
PowerAnimator from Alias/Wavefront. In this interface, each window,
regardless of whether it displays a 3D model, a graph or even plain text, contains
Dolly, Track and Zoom buttons. It is particularly important that the user is
expected to dolly and pan over text as if it were a 3D scene. In this interface,
cinematic vision triumphed over the print tradition, with the camera subsuming
the page. The Gutenberg galaxy turned out to be just a subset of the Lumières'
universe.
Another feature of cinematic perception which persists in cultural
interfaces is a rectangular framing of represented reality. Cinema itself inherited
this framing from Western painting. Since the Renaissance, the frame acted as a
window onto a larger space which was assumed to extend beyond the frame. This
space was cut by the frame's rectangle into two parts: "onscreen space," the part
which is inside the frame, and the part which is outside. In the famous formulation
of Leon-Battista Alberti, the frame acted as a window onto the world. Or, in a
more recent formulation of the French film theorist Jacques Aumont and his co-authors, "The onscreen space is habitually perceived as included within a more
vast scenographic space. Even though the onscreen space is the only visible part,
this larger scenographic part is nonetheless considered to exist around it."
Just as a rectangular frame of painting and photography presents a part of
a larger space outside it, a window in HCI presents a partial view of a larger
document. But if in painting (and later in photography), the framing chosen by an
artist was final, the computer interface benefits from a new invention introduced by
cinema: the mobility of the frame. As a kino-eye moves around the space
revealing its different regions, so can a computer user scroll through a window's
contents.
It is not surprising to see that screen-based interactive 3D environments,
such as VRML worlds, also use cinema's rectangular framing since they rely on
other elements of cinematic vision, specifically a mobile virtual camera. It may be
more surprising to realize that the Virtual Reality (VR) interface, often promoted as
the most "natural" interface of all, utilizes the same framing. As in cinema, the
world presented to a VR user is cut by a rectangular frame. As in cinema, this
frame presents a partial view of a larger space. As in cinema, the virtual camera
moves around to reveal different parts of this space.
Of course, the camera is now controlled by the user and in fact is
identified with his/her own sight. Yet, it is crucial that in VR one is seeing the
virtual world through a rectangular frame, and that this frame always presents
only a part of a larger whole. This frame creates a distinct subjective experience
which is much closer to cinematic perception than to unmediated sight.
Interactive virtual worlds, whether accessed through a screen-based or a
VR interface, are often discussed as the logical successor to cinema, as potentially
the key cultural form of the twenty-first century, just as cinema was the key
cultural form of the twentieth century. These discussions usually focus on the
issues of interaction and narrative. So, the typical scenario for twenty-first century
cinema involves a user represented as an avatar existing literally "inside" the
narrative space, rendered with photorealistic 3D computer graphics, interacting
with virtual characters and perhaps other users, and affecting the course of
narrative events.
It is an open question whether this and similar scenarios commonly
invoked in new media discussions of the 1990's indeed represent an extension of
cinema or if they rather should be thought of as a continuation of some theatrical
traditions, such as improvisational or avant-garde theater. But what undoubtedly
can be observed in the 1990's is how virtual technology's dependence on cinema's
mode of seeing and language is becoming progressively stronger. This coincides
with the move from proprietary and expensive VR systems to more widely
available and standardized technologies, such as VRML (Virtual Reality
Modeling Language). (The following examples refer to a particular VRML
browser — WebSpace Navigator 1.1 from SGI. Other VRML browsers have
similar features.)
The creator of a VRML world can define a number of viewpoints which
are loaded with the world. These viewpoints automatically appear in a special
menu in a VRML browser which allows the user to step through them, one by
one. Just as in cinema, ontology is coupled with epistemology: the world is
designed to be viewed from particular points of view. The designer of a virtual
world is thus a cinematographer as well as an architect. The user can wander
around the world or she can save time by assuming the familiar position of a
cinema viewer for whom the cinematographer has already chosen the best
viewpoints.
Equally interesting is another option which controls how a VRML browser
moves from one viewpoint to the next. By default, the virtual camera smoothly
travels through space from the current viewpoint to the next as though on a dolly,
its movement automatically calculated by the software. Selecting the "jump cuts"
option makes it cut from one view to the next. Both modes are obviously derived
from cinema. Both are more efficient than trying to explore the world on one's own.
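The navigation model described in the two preceding paragraphs, a list of author-defined viewpoints that the browser either dollies between smoothly or cuts between directly, amounts to a small algorithm. The following Python sketch is only a hypothetical illustration of that logic; the viewpoint names, coordinates, and function names are invented for the example and are not taken from any actual VRML browser.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    name: str
    position: tuple     # (x, y, z) camera position chosen by the world's designer
    orientation: tuple  # axis-angle rotation, as in a VRML Viewpoint node

# Viewpoints defined by the author of the world, in the order they appear in the
# browser menu (all values invented for this illustration).
VIEWPOINTS = [
    Viewpoint("Entrance", (0.0, 1.6, 10.0), (0, 1, 0, 0.0)),
    Viewpoint("Overview", (0.0, 12.0, 0.0), (1, 0, 0, -1.57)),
    Viewpoint("Detail",   (2.5, 1.6, 1.0),  (0, 1, 0, 0.8)),
]

def interpolate(a, b, t):
    """Linearly interpolate between two coordinate tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def move_camera(current, target, jump_cut=False, steps=30):
    """Yield successive camera positions: either a straight cut or a smooth 'dolly' move."""
    if jump_cut:
        yield target.position  # cut: the next rendered frame is already at the new viewpoint
        return
    for i in range(1, steps + 1):
        yield interpolate(current.position, target.position, i / steps)  # automatic smooth travel

# Stepping through the designer's viewpoints "one by one," as from the browser's menu.
camera = VIEWPOINTS[0]
for next_viewpoint in VIEWPOINTS[1:]:
    for position in move_camera(camera, next_viewpoint, jump_cut=False):
        pass  # a real browser would render a frame of the world from this position
    camera = next_viewpoint
```

In either mode the trajectory is computed by the software rather than performed by the user, which is exactly the subordination of the eye to the kino-eye that the next paragraph describes.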
With a VRML interface, nature is firmly subsumed under culture. The eye
is subordinated to the kino-eye. The body is subordinated to a virtual body of a
virtual camera. While the user can investigate the world on her own, freely
selecting trajectories and viewpoints, the interface privileges cinematic perception
— cuts, pre-computed dolly-like smooth motions of a virtual camera, and preselected viewpoints.
The area of computer culture where the cinematic interface is being
transformed into a cultural interface most aggressively is computer games. By the
1990's, game designers had moved from two to three dimensions and had begun
to incorporate cinematic language in an increasingly systematic fashion. Games
started featuring lavish opening cinematic sequences (called in the game business
"cinematics") to set the mood, establish the setting and introduce the narrative.
Frequently, the whole game would be structured as an oscillation between
interactive fragments requiring the user's input and non-interactive cinematic
sequences, i.e. "cinematics." As the decade progressed, game designers were
creating increasingly complex — and increasingly cinematic — interactive virtual
worlds. Regardless of genre — action/adventure, fighting, flight
simulator, first-person action, racing or simulation — games came to rely on
cinematography techniques borrowed from traditional cinema, including the
expressive use of camera angles and depth of field, and dramatic lighting of 3D
computer-generated sets to create mood and atmosphere. In the beginning of the
decade, many games such as The 7th Guest (Trilobyte, 1993) or Voyeur (1994)
used digital video of actors superimposed over 2D or 3D backgrounds, but by its
end they had switched to fully synthetic characters rendered in real time. This
switch allowed game designers to go beyond the branching-type structure of earlier
games based on digital video, where all the possible scenes had to be taped
beforehand. In contrast, 3D characters animated in real time move arbitrarily
around the space, and the space itself can change during the game. (For instance,
when a player returns to an already visited area, she will find any objects she left
there earlier.) This switch also made virtual worlds more cinematic, as the
characters could be better visually integrated with their environments.
A particularly important example of how computer games use — and
extend — cinematic language is their implementation of a dynamic point of view.
In driving and flying simulators and in combat games such as Tekken 2 (Namco,
1994-), after a certain event takes place (a car crashes, a fighter is knocked
down), it is automatically replayed from a different point of view. Other games
such as the Doom series (Id Software, 1993 -) and Dungeon Keeper (Bullfrog
Productions, 1997) allow the user to switch between the point of view of the hero
and a top-down "bird's eye" view. The designers of online virtual worlds such as
Active Worlds provide their users with similar capabilities. Finally, Nintendo
went even further by dedicating four buttons on their N64 joypad to controlling
the view of the action. While playing Nintendo games such as Super Mario 64
(Nintendo, 1996) the user can continuously adjust the position of the camera.
Some Sony PlayStation games such as Tomb Raider (Eidos, 1996) also use the
buttons on the PlayStation joypad for changing the point of view. Some games such as
Myth: The Fallen Lords (Bungie, 1997) go further, using an AI engine (computer
code which controls the simulated “life” in the game, such as human characters
the player encounters) to automatically control their camera.
The incorporation of virtual camera controls into the very hardware of
game consoles is truly a historic event. Directing the virtual camera becomes as
important as controlling the hero's actions. This is admitted by the game industry
itself. For instance, a package for Dungeon Keeper lists four key features of the
game, out of which the first two concern control over the camera: "switch your
perspective," "rotate your view," "take on your friend," "unveil hidden levels." In
games such as this one, cinematic perception functions as the subject in its own
right. Here, computer games return to the "New Vision" movement
of the 1920s (Moholy-Nagy, Rodchenko, Vertov and others), which foregrounded
the new mobility of the photo and film camera and made unconventional points of
view a key part of its poetics.
The fact that computer games and virtual worlds continue to encode, step
by step, the grammar of a kino-eye in software and in hardware is not an accident.
This encoding is consistent with the overall trajectory driving the computerization
of culture since the 1940's, namely the automation of all cultural operations.
This automation gradually moves from basic to more complex operations: from
image processing and spell checking to software-generated characters, 3D worlds,
and Web sites. The side effect of this automation is that once particular cultural
codes are implemented in low-level software and hardware, they are no longer
seen as choices but as unquestionable defaults. To take the automation of imaging
as an example, in the early 1960's the newly emerging field of computer graphics
incorporated a linear one-point perspective in 3D software, and later directly in
hardware. As a result, linear perspective became the default mode of vision in
computer culture, be it computer animation, computer games, visualization or
VRML worlds. Now we are witnessing the next stage of this process: the
translation of the cinematic grammar of points of view into software and hardware.
As Hollywood cinematography is translated into algorithms and computer chips,
its conventions become the default method of interacting with any data subjected
to spatialization, with a narrative, and with other human beings. (At SIGGRAPH
'97 in Los Angeles, one of the presenters called for the incorporation of
Hollywood-style editing in multi-user virtual worlds software. In such an
implementation, user interaction with other avatar(s) would be automatically
rendered using classical Hollywood conventions for filming dialogue.) To use the
terms from the 1996 paper authored by Microsoft researchers and entitled “The
Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control
and Directing,” the goal of such research is to encode “cinematographic expertise,”
translating “heuristics of filmmaking” into computer software and hardware.
Element by element, cinema is being poured into a computer: first one-point
linear perspective; next the mobile camera and a rectangular window; next
cinematography and editing conventions, and, of course, digital personas also
based on acting conventions borrowed from cinema, to be followed by make-up,
set design, and the narrative structures themselves. From one cultural language
among others, cinema is becoming the cultural interface, a toolbox for all cultural
communication, overtaking the printed word.
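The goal named in the Microsoft paper, encoding "cinematographic expertise" and "heuristics of filmmaking" in software, can be made concrete with a small sketch. The Python fragment below is not the Virtual Cinematographer algorithm itself; it is a hypothetical illustration, with invented rules and function names, of what it means to turn one Hollywood convention (shot/reverse-shot coverage of a dialogue) into executable code.

```python
# A hypothetical rule set that picks a camera for each line of dialogue between two
# avatars, alternating over-the-shoulder shots and periodically cutting to a wider
# two-shot. The rules and names are invented for illustration only.

def choose_shot(speaker, lines_since_wide):
    """Return a camera label for the current line of dialogue."""
    if lines_since_wide >= 4:          # heuristic: re-establish the scene periodically
        return "two-shot"
    if speaker == "A":
        return "over-shoulder-of-B"    # frame the speaker over the listener's shoulder
    return "over-shoulder-of-A"

def film_dialogue(lines):
    """lines: list of (speaker, text) pairs. Returns the shot chosen for each line."""
    shots, lines_since_wide = [], 0
    for speaker, _text in lines:
        shot = choose_shot(speaker, lines_since_wide)
        lines_since_wide = 0 if shot == "two-shot" else lines_since_wide + 1
        shots.append(shot)
    return shots

if __name__ == "__main__":
    conversation = [("A", "Hello."), ("B", "Hi."), ("A", "How are you?"),
                    ("B", "Fine."), ("A", "Good."), ("B", "Indeed.")]
    print(film_dialogue(conversation))
```

Once a rule of this kind is buried in software or hardware, the user no longer experiences it as a choice; the cut simply happens, which is precisely the shift from option to unquestionable default described above.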
Cinema, the major cultural form of the twentieth century, has found a new
life as the toolbox of a computer user. Cinematic means of perception, of
connecting space and time, of representing human memory, thinking, and
emotions become a way of work and a way of life for millions in the computer
age. Cinema's aesthetic strategies have become basic organizational principles of
computer software. The window into a fictional world of a cinematic narrative has
become a window into a datascape. In short, what was cinema has become the human-computer interface.
I will conclude this section by discussing a few artistic projects which, in
different ways, offer alternatives to this trajectory. To summarize it once again,
the trajectory involves the gradual translation of elements and techniques of cinematic
perception and language into a de-contextualized set of tools to be used as an
interface to any data. In the process of this translation, cinematic perception is
divorced from its original material embodiment (camera, film stock), as well as
from the historical contexts of its formation. If in cinema the camera functioned as
a material object, co-existing, spatially and temporally, with the world it was
showing us, it has now become a set of abstract operations. The art projects
described below refuse this separation of cinematic vision from the material
world. They reunite perception and material reality by making the camera and
what it records a part of a virtual world's ontology. They also refuse the
universalization of cinematic vision by computer culture, which (just as postmodern visual culture in general) treats cinema as a toolbox, a set of "filters"
which can be used to process any input. In contrast, each of these projects
employs a unique cinematic strategy which has a specific relation to the particular
virtual world it reveals to the user.
In The Invisible Shape of Things Past Joachim Sauter and Dirk
Lüsenbrink of the Berlin-based Art+Com collective created a truly innovative
cultural interface for accessing historical data about Berlin. The
interface de-virtualizes cinema, so to speak, by placing the records of cinematic
vision back into their historical and material context. As the user navigates
through a 3D model of Berlin, he or she comes across elongated shapes lying on
city streets. These shapes, which the authors call "filmobjects", correspond to
documentary footage recorded at the corresponding points in the city. To create
each shape the original footage is digitized and the frames are stacked one after
another in depth, with the original camera parameters determining the exact
shape. The user can view the footage by clicking on the first frame. As the frames
are displayed one after another, the shape gets correspondingly thinner.
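The construction of a "filmobject," digitized frames stacked one behind another in depth along the camera's recorded path, can also be read as a simple data structure. The NumPy sketch below is a hypothetical illustration built on invented placeholder data; it is not the Art+Com implementation.

```python
import numpy as np

# Placeholder inputs, invented for this illustration: a clip of 120 grayscale frames
# (64 x 64 pixels each) and a recorded camera position for every frame.
frames = np.random.rand(120, 64, 64)                  # (time, height, width)
camera_path = np.cumsum(np.random.rand(120, 3), 0)    # (time, xyz): a wandering camera

# Stacking the frames "one after another in depth": each frame becomes one slab of an
# elongated object, placed along the camera's trajectory, so that the path's shape
# determines the shape of the object.
filmobject = [
    {"position": camera_path[t], "image": frames[t]}  # one "page" of the spatial book
    for t in range(frames.shape[0])
]

def view_next(obj):
    """Return the next frame for display; each frame viewed is removed from the stack,
    so the remaining shape gets correspondingly thinner."""
    return obj.pop(0)["image"] if obj else None
```

Read this way, the object literally is the book of stacked pages that the next paragraph describes.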
In keeping with the already noted general trend of computer culture
towards spatialization of every cultural experience, this cultural interface
spatializes time, representing it as a shape in a 3D space. This shape can be
thought of as a book, with individual frames stacked one after another as book
pages. The trajectory through time and space taken by a camera becomes a book
to be read, page by page. The records of the camera's vision become material objects,
sharing the space with the material reality which gave rise to this vision. Cinema
is solidified. This project, then, can also be understood as a virtual monument to
cinema. The (virtual) shapes situated around the (virtual) city remind us of
the era when cinema was the defining form of cultural expression — as opposed
to a toolbox for data retrieval and use, as it is becoming today in a computer.
Hungarian-born artist Tamás Waliczky openly refuses the default mode of
vision imposed by computer software, that of the one-point linear perspective.
Each of his computer animated films The Garden (1992), The Forest (1993) and
The Way (1994) utilizes a particular perspectival system: a water-drop
perspective in The Garden, a cylindrical perspective in The Forest and a reverse
perspective in The Way. Working with computer programmers, the artist created
custom-made 3D software to implement these perspectival systems. Each of the
systems has an inherent relationship to the subject of the film in which it is used. In
The Garden, the subject is the perspective of a small child, for whom the world
does not yet have an objective existence. In The Forest, the mental trauma of
emigration is transformed into the endless roaming of a camera through the forest
which is actually just a set of transparent cylinders. Finally, in The Way, the self-
sufficiency and isolation of a Western subject are conveyed by the use of a
reverse perspective.
In Waliczky's films the camera and the world are made into a single
whole, whereas in The Invisible Shape of Things Past the records of the camera
are placed back into the world. Rather than simply subjecting his virtual worlds to
different types of perspectival projection, Waliczky modified the spatial structure
of the worlds themselves. In The Garden, a child playing in a garden becomes the
center of the world; as he moves around, the actual geometry of all the objects
around him is transformed, with objects getting bigger as he gets closer to them. To
create The Forest, a number of cylinders were placed inside each other, each
cylinder mapped with a picture of a tree, repeated a number of times. In the film,
we see a camera moving through this endless static forest in a complex spatial
trajectory — but this is an illusion. In reality, the camera does move, but the
architecture of the world is constantly changing as well, because each cylinder is
rotating at its own speed. As a result, the world and its perception are fused
together.
HCI: Representation versus Control
The development of the human-computer interface, until recently, had little to do
with the distribution of cultural objects. Reviewing the main applications of the computer
from the 1940's until the early 1980's, when the current generation of GUI was
developed and reached the mass market together with the rise of the PC (personal
computer), we can list the most significant: real-time control of weapons and
weapon systems; scientific simulation; computer-aided design; and, finally, office
work, with a secretary as the prototypical computer user filing documents in a
folder, emptying a trash can, and creating and editing documents ("word processing").
Today, as the computer is starting to host very different applications for access
and manipulation of cultural data and cultural experiences, their interfaces still
rely on old metaphors and action grammars. Thus, cultural interfaces predictably
use elements of a general-purpose HCI such as scrollable windows containing text
and other data types, hierarchical menus, dialogue boxes, and command-line
input. For instance, a typical "art collection" CD-ROM may try to recreate "the
museum experience" by presenting a navigable 3D rendering of a museum space,
while still resorting to hierarchical menus to allow the user to switch between
different museum collections. Even in the case of The Invisible Shape of Things
Past, whose unique interface solution of "filmobjects" is not directly
traceable to either old cultural forms or general-purpose HCI, the designers
still rely on an HCI convention in one case: the use of a pull-down menu to
switch between different maps of Berlin.
In their important study of new media Remediation, Jay David Bolter and
Richard Grusin define a medium as “that which remediates.” In contrast to the
modernist view, which aims to define the essential properties of every medium, Bolter
and Grusin propose that all media work by “remediating,” i.e. translating,
refashioning, and reforming other media, both on the levels of content and form.
If we are to think of the human-computer interface as another medium, its history and
present development definitely fit this thesis. The history of the human-computer
interface is that of borrowing and reformulating, or, to use new media lingo,
reformatting other media, both past and present: the printed page, film, television.
But along with borrowing the conventions of most other media and eclectically
combining them, HCI designers also heavily borrowed the “conventions” of the
human-made physical environment, beginning with the Macintosh's use of the desktop
metaphor. And, more than any medium before it, HCI is like a chameleon which
keeps changing its appearance, responding to how computers are used in any
given period. For instance, if in the 1970s the designers at Xerox PARC modeled
the first GUI on the office desk because they imagined that the computer they were
designing would be used in the office, in the 1990s the primary use of the computer as
a media access machine led to the borrowing of the interfaces of already familiar media
devices, such as VCR or audio CD player controls.
In general, cultural interfaces of the 1990's try to walk an uneasy path
between the richness of control provided in general-purpose HCI and an
"immersive" experience of traditional cultural objects such as books and movies.
Modern general-purpose HCI, be it Mac OS, Windows or UNIX, allows its
users to perform complex and detailed actions on computer data: get information
about an object, copy it, move it to another location, change the way data is
displayed, etc. In contrast, a conventional book or a film positions the user inside
an imaginary universe whose structure is fixed by the author. Cultural interfaces
attempt to mediate between these two fundamentally different and ultimately incompatible approaches.
As an example, consider how cultural interfaces conceptualize the
computer screen. If a general-purpose HCI clearly identifies to the user that
certain objects can be acted on while others cannot (icons representing files but
not the desktop itself), cultural interfaces typically hide the hyperlinks within a
continuous representational field. (This technique was already so widely accepted
by the 1990's that the designers of HTML offered it to users early on by
implementing the "imagemap" feature.) The field can be a two-dimensional
collage of different images, a mixture of representational elements and abstract
textures, or a single image of a space such as a city street or a landscape. By trial
and error, clicking all over the field, the user discovers that some parts of this
field are hyperlinks. This concept of a screen combines two distinct pictorial
conventions: the older Western tradition of pictorial illusionism in which a screen
functions as a window into a virtual space, something for the viewer to look into
but not to act upon; and the more recent convention of graphical human-computer
interfaces which, by dividing the computer screen into a set of controls with
clearly delineated functions, essentially treats it as a virtual instrument panel. As a
result, the computer screen becomes a battlefield for a number of incompatible
definitions: depth and surface, opaqueness and transparency, image as an
illusionary space and image as an instrument for action.
The computer screen also functions both as a window into an illusionary
space and as a flat surface carrying text labels and graphical icons. We can relate
this to a similar understanding of a pictorial surface in the Dutch art of the
seventeenth century, as analyzed by the art historian Svetlana Alpers in her classic
The Art of Describing. Alpers discusses how a Dutch painting of this period
functioned as a combined map / picture, combining different kinds of information
and knowledge of the world.
Here is another example of how cultural interfaces try to find a middle
ground between the conventions of general-purpose HCI and the conventions of
traditional cultural forms. Again we encounter tension and struggle — in this
case, between standardization and originality. One of the main principles of
modern HCI is the consistency principle. It dictates that menus, icons, dialogue boxes
and other interface elements should be the same in different applications. The user
knows that every application will contain a "file" menu, or that if she encounters
an icon which looks like a magnifying glass it can be used to zoom in on documents.
In contrast, modern culture (including its "post-modern" stage) stresses
originality: every cultural object is supposed to be different from the rest, and if it
is quoting other objects, these quotes have to be defined as such. Cultural
interfaces try to accommodate both the demand for consistency and the demand
for originality. Most of them contain the same set of interface elements with
standard semantics, such as "home," "forward" and "backward" icons. But
because every Web site and CD-ROM is striving to have its own distinct design,
these elements are always designed differently from one product to the next. For
instance, many games such as Warcraft II (Blizzard Entertainment, 1996) and
Dungeon Keeper give their icons a "historical" look consistent with the mood of
an imaginary universe portrayed in the game.
The language of cultural interfaces is a hybrid. It is a strange, often
awkward mix between the conventions of traditional cultural forms and the
conventions of HCI — between an immersive environment and a set of controls;
between standardization and originality. Cultural interfaces try to balance the
concept of a surface in painting, photography, cinema, and the printed page as
something to be looked at, glanced at, read, but always from some distance,
without interfering with it, with the concept of the surface in a computer interface
as a virtual control panel, similar to the control panel on a car, plane or any other
complex machine. Finally, on yet another level, the traditions of the printed
word and of cinema also compete with each other. One pulls the computer
screen towards being a dense and flat information surface, while the other wants it to
become a window into a virtual space.
To see that this hybrid language of the cultural interfaces of the 1990s
represents only one historical possibility, consider a very different scenario.
Potentially, cultural interfaces could completely rely on already existing
metaphors and action grammars of a standard HCI, or, at least, rely on them much
more than they actually do. They don't have to "dress up" HCI with custom icons
and buttons, or hide links within images, or organize the information as a series of
pages or a 3D environment. For instance, texts can be presented simply as files
inside a directory, rather than as a set of pages connected by custom-designed
icons. This strategy of using standard HCI to present cultural objects is
encountered quite rarely. In fact, I am aware of only one project which uses it
completely consciously, as a thought-through choice rather than out of necessity: a
CD-ROM by Gerald Van Der Kaap entitled BlindRom V.0.9. (Netherlands,
1993). The CD-ROM includes a standard-looking folder named "Blind Letter."
Inside the folder there are a large number of text files. You don't have to learn yet
another cultural interface, search for hyperlinks hidden in images or navigate
through a 3D environment. Reading these files requires simply opening them in
standard Macintosh SimpleText, one by one. This simple technique works very
well. Rather than distracting the user from experiencing the work, the computer
interface becomes part and parcel of the work. Opening these files, I felt that I
was in the presence of a new literary form for a new medium, perhaps the real
medium of a computer — its interface.
As the examples analyzed here illustrate, cultural interfaces try to create
their own language rather than simply using general-purpose HCI. In doing so,
these interfaces try to negotiate between metaphors and ways of controlling a
computer developed in HCI, and the conventions of more traditional cultural
forms. Indeed, neither extreme is ultimately satisfactory by itself. It is one thing to
use a computer to control a weapon or to analyze statistical data, and it is another
to use it to represent cultural memories, values and experiences. The interfaces
developed for the computer in its functions as a calculator, control mechanism, or
communication device are not necessarily suitable for a computer playing the role
of a cultural machine. Conversely, if we simply mimic the existing conventions of
older cultural forms such as the printed word and cinema, we will not take
advantage of all the new capacities offered by a computer: its flexibility in
displaying and manipulating data, interactive control by the user, the ability to run
simulations, etc.
Today the language of cultural interfaces is in its early stage, as was the
language of cinema a hundred years ago. We don't know what the final result will
be, or even if it will ever completely stabilize. Both the printed word and cinema
eventually achieved stable forms which underwent little change for long periods
of time, in part because of the material investments in their means of production
and distribution. Given that computer language is implemented in software,
potentially it can keep on changing forever. But there is one thing we can be sure
of. We are witnessing the emergence of a new cultural meta-language, something
which will be at least as significant as the printed word and cinema before it.
Is Google Making Us Stupid?
Nicholas Carr
What the Internet is doing to our brains
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the
supercomputer HAL pleads with the implacable astronaut Dave
Bowman in a famous and weirdly poignant scene toward the end of
Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been
sent to a deep-space death by the malfunctioning machine, is calmly,
coldly disconnecting the memory circuits that control its artificial “brain.”
“Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can
feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable
sense that someone, or something, has been tinkering with my brain,
remapping the neural circuitry, reprogramming the memory. My mind
isn’t going—so far as I can tell—but it’s changing. I’m not thinking the
way I used to think. I can feel it most strongly when I’m reading.
Immersing myself in a book or a lengthy article used to be easy. My
mind would get caught up in the narrative or the turns of the argument,
and I’d spend hours strolling through long stretches of prose. That’s
rarely the case anymore. Now my concentration often starts to drift after
two or three pages. I get fidgety, lose the thread, begin looking for
something else to do. I feel as if I’m always dragging my wayward brain
back to the text. The deep reading that used to come naturally has
become a struggle.
I think I know what’s going on. For more than a decade now, I’ve been
spending a lot of time online, searching and surfing and sometimes
adding to the great databases of the Internet. The Web has been a
godsend to me as a writer. Research that once required days in the
stacks or periodical rooms of libraries can now be done in minutes. A
few Google searches, some quick clicks on hyperlinks, and I’ve got the
telltale fact or pithy quote I was after. Even when I’m not working, I’m
as likely as not to be foraging in the Web’s info-thickets, reading and
writing e-mails, scanning headlines and blog posts, watching videos and
listening to podcasts, or just tripping from link to link to link. (Unlike
footnotes, to which they’re sometimes likened, hyperlinks don’t merely
point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the
conduit for most of the information that flows through my eyes and ears
and into my mind. The advantages of having immediate access to such
an incredibly rich store of information are many, and they’ve been
widely described and duly applauded. “The perfect recall of silicon
memory,” Wired’s Clive Thompson has written, “can be an enormous
boon to thinking.” But that boon comes at a price. As the media theorist
Marshall McLuhan pointed out in the 1960s, media are not just passive
channels of information. They supply the stuff of thought, but they also
shape the process of thought. And what the Net seems to be doing is
chipping away my capacity for concentration and contemplation. My
mind now expects to take in information the way the Net distributes it:
in a swiftly moving stream of particles. Once I was a scuba diver in the
sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to
friends and acquaintances—literary types, most of them—many say
they’re having similar experiences. The more they use the Web, the
more they have to fight to stay focused on long pieces of writing. Some
of the bloggers I follow have also begun mentioning the phenomenon.
Scott Karp, who writes a blog about online media, recently confessed
that he has stopped reading books altogether. “I was a lit major in
college, and used to be [a] voracious book reader,” he wrote. “What
happened?” He speculates on the answer: “What if I do all my reading
on the web not so much because the way I read has changed, i.e. I’m just
seeking convenience, but because the way I THINK has changed?”
Bruce Friedman, who blogs regularly about the use of computers in
medicine, also has described how the Internet has altered his mental
habits. “I now have almost totally lost the ability to read and absorb a
longish article on the web or in print,” he wrote earlier this year. A
pathologist who has long been on the faculty of the University of
Michigan Medical School, Friedman elaborated on his comment in a
telephone conversation with me. His thinking, he said, has taken on a
“staccato” quality, reflecting the way he quickly scans short passages of
text from many sources online. “I can’t read War and Peace anymore,”
he admitted. “I’ve lost the ability to do that. Even a blog post of more
than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don’t prove much. And we still await the long-term
neurological and psychological experiments that will provide a definitive
picture of how Internet use affects cognition. But a recently published
study of online research habits, conducted by scholars from University
College London, suggests that we may well be in the midst of a sea
change in the way we read and think. As part of the five-year research
program, the scholars examined computer logs documenting the
behavior of visitors to two popular research sites, one operated by the
British Library and one by a U.K. educational consortium, that provide
access to journal articles, e-books, and other sources of written
information. They found that people using the sites exhibited “a form of
skimming activity,” hopping from one source to another and rarely
returning to any source they’d already visited. They typically read no
more than one or two pages of an article or book before they would
“bounce” out to another site. Sometimes they’d save a long article, but
there’s no evidence that they ever went back and actually read it. The
authors of the study report:
It is clear that users are not reading online in the traditional sense;
indeed there are signs that new forms of “reading” are emerging as users
“power browse” horizontally through titles, contents pages and abstracts
going for quick wins. It almost seems that they go online to avoid
reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the
popularity of text-messaging on cell phones, we may well be reading
more today than we did in the 1970s or 1980s, when television was our
medium of choice. But it’s a different kind of reading, and behind it lies
a different kind of thinking—perhaps even a new sense of the self. “We
are not only what we read,” says Maryanne Wolf, a developmental
psychologist at Tufts University and the author of Proust and the Squid:
The Story and Science of the Reading Brain. “We are how we read.”
Wolf worries that the style of reading promoted by the Net, a style that
puts “efficiency” and “immediacy” above all else, may be weakening our
capacity for the kind of deep reading that emerged when an earlier
technology, the printing press, made long and complex works of prose
commonplace. When we read online, she says, we tend to become “mere
decoders of information.” Our ability to interpret text, to make the rich
mental connections that form when we read deeply and without
distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s
not etched into our genes the way speech is. We have to teach our minds
how to translate the symbolic characters we see into the language we
understand. And the media or other technologies we use in learning and
practicing the craft of reading play an important part in shaping the
neural circuits inside our brains. Experiments demonstrate that readers
of ideograms, such as the Chinese, develop a mental circuitry for
reading that is very different from the circuitry found in those of us
whose written language employs an alphabet. The variations extend
across many regions of the brain, including those that govern such
essential cognitive functions as memory and the interpretation of visual
and auditory stimuli. We can expect as well that the circuits woven by
our use of the Net will be different from those woven by our reading of
books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping
his eyes focused on a page had become exhausting and painful, often
bringing on crushing headaches. He had been forced to curtail his
writing, and he feared that he would soon have to give it up. The
typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of
his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche’s
friends, a composer, noticed a change in the style of his writing. His
already terse prose had become even tighter, more telegraphic. “Perhaps
you will through this instrument even take to a new idiom,” the friend
wrote in a letter, noting that, in his own work, his “‘thoughts’ in music
and language often depend on the quality of pen and paper.”
“You are right,” Nietzsche replied, “our writing equipment takes part in
the forming of our thoughts.” Under the sway of the machine, writes the
German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed
from arguments to aphorisms, from thoughts to puns, from rhetoric to
telegram style.”
The human brain is almost infinitely malleable. People used to think
that our mental meshwork, the dense connections formed among the
100 billion or so neurons inside our skulls, was largely fixed by the time
we reached adulthood. But brain researchers have discovered that that’s
not the case. James Olds, a professor of neuroscience who directs the
Krasnow Institute for Advanced Study at George Mason University, says
that even the adult mind “is very plastic.” Nerve cells routinely break old
connections and form new ones. “The brain,” according to Olds, “has the
ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our “intellectual
technologies”—the tools that extend our mental rather than our physical
capacities—we inevitably begin to take on the qualities of those
technologies. The mechanical clock, which came into common use in the
14th century, provides a compelling example. In Technics and
Civilization, the historian and cultural critic Lewis Mumford described
how the clock “disassociated time from human events and helped create
the belief in an independent world of mathematically measurable
sequences.” The “abstract framework of divided time” became “the point
of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific
mind and the scientific man. But it also took something away. As the
late MIT computer scientist Joseph Weizenbaum observed in his 1976
book, Computer Power and Human Reason: From Judgment to
Calculation, the conception of the world that emerged from the
widespread use of timekeeping instruments “remains an impoverished
version of the older one, for it rests on a rejection of those direct
experiences that formed the basis for, and indeed constituted, the old
reality.” In deciding when to eat, to work, to sleep, to rise, we stopped
listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in
the changing metaphors we use to explain ourselves to ourselves. When
the mechanical clock arrived, people began thinking of their brains as
operating “like clockwork.” Today, in the age of software, we have come
to think of them as operating “like computers.” But the changes,
neuroscience tells us, go much deeper than metaphor. Thanks to our
brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on
cognition. In a paper published in 1936, the British mathematician Alan
Turing proved that a digital computer, which at the time existed only as
a theoretical machine, could be programmed to perform the function of
any other information-processing device. And that’s what we’re seeing
today. The Internet, an immeasurably powerful computing system, is
subsuming most of our other intellectual technologies. It’s becoming
our map and our clock, our printing press and our typewriter, our
calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net’s
image. It injects the medium’s content with hyperlinks, blinking ads,
and other digital gewgaws, and it surrounds the content with the
content of all the other media it has absorbed. A new e-mail message,
for instance, may announce its arrival as we’re glancing over the latest
headlines at a newspaper’s site. The result is to scatter our attention and
diffuse our concentration.
The Net’s influence doesn’t end at the edges of a computer screen,
either. As people’s minds become attuned to the crazy quilt of Internet
media, traditional media have to adapt to the audience’s new
expectations. Television programs add text crawls and pop-up ads, and
magazines and newspapers shorten their articles, introduce capsule
summaries, and crowd their pages with easy-to-browse info-snippets.
When, in March of this year, The New York Times decided to devote the
second and third pages of every edition to article abstracts, its design
director, Tom Bodkin, explained that the “shortcuts” would give harried
readers a quick “taste” of the day’s news, sparing them the “less
efficient” method of actually turning the pages and reading the articles.
Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives—
or exerted such broad influence over our thoughts—as the Internet does
today. Yet, for all that’s been written about the Net, there’s been little
consideration of how, exactly, it’s reprogramming us. The Net’s
intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an
earnest young man named Frederick Winslow Taylor carried a
stopwatch into the Midvale Steel plant in Philadelphia and began a
historic series of experiments aimed at improving the efficiency of the
plant’s machinists. With the approval of Midvale’s owners, he recruited
a group of factory hands, set them to work on various metalworking
machines, and recorded and timed their every movement as well as the
operations of the machines. By breaking down every job into a sequence
of small, discrete steps and then testing different ways of performing
each one, Taylor created a set of precise instructions—an “algorithm,”
we might say today—for how each worker should work. Midvale’s
employees grumbled about the strict new regime, claiming that it turned
them into little more than automatons, but the factory’s productivity
soared.
More than a hundred years after the invention of the steam engine, the
Industrial Revolution had at last found its philosophy and its
philosopher. Taylor’s tight industrial choreography—his “system,” as he
liked to call it—was embraced by manufacturers throughout the country
and, in time, around the world. Seeking maximum speed, maximum
efficiency, and maximum output, factory owners used time-and-motion
studies to organize their work and configure the jobs of their workers.
The goal, as Taylor defined it in his celebrated 1911 treatise, The
Principles of Scientific Management, was to identify and adopt, for
every job, the “one best method” of work and thereby to effect “the
gradual substitution of science for rule of thumb throughout the
mechanic arts.” Once his system was applied to all acts of manual labor,
Taylor assured his followers, it would bring about a restructuring not
only of industry but of society, creating a utopia of perfect efficiency. “In
the past the man has been first,” he declared; “in the future the system
must be first.”
Taylor’s system is still very much with us; it remains the ethic of
industrial manufacturing. And now, thanks to the growing power that
computer engineers and software coders wield over our intellectual
lives, Taylor’s ethic is beginning to govern the realm of the mind as well.
The Internet is a machine designed for the efficient and automated
collection, transmission, and manipulation of information, and its
legions of programmers are intent on finding the “one best method”—
the perfect algorithm—to carry out every mental movement of what
we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—
is the Internet’s high church, and the religion practiced inside its walls is
Taylorism. Google, says its chief executive, Eric Schmidt, is “a company
that’s founded around the science of measurement,” and it is striving to
“systematize everything” it does. Drawing on the terabytes of behavioral
data it collects through its search engine and other sites, it carries out
thousands of experiments a day, according to the Harvard Business
Review, and it uses the results to refine the algorithms that increasingly
control how people find information and extract meaning from it. What
Taylor did for the work of the hand, Google is doing for the work of the
mind.
The company has declared that its mission is “to organize the world’s
information and make it universally accessible and useful.” It seeks to
develop “the perfect search engine,” which it defines as something that
“understands exactly what you mean and gives you back exactly what
you want.” In Google’s view, information is a kind of commodity, a
utilitarian resource that can be mined and processed with industrial
efficiency. The more pieces of information we can “access” and the
faster we can extract their gist, the more productive we become as
thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men
who founded Google while pursuing doctoral degrees in computer
science at Stanford, speak frequently of their desire to turn their search
engine into an artificial intelligence, a HAL-like machine that might be
connected directly to our brains. “The ultimate search engine is
something as smart as people—or smarter,” Page said in a speech a few
years back. “For us, working on search is a way to work on artificial
intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly
if you had all the world’s information directly attached to your brain, or
an artificial brain that was smarter than your brain, you’d be better off.”
Last year, Page told a convention of scientists that Google is “really
trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of
math whizzes with vast quantities of cash at their disposal and a small
army of computer scientists in their employ. A fundamentally scientific
enterprise, Google is motivated by a desire to use technology, in Eric
Schmidt’s words, “to solve problems that have never been solved
before,” and artificial intelligence is the hardest problem out there. Why
wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains
were supplemented, or even replaced, by an artificial intelligence is
unsettling. It suggests a belief that intelligence is the output of a
mechanical process, a series of discrete steps that can be isolated,
measured, and optimized. In Google’s world, the world we enter when
we go online, there’s little place for the fuzziness of contemplation.
Ambiguity is not an opening for insight but a bug to be fixed. The
human brain is just an outdated computer that needs a faster processor
and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing
machines is not only built into the workings of the Internet, it is the
network’s reigning business model as well. The faster we surf across the
Web—the more links we click and pages we view—the more
opportunities Google and other companies gain to collect information
about us and to feed us advertisements. Most of the proprietors of the
commercial Internet have a financial stake in collecting the crumbs of
data we leave behind as we flit from link to link—the more crumbs, the
better. The last thing these companies want is to encourage leisurely
reading or slow, concentrated thought. It’s in their economic interest to
drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify
technological progress, there’s a countertendency to expect the worst of
every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the
development of writing. He feared that, as people came to rely on the
written word as a substitute for the knowledge they used to carry inside
their heads, they would, in the words of one of the dialogue’s characters,
“cease to exercise their memory and become forgetful.” And because
they would be able to “receive a quantity of information without proper
instruction,” they would “be thought very knowledgeable when they are
for the most part quite ignorant.” They would be “filled with the conceit
of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new
technology did often have the effects he feared—but he was
shortsighted. He couldn’t foresee the many ways that writing and
reading would serve to spread information, spur fresh ideas, and expand
human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off
another round of teeth gnashing. The Italian humanist Hieronimo
Squarciafico worried that the easy availability of books would lead to
intellectual laziness, making men “less studious” and weakening their
minds. Others argued that cheaply printed books and broadsheets
would undermine religious authority, demean the work of scholars and
scribes, and spread sedition and debauchery. As New York University
professor Clay Shirky notes, “Most of the arguments made against the
printing press were correct, even prescient.” But, again, the doomsayers
were unable to imagine the myriad blessings that the printed word
would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who
dismiss critics of the Internet as Luddites or nostalgists will be proved
correct, and from our hyperactive, data-stoked minds will spring a
golden age of intellectual discovery and universal wisdom. Then again,
the Net isn’t the alphabet, and although it may replace the printing
press, it produces something altogether different. The kind of deep
reading that a sequence of printed pages promotes is valuable not just
for the knowledge we acquire from the author’s words but for the
intellectual vibrations those words set off within our own minds. In the
quiet spaces opened up by the sustained, undistracted reading of a book,
or by any other act of contemplation, for that matter, we make our own
associations, draw our own inferences and analogies, foster our own
ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from
deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will
sacrifice something important not only in our selves but in our culture.
In a recent essay, the playwright Richard Foreman eloquently described
what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal)
was the complex, dense and “cathedral-like” structure of the highly
educated and articulate personality—a man or woman who carried
inside themselves a personally constructed and unique version of the
entire heritage of the West. [But now] I see within us all (myself
included) the replacement of complex inner density with a new kind of
self—evolving under the pressure of information overload and the
technology of the “instantly available.”
As we are drained of our “inner repertory of dense cultural inheritance,”
Foreman concluded, we risk turning into “‘pancake people’—spread
wide and thin as we connect with that vast network of information
accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so
weird, is the computer’s emotional response to the disassembly of its
mind: its despair as one circuit after another goes dark, its childlike
pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and
its final reversion to what can only be called a state of innocence. HAL’s
outpouring of feeling contrasts with the emotionlessness that
characterizes the human figures in the film, who go about their business
with an almost robotic efficiency. Their thoughts and actions feel
scripted, as if they’re following the steps of an algorithm. In the world of
2001, people have become so machinelike that the most human
character turns out to be a machine. That’s the essence of Kubrick’s dark
prophecy: as we come to rely on computers to mediate our
understanding of the world, it is our own intelligence that flattens into
artificial intelligence.
Nicholas Carr’s most recent book, The Big Switch: Rewiring the
World, From Edison to Google, was published earlier this year.
Does the Internet Make You Dumber?
The cognitive effects are measurable: We're turning into
shallow thinkers, says Nicholas Carr.
By NICHOLAS CARR
The Roman philosopher Seneca may have put it best 2,000
years ago: "To be everywhere is to be nowhere." Today, the
Internet grants us easy access to unprecedented amounts of
information. But a growing body of scientific evidence suggests
that the Net, with its constant distractions and interruptions, is
also turning us into scattered and superficial thinkers.
The picture emerging from the research is deeply troubling, at
least to anyone who values the depth, rather than just the
velocity, of human thought. People who read text studded with
links, the studies show, comprehend less than those who read
traditional linear text. People who watch busy multimedia
presentations remember less than those who take in information
in a more sedate and focused manner. People who are
continually distracted by emails, alerts and other messages
understand less than those who are able to concentrate. And
people who juggle many tasks are less creative and less
productive than those who do one thing at a time.
The common thread in these disabilities is the division of
attention. The richness of our thoughts, our memories and even
our personalities hinges on our ability to focus the mind and
sustain concentration. Only when we pay deep attention to a
new piece of information are we able to associate it
"meaningfully and systematically with knowledge already well
established in memory," writes the Nobel Prize-winning
neuroscientist Eric Kandel. Such associations are essential to
mastering complex concepts.
When we're constantly distracted and interrupted, as we tend to
be online, our brains are unable to forge the strong and
expansive neural connections that give depth and
distinctiveness to our thinking. We become mere signal-processing units, quickly shepherding disjointed bits of
information into and then out of short-term memory.
In an article published in Science last year, Patricia Greenfield,
a leading developmental psychologist, reviewed dozens of
studies on how different media technologies influence our
cognitive abilities. Some of the studies indicated that certain
computer tasks, like playing video games, can enhance "visual
literacy skills," increasing the speed at which people can shift
their focus among icons and other images on screens. Other
studies, however, found that such rapid shifts in focus, even if
performed adeptly, result in less rigorous and "more automatic"
thinking.
(According to Nielsen, the average time an American spends looking at a Web page is 56 seconds.)
In one experiment conducted at Cornell University, for example,
half a class of students was allowed to use Internet-connected
laptops during a lecture, while the other had to keep their
computers shut. Those who browsed the Web performed much
worse on a subsequent test of how well they retained the
lecture's content. While it's hardly surprising that Web surfing
would distract students, it should be a note of caution to schools
that are wiring their classrooms in hopes of improving learning.
Ms. Greenfield concluded that "every medium develops some
cognitive skills at the expense of others." Our growing use of
screen-based media, she said, has strengthened visual-spatial
intelligence, which can improve the ability to do jobs that involve
keeping track of lots of simultaneous signals, like air traffic
control. But that has been accompanied by "new weaknesses in
higher-order cognitive processes," including "abstract
vocabulary, mindfulness, reflection, inductive problem solving,
critical thinking, and imagination." We're becoming, in a word,
shallower.
In another experiment, recently conducted at Stanford
University's Communication Between Humans and Interactive
Media Lab, a team of researchers gave various cognitive tests
to 49 people who do a lot of media multitasking and 52 people
who multitask much less frequently. The heavy multitaskers
performed poorly on all the tests. They were more easily
distracted, had less control over their attention, and were much
less able to distinguish important information from trivia.
The researchers were surprised by the results. They had
expected that the intensive multitaskers would have gained
some unique mental advantages from all their on-screen
juggling. But that wasn't the case. In fact, the heavy multitaskers
weren't even good at multitasking. They were considerably less
adept at switching between tasks than the more infrequent
multitaskers. "Everything distracts them," observed Clifford
Nass, the professor who heads the Stanford lab.
It would be one thing if the ill effects went away as soon as we
turned off our computers and cellphones. But they don't. The
cellular structure of the human brain, scientists have discovered,
adapts readily to the tools we use, including those for finding,
storing and sharing information. By changing our habits of mind,
each new technology strengthens certain neural pathways and
weakens others. The cellular alterations continue to shape the
way we think even when we're not using the technology.
The pioneering neuroscientist Michael Merzenich believes our
brains are being "massively remodeled" by our ever-intensifying
use of the Web and related media. In the 1970s and 1980s, Mr.
Merzenich, now a professor emeritus at the University of
California in San Francisco, conducted a famous series of
experiments on primate brains that revealed how extensively
and quickly neural circuits change in response to experience.
When, for example, Mr. Merzenich rearranged the nerves in a
monkey's hand, the nerve cells in the animal's sensory cortex
quickly reorganized themselves to create a new "mental map" of
the hand. In a conversation late last year, he said that he was
profoundly worried about the cognitive consequences of the
constant distractions and interruptions the Internet bombards us
with. The long-term effect on the quality of our intellectual lives,
he said, could be "deadly."
What we seem to be sacrificing in all our surfing and searching
is our capacity to engage in the quieter, attentive modes of
thought that underpin contemplation, reflection and
introspection. The Web never encourages us to slow down. It
keeps us in a state of perpetual mental locomotion.
It is revealing, and distressing, to compare the cognitive effects
of the Internet with those of an earlier information technology,
the printed book. Whereas the Internet scatters our attention,
the book focuses it. Unlike the screen, the page promotes
contemplativeness.
Reading a long sequence of pages helps us develop a rare kind
of mental discipline. The innate bias of the human brain, after
all, is to be distracted. Our predisposition is to be aware of as
much of what's going on around us as possible. Our fast-paced,
reflexive shifts in focus were once crucial to our survival. They
reduced the odds that a predator would take us by surprise or
that we'd overlook a nearby source of food.
To read a book is to practice an unnatural process of thought. It
requires us to place ourselves at what T. S. Eliot, in his poem
"Four Quartets," called "the still point of the turning world." We
have to forge or strengthen the neural links needed to counter
our instinctive distractedness, thereby gaining greater control
over our attention and our mind.
It is this control, this mental discipline, that we are at risk of
losing as we spend ever more time scanning and skimming
online. If the slow progression of words across printed pages
damped our craving to be inundated by mental stimulation, the
Internet indulges it. It returns us to our native state of
distractedness, while presenting us with far more distractions
than our ancestors ever had to contend with.
—Nicholas Carr is the author, most recently, of "The Shallows: What
the Internet Is Doing to Our Brains."
CULTURAL POLITICS, VOLUME 1, ISSUE 1, PP. 51–74
© BERG 2005
COMMUNICATIVE CAPITALISM: CIRCULATION AND THE FORECLOSURE OF POLITICS
ABSTRACT What is the political impact of networked communications
technologies? I argue that as communicative capitalism they are
profoundly depoliticizing. The argument, first, conceptualizes the
current political-economic formation as one of communicative
capitalism. It then moves to emphasize specific features of
communicative capitalism in light of the fantasies animating them.
The fantasy of abundance leads to a shift in the basic unit of
communication from the message to the contribution. The fantasy of
activity or participation is materialized through technology fetishism.
The fantasy of wholeness relies on and produces a global both
imaginary and Real. This fantasy prevents the emergence of a clear
division between friend and enemy, resulting instead in the more
dangerous and profound figuring of the other as a threat to be
destroyed. My goal in providing this account of communicative
capitalism is to explain why in an age celebrated for its
communications there is no response.
JODI DEAN IS A POLITICAL THEORIST TEACHING AND WRITING IN
UPSTATE NEW YORK. HER MOST RECENT WORK INCLUDES PUBLICITY’S
SECRET: HOW TECHNOCULTURE CAPITALIZES ON DEMOCRACY AND,
CO-EDITED WITH PAUL A. PASSAVANT, EMPIRE’S NEW CLOTHES:
READING HARDT AND NEGRI. SHE IS CURRENTLY WORKING ON A
BOOK ON THE POLITICAL THEORY OF SLAVOJ ZIZEK.
NO RESPONSE
Although mainstream US media outlets provided the
Bush administration with supportive, non-critical and even
encouraging platforms for making his case for invading Iraq,
critical perspectives were nonetheless well represented in the
communications flow of mediated global capitalist technoculture.
Alternative media, independent media and non-US media provided
thoughtful reports, insightful commentary and critical evaluations
of the “evidence” of “weapons of mass destruction” in Iraq. Amy
Goodman’s syndicated radio program, “Democracy Now,” regularly
broadcast shows intensely opposed to the militarism and unilateralism
of the Bush administration’s national security policy. The Nation
magazine offered detailed and nuanced critiques of various reasons
introduced for attacking Iraq. Circulating on the Internet were lists with
congressional phone and fax numbers, petitions and announcements
for marches, protests and direct-action training sessions. As the
march to war proceeded, thousands of bloggers commented on
each step, referencing other media supporting their positions. When
mainstream US news outlets failed to cover demonstrations such as
the September protest of 400,000 people in London or the October
march on Washington when 250,000 people surrounded the White
House, myriad progressive, alternative and critical left news outlets
supplied frequent and reliable information about the action on the
ground. All in all, a strong anti-war message was out there.
But, the message was not received. It circulated, reduced to the
medium. Even when the White House acknowledged the massive
worldwide demonstrations of February 15, 2003, Bush simply reiterated the fact that a message was out there, circulating – the
protestors had the right to express their opinions. He didn’t actually
respond to their message. He didn’t treat the words and actions
of the protestors as sending a message to him to which he was in
some sense obligated to respond. Rather, he acknowledged that there
existed views different from his own. There were his views and there
were other views; all had the right to exist, to be expressed – but that
in no way meant, or so Bush made it seem, that these views were
involved with each other. So, despite the terabytes of commentary
and information, there wasn’t exactly a debate over the war. On the
contrary, in the days and weeks prior to the US invasion of Iraq, the
anti-war messages morphed into so much circulating content, just
like all the other cultural effluvia wafting through cyberia.
We might express this disconnect between engaged criticism
and national strategy in terms of a distinction between politics as
the circulation of content and politics as official policy. On the one
hand there is media chatter of various kinds – from television talking
heads, radio shock jocks, and the gamut of print media to websites
with RSS (Really Simple Syndication) feeds, blogs, e-mail lists and the
proliferating versions of instant text messaging. In this dimension,
politicians, governments and activists struggle for visibility, currency
and, in the now quaint term from the dot.com years, mindshare.
On the other hand are institutional politics, the day-to-day activities
of bureaucracies, lawmakers, judges and the apparatuses of the
police and national security states. These components of the political
system seem to run independently of the politics that circulates as
content.
At first glance, this distinction between politics as the circulation
of content and politics as the activity of officials makes no sense.
After all, the very premise of liberal democracy is the sovereignty
of the people. And, governance by the people has generally been
thought in terms of communicative freedoms of speech, assembly
and the press, norms of publicity that emphasize transparency and
accountability, and the deliberative practices of the public sphere.
Ideally, the communicative interactions of the public sphere, what
I’ve been referring to as the circulation of content and media chatter,
are supposed to impact official politics.
In the United States today, however, they don’t, or, less bluntly
put, there is a significant disconnect between politics circulating as
content and official politics. Today, the circulation of content in the
dense, intensive networks of global communications relieves top-level
actors (corporate, institutional and governmental) from the obligation
to respond. Rather than responding to messages sent by activists and
critics, they counter with their own contributions to the circulating flow
of communications, hoping that sufficient volume (whether in terms
of number of contributions or the spectacular nature of a contribution) will give their contributions dominance or stickiness. Instead
of engaged debates, instead of contestations employing common
terms, points of reference or demarcated frontiers, we confront a
multiplication of resistances and assertions so extensive that it
hinders the formation of strong counterhegemonies. The proliferation, distribution, acceleration and intensification of communicative
access and opportunity, far from enhancing democratic governance
or resistance, results in precisely the opposite – the post-political
formation of communicative capitalism.
Needless to say, I am not claiming that networked communications
never facilitate political resistance. One of the most visible of the
numerous examples to the contrary is perhaps the experience of B92
in Serbia. Radio B92 used the Internet to circumvent governmental
censorship and disseminate news of massive demonstrations against
the Milosevic regime (Matic and Pantic 1999). My point is that the
political efficacy of networked media depends on its context. Under
conditions of the intensive and extensive proliferation of media,
messages are more likely to get lost as mere contributions to the
circulation of content. What enhances democracy in one context becomes a new form of hegemony in another. Or, the intense circulation
of content in communicative capitalism forecloses the antagonism
necessary for politics. In relatively closed societies, that antagonism
is not only already there but also apparent at and as the very frontier
between open and closed.
My argument proceeds as follows. For the sake of clarity, I begin
by situating the notion of communicative capitalism in the context
of other theories of the present that emphasize changes in communication and communicability. I then move to emphasize specific
features of communicative capitalism in light of the fantasies animating them. First, I take up the fantasy of abundance and discuss the
ways this fantasy results in a shift in the basic unit of communication
from the message to the contribution. Second, I address the fantasy
of activity or participation. I argue that this fantasy is materialized
through technology fetishism. Finally, I consider the fantasy of wholeness that relies on and produces a global both imaginary and Real.
I argue that this fantasy prevents the emergence of a clear division
between friend and enemy, resulting instead in the more dangerous
and profound figuring of the other as a threat to be destroyed. My
goal in providing this account of communicative capitalism is to
explain why in an age celebrated for its communications there is
no response.
In the months before the 2002 congressional elections, just as
the administration urged Congress to abdicate its constitutional
responsibility to declare war to the President, mainstream media
frequently employed the trope of “debate.” Democratic “leaders,”
with an eye to this “debate,” asserted that questions needed to be
asked. They did not take a position or provide a clear alternative to
the Bush administration’s emphasis on preventive war. Giving voice to
the ever-present meme regarding the White House’s public relations
strategy, people on the street spoke of whether Bush had “made
his case.” Nevertheless, on the second day of Senate debate on
the use of force in Iraq, no one was on the floor – even though many
were in the gallery. Why, at a time when the means of communication
have been revolutionized, when people can contribute their opinions
and access those of others rapidly and immediately, has democracy
failed? Why has the expansion and intensification of communication
networks, the proliferation of the very tools of democracy, coincided
with the collapse of democratic deliberation and, indeed, struggle?
These are the questions the idea of communicative capitalism helps
us answer.
COMMUNICATIVE CAPITALISM
The notion of communicative capitalism conceptualizes the commonplace idea that the market, today, is the site of democratic aspirations,
indeed, the mechanism by which the will of the demos manifests
itself. We might think here of the circularity of claims regarding
popularity. McDonald’s, Walmart and reality television are depicted
as popular because they seem to offer what people want. How do
we know they offer what people want? People choose them. So,
they must be popular.
The obvious problem with this equation is the way it treats commercial choices as the paradigmatic form of choice per se. But the
market is not a system for delivering political outcomes – despite the
fact that political campaigns are indistinguishable from advertising
or marketing campaigns. Political decisions – to go to war, say, or to
establish the perimeters of legitimate relationships – involve more
than the mindless reiteration of faith, conviction and unsupported
claims (I’m thinking here of the Bush administration’s faith-based
foreign policy and the way it pushed a link between Iraq and Al Qaeda).
The concept of communicative capitalism tries to capture this strange
merging of democracy and capitalism. It does so by highlighting the
way networked communications bring the two together.
Communicative capitalism designates that form of late capitalism
in which values heralded as central to democracy take material form
in networked communications technologies (cf. Dean 2002a; 2002b).
Ideals of access, inclusion, discussion and participation come to be
realized in and through expansions, intensifications and interconnections of global telecommunications. But instead of leading to more
equitable distributions of wealth and influence, instead of enabling
the emergence of a richer variety in modes of living and practices of
freedom, the deluge of screens and spectacles undermines political
opportunity and efficacy for most of the world’s peoples.
Research on the impact of economic globalization makes clear
how the speed, simultaneity and interconnectivity of electronic
communications produce massive concentrations of wealth (Sassen
1996). Not only does the possibility of superprofits in the finance and
services complex lead to hypermobility of capital and the devalorization of manufacturing but financial markets themselves acquire the
capacity to discipline national governments. In the US, moreover, the
proliferation of media has been accompanied by a shift in political
participation. Rather than actively organized in parties and unions,
politics has become a domain of financially mediated and professionalized practices centered on advertising, public relations and the
means of mass communication. Indeed, with the commodification
of communication, more and more domains of life seem to have
been reformatted in terms of market and spectacle. Bluntly put,
the standards of a finance- and consumption-driven entertainment
culture set the very terms of democratic governance today. Changing the system – organizing against and challenging communicative
capitalism – seems to require strengthening the system: how else
can one organize and get the message across? Doesn’t it require
raising the money, buying the television time, registering the domain
name, building the website and making the links?
My account of communicative capitalism is affiliated with Giorgio
Agamben’s discussion of the alienation of language in the society
of the spectacle and with Slavoj Zizek’s emphasis on post-politics.
And, even as it shares the description of communication as capitalist
production with Michael Hardt and Antonio Negri, it differs from their
assessment of the possibilities for political change.
More specifically, Agamben notes that “in the old regime . . . the
estrangement of the communicative essence of human beings was
substantiated as a presupposition that had the function of a common
ground (nation, language, religion, etc.)” (Agamben 2000: 115). Under
current conditions, however, “it is precisely this same communicativity,
this same generic essence (language), that is constituted as an
autonomous sphere to the extent to which it becomes the essential
factor of the production cycle. What hinders communication, therefore,
is communicability itself: human beings are being separated by what
unites them.” Agamben is pointing out how the commonality of the
nation state was thought in terms of linguistic and religious groups.
We can extend his point by recognizing that the ideal of constitutional
states, in theories such as Jurgen Habermas’s, say, has also been
conceptualized in terms of the essential communicativity of human
beings: those who can discuss, who can come to an agreement with one
another at least in principle, can be in political relation to one another.
As Agamben makes clear, however, communication has detached
itself from political ideals of belonging and connection to function
today as a primarily economic form. Differently put, communicative
exchanges, rather than being fundamental to democratic politics,
are the basic elements of capitalist production.
Zizek approaches this same problem of the contemporary foreclosure of the political via the concept of “post-politics.” Zizek explains
that post-politics “emphasizes the need to leave old ideological
divisions behind and confront new issues, armed with the necessary
expert knowledge and free deliberation that takes people’s concrete
needs and demands into account” (1999: 198). Post-politics thus
begins from the premise of consensus and cooperation. Real
antagonism or dissent is foreclosed. Matters previously thought to
require debate and struggle are now addressed as personal issues
or technical concerns. We might think of the ways that the expert
discourses of psychology and sociology provide explanations for
anger and resentment, in effect treating them as syndromes to be
managed rather than as issues to be politicized. Or we might think
of the probabilities, measures and assessments characteristic of
contemporary risk management. The problem is that all this tolerance
and attunement to difference and emphasis on hearing another’s
pain prevents politicization. Matters aren’t represented – they don’t
stand for something beyond themselves. They are simply treated in all
their particularity, as specific issues to be addressed therapeutically,
juridically, spectacularly or disciplinarily rather than being treated as
elements of larger signifying chains or political formations. Indeed,
this is how third-way societies support global capital: they prevent
politicization. They focus on administration, again, foreclosing the
very possibility that things might be otherwise.
The post-political world, then, is marked by emphases on multiple
sources of value, on the plurality of beliefs and the importance of
tolerating these beliefs through the cultivation of an attunement to the
contingencies already pervading one’s own values. Divisions between
friends and enemies are replaced by emphases on all of us. Likewise,
politics is understood as not confined to specific institutional fields
but as a characteristic of all of life. There is an attunement, in other
words, to a micropolitics of the everyday. But this very attunement
forecloses the conflict and opposition necessary for politics.
Finally, Hardt and Negri’s description of the current techno-global-capitalist formation coincides with Agamben’s account of communication without communicability and with Zizek’s portrayal of a global
formation characterized by contingency, multiplicity and singularity.
For example, they agree that “communication is the form of capitalist
production in which capital has succeeded in submitting society
entirely and globally to its regime, suppressing all alternative paths”
(Hardt and Negri 2000: 347; cf. Dean 2002b: 272–5). Emphasizing
that there is no outside to the new order of empire, Hardt and Negri
see the whole of empire as an “open site of conflict” wherein the
incommunicability of struggles, rather than a problem, is an asset
insofar as it releases opposition from the pressure of organization
and prevents co-optation. As I argue elsewhere, this position, while
inspiring, not only embraces the elision between the political and
the economic but also in so doing cedes primacy to the economic,
taking hope from the intensity and immediacy of the crises within
empire. The view I advocate is less optimistic insofar as it rejects
the notion that anything is immediately political, and instead prioritizes politicization as the difficult challenge of representing specific
claims or acts as universal (cf. Laclau 1996: 56-64). Specific or
singular acts of resistance, statements of opinion or instances of
transgression are not political in and of themselves; rather, they have
to be politicized, that is articulated together with other struggles,
resistances and ideals in the course or context of opposition to a
shared enemy or opponent (cf. Laclau and Mouffe 1986: 188). Crucial
to this task, then, is understanding how communicative capitalism,
especially insofar as it relies on networked communications, prevents
politicization. To this end, I turn now to the fantasies animating communicative capitalism.
THE FANTASY OF ABUNDANCE: FROM MESSAGE TO
CONTRIBUTION
The delirium of the dot.com years was driven by a tremendous
faith in speed, volume and connectivity. The speed and volume
of transactions, say, was itself to generate new “synergies” and
hence wealth. A similar belief underlies the conviction that enhanced
communications access facilitates democracy. More people than
ever before can make their opinions known. The convenience of the
Web, for example, enables millions not simply to access information
but also to register their points of view, to agree or disagree, to vote
and to send messages. The sheer abundance of messages, then,
is offered as an indication of democratic potential.
In fact, optimists and pessimists alike share this same fantasy
of abundance. Those optimistic about the impact of networked
communications on democratic practices emphasize the wealth of
information available on the Internet and the inclusion of millions
upon millions of voices or points of view into “the conversation”
or “public sphere.” Pessimists worry about the lack of filters, the
data smog and the fact that “all kinds of people” can be part of
the conversation (Dyson 1998; cf. Dean 2002a: 72–3). Despite
their differing assessments of the value of abundance, then, both
optimists and pessimists are committed to the view that networked
communications are characterized by exponential expansions in
opportunities to transmit and receive messages.
The fantasy of abundance covers over the way facts and opinions,
images and reactions circulate in a massive stream of content, losing
their specificity and merging with and into the data flow. Any given
message is thus a contribution to this ever-circulating content. My
argument is that a constitutive feature of communicative capitalism
is precisely this morphing of message into contribution. Let me
explain.
One of the most basic formulations of the idea of communication
is in terms of a message and the response to the message. Under
communicative capitalism, this changes. Messages are contributions
to circulating content – not actions to elicit responses.1 Differently
put, the exchange value of messages overtakes their use value.
So, a message is no longer primarily a message from a sender to a
receiver. Uncoupled from contexts of action and application – as on
the Web or in print and broadcast media – the message is simply
part of a circulating data stream. Its particular content is irrelevant.
Who sent it is irrelevant. Who receives it is irrelevant. That it need be
responded to is irrelevant. The only thing that is relevant is circulation,
the addition to the pool. Any particular contribution remains secondary
to the fact of circulation. The value of any particular contribution
is likewise inversely proportionate to the openness, inclusivity or
extent of a circulating data stream – the more opinions or comments
that are out there, the less of an impact any given one might
make (and the more shock, spectacle or newness is necessary for
a contribution to register or have an impact). In sum, communication
functions symptomatically to produce its own negation. Or, to return
to Agamben’s terms, communicativity hinders communication.
Communication in communicative capitalism, then, is not, as
Habermas would suggest, action oriented toward reaching understanding (Habermas 1984). In Habermas’s model of communicative
action, the use value of a message depends on its orientation.
In sending a message, a sender intends for it to be received and
understood. Any acceptance or rejection of the message depends
on this understanding. Understanding is thus a necessary part of
the communicative exchange. In communicative capitalism, however,
the use value of a message is less important than its exchange
value, its contribution to a larger pool, flow or circulation of content.
A contribution need not be understood; it need only be repeated,
reproduced, forwarded. Circulation is the context, the condition for
the acceptance or rejection of a contribution. Put somewhat differently, how a contribution circulates determines whether it has been
accepted or rejected. And, just as the producer, labor, drops out of
the picture in commodity exchange, so does the sender (or author)
become immaterial to the contribution. The circulation of logos,
branded media identities, rumors, catchphrases, even positions and
arguments exemplifies this point. The popularity, the penetration and
duration of a contribution marks its acceptance or success.
Thinking about messages in terms of use value and contributions
in terms of exchange value sheds light on what would otherwise
appear to be an asymmetry in communicative capitalism: the fact
that some messages are received, that some discussions extend
beyond the context of their circulation. Of course, it is also the case
that many commodities are not useless, that people need them. But,
what makes them commodities is not the need people have for them
or, obviously, their use. Rather, it is their economic function, their role
in capitalist exchange. Similarly, the fact that messages can retain
a relation to understanding in no way negates the centrality of their
circulation. Indeed, this link is crucial to the ideological reproduction
of communicative capitalism. Some messages, issues, debates are
effective. Some contributions make a difference. But more significant
is the system, the communicative network. Even when we know that
our specific contributions (our messages, postings, books, articles,
films, letters to the editor) simply circulate in a rapidly moving and
changing flow of content, in contributing, in participating, we act
as if we do not know this. This action manifests ideology as the
belief underlying action, the belief that reproduces communicative
capitalism (Zizek 1989).
The fantasy of abundance both expresses and conceals the shift
from message to contribution. It expresses the shift through its
emphases on expansions in communication – faster, better, cheaper;
more inclusive, more accessible; highspeed, broadband, etc. Yet even
as it emphasizes these multiple expansions and intensifications, this
abundance, the fantasy occludes the resulting devaluation of any
particular contribution. Social network analysis demonstrates clearly
the way that blogs, like other citation networks, follow a power law
distribution. They don’t scale; instead, the top few are much more
popular than the middle few, and the middle few are vastly more
popular than the bottom few. Some call this the emergence of an “A
list” or the 80/20 rule. As Clay Shirkey summarily puts it, “Diversity
plus freedom of choice creates inequality, and the greater the diversity,
the more extreme the inequality” (Shirkey 2003).2 Emphasis on the
fact that one can contribute to a discussion and make one’s opinion
known misdirects attention from the larger system of communication
in which the contribution is embedded.
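The power-law claim can be made concrete with a small simulation. The sketch below is not from Dean's article; it assumes a hypothetical population of blogs whose inbound links follow a Pareto distribution (NUM_BLOGS and ALPHA are illustrative choices, not measured values) and reports how concentrated the resulting attention is.

# A minimal sketch illustrating the "80/20" pattern: if attention to blogs
# follows a power-law (Pareto) distribution, a small "A list" captures most
# of the links while the median blog gets very little.
import random

random.seed(1)
NUM_BLOGS = 10_000   # hypothetical population of blogs (assumption)
ALPHA = 1.2          # Pareto shape parameter (assumption, not measured)

# Draw a hypothetical inbound-link count for each blog, most popular first.
links = sorted((random.paretovariate(ALPHA) for _ in range(NUM_BLOGS)),
               reverse=True)

total = sum(links)
top_fifth = links[: NUM_BLOGS // 5]

print(f"Share of all links held by the top 20% of blogs: {sum(top_fifth) / total:.0%}")
print(f"Links to the single most popular blog: {links[0]:.0f}")
print(f"Links to the median blog:              {links[NUM_BLOGS // 2]:.1f}")

Run with these assumed parameters, the top fifth of blogs ends up holding the bulk of all links, which is the inequality Dean and Shirkey describe: more contributors means a steeper, not flatter, distribution of attention.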
To put it differently, networked communications are celebrated for
enabling everyone to contribute, participate and be heard. The form
this communication takes, then, isn’t concealed. People are fully
aware of the media, the networks, even the surfeit of information.
But, they act as if they don’t have this knowledge, believing in the
importance of their contributions, presuming that there are readers
for their blogs. Why? As I explain in the next section, I think it involves
the way networked communications induce a kind of registration
effect that supports a fantasy of participation.
THE FANTASY OF PARTICIPATION: TECHNOLOGY
FETISHISM
In their online communications, people are apt to express intense
emotions, intimate feelings, some of the more secret or significant
aspects of their sense of who they are. Years ago, while surfing
through Yahoo’s home pages, I found the page of a guy who featured
pictures of his dog, his parents, and himself fully erect in an SM-style harness. At the bottom of his site was the typical, “Thanks
for stopping by! Don’t forget to write and tell me what you think!”
I mention this quaint image to point to how easy many find it to
reveal themselves on the Internet. Not only are people accustomed
to putting their thoughts online but also in so doing they believe
their thoughts and ideas are registering – write and tell me what you
think! Contributing to the infostream, we might say, has a subjective
registration effect. One believes that it matters, that it contributes,
that it means something.
Precisely because of this registration effect, people believe that
their contribution to circulating content is a kind of communicative
action. They believe that they are active, maybe even that they are
making a difference simply by clicking on a button, adding their
name to a petition or commenting on a blog. Zizek describes this
kind of false activity with the term “interpassivity.” When we are
interpassive, something else, a fetish object, is active in our stead.
Zizek explains, “you think you are active, while your true position, as
embodied in the fetish, is passive . . .” (1997: 21). The frantic activity
of the fetish works to prevent actual action, to prevent something
from really happening. This suggests to me the way activity on the
Net, frantic contributing and content circulation, may well involve a
profound passivity, one that is interconnected, linked, but passive
nonetheless. Put back in terms of the circulation of contributions that
fail to coalesce into actual debates, that fail as messages in need of
response, we might think of this odd interpassivity as content that
is linked to other content, but never fully connected.
Weirdly, then, the circulation of communication is depoliticizing,
not because people don’t care or don’t want to be involved, but
because we do! Or, put more precisely, it is depoliticizing because the
form of our involvement ultimately empowers those it is supposed
to resist. Struggles on the Net reiterate struggles in real life, but
insofar as they reiterate these struggles, they displace them. And this
displacement, in turn, secures and protects the space of “official”
politics. This suggests another reason communication functions
fetishistically today: as a disavowal of a more fundamental political
disempowerment or castration. Approaching this fetishistic disavowal
from a different direction, we can ask, if Freud is correct in saying
that a fetish not only covers over a trauma but that in so doing it
also helps one through a trauma, what might serve as an analogous
socio-political trauma today? In my view, in the US a likely answer
can be found in the loss of opportunities for political impact and
efficacy. In the face of the constraining of states to the demands
and conditions of global markets, the dramatic decrease in union
membership and increase in corporate salaries and benefits at the
highest levels, and the shift in political parties from person-intensive
to finance-intensive organization strategies, the political opportunities
open to most Americans are either voting, which increasing numbers
choose not to do, or giving money. Thus, it is not surprising that many
might want to be more active and might feel that action online is a
way of getting their voice heard, a way of making a contribution.
Indeed, interactive communications technology corporations rose
to popularity in part on the message that they were tools for political
empowerment. One might think of Ted Nelson, Stewart Brand, the
People’s Computer Company and their emancipatory images of computing technology. In the
context of the San Francisco Bay Area’s anti-war activism of the early
seventies, they held up computers as the means to the renewal of
participatory democracy. One might also think of the image projected
by Apple Computers. Apple presented itself as changing the world,
as saving democracy by bringing technology to the people. In 1984,
Apple ran an ad for the Macintosh that placed an image of the
computer next to one of Karl Marx. The slogan was, “It was about
time a capitalist started a revolution.” Finally, one might also recall
the guarantees of citizens’ access and the lure of town meetings
for millions, the promises of democratization and education that
drove Al Gore and Newt Gingrich’s political rhetoric in the nineties
as Congress worked through the Information and Infrastructure
Technology Act, the National Information Infrastructure Act (both
passing in 1993) and the 1996 Telecommunications Act. These
bills made explicit a convergence of democracy and capitalism, a
rhetorical convergence that the bills brought into material form. As
the 1996 bill affirmed, “the market will drive both the Internet and
the information highway” (Dyer-Witheford 1999: 34–5). In all these
cases, what is driving the Net is the promise of political efficacy, of
the enhancement of democracy through citizens’ access and use of
new communications technologies. But, the promise of participation
is not simply propaganda. No, it is a deeper, underlying fantasy
wherein technology functions as a fetish covering over our impotence
and helping us understand ourselves as active. The working of such
a fantasy is clear in discussions of the political impact of a new
device, system, code or platform. A particular technological innovation
becomes a screen upon which all sorts of fantasies of political
action are projected.
We might think here of peer-to-peer file sharing, especially in
light of the early rather hypnotic, mantra-like appeals to Napster.
Napster – despite the fact that it was a commercial venture – was
heralded as a sea change; it would transform private property, bring
down capitalism. More than piracy, Napster was a popular attack on
private property itself. Nick Dyer-Witheford, for example, argues that
Napster, and other peer-to-peer networks, present “real possibilities
of market disruption as a result of large-scale copyright violation.”
He contends:
While some of these peer-to-peer networks – like Napster
– were created as commercial applications, others – such as
Free Net – were designed as political projects with the explicit
intention of destroying both state censorship and commercial
copyright . . . The adoption of these celebratory systems as a
central component of North American youth culture presents a
grassroots expansion of the digital commons and, at the very
least, seriously problematizes current plans for their enclosure.
(Dyer-Witheford 2002: 142)
Lost in the celebratory rhetoric is the fact that capitalism has never
depended on one industry. Industries rise and fall. Corporations like
Sony and Bertelsmann can face declines in one sector and still make
astronomical profits in others. Joshua Gamson’s point about the legacy
of Internet-philia is appropriate here: wildly displaced enthusiasm
over the political impact of a specific technological practice results
in a tendency “to bracket institutions and ownership, to research and
theorize uses and users of new media outside of those brackets,
and to let ‘newness’ overshadow historical continuity” (Gamson
2003: 259). Worries about the loss of the beloved paperback book
to unwieldy e-books weren’t presented as dooming the publishing
industry or assaulting the very regime of private property. Why should
sharing music files be any different?
It shouldn’t – and that is my point; Napster is a technological fetish
onto which all sorts of fantasies of political action are projected. Here
of course the fantasy is one deeply held by music fans: music can
change the world. And, armed with networked personal computers, the
weapons of choice for American college students in a not-so-radical
oh-so-consumerist entertainment culture, the wired revolutionaries
could think they were changing the world comforted all the while
that nothing would really change (or, at best, they could get record
companies to lower the prices on compact disks).
The technological fetish covers over and sustains a lack on the
part of the subject. That is to say, it protects the fantasy of an active,
engaged subject by acting in the subject’s stead. The technological
fetish “is political” for us, enabling us to go about the rest of our
lives relieved of the guilt that we might not be doing our part and
secure in the belief that we are after all informed, engaged citizens.
The paradox of the technological fetish is that the technology acting
in our stead actually enables us to remain politically passive. We
don’t have to assume political responsibility because, again, the
technology is doing it for us.
The technological fetish also covers over a fundamental lack or
absence in the social order. It protects a fantasy of unity, wholeness or
order, compensating in advance for this impossibility. Differently put,
technologies are invested with hopes and dreams, with aspirations to
something better. A technological fetish is at work when one disavows
the lack or fundamental antagonism forever rupturing (yet producing)
the social by advocating a particular technological fix. The “fix” lets
us think that all we need is to universalize a particular technology,
and then we will have a democratic or reconciled social order.
Gamson’s account of gay websites provides a compelling illustration
of this fetish function. Gamson argues that in the US, the Internet
has been a major force in transforming “gay and lesbian media from
organizations answering at least partly to geographical and political
communities into businesses answering primarily to advertisers
and investors” (2003: 260). He focuses on gay portals and their
promises to offer safe and friendly spaces for the gay community.
What he notes, however, is the way that these safe gay spaces now
function primarily “to deliver a market share to corporations.” As he
explains, “community needs are conflated with consumption desires,
and community equated with market” (Ibid.: 270–1). Qua fetish,
the portal is a screen upon which fantasies of connection can be
projected. These fantasies displace attention from their commercial
context.
Specifying more clearly the operation of the technological fetish
will bring home the way new communications technologies reinforce
communicative capitalism. I emphasize three operations: condensation, displacement and foreclosure.
The technological fetish operates through condensation. The complexities of politics – of organization, struggle, duration, decisiveness,
division, representation, etc. – are condensed into one thing, one
problem to be solved and one technological solution. So, the problem
of democracy is that people aren’t informed; they don’t have the
information they need to participate effectively. Bingo! Information
technologies provide people with information. This sort of strategy,
however, occludes the problems of organizing and political will. For
example, in the United States – as Mary Graham explains in her study
of the politics of disclosure in chemical emissions, food labeling and
medical error policy – transparency started to function as a regulatory mechanism precisely at a time when legislative action seemed
impossible. Agreeing that people had a right to know, politicians
could argue for warning labels and more data while avoiding hard
or unpopular decisions. Corporations could comply – and find ways
to use their reports to improve their market position. “Companies
often lobbied for national disclosure requirements,” Graham writes.
“They did so,” she continues,
because they believed that disclosure could reduce the chances
of tougher regulation, eliminate the threat of multiple state
requirements, or improve competitive advantage . . . Likewise,
large food processing companies and most trade associations
supported national nutritional labeling as an alternative to
multiple state requirements and new regulations, or to a
crackdown on health claims. Some also expected competitive
gain from labeling as consumers, armed with accurate information, increased demand for authentically healthful products.
(Graham 2002: 140)
Additional examples of condensation appear when cybertheorists
and activists emphasize singular websites, blogs and events. The
MediaWhoresOnline blog might be celebrated as a location of critical
commentary on mainstream and conservative journalism – but it
is also so small that it doesn’t show up on blog ranking sites like
daypop or Technorati.
The second mode of operation of the technological fetish is through
displacement. I’ve addressed this idea already in my description
of Napster and the way that the technological fetish is political for
us. But I want to expand this sense of displacement to account
for tendencies in some theory writing to displace political energies
elsewhere. Politics is displaced upon the activities of everyday or
ordinary people – as if the writer and readers and academics and
activists and, yes, even the politicians were somehow extraordinary.
What the everyday people do in their everyday lives is supposed to
overflow with political activity: conflicts, negotiations, interpretations,
resistances, collusions, cabals, transgressions and resignifications.
The Net – as well as cell phones, beepers and other communications
devices (though, weirdly, not the regular old telephone) – is thus
teeming with politics. To put up a website, to deface a website, to
redirect hits to other sites, to deny access to a website, to link to a
website – this is construed as real political action. In my view, this
sort of emphasis displaces political energy from the hard work of
organizing and struggle. It also remains oddly one-sided, conveniently
forgetting both the larger media context of these activities, as if there
were not and have not been left and progressive print publications
and organizations for years, and the political context of networked
communications – the Republican Party as well as all sorts of other
conservative organizations and lobbyists use the Internet just as
much, if not more, than progressive groups.
Writing on Many-2-Many, a group web log on social software, Clay
Shirkey invokes a similar argument to explain Howard Dean’s poor
showing in the Iowa caucuses following what appeared to be his
remarkable successes on the Internet. Shirkey writes:
We know well from past attempts to use social software to
organize groups for political change that it is hard, very hard,
because participation in online communities often provides
a sense of satisfaction that actually dampens a willingness
to interact with the real world. When you’re communing with
like-minded souls, you feel [original emphasis] like you’re
accomplishing something by arguing out the smallest details
of your perfect future world, while the imperfect and actual
world takes no notice, as is its custom.
There are many reasons for this, but the main one seems to
be that the pleasures of life online are precisely the way they
provide a respite from the vagaries of the real world. Both the
way the online environment flattens interaction and the way
everything gets arranged for the convenience of the user makes
the threshold between talking about changing the world and
changing the world even steeper than usual.3 (Shirkey 2004)
This does not mean that web-based activities are trivial or that
social software is useless. The Web provides an important medium
for connecting and communicating and the Dean campaign was
innovative in its use of social software to build a vital, supportive
movement around Dean’s candidacy. But, the pleasures of the medium
should not displace our attention from the ways that political change
demands much, much more than networked communication and the
way that the medium itself can and does provide a barrier against
action on the ground. As the Dean campaign also demonstrates,
without organized, mobilized action on the ground, without responses
to and from caucus attendees in Iowa, for example, Internet politics
remains precisely that – a politics of and through new media, and
that’s all.
The last operation of the technological fetish follows from the
previous ones: foreclosure. As I have suggested, the political purchase
of the technological fetish is given in advance; it is immediate,
presumed, understood. File sharing is political. A website is political.
Blogging is political. But this very immediacy rests on something
else, on a prior exclusion. And, what is excluded is the possibility
of politicization proper. Consider this breathless proclamation from
Geert Lovink and Florian Schneider:
The revolution of our age should come as no surprise. It
has been announced for a long time. It is anticipated in the
advantage of the open source idea over archaic terms of
property. It is based on the steady decline of the traditional
client-server architecture and the phenomenal rise of peer-to-peer technologies. It is practiced already on a daily basis: the
overwhelming success of open standards, free software and
file-sharing tools shows a glimpse of the triumph of a code that
will transform knowledge-production into a world-writable mode.
Today revolution means the wikification of the world; it means
creating many different versions of worlds, which everyone can
read, write, edit and execute. (Lovink and Schneider 2003; cf.
King 2004)
Saying that “revolution means the wikification” of the world
employs an illegitimate short circuit. More specifically, it relies on an
ontologization such that the political nature of the world is produced
by particular technological practices. Struggle, conflict and context
vanish, immediately and magically. Or, they are foreclosed, eliminated
in advance so as to create a space for the utopian celebration of
open source.
To ontologize the political is to collapse the very symbolic space
necessary for politicization, a space between an object and its
representation, its ability to stand for something beyond itself. The
power of the technological fetish stems from this foreclosure of
the political. Bluntly put, a condition of possibility for asserting the
immediately political character of something, web radio or open-source
code, say, is not simply the disavowal of other political struggles;
rather, it relies on the prior exclusion of the antagonistic conditions
of emergence of web radio and open source, of their embeddedness
within the brutalities of global capital, of their dependence for
existence on racialized violence and division. Technologies can and
should be politicized. They should be made to represent something
beyond themselves in the service of a struggle against something
beyond themselves. Only such a treatment will avoid fetishization.
THE FANTASY OF WHOLENESS: A GLOBAL ZERO
INSTITUTION
Thus far I’ve discussed the foreclosure of the political in communicative
capitalism in terms of the fantasy of abundance accompanying
the reformatting of messages as contributions and the fantasy of
participation accompanying the technology fetishism. These fantasies
give people the sense that our actions online are politically significant,
that they make a difference. I turn now to the fantasy of wholeness
further animating networked communications. This fantasy furthers
our sense that our contributions to circulating content matter by
locating them in the most significant of possible spaces – the global.
To be sure, I am not arguing that the world serves as a space for
communicative capitalism analogous to the one the nation provided
for industrial capitalism. On the contrary, my argument is that the
space of communicative capitalism is the Internet and that networked
communications materialize specific fantasies of unity and wholeness
as the global. The fantasies in turn secure networked transactions
as the Real of global capitalism.
To explain why, I draw from Zizek’s elucidation of a concept introduced
by Claude Levi-Strauss, the zero institution (Zizek 2001: 221–3). A
zero institution is an empty signifier. It has no determinate meaning
but instead signifies the presence of meaning. It is an institution
with no positive function – all it does is signify institutionality as
such (as opposed to chaos for example). As originally developed
by Levi-Strauss, the concept of the zero institution helps explain
how people with radically different descriptions of their collectivity
nevertheless understand themselves as members of the same tribe.
To the Levi-Straussian idea Zizek adds insight into how both the
nation and sexual difference function as zero institutions. The nation
designates the unity of society in the face of radical antagonism,
the irreconcilable divisions and struggles between classes; sexual
difference, in contrast, suggests difference as such, a zero level of
absolute difference that will always be filled in and overdetermined
by contextually given differences.
In light of the nation’s failing capacity to stand symbolically for
institutionality, the Internet has emerged as the zero institution
of communicative capitalism. It enables myriad constituencies to
understand themselves as part of the same global structure even
as they radically disagree, fail to co-link, and inhabit fragmented and
disconnected network spaces. The Internet is not a wide-open space,
with nodes and links to nodes distributed in random fashion such
that any one site is equally likely to get hits as any other site. This
open, smooth, virtual world of endless and equal opportunity is a
fantasy. In fact, as Albert-Laszlo Barabasi’s research on directedness
in scale-free networks makes clear, the World Wide Web is broken
into four major “continents” with their own navigational requirements
(Barabasi 2003: 161–78). Following links on one continent may
never link a user to another continent; likewise, following links in
one direction does not mean that a user can retrace links back to
her starting point. So despite the fact that its very architecture (like
all directed networks) entails fragmentation into separate spaces,
the Internet presents itself as the unity and fullness of the global.
Here the global is imagined and realized. More than a means through
which communicative capitalism intensifies its hold and produces its
world, the Internet functions as a particularly powerful zero institution
insofar as it is animated by the fantasy of global unity.
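Barabasi's point about directedness can be illustrated with a toy link graph. The sketch below is an editorial illustration, not data from his study: the page names and links are invented, and it simply shows that in a directed network reachability runs one way, so following links forward from one "continent" may never lead to another and may not lead back to where one started.

# A minimal sketch (hypothetical pages, invented links) of the asymmetry
# Dean cites: being able to click from A to B does not mean one can click
# back from B to A, and whole regions may be mutually unreachable.
from collections import deque

LINKS = {
    "blog_a":  ["blog_b", "portal"],
    "blog_b":  ["blog_a"],
    "portal":  ["news", "shop"],   # the portal links outward...
    "news":    ["shop"],
    "shop":    [],                 # ...but nothing links back to the blogs
    "forum_x": ["forum_y"],        # a separate "continent" entirely
    "forum_y": ["forum_x"],
}

def reachable(start: str) -> set[str]:
    """Return every page reachable from `start` by following links forward."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in LINKS[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("blog_a"))   # reaches the portal region...
print(reachable("shop"))     # ...but cannot retrace its way back
print(reachable("forum_x"))  # and never touches the other continent at all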
The Internet provides an imaginary site of action and belonging.
Celebrated for its freedoms and lack of boundaries, this imagined
totality serves as a kind of presencing of the global. On the one
hand the Internet imagines, stages and enacts the “global” of global
capital. But on the other this global is nothing like the “world” – as if
such an entity were possible, as if one could designate an objective
reality undisturbed by the external perspective observing it or a
fully consistent essential totality unruptured by antagonism (Zizek
2002: 181).
The oscillations in the 1990s debate over the character of the
Internet can clarify this point. In the debate, Internet users appeared
either as engaged citizens eager to participate in electronic town
halls and regularly communicate with their elected representatives,
or they appeared as web-surfing waste-of-lives in dark, dirty rooms
downloading porn, betting on obscure Internet stocks or collecting
evidence of the US government’s work with extraterrestrials at Area 51
(Dean 1997). In other versions of this same matrix, users were either
innocent children or dreadful war-game playing teenage boys. Good
interactions were on Amazon. Bad interactions were underground
and involved drugs, kiddie porn, LSD and plutonium. These familiar
oscillations remind us that the Net has always been particular and
that struggles over regulating the Internet have been struggles over
what kind of particularity would and should be installed. Rather
than multiply far-reaching, engaging and accessible, the Internet
has been constituted in and through conflict over specific practices
and subjectivities. Not everything goes.
We might even say that those who want to clean up the Internet,
who want to get rid of or zone the porn and the gambling, who want
to centralize, rationalize and organize commercial transactions in
ways more beneficial to established corporations than to small, local
businesses, express as a difference on the Internet what is actually
the starker difference between societies traversed and mediated
through electronic communications and financial networks and those
more reliant on social, interpersonal and extra-legal networks. As
Ernesto Laclau argues, the division between the social and the
non-social, or between society and what is other to it, external and
threatening, can only be expressed as a difference internal to society
(Laclau 1996: 38). If capital today traverses the globe, how can the
difference between us and them be expressed? The oscillations in
the Internet debate suggest that the difference is between those who
are sexualized, undisciplined, violent, irrational, lazy, excessive and
extreme on the one hand, and those who are civilized, mainstream,
hard-working, balanced and normal on the other. Put in psychoanalytic
terms, the other on the Internet is the Real other – not the other I
imagine as like me and not the symbolic other to be recognized and
respected through abstract norms and rights. That the other is Real
brings home the fact that the effort to clean up the Internet was
more than a battle of images and involved more than gambling and
porn. The image of the Internet works as a fantasy of a global unity.
Whatever disrupts this unity cannot be part of the global.
The particularity of the fantasies of the global animating the
Internet is striking. For example, Richard Rogers’ research on linking
practices on the World Wide Web brings out the Web’s localism and
provincialism. In his account of the Dutch food safety debate, Rogers
notes “little in the way of ‘web dialogue’ or linkage outside of small
Dutch ‘food movement’” (Rogers 2002). Critics of personalized news
as well as of the sheltered world of AOL click on a similar problem
– the way the world on the Web is shrunken into a very specific image
of the global (Patelis 2000). How would fringe culture fans of blogs
on incunabula.org or ollapodrida.org come into contact with sites
providing Koranic instruction to modern Muslims – even if there were
no language problems? And, why would they bother? Why should they?
Indeed, as a number of commentators have worried for a while now,
opportunities to customize the news and announcements one reads
– not to mention the already undigestible amount of information
available on topics in which one is deeply interested – contribute to
the segmentation and isolation of users within bubbles of opinions
with which they already agree.
The particularity of these fantasies of the global is important
because this is the global that networked communications produce.
Our networked interactions produce our specific worlds as the global
of global capital. They create the expectations and effects of communicative capitalism, expectations and effects that necessarily
vary according to one’s context. And, precisely because the global is
whatever specific communities or exchanges imagine it to be, anything
outside the experience or comprehension of these communities either
does not exist or is an inhuman, otherworldly alien threat that must
be annihilated. So, if everything is out there on the Internet, anything
I fail to encounter – or can’t imagine encountering – isn’t simply
excluded (everything is already there), it is foreclosed. Admitting
or accessing what is foreclosed destroys the very order produced
through foreclosure. Thus, the imagined unity of the global, a fantasy
filled in by the particularities of specific contexts, is one where there
is no politics; there is already agreement. Circulating content can’t
effect change in this sort of world – it is already complete. The only
alternative is the Real that ruptures my world, that is to say the evil
other I cannot imagine sharing a world with. The very fantasy of a
global that makes my networked interactions vital and important
results in a world closed to politics on the one hand, and threatened
by evil on the other.
CONCLUSION
A Lacanian commonplace is that a letter always arrives at its destination. What does this mean with respect to networked communications? It means that a letter, a message, in communicative capitalism is not really sent. There is no response because there is no arrival. There is just the contribution to circulating content.
Many readers will likely disagree. Some may say that the line I draw between politics as circulating content and politics as governance makes no sense. Dot.orgs, dot.coms, and dot.govs are all clearly
interconnected and intertwined in their personnel, policies and positions. But, to the extent that they are interconnected, identifying
any impact on these networks by critical opponents becomes all
the more difficult.
Other readers might bring up the successes of MoveOn (www.
moveon.org). From its early push to have Congress censure Bill
Clinton and “move on,” to its presence as a critical force against the
Iraq war, to recent efforts to prevent George W. Bush from acquiring
a second term, MoveOn has become a presence in mainstream
American politics and boasts over two million members worldwide.
In addition to circulating petitions and arranging e-mails and faxes to
members of Congress, one of MoveOn’s best actions was a virtual
sit-in: over 200,000 of us called into Washington, DC at scheduled
times on the same day, shutting down phone lines into the capital
for hours. In early 2004, MoveOn sponsored an ad contest: the
winning ad would be shown on a major television network during
the Super Bowl football game. The ad was great – but CBS refused
to broadcast it.
As I see it, far from being evidence against my argument, MoveOn
exemplifies technology fetishism and confirms my account of the
foreclosure of the political. MoveOn’s campaigns director, Eli Pariser,
says that the organization is “opt-in, it’s decentralized, you do it from
your home” (Boyd 2003: 14). No one has to remain committed or be
bothered with boring meetings. Andrew Boyd, in a positive appraisal
of the group, writes that “MoveOn’s strength lies . . . in providing a
home for busy people who may not want to be a part of a chapter-based organization with regular meetings . . . By combining a nimble
entrepreneurial style with a strong ethic of listening to its members
– via online postings and straw polls – MoveOn has built a responsive,
populist and relatively democratic virtual community” (Ibid.: 16). Busy
people can think they are active – the technology will act for them,
alleviating their guilt while assuring them that nothing will change
too much. The responsive, relatively democratic virtual community
won’t place too many (actually any) demands on them, fully aware
that its democracy is the democracy of communicative capitalism
– opinions will circulate, views will be expressed, information will
be accessed. By sending an e-mail, signing a petition, responding
to an article on a blog, people can feel political. And that feeling
feeds communicative capitalism insofar as it leaves behind the
time-consuming, incremental and risky efforts of politics. MoveOn
likes to emphasize that it abstains from ideology, from division.
While I find this disingenuous on the surface – MoveOn’s politics are
progressive, anti-war, left-democratic – this sort of non-position strikes
me as precisely that disavowal of the political I’ve been describing:
it is a refusal to take a stand, to venture into the dangerous terrain
of politicization.
Perhaps one can find better reasons to disagree with me when
one looks at alternative politics, that is when one focuses on the role
COMMUNICATIVE CAPITALISM: CIRCULATION AND THE FORECLOSURE OF POLITICS
of the Internet in mass mobilizations, in connecting activists from all
over the world and in providing an independent media source. The
February 15, 2003 mobilization of ten million people worldwide to
protest the Bush administration’s push against Iraq is perhaps the
most striking example, but one might also mention MoveOn’s March
16, 2003 candlelight vigil, an action involving over a million people
in 130 countries. Such uses of the Internet are vitally important for
political activists – especially given the increasingly all-pervasive
reach of corporate-controlled media. Through them, activists establish
social connections to one another – even if not to those outside
their circles. But this does not answer the question of whether such
instances of intense social meaning will drive larger organizational
efforts and contribute to the formation of political solidarities with
more duration. Thus, I remain convinced that the strongest argument
for the political impact of new technologies proceeds in precisely the
opposite direction, that is to say in the direction of post-politics. Even
as globally networked communications provide tools and terrains
of struggle, they make political change more difficult – and more
necessary – than ever before. To this extent, politics in the sense
of working to change current conditions may well require breaking
with and through the fantasies attaching us to communicative
capitalism.
NOTES
1. A thorough historical analysis of the contribution would spell out
the steps involved in the uncoupling of messages from responses.
Such an analysis would draw out the ways that responses to the
broadly cast messages of television programs were configured as
attention and measured in terms of ratings. Nielsen families, in
other words, responded for the rest of us. Yet, as work in cultural
studies, media and communications has repeatedly emphasized,
ratings are not responses and provide little insight into the actual
responses of viewers. These actual responses, we can say, are
uncoupled from the broadcast message and incorporated into
other circuits of communication.
2. I am grateful to Drazen Pantic for sending me a link to this site.
3. Special thanks to Auke Towslager for this url and many others
on blogspace.
ACKNOWLEDGMENTS
I am grateful to Lee Quinby and Kevin Dunn for comments on an earlier draft of this paper and to John Armitage and Ryan Bishop for immeasurable help and patience. My thinking on this paper benefited greatly from exchanges with Noortje Marres, Drazen Pantic, Richard Rogers and Auke Towslager.
REFERENCES
Agamben, G. (2000), Means Without End: Notes on Politics, trans. by
V. Binetti and C. Casarino, Minneapolis: University of Minnesota
Press.
Barabasi, A.-L. (2003), Linked: How Everything is Connected to
Everything Else and What It Means, New York: Plume.
Boyd, A. (2003), “The Web Rewires the Movement,” The Nation
(August 4/11): 14.
Dean, J. (1997), “Virtually Citizens,” Constellations 4(2) (October):
264–82.
—— (2002), Publicity’s Secret: How Technoculture Capitalizes on
Democracy, Ithaca: Cornell University Press.
—— (2004), “The Networked Empire: Communicative Capitalism and
the Hope for Politics," in P.A. Passavant and J. Dean (eds), Empire's New Clothes: Reading Hardt and Negri, New York: Routledge, pp.
265–88.
Dyer-Witheford, N. (1999), Cyber-Marx: Cycles and Circuits of Struggle in
High Technology Capitalism, Urbana: University of Illinois Press.
—— (2004), “E-Capital and the Many-Headed Hydra,” in G. Elmer
(ed.), Critical Perspectives on the Internet, Lanham: Rowman and
Littlefield.
Dyson, E. (1998), “The End of the Official Story,” Brill’s Content (July/
August): 50–1.
Gamson, J. (2003), "Gay Media, Inc.: Media Structures, the New Gay Conglomerates, and Collective Sexual Identities," in M. McCaughey and M.D. Ayers (eds), Cyberactivism: Online Activism in Theory and Practice, New York: Routledge.
Graham, M. (2002), Democracy by Disclosure: The Rise of Technopopulism, Washington, DC: The Brookings Institution.
Habermas, J. (1984), The Theory of Communicative Action, Volume I:
Reason and the Rationalization of Society, trans. by T. McCarthy,
Boston: Beacon Press.
Hardt, M. and Negri, A. (2000), Empire, Cambridge, MA: Harvard
University Press.
King, J. (2004), “The Packet Gang,” Mute 27 (Winter/Spring), available
at www.metamute.com.
Laclau, E. (1996), Emancipations, London: Verso.
Laclau, E. and Mouffe, C. (1986), Hegemony and Socialist Strategy,
London: Verso.
Lovink, G. and Schneider, F. (2003), “Reverse Engineering Freedom,”
available at http://www.makeworlds.org/?q=node/view/20
Matic, V. and Pantic, D. (1999), "War of Words," The Nation (November 29), available at http://www.thenation.com/doc.mhtml?i=19991129&s=matic
Patelis, K. (2000), "E-Mediation by America Online," in R. Rogers (ed.), Preferred Placement: Knowledge Politics on the Web, Maastricht: Jan van Eyck Academie, pp. 49–64.
Rogers, R. (2002), “The Issue has Left the Building,” paper presented
at the Annual Meeting of the International Association of Internet
Researchers, Maastricht, the Netherlands, October 13–16.
Sassen, S. (1996), Losing Control?, New York: Columbia University
Press.
Shirky, C. (2003), "Power Laws, Weblogs, and Inequality," available at http://shirky.com/writings/powerlaw_weblog.html. First published February 8, 2003 on the "Networks, Economics, and Culture" mailing list.
—— (2004), "Is Social Software Bad for the Dean Campaign?" Many-to-Many, posted on January 26, available at http://www.corante.com/many/archives/2004/01/26/is_social_software_bad_for_the_dean_campaign.php.
Zizek, S. (1989), The Sublime Object of Ideology, London: Verso.
—— (1997), The Plague of Fantasies, London: Verso.
—— (1999), The Ticklish Subject, London: Verso.
—— (2001), Enjoy Your Symptom (second edition), New York:
Routledge.
—— (2002), "Afterword: Lenin's Choice," in Revolution at the Gates: Selected Writings of Lenin from 1917, London: Verso.
The Internet’s Influence on the Production
and Consumption of Culture: Creative
Destruction and New Opportunities
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Paul DiMaggio
A. Barton Hepburn Professor of Sociology and Public Affairs, Princeton University
princeton.edu/~artspol/pd_prof.html
Illustration: Ignacio Molano
Paul DiMaggio is A. Barton Hepburn Professor of Sociology and Public Affairs at Princeton University, where he also serves as Director of Graduate Studies in the Department of Sociology, Director of the Center for the Study of Social Organization, and is a member of the Executive Committee of the Center for Information Technology Policy. A graduate of Swarthmore College, he earned his PhD in Sociology at Harvard University in 1979. Over the course of his career, he has undertaken research and published papers about such topics as arts institutions, culture and inequality, political polarization, economic networks, and information technology. He has written about the relationship between Internet use and social inequality, and teaches a regular course with a computer science colleague on information and public policy at Princeton's Woodrow Wilson School of International and Public Affairs. DiMaggio is a member of the American Academy of Arts and Sciences and the American Academy of Political and Social Science.
Sites and services that have changed my life: spotify.com, scholar.google.com, amazon.com
First, technologies don’t change us. They provide affordances (Gibson
1977) that allow us to be ourselves, to do the things we like or need to do,
more easily. The availability of these affordances may change behavior by
reducing the cost (in time or money) of certain activities (e.g., watching excerpts from movies or comedy shows) relative to other activities (watching
network television). But the Internet will not make the politically apathetic
vote, or the atheist go to church.
Second, when we talk about the role of the Internet in the lives of individuals,
we must not forget that the technology is still absent from or only marginally
part of the lives of many persons, even in the econ­omically advanced societies,
where between 10 and 30 percent of the public lack broadband access (Miniwatt
2013), many of those who have access fail to reap its benefits (Van Deursen and
Van Dijk 2013) and far fewer actually produce online content (Schradie 2011).
For those on the wrong side of the digital divide, the main impact of the Internet
may be reduced access to public and commercial services that have migrated
online. Participation is even lower, of course, in much of the Global South.
Paul DiMaggio
364/365
At certain points, I may use language that implies that the Internet has
had an effect on the world or on its users. The reader should be aware that
talk about the Internet effect, although at times a useful shorthand, should
never be taken too seriously, for at least three reasons.
The Internet’s Influence on the Production and
Consumption of Culture
In this essay, I consider the impact of the Internet on the arts and media, focusing, though not exclusively, on film, journalism, and, especially, popular
music, which serves as an extended case study. For many of these creative
fields, the Internet has been “a dis­ruptive technology” (Christensen 1997),
reshaping industries and rendering long-established business strategies
unsupportable, while introducing new ways to organize production and
distribution. I will consider these economic changes, but also discuss the
implications for creative workers and for the public at large.
Communication and Culture
The Internet’s Influence on the Production
and Consumption of Culture: Creative
Destruction and New Opportunities
Finally, what we call the Internet is a moving target, a product not only
of technological ingenuity but of economic strategy and political struggle.
What we think of as the Internet in the advanced in­dust­rial democracies
reflects a particular regulatory regime through which states allocate rights
to intellect­ual property and, through regulation, influence the cost and
potential profitability of investments in dif­ferent kinds of networking technologies (Benkler 2006; Crawford 2013). Technological change, in­flect­ed by
economic incentives and regulatory constraint, guarantees that today’s
Internet will be as re­mote by 2025 as the Internet of 2000 seems today.
The Internet is a technology that unleashes powerful opportunities. But
the realization of these opportunities is dependent, first, on the inclination
of humans to exploit them in creative ways; and, second, on the capacity of
entrenched stakeholders in both the private sector and the state to use
such tools as copyright, regulation, surveillance, and censorship to stand
in the way. Where the Internet’s effect on culture lies on the continuum
between dystopic and euphoric—to what extent it ripens into a sphere of
unbridled creativity and communication, to what extent it develops into
some combination of conventional entertainment medium and instrument
of political domination—will depend on both economic incentives and
public policies that structure the way those incentives operate. In this
sense, then, the Internet’s future cultural impact is both uncertain and
ours to make.
The Internet and Cultural Production
By cultural production, I refer to the performing and visual arts, literature,
and the media industries. A key distinction is between artistic activities
that require the co-presence of artistic workers (or of artworks) and consumers (live theater and dance, musical performance, art museums and
galleries) on the one hand; and artistic activities that produce artifacts
subject to digital distribution (recorded music, film and video). The Internet,
thus far, has had the most marked effects on the latter.
Art with the Personal Touch
The performing arts, museums, and restaurants are perhaps least vulnerable to the Internet's impact for two reasons. First, their appeal is sensual: no digital facsimile satisfies our desire to see a dancer perform, hear music in a live setting, stand before a great work of art, or eat a freshly prepared meal. Second, because it is difficult to make live performances and exhibitions highly profitable, in most of the world these activities have been left to public or nonprofit institutions that are ordinarily less dynamic in their response to environmental change (DiMaggio 2006). Indeed, in the U.S., at least, theaters, orchestras, and museums have been tentative in their embrace of the new technology. Almost all respondents to a recent study of 1,200 organizations that had received grants from the U.S.'s federal arts agency reported that their organizations maintained websites, used the Internet to sell tickets and post videos, and maintained a Facebook site. Yet just one third employ a full-time staff member primarily responsible for their online presence, suggesting a somewhat restrained engagement with social media (Thomas and Purcell 2013).1 In sum, then, it appears that, in the U.S. at least, conventional noncommercial cultural organizations have adopted the Internet, but only at the margins.
1. Just one-third of the organizations surveyed responded; if, as seems likely, organizations with a web presence and dedicated employees were more likely to respond to a survey about this topic than others, the results almost certainly overestimate arts organizations' web activities.
Yet it is the outliers who are more interesting if we think about the potential influence of the Internet on the arts. Consider, for example, MUVA (Museo Virtual de Artes), a virtual museum of contemporary art hosted in Uruguay and devoted to work by Uruguayan artists. This architecturally impressive building (which exists only online) offers several exhibits simultaneously. The visitor uses a mouse to move about the exhibit (hold the cursor to the far right or left to move quickly, closer to the center to stroll more slowly, click to zoom onto an image and view documentation), much as one would a physical gallery. (The site also has affordances that physical galleries do not offer, such as the opportunity to change the color of the wall on which the work is hung.)2 To be sure, this is not yet a true
museum experience—one has little control over one's distance from the work, latencies are high, and navigation is at times clunky—but it provides an opportunity to see fascinating art that is otherwise inaccessible, and technological advances will almost certainly make such experiences even more compelling within a few years. Such developments, which could vastly increase the currently tiny proportion of museums' holdings on public view (as opposed to in storage), will be important to people who visit museums and care about art. But their cultural impact will be modest because people who regularly visit museums and attend performing-arts events constitute a relatively small and, at least in some countries, declining share of the population. Such declines, one should note, began in the pre-Internet era and cannot be attributed to the technology's growth (DiMaggio and Mukhtar 2004; Schuster 2007; Shekova 2012).
2. Museo Virtual de Artes, http://muva.elpais.com.uy/
Creative Destruction in the Cultural Industries
The Internet has had a deeper impact on those cultural industries where the
product can be digitized—i.e., converted into bits and reassembled at an
end user’s computer, tablet, or cell phone. This happened quickly with
photographs and text; and then, as bandwidth and transmission speeds
expanded, music and film. And as it occurred, dominant business models
fell, leaving some industries in disarray. The Austrian economist Joseph
Schumpeter (1942) referred to this process as “creative destruction”—
destructive because of its harsh impact on existing firms, but creative
because of the economic vitality it unleashed.
Analytically, we must distinguish two effects of digitization,
one on cultural production and one on distribution. In traditional
industries, production and distribution were largely, although
not completely, unified, and, outside the fine-arts field, creators
eager to reach a market were typically employed by or under
contract to content-producing firms that also promoted and
distributed their creations.
Digitization both reduced the cost of distribution and made it much simpler: I can post a photo, MP3, or video to Facebook in just a moment, and my friends can distribute it to their friends ad infinitum. Were incumbent firms able to capture such efficiencies, this would have expanded their bottom lines; but, of course, they have often been reluctant and slow to do so, at times with dramatic results. To cite the two most notable examples of creative destruction, since 1999, when Internet use began to take off in the U.S., sales of recorded music as a share of GDP have declined by 80 percent, and newspaper revenues have fallen by 60 percent (Waterman and Ji 2011).
But the affordances of digitization for production have been just as important, if often overlooked, perhaps because they are connected to user-owned devices (computers, soundboards and mixers, cameras and video editors) rather than to the Internet itself. In many creative fields—photography, digital art, recorded music, radio programming (podcasting), journalism (blogging)—the cost of production has declined markedly, opening the fields to many more players. While the percentage of people who are culture producers remains small—remember that technologies provide affordances but do not change what people want to do—these numbers have grown and barriers to entry have declined, at least for creative workers who have no illusions about supporting themselves financially. The result, for people who are sufficiently engaged in both technology and the arts to care, is a far less centralized, more democratic, system in which specialized fan networks replace mass-mediated cultural markets.
A second result is the elision, in some fields (like photography), of the distinction between professional and amateur (Lessig 2009). In fields with strong business models, amateurs are practitioners who do not care to make art for profit, or are not accomplished enough to do so. In an increasing number of fields, amateurs are accomplished practitioners for whom the returns offer, at best, a partial livelihood. Thus far, the democratizing impact of technological change seems to have drawn people into cultural production more quickly than declining returns have driven them out. In many fields, we are seeing a regime in which small groups of artists interact intensely with one another and with sophisticated and committed publics, reviving (as Henry Jenkins [2006] has noted) the intimacy of folk cultures, but in genres in which innovation is prized. This combination may produce a golden age of artistic innovation and achievement (although it is also possible that, due to this decentralization of production and consumption, relatively few people will be aware of it).
In some industries, creative workers have succeeded in establishing new
kinds of firms for which the Internet is central.
Difficult conditions often root out more vulnerable midsized firms (or, as in the case of the book and record industries,
lead the incumbent firms to concentrate on large projects and
neglect niche markets).
When this occurs, a process called "resource partitioning" (Carroll, Dobrev, and Swaminathan 2002) may lead to an increase in the number of small firms, producing specialized products for specialized markets. Such newcomers are often sole proprietorships, which gives them much flexibility. Whereas large media firms must net high profit margins to survive because they compete for investment with firms in every sector, private firms need earn only enough to motivate the proprietor to keep them in business. Thus podcasters, independent record labels, and community media outlets can survive by producing products for which no radio network, music conglomerate, or newspaper chain will compete.
In other words, we must question, on two counts, the widely held belief that the Internet has marched through the creative industries laying waste on all sides. First, if we look at statistics on the creative industries in the U.S. (which is the largest producer and where statistics are most accessible), we see that not all industries have suffered marked declines, and some that have were doing badly before the Internet's arrival. Movie theater revenues accounted for about the same proportion of GDP in 2009 as in 1999; and cable television revenues rose dramatically, more than making up for declining broadcast television and home video revenues. Book sales declined between 1999 and 2009, but not much more than they had during the previous decade (Waterman and Ji 2011).
Second, when one speaks of destruction, one must distinguish between the Internet's impact on incumbent firms—the oligopolists who controlled most of the market for film, recorded music, and books in 2000—and its impact on the entertainment industries defined more broadly to include all of the creative workers and distribution channels that bring their work to consumers (Masnick and Ho 2012). The creative system as a whole might flourish, even as historically dominant firms and business models face grave challenges.
Let us consider three of the industries that have been affected. Film is an outlier in that it has weathered the storm especially well. The newspaper industry has been especially hard hit, with potentially significant consequences for democratic societies that rely on a vigorous press. And the recorded music industry has experienced the greatest disruption, and has adapted in the most interesting and perhaps promising ways.
Film
As we have seen, the film industry has survived the Internet's arrival with relatively little damage, especially compared to the newspaper and recording industries (BLS 2013). And this was the case despite the industry's complaints about illegal downloads and despite the massive volume of BitTorrent traffic, much of which entails the illegal transfer of film and video. The number of establishments showing motion pictures and videos and the number of persons they employ both declined in the U.S. by more than 10 percent between 2001 and 2011 (in part due to consolidation, and an increase in the number of screens per theater) (BLS 2013). Similarly, between 2003 and 2012, in the U.S. and Canada, the number of tickets sold, and the value of these tickets in real dollars, both declined about 10 percent. But sharply rising box office in the Asia Pacific region and Latin America more than made up for this decline. Moreover, outside of film exhibition, the film and video industries have held their own since 2000, both in terms of number of establishments and number of employees (BLS 2013). Other sources of revenue have supplemented theater admissions. And technological change has dramatically lowered the cost of film production, bringing more independents into the industry and increasing the number of films. During the first decade of the twenty-first century, the number of films released to theaters grew by nearly 50 percent. Significantly, growth occurred outside of the major firms, which focused their energies on blockbusters, releasing many fewer movies at the same time that the
number of films released by independents doubled (MPAA 2012).
Moreover, the decade witnessed an equally dramatic increase in films
produced outside of Europe and North America (Masnick and Ho 2012,
10). Between 2005 and 2009, India, which produces the largest number of
feature films globally, increased production (i.e., number of films) by almost
25 percent. Nigeria, which is second, rose by more than 10 percent. China
passed Japan to move into fourth place, behind the U.S., increasing from
260 films in 2006 to 448 in 2009 (Acland 2012).
Remarkably enough, prosperity has occurred even as film piracy—the massive transmission of product across BitTorrent P2P (peer-to-peer) networks—has remained substantial and, thus far at least, largely impervious to copyright-enforcement efforts (Safner 2013). Research suggests that film downloading may only minimally influence box office receipts. Using information on release date variations, BitTorrent use, and box office across countries, Danaher and Waldfogel (2013) conclude that downloading depresses box-office receipts for U.S. movies by about 7 percent—but that
this cost is not intrinsic but rather reflects delays in international release
dates (since they find no such effect in the domestic market). Presumably
theater admissions would be even higher were it not for the increased
availability of films through other channels, like cable television, subscription sites (e.g., Netflix), rentals (e.g., Amazon), and online sales (e.g.,
Amazon, iTunes). A quasi-experimental study of downloads concludes that the availability of legal film downloads (through iPods) depresses illegal downloading by about 5 to 10 percent, but does not affect sales of physical DVDs. (Whether this will continue to be the case as consumers who prefer DVDs age out of the population remains to be seen.) (Danaher et al. 2010)
Why has the film industry been relatively immune to the ravages experienced by the recording industry? There are probably six reasons. First, film companies had become proficient at new forms of distribution—licensing their product to cable stations, selling and renting physical media through video stores and other outlets—well before the rise of the Internet, thereby gaining experience that made digital transmission less disruptive than it might otherwise have been.
Second, greater bandwidth requirements for film piracy gave them a few more years to adjust to the new world, enabling them to avoid the antagonistic posture that the record industry took toward many of its customers. Observing the feckless response of the music industry may well have given the film companies a second-mover advantage.
Third, related to the first two points, the film industry was more effective in reaching agreements with online distributors who licensed their wares for distribution. Before the rise of the Internet the film companies had already changed their business models from one that depended almost exclusively on revenues from rentals to theatrical outlets to a mix of theatrical release, sale and rental of tapes and CDs to individuals through retail establishments, and sale of rights to broadcasters. When it came time to move to sale by download, or rental of streaming media, they had plenty of experience negotiating deals.
Fourth, since the end of the studio system, major film companies have organized movie production on a project basis—with each film, in effect, its own small organization. This mode of organization both reduces risk through cost sharing and, at the same time, reduces the ratio of fixed to variable costs, making it easier to adapt to changing economic conditions.
Fifth, because their core expertise is in marketing and distributing films, film companies can also serve as distributors for independent filmmakers. Even when their share of production declined, they could benefit from the expansion of the independent studios.
Finally, whereas consumers listen to pirated versions of music tracks the same way that they use copies they purchase legally, the movie companies' major distribution channel, theatrical release, offers an experience that is quite different from watching a film at home. Many consumers who could download a film for free or rent it from Amazon or their cable provider for less than the cost of two tickets are still willing to pay for the experience of spending a night out at a movie theater, a complement to the film itself that cannot be downloaded.
Newspapers
Few industries have declined more dramatically since the rise of the Internet than the newspaper industry. Two events in the U.S. in summer
2013 are emblematic of this trend: Amazon founder Jeff Bezos purchased
the Washington Post for a modest sum, while the company that owned it
retained other holdings, including an online education firm; and the New
York Times sold the Boston Globe to local interests for just 6 percent of
what it had paid for it two decades earlier. Overall, aggregate U.S. newspaper ad receipts (print and online) had fallen by more than half during
the period the Times owned the Globe, and in 2010 were at about the same
levels (in real dollars) as in 1960.3 Moreover, ad revenues increased more
or less steadily during the postwar era until 2000 (around the time that
the Internet became mainstreamed), and then began a steep and uninterrupted decline. Since 2001, newspaper employment has fallen by almost
50 percent (BLS 2013).
3. To calculate these figures I downloaded ad data in spreadsheet form from the Newspaper Association of America (NAA 2013) and GDP deflator data from the website of the St. Louis Fed (FRED 2013), using the latter to deflate the former.
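The deflation step described in this note is simple arithmetic: divide each year's nominal receipts by that year's GDP deflator and rescale to a chosen base year. A minimal Python sketch of that calculation, using hypothetical placeholder figures rather than the actual NAA or FRED series, might look like this:

# Minimal sketch of the deflation arithmetic described in note 3.
# The figures below are hypothetical placeholders, not NAA or FRED data.
nominal_receipts = {1960: 3.7, 2000: 48.7, 2010: 25.8}   # nominal ad receipts, $ billions (illustrative)
gdp_deflator = {1960: 18.6, 2000: 81.9, 2010: 110.0}     # GDP deflator index (illustrative)
BASE_YEAR = 2010  # express every year in constant base-year dollars

def to_real(nominal, deflator, base_year):
    """Convert nominal dollar values to real (constant base-year) dollars."""
    base = deflator[base_year]
    return {year: value * base / deflator[year] for year, value in nominal.items()}

real_receipts = to_real(nominal_receipts, gdp_deflator, BASE_YEAR)
for year in sorted(real_receipts):
    print(f"{year}: {real_receipts[year]:.1f} billion {BASE_YEAR} dollars")

Comparing real values across years in this way is what allows the claim that 2010 receipts sat at roughly the level of 1960, despite very different nominal figures.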
In the U.S., at least, newspapers have depended upon an advertising-driven model, which the Internet has devastated in two ways. First, it almost immediately destroyed the demand for classified advertising which had accounted for a large part of most newspapers' revenues. When one
wishes to sell a used end table, book, or article of clothing, eBay and Craig’s
List—searchable sites that reach an international market—are simply
more efficient media for anyone operating online. (Here the affordances of
the Internet for consumers interacted with those of high-speed computing
and wireless communication for firms like FedEx and DHL, rendering the
Internet’s global reach more valuable by making long-haul shipping more
reliable and more affordable.) Similarly, the market for want ads, another
staple newspaper revenue source, dried up as newspapers were displaced
by online companies like Monster.com and more specialized employment
listings. Newspapers in the U.S. have also suffered collateral damage from
the Internet, as online shopping and auction sites have largely eliminated
the generalist department stores that had for decades been major purchasers of newspaper advertising space.
Second, newspapers lost the ability to sell their readers' attention to
large-scale advertisers as more and more readers accessed their content
through third-party links, most notably those provided by Google News.
Those links went back to the newspapers themselves, so the problem was not lost readership so much as lost ad revenue.
Newspapers were vulnerable because they had long used attractive content—headlines, national politics, local coverage, and sports—to cross-subsidize the less popular content, like financial or science reporting, that appeared in the same document.
Moreover, both kinds of content were nestled amongst print advertisements that the reader skimming through could hardly avoid. The Internet eliminated this fixed proximity, enabling readers to cherry-pick the content in which they were most interested and to avoid advertisements as they did. By decoupling more popular from less attractive content, including ads, the online model made journalism far more difficult to monetize.
Because of this problem, newspapers have found it difficult to respond to the Internet's challenge. Although major newspapers have sporadically attempted to require consumers to pay for website access, these efforts have failed. In response, publishers have laid off reporters, set their employees to reporting for multiple platforms (Boczkowski 2010), and slashed budgets for investigative reporting. Sober observers have suggested that the industry will require philanthropic or government support to survive (Schudson and Downie 2009).
Major online news sites like Google News or The Huffington Post still rely mostly on the reporting of others. Thus we face the ironic possibility that just as online distribution has made news more readily available than ever, the supply of news will decline, both in quantity (fewer newspapers generating fewer stories) and quality (as papers pull away from in-depth reporting and rely more on wire services). There is some evidence of resource partitioning in the industry, as laid-off journalists and graduates of journalism schools have created new entities—some businesses, some nonprofit organizations, some websites sponsored by larger nonprofit entities—devoted to community journalism and investigative reporting (Nee 2013). One report identified 172 nonprofit outlets doing original reporting,
71 percent of which had been established since 2008. Most of these focused on local (rather than national or international) news, and about one
in five emphasized investigative reporting. And most were staffed sparely,
relying on part-time workers and, in many cases, volunteers, and very
lightly capitalized (Mitchell et al. 2013). A directory of citizen and community news sites in the U.S. lists more than one thousand, most of which are noncommercial (Knight Community News Network 2013). Lacking a revenue model other than self-exploitation, such entities face highly uncertain prospects. Patch.com emerged in 2007 as an effort to provide news online to underserved suburban communities in the U.S., and was acquired by media firm AOL two years later. Like its noncommercial counterparts it appears to have suffered from undercapitalization and difficulty in monetizing its project. In August 2013, the parent company eliminated
300 of its 900 community sites and laid off many of its paid employees.
Until journalist-run news sites find a way to produce serious news that is
self-sustaining, the great promise of the Internet as a platform for democratic and commercially unconstrained journalism will be overshadowed
by the technology’s threat to the sources of news and information on which
citizens had previously relied.
The Music Industry: A Case Study
The recording industry has suffered the most at the hands of technological change, especially if we define recording industry in terms of unit sales
of recorded music by integrated multinational recording and distribution
firms, of whom five dominated most music sales (90 percent in the U.S.
and approximately 75 percent globally [Hracs 2012]) by the late 1990s.
Until 2012, the industry was marked by a steady decline in sales, employment, and establishments. According to the Bureau of Labor Statistics, which
includes not just record labels but agents, music studios, and other intermediaries in its counts, employment in the U.S. sound recording industries
has declined steadily since 2001, falling by about 40 percent by 2012. Over
that same period, the number of establishments in the industry fell by more
than 25 percent (BLS 2013). The majors signed fewer artists and released
fewer albums in 2009 than they had even five years earlier (IFPI 2010).
Globally, the revenues from recorded music in all its forms fell by more than
40 percent between 1999 (its peak) and its nadir in 2011 (Smirke 2013). Particular subsectors like retail record stores (which suffered both from illegal and, later, legal downloading) and recording studios (which were harmed by the growth of better software and cheaper hardware available to independent musicians) declined even more sharply (Leyshon 2008).
Recording industry trade associations blamed the decline on illegal P2P file sharing—Napster, Grokster, and a range of successor technologies. File sharing did cut into record sales, but this occurred in the context of a broader failure on the part of the industry to adjust to technological change. Economists who study file sharing have, with some exceptions, found moderate negative effects of file sharing on music purchases, though a few have found no effects or very weak negative effects (Waldfogel 2012b; Tschmuck 2010). File sharing almost certainly has harmed music sales, but does not account for the entire decline, some portion of which likely reflects a combination of the end of the CD product cycle, the absence of a new hot genre (like rock or rap) to boost sales, negative consumer reaction to high prices and industry lawsuits against student downloaders, and the emergence of new legal, but less lucrative, modes of music access, such as Pandora (a San Francisco-based freemium service that provides personalized radio streams based on user-provided information) and Spotify (a Swedish-based freemium streaming site with [as of late summer 2013] a worldwide catalogue of more than 20 million tracks that permits users to create and share playlists).4 In so far as the latter depress sales (by producing less revenue than equivalent distribution using the physical-album model), their impact can be credited to the Internet; but most of the drop in sales occurred before these services became popular and, indeed, digital sales and licensing appear to have revived the industry and now account for about 40 percent of the industry's global revenues.
4. The number of tracks comes from Spotify, which notes that not all tracks are licensed for all countries in which it operates. http://press.spotify.com/us/information/ (accessed August 29, 2013).
Indeed, developments in the music field remind us that technological destruction is creative, in two senses. First, file sharing produces winners as well as losers. The big losers, of course, are the integrated multinational record companies and the small percentage of artists who are fortunate enough to get recording contracts with them. But such artists, although they account for a large share of economic activity, are a small minority. Other losers may be artists at the margins of commercial success, who might have received contracts in an earlier time; and organizational forms that relied on physical record sales or on business from the integrated producers.
For most musicians, however, file sharing is part of a complex of technological career-building tools that create or
expand opportunities to obtain at least some income from
one’s musical work.
Relatively few musicians have been able to support themselves through
income attributable to copyright. More commonly, they knit together earnings from combinations of such activities as live performance, sale of
merchandise, teaching, music production, and session work (DiCola 2013).
In many cases the Internet has improved opportunities for non-copyright
linked earnings. Musicians use their websites, for example, to market
sweatshirts, recordings, and other merchandise. Whereas musicians once
gave concerts to promote album sales, today many give the music away
(e.g., by posting videos on YouTube or offering free downloads from their
websites or Facebook pages), viewing the music as a means of increasing performance revenues. Research suggests that although file sharing
reduces album sales, it actually increases the demand for live concerts,
especially for artists who have not reached (and perhaps will never reach)
stardom (Mortimer, Nosko, and Sorensen 2012). Not surprisingly, then, surveys indicate that while the most commercially successful artists decry file
sharing, many musicians who record their own music are either indifferent
to or supportive of the practice (Madden 2004; DiCola 2013).
The shift away from integrated music companies has created opportunities for small firms so that, although revenues for the industry are down,
the music field’s artistic vitality is robust. Just as indie film production
has more than made up for declining releases by major film studios, indie
record companies have more than made up the slack in album production caused by the major recording companies' malaise. Between 1998 and 2010,
album releases by major labels declined by about 40 percent. During that
time, releases by independent record companies increased dramatically, passing the majors in 2001 and peaking in 2005. After 2005, their numbers declined, as the number of self-released albums (of which there were just a handful in 1998) rocketed to fill in the gap (Waldfogel 2012a). Despite the decline in revenues, the overall number of releases grew steadily from 1998 through 2009, as artists have used the Internet to take control of their fates. During the process, the percentage of all sales accounted for by top-selling albums has declined, and the percentage of best sellers produced by the independents has risen, increasing the diversity of the music available for purchase and streaming (Waldfogel 2012a).
Furthermore, there is evidence from the U.S., Spain, and Sweden that, as record sales fell, musicians' concert revenues increased steadily. Just as theater distribution, and the non-downloadable social element in moviegoing, protected film revenues, so the concert market, which offers an experience that cannot be downloaded, has sustained the earnings of many musicians (Albinsson 2013; Krueger 2005; Montoro-Pons and Cuadrado-García 2011).
To be sure, we ought not to romanticize the shift: many of the musicians signing with independent labels or producing their own albums might prefer to have contracts with the majors; and many who write their own tunes resent the low royalty payments they receive from streaming services. The streaming services themselves have yet to find a profitable business model, and time will tell whether they survive in their current form. Moreover, in one sense, the industry has shifted to an economy of self-exploitation, whereby educated creative workers labor for far less financial return than they might receive in another line of work. Nonetheless we are witnessing a sea change within the music industry that would have been impossible without the affordances the Internet offers.
What are these affordances?
1. Digital recording technology and the capacity to make and mix masters at a small fraction of the cost required in the analog era. Although these technologies are technically independent from the Internet, their development has been vastly accelerated by the rise of the MP3 as a means of moving music from one place to another. The decline in production costs, coupled with the virtually zero marginal cost of online distribution, dramatically lowered barriers to entry, so that every artist can, in effect, create his or her own record company.
2. The Internet has become a powerful means of marketing new music. Not
all artists do create their own companies, of course, for three reasons.
First, most artists still want some number of physical records (CDs
or, increasingly, vinyl) and it is convenient to pool the skills required
to contract with manufacturers and distribute physical units. Second,
contracting with digital intermediaries like LastFM, Spotify, Deezer, or
Saavn is also subject to skill efficiencies. Third, and most important, the
Internet has done less to reduce the cost of marketing, and arguably
has made it more difficult to gain attention in a more densely populated
musical marketplace. The major firms still can invest in media ad campaigns, outreach to radio stations, and major promotional tie-ins, albeit
for many fewer albums.
Most recording artists, however, rely on the Internet—Facebook, Twitter,
and similar sites—to announce new products, sell merchandise (which
may be more lucrative than the music), set up tours and other events,
and communicate with fans. This approach seems rational as by 2010
more than 50 percent of American consumers used the Internet to learn
about new music, while only 32 percent primarily encountered new music on radio (Waldfogel 2012b).
3. The Internet is itself a platform for the publication of albums, many of
which may exist primarily in digital form. Galuszka (2012) identified more
than 569 online record companies (or netlabels) that employ Creative
Commons licenses in lieu of copyright, ceding users a wide range of
rights contingent upon their crediting the authors for the works in question. Promot­ion is almost entirely through websites, blogs, and social
media. Most of these labels were relatively young, three-quarters were
managed by one or two people, and just 13 percent of the owners viewed
them as potential sources of income. Yet most of them had released 16
or more albums and the top 10 percent had more than 50 releases.
4. New forms of technology enable new forms of sociability, built around the technologies they employ. Whereas the music that most people listened to was for many years produced and distributed by large corporations, increasingly music is created and distributed in diffuse networks connected by a combination of face-to-face relations and social media.
As Manuel Castells noted at the dawn of the Internet age (1996), the increasing importance of networks as opposed to formal organization is a feature of contemporary societies in many fields. In the most vital music scenes, dense local networks employ social media both to intensify local participation and to reach audiences around the nation or the world.
Barry Wellman (Wellman et al. 2003), writing of the Internet's impact on social relations more generally, has called this combination of local and global impact glocalization. Contemporary pop acts, except for the most commercially successful, are rooted in place: bands and singer-songwriters establish close relations with one another and with local club owners, playing in one another's bands, sharing information and cooperating to produce shows (Pacey, Borgatti, and Jones 2011; Cummins-Russell and Rantisi 2012). With the emergence of ubiquitous portable wireless devices, messaging becomes a central means of communication within these densely connected groups: an artist may text a club to check on sound equipment, text other musicians to put together a show, promote it to his fans on Facebook and Twitter (counting on the most ardent followers to retweet it to their networks), and count on fans to take videos of the performance and post them on YouTube or circulate them as Instagrams.
These dense networks provide basic support, opportunities for artists to try out new songs and develop their crafts, and to build connections they may use throughout their careers (Lena 2012). In that sense, this is nothing new. Dynamic musical movements often experienced gestation in densely connected networks of interacting artists and fans: take, for
example, the rise of the bebop style in jazz in New York in the 1940s (DeVeaux 1999), of political folk music in Greenwich Village in the early 1960s (Van Ronk and Wald 2006), of acid rock in San Francisco a few years later (Gleason 1969), or of punk music in London in the 1970s (Crossley 2008). Each of these movements exemplified glocalization, in the sense that it drew on and maintained deep local roots while using technology (the vinyl record or analog tape) to reach a global audience. Artists found ingenious technological ways to build community and fan loyalty before the Internet, as well: as early as 1983 and through the early 2000s, the Brooklyn band They Might Be Giants used a home telephone answering machine to offer a "Dial a Song" service to fans who called a special phone number. At its peak in the 1980s, the band added a new song every few days, publicizing the service through classified ads in youth-oriented papers and the distribution of cards and stickers in New York City's proto-hipster neighborhoods.5
5. Documented at "This Might be a Wiki: The They Might Be Giants Knowledge Base," http://tmbw.net/wiki/Dial-A-Song (accessed August 28, 2013).
Yet the situation today is different; first, because technology enables
the community to scale upward and outward, and, second, because the
endgame is no longer a contract with a major record company. In the old
model, the artist could build a local following. But such a following could
only grow nationally (or, more rarely, globally) if she or he was taken under
a major company’s wing and promoted heavily to such intermediaries as
record stores and radio stations. Today, the artist may use social media
to build out a base, releasing a tune on SoundCloud or a record on Spotify
and LastFM. Radio stations are broadcasters, seeking the single stream
of programming that will yield the largest audience and constrained by
the limits of time to play only a limited number of tunes. Online streaming
services, by contrast, compete to offer the greatest number of selections,
with playlists tailored to each listener’s tastes.
Getting onto an Internet music provider's playlist is simple; getting played once you are on it is much more difficult.
Gaining attention from the multitude of music blogs, some local and many national or global in focus, is one strategy for building a reputation. Competition is stiff, but the Internet enables the performer to build on positive press. If, in 1990, I (as a consumer) read about a new band in Rolling Stone, I could only have heard that band's music if my local radio station happened to play it or if I chose to buy the album. In 2013, if I read about a new band at Pitchfork.com, I can go to its website, listen to (or perhaps download) some of its songs, listen to more tunes on Spotify or a similar service, and watch it play on YouTube. If I like the music enough, I can follow the band (and get links to new downloads) on one of countless specialized pop-music blog sites, put some songs on a Deezer or Spotify playlist, download them from iTunes, or even purchase a CD on Amazon.
Artists themselves build ties across space that scale outward. Some connections are still face to face. Performers in a local community share resources and information, and the more entrepreneurial may create small record labels that record the others' albums or work with venues to organize performances, asking affiliated bands to join the bill. In some cases, the activity may scale up to larger labels or, in the case of groups like the Disco Biscuits or Insane Clown Posse, to annual musical festivals that draw a national or international audience. Artists who tour through the indie club scene may help performers they meet organize tours to other regions or countries.
Still other connections are digitally mediated through artists' community sites, one of the most interesting of which is Soundcloud.com, a rapidly growing German-based service with 40 million users as of fall 2013 (Pham 2013). In addition to making new files available to their fans, participating artists post their compositions as waveforms and listeners can post comments linked to particular moments in the piece. Especially in the case of electronic compositions (e.g., DJ mixes), interaction can be both enthusiastic and technical, sometimes ripening into transnational computer-mediated collaborations. Such interactions, or other long-distance social-media encounters, may lead to tours, with artists using their Facebook or Twitter accounts to announce their intentions, arrange gigs, and, once gigs are arranged, secure lodging from local fans. Indeed, in some cases, the tours themselves are organized by fan bases that mobilize through the Internet (Baym 2011).
This case study has described the emergence of a web-enabled,
popular-music industry, organized around social networks that, at once,
are intensely local yet also global in scope, combining face-to-face and
digital relationships in new ways. This part of the industry, network based
and organized less by the market than by self-exploitation and mutual
assistance (what Baym [2011] refers to as a “gift economy”) produce
countless musical tracks, innumerable concerts, and much musical innovation. The Internet did not create this segment of the music industry,
which has existed to varying degrees from time immemorial. But it has
fortified it, enabled it, and enhanced its role in the overall ecology of
contemporary culture.
Concluding Observations
In closing, I address two themes. First, to what extent can we generalize about the Internet’s influence on the cultural industries, and how likely are the developments I have described to persist into the future? Second, how do the changes I have described map onto larger trends in contemporary culture?
The Internet and the Cultural Industries
Here I will make three points. First, the Internet’s influence varies from industry to industry, so that facile generalizations must be avoided. Second,
there are reasons to believe that current adjustments in some fields at
least may be unstable. Third, the way that creative workers and cultural
industries use the Internet will depend on public policies.
We have seen that the Internet’s influence depends, first, on the extent to which digital substitutes for analog experience are likely to satisfy consumers; second, on the extent to which producers compete for financial investments (and must thus maintain competitive profits), as opposed to needing only enough funding to persist; and, third, on the ability of incumbent firms to exploit changes inherent in digital production and distribution. The Internet has had relatively little impact on traditional theaters, ballet companies, and orchestras, because such organizations provide a service that requires physical presence in an actual audience. The same is true, a fortiori, for cuisine, the value of which emerges out of the sensual engagement of the consumer and the product. Institutions that exhibit the visual arts have also been affected only marginally, although it is possible that virtual museums may develop a more substantial presence. Workers in these sectors are keenly aware of the Internet, of course, and websites and social media play an important role in marketing, sales, and fundraising in all of them. But the Internet has not challenged the basic business models.
It is in those industries where the core product—a movie, news story, or musical track—can be downloaded and enjoyed in private that the Internet has been an agent of creative destruction. Yet, as we have seen, each industry is somewhat different. The film industry, with its project-based production regime and a product that (as long as people value the theater experience and theaters must rent their product from studios) retains strong social externalities, has made the transition somewhat gracefully, becoming less centralized but no less profitable. Although film distribution will change, the position of filmmakers—both conglomerates and independents—appears relatively stable.
The rise of illegal downloading and the reluctance of many consumers to purchase music; the shift in the legal market from the sale of packaged albums (in which strong tracks induced consumers to, in effect, purchase weaker ones) to consumer choice and track-based online sales; and, finally, the rise of streaming services and licensing as a source of revenue, have together upended the business models of the major integrated music production companies that dominated the industry in the 1990s. Note, however, that pain has been felt most keenly by the major companies and their shareholders. By contrast, the Internet appears to have increased the availability of live music (returns from which, unlike returns from real-time film exhibition, are in most cases not appropriable by the majors) and produced a more vigorous set of popular-music institutions organized around a combination of local and technology-assisted networks in which online services and face-to-face relationships interact.
At the same time, it is somewhat unclear where this new regime is headed. Although revenues for the major companies are beginning to turn up after their steep decline, the new business model is far from certain. Streaming services, despite immense growth and consumer acceptance, have trouble converting free-service users to paid subscribers, and, as a result, provide only relatively modest revenues to record companies and vanishingly small royalties to composers. For its part, the networked musical economy that has emerged in the vacuum left by the majors’ retrenchment depends heavily on a kind of economic self-exploitation: contributed effort or acceptance of below-market incomes by the musicians, micro-label owners, bloggers, promoters, and fans (some of whom play all these roles) whose efforts make the system work. If, as seems likely, people’s tolerance for self-exploitation declines as their family obligations grow, time will tell whether the supply of participants will sustain itself sufficiently to maintain the vitality that we now observe.
Finally, the newspaper industry, and the field of journalism, faces a particularly difficult future, given the reluctance of readers to pay for its product (especially when they can obtain much of it legally from newspaper and magazine websites) and given the rise of online advertising media that have made newspaper advertising less attractive to traditional purchasers. (And, of course, as paid circulation declines, so do advertising rates for physical media.) Displaced journalists have produced an efflorescence of journalistic blogging, and some have combined forces to produce successful web-based publications and even to undertake serious investigative journalism. But how long such efforts can survive, and how widely they can scale, remains uncertain. The issue is less whether newspapers will survive than whether they will be willing and able to pay for the quality of reporting—especially local and international news and investigative reporting—that healthy democracies require.
These developments will, of course, be affected by public policy. Government subsidies for the press, for example, would change the economics of journalism, both by providing support directly and by freeing newspapers from capital markets’ demands for competitive returns on investment. Similarly, government support for local media centers with high-speed internet and media production equipment—a program pioneered by Brazilian Minister of Culture Gilberto Gil in his Pontos de Cultura program—could sustain the vitality of independent journalists, musicians, filmmakers, and other creative persons working outside the framework of the major media industries (Rogério 2011).
Intellectual property policy has been an especially highly contested field of struggle. Confronted by downloading, media firms have fought back in country after country, succeeding in tightening restrictions on downloading and increasing penalties in France, Sweden, the United States, and many other nations. Whether such legal changes will be effective, however, is questionable, and, of course, they only address one part of the media companies’ troubles. And all too often, media companies have sought copyright expansion that has endangered traditional notions of fair use (including secondary uses by artists and educators), without solving the underlying problem of illegal digital distribution (Lessig 2004).
In the longer run, the structure of the Internet itself may change depending on the outcome of debates over the relative rights and obligations of content providers, online businesses, cable television companies, and other internet service providers, as well as regulation of the flow of information and the openness of systems in mobile devices. The issues involved are technical, and they will be critically important in determining whether the Internet will continue to be as open and useful to creative workers and their publics as it is today (Benkler 2006; Crawford 2013).
The Internet, the Arts, Information, and Cultural Change
Ultimately, the Internet’s influence on the production and use of culture is conditioned by broader trends that shape the way that people choose to use the affordances that technology offers. Here I consider just a few of these broader possibilities.
Can the Internet cultivate an expansion of creativity? In much of the world, the rise of the Internet appears to have come at a time of increased interest in many forms of cultural expression, including the arts, political debate, and religion.
Although some have argued that this is a consequence of the emergence
of the Internet as a public forum, it is far more likely that, as Castells (1996)
anticipated, changes in the organization of human societies have produced
cultural effects—including greater fluidity and salience of individual identity—that have enhanced many people’s appetite for culture. Indeed, there is some evidence that the Internet’s rise has coincided with a period of artistic democratization. In the field of music, for example, one indicator is retail activity in musical instrument and supply stores: if more people are playing music, these stores should thrive. Indeed, in the U.S., sales of musical instruments and accessories boomed, rising almost 50 percent between 1997 and 2007.6 It is possible that the increased availability of
diverse forms of music online as well as increased vitality of local music
scenes accounts for some of this change.
6. U.S. Census Bureau, http://factfinder2.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk and http://www.census.gov/econ/industry/hist/h45114.htm
Will we benefit from increased cultural diversity? The rise of music streaming services with many millions of subscribers, the increased tendency of art museums to display some of their holdings online, the ability to view images and performances of the past on YouTube, or to easily stream films from many cultures and eras, have all increased dramatically the availability of what Chris Anderson (2006) called the “long tail” of market demand. Technology has reduced the cost of storing inventory—which now requires space on a server rather than a warehouse—making it easier for firms to profit from supplying artifacts for which there is relatively little demand. That this has occurred is indisputable. The effect on taste is less certain, for two reasons. First, culture is an experience good: how much one gets out of listening to music or viewing a museum exhibit depends, in part, on how much experience one has with this kind of art beforehand. (This is even more true for artistic styles or genres that are intellectually challenging or based on novel or unfamiliar aesthetic conventions [Caves 2000].) Second, psychologists recognize that most people respond poorly to choice, especially if it is in a field in which they are not already well versed: after a fairly low threshold their subjective utility declines as the number of options amongst which they must choose rises (Schwartz 2008). For those who are passionate about music, art, or film, the enhanced availability that the Internet provides is a tremendous boon. For those who are indifferent it is a matter of no concern. But for those in between, who enjoy the arts but are disinclined to invest much of their time in learning about them, expanded choice may be more irksome than beneficial.
A world of omnivores? Sociologists have argued that people’s relationship to culture has changed, so that educated and sophisticated culture consumers no longer specialize in traditional works of high culture (if they ever did) but instead distinguish themselves through easy familiarity with a wide range of aesthetic genres and styles (Peterson and Kern 1996). This development antedated the Internet, but the technology provides extensive affordances for its growth. To be sure, research in France, Spain, and the U.S. suggests that some high-status people, at least, still embrace the conventional divide between high and popular culture (Bourdieu 1984; Coulangeon 2007; Goldberg 2011; Lizardo 2005). And we have little sense of just how many styles it pays to be familiar with. But certainly insofar as social changes have increased the tendency of educated people to explore and become familiar with a wide range of cultural forms, the Internet makes that much easier.
Or will the Internet lead to cultural balkanization? At the onset of the Internet, legal scholar Cass Sunstein (2001) predicted that the vast array of views and information on the Internet would lead to cultural and political balkanization, as consumers exposed themselves only to views that were congenial. It turned out that Americans, at least, did not need the Internet to accomplish that: the emergence of politically polarized networks on cable news effectively accomplished the same thing. But the underlying concern remains and, indeed, has grown stronger, especially in the U.S., where privacy is less protected than in the EU. The cause of this concern is the proliferation of technologies like third-party cookies and browser fingerprinters that track one’s behavior across multiple websites, the rise of information-aggregation companies that produce extensive profiles of Internet users by combining information from many sources, and the use of this information by online retailers and content providers to decorate users’ web pages with personalized content that reflects the tastes and interests they have already acquired (Turow 2011). In other words, the Internet lays a table before us of unprecedented abundance, and then tries to keep us from that table by constantly showing us reflections of ourselves. Clearly, the effect of these technologies will depend on the proclivities of users: the path of least resistance will be to use the Internet in ways that constantly reinforce one’s prior views and tastes. What we do not yet know is to what extent people will choose to overcome these tendencies and explore the wider range of ideas and styles that the Internet can provide.
A new form of cultural inequality? For many years, political scientists have explored what they call the “knowledge gap hypothesis”—the paradoxical notion that if good information becomes cheaper, better-informed members of the public will become even more well informed, and less-informed citizens will fall even further behind. The assumption behind this expectation is that well-informed people value information more highly than people with little information, so that they will acquire more of it if the price goes down. Markus Prior’s research (2005) indicates that, as far as political information is concerned, this is true of the Internet as it has been of other media. Another study (Tepper and Hargittai 2009) demonstrated similar dynamics in the field of music: students from higher social class backgrounds used a broader range of websites and P2P sources to explore new kinds of music, developing greater expertise and getting more out of their online experience than students from more humble backgrounds.
The implications of this research are sobering. The Internet provides a remarkably rich supply of art, music, and information, enabling citizens to dig deeper into the policy issues before them, to learn more about their worlds, and to enjoy an unprecedented wealth of aesthetic experience. But it is unclear just how many people this potential will benefit. Indeed, it seems that this expanded supply may be welcomed by a relatively small group of highly educated people, those who are already engaged in politics, involved in the arts, and conversant with the Internet’s affordances. Other users may be unaware of the possibilities or unwilling to take the time to explore a range of new ideas and unfamiliar options. And the significant minorities who still lack meaningful Internet access will, of course, have no choice. The possibility that the Internet may usher us into a world of even greater cultural and informational inequality—one in which an educated elite gets its information and entertainment online from a vast range of diverse sources, while the majority settle for the offerings of chastened and diminished giant media firms—poses a challenge to both cultural and political democracy.
References
Acland, Charles.
“From International
Blockbusters to National
Hits: Analysis of the 2010
UIS Survey on Feature
Film Statistics.” UNESCO
Institute of Statistics,
Bulletin No. 8. February
2012. http://www.uis.unesco.
org/culture/Documents/
ib8-analysis-cinemaproduction-2012-en2.pdf
Albinsson, Staffan.
“Swings and Roundabouts:
Swedish Music Copyrights
1980–2009.” Journal of
Cultural Economics 37
(2013): 175–84.
Anderson, Chris.
The Long Tail: Why the Future of
Business is Selling Less of
More. New York: Hyperion,
2006.
Baym, Nancy K.
“The Swedish Model: Balancing
Markets and Gifts in the
Music Industry.” Popular
Communication 14 (2011):
22–38.
Benkler, Yochai.
The Wealth of Networks: How
Social Production Transforms
Markets and Freedom. New
Haven: Yale University Press,
2006.
BLS (U.S. Bureau of Labor
Statistics).
“Spotlight on Statistics:
Media and Information.”
2013. U.S. Bureau of Labor
Statistics, http://www.bls.
gov/spotlight/2013/media/
(accessed August 26, 2013).
Boczkowski, Pablo.
News at Work: Imitation in an Age
of Information Abundance.
Chicago: University of
Chicago Press, 2010.
Bourdieu, Pierre.
Distinction: A Social Critique
of the Judgment of Taste.
Cambridge, MA: Harvard
University Press, 1984.
Carroll, Glenn R., Stanislav
Dobrev, and Anand
Swaminathan.
“Organizational Processes of
Resource Partitioning.”
Research in Organizational
Behavior 24 (2002): 1–40.
Castells, Manuel.
The Rise of the Network Society.
London: Blackwell, 1996.
Caves, Richard E.
Creative Industries. Cambridge,
MA: Harvard University
Press, 2000.
Christensen, Clayton.
The Innovator’s Dilemma: When
New Technologies Cause
Great Firms to Fail. Boston:
Harvard Business Press,
1997.
Coulangeon, Philippe, and
Yannick Lemel.
“Is ‘Distinction’ Really Outdated?
Questioning the Meaning of
the Omnivorization of Taste
in Contemporary France.”
Poetics 35 (2007): 93–111.
Crawford, Susan.
Captive Audience: The Telecom
Industry and Monopoly Power
in the New Gilded Age. New
Haven: Yale University Press,
2013.
Crossley, Nick.
“Pretty Connected: The Social
Network of the Early UK
Punk Movement.” Theory,
Culture and Society 25
(2008): 89–116.
Cummins-Russell, Thomas, and
Norma M. Rantisi.
“Networks and Place in
Montreal’s Independent
Music Industry.” Canadian
Geographer 56 (2012): 80–97.
Danaher, Brett, Samita
Dhanasobhon, Michael D. Smith,
and Rahul Telang.
“Converting Pirates without
Cannibalizing Purchasers:
The Impact of Digital
Distribution on Physical
Sales and Internet Piracy.”
Social Science Research
Network (SSRN), March
3, 2010. http://ssrn.com/
abstract=1381827
Danaher, Brett, Michael D. Smith,
Rahul Telang, and Siwen Chen.
“The Effect of Graduated
Response Anti-Piracy Laws
on Music Sales: Evidence
from an Event Study in
France.” Social Science
Research Network (SSRN),
January 21, 2012. http://ssrn.
com/abstract=1989240
Danaher, Brett, and Joel
Waldfogel.
“Reel Piracy: The Effect of Online
Film Piracy on International
Box Office Sales.” Social
Science Research Network,
January 16, 2012. http://ssrn.
com/abstract=1986299
DiMaggio, Paul.
“Nonprofit Organizations and
the Intersectoral Division
of Labor in the Arts.” In
Walter Powell and Richard
Steinberg, eds. The Nonprofit
Sector: A Research Handbook.
2nd ed. New Haven: Yale
University Press, 2006.
DiMaggio, Paul, and Toqir
Mukhtar.
“Arts Participation as Cultural
Capital in the United States,
1982–2002: Signs of Decline?”
Poetics 32 (2004): 169–94.
Downie, Leonard, Jr., and
Michael Schudson.
“The Reconstruction of American
Journalism.” Columbia
Journalism Review, October
19, 2009. http://www.cjr.
org/reconstruction/the_
reconstruction_of_american.
php?page=all
Foster, Pacey, Stephen P.
Borgatti, and Candace Jones.
“Gatekeeper Search and
Selection Strategies:
Relational and Network
Governance in a Cultural
Market.” Poetics 39 (2011):
247–65.
Gibson, James J.
“The Theory of Affordances.”
In Robert Shaw and James
Bradford, eds. Perceiving,
Acting, and Knowing. Hillsdale,
New Jersey: Lawrence J.
Erlbaum, 1977. 67–82.
Gleason, Ralph.
The Jefferson Airplane and the
San Francisco Sound. New
York: Ballantine, 1969.
Goldberg, Amir.
“Mapping Shared
Understandings Using
Relational Class Analysis: The
Case of the Cultural Omnivore
Reexamined.” American
Journal of Sociology 116
(2011): 1397–1436.
Hracs, Brian J.
“A Creative Industry in
Transition: The Rise of
Digitally Driven Independent
Music Production.” Growth
and Change: A Journal of
Urban and Regional Policy 43
(2012): 442–61.
IFPI (International Federation of the Phonographic Industry).
“IFPI Digital Music Report 2010:
Music How, Where and When
You Want It.” International
Federation of the
Phonographic Industry, 2010.
http://www.ifpi.org/content/
library/dmr2010.pdf
Knight Community News
Network.
“Directory of Community News
Sites.” Knight Community
News Network, 2013. http://
www.kcnn.org/citmedia_
sites/ (accessed August 24,
2013).
Krueger, Alan B.
“The Economics of Real
Superstars: The Market
for Rock Concerts in the
Material World.” Journal of
Labor Economics 23 (2005):
1–30.
Lena, Jennifer.
Banding Together: How
Communities Create Genres
in Popular Music. Princeton:
Princeton University Press,
2012.
Lessig, Lawrence.
Free Culture: How Big Media
Uses Technology and the Law
to Lock Down Culture and
Control Creativity. New York:
The Penguin Press, 2004.
———.Remix: Making Art and
Commerce Thrive in the
Hybrid Economy. New York:
Penguin Press, 2009.
Leyshon, Andrew.
“The Software Slump?: Digital
Music, the Democratisation
of Technology, and the
Decline of the Recording
Studio Sector within
the Musical Economy.”
Environment and Planning 41
(2009): 1309–31.
Jenkins, Henry.
Convergence Culture: Where Old
and New Media Collide. New
York: NYU Press, 2006.
DiCola, Peter.
“Money From Music: Survey
Evidence on Musicians’
Revenue and Lessons About
Copyright Incentives.”
Arizona Law Review
(forthcoming). Social Science
Research Network (SSRN),
January 9, 2013. http://ssrn.
com/abstract=2199058
FRED (Federal Reserve Bank of
St. Louis).
“Gross Domestic Product
(Implicit Price Deflator).”
Federal Reserve Bank of
St. Louis, July 31, 2013
(updated). http://research.
stlouisfed.org/fred2/
series/A191RD3A086NBEA/
downloaddata?cid=21
DeVeaux, Scott.
The Birth of Bebop: A Social and
Musical History. New York:
Picador, 1999.
Lizardo, Omar.
“Can Cultural Capital Theory
Be Reconsidered in the
Light of World Polity
Institutionalism? Evidence
from Spain.” Poetics 33
(2005): 81–110.
Madden, Mary.
“Artists, Musicians and the
Internet.” Pew Internet
and American Life Project,
December 5, 2004. http://
www.pewinternet.org/~/
media//Files/Reports/2004/
PIP_Artists.Musicians_
Report.pdf.pdf
Masnick, Michael, and Michael Ho.
“The Sky is Rising: A Detailed
Look at the State of the
Entertainment Industry.”
TechDirt, 2012a. http://www.
techdirt.com/skyisrising/
———.“The Sky is Rising:
Regional Study—Germany,
France, UK, Italy, Russia,
Spain.” Computer &
Communications Industry
Association, 2012b. http://
www.ccianet.org/CCIA/
files/ccLibraryFiles/
Filename/000000000733/
Sky%20is%20Rising%20
2013.pdf
Miniwatt Marketing Group.
“Internet World Stats:
Usage and Population
Statistics.” Internet World
Stats, 2013. http://www.
internetworldstats.com/
(accessed August 20, 2013).
McLuhan, Marshall.
Understanding Media: The
Extensions of Man. New York:
New American Library, 1964.
Mitchell, Amy, Mark Jurkowitz,
Jesse Holcomb, Jodi Enda, and
Monica Anderson.
“Nonprofit Journalism: A
Growing But Fragile Part of
the Nonprofit News System.”
Project for Excellence in
Journalism, Pew Research
Center, Washington, D.C.,
2013. http://www.journalism.
org/analysis_report/
nonprofit_journalism
Montoro-Pons, Juan D., and
Manuel Cuadrado-García.
“Live and Prerecorded Popular
Music Consumption.” Journal
of Cultural Economics 35
(2011): 19–48.
Mortimer, Julie Holland, Chris
Nosko, and Alan Sorensen.
“Supply Responses to Digital
Distribution: Recorded Music
and Live Performances.”
Information Economics and
Policy 24 (2012): 3–14.
MPAA (Motion Picture
Association of America).
“Theatrical Market Statistics
2012.” Motion Picture
Association of America,
2012. http://www.mpaa.org/Resources/3037b7a4-58a2-4109-8012-58fca3abdf1b.pdf
NAA (Newspaper Association of
America).
“Annual Newspaper Ad
Revenue.” Newspaper
Association of America, April
2013. http://www.naa.org/~/
media/NAACorp/Public%20
Files/TrendsAndNumbers/
Newspaper-Revenue/
Annual-Newspaper-Ad-Revenue.ashx
Nee, Rebecca Coates.
“Creative Destruction: An
Exploratory Study of How
Digitally Native News
Nonprofits Are Innovating
Online Journalism Practices.”
International Journal on Media
Management 15 (2013): 3–22.
Peterson, Richard A., and Roger
M. Kern.
“Changing Highbrow Taste: From
Snob to Omnivore.” American
Sociological Review 61
(1996): 900–7.
Pham, Alex.
“Google Plugs In SoundCloud
for Its 343 Million Users
(Exclusive).” Billboard,
August 12, 2013. http://www.billboard.com/biz/articles/news/digital-and-mobile/5645566/google-plugs-in-soundcloud-for-its-343-million-users
Prior, Markus.
“News vs. Entertainment: How
Increasing Media Choice
Widens Gaps in Political
Knowledge and Turnout.”
American Journal of Political
Science 49 (2005): 577–92.
Rogério, Paulo.
“Learning From Gilberto Gil’s
Efforts to Promote Digital
Culture for All.” Americas
Quarterly, November 7, 2011.
http://americasquarterly.org/
node/3077
Romenesko, Jim.
“Patch is Laying Off Hundreds
of Employees on Friday.” Jim
Romenesko.com (blog), August
8, 2013. http://jimromenesko.com/2013/08/08/patch-is-laying-off-hundreds-of-employees-on-friday/
Schuster, J. Mark.
“Participation Studies and
Cross-National Comparison:
Proliferation, Prudence and
Possibility.” Cultural Trends
16 (2007): 99–196.
Schwartz, Barry.
“Can There Ever Be Too Many
Flowers Blooming?” In Steven
J. Tepper and Bill Ivey, eds.
Engaging Art: The Next Great
Transformation of America’s
Cultural Life. New York:
Routledge, 2008. 239–56.
Shekova, Ekaterina.
“Changes in Russian Museum
Attendance: 1980–2008.”
Museum Management
and Curatorship 27 (2012):
149–59.
Smirke, Richard.
“IFPI Digital Music Report 2013:
Recorded Music Revenues
Climb for First Time Since
1999.” Billboard, February 26,
2013. http://www.billboard.com/biz/articles/news/digital-and-mobile/1549915/ifpi-digital-music-report-2013-global-recorded-music
Sunstein, Cass.
Republic.com. Princeton: Princeton
University Press, 2001.
Waldfogel, Joel.
“And the Bands Played On:
Digital Disintermediation and
the Quality of New Recorded
Music.” Social Science
Research Network (SSRN),
July 25, 2012a. http://ssrn.
com/abstract=2117372.
———.“Music Piracy and Its
Effects on Demand, Supply
and Welfare.” Innovation
Policy and the Economy 12
(2012b): 91–110.
Thomson, Kristin, and Kristen
Purcell.
“Arts Organizations and Digital
Technologies.” Report of Pew
Internet and American Life
Project, Washington, D.C.,
January 13, 2013.
Tepper, Steven J., and Eszter
Hargittai.
“Pathways to Music Exploration
in the Digital Age.” Poetics 37
(2009): 227–49.
Tschmuck, Peter.
Creativity and Innovation in the
Music Industry. Dordrecht:
Springer, 2006.
———.“The Economics of Music
File Sharing—A Literature
Overview.” Paper delivered
at Vienna Music Business
Research Days, University of
Music and Performing Arts,
Vienna, June 9–10, 2010.
Turow, Joseph.
The Daily You: How the New
Advertising Industry is
Defining Your Identity and
Your Worth. New Haven: Yale
University Press, 2011.
Van Deursen, Alexander, and Jan
Van Dijk.
“The Digital Divide Shifts to
Differences in Usage.”
New Media and Society,
June 7, 2013. http://nms.
sagepub.com/content/
early/2013/06/05
/1461444813487959.full.
pdf+html
Waterman, David, and Sung
Wook Ji.
“Online and Offline in the U.S.:
Are the Media Shrinking.”
Paper presented at the
39th TRPC Conference,
Washington, D.C., September
2011. Social Science
Research Network (SSRN),
September 24, 2011. http://
ssrn.com/abstract=1979916
Wellman, Barry, Anabel Quan-Haase, Jeffrey Boase, Wenhong
Chen, Keith Hampton, Isabel
Diaz de Isla, and Kakuko Miyata.
“The Social Affordances of the
Internet for Networked
Individualism.” Journal
of Computer Mediated
Communication 8, no. 3
(2003). http://onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2003.tb00216.x/full
Schumpeter, Joseph A.
Capitalism, Socialism, and
Democracy. New York:
Harper and Bros., 1942. Repr.
London: Routledge, 1994.
Van Ronk, Dave, and Elijah Wald.
The Mayor of MacDougal Street.
New York: Da Capo, 2006.
Schradie, Jen.
“The Digital Production Gap: The
Digital Divide and Web 2.0
Collide.” Poetics 39 (2011):
145–68.
Safner, Ryan.
“Steal This Film, Get Taken
Down? A New Dataset
on DMCA Takedown and
Movie Piracy.” RyanSafner.
com, May 2, 2013. http://
ryansafner.com/papers/
dmcatakedowns.pdf
Online Community and
Fandom
Nancy Baym
by:Larm Oslo 2008
I’m here to talk on behalf of the fans, and in particular the online fans.
The internet has transformed what it means to be a music fan. Fans can and do build
communities more rapidly and successfully now than ever before, with consequences
not just for their own experience of music, but for everyone involved in the creation,
distribution and promotion of music in any capacity. They’re making a new kind of
music scene that transcends place and shakes up long-standing balances of power
between fans and the music makers. Though it gets all the attention, downloading is
just one piece of this. I want to focus on the pieces that don’t get discussed as often.
My goal today is to provide a big picture perspective on how it is that the internet has
empowered fans in this way, what relational consequences this has, and to offer some
suggestions on how to foster relationships with fan communities from which everyone
can benefit.
1
analog
fandom
I want to start by going back about 25 years to the early 1980s and take a very quick
walk through pre-internet fan community. There was an internet in the early 1980s, but
most of us didn’t know it.
I was a college student in the United States. It was a time when what we now call
“alternative” or “indie” music was first emerging from the tiny bars of places like
Athens, Georgia. Like many of my friends at the time, I became entranced by R.E.M.
My friends and I spent hours listening to their records and talking about them. Their
tours were the social highlights of the year – we’d throw our bags into a van with two
seats and a mattress and take a movable party on the road to see their shows.
Along the way we met other REM fans in other towns. This broadened our knowledge
base considerably: we could compare set lists, we could trade bootleg cassette
recordings or leaked demos we’d made or traded our way into. Throughout the 1980s,
working my connections, I amassed around thirty live REM tapes. This was considered
an exceptional collection and I have to admit I was quite proud of both it and the social
connections it represented.
We were something akin to a community. We didn’t all know each other, but we
weren’t many degrees of separation apart. We shared values and we knew it; that was
half of what it meant to be an REM fan.
For their part, REM fostered this fandom well. They combined accessibility and
enigma so that fans could both identify with them and want to know more. The energy
they and their fans generated created an entire music scene, one which launched many
other bands.
2
That was my experience in the 1980s, but fan community had been thriving for a long
time before that. Deadheads had mastered the art of the distributed community,
building a lifestyle around the Grateful Dead and setting up models of roadtripping,
tape trading and social networking that thrive today.
Even before that, though, in the mid-1800s, American fans of Charles Dickens novels
were said to gather at the docks as the ships arrived from England bearing new issues
of magazines with new chapters of novels they were reading. It’s hard to imagine these
fans didn’t come with their friends or take advantage of the opportunity to get to know
one another.
These three earlier examples of fan community share qualities that the internet seems
to disrupt:
They were firmly place-based, in that they relied on people coming together in physical space to form connections.
They were also reliant on media to which the fans simply did not have access -- magazines, book publishers, radio stations, the recording industry.
3
Fans did produce their own media before there was an internet. Fanzines and their
equivalents go back at least to the early 1920s and probably earlier than that. They had
very limited distribution, however.
4
fans were early internet adopters
When the internet became public in the 1990s, and even before then, music fans
promptly recognized and took advantage of its potential to further their interest in
music. They created mailing lists and discussion groups in environments such as Usenet
newsgroups, which are pictured here.
That the internet should prove hospitable to fandom is not surprising given that one of
the first things those who were creating it did with it in 1972 (just 3 years after the first
successful login) was to create a vibrant community of science fiction fans on the
mailing list SF-Lovers.
5
parasol.com
urbana, illinois, usa
I want to personalize some of the ways the internet can superpower fandom with
another tale from my own life as a fan.
A few years ago, I logged onto a web record store based in Urbana, Illinois, in the
midwestern US. Parasol, run by a guy I grew up with, has carved out a bit of a niche
for itself with a specialization in Scandinavian independent music. As you can see on
the page here, they offer recommendations of “best Scandinavian releases” with full-song streams of sample songs.
6
I clicked on a stream of the song “Vocal” by Norway’s Madrugada.
Hello, new favorite band.
7
Consider my situation. I was in Kansas, the geographical center of North America.
No one I knew in my town had ever heard of Madrugada – those who have heard of them today did so through me. I couldn’t buy any of their other records. And I was
hungry to know more – what other records did they have? Were there unreleased
songs? What were they like live?
8
madrugada fan site
berlin
I found a Madrugada fan site put together by a Norwegian living in Berlin and run
continuously since 1999.
The fans involved in the site had built an incredibly detailed repository of information
about Madrugada -- a complete concert chronology, a discography, photographs,
videos, a complete list of all songs they were known to have ever performed, lyrics to
all of their songs and, not least, a discussion forum.
9
The forum is not huge. All its discussion is in English. The site’s regulars include
people from Norway, Germany, France, Switzerland, England, Greece and the US
among other countries.
Through the board I met a man in France who’d collected many recordings – he sent
me a CD-ROM with almost as many Madrugada concerts as I’d spent nearly ten years
accumulating with REM. I never got to see them live, but I didn’t have to miss it
entirely.
10
fandom is social interaction
share feeling
build social identity
pool collective intelligence
interpret collectively
Through that fan board, I found information. I found music. I found people willing to
discuss the minutiae of something fascinating to me but boring to most. I found the
resources that made it possible for me to be an engaged fan.
Now when I talk about “fans” I am not talking about everyone in the audience. There
are a lot of ways to casually or deeply appreciate music without being a “fan.” And
you can be a fan without being engaged in fandom. But music is a social experience,
and dedicated fans are often driven to connect with other fans. From its very origins
thousands of years ago, music has been social. Its original and arguably core nature is
to connect people. In connecting around music, fans today are continuing to foster the
connection between music and sociability by talking about and sharing music.
11
fandom is social interaction
share feeling
build social identity
pool collective intelligence
interpret collectively
Fans do 4 core things when they talk about music online or off:
Share feeling : As most of you know well, loving music can be an emotionally powerful experience.
Having access to other people who share those feelings validates our experience and provides means
to foster and perpetuate those feelings. The feelings shared in fandom are not always good. I’ve
seen fan communities angered or disappointed by bands or their recordings. I’ve seen them grieve
together when musicians they loved died. I’ve seen them support one another through life’s changes
in ways that had nothing to do with the music.
Shared identity : Fans often build collective identities around music. When I worked in a record
store, I could often guess what genre people would look at based on the way they were dressed or
how they wore their hair. We develop shared systems of codes to mark ourselves as fans online and
off. Fans don’t all share a single identity, though, and there can be divisions within fan communities
between, for example, fans of a hit single vs. fans of early obscure recordings.
Collective intelligence : Fans are generally interested in knowing more. They’re the ones who buy
magazines to read interviews with the person on the cover. As my REM and Madrugada stories
illustrate, when they’re together, they can create a pool of far more information than they can alone.
Collective interpretation : Fandom is also about pooling the resources of many to pick apart and
understand. Whether it’s figuring out what lyrics might be referencing, drawing attention to
particular parts of songs, or debating whether or not Madrugada sold out by using an explosion of
gold glitter as part of their live show, fans engage in making sense of things together.
12
“Downloading of music, movies, games and programs
is only one side of the story as well. On the other hand
there is communities, blogs, websites with loads of
information, free information of high and low (THE
lowest) quality everywhere, all the time and it's
increasing by the minute. It goes hand in hand with
the downloading of music, movies, programs and
games. It's stressful, highpaced, superficial and at
times very rewarding. It's a world of culture under
ongoing change at a level so basic that it probably will
have replaced the old system completely in a couple of
years. 4 years, counting from last Thursday, is our
guess.”
- Hybris Records blog
The internet enhances all of these things I have been discussing and brings the bands,
labels and others into this in new ways.
I want to suggest that at a time when the music industry is reeling from changes it
barely understands, the sorts of activities fans are doing online have the potential to
create the culture in which you will all be operating in the future.
13
“At a time when so much of the structure that
holds together music culture has disappeared,
fans could take the initiative to create a new
one.”
- Eric Harvey, Pitchfork
There are six qualities of the internet that enable fan empowerment,
and I want to talk through them with some examples, then wrap up
by covering the implications this has for the relationships between
fans, artists and labels. The six qualities are:
The internet extends fans’ reach
It enables them to transcend distance
It provides group infrastructures
It supports archiving
It enables new forms of engagement
It lessens social distance
14
r.e.m. fan site
My favorite example of how the internet has increased fans’ reach is murmurs.com.
This is an REM fan site created in 1996 by Ethan Kaplan, who was then 16 years old.
You see here what it looked like every two years. It quickly became the most popular
spot for REM fandom on the internet and remains so today.
According to Kaplan, Murmurs has over 24,000 members, 2 to 3 thousand active participants, and 2 to 5 thousand people coming to the site daily, where they read news, participate on the discussion board, and use the torrent tracker.
Through his website, Ethan was able to reach tens, probably hundreds of thousands of
REM fans and provided them with a means to reach one another.
It also provided him with the means to reach the band and, eventually, the record label.
In addition to running the fan site, he is now the Chief Technology Officer for their
record label, Warner Brothers.
15
The internet also lets fans connect instantaneously across distance. This means they can
build relationships across geographic boundaries and become centers of scenes
regardless of their location.
I want to illustrate the impact of transcending distance by showing you the
Scandinavian webzine “It’s A Trap,” run by Avi Roig and a motley crew of volunteer
reviewers, including me. It’s a Trap gets several thousand hits a day from all over the
world, and many from Scandinavia.
If you register you can use the message board and comment on items so there’s some
fan-interaction, though not much.
Roig describes himself as “the leading news provider -- the go-to site for many, many industry people and am often one of the first places people will send news releases since I have a quick turnaround and a wide reach.”
16
Avi runs It’s A Trap from Olympia, Washington.
It’s hard to get much further from Scandinavia. Online it just doesn’t matter.
17
“What I think is really fascinating about this bands and
fans and the internet is that there are bands who are
not very massive anywhere in the world but who
have these tribes who can be traced through sites like
Last.fm and MySpace all over the world.”
- Nick Levine, Tack! Tack! Tack!
“The internet helps so much, especially myspace.
People are listening to The Fine Arts Showcase in
India and Thailand and Indonesia. I wouldn’t be
doing this interview if it weren’t for the internet.
Nobody in Indonesia would listen to The Fine Arts
Showcase.”
- The Fine Arts Showcase
The ability to transcend distance also means that bands can use the internet to build
distributed fan bases in locations they never could before.
18
hybris worldwide orders
Every pin on this map is a city from which an order has been placed from Hybris
Records’ website in Stockholm, Sweden. They are offering half off to the first person
to order from Africa, Antarctica or Greenland.
19
Sounds of Sweden
(Glasgow)
Tack! Tack! Tack!
(London)
Hej! Hej!
(Washington
DC)
Fikasound
(Madrid)
It’s A Trap
(Malmö)
But even as the internet makes place less relevant, it increases the means for shared
experiences of place. The pages you see here represent fan-sponsored music clubs in
different cities that book only Scandinavian bands. These online fans create ways for
bands to play for audiences in new places, and can create local scenes around
Scandinavian music far outside Scandinavia.
Johan Angergård from Labrador Records and the bands Club 8, Acid House Kings and
the Legends says, “I actually can't understand how [international booking] worked
before Internet. People who contact us and want to arrange gigs are usually fans. Quite
often fans doing gigs professionally, but still fans.”
20
“I think I've done a lot to promote Swedish
music in Scotland, and have converted
many people into Swedophiles :) It's also
great to be able to help Swedish musicians
reach a new audience. Glasgow has now
become a standard port of call for Swedish
artists touring the UK. I've always had a
great passion for music... but I can't play an
instrument or sing, so this is what I do - I
help make sure those with talent are heard.”
- Stacey Shackford, Sounds of Sweden
I like this quote from the woman who runs Glasgow’s Swedish music club because it
shows how fans often view their labor as a means of participating in a community.
21
Last.fm “sweden” groups
The internet also provides infrastructures to support group interaction and stability.
But it provides so many of them that things get very chaotic and redundant very
quickly. Fan communities are spread out through a huge range of online spaces loosely
connected through their patterns of behavior.
For instance, try looking for groups on Last.fm that might be about Swedish music by
searching “sweden.”
22
Last.fm “swedish” groups
Or “swedish”
23
It’s A Trap Last.fm group
Even IAT, which has a clear hub in the online scene, can be found in group form on
Last.fm.
24
It’s a Trap members on Last.fm
... where it actually has more self-identified group members than it does on IAT itself.
25
Last.fm data on It’s A Trap profile
IAT uses last.fm to its advantage, importing information from it into its user profiles.
26
It’s A Trap at myspace
IAT can also be found on MySpace, where being its friend is another way to affiliate
with the community.
27
It’s A Trap on virb
And it’s on Virb.
28
madrugada concert chronology
One of the main things fans do when they get together is amass intelligence. The
internet provides the infrastructure to support archives of all that information they
collect. As a result, fans can build stable, dense, exhaustive and searchable archives
more complete than anything a band or label might ever create. Producers of the show
Futurama have talked about checking out the fan boards to make sure they are
consistent with their own timelines -- the fans have done the work of building detailed
timelines.
Consider, for example, the Madrugada fan-created concert chronology, which covers
not just every concert, but every set list, notes about the performance, and information
about whether any recordings were made and, if so, whether they were ever broadcast
or circulated.
29
fan wiki entries at
Last.fm and
WikiMusicGuide
Fans also write wiki entries about bands on many sites throughout the net.
30
Hello! Surprise!
One of my favorite fan archives is this one, by Johannes Schill in Sweden, who has
collected a list of over 500 Swedish pop bands, more than 40 labels, and for each has
created a page with information and a link to their website and any free downloads or
other media that the artist has made available.
31
“I’m just an enthusiast, I wouldn’t say
I’m involved [in the Swedish music scene]
at all. The ones who are doing the work
are the artists, they should have the
money.”
- Johannes S., Hello! Surprise!
“Maybe what they see is that when
someone else does the work, they do
not have to bother with it for the
official page, hehe.”
- Reidar Eik, MadrugadaMusic.com
When you ask him how he justifies doing so much work for free, he rejects the idea
that what he’s doing is work.
This raises the really important relational issue of how to encourage fans to put in labor
on your behalf without exploiting them, especially given that for the most part, they do
not want monetary payment and, as I’ll return to later, generally prefer to maintain
independence.
If you do it right -- as has been the case with Madrugada and the person who runs this
fansite -- everybody wins. If you do it wrong, everybody loses.
32
swedesplease
chicago
absolut noise
paris
The internet also enables new forms of engagement. Digital information is easy to
replicate and manipulate, and that’s given rise to new ways that fans are creatively
engaging music. We now see things like fan-created remixes, mashups and videos.
We also see the rise of the mp3 blog, which has become increasingly important in the
last few years. Here are two blogs that specialize in Swedish music. They are written in
English and French and actively seek to export Swedish music to international
audiences.
Together with sites like It’s A Trap, these blogs are creating a whole international scene around indie Swedish and, to a lesser extent, other Scandinavian music.
33
mp3 blog aggregators
Mp3 blog aggregators such as Hype Machine and Elbows aggregate these bloggers
into a collective voice -- a moment-by-moment stream of buzz.
34
portable playlists
Fans can also now create playlists on places like YouTube or Last.fm, which they can
then embed in other websites, building a social identity that incorporates music while
promoting the music they like.
35
myspace
Finally, the internet changes fandom by lessening the distance between fans and artists,
raising a host of issues about how to interact with fans yet still maintain creative
distance, privacy, and, when wanted, some mystique.
MySpace offers access but gives musicians control, though some opt out or subvert it as a
statement. I think too many bands are too reliant on MySpace. It’s important to have a
presence there, but it is not enough.
There are also issues of ownership and rights over your online presence -- you don’t
own your MySpace page, Fox Interactive does. Everyone should have an online
presence they own.
Some artists, like Jens Lekman, have left or never been on MySpace. The image on the
right is what you see if you click on the “MySpace” link on Lekman’s homepage. (The text says: Fill In The Blanks.) He is active on his own site, though, and has done a great
job building relationships with his fans that way.
36
“Our record company handles the
promotion side of things …but
we have tried to have a strong
presence on myspace.”
- The Shout Out Louds
“Since I got into myspace
interaction between me and
people who like the music has
increased by hundreds and
hundreds of percent.”
- Starlet
Handling friend requests on MySpace can be a timesuck, as can weeding spam out of
comments, but for many musicians, the direct interaction with fans has been a
powerfully rewarding experience.
37
As in the Lekman example, bands are also providing direct access through their
websites. Here, for instance, is the “ask the cardigans” section of the Cardigans
website where their bass player Magnus Sveningsson loyally responds to fan inquiries.
Others maintain band or personal blogs or find other ways to foster interaction with
their fans online.
38
All of this means that the fans are more powerful. This isn’t just true in music. For
instance, here are five recent examples of online fans having real influence.
The movie Snakes on a Plane was preceded by a fan blog “Snakes on a Blog” where
fan discussion came to shape the film’s title and script.
Jericho was a TV show that had been cancelled until fans organized online and launched a campaign in which they sent over 40 tons of peanuts to CBS headquarters in NYC, after which the network relented and agreed to a second season.
Fandom Rocks was a group of fans of the TV show Supernatural who, inspired by Joss
Whedon fan groups, decided to raise money for charity and gave over $2000 to a
homeless shelter in my town.
Fans of the band Two Gallants were present when they were roughed up by police at a
show in Texas and posted video of the event to YouTube, ensuring that it gained a
wide audience.
39
Finally, the most impressive example is the football fans who organized to buy a UK football club.
The relationship between fans and the people and things around whom they organize
can be synergistic, but it can also be deeply problematic.
Both Prince and Usher, for instance, have taken legal steps to claim the domain names
of fan sites because they are not happy with the fan activities there. Prince says the fans
are violating intellectual property by posting images (including one of a tattoo bearing
his likeness). Usher did not like the way the fans reacted to his then-fiancee.
Little Rubber Shoes was organized around Crocs shoes and had the blessing of the
company until they realized that the site was running ads for their competitors. They
sued and the site no longer fawns as much over Crocs.
Trent Reznor of NIN has been at the cutting edge of pushing internet fandom, but even
he ran into trouble when his idea to encourage fans to create their own remixes and
upload them to a NIN site was nixed by the legal department, who were suing fans of
other bands for doing just that without blessings from above.
40
“It used to be that fans and the label were very
distinct entities that were separated by access
to means of media representation. That no
longer applies, as the means of
communication for both fans and the
artists/label is digital data. Because of that,
labels have had to adapt on how we deal with
fans. In the end, we’re both on the same side:
the side of the artist. The label promotes,
distributes and develops artists while the fans
support them from underneath.”
- Ethan Kaplan, Murmurs.com/WBR
The flip side of fans’ increased power is a loss of control amongst those who’ve been
able to control music production, distribution and coverage.
It’s natural to respond to this with fear as the major labels, RIAA, and many artists and
their managers have done. The threats are real.
But getting control back is not an option. That’s just not going to happen. So the
question then is how you can build relationships with these fandoms that are mutually
supportive.
Fans do this best when bands and labels do their part to make that work.
Here’s a hint -- building good relationships with fan communities does not involve
suing them.
41
“The barrier is down, or a lot of
it, thanks to MySpace, Last.fm
and other sites. The hierarchy is
flattened, me and my “fans,”
and the same with artist I like
and adore, are in a way on the
same level.”
- Starlet
Fans need to be seen as collaborators and equals
42
“It’s breaking down the barriers of the
inaccessibility of the artist, which is
good. It makes people realize it’s
something they can do themselves. It’s
important to remember that people
who play music are just people. The
internet helps that.”
- The Fine Arts Showcase
This humanizes the fans. And it should humanize everyone involved.
43
“We email quite a bit with the “fans” (I'm
having a hard time using the word
“fans”)… The relationship for me is the
fact/hope that we gather like-minded
people that share a common love.”
- Club 8/Labrador
The labels and musicians who are taking full advantage of the internet to foster their
fandoms and to relate to their fans resist using the term “fan,” focusing instead on the
sense of community.
Fans, labels and bands are together building a new kind of music scene, one in which
they’ve all got important parts to play.
44
a shared problem:
the internet is overwhelming
Fans, bands and labels are bound together by shared love of the music, and at least
potentially by a sense of shared community.
But we’re also bound together by a common problem. Everyone finds the internet
overwhelming.
45
“ Bands should have their online page be a portal
to all their online web 2.0 activities with links to
their Last.fm, MySpace, YouTube. That’s the
wave of the future.”
- Nick Levine, Tack! Tack! Tack!
“ If someone reads about an artist on Labrador in
a physical paper and wants to listen to the music
it should be very easy to find it. If they find their
way to Labrador.se they can download mp3s
from all bands. If they're on Last.fm they can
hear every album in full there. Etc.”
- Labrador Records
For bands and labels the problem is the need to be represented everywhere -- even niche, long-tail audiences are distributed all over the place online.
Who’s got time?
Building a good online identity requires a different skill set from making music, and it may not be a band’s creative strong suit. Too many bands let their friends handle their web presence; when those friends flake, the bands don’t want to hurt their feelings, so they settle for a poor presence.
46
“All I want is to get the music through to
people.”
- Adrian Recordings
“ We have stopped thinking about selves as
labels, we’re more like music companies. We
make music. We don’t think about selling
music, we just want to have attention.”
- Hybris Records
From the fans’ point of view, there are so many things vying for our attention that we
need filters that can guide us to the music we’re most likely to like. There are bands for
any fan. But it takes a lot of diligence to find them.
Hybris Records talk about having 2.4 terabytes of music on hard drives in their offices, describing music as “an endless stream.” There isn’t enough time in life to listen to all that music.
Avi Roig from IAT has 2000 bookmarked sites he checks daily through an automated
process -- on top of blog subscriptions and direct emails.
No one can keep up completely.
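I don’t know the details of Roig’s setup, but automation like that usually comes down to polling feeds instead of visiting pages by hand. Purely as a hedged sketch -- assuming the bookmarked sites publish RSS or Atom feeds, and using the third-party Python library feedparser -- it might look something like this; the feed URLs are placeholders, not anyone’s real list:

    import feedparser  # third-party package: pip install feedparser

    # Placeholder feed URLs -- stand-ins, not anyone's actual bookmarks.
    FEEDS = [
        "https://example-music-blog.invalid/feed",
        "https://another-music-site.invalid/rss",
    ]

    def new_posts(feed_urls, seen_ids):
        """Yield (feed title, entry title, link) for entries not seen before."""
        for url in feed_urls:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                entry_id = entry.get("id") or entry.get("link")
                if entry_id and entry_id not in seen_ids:
                    seen_ids.add(entry_id)
                    yield (parsed.feed.get("title", url),
                           entry.get("title", "(untitled)"),
                           entry.get("link", ""))

    seen = set()
    for feed, title, link in new_posts(FEEDS, seen):
        print(f"{feed}: {title} -> {link}")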
47
“we need certain tastemakers, or editors,
between sender and receiver. This is
where (the good) blogs and online
mags/forums come in handy. There is
simply too much out there to take in so we
need to help each other. Something that I
think will create a better world in maybe
ten, twenty years time, a better climate to
create and activate thousands of creative
minds that never would have a voice if it
wasn’t for the internet.”
- The Bell
This quote comes from an interview on the blog Muzzle of Bees with the Swedish band The Bell, who are getting a lot of international buzz right now. I like it because it points out the need for fan filters and repeats an idea I hear often when I talk to indie bands and labels: that the internet is creating a new kind of music culture.
It’s a powerful counter-story to the “pirating is killing music” narrative that dominates
the discourse about online music.
48
“The label isn’t enough of a filter
anymore. It’s great for us. If a big
mp3 blog puts up a track by one of
our artists it gives it credibility. It
makes it easier for people to like it
and accept the music.”
- Hybris Records
The traditional media still filter -- even the indie labels still target the major magazines,
newspapers and radio. But now fan communities filter too.
Bands’ need to be represented everywhere and fans’ drive to visibly identify
with and talk about music intersect and work together. Bands can’t be everywhere, but
the fans already are.
49
free music
information
blog posts
attention
If you want fans to talk about you, you need to give them something of social value.
Give them things that stimulate the activities they want to do: give them things to build
identity with, to offer up for collective interpretation, to pool into collective
intelligence.
50
“Music 2.0”
is (largely) fan filter infrastructure
There’s also a huge set of new third party players, creating online and mobile music
services that in many ways utilize fans as filters. The next few slides demonstrate some
examples.
This collage of Music 2.0 labels was put together by Jadam Kahn.
51
recommendation systems
Amazon and MyStrands
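The common thread in these services is that aggregated fan behavior does the filtering: people who share your taste point you toward music you don’t know yet. As a toy illustration only -- not how Amazon or MyStrands actually compute recommendations -- here is a minimal suggestion sketch in Python; the listeners and their libraries are invented:

    from collections import Counter

    # listener -> set of bands they play (invented data for illustration)
    libraries = {
        "anna":  {"The Bell", "Club 8", "Shout Out Louds"},
        "bjorn": {"Club 8", "Shout Out Louds", "The Fine Arts Showcase"},
        "cara":  {"The Bell", "The Fine Arts Showcase", "Two Gallants"},
    }

    def recommend(listener, libraries, top_n=3):
        mine = libraries[listener]
        scores = Counter()
        for other, theirs in libraries.items():
            if other == listener:
                continue
            overlap = len(mine & theirs)  # shared bands = crude similarity
            for band in theirs - mine:    # bands this listener doesn't know yet
                scores[band] += overlap   # weight suggestions by that similarity
        return [band for band, _ in scores.most_common(top_n)]

    print(recommend("anna", libraries))  # e.g. ['The Fine Arts Showcase', 'Two Gallants']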
52
personalized radio streams
53
music-based social
network sites
Last.fm, iLike and MOG
54
library sharing
Anywhere.FM and Qloud
55
social network site applications
Facebook applications offer another way to draw on fan activity. MySpace
applications are under development.
56
widgets
This is probably the best band widget out there. It’s from ReverbNation. It can be
embedded in any webpage and functions as a mini website allowing video and song
streaming, music purchasing, information links, direct signup for band mailing lists,
recommendations of other bands, and more.
57
“Fans can’t be managed like employees
because they’re volunteers and treasure
their independence. It’s more like the
organic skills of gardening or farming,
sensing the way the wind is blowing and
adapting tactics to suit.”
- David Jennings, Author
Net, Blogs & Rock ‘n’ Roll
It’s important to recognize and respect the fact that fan communities need independence from bands and labels.
Like the artists who sue their fan boards, you may be tempted to try to control what fans say about you online. It can’t be done. And you don’t want to do it, even when they say things you don’t like.
58
“We’ve
been happy to remain the
unofficial fan site because then
we have exclusive control over
what goes on the website, without
publicists and lawyers getting
involved.”
- Brenna O’Brien, Friday the 13th Fan Site
Brenna O’Brien runs a very successful fan community for fans of
the Friday the Thirteenth movie series. At one point they almost
became the official site. Her point about lawyers and publicists is
important, and gets back to the issue of control.
59
“The official page has some of the necessary
information a new fan would need to get into the
band. […] it works as an introduction to the band.
They also have a link directly to the discussion forum
of my fan page, which is a very nice touch because it
enables the fans of the band to get in direct contact
with each other just one click away from the band’s
official page. So while lacking in content, the official
page makes up for it by using the resources the fans
pool together.”
- Reidar Eik, MadrugadaMusic.com
Online fandom should be left to complement the official presence rather than be absorbed by it or compete with it.
60
“music management, who have looked
after Rob for over a decade have been
great. They assist us in the kind of content
we post on the site to keep our download
section legal and pass on things the
community wish to send to Rob such as
messages of support, fan feedback and
birthday cards/gifts. They’ve been very
supportive, sending us congratulations on
our first year and advising us on how to
handle any media inquiries”
- Shell, PureRobbie.com
If you want fans to respect and pay you for what they can easily download for free,
you have to treat them with respect and trust. This is at the heart of the “organic skills”
David Jennings alludes to. When bands foster respectful and trusting relationships with
their fan bases, the fans will rally for them because they will feel not just a legally-bound economic relationship to them but a morally-bound social relationship as well.
We saw this with Wilco. They let their most recent record stream on the internet for
months before its release, despite dire warnings that this would ruin their sales. It was
widely circulated and mp3 blogged. Shortly before its release, they sent out an email to
their fan mailing list pointing out the many ways they had demonstrated trust in their
fans -- their encouragement of the taping and distributing of their concerts for instance
-- and asked that they hold up their end by going to the store and buying the record the
day it was released. The fans did, and the record charted higher than their previous
records had.
The success of the Radiohead CD sales also demonstrates the extent to which showing
fans that you trust them to do the right thing can be rewarded.
61
“Labels and managers should focus on
the ‘whole fan’ and concentrate on their
lifetime value as committed advocates,
which may mean indulging the odd
misdemeanour in return for having
someone who will evangelise and recruit
more fans on your behalf for years to
come.”
- David Jennings, Author
Net Blogs & Rock ‘n’ Roll
I’ll leave you with a couple of thoughts about how artists and labels should think about
online fans.
When Jennings talks about “the odd misdemeanour,” he’s talking about intellectual property violations, negative public criticism, and the other sorts of things that lawyers tend to go after fans for.
62
“Trust the fans to bring what
they do to the table, and provide
them with tools, media and
good information to develop
their fandom in positive ways.”
- Ethan Kaplan, Murmurs.com/WBR
I will give the last word to Ethan Kaplan who, as the founder of an extremely
successful fan community and major label tech guy, is in a particularly good position
to offer insight.
63
contact me:
[email protected]
[email protected]
read my blog:
www.onlinefandom.com
This work is licensed under the Creative Commons Attribution-Noncommercial-No
Derivative Works 3.0 United States License. To view a copy of this license, visit
http://creativecommons.org/licenses/by-nc-nd/3.0/us/ or send a letter to Creative
Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
That means you can reuse it as you like so long as you don’t make any money off it and don’t take it apart and make it into other things. If you want to make money from it or create derivative works, please email me.
Unless indicated otherwise, all quotes are from interviews conducted by Nancy Baym
except that Robert Burnett interviewed the Shout Out Louds.
64