The Language of Cyberspace:
Transcription

Content

Acknowledgement
Content
Introduction
I Cyberspace
I.1 INTRODUCTION
I.2 DESCRIPTION OF A NEW WORLD
I.3 ORIGINS
I.3.1 Invention
I.3.2 Conception
I.4 SIGNIFICANCE
I.5 VIRTUAL REALITY
I.6 MEANING
I.6.1 Internet Cyberspace
I.6.2 Misunderstandings of Internet Cyberspace
I.6.3 Benediktine Cyberspace
I.6.4 Threads
I.6.5 Conclusion
I.7 CYBERSPACE CITY
I.8 CONCLUSION
II Internet Cyberspace
II.1 INTRODUCTION
II.2 HISTORY OF THE INTERNET
II.2.1 The Mother of All Networks
II.2.2 The Web
II.2.3 Facts and Numbers
II.2.4 Bandwidth
II.2.5 Network Protocol
II.3 HYPERTEXT
II.3.1 Navigating through Cyberspace
II.3.2 Consequences
II.4 VIRTUAL COMMUNITIES
II.4.1 Cybercity
II.4.2 Places in Cyber-‘Space’
II.4.3 MUD
II.4.4 Origin
II.4.5 Habitat
II.5 CYBERSPACE URBANISED?
II.5.1 Virtual Cities
II.5.2 Digitale Stad Amsterdam
II.6 CONSEQUENCES
II.7 CONCLUSION
III Cyberspace Architecture
III.1 INTRODUCTION
III.2 VIRTUAL ARCHITECTURE
III.2.1 Introduction
III.2.2 Urban Design
III.2.3 Cyberspace Architects
III.2.4 Critical Approach
III.2.5 The Representation of Space
III.3 DIGITISED ARCHITECTURE
III.3.1 Data Field Architecture
III.3.2 Applied Software
III.3.3 Time-Space Relationship
III.3.4 Virtual House
III.4 LIQUID ARCHITECTURE
III.4.1 Introduction
III.4.2 Virtual Poetics
III.4.3 Transmitting Architecture
III.4.4 Conclusion
III.5 EVOLUTIONARY ARCHITECTURE
III.5.1 Nature
III.5.2 History
III.5.3 Generative Systems
III.5.4 The Tools
III.5.5 The Evolutionary Model
III.5.6 Conclusion
III.6 CONCLUSION
IV Information Architecture
IV.1 INTRODUCTION
IV.2 THE INFORMATION REVOLUTION
IV.2.1 Cyberspace as an Information Tool
IV.2.2 The Value of Online Information
IV.2.3 Search Engines
IV.3 3D INFORMATION VISUALISATION
IV.3.1 Information Quantity
IV.3.2 Visualisation Techniques
IV.3.3 Overview
IV.3.4 Spatial Arrangement of Data
IV.3.5 Examples
IV.4 VR/SEARCH
IV.4.1 CGI
IV.4.2 PERL
IV.4.3 VRML
IV.5 MAPPING INFORMATION IN CYBERSPACE
IV.5.1 Dimensionality
IV.5.2 Continuity
IV.5.3 Limits
IV.5.4 Density
IV.5.5 The Remaining Principles
IV.5.6 Conclusion
IV.6 CONCLUSION
Conclusion
References
Introduction
What is cyberspace? What is, in fact, the meaning of this space? And if cyberspace can really be understood as space, what is the resulting role of architecture in this still largely unknown realm? Is all reality then necessarily becoming virtual reality? Who are the architects of cyberspace, and which design principles should they follow? And if there really are architects involved, why are contemporary examples of virtual reality environments still characterised as banal? Moreover, what does it actually mean to design cyberspace? Which urban metaphors are implemented in the virtual realm, so that somehow familiar notions become apparent in this abstract and technological world? Is cyberspace a novel departure or an extension – perhaps the final extension – of the trajectory of abstraction and dematerialisation that has characterised so much modern art, architecture and human experience? Or, to put it briefly in the summarising words of Ole Bouman: “Can architecture go digit-all?”1
The impressive influence that information and digitalisation, two phenomena that are undeniably revolutionising society and culture at every level, had and still have on the notion of architecture itself is more than fascinating. Certainly, this inspiration has already conquered much of the development of western knowledge and has apparently drawn many multidisciplinary authors further away from their traditional fields of research. Consequently, in this very attempt to answer the questions mentioned above as correctly as possible, it is in fact their work that will be used intensively.
First, the significance of the term cyberspace itself is thoroughly analysed. The context of its literary origin is explained, as well as some of the many social consequences it has caused and the various important meanings it has gathered over its history.
In the second chapter, the concrete visualisation of virtual environments is compared to the promising possibilities architectural form could possess in a purely digital realm. Furthermore, the commonly recognised characteristics of the city, which are used intensively in these socially inspired applications, are pursued and analysed in greater depth. This investigation is based on some two-dimensional community worlds now existing in the so-called Internet cyberspace, although it should be noted that this field is naturally developing at an unbelievably rapid pace, and consequently three-dimensional and immersive environments will certainly emerge soon.
In the third chapter, some of the possibilities the process of digitalisation has brought to the field of architectural form generation are clarified by describing specific personal investigations of visionary architects and researchers. They are in fact convinced of the generative power contemporary computers are now able to produce, and base their entire architectural discourse on the dynamic perception of abstract information, which is mapped onto the construction and surfaces of their digitally generated forms.
Finally, in the last chapter, the more specific field of information visualisation is illustrated with several examples of virtual spaces that are loaded with various architectural concepts. Furthermore, a VRML application called VR/search, which has been programmed as part of this work, is explained in terms of the cyberspace design principles proposed by the author Michael Benedikt.
1 BOUMAN, OLE, RealSpace in QuickTimes: Architecture and Digitization, Rosbeek, Nuth, 1996, p.23
“The realm of pure information, filling like a lake, siphoning the jangle of messages transfiguring the physical world, decontaminating the natural and urban landscape, redeeming them, saving them from the chain-dragging bulldozers of the paper industry, from the diesel smoke of courier and post office trucks, from jet fuel fumes and clogged airports, from billboards, trashy and pretentious architecture, hour-long freeway commutes, ticket lines, and choked subways… from all the inefficiencies, pollutions (chemical and informational), and corruptions attendant to the process of moving information attached to things – from paper to brains – across, over, and under the vast and bumpy surface of the earth rather than letting it fly free in the soft hail of electrons that is cyberspace.”
(BENEDIKT, MICHAEL - Cyberspace: First Steps - p.3)
I Cyberspace
“We are witnesses to an extraordinary era that will no doubt be remembered in history as an appropriately revolutionary development to accompany a new millennium. I hope, by the time you finish this book, that that last sentence will be regarded as mild understatement rather than wild, wide-eyed hyperbole.”
(WHITTLE, DAVID - Cyberspace: The Human Dimension - p.4)
I.1 Introduction

The Content

In this chapter, the phenomenon of cyberspace will be thoroughly investigated. First, its roots will be explained through a description of the movement called cyberpunk and its most important figure, William Gibson. As this word has received many meanings from several fields of human research over the years of its existence, both the actual significance it had (and has) and the two main theoretical streams of thought about the content of the term will be examined. Finally, the context of its origin is clarified by an analysis of the urban environments literally described in Gibson’s books.
How it all began…
“Cyberspace, a consensual hallucination, experienced daily by billions of legitimate operators, in
every nation, by children being taught mathematical concepts… A graphic representation of data
abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of
light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights
receding...” 2
I.2 Description of a New World

Welcome to a new world, seen through the eyes of its inventor: William Gibson. Apparently describing the opening sequence of a film like Blade Runner3, this literal description has had more impact than any other paragraph in the famous science fiction novel Neuromancer, a book whose significance is sometimes compared to futuristic legends such as 1984 or Brave New World. What at first only seems to represent a dizzying trip in some space vessel above a metropolitan city of a dark and uncertain future will hopefully receive another interpretation throughout this chapter.
Figure I-1 Two stills taken out of the film Blade Runner (1982)
(http://us.imdb.com/Title?Blade+Runner+(1982))
A silent and slowly changing panorama by night, containing uncountable light patterns of unknown source, disturbed and enlightened only by sudden, loud pulses of huge vertical flames coming from…nowhere. Chaotic, endless clusters of moving, meaningless objects shifting in an impressive view, almost showing the vast amount of yet undiscovered human knowledge. This dramatic visualisation from the beginning of Blade Runner can be considered a simplified personal expression of the visionary thoughts about cyberspace. Michael Heim described in his essay how the fictional characters of Neuromancer experience the ‘Matrix’, cyberspace, as a place of rapture, erotic intensity and powerful desire, a phenomenon in which objects attain a supervivid hyper-reality4. In this view, ordinary experience seems dull and unreal by comparison. Gibson in turn tries to clarify his first and rather abstract description to the reader in his later books. This ‘abstract representation’ seems capable of taking any form, ranging from pure geometric, colour-coded, copyrighted shapes or architectural representations signifying corporate ownership to photo-realistic illusions.

2 GIBSON, WILLIAM, Neuromancer, HarperCollins Publishers, London, 1995, p.67
3 SCOTT, RIDLEY, Blade Runner, USA, 1982
In short, Gibsonian cyberspace can be seen as a rather spectacular representation of a global information economy, international and essentially computer-based. This immersive environment is articulated as a metropolis of bright data constructs, able to stimulate all the organic senses of any human spectator through a consensual hallucination. ‘Consensual’ refers to the result of well-known, commonly shared protocols and agents5 for encoding and exchanging information. Cyberspace can in turn be considered a ‘hallucination’ when simulation software is able to create a three-dimensional environment out of the information itself.

With this early and evocative definition in mind, we first investigate the origins and importance of the word, after which we can step deeper into the very meaning it has obtained in more than ten years of research and development in many, not only academic, communities.
I.3 Origins

I.3.1 Invention

Although the science fiction writer William Gibson is credited with introducing this word in one of his first science fiction stories and later in his book Neuromancer (1984), Gibson himself credits John Brunner6, author of The Shockwave Rider (1975), with inventing the concept. Brunner in turn traces the original idea to the futurist Alvin Toffler and his book Future Shock (1970). In Toffler’s visionary work several pages are devoted to a section entitled “The Cyborgs Among Us”, in which he describes the possibilities of human-machine integration and even of human brains functioning independently of their bodies. This latter concept also returned, slightly adapted, in many of Gibson’s novels. In an ironic twist of fate, George Orwell’s vision of an invasive cyberspace presence called ‘Big Brother’ in the book 1984 takes place in the same year as the title, which is also the year Neuromancer was first published.
Cyberspace, now often used to describe an infinite electronic world filled with promise and interaction, thus originally came out of the dark visions of a then rather unknown science fiction genius. This inventor, William Gibson, is the author of Neuromancer (1984), Count Zero (1986), Mona Lisa Overdrive (1988), Burning Chrome, The Difference Engine (with Bruce Sterling) and Virtual Light. His first book Neuromancer won all three major and most prestigious American science fiction prizes: the Hugo, the Nebula, and the Philip K. Dick award. Although born in the United States, Gibson has lived and worked in Vancouver, Canada since 1972.
He is recognised as the leading writer of a new kind of science fiction called
‘cyberpunk’, extrapolating contemporary technology into a future of urban decay and its
consequences on the lives of underclass characters. Some other writers followed this
movement, of which Rudy Rucker, Bruce Sterling, and John Shirley are only a few. In Gibson’s novels, everyone, even punks and street gangs, has access to technology, while huge multinational corporations battle each other illegally, each holding more power and wealth than world governments. Warfare, as well as ordinary criminal acts, is executed through pure electronic communication, often using
programs like artificially intelligent viruses, so smart and human-like that some even received real human citizenship. Being more than a literary genre, cyberpunk may also be applied to other forms and media, including film, comics, music, and fashion. Sometimes it is interpreted as a critique of capitalism, or as the disembodied style or ‘hacker chic’ best fitted to represent social interaction in cyberspace. Cyberpunks are people who explore the digital landscapes of electronic space, and the term is often used to describe the outlaws and hackers on the computer frontier7, people involved in illegal computer activities such as breaking into networks. It should be noted that not all forms of science fiction that deal with cyberspace are considered forms of cyberpunk.

4 HEIM, MICHAEL, The Erotic Ontology of Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.62
5 For further explanation about communication protocols and electronic agents, see Chapter II: Internet Cyberspace.
6 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.4
The popular as well as the specialised press hailed Gibson as the unchallenged guru, prophet, and voice of the new cybernetic world order and virtual reality. Curiously enough, although he has become an authority on the highly technological issues of the virtual realm, not many of his readers know that Gibson actually wrote Neuromancer on a simple 1927 Hermes typewriter.
I.3.2 Conception

Cyberspace. The actual term is technically unimportant, as other phrases are often used synonymously: Cyberia, Cyburbia, virtual space, virtual worlds, dataspace, the Matrix, the digital domain, the electronic realm, the information sphere, Electropolis, Netropolis, Virtual Reality, computer networking, the Internet… Nevertheless, it should be noted that Cyberspace™ (trademark!) almost became a historical fact. As the word obviously seemed very attractive for commercial use, Autodesk Inc. seriously tried to protect it for one of its VR projects8. Such a rush to lay claim to intellectual property rights was a clear sign that the forces of commerce were firmly charging at the economic richness of the expression. William Gibson, helped by Michael Benedikt, a man whose importance will be clarified later, succeeded in legally stopping this proposal in order to keep the term in the ‘Public Domain’.
But what does it mean literally? ‘Cyber’ connotes automation, artificial control, and computerisation.9 In the context of artificially generated imaginary environments, ‘space’, of course, means a multidimensional place, most often used in relation to electronic spaces created by computer-based media.10 The word that results is as futuristic as the concept. Although regularly criticised in its early existence as a temporary hype or typified as ‘only for nerds’, the word ‘cyberspace’ continued to grow rapidly in popular usage and meaning. When the same critics still could not invent a better expression to replace the old one, it was concluded that this already commonly accepted word had become a necessary permanent fixture in many western languages, and not only in English.
Purists also complained that ‘cyberspace’, pronounced “sī-bûr-spās”, is actually derived from the expression ‘cybernetics’, meaning the study of control mechanisms, indicating control through interactivity. Cyberspace could thus be understood as a place capable of controlling information, as a medium that enables people to control certain devices through computers that give them a feeling of some kind of feedback. The expression cybernetics in turn is derived from the Greek “kubernetes”, and thus, critics argue, should be pronounced “kī-bûr-spās”. But obviously, they lost the battle.
7 Ref.: HAFNER, KATIE & MARKOFF, JOHN, Cyberpunk: Outlaws and Hackers on the Computer Frontier, 1992
8 SALA, LUC & BARLOW, JOHN P., Virtual Reality: De Metafysische Kermisattractie, SALA Communications, Amsterdam, 1990, p.50
9 ‘Cyber-‘ has actually become the prefix of the 1990s: cyberspace, cyberdeck, cyberpunk, cybernaut, cyberart, cybergames, cybersex, cybertalk, cyberbody, cyberworld, …
10 Other non-electronic ‘spaces’ are able to emerge when, for instance, people read books, listen to the radio, etc.
Etymologically ‘cyber’ means “steersman”, and that is what they are, the ‘console cowboys’ or ‘jockeys’ in Gibson’s books, when they ‘jack’ into the infinite ‘Matrix’ (a term that actually originates from the Latin word for mother). Connected through a neurological implant, they ride their cardinal brainwaves retrieved from an electronic controlling cyberdeck, experiencing the digital sensations of every requested command or program run through some kind of perceptible representation. The physical world is hereby replaced by a symbolic, media-generated landscape.
“She slid the trodes on over the orange silk headscarf and smoothed the contacts against her
forehead.
“Let’s go,” she said.
Now and ever was, fast forward, Jammer’s deck jacked up so high above the neon hotcores, a
topography of data he didn’t know. Big stuff, mountain-high, sharp and corporate in the non-place
that was cyberspace.”
(William Gibson, Count Zero)
I.4 Significance

In this paragraph, several reasons will be given why the term cyberspace has received so much attention and importance up to today. How did a word that the science fiction writer Gibson had thrown into his work almost casually, and with unconscious irony, acquire such value within only a few years’ time? It is strange to notice that, with the dark nature of Gibson’s view in Neuromancer in mind, this term transformed in everyday use into a dynamic and positive representation of concepts and applications that the visionary writer himself could not have foreseen or predicted. The word broke out of the domain of the synergetic techno-visionary world of ‘cyberpunk literature’ and engaged the creative imaginations of a broad spectrum of government, corporate, and academic researchers from various disciplines.
Although popularised by Gibson’s books, cyberspace passed the phase of trendy phenomenon rather easily and is now considered a powerful, collective mnemonic technology that promises to have an important, if not revolutionary, impact on the future composition of human identities and cultures. In the view of the anthropologist David Tomas, Gibson has devoted considerable attention to the chilling socio-economic implications of this space and its post-industrial context. Describing and extrapolating the lives of the lower social levels of society, he tries to show the possible consequences of the future information age, rather than offering dry speculation. In fact, Tomas is convinced that Gibson delivers the most sophisticated and detailed ‘anthropological’ vision of cyberspace to date: its social and economic facets and the outlines of its advanced post-industrial form. In this view, Gibson’s anthropological description is significant in three different ways.11
11 TOMAS, DAVID, Old Rituals for New Space: Rites de Passage and William Gibson’s Cultural Model of Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.32. David Tomas is an artist and anthropologist teaching in the Department of Visual Arts at the University of Ottawa in Ontario. He has published numerous articles on ritual and photography, including one on the technicity in William Gibson’s novels.
1. Science fiction is considered an important tool that allows us to make sense of a rapidly emerging post-industrial culture. It acts like a ‘spatial operator’, connecting pasts and futures by way of the present.
2. It allows us to interpret an advanced information technology that has the potential to overthrow the sensorial architecture of the human body by reformatting its organic borders in powerful, computer-generated, digital spaces.
3. Advanced digital technologies, such as those that generate cyberspace, can act as a testing ground for ‘post-ritual’ theories and practices, as conceptualised by a post-industrial anthropology.
So obviously some authors argue that the success of Gibson’s powerful vision of cyberspace was actually not due to its merit in signalling some kind of technological development, but because he tried to describe a fascinating new social community. For the social researcher Allucquère Stone, Neuromancer reached the hackers and also the technologically literate and socially disaffected who were searching for social forms that could transform the fragmented anomie that characterised life in Silicon Valley and all other electronic ghettos.12 This book provided them with the imagined public sphere and refigured community that was able to establish the grounding of a new kind of social interaction. In this way, the publication of 1984 resulted in a massive inter-textual presence not only in other literary productions of the 1980s, but also in technical publications, conference topics, hardware design, and scientific and technological discourses. Other merits might be more related to the ‘right time, right place’ phenomenon of the book, in which the fascinating way technology was described grasped the attention of more than one person personally involved in the field of computer research. In this context, the next fragment can be most clarifying.
Question: How does cyberspace relate to ‘virtual reality (VR)’, ‘data visualisation’,
‘graphic user interfaces (GUIs)’, ‘networks’, ‘multimedia’, ‘hyper-graphics’ and
many other catchy words introduced by the computer technology industry?
Answer: Cyberspace relates to all of them. More than this, in some sense
‘cyberspace’ includes them all and much of the work being done under their
rubrics.13
So cyberspace as a project and as a concept has collected these separate projects into one and focused them on a common target. The dream and the fascinating dynamic force the concept incorporates draw many studies and companies onto the track of its own realisation. Although the visionary description provided by Gibson can be characterised as dark and dangerous, it had (and still has) a great influence on the way virtual reality and cyberspace researchers were (and are) structuring their research agendas. Nevertheless, more critical people still dare to doubt the real value of Gibson’s visionary contribution. They do agree that the disturbing and dark vision of fictional cyberspace might have contributed to the initial views of cyberpunks, criminals, anarchists, politicians, and of course businessmen and capitalists. But they also dare to argue that the fictional cyberspace no more depicts the real cyberspace than Dante’s Inferno painted a picture of the real world in which he lived.
12 STONE, ALLUCQUÈRE R., Will the Real Body Please Stand Up? Boundary Stories about Virtual Cultures, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.95
13 BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London, 1991, p.122
I.5 Virtual Reality

Gibson’s definition can also be considered a fictional translation of Ivan Sutherland’s original concept of the ‘Ultimate Display’, a special form of display that presented information to all the senses in a form of total immersion. Arguably inventing virtual reality, Sutherland14 published an academic paper at the MIT Draper Lab in Cambridge in 1968 entitled ‘A head-mounted three-dimensional display’, specifying one of the key technologies still being developed for virtual reality experiments. This system used television screens and half-silvered mirrors, so that the environment remained visible through the TV displays. In these early days of computing, when computers were still more like machines, huge and expensive, Sutherland dreamed about a room within which the computer could control the existence of matter, concluding that, with appropriate programming, such a display could literally be the Wonderland in which Alice walked.15 Later work at NASA and by the American Department of Defence led to some prototypes for space exploration and military applications. These early applications of VR seemed particularly well suited for tank and submarine trainers, as the ‘real’ experience had people looking into low-resolution and small binoculars anyway.
More than 15 years later, Gibson extended this idea of ‘looking into a mathematical wonderland’ to embrace all the human senses experiencing the entire universe of information existing in all electronic resources in the human system. And this might be the very difference from another, easily confused, phenomenon, namely ‘Virtual Reality’. This term was coined in 1989 by Jaron Lanier, founder of the first commercial VR-based company, VPL Research, in an attempt to encompass all of the ‘virtual’ projects then being investigated. Although many writers and researchers in the domain of virtual applications still use both words, cyberspace and VR, with interchangeable meanings, the following paragraph can be read as an interpretation of virtual reality exclusively.
Virtual Reality
A sound, smell, and tactility-enhanced total video environment constructed of
elaborate, flexible, interactive architectures that one may not only inhabit but
actually move through, alter and invent. One inhabits virtual reality in real time,
along with any number of others, by means of an electronic analog or deputy self
through which all interactions are mediated. VR is not a simulated environment,
but a new space altogether, made possible by telephone banks, computer
graphics, and television.16
First experimentally explored by Ivan Sutherland (1968), the technology of virtual reality nowadays stands at the edge of practicality. By mounting a pair of small video monitors with the appropriate optics directly to the head, a stereoscopic image is formed before the user’s eyes. This image is continuously updated and adjusted by a computer to respond to head movements. This results in the first important characteristic of virtual reality: total immersion. The user thus finds himself totally surrounded by a stable, three-dimensional world, which he is able to explore. Wherever the user looks, his eyes are sensing what he otherwise would see if this world were real and existing around him. Sherman and Judkins describe the critical characteristics of virtual reality as “VR’s five i’s”.

14 One of the most influential figures in the history of computing, computer graphics, and computer simulation, and founder of Evans & Sutherland, a developer of military aircraft and vehicle simulators. He refused to talk to the press about himself or about his work. All this makes him an excellent candidate for the role of inventor and hero of virtual reality.
15 WOOLLEY, BENJAMIN, Virtual Worlds: A Journey in Hype and Hyperreality, Blackwell Publishers, Oxford, 1992, p.41 (he refers to: SUTHERLAND, IVAN, A head-mounted three-dimensional display, Proceedings of the International Federation of Information Processing Congress, 1965, p.507)
16 KWINTER, SANFORD, in Newsline, May 1991, in KOOLHAAS, REM, S,M,L,XL, 010 Publishers, Rotterdam, 1995, p.1278
Virtual Reality’s Five i’s17
§ Immersive: virtual reality should deeply involve or absorb the user.
§ Interactive: in virtual reality, necessary techniques should be implemented
that offer both the user and the computer the capability to act reciprocally via
the computer interface.
§ Intensive: in virtual reality, the user should be concentrating on vital information from multiple sources, to which the user will respond.
§ Illustrative: virtual reality should offer information in a clear, descriptive and
illuminating way.
§ Intuitive: virtual reality should be easily perceived and virtual tools should be
used in a ‘human’ understandable way.
The first characteristic, immersion, can be tested by Myron Krueger’s so-called ‘duck
test’: if someone ducks away from a ‘virtual stone’ aimed at his or her head, even while
knowing the stone is not real, then that world is believable.
This ‘virtual’ world can be generated in one of three ways. Either it is calculated in real time by the computer, or it is pre-processed and stored, or it exists physically elsewhere and is ‘video-graphed’ and transmitted in stereo, digital form. In the last two cases, the technique is named ‘tele-presence’ rather than virtual reality.18
“And here things could be counted, each one. He knew the number of grains of sand in the construct
of the beach (a number coded in a mathematical system that existed nowhere outside the mind that
was Neuromancer). He knew the number of yellow food packets in the canisters in the bunker (four
hundred and seven). He knew the number of brass teeth in the left half of the open zipper of the salt-crusted leather jacket… (two hundred and two)”
(William Gibson – Neuromancer)
In addition, the user might wear stereo headphones, which deliver an acoustic sensorium added to the previous visual one. The second most important characteristic of virtual reality, interactivity, is accomplished by the special gloves the user might be wearing, or even a whole body suit, which add an extra human sense to the experience. This equipment tracks motion and position variations, which are transmitted to the computer or to other users to represent the shape and activity of the user’s body. Research is being done to provide an additional form of force feedback to the glove or the suit, so that the user will actually feel the presence of virtual ‘solid’ objects through their weight, texture and even temperature. This physical extension makes it possible to introduce interactive actions into the otherwise static virtual world. Ultimately, science fiction and creative people are imagining devices such as the ‘Holodeck’ in television series such as Star Trek: The Next Generation, or even direct neural connections to the human nervous system, spoken of in Gibson’s novels. Before technology develops that far, three main areas require the most research: sensory perception interfaces, hardware development and 3D graphic displays. This research results in a spectrum that can be split into four broad categories.
17 Referred to: SHERMAN & JUDKINS, Glimpses of Heaven, p.122 (in McMILLAN, KATE, Virtual Reality, Architecture and the Broader Community, http://www.arch.unsw.edu.au/subjects/arch/specres2/mcmillan/futworld.htm, May 1994)
18 BENEDIKT, MICHAEL, Introduction, in Cyberspace: First Steps, MIT Press, London, 1991, p.11
VR’s four Categories of Interface
§ desktop systems: navigating through 3D on a monitor
§ partial immersion: navigating through 3D on a monitor with enhancements
such as gloves and 3D goggles
§ full-immersion systems: head gear, gloves, and bodysuits
§ environmental systems: externally generated 3D, but with little or no body
paraphernalia
In short, virtual reality is most often used to simulate some kind of believable actuality through the manipulation of sensory feedback using electronic and digital technologies. It acts like a technological tool that provides a more intimate ‘interface’ between humans and computer imagery. It is about simulating the full ensemble of sense data that makes up ‘real’ experience. With this description of virtual reality, it is only logical that some people confuse it with the mental state and visual images of cyberspace. Although the worlds created by virtual reality overlap with cyberspace, cyberspace itself extends beyond virtual reality to encompass a much broader range of human communications and interactions. Certainly, virtual reality will be found in cyberspace, but the two concepts are as dissimilar as the spoken word is to the radio.19 Further investigation and clarifying explanations of cyberspace can be found in the next paragraph. However, making a clear distinction between the two separate phenomena throughout all the streams of thought of the many authors will prove to be a rather difficult task.
I.6 Meaning

Cyberspace. What does it mean to you? Gibson once said: “Cyberspace has a nice buzz to it, it’s something that an advertising man might have thought up, and when I got it, I knew that it was slick and essentially hollow and that I’d have to fill it up with meaning.”20 What at first was only meant as a description of today’s post-modern culture has already proved to have changed rapidly and drastically. Gibson was not the only one who tried, in his succeeding books, to manipulate the content of cyberspace in his fiction; many researchers also attempted, in many of their publications, to deliver the most suitable interpretation. Unfortunately, not all of the produced texts have a unanimous or uniform view on this subject.
For example, Gibson’s own interpretation came from watching children playing video
arcade games. He actually observed that these kids and also many computer users
seemed “to develop a belief that there’s some kind of actual space behind the screen,
some place you can’t see but you know is there”.21
Random searches through people’s opinions yield examples showing the vast pool of possible interpretations in which the essential meaning should be found. As John Perry Barlow, once lyricist for the Grateful Dead and a much-hyped cyberspace pioneer after he co-founded the Electronic Frontier Foundation, put it: “…that place you are in when you are talking on the telephone”. Or again, phrasing Howard Rheingold in his book The Virtual Community: “Cyberspace…is the name some people use for the conceptual space where words, human relationships, data, wealth, and power are manifested by people using computer-mediated communications”. The most architectural, three-dimensional cyberspace world, though, is imagined by Michael Heim, in his book The Metaphysics of Virtual Reality:
19 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.12
20 WOOLLEY, BENJAMIN, Virtual Worlds: A Journey in Hype and Hyperreality, Blackwell Publishers, Oxford, 1992, p.122 (he in fact refers to: Interview with the author, Late Show, BBC2, 26 September 1990.)
21 BARNES, SUE, Creating Paradoxes for the Ecology of Self, in STRATE, LANCE, JACOBSON, RONALD & GIBSON, STEPHANIE B. (Eds.), Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996, p.195
“The juncture of digital information and human perception, the ‘matrix’ of
civilisation where banks exchange money (credit) and information seekers
navigate layers of data stored and represented in virtual space. Buildings in
cyberspace may have more dimensions than physical buildings do, and
cyberspace may reflect different laws of existence. It has been said that
cyberspace is where you are having a phone conversation or where your money
exists. It is where electronic mail travels, and it resembles ‘Toontown’ in the movie
Roger Rabbit.”
Obviously, this last definition implies that we have already entered the cyberspace age, while it also tries to bring together Gibson’s notion of the visualisation of abstract data and the more common conceptions of virtual reality. This abstract border, already blurred in the last description, has to become even more transparent if a final and clear distinction is to be sought. In the next paragraphs two different definitions will therefore be investigated. Each point of view represents one of the main interpretations that can be found in most of the publications about cyberspace.
I.6.1 Internet Cyberspace

What is cyberspace? To this rather simple question, many answers are possible. It would seem that this is a word that either defies definition, or is one of those intuitive words that can be understood without a definition. However, some key characteristics of this phenomenon can be identified, all of which have to be included in any possible interpretation formulated later.
Characteristics of Cyberspace22
§ It is a virtual space, like a state of mind, a place simultaneously real and
artificial, and thus by definition not a physical location. It can be easily
compared to a trance-like state we human beings enter when we are
absorbed in visual or verbal communication, such as reading, writing,
observing and examining pictures, watching video or art, or listening carefully
to music or speech. In this way, cyberspace can be considered as a digital
complement of our atomic world.
§ It can be entered only by means of some sort of physical access device with
an artificial processing mechanism, such as digital computing power and/or
software that is joined with other access devices on a network of physical
connections. Whether this physical assistant is a computer screen, a
telephone, a terminal, a Holodeck or a neurological organic chip is considered
irrelevant. Without an access device, there is no distinction between
cyberspace and communications in the real world. Whatever tool people
want to use, it defines the nature of the experience in cyberspace and may be
considered as the border of cyberspace or the window (cf. Sutherland’s
‘Ultimate Display’) into cyberspace.
§ It enables interaction and communication between individuals and groups of
individuals and their creative output, largely independent of time and space.
Cyberspace is understood as incomplete without any interaction. This interaction differs from what would normally be expected, in the sense that it may often be somewhat indirect, delayed in time or separated in distance. The sense of immediacy that apparently results from interactions in cyberspace is in fact artificial at best, since these human communications almost always lack similarity of place, and usually also happen at a shifted, different time.
22 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.7
Most often, we can distinguish the computer screen as the physical access device, acting as a window into the new electronic world of today, where physical connections, both human and hardware, are coupled to create an alternative reality or, in other words, a virtual space. Most of today’s connections, however, are based upon the spoken and written word, which are still most easily transmitted and represented artificially, given the state of technology today and its ever-present limits. The explosive growth of the World Wide Web (WWW)23, however, is rapidly adding pictures, sound, and even video to the cyberspace experience. Looking at the research done by the computer companies, it can certainly be expected that future connections will become more and more realistic, more like ‘being there’ and thus more like ‘virtual reality’. Ultimately, technological development might end somewhere close to the futuristic fantasies that today exist only in the imaginations of visionaries.
There are numerous manifestations of cyberspace, and although they have some
things in common, each can be distinguished by the nuances of its purpose and origin.
The following list is a quick overview of some manifestations happening in what we
define as Internet cyberspace. Further specific explanation of the most important
applications will be given in the next chapter, entitled “Internet Cyberspace”.
Online Phenomena24
§ telephone conversations
§ electronic mail (e-mail)
§ telephone mail and answering machines
§ newsgroups and forums
§ mailing lists
§ chat rooms
§ Telnet destinations
§ web sites
§ electronic libraries, such as FTP sites
§ electronic conferencing
§ conference calls
§ MUD (Multi-User Domains)
§ virtual reality
§ Interactive TV of all forms, including visual telephones
Reading this list, many people find themselves surprised by the fact that they have lived in cyberspace more than they imagined, without fully knowing about it, even if they have never touched a computer. Other cyberspace experiences many people are familiar with are, for instance, watching a movie ‘on demand’ ordered from a pay-per-view cable box, submitting an order for merchandise or a game subscription via a special commercial number, or taking money out of an Automated Teller Machine (ATM).25
The former method of clarification, which uses a collection of examples and applications to make a definition clearer and more understandable, has the disadvantage, however, of confusing some people with other phenomena they already know or have heard of. David Whittle tried to avoid this by analysing the most common misunderstandings of what cyberspace might be, but actually is not.
23 See next chapter.
24 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.8
25 Some authors, of whom William Mitchell is only one, actually argue that the graphic surface of the automated teller machine is a more important public representation of a bank than the façade of any of its remaining buildings.
I.6.2 Misunderstandings of Internet Cyberspace
“’Try it,’ Case said.
Aerol took the band, put it on, and Case adjusted the trodes. He closed his eyes. Case hit the power
stud. Aerol shuddered. Case jacked him back out. ‘What did you see, man?’
‘Babylon,’ Aerol said, sadly, handing him the trodes and kicking off down the corridor.”
(William Gibson – Neuromancer)
1. Virtual Reality

The first misunderstanding that comes to mind is the improper comparison with ‘virtual reality’, which has already been described and clarified earlier, and which quickly fails when used to conceptualise cyberspace itself. A clear distinction can be made between, on the one hand, the whole group of electronic human-communication manifestations, cyberspace, and on the other hand only one of its many members, namely virtual reality.
2. Information Superhighway

Metaphors and analogies are perhaps the most powerful way to convince people. But this technique is also a prime source of confusion, certainly when people try to present new ideas by building on the foundation of familiar concepts to represent the very unfamiliar. Another example, the ‘Information (Super)Highway’ or ‘Infobahn’, is a good description of the backbone of a global network, but is obviously an abused metaphor for cyberspace. For David Whittle, ‘information superhighway’ can perfectly describe the physical infrastructure that constitutes the standards and bandwidth of the networks and connections upon which cyberspace is being built. However, it is entirely inappropriate for representing the entire set of online phenomena, because it raises a variety of impressions that do not apply well to the concept of cyberspace, as the following comparison tries to show.
Information Superhighway26
§ is used to travel along wide, well-maintained paths, funded, owned and controlled by the (American) federal government.
→ cyberspace: a network of highways, avenues, streets, and roads covering the whole world, funded by government and private enterprise and owned and controlled by no one.
§ is often used to travel from a known beginning to a known end, for a known purpose.
→ cyberspace: the ‘journey’ is represented by seconds of delay and is actually pointless, while the destination is everything and often unknown before arriving.
§ is a part of a finite number of broad high-traffic connections between only the most important cities.
→ cyberspace: every size of connection and every size of node is present and available.
As the last point of the list is only a general comparison, the precise technique of transmitting data through communication networks like the Internet will be explained later. Only then will it become evident that the corresponding illusion of traffic jams and speed limits is an elementary misunderstanding: speed is hardly a problem on the digital network, but the notion of bandwidth in fact is.

26 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.10
3. Electronic Frontier

Even the mental image of an ‘electronic frontier’ will not be appropriate forever. Many of the ‘pioneers’ already feel as if the frontier has been given away to the hordes of ‘newbies’. It is already a fact that the formerly vast, unexplored online land has fallen victim to crude commercialisation and is consequently sliding into banality. So it can be argued that this metaphorical pioneer imagery, too, will be lost forever.
4. Feeling of Fear

Another problem arises when people start to believe the misleading pictures and visions presented by the largest part of the entertainment industry, from Hollywood to the ordinary sci-fi comics. We can readily agree that we are indebted to Gibson for the word, and to numerous science fiction writers for the conceptual foundation of cyberspace. However, there is little value in accepting any of those views as simple fact, and a critical attitude is needed when the consumer market is continuously massaged with promises of ‘sci-fi becomes reality’ technology, or with fears of the unbelievable power of electronic crime.
I.6.3 Benediktine Cyberspace27

The description of Internet cyberspace can be classified as very specific, precise and easy to understand. However, this view seems not very powerful compared to the visionary thoughts some people are formulating about, for instance, the future relation between architecture and cyberspace. It may be regrettable, but obviously a more visionary definition is needed in order to grasp the kind of representation researchers, as well as some of the science fiction writers, are referring to.
An interesting and more academic definition and discourse in this matter is given in Michael Benedikt’s28 article Cyberspace: Some Proposals, where he tries to answer the important question:
What is cyberspace?
“Cyberspace is a globally networked, computer-sustained, computer-accessed,
and computer-generated, multi-dimensional, artificial, or ‘virtual’ reality. In this
reality, to which every computer is a window, seen or heard objects are neither
physical nor, necessarily, representations of physical objects but are, rather, in
form, character and action, made up of data, of pure information. This information
derives in part from the operations of the natural, physical world, but for the most
part it derives from the immense traffic of information that constitute human
enterprise in science, art, business and culture.”29
This definition differs somewhat, although not very clearly, from that of virtual reality. VR actually only tries to describe the digital simulation of a general environment, the total immersion, and the possibility of interaction of the inhabitant, the human user. With cyberspace, by contrast, the perspective broadens to encompass a larger spectrum of visual and information-based representations, which is also much closer to Gibson’s first concept. In agreement with the principles of Internet cyberspace, Benediktine cyberspace should thus be seen as a global, coherent virtual world, independent of how it is accessed and navigated. There may be several ways to enter cyberspace, from mouse-controlled animation of video monitor images through to a completely developed virtual reality technology. Many ways should be possible to navigate around, act in, or manipulate the environment. In other words, cyberspace should even act like a city, allowing all kinds of activities to happen as they may. Therefore, although it depends on them technically, the global concept of cyberspace itself is neither a hardware system, nor a simulation or sensorium production system, nor a software graphics program. It is a place, and a mode of being.

27 This term is in fact not used by Michael Benedikt himself, but has been found in an essay by YOUNG, PETER, Three Dimensional Information Visualisation, 1996 (also published in Computer Science Technical Report, No. 12/96), http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/
28 Michael Benedikt is Professor in the School of Architecture at the University of Texas at Austin. He has taught at the Graduate School of Design at Harvard and is also President and CEO of Mental Technology Inc., of Austin, Texas, which is actually a software design consultancy.
29 BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London, 1991, p.122
While it is generally agreed that any physical access device is permitted, Marcos Novak foresees that a certain well-defined application will emerge out of the characteristics of a shared, digital, and virtual world. For Novak30, visualisation is the task of a cyberspace deck, or more precisely, a cyberspace synthesiser. This device receives a minimal, coded and compressed description of the cyberspace, and is able to generate a visualisation of that space for the user to navigate within. The quality of the rendition then depends only on the technology and parameters used by the user. For transmitting this data, a cyberspace protocol is used, which includes a description language for virtual reality, a user-configurable interface standard, a list of primitives and the valid relations among them, and operations upon these. The overriding principle in every case is that of minimal restriction. What is remarkable is the fact that rendering cyberspace is different from synthesising it. The cyberspace decks are primarily responsible for virtual reality synthesis, while the actual rendering is processed by current graphic supercomputer workstations.31
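By way of illustration, the fragment below is a minimal sketch of such a ‘minimal, coded description’, written in VRML, the scene description language that will return in chapter IV; the scene content itself is hypothetical and not taken from Novak. A cyberspace deck would receive compact text like this over the network and synthesise from it a navigable three-dimensional visualisation, at whatever quality the local technology and the user’s parameters allow.

    #VRML V2.0 utf8
    # A compact, transmittable description of one 'data construct'.
    # Only this description travels over the network; the rendering
    # happens locally, at the receiver's own level of quality.
    Transform {
      translation 0 2 -10          # position in the shared coordinate space
      children [
        Shape {
          appearance Appearance {
            material Material { diffuseColor 0 0 1 }  # a blue corporate 'tower'
          }
          geometry Box { size 2 4 2 }
        }
      ]
    }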
Benedikt as well as Novak seek to bridge the gap between science fiction and reality, and situate their cyberspace in the future. Therefore, some critical voices diminish the importance of their view for their own personal research, arguing that this concept remains hypothetical, unrealised and unreal.32 Nevertheless, it is definitely interesting to comment on the hypothetical view of Benedikt, who is not accidentally an architect by profession, and on his proposed elementary principles for loading space with three-dimensional objects of information.
Furthermore, as can be concluded from Benedikt’s definition of cyberspace, the dimensions, axes and coordinates existing in this digital world are not necessarily equivalent to the physical ones of our natural, gravitational environment. These dimensions themselves can be loaded with informational values, which are appropriate for optimal orientation and navigation in the accessed data. To be capable of a three-dimensional representation of all kinds of abstract information, some translation has to be done from certain known variables to visually distinguishable characteristics. In this way, many searchable variables can be shown to the user in very different ways. Position, colour, size, action, texture, etc. can all depend on that information, and are even able to change according to time or the user’s commands.
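As a concrete, though hypothetical, illustration of such a translation, the VRML fragment below maps the informational values of a single document onto visually distinguishable characteristics: two of them determine its position, one its size, and one its colour. The chosen variables (relevance, age, file size, document type) are assumptions made for this sketch, not principles taken from Benedikt.

    #VRML V2.0 utf8
    # One retrieved document shown as an information object.
    # Assumed mapping: x = relevance score, y = age in years,
    # radius = file size, colour = document type (red = image).
    Transform {
      translation 8.5 3.2 0
      children [
        Shape {
          appearance Appearance {
            material Material { diffuseColor 1 0 0 }
          }
          geometry Sphere { radius 0.4 }
        }
      ]
    }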
30 NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.233
31 The term ‘cyberspace synthesis’ refers to the reconciliation of different kinds of information into a coherent image, while ‘cyberspace rendition’ refers to the production of a high-quality graphic presentation of that image.
32 STRATE, LANCE, JACOBSON, RONALD & GIBSON, STEPHANIE B., Surveying the Electronic Landscape: An Introduction, in Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996, p.2
Rules and principles that should be followed in designing this sort of cyberspace will be investigated in chapter ‘IV. Information Architecture’. There it will also be analysed how cyberspace can represent usable information in a meaningful way, and which rules from the physical world should be implemented in this virtual realm. After all, this cyberspace should be designed as another life-world, a parallel universe, like a dream thousands of years old: the dream of transcending the physical world.33
“’Christ,’ Case said, awestruck, as the virus twisted and banked above the horizonless fields of the
Tessier-Ashpool cores, an endless neon cityscape, complexity that cut the eye, jewel bright, sharp
as razors.
‘Hey, shit,’ the construct said, ‘those things are the RCA Building. You know the old RCA Building?’
The program dived past the gleaming spires of a dozen identical towers of data, each one a blue
neon replica of the Manhattan skyscraper.”
(William Gibson, Neuromancer)
Furthermore, Benedikt’s definition tells us that information-intensive institutions and businesses all have a form, identity, and working reality, in one word, an ‘architecture’. This ‘architecture’ is considered to be the counterpart of, and different from, the form, identity, and working reality of the physical world. The ordinary physical reality of these institutions and businesses is seen as a surface phenomenon, a husk, their true energy coursing in ‘architectures’ unseen except in cyberspace.
This applies to individuals as well. In cyberspace, egos, roles, and functions no longer depend on physical appearance, location, or circumstances. In this new existence of the individual, virtue is replaced, new associations become possible, for both non-economic and economic reasons, and new levels of truly interpersonal communication can be developed.
Benedikt is well aware that this completely mature kind of cyberspace does not yet exist outside of science fiction and the imagination of a few thousand people. Nevertheless, it can be argued that the efforts the computer industry is making nowadays are only the actions of a temporary, expensive but patient ‘under construction’ stage. Benedikt lists the most important of these efforts as follows.

§ development of and access to three-dimensionalised data
§ effecting real-time animation
§ implementing ISDN and enhancing other electronic information networks
§ providing scientific visualisations of dynamic systems
§ developing multimedia software
§ devising virtual reality interface systems
§ linking to digital interactive television
33 Michael Benedikt is aware that it is probably best not to use grandiose terms like these, which can easily be criticised. It can be noticed, though, how enthusiastically people greet every little step the computer industry takes closer to this vision. It can be examined how hype is created, or which names are chosen for computer companies and products. And then it can be argued that the whole industry’s creativity in convincing its customers is driven by dreams such as cyberspace.
I.6.4 Threads

In search of the foundations of this Benediktine cyberspace, Michael Benedikt himself distinguishes several threads that each try to trace the logical development of the phenomenon through stages of human evolution. Each story is able to intertwine with another, and these four are certainly not the only explanations that can be found. But it is certain that these impressionistic points of view, seen through the eyes of Michael Benedikt, are intriguing in the way they try to seek the right place for the more utopian cyberspace within some important historical developments.
1. The Myth
The first and oldest narrative begins in language, and perhaps before language, with a
‘commonness-of-mind’ among members of a tribe or a social group. Beliefs about the
environment, the dangers, the meanings of things, the earth, the sky, and far beyond
were shared in the mind and the behaviour of a group. With language and pictorial
representation, these ideas began to elaborate at a rapid pace. Variations develop on
the common themes of life and death, resulting in many different ‘whys’ and
‘wherefores’… Less coherent systems of narratives, characters, scenes, laws and
lessons, even myths, began to play an important role in sharing these values through
time and succeeding generations. It can be said with great certainty that these
mythological themes are still vital in our western, advanced technological cultures. They
inform us about the way we understand each other and test ourselves, how we shape
our lives. In this way, myths both reflect the ‘human condition’ and create it. The segment of our population most susceptible to this collective unconscious is the group of young people, whose boundaries between fiction and fact, between wish and reality, are not yet fixed. Pure and ideal archetypes, delivered to them by their education as well as by the entertainment industry, become magnified and twisted in their struggle towards adulthood.
It is no surprise, then, that adolescents, and in particular adolescent males, almost single-handedly support the comic book, science fiction, and video-game industries, which are filled and in fact alive with dynamically adapted representations of myth. These young males are so convinced that their personal ‘mission’ consists of mastering the newest technologies that they actually populate most of the online communities and newsgroups. Indeed, just as cyberspace was announced in a science fiction novel, so have these programmers and hackers, mostly working day and night in the world’s best computer laboratories, created cyberspace by their very activity. In this cultural-anthropological view, cyberspace can be seen as an extension of, and a most tempting stage for, those ‘gateway’ media that are by definition – like theatre, books or paintings – somehow less themselves than what they actually reach for.
Comment
Whether cyberspace can be considered the result of the hard work and the imaginative invention of many young, ambitious males alone is a complex discussion. Arguments may differ from the mythical version Benedikt gives, for instance if the socially approved lives and hyped conventions of the high-technological industries’ (male) workforce were objectively investigated. It can also be noted, though, that even if statistics show that the majority of virtual community members (more than 90%) are male, the online share of the opposite sex is rising quickly.34
34 Different authors give many reasons for the absence of women in online manifestations. Many blame the largely fixed gender relations in most electronic games.
2. The History of Communication Media
Closely related to the thread of the myth is the history of media technology: the history of the technical means by which absent or abstract entities, like events, experiences and ideas, become symbolically represented. We could start with the unintentional spoors and tracks human beings left in the surrounding vegetation, signs which later became intentional and ‘produced’: markings on sand, wood, bone, stone, the human body, and later on tablets, papyrus and so on. As society grew and the need to keep records and to educate became apparent, writing advanced into smaller, more efficient and more conventional symbols. Already in this period, the movement towards the dematerialization of media and the reification of meanings became clearly visible. It would be wrong to underestimate the traffic of information then, as it was a period filled with social activity, when even objects were loaded with meaningful stories of their maker, their use and their ownership.
Centuries later, the invention of the printing press changed the ‘records’ into easily duplicable and transportable goods, and social and scientific life would never be the same. The introduction of the telephone changed physical information into an electrically transportable fact, fast and without delay, which could even be ‘stored’ electromagnetically. The medium was further dematerialised, and finally space and time were conquered as well. The parallel development of wireless broadcasting began to saturate the world’s invisible airwaves with a huge amount of encoded information, available everywhere and at any time. Television and cellular phones turned humans into nomads who are always in touch. With the upcoming digital television, fast personal computers, and high-bandwidth cable, the so-called post-industrial societies stand ready for a deeper voyage into the individual’s needs. Online communities are increasing in number as well as in users. Cyberspace is then seen as a public, consistent, and democratic ‘virtual world’. When people intensively experience multimedia computing or fully developed virtual reality, the first historical movement from physical doing to developed symbolic doing will loop back. In this era, communication through language-bound descriptions of information is set to decline in favour of the possibility to transmit information as events, in a manner both immersive and interactive.
“In future computer-mediated environments whether or not this kind of literal,
experiential sharing of worlds will supersede the symbolic, ideational, and implicit
sharing of worlds embodied in the traditional mechanisms of text and
representation remains to be seen. While pure virtual reality will find its unique
uses, it seems likely that cyberspace, in full flower, will employ all modes.”35
3. History of Architecture
The next narrative tries to explain the principal theme driving architecture’s self-dematerialization. This phenomenon might contradict the general view of architecture, which can be described as the art of creating durable physical worlds, able to withstand generations of men, women, and children. Architecture began with the creative response to climatic stress, with the choosing of advantageous sites for settlements, and with the internal development of social structures to meet population and resource pressure: the mechanics of privacy, property, legitimisation, task specialisation, ceremony and so on. All this had to be carried out within the constraints of time, materials, convention, and the design and construction expertise then available.
35 BENEDIKT, MICHAEL, Introduction, in Cyberspace: First Steps, MIT Press, London, 1991, p.13
Reality is death…
“If only we could, we would wander the earth and never leave home; we would
enjoy triumphs without risks, eat of the Tree and not be punished, consort daily
with angels, enter heaven now and not die.”36
Pursuing these dreams, we build gravity-defying cathedrals, create paradise gardens, huge sports stadia for games, and magnificent libraries, reaching beyond nature’s grip in the ‘here and now’. Meanwhile, as counterpart to the earthly Garden of Eden, floats the image of the Heavenly City, the New Jerusalem of the Book of Revelation. Furthermore, it can be noted that all the images of the Heavenly City, in East and West, have common features. Benedikt lists some of them as follows: weightlessness, radiance, numerological complexity, palaces upon palaces, peace and harmony accomplished by the rule of the ‘good and wise’, utter cleanliness, transcendence of nature and of crude beginnings, and the availability of all things pleasurable and cultured. Even today, these descriptions, which originated in the time of medieval monks, continue in many science fiction novels and films. In almost all cultures in history, buildings and projects have been begun in serious pursuit of realising the dream of the Heavenly City. If the history of architecture is filled with visionary projects of this kind, these should be considered physical realisations of a symbolic, cultural archetype, standing for enlightened human interaction, form, and information. Thus, while the original biblical Eden may be imaginary, the Heavenly City is considered twice as ‘imaginary’. Once, in the very conventional sense, because it is not real; but once again because, even if it became actual, it could come into existence only as a virtual reality, only ‘in the imagination’. And thus only as a religious vision of… cyberspace.
Returning to the history of architecture, both Michael Benedikt and Marcos Novak37 use visionary architectural examples to prove and clarify their professional point of view in this matter. They argue that visionary architecture, like poetry, seeks an extreme: beauty, awe, structure or the lack of structure, enormous weight, lightness, expense, economy, detail, complexity, universality, uniqueness. These projects, which carry more meaning than good proportions or structural engineering alone, are often well beyond what can be built. This should not be seen as a weakness, as it is the very essence of ‘a vision’.
In art, early modern artists like Malevich, Kandinsky, Klee or Mondrian prefigure cyberspace in their turning away from representing known nature. The paintings of Max Ernst or Bosch create mysterious new worlds.
But this can also be recognised in the history of architecture. Piranesi’s series of
etchings entitled Carceri, or Prisons, marks the beginning of an architectural discourse
of purposefully unbuildable visions. Against the increasing constriction of architectural
practice, Piranesi drew an imagined world of complex, evocative architecture. Ledoux
emphasised une architecture parlante, architecture as poetry. Boullée tried to search for
a way to express the sublime potential of architecture. And the production of visionary
architecture even continues to the present. The ability to imagine architecture obviously
outstrips the ability to build it. In many other disciplines this marks the difference
between applied and pure research, and the value of pure research has always been
undisputed. The theoretical laboratory in architecture is still the Studio, but it is accessible only to architects, so the world cannot share the inventions produced there.
Cyberspace architecture can then be seen as a vast virtual laboratory for the invention
of new architectural visions, while it is also returning architecture to a public realm.
36 BENEDIKT, MICHAEL, Introduction, in Cyberspace: First Steps, MIT Press, London, 1991, p.14
37 NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.244
In another line of reasoning, the message carried by any architectural representation is investigated. The invention of high-tensile steels, steel-reinforced concrete and high-strength glass, together with economic pressure, steered the architect towards celebrating a new vocabulary of lightness. In 1924, Le Corbusier designed his own Heavenly City, La Ville Radieuse, the Radiant City, an exercise in soaring geometry, rationality, and enlightened planning. The forceful notion that architecture is about the experiential modulation of space and time, with buildings carrying symbolic content and shaping information and meaning in their anatomy, captivated architectural theory between the 1920s and the 1960s.
But it seems that this architectural ‘message system’ has taken on a life of its own. In some movements, architecture shifted in a peculiar way towards the field of illustrated conceptual art. Sometimes buildings themselves have begun to be considered as inhabitable arguments, propositions, or narratives in an architectural discourse. In the movement called ‘Deconstructivism’, for instance, the building is considered not (only) as an object of beauty or inhabitation, but as an object of information to be ‘read’ as a collection of junctions, reversals and iterations, metaphorical meanings, and so on, becoming a pure demonstration of an intellectual process. Logically, then, there is a limit to how far these notions of dematerialization and abstraction can reach and still produce interesting, useful, real architecture. To some, this limit has already been reached, although the search for the Heavenly City remains. And indeed, the solution is apparent: this impulse can usefully flourish further, and even far beyond, in… cyberspace.
Comment
Questions can be raised as to which other scientific laboratories are accessible to the public, and whether the public really cares for them to be open. Furthermore, it is remarkable to what extent Benedikt and Novak believe in the force of the avant-garde. Referring to Aaron Betsky’s Violated Perfection (1990), Benedikt is almost sure that: “…we should remember that, as a rule, today’s avant-garde informs tomorrow’s practice.”
It is the practice and force that the avant-garde had (and still has) in the architectural discourse which both authors use to clarify the role of future architecture in the field of the virtual. But it can also be noted that examples are known of creative movements whose influence on the architectural discourse and on history as a whole was, after all, not that important – let alone the whole discussion that could be started about what can be considered avant-garde and what not. It thus seems that both of these architecturally minded researchers are trying to create a well-defined, specific field of virtual design practice, one of clearly equal importance, but actually standing next to physical architecture. Nor are the concept of the dematerialization of architecture and the concept of the Heavenly City equally obvious or provable within the wide range of opinions and creative associations architecture provokes. Moreover, it is obvious that even nowadays some architects are, on the contrary, very much engaged in producing the qualities of matter and sensuality rather than the dynamic flows of contemporary feelings and technologies.
4. History of Mathematics
This thread can be conceived as a line of arguments and insights that revolve around
three different thoughts.
§ the propositions of geometry and space
§ the spatialisation of arithmetical or algebraic operations
§ the reconsideration of the nature of space in the light of the second point
Reasoning with shape – in fact, deductive geometry – began in ancient Greece. Developed over time since then, its results found many uses in building and road construction, mechanical engineering and even in the field of astronomy. From the nineteenth century on, with the discovery of non-Euclidean geometries, the science and art of geometry has developed sporadically. Moreover, with the concept of consistent geometries of higher dimensionality than three, suddenly all statements of visual geometrical insight could be studied more generally and accurately in the symbolic/algebraic language of analytical mathematics. But this linkage between geometry and algebra, space and symbol, form and argument, actually works in two ways. Descartes’ invention, the Cartesian coordinate system, resulted in an ‘algebraised’ geometry as well as a ‘geometrised’ algebra. This strong concept should not only be considered proof that space itself is non-physical, but also that space is able to contain all different kinds of information in one.
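The simplest example can illustrate this two-way linkage: in Cartesian coordinates, the single algebraic statement x^2 + y^2 = r^2 picks out exactly the circle of radius r around the origin. Every manipulation of the formula corresponds to a transformation of the figure, and every construction on the figure can be rewritten as algebra: the same ‘space’ carries both kinds of information at once.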
We can think of the beautiful forms that emerge from simple recursive equations into the rendered and surprisingly symmetric complex chaos called ‘fractals’. Or we can investigate the more common art of diagrams and charts, which mixes histories, geographies, the physical and the abstract and many other variables into simple interval or continuous scales. All of them, from simple bar charts through complex matrices and ‘spreadsheets’ to multi-dimensional, computer-generated visualisations of invisible physical processes, seem to exist in some kind of identical geography. This space resembles, and is borrowed from, the same piece of paper or computer screen on which we see them, although it is certainly not the same. These pictures of the natural, phenomenal world represent the first border of a continent filled with sign language and will ultimately act as the engine that produces cyberspace.
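The first of these examples can be made concrete with a short sketch in Python: the Mandelbrot set, probably the best-known fractal, emerges from nothing more than the repeated application of z → z² + c. The rendering below is a deliberately coarse character map; the resolution, region and iteration depth are arbitrary illustrative choices.

# The Mandelbrot set: fractal complexity from one simple recursive equation.
# Resolution and iteration depth are arbitrary illustrative choices.
def escape_time(c: complex, max_iter: int = 40) -> int:
    """Iterate z -> z*z + c; count the steps until |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # never escaped: c belongs to the set

# A coarse character map of the region around the origin of the complex plane.
for im in range(12, -13, -2):
    row = ""
    for re in range(-40, 21):
        n = escape_time(complex(re / 20.0, im / 10.0))
        row += "#" if n == 40 else " .:-=+*"[min(n // 6, 6)]
    print(row)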
I.6.5 Conclusion
On the one hand, cyberspace can be seen as a fact of today, an efficient term grouping a collection of online phenomena that already exist but are still developing. All the different manifestations do need a strong and clear set of underlying definitions that do not intertwine. In this view, ‘virtual reality’ is only one of the possible applications, although one of large importance, since it has to represent the wide range of experiences existing in the three-dimensional realm. On the other hand, visionary thoughts try to probe deeper and discover that something formless and undefined is rapidly developing. This form of cyberspace is still an elusive, future thing that can hardly be described sufficiently at this early stage. What if information became the very element of 3D space and time, manufactured and transferred to thousands of locations? What would the implications be if almost all existing traditional two-dimensional representations acquired some kind of competing application able to represent itself one or two dimensions higher? What is the role of architecture if this space has to be designed? Who knows; but, as someone said: “…today intellectual, tomorrow practical, one can only guess at the implications…”.
I.7 Cyberspace City38
As will become apparent in the next chapter as well, the metaphor of the city is a very powerful one to use in the hybrid structure of the electronic realm. To clarify the very notion and conception of cyberspace, it can still be useful to examine more thoroughly the fictional images that Gibson himself uses in most of his books. As the concept of cyberspace is deeply interrelated with the overall environments and powerful atmosphere of his imagined world, the city plays an important role in Gibson’s utopian view as well. This has been recognised also by Richard Skeates, who tries to explain
38 SKEATES, RICHARD, The Infinite City, in CITY, Nr.8, December 1997, pp.6-21
the ‘future alternatives of the physical city’ using the highly imaginative representations in the work of William Gibson.
Primarily, in search of the urban non-places of post-modernity, he considers the future city to be lost, finding no signs of its former identity, form or structure. Gibson in turn describes a totally man-made, constructed world, although it can certainly not be considered a ‘thing’ that is the outcome of an ‘urbanisation’ process in time. On the contrary, temporal, spatial and cultural identification becomes increasingly difficult in the context of a globally homogenised culture that is defined by consumption and deprived of any external reference. In these continuous non-places, where no one is at ‘home’ at any moment, urbanisation manifests itself in the overwhelming experiences of communications and data technologies – in one word: cyberspace. As habitable and pleasant spaces become rare and individual needs increasingly emerge, the search for new territories to urbanise is forced to shift to the digital realm. Remarkably, this dark and negative view of the city leads directly back to the beginning of this chapter and the strong, staggering vision visualised in the film Blade Runner. It is no surprise, then, that cyberspace itself is defined as ‘non-space’ and imaged as a cityscape, for the three concepts are ultimately linked.
“The more the old ordered world of modernity is represented as having changed
into a turbulent and dangerous post-modern place, the more attractive the ‘new
world’, represented as the virtual space of cyberspace, becomes an attractive
option.”39
But cyberspace is the ultimate anti-city: the city without streets, without crowds, without polluted air, without history and without particular geography. However, the myth of cyberspace seems to offer a solution to a number of urban problems. It promises an alternative to an almost uninhabitable ‘real’ world. At the same time, it revives old notions of community in the creation of democratic global networks. In its most accomplished form, it delivers the possibility of a consciousness that can roam free of its biological chains. This is obviously a realm that is far greater and richer than the physical one, even far more human-friendly than nature can offer.
However, it may not be forgotten that these claims are formulated in the context of the myth and, moreover, the metaphor. The essence of cyberspace is privacy alone: a removal of life – social, economic as well as political – from public to private needs. In short, the substitution of private space for public space, and the retreat into it.
“There were countless theories explaining why Chiba City tolerated the Ninsei enclave, but Case
tended toward the idea that the Yakuza might be preserving the place as a kind of historical park, a
reminder of humble origins. But he also saw a certain sense in the notion that burgeoning
technologies require outlaw zones, that Night City wasn’t there for its inhabitants, but as a
deliberately unsupervised playground for technology itself.”
(William Gibson, Neuromancer)
39 SKEATES, RICHARD, The Infinite City, in CITY, Nr.8, December 1997, p.15
On the other hand, idealistic and optimistic micro-spaces of opposition do try to emerge.40 These spontaneous, unplanned and organically grown manifestations of community are able to reconstitute themselves outside the surveyed and controlled urban landscape. As alternative and experimental spaces, they offer notions of refuge and escape from a world of institutional oppression and brutality. In conclusion, they can be considered the new frontier of cyberspace, where spaces have become places and the social and the physical are able to combine. Here, then, are two opposite views of the city: as a place where destructive forces erase the marks of the past, and as a place where signs of history, place and identity are still apparent.
I.8 Conclusion
Unfortunately, it can be noticed that almost all the definitions of cyberspace have been purely literary and fictional, while available images or immersive experiences might be more suitable for describing this sort of phenomenon. This is the result, of course, of the unknown future of the concept itself. Nevertheless, out of more than a dozen different definitions, two important streams of thought have been recognised. Globally, cyberspace could be described as any space that is a field for human effects through environmental interaction, restricted to that type of human effect field that is computer-mediated and has an electronic tele-effect of symbolic exchange. The phenomenon of cyberspace is not unlike technology in that respect41. Technology refers to any organisation of tool use, from the first firestones in caves to CAD microchips in Silicon Valley, but it is almost always presumed to mean ‘high’ technology of recent vintage. And also like technology, cyberspace represents a shared space of common goals, for the human world requires both collaboration and competition. It is not surprising, then, that most researchers do not want to wait for the fully developed implementation before giving their visionary opinions and most promising views of this phenomenon.
So it has to be noted that many cyberspaces, and not only the two mentioned above, are still in full development. Some have already generated interesting examples of architecturally influenced information handling, examples that will be investigated in the next chapters. However, in the future, cyberspace will probably have a specific and certain range of applications. How far this strong influence will exactly reach remains to be awaited, but possibly the next fragment can ease some minds for now:
“But only a fraction of most people’s lives is spent engaging in electronically mediated communication. The sights and sounds and, therefore, the architecture of the real world dominate consciousness, and will do so in the foreseeable future.”42
40 In Gibson’s books, these are the ultimate symbols of the accidental and chaotic re-ordering of modernity; they have names such as the Projects, the Bridge, the Ninsei enclave,… and are often situated within forgotten and unusable spaces throughout the city.
41 PHELAN, JOHN M., CyberWalden: The Inner Face of Interface, in STRATE, LANCE, JACOBSON, RONALD & GIBSON, B. STEPHANIE (Ed.), Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996, p.42
42 BENEDIKT, MICHAEL, Unreal Estates, in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.56
“Evening in the ACTLab finds bunches of young, computer-savvy men (and increasingly, women) batting the keys with abandon. As I watch them, or rather their bodies (since their selves are off in the net, simultaneously everywhere and nowhere, living out fragmentation, multiplicity, and playfulness faster than I can theorise it), I remind myself that these are the people who are writing the descriptors right here in front of me – writing the computer code that makes the phantasmatic structures of prosthetic sociality. Then they will inhabit the structures they write. These people, not the big system designers, are the architects of virtual community.”
(Allucquère Rosanne Stone - Sex, Death and Architecture, in Any, No.3, Nov/Dec, p.38)
II
Internet Cyberspace
“The implications of digital technology for a
broad range of contemporary experiences,
and certainly for architecture, need certainly
to be considered.”
(Mark C. Taylor - Electrotecture, in Any, No.3, Nov/Dec, p.14)
II.1 Introduction
In this chapter, the ‘architecture’ that is being implemented in various examples of online social environments is investigated. Furthermore, some of the urban metaphors used on the Internet are described, since these cognitive notions have proved to be an effective way to clarify the chaotic structure that is now undeniably present in the digital realm.
The architecture and urbanism used in this chapter are therefore re-imagined in the context of many observations, such as the digital communications revolution, the ongoing miniaturisation of electronics, the commodification of bits, and the growing domination of software over materialised form. Arguments are given that the task of the future does not consist of the digital plumbing of communications links and associated electronic applications, nor of the production of electronically deliverable content. Rather, the task is to imagine and create digitally mediated environments for the kinds of lives that all want to lead, in the sorts of communities that all want to have. Why? Why should this new kind of architectural and urban design be investigated? Because the emerging digital networking structures affect the access to economic opportunities and public services, the character and content of public discourse, the forms of cultural activity, the enaction of power, and the experiences that give shape to daily routines. It is necessary to understand what is under way, so that organised and intervening alternatives can be explored, and developments can be planned – in fact, ‘designed’. It is in this light that some socially as well as architecturally important online digital manifestations will be investigated.
Meantime, in the Real World…
The question can be asked why the social communication applications existing on the Internet are nowadays so commonly accepted and so increasingly successful. Next to the specific characteristics these environments possess, which will be described further in this chapter, another reason can be found in a remarkable shift in contemporary western society. The transition from an industrial age to a post-industrial or information age has been discussed so much and for so long that some might not have noticed that humankind is actually passing into a post-information1 age. In the industrial age, the concept of mass production was introduced: manufacturing with uniform and repetitious methods in any one given space and time. The same economies of scale were used in the information age, the age of the computers, but with less regard for time and space. Manufacturing could happen anywhere, at any time, moving with and following the strong global economic laws. Mass media and many industries got bigger and smaller at the same time: large international conglomerates reached larger audiences, while niche, narrowly specialised services catered to small, specific groups. But in the post-information age, the audience often consists of only one, as information and its use have become extremely personalised. The only way to satisfy the individual needs of a large number of users is through the concept of the network. This is, of course, not the only reason for the huge success this communication technology is experiencing today. But it can certainly be one of the important economic and motive-driven reasons why the actual implementation of, for instance, the online communities described in this chapter is developing so fast.
However, before many of the intrinsic aspects of the online manifestations can be understood, some of the underlying technologies characteristic of the concept of Internet cyberspace should first be described.
1 This line of argument is taken from NEGROPONTE, NICHOLAS, Being Digital, Hodder & Stoughton, London, 1995, p.243
II.2 History of the Internet
“As electrically contracted, the globe is no more than a village.” It took more than 20 years after Marshall McLuhan spoke these words in 1964 before the image of the seductive ‘global village’ became fashionable again. The technology that makes this possible is the network.2 It is the technology of communication that enables information of any type to be carried from one place to another, regardless of the distance between them. To accomplish this task, it uses electronic messages, carried by wire, optical fibre, radio and microwave.
II.2.1 The Mother of All Networks
“There is little doubt that the Internet, for all its faults, is perhaps the most
fascinating and explosive technological and social development of the twentieth
century.”3
It all started quite harmlessly in 1969 with the completion of several projects of the American Advanced Research Projects Agency, ARPA for short. The American government decided to fund an experimental electronic network that would allow information to be exchanged between (at that time huge, rare and very expensive) remote computers. The ARPANET was originally designed to allow ARPA researchers to share data, but was increasingly used to exchange messages – a development that should actually be considered the first ‘virtual’ community. During the 1970s, ARPA wanted to encourage the educational community to take advantage of its network, and some university research groups began to use its applications. Next to the typical standard communication tools we still know today, such as the file transfer protocol (ftp) and remote login technologies (TELNET), an inter-computer electronic mail system (e-mail) was implemented. One of the other main development efforts was to design the whole system in such a way that the exchange of information would not be endangered if physical sections of the network were lost. This resulted in the still existing network protocol that will be explained further on. In fact, because its electronic underpinnings are so modular, geographically dispersed, and redundant, this electronic network can be considered essentially ‘indestructible’.4
Consequently, this particular characteristic immediately attracted the interest and priority of the military services. By 1975, control was transferred to the American Department of Defence, which wanted to use the characteristics of these non-hierarchical networks for its military computer communications. Implementing this technology would mean that communications would always remain fully operational, even when considerable parts of the network were damaged – even by nuclear attack. However, soon enough the military (MILNET) and the civilian (ARPANET) networks had to be split, as traffic was growing beyond the existing capacity of the telephone lines. It would take until the 1980s, when all the networks were converted to a single standard network protocol, before ARPANET finally became the backbone of what is now called the Internet.5
2 WOOLLEY, BENJAMIN, Virtual Worlds: a Journey in Hype and Hyperreality, Blackwell Publishers, Oxford, 1992, p.125
3 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.10
4 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.110
5 TOLHURST, W., PIKE, M., BLANTON, K., Using the Internet, Special Edition, Que Corp., Indianapolis, 1994, p.33
II.2.2 The Web
The World Wide Web (WWW), also known as ‘the Web’ or ‘the Net’, has become the leading information retrieval service (also called ‘byte and packet mover’) for the Internet. This system uses the concept of ‘hypertext’ or ‘hypermedia links’ to easily retrieve and access documents in the increasingly vast amount of information present in the digital memories of the many ‘servers’ or ‘hosts’ connected to the Internet. Hyperlinks are electronic connections that allow a user to select a word or picture on a two-dimensional web page in order to access additional information related to the originally requested – ‘clicked’ – object. In this way, links can lead to other documents, images, sounds, animations, movies, and three-dimensional worlds. A hypertext document is usually written in a standardised and simple HyperText Markup Language (HTML). HTML defines the standard look and feel of information published on the Web when it is interpreted by the Web browser. The remarkable power of this concept lies in the fact that these hyperlinks are able to direct the user to other host computers, regardless of their true location, making the reach of the Internet effectively transparent. To make this possible, each online document can be electronically retrieved by its assigned unique online address, called a Uniform Resource Locator (URL).
The development of the World Wide Web was begun in 1989 by Tim Berners-Lee and his colleagues at the European particle physics laboratory CERN, in Geneva, Switzerland. These researchers created the HTTP protocol, the standard communications protocol needed for transmission between computers, and in January 1992 also programmed a text-based Web browser. In conclusion, the whole concept of the Internet can be written in one powerful formula.
WWW = URL + HTTP + HTML

URL – Uniform Resource Locator: the unique Internet address. To be understandable for the user, combinations of words are used instead of numbers. E.g. ‘http://www.machine.edu/subdir/file.html’

HTTP – HyperText Transfer Protocol: the standard communication protocol. Short messages are sent instead of using a dedicated connection the whole time. E.g. ‘get in touch’, ‘send data’,…

HTML – HyperText Markup Language: text-based description of data documents. Standard tags are embedded in the text for effects and links. E.g. <A HREF=”http://www.kuleuven.ac.be/”>click me!!</A>
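How the three parts of the formula cooperate can be sketched in a few lines of Python: the URL names the document, one short HTTP exchange transfers it, and what arrives is plain HTML text. The address is the hypothetical example from the table above, not a real host.

# A minimal sketch of WWW = URL + HTTP + HTML in practice.
# The address is the hypothetical example used above, not a real host.
from urllib.request import urlopen

url = "http://www.machine.edu/subdir/file.html"   # URL: the unique address

with urlopen(url) as response:                     # HTTP: one short request/response exchange
    html = response.read().decode("utf-8", errors="replace")

print(html[:200])                                  # HTML: text with embedded tags and links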
However, the huge success of the Internet came only after the release of Mosaic in September 1993. This program, developed by Marc Andreessen6 and others at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, was a graphic Web browser that used the same sort of ‘point-and-click’ manipulations that had been available on personal computers for some time.7
6 Andreessen would later leave the institute and co-found Netscape Communications Corp., whose Netscape Navigator rapidly became the dominant Web browser soon after its release in December 1994.
7 KURMANN, DAVID, 3D and the Web, CAAD Programmierkurs ‘97/’98, ETH-Zürich
II.2.3 Facts and Numbers
“ Program a map to display frequency of data exchange, every thousand megabytes a single pixel
on a very large screen. Manhattan and Atlanta burn solid white. Then they start to pulse, the rate of
traffic threatening to overload your simulation. Your map is about to go nova. Cool it down. Up your
scale. Each pixel a million megabytes. At a hundred million megabytes per second, you begin to
make out certain blocks in midtown Manhattan, outlines of hundred-year-old industrial parks ringing
the old core of Atlanta…”
(William Gibson – Neuromancer)
Year    Hosts8          Users          Comment
1971    23              –              ARPANET
1974    62              –              US-wide connections
1977    111             –              Universities + Email + File Transfer
1991    700.000         4 million      HTML + Mosaic + WWW
1994    2.000.000       15 million     50% is commercial use
1995    4.000.000       25 million     Sounds + Dynamic HTML + VRML + …
1996    10.000.000      50 million     Tele-presence + Web TV + Shopping + …
1998    30.000.000      300 million    –
2000    –               1 billion      –
The prediction for the year 2000 was published in 1995 by Nicholas Negroponte, director of the MIT Media Lab, who points out that: “The Internet is not North American any more. Thirty-five percent of the hosts are in the rest of the world, and that is the fast-growing part.”9 Present-day facts actually show that chances are still considerable that he foresaw this remarkable number correctly. Nevertheless, these numbers should be read carefully, as no true way exists to be sure of the exact number of users on this ‘network of networks’, and as it should be noted that most surveys are commercial-market inspired. It is no surprise, then, that numbers can vary between different sources, some even putting the number of American users twice as high as the number world-wide.
The general methods certain services and companies use to calculate the growth of the Internet are definitely open to easy criticism. Until now, two different approaches can be recognised. First, surveys can be sent to as many online-system administrators as possible. Doubts are then raised about how reliably difficult questions such as “How many network people does your organisation count?” are answered, next to the low general response rate of around 15%. Another method consists of counting the reachable (or ‘ping’-able) computers on the Internet within a fixed period, a task that can easily be processed by programmed software robots. Not only are protected sites then skipped, but only servers are counted as well, neglecting the actual people online. For instance, one host could incidentally be the online server of America Online (AOL), acting as the gateway for more than 8 million subscribers. The true density of Internet access differs as well, being higher, for instance, in affluent, computer-literate places such as university areas or near computer research parks. Nevertheless, most online services on
8 Hosts or servers are computers that store and transmit documents to other computers, which in turn are generally called clients. Servers perform these actions whenever asked by transmittable commands that are standardised by the principles described in the network protocol.
9 NEGROPONTE, NICHOLAS, Being Digital, Hodder & Stoughton, London, 1995, p.182
the Internet itself use the following simple and still rather truthful rule: ‘The Internet is
doubling every year, beginning with 3.5 million users in October 1984.’
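The second counting method described above can be sketched in a few lines of Python. This is a strong simplification: it only checks whether candidate names resolve to an address at all, and the host names are illustrative, not a real survey sample.

# A simplified sketch of the host-counting survey method: probe candidate
# hosts and count the ones that answer. Real robots probe far more
# addresses; these names are illustrative only.
import socket

candidates = ["www.kuleuven.ac.be", "www.machine.edu", "no-such-host.invalid"]

reachable = 0
for host in candidates:
    try:
        socket.gethostbyname(host)   # does the name resolve to an address?
        reachable += 1               # counts one server - not its many users!
    except OSError:
        pass                         # protected or absent hosts are skipped

print(f"{reachable} of {len(candidates)} candidate hosts counted")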
A survey in 1995 investigated the population of the Internet. Its conclusion was very explicit and has already been confirmed by many other Internet researchers. The Internet public tended overwhelmingly to be “exactly the sort of people that companies want to talk to: 30-ish, well-educated and often in exactly the sorts of high paying jobs that keep a steady flow of spendable cash sloshing into their bank accounts”. This cyberspace, in short, looked “remarkably white, middle class and well educated”; only a third of the users were women, over two-thirds had at least a university degree, and average incomes were well above average (55.000$).10
II.2.4 Bandwidth
“As bandwidth burgeons and computing muscle continues to grow, cyberspace
places will present themselves in increasingly multi-sensory and engaging
ways… We will not just look at them, we will feel present in them. We can expect
them to evolve into the elements of cyberspace construction – constituents of a
new architecture without tectonics and a new urbanism freed from the constraints
of physical space.”11
The number of bits that can be transmitted per second through a given channel (like copper wire, radio spectrum, or optical fibre) is the bandwidth of that channel. But bandwidth is actually not well understood, especially now that fibre optics makes almost infinite bandwidth possible. Since bandwidth is the capacity to move information down a given channel, most people compare it to the diameter of a pipe or to the number of lanes on a highway. By doing so, they ignore the technological ability to put more or fewer bits per second down the very same pipe. What stays constant, however, is the speed at which the electronic messages travel; the capacity itself is expressed in the technical term bps or baud.12 Research results indicate that contemporary fibre and laser technology should be able to deliver 1.000 billion bits per second, while today 1.2 to 6.0 million bps is still well suited for most existing and powerful multimedia. Meanwhile, a ‘fancy’ 38.400 baud is what modem owners of today generally use.13
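These orders of magnitude become tangible when one computes how long a single document takes to arrive at the speeds just quoted. In the small Python sketch below, the 10-megabyte file size is an arbitrary assumption; the channel rates are the figures from the text.

# Transfer time of one 10-megabyte document at the quoted channel rates.
# The file size is an arbitrary example; the rates are from the text above.
file_bits = 10 * 8 * 10**6  # 10 megabytes expressed in bits

channels = {
    "modem (38.400 baud)":               38_400,
    "multimedia link (6.0 million bps)": 6_000_000,
    "fibre/laser (1.000 billion bps)":   10**12,
}

for name, bps in channels.items():
    print(f"{name:36s} {file_bits / bps:14.5f} seconds")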
“If the value of real estate in traditional urban form is determined by location,
location, location, then the value of a network connection is determined by
bandwidth, bandwidth, bandwidth.”14
For William Mitchell, no network connection at all – zero bandwidth – makes a person an outcast from cyberspace. The digital network creates new opportunities, but exclusion from it becomes a new form of marginalisation. Accessibility is redefined. Tapping into a broadband data link is like being on ‘Main Street’, making intense interactions and fast connections possible. The ‘tyranny of distance’ is replaced by that of bandwidth, which is now considered the new economy of land use and transportation. Since the actual cost of a high-bandwidth cable connection grows with distance, certain information-intensive
10 GRAHAM, STEPHEN & AURIGI, ALESSANDRO, Urbanising Cyberspace? The Nature and Potential of the Virtual Cities Movement, in CITY, Nr.7, May 1997, p.22 (they refer to BROWNING, J., Who’s what on the Web, Wired, August, pp.33-35)
11 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.114
12 Bits per second, bps, and baud actually mean the same thing. The latter is named after Emile Baudot, the ‘Morse’ of telex.
13 NEGROPONTE, NICHOLAS, Being Digital, Hodder & Stoughton, London, 1995, p.16
14 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.17
areas are developing around high-capacity data sources. Universities and so-called tele-ports or tele-cottages, which all try to concentrate powerful communications equipment, are recognised as new growth poles for many economically important industries.
II.2.5 Network Protocol
One major application that resulted from the development of the electronic network was a new and revolutionary type of communication protocol. A network protocol is a formal set of rules that computers connected to the network use to ‘talk’ to one another. In the case of the Internet, computers use the technology of packet switching. Rather than sending a large block of data over a dedicated line directly to the destination computer, this system breaks the data into small chunks. Each chunk is then sent along a common transmission line in a ‘packet’ that also contains source and destination information. Each packet finds its own separate way through the network, passing many different ‘routing’ computers at the network’s nodes. It will keep seeking its destination until the destination computer is reached, and it is able to re-route itself around damaged parts. Moreover, since copies of digital data are absolutely exact replicas of the originals, originals are ‘allowed’ to get lost or destroyed. On arrival, the packet information is stripped, and the data from all the chunks are reassembled into the original data.15 This protocol has the advantage that any number of computers can share the same communication network, while the system can contain different connection types and different transmission capacities. The standard Internet protocol of today is called TCP/IP or ‘Transmission Control Protocol/Internet Protocol’.
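The essence of packet switching can be sketched in a few lines of Python – a toy model only, as the real TCP/IP machinery of acknowledgements, routing and checksums is far richer. Data is broken into chunks, each chunk is wrapped with source, destination and sequence information, the packets may arrive in any order, and the original is reassembled at the destination.

# A toy model of packet switching - not the real TCP/IP machinery.
import random

def to_packets(data: bytes, src: str, dst: str, chunk: int = 8):
    """Break data into chunks and wrap each in a packet with address info."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i:i + chunk]}
        for i in range(0, len(data), chunk)
    ]

def reassemble(packets) -> bytes:
    """Strip the packet information and join the chunks in original order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"A consensual hallucination experienced daily..."
packets = to_packets(message, src="1.2.3.4", dst="5.6.7.8")
random.shuffle(packets)   # each packet finds its own way through the network

assert reassemble(packets) == message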
II.3 Hypertext
Asked about the original use of the word cyberspace, William Gibson once answered that he actually meant to suggest: “The point at which media (flow) together and surround us. It’s the ultimate extension of the exclusion of daily life. With cyberspace as I describe it you can literally wrap yourself in media and not have to see what’s really going on around you.”16 The media are continuously developing at an unbelievably rapid pace, but the one growing most rapidly is definitely the cyberspace of the Internet. ‘Internet cyberspace’ or ‘pre-cyberspace’ is the name often given to the now existing large group of online phenomena. It is obviously a less ‘physical’ spatiality than other cyberspaces, investigated later, at least pretend to be17. Furthermore, the first part ‘cyber’ of the word ‘cyberspace’ would seem to imply a controlled space, and yet the contemporary electronic landscape is commonly characterised as decentralised, bottom-up, and anarchistic. In the investigation of some cyberspace environments, this essential denial of any hierarchical structure will definitely result in important design characteristics and constraints.
II.3.1 Navigating through Cyberspace
In cyberspace, physical paths are replaced by logical links. Sometimes, looking at the
contemporary graphic interfaces, the requested places are nested to form a strict
hierarchy: you go down a level by clicking a folder icon, which opens another ‘window’
with a ‘view‘ into that place. Alternatively, the circulation system may be more freeform:
when wandering through a sort of three-dimensional labyrinth, symbols – not
15 TOLHURST, W., PIKE, M., BLANTON, K., Using the Internet, Special Edition, Que Corp., Indianapolis, 1994, p.31
16 WOOLLEY, BENJAMIN, Virtual Worlds: a Journey in Hype and Hyperreality, Blackwell Publishers, Oxford, 1992, p.122 (he refers to: Interview with the author, Late Show, BBC2, 26 September 1990)
17 This refers back to the visionary Benediktine cyberspace and other immersive applications.
necessarily doors or gateways – may provide clickable entry points to any number of
other places.
“Click, click through cyberspace; this is the new architectural promenade.”18
Despite the fact that the system is in essence uncontrolled, some forms of structure within the Internet can still be recognised. The electronic information space consists of an incredible amount of free-text or structured databases, hypertext documents, and knowledge bases. Exploring and retrieving useful and meaningful information in this resulting chaotic field can thus be considered quite a task, incomparable by its very nature. To fully understand the concept of hypertext, through which most of the navigation of Internet cyberspace is still being done, seven concepts are proposed that are commonly used when hypermedia documents of all kinds are made. In this way, they can also be considered important design principles for presenting and structuring online information. They try to cover a broad range of navigation tools and techniques, from the appropriate structuring of information to the application of artificial intelligence techniques.19
Concept               Description                                     Example
1. Linking            Global linking structure of a document          Hyperlink
2. Searching          Mechanisms for full-text search                 Full-text search
3. Sequentialisation  Mechanism for sequentially visiting selected    Path
                      locations within the hyper-documents
4. Hierarchy          Hierarchical table of contents                  Table of contents
5. Similarity         Connection between not-yet-linked but           Index
                      semantically related nodes
6. Mapping            Graphical visualisation of contents of the      Overview map
                      hyper-document
7. Agents             Mechanisms to execute complex tasks on          Shopping agent
                      behalf of the user
1. The linking structure is the best-known and most remarkable feature of a hypertext document. Links allow direct access to a designated location within the information space through markers that are embedded within the documents. Two types can be distinguished: normal static links and program-generated dynamic links, which are created automatically upon a certain action.
18 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.24
19 The table is based on GLOOR, PETER, Elements of Hypermedia Design: Techniques for Navigation and Visualisation in Cyberspace, Birkhäuser, Berlin
2. Searching capabilities imply locating information in a specific range of stored possibilities. Generally, only full-text searches are processed, but systems exist that provide additional databases. The best-known examples of this principle are the so-called search engines such as ‘Yahoo!’ or ‘Lycos’.20
3. N-dimensional hypertext documents can be reduced to one sequential path for a guided tour or a developing story, when the user is not allowed to skip information or pursue a personal approach throughout the structure.
4. Documents that possess a hierarchical structure are the ones best understood by humans; almost all books are organised this way. On some sites on the Internet, this structure is made visible to the users so that they can use it as their principal navigation aid.
5. Similarity links connect nodes that have similar contents but might not yet be linked. An index is a very simple example of this concept, as identical entries might prove some sort of equality. More complex tools are based upon the assumption that a system has knowledge not only about the document structure, but also about the – semantic – contents of the information contained in the document. It is not surprising that systems of this category have not developed beyond the state of early prototypes.
6. Mapping is a simple and strong technique to visually structure webs of information. Similar to real maps, these graphic maps show the overall context, so that users know where they are and where they can go from there. Mapping is orthogonal to the previous concepts, which means that maps can be used to visualise them.
7. Guides and agents are popular not only for navigation, but also for many other electronic areas. The system incorporates some form of artificial intelligence so that it can help the human user with any of the other six concepts to retrieve information. Agents can be implemented in several forms, ranging from simple, hardwired guides to rule-based systems able to react flexibly to the different needs of different users.
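Three of these seven concepts – linking (1), searching (2) and mapping (6) – can be illustrated with a toy hyper-document in a few lines of Python. The page names and contents are invented for the example and stand for no particular system.

# A toy hyper-document illustrating linking (1), searching (2), mapping (6).
# Page names and contents are invented for the example.
pages = {
    "home": {"text": "Welcome to cyberspace.",           "links": ["city", "mud"]},
    "city": {"text": "The city metaphor online.",        "links": ["home"]},
    "mud":  {"text": "Multi-user dungeons in the city.", "links": ["home", "city"]},
}

def follow(page: str, n: int) -> str:
    """1. Linking: follow the n-th marker embedded in a document."""
    return pages[page]["links"][n]

def search(term: str) -> list:
    """2. Searching: a full-text search over all nodes."""
    return [name for name, p in pages.items() if term.lower() in p["text"].lower()]

def overview() -> None:
    """6. Mapping: a crude overview map of the whole linking structure."""
    for name, p in pages.items():
        print(f"{name:4s} -> {', '.join(p['links'])}")

print(follow("home", 0))   # 'city'
print(search("city"))      # ['city', 'mud']
overview()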
II.3.2 Consequences
A hypertext is not a closed work but an open fabric of heterogeneous traces and associations that are in a process of constant revision and supplementation.21 The structure of a hypertext is thus certainly not fixed, but shifting and always mobile, even time-dependent. Branching options multiply, menus reproduce, windows open other windows, and screens display other screens, leaving and following heterogeneous traces and associations. Instead of an organic whole, a hypertext is like a collage, a texture whose meaning is unstable and whose boundaries are constantly changing. There is no clearly defined, pre-established path through the layers of a hypertext. And although the network is shared, the course each individual follows is different. Thus, no hypertext is the product of a single author who is its creative origin or architect. All authorship becomes joint authorship, and all production becomes co-production.22 In conclusion, hypertextual space displays and evokes an alternative architecture.23 The Internet cyberspace is a complicated and evolving structure. To overcome these essential drawbacks, which eventually could hold back further development, some voices
20 See the next chapters for more information about search engines.
21 TAYLOR, MARK C., De-signing the Simcit, in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.16
22 The problem of perfect duplication and authorship cannot be neglected, and much research is now being done on the protection of digital data available on the Internet.
23 TAYLOR, MARK C. & SAARINEN, ESA, Imagologies: Media Philosophy, Routledge, London, 1994, p.‘Telewriting 6’
foresee some kind of overall structure that is able to translate machine space into
cognitively understandable physical space in the near future.
II.4 Virtual Communities
II.4.1 Cybercity
It should not be forgotten that the conceptualisation of cyberspace was purely fictional. However, it should also be noted that science fiction is a genre that has long participated in the representation of the city. Consequently, some people do not see cyberspace only as a new social world or as a way to design multiple possibilities of immersive data representations. These visionary authors are actually able to converge William Gibson’s original imaginary structures, skyscrapers, colourful grids, and geometric forms into a form of future urban environment, considering navigation through the ‘hyper’-context of cyberspace as ‘going across a city’.24 Some authors in fact predict that in this city many human activities will be replaced or augmented by existing and upcoming electronic technologies – although not all of them necessarily foresee the future changing very drastically.
“I would hardly expect cyberspace to replace or even revolutionise the very
human aspects of such ‘meatspace’ (the human dimension outside cyberspace)
activities as dating, partying, going to a dentist or doctor, taking a vacation,
churchgoing, engaging in team sports, socialising, gardening, offering charitable
service, dining out, farming, sharing the holiday season with family, shopping at
malls, and so on. Cyberspace is instead, suitable for communicating, finding
information, learning, sharing, purchasing, researching, reading, writing,
publishing, and so on.”25
“The network is the urban site before us, an invitation to design and construct
Cybercity, capital of the 21st century,” William Mitchell points out. Classical categories
will turn inside out and change the discourse in which architects have engaged from
classical times until now. The city awaiting us will be ‘unrooted’ to any spot on the surface of the earth. It will be shaped by connectivity and bandwidth constraints rather than by accessibility and land values.
instead of physically from stones and timbers, and they will be connected not by doors,
passageways, and streets, but by logical linkages.”26
“Is the net a city without walls or do walls merely take new forms?”
(Mark C. Taylor/Esa Saarinen - Imagologies)
For Mitchell, physical, spatial cities are not only the materialised results of processes
that cause maximum accessibility and promote face-to-face interaction. Their
subdivisions as districts and neighbourhoods are legal parts that control and organise
access and in doing so, define a place. People entering this place - as an owner, guest,
visitor, tourist, trespasser, intruder, or invader – are making a symbolic, social, and legal
act. This concept can be recognised on the Web as well, but then ‘the game’ gets a new
set of rules. Structures of access and exclusion are reconstructed in entirely nonarchitectural27 terms, and places can be entered and exited not by physical travel, but
by simply establishing and breaking logical linkages.
24 BUKATMAN, SCOTT, at the IN ANY Event, published in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.48
25 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.316
26 MITCHELL, WILLIAM, The Electronic Agora, in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.33
27 If architecture is considered as materially constructed form.
II.4.2 Places in Cyber-‘Space’
Places in cyberspace are actually always software constructions. It can be argued that
these pieces of software create environments of interaction, virtual realms that humans
can enter. Basic examples of this point of view can be surprisingly simple and well
known: a text window provided by a word processor, the drawing surface or space
within a CAAD system. Even the ‘desktops’ and ‘file folders’ used by most
contemporary operating systems, the ‘mailboxes’ and the ‘bulletin boards’ can be
considered as the representation of a certain space. Just like architectural and urban
places, all these manifestations have characteristic appearances, and the interactions
within are controlled by certain rules. In short, software can represent a one-dimensional place in a screen-displayed text; a two-dimensional place to put things on, such as a ‘desktop’ surface; a three-dimensional virtual room, library, or museum; and even an N-dimensional place in an abstract data structure. Sharing a virtual space is of course not necessarily equal to sharing physical space. Some electronic applications are meant to be occupied by one person, others by groups or even whole communities. Think of shared electronic calendars, simultaneously accessed CAD files, or virtual chat and conference rooms. The crucial factor is almost always simultaneous access to the same information, regardless of the physical place where the action is performed. Shared places can be represented in several ways, ranging from simple text-based interfaces to immersive, multi-sensory virtual reality.
Most shared places on the Internet are still in the phase of chat boxes, generally announcing themselves by descriptive or allusive names. People seek out their personal interests among Teen Chat, Thirtysomething, Pet Chat, Starfleet Academy, Gay and Lesbian, and enter and leave these ‘rooms’ whenever they please. In contrast to more traditional meeting places, the point is not just to ‘be there’, but to ‘present’ yourself and to interact with others. This is usually done by typing text or coded symbols, or even by activating standard computer-animated ‘body doubles’.
Many places on the Internet are public and offer uncontrolled access, just like streets and squares. Others are private, like real houses, and need a key, generally a password, to prove your identity. And sometimes, as in cinemas, you even have to pay to get in.
II.4.3 MUD
Throughout time, architecture has been the creative response to the internal developments of social structures under population and resource pressure. In this context it might be interesting to compare the social and technological structure of today’s virtual communities with the possibilities of an architectural language relevant to cyberspace. Whether sitting at a keyboard or mounting a headset and bodysuit, entering cyberspace is an involving experience. Many different experiences are possible in the electronic realm, and one of them is the MUD or Multi-User Dungeon, derived from the role-playing game ‘Dungeons and Dragons’. Roy Trubshaw programmed the first MUD back in 1979.28 Besides the original meaning, MUDs are now also often referred to by more general terms, such as Multi-User Dimensions, Multi-User Simulated Environments (MUSE), or, when Object Oriented, MOOs.
Of all the cyberspaces currently available online, none tries harder to generate a sense of space than the MUD. MUDs are computer-based role-playing games in which the players are confronted with a well-defined multi-user space. In this space, players create and constantly redefine the fundamental reality in which they interact, contrary to the entirely pre-programmed environments known from normal computer games. Currently, most MUDs are text-based and thus rely solely on a written interface to describe and interact with the world. As graphic-based MUDs such as the one called ‘Habitat’ also demonstrate, cyberspace is defined more by its interactions than by the actual technology that implements it. Even text-based MUDs, with their ‘primitive’ interface, have proven to be very addictive for some people.29
28 Maia Engeli refers to: http://www.utopia.com/talent/lpb/muddex/ (outdated link)
II.4.4 Origin
Physically, MUDs exist as lines of code on computer hard drives. They can be accessed via modem over the Internet or through private online services. Newcomers, the so-called ‘newbies’, can log in, select a name for their character, provide a description, and wander off to explore. Many architectural components are present, as normal MUD environments are experienced from the inside out.30 The user’s walk normally begins in a certain central space, from which the gradual representation of the different rooms unfolds. Players can move around by typing commands like “go east” or “climb stairs”. Whenever they enter a room, a written description is given that mentions the prominent features and the obvious exits. Sometimes, users have to ask fantasy characters who are present, often disguised as ‘software robots’, for certain objects or important information. These actions are performed by typing a command like “Go southwards” or “Wave at Mouse and tell him hello”. When a player enters a room or performs any action, this event is announced to all the other players present, so that the screen might read: “Matilda enters the room and winks happily to Flupper”. Most games allow players to ‘talk’, which can be heard by everyone in the room, and to ‘whisper’, so that two players can converse with each other more privately.
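To make this interaction model concrete, the following minimal sketch shows in Python how a text-based MUD of this kind might dispatch typed commands; the room names, verbs, and data structures are illustrative assumptions, not the code of any actual MUD.

# Minimal sketch of text-MUD command handling (illustrative, not a real MUD).
rooms = {
    "hall": {"desc": "A draughty hall. Obvious exits: east.", "east": "library"},
    "library": {"desc": "Shelves of dusty tomes. Obvious exits: west.", "west": "hall"},
}
players = {}  # player name -> key of the room the player is currently in

def broadcast(room, message):
    """Announce an event to every player present in the room."""
    for name, location in players.items():
        if location == room:
            print(f"[to {name}] {message}")

def command(name, line):
    """Parse one typed command, update the world, and notify onlookers."""
    here = players[name]
    verb, _, rest = line.partition(" ")
    if verb == "go" and rest in rooms[here]:            # e.g. "go east"
        players[name] = rooms[here][rest]
        broadcast(here, f"{name} leaves {rest}wards.")
        broadcast(players[name], f"{name} enters the room.")
        print(f"[to {name}] {rooms[players[name]]['desc']}")
    elif verb == "say":                                 # heard by the whole room
        broadcast(here, f"{name} says: {rest}")
    elif verb == "whisper":                             # private, for one player only
        target, _, text = rest.partition(" ")
        print(f"[to {target}] {name} whispers: {text}")
    else:
        print(f"[to {name}] You cannot do that here.")

players["Matilda"] = "hall"
players["Flupper"] = "hall"
command("Matilda", "go east")      # Flupper sees: "Matilda leaves eastwards."
command("Flupper", "say hello")    # heard by everyone still in the hall

The essential point of the sketch is the broadcast: every action a player performs is replayed as text to everyone sharing the same room, which is what turns a database of room descriptions into a social space.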
There are currently more than 300 MUDs in existence worldwide (numbers as of 1993), with a regular base of players in the tens of thousands. Most early MUDs were designed around fantasy themes, although the connection with the Internet has brought an important diversification. MUDs can thus be based upon existing buildings or animals, promote collaborative work between children, or, more recently, be intended for professionals in a certain field and act as a place where social interaction and informational work go hand in hand. Examples range from virtual research centres for the international astronomy community to imaginary adventure games set in a post-nuclear holocaust. The appeal of MUDs as a form of entertainment and a forum of interaction can be traced to three characteristics.
The Appeal of MUDs31
§ The game involves multiple players who interact directly.
§ The concept of role-playing promotes free and creative expression.
§ MUDs are computer mediated, which creates a dynamic, responsive, and
immersive space in which to play.
To fully understand the concept of MUDs, one has to understand the crucial role played by the medium in which the environment is created: the technical possibilities of the computer and the immersive cyberspace it establishes. The more experience a player gains, the more power accrues to his character. By these technological means, players who have gradually grown into ‘builders’ are able to perform some degree of computer programming themselves within the game. This makes it possible to change the game - its functioning as well as its design - during play itself. This is an essential difference from a high-context gaming environment, in which rules can only be changed after a consensual agreement, and often only after the approval of some kind of organisational body. This possibility of change implemented by the users themselves results in many very different kinds of unpredictable events and virtual conflicts.
29 BEAUBIEN, P. MICHAEL, Playing at Community: Multi-User Dungeons and Social Interaction in Cyberspace, in STRATE, LANCE, JACOBSON, RONALD & GIBSON, B. STEPHANIE (Ed.), Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996, p.180
30 ENGELI, MAIA, MUD: Text als Baumaterial, in SCHMITT, GERHARD, Architektur mit dem Computer, Vieweg, Wiesbaden, 1996, p.42
31 BEAUBIEN, P. MICHAEL, Playing at Community: Multi-User Dungeons and Social Interaction in Cyberspace, in STRATE, LANCE, JACOBSON, RONALD & GIBSON, B. STEPHANIE (Ed.), Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996, p.181
It is obvious that knowing the programming language also means possessing power, since not all possible programmable applications of a game are foreseeable. Examples are known in which characters robbed money, textually ‘raped’ other individuals (who were hundreds of kilometres away) or stole parts of the virtual bodies of other unaware users. These events logically resulted in some form of democratic decision-making in the cyberspace community, which made it possible to collectively reject such characters and actions when necessary.
Regular players put a lot of time into building their game characters, and thus put a lot of themselves into the characters as well. So it is only understandable that the more skilled they become, the larger the personal stake they have in maintaining the character, even if the world it exists in is made up of text. Deep personal and emotional involvement in the game therefore makes the situation more confusing, as players start to blur the boundary between the game and reality. Meanwhile they do not want to violate the unwritten rules of the virtual world, which they compare to the real, physical ones. All these facts result in a rather inexperienced collective world, which is sometimes unprepared for some of the sudden negative events that occur. Moreover, these manifestations show the importance of programming as the privileged language of cyberspace, able to construct reality rather than merely reflect it. This is the difference between creation and utilisation. And it is the tension between these two levels that is at the root of many social problems in the MUDs.
“Cyberspace architects will benefit from the study of the principles of sociology
and economics as much as from the principles of computer science. We
advocate an agoric, evolutionary approach to world building rather than a
centralised, socialistic one.”32
Figure II-1 A typical walk through a MUD-environment.
(http://www.lysator.liu.se/nanny/misc/enterpage.html)
32 MORNINGSTAR, CHIP & FARMER, R. RANDALL, The Lessons of Lucasfilm’s Habitat, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.298
Comment
As the appearance of space is represented only by text, it is obvious that design is replaced by programming. After obtaining certain objectives, experience points and essential objects, the users’ characters gradually receive higher levels of power and control. The highest level in any normal MUD is generally called ‘the Wizard’. Many Wizards can be present in a MUD, all able to program new and yet unexplored areas of the world. Good design is replaced by good coding skills in LPC - a C-like computer language - and by the avoidance of standard available templates. Normal MUDs can contain over 20.000 different rooms or areas, where the continuous presence of 40 to 80 characters is considered normal.33
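The defining feature described in this comment - that the world can be extended from inside, while it runs - can be sketched as follows. The fragment below uses Python rather than LPC, and its class and room names are invented for illustration; it is a minimal sketch of the idea, not actual MUD code.

# Sketch: a 'builder' extends the live world without restarting it.
class Room:
    """One area of the world: a textual description plus exits to other rooms."""
    def __init__(self, desc):
        self.desc = desc
        self.exits = {}            # direction -> Room

# The world as it is currently running.
hall = Room("A draughty hall.")
library = Room("Shelves of dusty tomes.")
hall.exits["east"] = library
library.exits["west"] = hall

# A Wizard 'programs' a new, as yet unexplored area into the live world:
# no recompilation, just new objects linked into the existing room graph.
tower = Room("A spiral stair climbs into darkness.")
library.exits["up"] = tower
tower.exits["down"] = library

# Any player typing "go up" in the library now reaches the tower.
print(hall.exits["east"].exits["up"].desc)

In an LPC-based MUD the same move is made by loading a newly written object file into the running game, which is exactly why good coding, rather than drawing, is the design skill that counts here.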
II.4.5 Habitat34
It is important to know that the following ‘Habitat’ example was actually a commercial application, and consequently customers had to pay to use its services. This explains why the basic code was kept beyond the reach of non-authorised people, with the result that additional programming on the part of the players was entirely impossible.
Lucasfilm’s Habitat project (1986) was one of the first attempts to create a very large-scale, commercial, many-user, graphical virtual environment. It first struggled with
questions of how to represent space and social interactions, and the developers had no
clue in the beginning what the structure of the result would become. The project was so
ambitious that it had to contain and support a population of thousands of users in a
single shared cyberspace. In this online simulated world users could interact and
communicate in real-time with other users, which would make many social,
electronically translated activities possible. Some of the experiences the designers had
during the development and implementation of the Habitat system are considered to be
useful for the so-called “cyberspace architects” of the future.
Many of the ‘lessons’ that the developers wanted to share are based upon the interactions among the users rather than upon the technology that was used. It has to be noted that although this environment was clearly low-context and relatively open, it proved sufficient to create an almost fully involving experience. Instead of concentrating on the upcoming interface technologies - such as DataGloves, head-mounted displays and special-purpose rendering engines - the core of Habitat’s vision is the idea that cyberspace is necessarily a many-participant environment. As the concept of Habitat is that of a ‘many-player online virtual environment’ whose purpose was to be an entertainment medium, users will be called ‘players’. As the following paragraphs will clearly indicate, the Habitat concept was mainly inspired by role-playing games, but it undoubtedly included influences from cyberpunk and early forms of object-oriented programming as well.
What is Habitat?
A very restricting, but also remarkable fact is that Habitat could be played on a Commodore 64 computer.35 When logged on, the computer display shows a graphical and animated representation of the environment. Besides the various objects, the view also shows the user himself. This animated figure is called an ‘Avatar’, and avatars are generally humanoid in appearance.36 Avatars can move around and have the ability to manipulate objects, while talking to another avatar is accomplished by typing on the keyboard. The typed text appears over the avatar’s head in a cartoon-style word balloon. The whole Habitat world is geographically made up of 20.000 discrete locations called ‘regions’. These regions can be accessed by means of doors, while avatars are also able to carry all kinds of different objects that some of the regions contain.
33 Online interview with a Wizard called ‘Reece’ inside a Swedish NannyMUD (http://www.lysator.liu.se/nanny/misc/enterpage.html)
34 MORNINGSTAR, CHIP & FARMER, R. RANDALL, The Lessons of Lucasfilm’s Habitat, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.273
35 This was, of course, driven by a commercial fact: the Commodore 64 was the leader of the recreational computing market at that moment.
By 1991, this system had been successfully in operation for over three years, sustaining a population of over 15.000 participants. It has since developed into a technically more advanced version, called ‘Fujitsu Habitat’, which primarily exploits the higher graphical and computational capabilities of the Fujitsu FM Towns personal computer. In 1993, this community had approximately a million and a half people in it.37 Meanwhile the company has transformed into WorldsAway, which claims to specialise in designing all kinds of graphic virtual worlds.38
Figure II-2 Overall map and screenshot from Habitat’s daughter Dreamscape (note the inspiration from Le Corbusier’s Chapel of Notre Dame at Ronchamp in the building in the middle and lower part of the map). (http://www.worldsaway.com/home.shtml)
The Lessons
Some of the lessons the first Habitat developers learned are interesting in the light of the future design and programming of virtual environments, and of the implementation of various kinds of cyberspaces.
1. The idea of a many-user environment is central to cyberspace.
The Habitat programmers deeply believe people seek richness, complexity, and depth in any virtual world. It is obvious that with today’s science and technology, no automaton can be built that approaches the complexity of a human being, let alone a whole society. Instead of trying to create this kind of environment artificially, the goal was changed to using the computational medium itself to augment the communication channels between people, taking into account the next principle.
36 The developers were at first not ‘really happy’ with only two genders. But apart from the problem of how to represent other, non-binary genders, the programmers’ rather conservative client was not interested in this. On the other hand, changing one’s own gender in this world is no problem at all. In short, for anthropologists this world is obviously a real treasure trove for numerous social experiments.
37 These numbers come from the IN ANY Event, words of STONE, ALLUCQUERE, in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.44
38 As Habitat II (http://gmsnet.or.jp/habitat2) is presented entirely in the Japanese language, the Dreamscape example is used for the graphic figure in this paragraph (http://www.worldsaway.com/home.shtml).
2. Communications bandwidth is a scarce resource.
Although in full development at this moment, bandwidth remains a most difficult technological constraint. The underlying logic is that of the economic law of supply and demand. Communication technology advances simultaneously with computational technology, resulting in an unwinnable race that continuously feeds the ‘hot research’ into sophisticated data compression techniques. At the time of the Habitat implementation, the design had to deliver a satisfactory experience to the player over a 300-baud serial telephone connection. To overcome this data transmission problem, computer scientists normally organise the system in terms of primitive elements that can be easily manipulated within the context of a simple formal model. Typically, they adopt a small variety of simple and compact primitives, which are then combined in large numbers to represent an intended complex object.
For a graphics-oriented cyberspace environment, the temptation to let the overwhelming display technology use images, polygons or other graphic primitives for representation and interaction is considerable. This data-intensive technique is, however, surely an open invitation to a huge programming disaster. Consequently, in the case of the Habitat environment a relatively abstract and high-level description was chosen. This concept could be represented compactly, as it dealt only with the communication of human behaviours. This fundamental choice leads to the third principle.
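The scale of this constraint is easy to check with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a 300-baud line moving roughly 30 bytes per second, and a modest Commodore-era screen), not published Habitat specifications.

# Rough arithmetic on the 300-baud constraint (illustrative assumptions).
bytes_per_second = 300 // 10            # ~30 bytes/s on a 300-baud serial line (8N1)

# Shipping the presentation: one uncompressed 320 x 200 screen at 4 bits per pixel.
frame_bytes = 320 * 200 * 4 // 8        # 32,000 bytes
print(frame_bytes / bytes_per_second)   # ~1067 seconds: almost 18 minutes per frame

# Shipping the behaviour instead: a compact high-level message,
# e.g. opcode WALK, avatar 12, destination (40, 60).
message = bytes([1, 12, 40, 60])
print(len(message) / bytes_per_second)  # a small fraction of a second

Whatever the exact encoding, the ratio is in the thousands, which is why a compact, high-level description was the only workable choice.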
3. An object-oriented data representation is essential.
Object-oriented models of, for instance, polygons would surely be possible. But to avoid any fundamental problem, the basic objects from which the system is built should correspond more or less to the objects in the user’s conceptual model of the virtual world, that is, people, places, and artefacts. This approach enables the communication between machines to take place at the behavioural level (what people and things are doing), rather than at the presentation level (how the scene is changing). Consequently, a place should be described in terms of what is there instead of how it looks, while interactions should be translated into functional models instead of physical ones. The interpretation of this high-level, conceptual representation is then necessarily computed locally on the user’s computer. All this results in the fourth principle.
4. The implementation platform is relatively unimportant.
Defining a cyberspace in terms of the configuration and behaviour of its objects enabled the project to span the vast range of computational and display capabilities among the participants in a system. For instance, a tree might be represented to one player as “There is a tree here”, which closely resembles most text adventure games and conventional MUDs. For another user, who possesses a powerful processor, the tree could be generated by rendering a three-dimensional fractal model in high resolution, including an animated view of the branches waving softly in the wind. And, of course, these two players might be looking at the same tree in the same place and actually even be talking to each other.
This design approach has two consequences. First, it means that effective cyberspaces
can be built today. Secondly, with all these previous principles in mind, systems can be
made for today’s technology and be easily adapted when tomorrow’s technology further
develops.
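Taken together, principles 3 and 4 (and, in anticipation, principle 5 below) amount to one idea: machines exchange what an object is and does, and each client decides locally how to present it. The following minimal sketch illustrates that separation; the message fields and renderer classes are invented for the example and do not reproduce Habitat’s actual protocol.

# Sketch of behaviour-level messaging with client-dependent presentation.
message = {"object": "tree", "id": 17, "state": "swaying"}   # what is there, and what it does

class TextClient:
    """A low-capability client, comparable to a text MUD."""
    def render(self, msg):
        return f"There is a {msg['object']} here."

class GraphicalClient:
    """A high-capability client with a powerful processor."""
    def render(self, msg):
        # A real client might render an animated 3D fractal model here;
        # the point is only that presentation is decided locally.
        return f"<drawing animated 3D {msg['object']} #{msg['id']}, {msg['state']}>"

# Both players receive the same behavioural message for the same tree,
# yet each sees it according to the capabilities of their own platform.
for client in (TextClient(), GraphicalClient()):
    print(client.render(message))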
5. Data communication standards are vital.
Rather than concentrating on the mechanisms of data transport protocols, this principle focuses attention on the data itself that is being transported. More precisely, this protocol should be able to communicate behaviour rather than representation between different objects and from one system to another. In a future, more dynamic and developed model this problem will become even more important, as it is considered impractical to distribute a new release of the system every time one wants to add a new class of object.
Beyond the technological side of the Habitat implementation, the developers encountered many problems concerning the creation and management of the world, which after all consists mainly of a group of digitally interacting human beings. More precisely, the design of Habitat’s world itself, as it tries to effectively represent ‘space’, seems interesting to investigate from an architectural point of view. The authors clarify their design choices under the following, perhaps controversial, assertion.
6. Detailed central planning is impossible; don’t even try.
The original specification for Habitat called for a world capable of supporting a population of 20.000 avatars, with a possible expansion to 50.000. For all these characters, it was necessary to organise 20.000 ‘houses’, situated in towns and cities with associated traffic arteries and shopping and recreational areas. Wilderness areas were placed so that everyone would not be jammed together in the same place. Most of all, all these people had to have some kind of activities, and they needed interesting places to visit and things to do at those places as well. Since it was not possible for all the avatars to be present in one place at the same time, the number of interesting places had to be sufficient. It is also clear that each one of those places - houses, towns, roads, shops, forests, theatres, arenas, and many others - had to be designed separately as a distinct entity.
To solve this problem, the programmers made a set of tools to aid the generation of areas that naturally possess a high degree of regularity and structure, such as apartment buildings and road networks. But places like forests, whose value lies more in their uniqueness, or at least in their differentiation from the places around them, needed a different approach. These areas actually needed to be crafted by hand, which is inevitably an incredibly labour-intensive and time-consuming process. Furthermore, Lucasfilm’s Habitat developers (although experienced in creating original graphical entertainment) emphasise that “even very imaginative people are limited in the range of variation that they can produce, especially if they are working in a virgin environment uninfluenced by the works and reactions of other designs.”39
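A tool of the kind described here is, in essence, a small procedural generator: regular urban fabric such as apartment blocks can be stamped out by rule, while a unique place still has to be written by hand. The sketch below illustrates this contrast; all names and descriptions are invented for the example.

# Sketch: regularity is cheap to generate, uniqueness is hand-crafted.
def generate_block(town, street, number):
    """Stamp out one of many near-identical apartment regions."""
    return {"name": f"{number} {street}, {town}",
            "desc": "An apartment building much like its neighbours.",
            "exits": {"out": f"{street}, {town}"}}

# A hundred regions from one rule...
regions = [generate_block("Cyberville", "Elm Street", n) for n in range(1, 101)]
print(len(regions), "-", regions[0]["name"])

# ...whereas a forest, whose value lies in its differentiation from its
# surroundings, must be authored as a distinct entity:
regions.append({"name": "The Whispering Wood",
                "desc": "No two clearings here resemble each other.",
                "exits": {}})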
Comment
Following Allucquère Stone’s description, the Fujitsu version wanted to provide “a little bit of some part of culture, from all over the world”. The overall design varies from a standardised European village to a typical resort park. Looked at in more detail, the world also consists of magical forests, Easter Island statues, pyramids, ruins, shrines, and cactuses, completed with dinosaurs and penguins. There is also night and day in the Habitat world.40 It is remarkable that these architectural spaces are a rather nostalgic creation. There is no sign that the designers even thought of the other aspects of architecture that are possible in a virtual environment.
39 MORNINGSTAR, CHIP & FARMER, R. RANDALL, The Lessons of Lucasfilm’s Habitat, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.287
40 STONE, ALLUCQUERE, at the IN ANY Event, published in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.44
These design shortcomings would be more visible and noticeable if the participants did not all have different goals, interests, motivations, and types of behaviour. Normal computer adventure games, although interactive and often containing a multi-player mode, are actually experienced individually. They are characteristically repeated innumerable times, at different moments, across a population of thousands of players, while actions are restricted to the goals the programmers have provided. This is an important distinction from cyberspace worlds such as Habitat, which are conceived as open-ended and derive a great deal of their appeal from the interaction with numerous simultaneously present fellow players. The Habitat programmers foresaw the possibility of various experiences, ranging from events with established rules and goals (like a treasure hunt), to activities driven by the players’ personal motivations (starting a business, running the newspaper), to completely free-form, purely existential activities (going out and conversing with friends). As the programmers themselves had to provide some degree of planning and set-up to make all this possible, they were still thinking much like ordinary game designers. However, many unexpected experiences revealed to the programmers that they could never predict the outcome of their ‘controlling’ actions, which were almost always based on certain initial predictive assumptions.
One of the examples concerned a labour-intensive treasure hunt of which only one person had a wonderful experience (the person who won the prize in an incredibly short time), while a dozen others were left bewildered (those who never even got the chance to get into the game). In short, however hard they tried, the designers obviously were not skilled in the inexact science of ‘social engineering’. With many hours of programming time and effort rendered useless, they were almost forced instead to let the players themselves drive the direction of the design. The designers finally became facilitators of design and implementation, as they now built parts that were almost always used and appreciated. Instead of entertaining the illusion of controlling the whole system, they still had the chance of considerably influencing the implementations that matched people’s desires and needs. Finally, the Habitat developers mentioned two other cautions in designing virtual communities.
1. You can’t trust anyone.
This actually implies that no leakage is allowed between the two so-called levels of ‘virtuality’: the ‘infrastructure level’, the level of the physics, the invisible laws and the implementation, and the ‘experiential level’, which is what the users see and interact with. As experience proved, people were prepared to make a huge and incredible (programming) effort to be able to ‘cheat’ with some of the features provided by the Habitat system, even though no direct personal benefit resulted from it for these ‘hackers’.
2. Work within the system.
It has to be noted that the temptation to break the two levels of virtuality affected the system operators as well. Therefore, wherever possible, actions should only be performed within the framework of the experiential level. This principle should be applied to both the technical and the sociological aspects of the system. It means that system administrators should not abuse their power and control over the world (threatening to kick a player off the system), but should instead solve possible conflicts with the rules already existing in the world itself (arranging a satisfactory deal in public).
In the future, the developers promise to eliminate the central planning concept. Obviously, the processing and communications would become too complex if the world’s number of users were to grow too drastically. A possible solution is considered in the form of a fully distributed system, with public-key cryptographic techniques protecting the operating system. Secondly, possibilities are being investigated to let users configure the world themselves. But this desire in turn requires an abstract representation of regions and objects to let the user do the design and creation.
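In such a distributed model, no central operator can vouch for every participating machine, so each behavioural message would have to carry its own proof of origin. The fragment below sketches how public-key signatures could protect such a system, using the third-party Python ‘cryptography’ package (Ed25519); it illustrates the principle only and is not the developers’ actual design.

# Sketch: signing behaviour messages in a distributed virtual world.
# Requires the third-party 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

key = Ed25519PrivateKey.generate()      # held by one avatar's home system
public_key = key.public_key()           # published to the other hosts

message = b'{"avatar": 12, "action": "walk", "to": [40, 60]}'
signature = key.sign(message)           # proof that the message is genuine

# A receiving host verifies the signature before applying the behaviour.
try:
    public_key.verify(signature, message)
    print("accepted:", message.decode())
except InvalidSignature:
    print("rejected: forged or tampered message")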
II.5 Cyberspace Urbanised?41
To many authors - most of whom, it has to be noted, write from an American point of view - it is a fact that current urban trends threaten the concept of the original urban public realm. The main reasons have proved to be the privatisation of urban public spaces, the rising fear of crime and of the ‘other’ in the post-modern city, the erosion of urban social cohesion and the spatial splintering of the contemporary metropolis. Increasingly apparent in the European context as well, city centres have become packaged and ‘themed’, consisting mainly of facilities for consumption and leisure. Meanwhile, access is based on the ability to pay rather than on some universal notion of the rights of citizens. Enclosed malls, which subtly exclude socially undesirable groups, have emerged rapidly, while the middle classes cocoon themselves in houses and gated communities, relying on cars and communications infrastructures to integrate their lives. In parallel with these trends, multiplying layers of technological media are diffusing to under-grid and interlace cities and urban systems. From TV, radio and the telephone to new computer networks like the Internet, all offer alternative channels for social expression, complementing and threatening real collective face-to-face exchanges. It has to be noted that these technical media, too, are increasingly diversified and fragmented, as they are being pushed to support specialised media over constantly widening distances.
It is only logical that in such a context, debates are encouraged to investigate the potential of digital computer networks for supporting new types of public social and cultural exchange. In fact, a growing group of optimists urge that cyberspace will act as the ‘new public realm’. People like Michael Benedikt hope that this ‘realm of pure information’ is “transfiguring the physical world, decontaminating the natural and urban landscape, redeeming them, saving them… from clogged airports, from billboards, trashy and pretentious architecture, hour-long freeway commutes, ticket lines, and choked subways… from all the inefficiencies, pollution…”42
It can thus be argued that much of the current hype surrounding the Internet rests on the utopian assumption that such networks will inevitably come to be dominated by a democratic culture of public space. Consequently, computer-mediated communications will be substituted for face-to-face contact, as part of a shift to tele-mediated work, service access, health and education networks, and media flows. Safe, non-threatening and space-transcending ‘virtual communities’ and the culture of ‘real virtuality’ seem to become the solution to the repressive character of contemporary urban life. Or, in the words of Graham and Aurigi: “One of the explanations for the virtual community phenomenon is the hunger for community that grows in the breasts of people around the world as more and more informal public spaces disappear from our real lives.”
41 This title is based upon GRAHAM, STEPHEN & AURIGI, ALESSANDRO, Urbanising Cyberspace? The Nature and Potential of the Virtual Cities Movement, in CITY, Nr.7, May 1997
42 BENEDIKT, MICHAEL, Introduction, in Cyberspace: First Steps, MIT Press, London, 1991, p.3; see also the introductory phrasing of the first chapter.
The new cyber-based communities, organised ‘by the community, for the community’, do offer the possibility of new interactive, discourse-driven ‘public’ arenas. This promise should be especially important for the most marginalised groups, who are hit hardest by the individualistic ways of social organising in cities. But then, at the very least, the Internet would need to shift from its public of elite, professional groups to become a
near-universal medium. Costly infrastructure needs – skills, finance, a telephone line, a
modem, computer software, a service subscription, and electricity supply – should be
easily accessible for the majority of urban populations. Much development is thus needed, as it can easily be shown that access to electronic networks is currently the domain of the well-off and privileged. But these differences are complex, as having access does not imply that the use has any meaning, or that it necessarily brings any power or advantage to the users. Heavy users may simply undertake routine and underpaid tele-work, which clearly bears no relation to their intensive use of technology. Thus, in contradiction to the idea of a single, unifying cyberspace, it seems likely that different topologies of network use will emerge, with different degrees of power and control for the users. In this view, three different positions of groups can be recognised within the emerging urban social architecture of cyberspace.43
1. Information users: The elite, the trans-national corporate class, used to operating the global economy, relying on intense mobility and accessing computer networks to ‘command space’.
2. The information used: In fact the less affluent and mobile wage earners, experiencing the consequences of a narrow, passive consumption system, limited to controlling the ‘press now to purchase’ buttons. Global alliances between TV, Internet, cable, telecom, film, publishing, advertising, and newspaper industries must be seen in the context of this commercial, consumption-driven market.
3. The off-line: Disadvantaged groups living in poverty and structural unemployment, financially excluded from electronic networks. Moreover, even the actual relevance of Internet access for these social groups is questioned by Graham and Aurigi: “Just giving someone time at a terminal with Internet capabilities - or, by extension, at a kiosk in a public space - will not benefit anyone who feels confronted with a seemingly insurmountable problem, or who has no idea where to begin”
II.5.1 Virtual Cities
Meanwhile largely ignoring these urban social inequalities in Internet access and the different social architectures of emerging networks, city authorities across the world have recently constructed hundreds of experimental ‘virtual cities’. These online phenomena are meant to operate as electronic analogies for the real, material urban areas that generally host them. Over 5.000 (2.000 in 1997) virtual cities are collected together on the City.Net network44, ranging from comprehensive web spaces, which try to integrate all online activities in a city, to single promotional web pages. These virtual cities are, in fact, many different attempts to use the potential of the Internet for supporting local democracy and the development of discourse, urban marketing and ‘regeneration’, new types of electronic municipal service delivery, local inter-firm networking, and social and community development within cities. Early research at the Centre for Urban Technology, with the aim of building up a typology of digital cities in the EU, shows that two main types of web cities are emerging.
1. Non-grounded web cities: These sites use the interface of a ‘city’ (most often a map) as a familiar metaphor to group together wide ranges of Internet services located across the world in a comprehensive and straightforward manner.
2. Grounded web cities: These are developed by urban agencies to help the development of specific cities by coherently relating all the electronic possibilities in the context of the city concerned. Two sorts are further recognised: the more promotion-oriented sites, where little space is left for information for residents, as opposed to the ‘public’ electronic spaces, which support political, social, and cultural discourses about the city itself.
43 GRAHAM, STEPHEN & AURIGI, ALESSANDRO, Urbanising Cyberspace? The Nature and Potential of the Virtual Cities Movement, in CITY, Nr.7, May 1997, p.24
44 Located at http://www.city.net
Figure II-3 The Iperbole initiative in Bologna: a three-dimensional interface based on an urban metaphor. (http://www.nettuno.it/bologna/MappaWelcome.html)
Accessibility to these cities certainly seems wide, because the Internet is global, but the sites are mainly configured for passive use. Although these services are thus ‘public’ for sure, they can hardly be considered a ‘public space’. Most grounded web cities are constructed as nothing more than urban databases, showing information for residents about political processes and decisions in town management, as well as transport information, leisure opportunities, cultural events, and accommodation and restaurants for tourists. ‘Urban design’ is often limited to pure simulation, idealisation, and even parody, of the perfect post-modern city, containing various exciting ‘zones’ and ‘climates’.
Nevertheless, the concept of virtual cities is still very young and obviously needs time for serious development. In spite of some bad examples, much is still expected from these virtual initiatives. Many hope that their dynamic potential will overcome the geographical, social and cultural fragmentation of contemporary cities and help to bind the urban fragments back together. But then, surely, the population should broaden to include the socially marginalised as well. Furthermore, one-way applications should be avoided, since many private virtual cities are little more than consumer spaces that use city metaphors to distinguish themselves from the chaotic mass of ‘placeless’ Internet sites. Meanwhile city authorities still fail to recognise the potential of the digital, and keep using their sites for post-modern urban promotion, offering a collection of advertisements of private, local firms, clearly tempted by the rich population of local Internet users. But, despite the dangers associated with virtual cities, it could be argued that these initiatives are at least beginning to create an articulation between place-based and electronically mediated realms. It is clear that the best hope for virtual cities will come with local strategies driven by partnerships between the public, private and community sectors, combined with a shift towards true mass diffusion of Internet
access. In the future, ‘urban cyberspace planning’ should construct meaningful urban
‘enclosures’ within the fragmenting effects of globalisation, reviving collective notions of
urban identity and democratic, discourse-driven spaces.
II.5.2 Digitale Stad Amsterdam
One of the most ambitious virtual city examples in Europe at this time is definitely ‘The Digital City’ of Amsterdam45, which claims to be socially inclusive and discourse-driven. Since the creation of ‘De Digitale Stad’ (DDS) in January 1994, this private initiative has always been subsidised by the municipality of Amsterdam. From a first, rather closed, text-based interface, it developed rapidly into a complex web-based site with a rather appealing graphic interface. This spatial metaphor of a city has meanwhile become an example for many other digital cities. Evidently, the entire organisation of DDS is represented as a global ‘town’.
This interface consists of 33 thematic squares, covering issues as diverse as books, transport, new technology, gay issues, politics, health and medicine, local government services, planning, and sport. Each ‘square’ hosts the home pages of up to eight relevant information providers, which may originate from the private, public or voluntary sector. Each ‘square’ is in turn surrounded by residential ‘homes’. These box-like links are, in turn, used by the city’s own residents to provide their personal information, free of charge. Each square also has a ‘virtual cafe’, an area for archived, online debate on its theme. Additionally, a genuine environment called ‘The Metro’ was developed, the place where the global virtual community exists.
Figure II-4 The Virtual City and a ‘Square’ of De Digitale Stad Amsterdam.
(http://www.dds.nl/)
The question arises as to whether DDS can really be considered a public space for the city of Amsterdam. Two objections to this claim can be formulated. First, despite the existence of public terminals and some use by marginalised groups, it is a fact that the young, white, male, well-educated groups that dominate the Internet as a whole still dominate this system too. And as DDS gradually introduces more commercially driven content, such inequalities seem unlikely to diminish. Second, the nature of the electronic network makes it questionable how much of the population, debates and discourses on DDS genuinely represents the citizens of Amsterdam as opposed to the wider Internet population. Besides the fact that DDS can be used by anyone with a connection to the Internet, it has also been noticed that many other themes are visited more than those concerning democracy within the Digital City itself. Here too, given the pressure of increasing marketing funds, this link will probably only deteriorate in the future.
45 Located at http://www.dds.nl ; another interesting example of a virtual city is the Iperbole initiative in Bologna, at http://www.nettuno.it/bologna/MappaWelcome.html
II.6 Consequences
These examples have proved that cyberspace designers of all kinds actually do possess a great deal of power in creating the digital constraints and variables of cyberspace. Furthermore, it can be noticed that most writings about the existing and future electronic realm employ a simple form of naive and enthusiastic persuasion, which cannot always be supported with objective arguments. Thus, it might be more than useful to give some critical voice to the phenomenon of Internet cyberspace as well.
Like most new technologies, cyberspace did not appear from nowhere as a mystical spark of inspiration from the mind of one individual. It is a conscious reflection of the deepest desires, aspirations, experiential yearnings and spiritual dreams of Western man. In this view, cyberspace can be considered as a new territory for Western civilisation to conquer. In human history, this recurring obsession has followed a rather basic and linear pattern. The temptation always lay in the desire to acquire new wealth, which in turn provided the impetus for the development of new technologies, which were in turn necessary to bring these new territories under the power of the West. When the territories were finally colonised, they were at last handed over to business interests, giving way to so-called ‘progress’, ‘globalisation’, and ‘modernisation’.
It is this very thought that inspired Ziauddin Sardar to write a fiercely critical essay about digital dreams and applications in general. One of his many arguments raises the question of the information electronically available nowadays. As a product of a culture in which individual and common goals have lost all their significance and meaning, most actions are based on the phenomenon of boredom. In such a culture, one needs something different to do and to see, a new excitement and spectacle every other moment, and this within a time span no longer than a single frame of an MTV video: five seconds.
“Netsurfing provides just that: the exhilaration of a joyride, the spectacle of visual
and audio inputs, a relief from boredom and an illusion of God-like omniscience
as an added extra.”46
For this author, the individual’s self is reduced to discrete bits of binary code: humanity digested by cyberspace. Meanwhile, virgin land waits passively to be dominated as the latest territory controlled by the West. Cyberspace, then, is the ‘American dream’, the obvious mark of the dawn of a new ‘American civilisation’. Parts of this argument can be visually recognised in the representation of virtual worlds like Habitat. On the other hand, such a community can hardly be recognised as one, as communities are shaped by a sense of belonging to a place, a geographical location, by shared values, by common struggles, by the tradition and history of a location - and certainly not by joining a group of people with common interests. A cyberspace community has no context and is self-selecting, which is exactly what a real community is not. People can simply be banned, while in a real community the essence is to help them, because they are always there. In this view, cyberspace communities are only a refuge from the race and gender mix of reality, from the contamination of pluralism.
Furthermore, the problem of the power and authority of the programmer or administrator can be questioned. This, however, will certainly be a subject when the role of the architect as programmer is investigated.
46 SARDAR, ZIAUDDIN, alt.civilisations.faq: Cyberspace as the Darker Side of the West, in SARDAR, ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the Information Superhighway, Pluto Press, London, 1996, p.27
II.7 Conclusion
The digital environment is definitely changing social, economic, and political systems.
The concept of architecture is still not a hot topic in this huge and rapid process.
Nevertheless, three propositions can be argued concerning the importance of the network in relation to architecture.47
Digital information is ambient.
Major economic shifts in the global telecommunications business prove the great (market) importance of this technology. While still developing at a rapid pace, digital information spurts out of the wall sockets everywhere in the developed countries. A condition is being reached in which essentially any information is available in any quantity, at any time. This has important implications. As bandwidth increases, and the digital information environment does truly become ambient, conditions that have long been very familiar to humankind will fundamentally change. Being somewhere physically will only deliver a few more ‘bits’ and modalities than electronic communication does. Human interaction will definitely change.
Digital information is solvent.
It is a solvent in that it eliminates or radically reduces the need for contiguity in architecture. It dissolves the social glue that holds buildings and cities together, leaving only a residue of recombinable fragments. Banks, for example, were formerly recognised as powerful institutions with an important representational role in society. Their buildings had a powerful floor plan, with spaces and carefully mapped relationships between them. Contemporary banks are much more a network of automated teller machines and some nodes in the almost infinite international money transfer network. All this could be controlled from a normal house, acting as a power centre of enormous importance, but having no architectural representation at all. Banking is a good example of recombination as well, as the new signs of the institution, the ATM machines, have gradually shifted place. At first they were still firmly attached to the bank building, while in many countries they have now moved to the busy places where people really want them: big supermarkets, airports, …at home. The task of future architecture is difficult to foresee, as facades and surfaces will become infinitely complex.
The capital of the 21st century will be a virtual city.
This proposition can also be recognised as a consequence of the solvent power of digital information. Looking at the phenomenon today, the growing digital urban and public spaces of the future will probably be the virtual communities spoken of earlier. They exert some kind of powerful attraction and addiction, which are - and will be - investigated by many social researchers for a long time. However, it should be noted that more social activities can be processed in a simple computer box, somewhere completely invisible and unknown, than on Times Square, New York, right now. The important difference is actually the architectural translation, as no design or theory has had any authority to shape these social manifestations in any way. Meanwhile, many urban metaphors are used to describe the navigation of the network. Even so, glimpses are being caught of how the web can be very different. As cyberspace is explored more thoroughly in the future, some of the obvious metaphors will disappear and others will emerge.
47 MITCHELL, WILLIAM, at the IN ANY Event, published in: ANY, Electrotecture: Architecture and the Electronic Future, Nr.3, November/December 1993, p.47
Looking at the situation today, it can easily be argued that architecture certainly has no great role in the design or conception of contemporary virtual environments. Although some critical voices emphasise that this urban, public realm is radically over-hyped, the force and success of this open movement cannot be neglected. Most of these worlds are socially driven, and are based on human characteristics of interaction rather than on significant representation of the surrounding environment and its relations as such. It has to be noted, though, that bandwidth limitations and technological development are still fundamentally based on two-dimensional communication and interaction, and will be for a considerable time. In this way, these worlds, even when graphically designed, still show a remarkably banal and peaceful style of architecture that resembles, at most, finely coloured, expensive children’s books. It can then be asked whether this conservative spatiality suggests that the impact may not be as earth-shattering as some would have us think. Why is the space that one inhabits so banal? Is it because it is the sole possible representation of the very metaphor, the Cybercity, where people want to interact? The banalities of the architectures, and the imitations of visible architectures, that are showing up in graphical virtual environments perhaps suggest that spatiality is not the best analogy to be working with.
Furthermore, many foresee the theoretical evaporation of architecture in the face of electronic power. They argue that the role of architecture is one of dematerialisation, reduced to the lightweight, the banal. Why should architecture not be able to displace, reorient or even resist this line of thinking? Meanwhile, some authors have not waited for these remarkable manifestations to be implemented and have already formulated some thoughts and theories of a future digital architecture. It is also expected that the role of architecture will become more important when the third, and even the fourth, dimension is included in the electronic realm of the future. These subjects in particular will be further investigated in the next two chapters.
The overall structure of virtual architecture can be placed in a certain theoretical frame. One of the themes within this structure is primarily concerned with the phenomenon of the Internet, while another explains the electronic social spaces based upon communication protocols, which in fact are already in place and are being used intensively. It should be noted that both these phenomena were explained in the former chapters. Furthermore, it can be argued that the overall frame of these theoretical points of view also represents a certain architectural theory, whether this is thought of in traditional terms or not. Difficulties arise because some parts of the electronic realm that should be theorised are in fact already working, and thus constrain the investigated elements against thorough evaluation. Meanwhile, however, some other authors are investigating possible structures that actually have nothing to do with pre-existing architectonics. On the contrary, these architectural researchers try to inject the notion of architecture with innovative ideas and concepts, and are largely influenced by biological and physical reactions translated into a digital format. In this chapter, some of these rather revolutionary concepts will be described.
III Cyberspace Architecture
“When speed reaches a certain point, time and space collapse and distance seems to disappear. The very conditions of spatio-temporal experience are radically transformed. At this point, does architecture finally become immaterial?”
(Mark Taylor - Electrotecture, in Any, No.3, Nov/Dec 1993, p.9)
III.1 Introduction
“At present architectural software is still based on a primitive form of building. The programmers still use rigid, static traditional elements, like window posts, vertical walls, level floors, columns - in short the classical composing parts, which are made into a building by means of the archaic technique of putting one element on top of the other. The software available in other fields offers considerably more and better modelling facilities, like the programmes for industrial design and mechanical engineering. Architects discovering these possibilities will no doubt start using the available space. Architecture will become much more complex, as far as form and calculation are concerned - so complex even that the apparent chaos can only be controlled by means of the computer.”
(Kas Oosterhuis, Artificial Intuition, in Wiederhall: The Open Volume, No.12, 1990, p.50)
Throughout history, architectural practice has been directly influenced by the technological advancement of design media. Design realised through drawing and drafting has been the standard method since the Renaissance. The drafting industry in fact intensively defined the practice of the architectural office until the late twentieth century. Then the introduction of the computer advanced the design process more than drastically. Computer-Aided Architectural Design tools allowed architects greater control, more efficiency and a huge amount of accessible information, all at the same time. As these programs have begun to offer designers a fast and accurate medium for three-dimensional modelling, they have at the same time also started to reorganise and redefine the nature of architectural design itself.
Today, virtual reality technologies could be the next step in the evolution of design. Freed from two-dimensional interfaces such as the mouse, keyboard, and monitor, architects now have the opportunity to design in a more intuitive manner within the virtual realm itself. Beyond the notion of virtual ‘walkthroughs’, VR offers architects a new, fast, controllable, and accurate tool for three-dimensional modelling. Conceptual sketches can now finally be shifted into virtual space, in which working from any point of view, collaboratively, or within a very specific and accurate context is technically possible. Architecture cannot be separated from the concept of creating meaning out of form. It is this remarkable starting point that will determine the architectures that will now be described. Many different approaches can be distinguished, and the next paragraphs should thus be considered an incomplete view of the vast horizon of relationships between architecture and the digital realm.
III.2 Virtual Architecture
As stated earlier, virtual architecture, the design of three-dimensional environments for
inhabitation in cyberspace and conceptualised by virtual reality, will most probably soon
be realised on a world-wide scale. When fully implemented, these virtual worlds may
even come to replace much of what we know as the architectural types of today.
III.2.1 Introduction
“In the netropolis, architecture becomes electrotecture. Electrotecture surpasses the techniques of
computer-aided design by actually taking responsibility for fashioning cyberspace. If we increasingly
dwell in cyberspace, the architect must find ways to design the electronic environment. There is no
clear line separating the electrotect from either the imagologist or the computer programmer. In the
netropolis, images and programs are no longer preliminary models that are the prelude to ‘real’
building but constitute the living space for global villagers.”
(Mark C. Taylor/Esa Saarinen- Imagologies)
Although many notions of virtual architecture will be taken from the lessons of traditional design professions, it should be noted that the virtual realm has completely different conditions and characteristics. As the concept of the virtual environment is considered a universal and infinite space, and not one of place, some authors argue that this concept is in fact essentially non-architectural.1 Still, these authors foresee the virtual realm and its virtual communities as the ultimate forces that will drive the development of buildings and social spaces, and that will thrive on the exchange of information. The typical seduction that these manifestations of social electronic communities possess has already been investigated in chapter II, in which an increasing importance and growth of this phenomenon was also predicted. Notably, it must not be forgotten that the consumer- and entertainment-based industry in which most western people live will be the platform upon which the infrastructure of the virtual will be built.
This leads to the conclusion that architecture, understood as the expression of society
in physical form, will first have to adapt to an electronic, virtual society. The virtual
communities will have needs similar to those of the communities that exist in the
physical world, maybe even more.2 It is these predicted 'high expectations' of a
completely 'new' architecture that ultimately point towards an entirely new realm of
design that should develop as a sister profession to architecture: that of virtual
architecture.
III.2.2 Urban Design
As network connections are becoming as important to people as their bodily locations,
it can be stated that the notion of human habitat is being reinvented. In this field of
virtual architecture, some authors concentrate not on the generation of form and
objects, but on the organisation of the electronic applications available in ever
increasing numbers on the electronic networks of tomorrow, a field called virtual urban
design.
“And the new urban design task is not one of configuring buildings, streets, and
public spaces to meet the needs and aspirations of the civitas, but one of writing
computer code and deploying software objects to create virtual places and
electronic interconnections between them. Within these places, social contacts will be made,
economic transactions will be carried out, cultural life will unfold, surveillance will
be enacted, and power will be exerted.”3
1 See paragraph "Cyberspace Architects".
2 CAMPBELL, A. DACE, Vers une Architecture Virtuelle..., 1994, http://www.hitl.washington.edu/people/dace/portfoli/arch560.html
3 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.160
In Cyberspace
Of course, these designers should not exclusively consider the actual urban design
(the places and interconnections provided, and their look and feel) but also the civic
character, which enables these communities to work in a just, equitable, and satisfying
way. For William Mitchell, the upcoming task is revolutionary and huge. He points out
that the task of the twenty-first century's designers and planners will consist of building
the bit-sphere, a world-wide, electronically mediated environment in which networks
are everywhere. Furthermore, most of the artefacts that function within it will then have
intelligence and telecommunications capabilities. This should result in an electronic
world that "will overlay and eventually succeed the agricultural and industrial
landscapes that humankind has inhabited for so long."
But Mitchell's evolutionary way is long, as this hyper-extended, dense and widespread
habitat transcending national boundaries is unprecedented. Architects should then
design the upcoming virtual gathering places, exchanges, and entertainment places for
its plugged-in population. Moreover, just as architects provided for the needs of many
traditional social service institutions and functions, bit-sphere planners now have to
structure the channels, resources, and interfaces of educational and medical delivery
systems for much more extended purposes.
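Mitchell's claim that virtual urban design amounts to writing code and deploying software objects can be made concrete with a small sketch. The following Python fragment is only an illustration under assumed names (VirtualPlace, link_to and enter are invented here, not part of any system Mitchell describes): places become objects, streets become links, and capacity becomes a code-enforced rule.

```python
# A hedged sketch of 'places and electronic interconnections' as software
# objects. All class and attribute names are illustrative assumptions.

class VirtualPlace:
    def __init__(self, name, capacity):
        self.name = name          # identity of the place, not a street address
        self.capacity = capacity  # a server/bandwidth budget, not floor area
        self.links = []           # electronic interconnections to other places
        self.visitors = set()

    def link_to(self, other):
        # proximity is "measured in numbers of required links", so linking
        # two places is the virtual equivalent of laying out a street
        self.links.append(other)
        other.links.append(self)

    def enter(self, citizen):
        if len(self.visitors) >= self.capacity:
            raise RuntimeError(f"{self.name} is full")  # a code-enforced limit
        self.visitors.add(citizen)

# a tiny 'district': an agora linked to a market
agora, market = VirtualPlace("agora", 200), VirtualPlace("market", 50)
agora.link_to(market)
agora.enter("citizen-001")
```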
In the Physical World
Questions can then easily be raised as to whether the existing cities will be able to
withstand the shift of social and economic activity to cyberspace. And Mitchell thinks he
knows the answer. He points out that great cities have always had the intrinsic
capability to adapt to most of the challenges of industrialisation and to rather radical
inventions such as the automobile. Furthermore, it is noted that future applications of
immersive tele-presence will most probably reduce the reliance on bodily presence and
material exchange, hereby progressively changing the ways public spaces will be used
and also weakening many of the linkages that hold large urban agglomerations
together. But this is no reason for Mitchell to think that this will necessarily lead to the
elimination of the human desire for face-to-face contact. In fact, he foresees the
opportunity of a radical reorganisation of the cities into small-scale neighbourhoods
that are nourished by strong electronic links to the wider world, but at the same time
have the unique quality of being different from the majority of other places. The danger
is then that these well-connected, well-serviced enclaves would start to offer economic
opportunities while "the poor could be left with the obsolete and decaying urban
remnants and isolated rural settlements that the more privileged no longer need."4
Hereby, the challenge of building the bit-sphere would thus largely consist of applying
the principles of social equity. While pre-industrial buildings lacked the comfort systems
that are routinely installed today (heating, air-conditioning, water supply, ...), buildings
are now getting electronic nervous systems connected to all possible networks and
information appliances. Ultimately, the distinction between smart electronics and dumb
connection could no longer be made, turning the architectural works of the bit-sphere
into 'robots with foundations'. In short, for Mitchell the future task of architects is clearly
one of relying on both bodily presence and the technique of tele-presence. New ways
will thus be found to recombine transformed fragments of traditional buildings in the
matrix of digital communications systems.
4 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.171
III.2.3 Cyberspace Architects
“The door to cyberspace is open, and I believe that poetically and scientifically
minded architects can and will step through it in significant numbers. For
cyberspace will require constant planning and organisation. The structures
proliferating within it will require design and the people who design these
structures will be called cyberspace architects. Schooled in computer science and
programming (the equivalent of “construction”), in graphics, and in abstract
design, schooled also along with their brethren “real-space” architects,
cyberspace architects will design electronic edifices that are fully as complex,
functional, unique, involving, and beautiful as their physical counterparts, if not
more so. … And all the while such designers will be re-realising in a virtual world
many vital aspects of the physical world, in particular those orderings and
pleasures that have always belonged to architecture."5
The Principles
"To move on, architecture must give up its devotion to the book and must dare to
become hypertextual," is the opinion of Mark C. Taylor.6 He points out that the potential
importance of electronic technology for architectural practice and theory is not a novel
insight, as Robert Venturi and Denise Scott Brown proved in their book Learning
from Las Vegas. He states furthermore that postmodern architecture did not really show
much of the so-called innovation and change of the past 25 years and that it is, "in
fact, an extension of the aesthetic principles and philosophical presuppositions of
modernism." So, while electricity concentrates and electronics disperse, a new
architecture has to arise. Many other authors concerned with the architectural future in
cyberspace foresee the emergence of quite a new profession as well. They argue that,
despite the differences from the architecture of the physical world, the designs of virtual
architecture will require the expertise of traditional three-dimensional designers. In this
way, certain basic lessons of architectural design will inevitably be carried over to the
virtual realm. Dave Campbell, for instance, is convinced that formal characteristics such
as rhythm, scale, balance, and unity will be part of virtual design as well.7 Other
important lessons, of which environment-behaviour principles, way-finding principles,
and territoriality are only a few, should be used as a basis for creating inhabitable,
understandable spaces in the virtual context.
The Constraints
Cyberspace is a blank, black void until an artificial context is introduced. Before that, it is
fundamentally nothing, placeless and indescribable. Doubts can then be raised
about how three-dimensional design will be able to react when its space becomes infinite
and unbounded, and how the so-called cyberspace architects will design within this
anti-architectural placelessness.
In physical space, objects and people exist in relation to all objects around them. But in
cyberspace, despite their three-dimensionality, objects do not take up space, except in
the computer's memory. They have no place in relationship to other objects, except in
the abstraction of the electronic program. It would thus seem as if there were no
constraints in the design of virtual environments. Most known architectural constraints
(climate, wind, gravity, water, building codes, property lines, ...) simply do not exist in the
nature of cyberspace.
5 BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.18
6 TAYLOR, MARK C., De-signing the Simcit, in Any, Electrotecture: Architecture and the Electronic Future, No.3, November/December 1993, p.16
7 CAMPBELL, A. DACE, Vers une Architecture Virtuelle..., 1994, http://www.hitl.washington.edu/people/dace/portfoli/arch560.html
“When bricks become pixels, the tectonics of architecture become informational.
City planning becomes data structure design, construction costs become
computational costs, accessibility becomes transmissibility, proximity is measured
in numbers of required links and available bandwidth. Everything changes, but
architecture remains.”8
Indeed, there are constraints in the virtual realm, but they are unlike the physical ones.
Limitations on polygons (the building blocks of virtual worlds) and pixels (which render
the polygons with colour or texture), as well as restrictions like bandwidth, disk space,
and memory, will become the budgets and materials of the cyberspace architects.9
Problems of how to express enclosed space, needed for reasons of privacy and
separation of function, in fact still remain. For Campbell, privacy and separation of
function should be established by ignoring other inhabitants and functions in the
computer's calculation of the environment. Walls could then become necessary for
perceptual comfort and orientation, but their form could also depend on all sorts of
triggered variables, as the sketch below illustrates. This brief introduction of some
possible cyberspatial design principles will be broadened in the next chapter, in which
the ideas of Michael Benedikt will be thoroughly investigated.
III.2.4 Critical Approach
In the previous chapter, the phenomenon of virtual communities was investigated.
As it was concluded that this field could not be considered the new 'frontier' filled with
revolutionary architectural notions and new conceptual spatial thinking, the research will
be continued in the more abstract theories of generating form with electronic tools.
Both applications, however, can only be represented in the imaginary space through the
efforts of people with advanced programming skills. These designers of either
architectural or community space consequently possess a great deal of power.
Ziauddin Sardar describes this fact in his article Cyberspace as the Darker Side of the
West in the following, more general terms.
“The romantic, liberating notion of information technologies draws our attention
away from its more real potential: to enslave us in its totality. Beyond the rapture
of free access to unlimited information and the dream of controlling all human
knowledge, lies the reciprocal threat of total organisation.”10
'Everything' in cyberspace is managed by so-called invisible system operators (sysops),
who ensure that the electronic networked system runs smoothly and efficiently.
Meanwhile, they hold unrestricted power to deny entry, cut, delete or censor any
communication, and to observe all that is going on on their system. On bigger networks,
Big Sysops can not only monitor what is going on but also have the ability to intercept
communications, read them and re-route them in different directions. Several legal
cases in the United States have already proved that electronic applications such as
private email cannot really be considered all that private. And since many take-overs
occur in the business of electronic communication, this power will most certainly not
decrease. Furthermore, these large multinational providers have the economic,
technical and political power to control the whole network system. Moreover, since they
have free access to everything on the system, they are even able to act like all-knowing,
all-seeing central network operators. As the use of the Internet network technology
8 NOVAK, MARCOS, Trans Terra Form, http://www.t0.or.at/~krcf/nlonline/nonMarcos.html
9 CAMPBELL, A. DACE, Vers une Architecture Virtuelle..., 1994, http://www.hitl.washington.edu/people/dace/portfoli/arch560.html
10 SARDAR, ZIAUDDIN, alt.civilisations.faq: Cyberspace as the Darker side of the West, in SARDAR, ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the Information Superhighway, Pluto Press, London, 1996, p.32
applied as a social communication tool will continue to increase exponentially, as is
commonly expected, this very issue should definitely not be neglected by its potential
users. Even William Mitchell foresees some problems with this potentially dangerous
power. To enter most electronic spaces, people have to undergo a personal
examination. Not only are they giving up parts of their privacy, but they also run the risk
of being excluded and marginalised, in some cases without any justification or
discretion.
"So control of code is power. For citizens of cyberspace, computer code - ...
typically accessible to only a few privileged high-priests - is the medium in which
intentions are enacted and designs are realised, and it is becoming a crucial
focus of political contest."11
In the case of the digital generation of spatial form, architects, in their education as well
as in their profession, already have the ability to 'control' the space and shape it to their
own creativity. This power will consequently depend upon individual knowledge of
form-generating computer languages, hereby creating two different levels and kinds
of designers. On the other hand, the conceptual use of the computer cannot be
implemented without dangers. It has to be noted that, if utilised unimaginatively,
computers have a tendency to dull critical faculties, as they induce a false sense of
optimised design. Moreover, they sometimes create an atmosphere in which any
utterance of the computer is regarded as having divine significance, and they are also
able to distort the design process to fit the limitations of the most easily available
program.
III.2.5 The Representation of Space
Perhaps the influence of the electronic realm can be explained otherwise than through
the still rather futuristic view of educating the new profession of cyberspace architects
and implementing them rather drastically in the new field of design. The next approach
takes another point of view, and starts from the observation of some architectural
projects that are highly influenced by the interpretation of the contemporary information
age.
“In light of the existence of a new architecture that allows the representation of contents, it is not
surprising that many architects began to use drawings as a medium for the development of a
representational sphere extending beyond building. Those who derided drawings created in this vein
as impractical and unrealisable were missing the point that such drawings were better suited than
realisable ones to the expression of far-reaching ideas. With the advent of a new ‘architecture of
drawings’, architecture was freed from its restriction to the practically realisable. Now it was again an
avenue for the manifestations of visions and dreams.”
(Heinrich Klotz, The History of Postmodern Architecture, 1988, p.398)
11 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.112
The Experience of Space
“Space: a boundless, three-dimensional extent in which objects and events occur
and have relative position and direction.“
(The New Encyclopaedia Britannica)
By abstracting a space, one generally understands conceptualising a physical
accommodation for a particular function. Space then becomes activated by its own
construction or by the presence of certain objects and persons. Fundamentally embedded
perceptible metaphors (like public-private, exteriority-interiority, surface-depth,
verticality-horizontality, ...) then help define the human, spatial experience of the
characteristics possessed by the surrounding space. In this view, it can easily be
argued that the human notion of contemporary architectural space is nowadays more
and more subject to a rapidly developing technological process of fundamental change.
For instance, an increasing number of buildings as well as design proposals emerge in
which the emotional pressure of movement and congestion becomes augmented by
fundamental architectural decisions.
Elements like catwalks, elevators, traffic-regulators and ramps explicitly penetrate the
overall space, providing dynamic connections between society's newest technological
frontier and the enclosed architectural expression. The overwhelming sensation of huge
numbers of unaware moving persons becomes a spectacle of its own, as the distinction
between spectator and actor vanishes within the settings of this architectural stage.
Groups of people are continuously confronted with shiny liquid-crystal screens, watching
ever-changing facade projections or walking through sweeping coloured light beams. As
victims of the inventions of the new multimedia era, they undergo rapidly
changing images combined with the uncountable reflections on the surrounding
transparent physical borders. These invisible walls are then transformed into new
infinite landscapes, able to project any panorama the project's function desires at any
time. In some proposals12, people were able to walk in a huge Japanese garden, shop
in the small streets of a typical urban city or run through the new Kevin Costner movie
while waiting for their cruise-ship inside the sea-terminal of Yokohama. Numerous
spatial fragments of typologies in one single designed room, collectively capable of
varying in time: technology makes it all possible. Furthermore, the congestion of crowds
and the voids between groups of people in such designed spaces in fact appear as a
kind of programmable result of certain technical manifestations. Or, in reverse, these
differences in density are detected by sensors, which then successively trigger, process
and execute other events; a sketch of such a loop follows below. A whole system of
invisible algorithms thus tries to manipulate the very thing it was designed to perceive:
the overall spatial experience. Therefore, the relation between the screened and the
screen is finally complete. This architecture offers the means to reveal emerging
structures of all kinds, as it generates unpredictable patterns instead of representing a
certain predefined semantic value. All these events are, in addition, combined with the
various effects generated by Koolhaas' notion of 'Bigness', the consequences of the
exaggerated scale of a project. Deliberately or not, it is then only logical that these
dynamic electronically steered activities are converted into the final 'event', into the main
'Big' program out of which the actual perceived space emerges.
12 I refer here to projects like the Yokohama Terminal proposal by the architect Königs.
The Representation of Space
Consequently, the characteristics of this architecture cannot be captured by any traditional
static presentation technique. This structural inability to rely on the presented plans,
drawings or pictures forced some competition juries to complain that they were
almost unable to judge the fundamental qualities of proposals inspired by this
experimental approach. Even the fixed traditional notion of the ordinary plan is subject to
change, as these architectural representations are developing into almost minimalistic,
pointillistic schemes. Forced to adopt the required program of dynamic organisation,
many of these projects are able to change function-place relationships within a
considerably short time as well. The architectural value of fixed objects hereby
decreases, while the traditional notion of matter starts to lose its importance. Ultimately,
the Cartesian order of erect walls, rising vertically out of the architectural plans, gets
important competitors in the form of so-called blobs13 and organic roofs, able to cover all
possible functions under, and on top of, one single, huge surface. This sometimes even
results in an extreme plan representation that has almost no fixed content besides the
overall covering membrane. It is hereby noticeable that the movement of objects and
people, plus the change of various functions in time, are rendered into an almost greater
aesthetic value than the only fixed element, the construction, of the building itself.
Questions can then easily be raised as to how the architect can be fully aware of the
spatial and perceptible implications of most design decisions, as much is planned to be
dependent on the building's dynamic and unpredictable use itself. Many of a project's
qualities will consequently rely on the applied information technology. This technical
package of hard- and software will finally determine how the space becomes animated
and activated. Time as the fourth dimension thus becomes a very significant factor as
well, as the perception of the building's environment can change at each megahertz-cycle
decision of the computer's processor. In such projects, space and time are forced into one
single, ultimately embraced movement.
Figure III-1 Left: architectural work of Ben van Berkel and Caroline Bos, right: the Water
Pavilion of NOX-Architecture
The Technology of Space Representation
This architectural movement should obviously not be separated from the development
of the actual building technology. Some authors have noticed that in the history of
architecture there is a general impulse to shift from the material to the immaterial: from
the heavy stones of the Egyptians, to Roman vaults, to Gothic arches, to iron
construction, to the curtain wall, to structural glass, to holograms, to... virtual reality.
And it is this last phenomenon that could be the very representational tool this
experimental architecture is seeking. In this way, next to the physically realised object,
the drawing and the scale model, the cyberspatial model could be the most appropriate
form of expression.
With the application of virtual reality, dynamic representations of the designed
information-processing mechanisms could be implemented, while the spatial simulation
takes place in an immersive and convincing manner. These architectural experiences
13 For more information, see the paragraph entitled 'The Blob'.
could be shared by numerous interested people at any time, checking the effects and
events that were programmed by the (cyberspatial?) architect. Furthermore, it would be
only logical if the design were then steered by these explorations, and carried out
inside the digital environment itself. Even the fundamental question of whether the
simulated space is meant to be built in the physical world or not could then be
considered redundant.
But even nowadays, architecturally suitable software is hard to find. Most sophisticated
CAAD-software programs have already reached the limits of their efficiency, and of some
architects' personal creativity. Contemporary architects like Frank O. Gehry, Greg Lynn
or Peter Eisenman14 (have to) use several non-architectural computer programs
nowadays to be able to translate their ideas into some kind of acceptable
representation. Even the contemporary virtual reality tools are still inadequate for truly
innovative design purposes, as the VR technique is in principle a representation of a
certain defined mental projection. It is dependent on the minds of its program writers and
developers, and to date, these have clearly been of an almost exclusively analytic,
Cartesian order. Therefore, creating VR-oriented architectural software applications is
the technological area in which spatially experienced persons such as architects will be
able to propose many innovations and ideas in the future. Because now at last the power
of imagination is virtually free, the concept of space, first defined by the materialisation of
design, is turning its attention to the process of generation as architectural
representation.
The Future of Architectural Space
Inevitably, the identity of space has been transformed over hundreds of years of
changing technology and spatial awareness. In the future, some architectural
experiences in a certain space, physical or not, will change continuously, even every
computable nanosecond if necessary, according to the designed concept of its architect.
Virtual reality as a representation technique is able to represent and explore these ideas
of time dependence, changing form and the sensorial environment that is carried by the
space. In the most optimistic views, imagination, architecture, and virtuality should be able
to create places similar to the major historical architectural types we know today. For
Mark C. Taylor, "...the possibilities of the computer would not be used to design
buildings, but space would be designed by developing programs. The materials of the
future architect would then be transformed from concrete, steel or glass, to become
code, programs, and images."15
The electronic environment offers much more than the universal design principles which
reflect old building technology or traditional representation methods. Architects of the
future should attempt to find ways of fashioning this new space in ways that take
advantage of its extraordinarily rich potential. In this way, computer-generated spaces
should not be the sum of endless calculations, but should use the power of the
computer to produce something unforeseen. And this is the very area that is yet largely
unexplored.
"Architecture is geared to the future, but has had plenty of experience with eternity."
(Ole Bouman, RealSpace in QuickTimes: Architecture and Digitization, 1996)
14 Some principles and approaches these three architects utilise in their work will be explained in the next paragraphs.
15 TAYLOR, MARK C., De-signing the Simcit, in Any, Electrotecture: Architecture and the Electronic Future, No.3, November/December 1993, p.17
III.3 Digitised Architecture
Greg Lynn teaches architecture at Columbia University in New York, the University of
California in Los Angeles, and is the principle of the office called FORM in New Jersey
and Los Angeles. His most known works until today still consists of competition design
proposals such as the Cardiff Bay Opera House and the Yokohoma International Port
Terminal. Nevertheless, he can certainly be considered as an important voice in the
architectural discussion about cyberspace, as most of his projects are dealing
intensively with the investigation of dynamic, computer-generated models in the
generation of architectural form.
III.3.1 Data Field Architecture
In this discourse, he persuasively criticises most contemporary architecture as being
static, since it embraces the classical models of pure and timeless form. Lynn does
not use the term 'static' as the opposite of 'moving', but tries in this way to express
that this traditional architecture is based on ideal whole numbers reaching a
certain calculable equilibrium, instead of on some sort of non-static mathematics.16
Figure III-2 Studies for a single-family house residence on Long Island, N.Y.
(transform, No.2, 1998, p.66)
The Generation of Form
In opposition to the scientific approach that merely investigates procedures and
processes, Greg Lynn tries to explain the concept of form with a logic based on
time-related growth or development. Illustrating the fundaments of his theory, he refers
to the metaphor of a rock, whose form can only be understood as a certain result of its
history. This phenomenon, based on Henri Bergson's17 theory of a readable and defined
history embedded in any form, can be characterised by the term energy. One of
the approaches to design is to build a history into form, rather than reducing the form to
some ideal state. In the same way that energy is captured in the rock, Lynn tries to
store motion in the form itself by generating it in a time-based environment. This does
not have to mean that Lynn's buildings move, but that the perception of their users
contains the unfolding of a stored pathway in their forms and surfaces. In his view,
16 DUISBERG, CHRISTOPHER, & GUIHAND, MARC, ...growing buildings out of data fields?, in transForm, No.2, January 1998, pp.65-69
17 Greg Lynn bases this notion on the book: BERGSON, HENRI, Matter and Memory, Zone Books, New York, 1988
forces can be built into any designed building just by the way it is formed. Initially, these
fundamental principles of perceptible forces went so far that he even deliberately
refused to investigate the experiential components of a project.
Later, he started to accept the concept of a defined perceptible model, although the
recognition of its form varied very strongly and dynamically with the way, place, and
direction in which it was observed. In the research for his projects, a primary design
procedure and a whole set of parameters are established that will generate the
architectural form. For example, Lynn showed that a building could be designed out of a
certain data field, a diagram that acts like a kind of deformed typology. For this project,
the notion of typology was used to provide certain internal constraining limits to the
model. This was combined with external constraints of the surrounding environment
mapped onto a data field, which finally resulted in a computer-generated single-family
house on Long Island; the sketch below illustrates the procedure in a simplified way.
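Lynn's procedure, internal limits supplied by a typology and external forces supplied by a mapped context, integrated over time, can be caricatured in a few lines. The field function, the clamping band and all numbers below are invented for illustration and do not reproduce Lynn's actual data fields.

```python
# A hedged sketch of 'growing' a section profile out of a data field:
# an external force deforms the form step by step, while an internal
# typological limit constrains the result.

import math

def context_force(x: float) -> float:
    # external constraint: an environmental gradient mapped onto the site
    return 0.4 * math.sin(x / 3.0)

def typology_limit(h: float) -> float:
    # internal constraint: clamp the section height to a house-like band
    return max(2.5, min(h, 6.0))

def generate_section(samples: int = 12):
    h = 4.0  # assumed starting profile height in metres
    profile = []
    for i in range(samples):
        h = typology_limit(h + context_force(float(i)))  # time-based growth
        profile.append(round(h, 2))
    return profile

print(generate_section())  # the 'history' of forces stored as built form
```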
The Blob
“Or should I say blobs. Many blobs, of all different sizes and shapes and
irreducible typological essences. Blobs that threaten to overrun a terrorised and
deterritorialised tectonics like a bad B-movie.”18
To illustrate the rather original point of view with which Greg Lynn approaches the
architectural theory of form, one of his radical new architectural elements will now
be explained more thoroughly. In a rather theoretical essay in ANY, Greg Lynn
introduces a revolutionary new formal typology, called 'the blob', which should
drastically enrich the architectural discourse of tectonics. These elements seem most
promising, as the concept suggests fundamental alternative strategies of structural
organisation and construction. Their characteristics should provide complex architectural
relations, as they try to fill the metaphorical gaps of traditional representation with their
remarkable 'sticky surfaces'. Blobs cannot be reduced to a typological essence, as no
two blobs are identical. Furthermore, the form and organisation of any given blob is
contextually intensive and therefore dependent on strict conditions for internal
organisation. Most importantly, blobs can be considered alien and detached from
any place, while simultaneously possessing the capability of melding with their
surrounding context.
Figure III-3 Two of Greg Lynn’s diagrams of blobs.
18 LYNN, GREG, Blobs, or why Tectonics is Square and Topology is Groovy, in ANY, No.14, 1996, pp.59-61
What exactly can be recognised as a 'blob'? The image, the morphology, and the behaviour
of the blob present a sticky, viscous, mobile composite entity capable of incorporating
disparate external elements into itself. Lynn himself identifies three different approaches
to clarify these characteristics of the idea of 'blobbiness'.
1. The images of science fiction horror films.
The blobs that appear in these typical B-movies can be understood as
organisms that are topologically inverted. While these alien structures move
through the city absorbing materials, each of them acts like a digestive system
turned inside out. Such a B-film blob is a gelatinous surface whose
interior and exterior are continuous.19 Gelatinous organisms, like fluids,
have no internally regulated shape but depend on contextual constraints or
containment for their form. Although they have minor shaping forces such as
surface tension and viscosity, they possess neither a global form nor a single
identity. Three principles of this movement and spatiality are characteristic of
all blobs. First, blobs possess the ability to move through space as if space
were aqueous, their form thus being determined by the environment as well as
by the movement itself. Secondly, blobs can absorb objects as if
they were liquefied. Finally, the term blob seems to connote a kind
of intelligence, neither singular nor multiple, that behaves as if it were networked,
multiplied, and distributed.
2. The physical definition of viscous composite entities.
These fluid entities are described as being 'quasi-solid', incomplete beings
whose symbolisation has been ignored due to the 'specific dynamics'
characteristic of real fluids. In search of a theoretical abstract model of
essential diversification within the discipline of architecture, an alternative
system of complexity of form seems required. Meanwhile, a typology of
topological geometries for modelling complex aggregates has been
scientifically developed. The most interesting example in this field is the
concept of 'isomorphic surfaces', also referred to as 'meta-balls' or 'blobs'.
These elements can only be organised in relation to other objects, as their
centre, surface area, and mass are determined by various other fields of
influence. The inner volume defines a fusion zone within which the element is
able to connect to other meta-ball objects to form a single surface. The outer
inflection zone defines a region within which other meta-ball objects can
influence and inflect the surface (see the sketch after this list). There is no
fundamental difference between these elements and a spherical formation, as
the latter is merely the index of a low level of interaction, while blobs possess
a high degree of information in the form of differentiation between components
in time.
3. Contemporary construction techniques.
Many construction and architectural minds argue that, since humans have
always structured themselves as 'standing upright', buildings by extension
should do so as well. Nevertheless, it can be noted that structural
dynamics are far more complicated than the transmission of perpendicular
loads to the earth's surface, as the interaction of multiple loadings of all kinds
has to be considered as well. In this way, many architects have retreated from
the primarily Cartesian model of simple gravity and have begun to investigate
the alternative possibilities of topological surface organisations. First it must be
acknowledged that blob construction is still in its early stages of development
19 The shock effect in such movies is often generated by displaying partially digested victims suspended within a gelatinous 'ooze'.
in contemporary architectural culture. Nevertheless, an increasing number
of projects are emerging in design competitions as well as in built form.
Figure III-4 The Yokohama Port Terminal of Alejandro Zaera Polo and Farshid
Moussavi, now known as Foreign Office Architects (FOA).
The best-known and most documented example today is certainly the
winning design proposal for the Yokohama Port Terminal by Alejandro Zaera
Polo and Farshid Moussavi. Here, flexible surfaces are treated as slabs, a
precedent established by Koolhaas in some of his proposed designs, of which
the Bibliothèque Nationale in Paris is only one. Instead of aligning
programmatic diversity with fluctuations and punctures in flexing slabs, the
Yokohama proposal radically chose a different approach. The overall plan
and sections are kept very symmetric in the deployment of spaces and
programs, resulting in a monolithic typology that is nevertheless locally flexible
in its transitions from slab to slab. The roof structure, on the other hand, is
treated as an individual volume that can be packed with certain activities, as
its flexible thickness adapts itself to the various locations of the programs below.
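The 'isomorphic surfaces' of point 2 can be stated precisely as an implicit field: each meta-ball contributes an influence that falls off with distance, and the blob surface is the set of points where the summed field crosses a threshold. The inverse-square falloff and the threshold value in the Python sketch below are common graphics conventions, assumed here rather than taken from Lynn's text.

```python
# Meta-balls as an implicit field: inside the blob wherever the summed
# influence exceeds a threshold. A minimal sketch under assumed constants.

from dataclasses import dataclass

@dataclass
class MetaBall:
    cx: float
    cy: float
    cz: float
    strength: float  # governs the fusion and inflection zones

def field(balls, x, y, z):
    total = 0.0
    for b in balls:
        d2 = (x - b.cx)**2 + (y - b.cy)**2 + (z - b.cz)**2
        total += b.strength / (d2 + 1e-9)  # inverse-square falloff
    return total

THRESHOLD = 1.0
balls = [MetaBall(0, 0, 0, 1.0), MetaBall(1.5, 0, 0, 1.0)]

# close to either centre the field exceeds the threshold (inside the blob);
# between two nearby balls the fields sum, fusing them into one surface
print(field(balls, 0.75, 0, 0) > THRESHOLD)  # True: fused midpoint
print(field(balls, 4.0, 0, 0) > THRESHOLD)   # False: outside the blob
```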
A Critical Approach
Although Greg Lynn's sort of architecture produces an organic image, the technique is
certainly not more 'natural' than, for instance, conceptually building a cube. In this way,
Lynn does not consider the philosophical side of his theories a clarification of how a
project should look or what it should represent or symbolise. Quite the opposite: these
metaphors of images and signs are largely denied, as his interest relies much more on
what architecture actually does, and on how it can act in terms of performance, function
and spatial organisation. In this way, the priority is always on the diagram. How the
diagram is manifested in built form becomes much more an aesthetic issue, and is thus
not considered a primary element. Some other authors, of whom Rem Koolhaas is only
one, point out that Lynn would solely be interested in a form of data-automation design.
This critical approach concentrates on the fact that the building surfaces are modelled
out of fixed numerical data derived from the surrounding context, and thus accuses it of
being a kind of hyper-functionalism.
But on the other side of the argument, Lynn's design approach could be considered an
exploration of a system that constitutes a creative and artistic medium in which intuition
is still very important. The architectural set of tools, continuous surfaces and vectors, is
hereby translated into shapes which contain an underlying structure and geometry, just
like a Cartesian geometry. Consequently, this approach still keeps the design
unpredictable, as the set of constraints and information can be arbitrarily chosen.
Various versions of such diagrams can interact, so that the end result is never
anticipated. The computer is thus not considered an automatic designer, but a
facilitator, calculating unforeseen connections and generating surfaces out of elements
like traffic flows and sun-paths. Certain design pathways are followed to determine what
the project will become, rather than pre-determining the result and thereby constraining
the process. This is the exact opposite of some conceptual approaches such as the
movement of minimalism. Minimalist architects, like disciplinarians, argue that one must
have a clear initial idea and that every subsequent decision must follow from that original
idea. As a result, in minimising formal effects every effort is made to state one simple
reduced moment.
III.3.2 Applied Software
The use and importance of CAAD-software is undeniably increasing in the architectural
design practices of today. But while it is nowadays merely used as a fast and accurate
representation tool, some architects base their entire process of design decisions on the
digital application of various computer programs. Two of the most controversial and
sophisticated software packages meanwhile possess a special place in the contemporary
architectural discourse about the creation of form.
FORM-Z
In the view of Greg Lynn, the choice of software is one of the most significant choices
he makes within a project. It can be considered as equally important as the decision
whether to build the scale model in clay or in cardboard, as this choice primarily
depends on the properties of the medium that is to be used. For instance, Lynn uses
the program Form-Z20, a polygon-based modeller able to generate objects by
calculating triangulated surfaces. Some critics even argue that this program presents
Lynn's projects in such a way that their formal language closely resembles that of Peter
Eisenman's newest folding projects, which are modelled with the same software.
CATIA
Meanwhile, another software package has been intensively discussed in the architectural
world following the opening of the new Guggenheim Museum in Bilbao, which was
designed by Frank O. Gehry and Associates. After testing many other CAAD-tools on
projects such as the steel Barcelona Fish sculpture, the Hanover Bus Stop, the Disney
Concert Hall and the Prague office building, finally the advanced three-dimensional
CATIA21 software was chosen. This software was developed by the French aerospace
manufacturer Dassault Systèmes and released to the public some twelve years ago. As
many publications about this building have already proved, Gehry's purely technically
inspired decision would become most controversial.
"Almost incomprehensible in its scale and three-dimensional complexity, the Bilbao
Guggenheim goes far beyond what was formerly understood to be possible in
architecture, both aesthetically and technically", notes the Architectural Review22. This is
the result of Gehry's original approach to computers, whose primary purpose is
not to design, but to rationalise highly intuitive formal concepts and make them buildable.
Most architectural and rendering software is based on grids of various sorts of polygons.
By contrast, CATIA uses a complete numerical control mechanism, able to define
surfaces by descriptive geometrical mathematical formulas. Manually made
three-dimensional models were digitised by identifying the control surfaces or setting out
commonly defined points. By offsetting the surfaces, a construction zone was identified
and used for the structural concept of the building; a simplified sketch of this offsetting
step follows below. A prefabricated and rather straightforward frame, of which all
members were straight sections, received its geometrical complexity purely in the
connections.
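The offsetting step can be illustrated on any parametric surface: evaluate a point, compute its unit normal, and push the point outwards by the offset depth to obtain the second layer of the construction zone. The torus below is merely a stand-in for a digitised CATIA surface; CATIA itself is not being modelled here.

```python
# Offsetting a parametric surface along its normal: a hedged sketch of the
# construction-zone idea, using a torus because its normal is analytic.

import math

def torus_point(u, v, R=5.0, r=1.0):
    # parametric surface point and its unit normal
    cx, cy = math.cos(u) * R, math.sin(u) * R          # tube centreline
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    nx, ny, nz = x - cx, y - cy, z                      # radial from centreline
    n = math.sqrt(nx*nx + ny*ny + nz*nz)
    return (x, y, z), (nx/n, ny/n, nz/n)

def offset(u, v, depth=0.3):
    (x, y, z), (nx, ny, nz) = torus_point(u, v)
    return (x + depth*nx, y + depth*ny, z + depth*nz)   # outer structural layer

print(offset(0.0, 0.0))  # (6.3, 0.0, 0.0): skin at 6.0, offset outwards by 0.3
```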
While the structural engineer calculated the overall dimensions, the final precise
positioning of the members was undertaken with CATIA's wire frame at the architect's
office. This software is able to provide the precise location of any point on a surface, so
that exact formulas could be offered to the steel and stone contractors to build the
structure in a still economical manner. As this technique proved to be so accurate, the
20 More information about the Form-Z software can be found at http://www.autodessys.com
21 More information can be found at http://www.catia.ibm.com/catmain.html
22 LECUYER, ANNETTE, Building Bilbao, in The Architectural Review, No.12, December 1997, pp.42-45
need for field cutting and welding was virtually eliminated. Hereby, each structural
element had to be bar-coded as well, so that its coordinates could be retrieved on site
by electronically referring to the stored CATIA model. This is common practice in the
aerospace industry, but relatively new to building. However, as the erection itself
caused some problems in pulling the structural steel into place, in future the
CAAD-software should also be used for assembly rehearsal and for improving the various
sequential actions needed during construction. In short, this software provides a very
high degree of accuracy in replicating the surfaces as well as in estimating various
building costs. Moreover, as this software encourages the creativity of the architect,
Gehry believes that curved forms in buildings will become more feasible.
“I’m excited about them because I like the sense of movement. They feel
genuine, accessible, and joyful. If I do a lot of buildings with curves, and people
enjoy them, then clients will begin demanding them, and more architects may
follow.” 23
III.3.3 Time-Space Relationship
Peter Eisenman, architect and professor at the Cooper Union in New York, is best
known for the remarkable architectural style of his projects, which are often
characterised as examples of the theories of formal 'deconstruction'. Their conceptual
foundation is the result of his many theoretical and philosophical thoughts, which are
strictly and creatively translated into original architectural principles and formal
consequences. In a guest lecture at the Swiss Federal Institute of Technology (ETH) in
Zürich24, Peter Eisenman explained his opinion on one remarkable aspect of the future
of architecture. Hereby, he tried to interpret the phenomenon of time and its ultimate
influence on the perception of space. Peculiarly enough, his line of argument mentioned
many names and theoretical insights that have already been explained in the previous
paragraphs.
In his discourse, Eisenman used the example of an art project by Richard Serra
called Torqued Ellipses25 to clarify his arguments in an understandable manner. Serra,
helped by Frank Gehry's engineer Rick Smith, invented and made a kind of object
formed from a flat, thick, solid steel template. This element was torqued in such a way
that two imagined, enclosed, and overlapping ovals with reversed axes and an identical
geometrical centre were gradually rotated in the elevation of the object itself.
Only CATIA software seemed capable of determining the lines of the bending patterns
according to which the actual steel templates would be rolled. The final object, which
was actually the result of a long, international search for a suitable rolling mill,
possesses such strange and remarkable characteristics that it drew the immediate
attention of the famous American architect. A simplified geometric sketch of the
torquing principle follows below.
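A simplified reading of the torquing principle: take an oval section and rotate it progressively as it rises, so that the bottom and top rings of the plate are turned against each other. The sketch below reduces the described pair of overlapping ovals to a single rotating ellipse, and all dimensions and the total twist angle are assumptions made for illustration.

```python
# Torqued ellipse, simplified: each horizontal section is the base ellipse
# rotated in plan by an angle proportional to its height.

import math

def section(height: float, total_height=4.0, twist=math.radians(30),
            a=3.0, b=2.0, points=8):
    """Return the rotated ellipse section at a given height."""
    angle = twist * height / total_height   # rotation accumulated so far
    ring = []
    for i in range(points):
        t = 2 * math.pi * i / points
        x, y = a * math.cos(t), b * math.sin(t)         # base ellipse
        xr = x * math.cos(angle) - y * math.sin(angle)  # rotate in plan
        yr = x * math.sin(angle) + y * math.cos(angle)
        ring.append((round(xr, 2), round(yr, 2), height))
    return ring

# bottom ring is unrotated, top ring is turned by the full twist: the steel
# plate connecting matching points between the rings is therefore torqued
print(section(0.0)[0], section(4.0)[0])
```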
23 For more information, see: The Success Story between Gehry and Dassault Systèmes Software Program CATIA, http://www.catia.ibm.com/custsucc/sufran.html
24 EISENMAN, PETER, Utilitas in der heutigen Architektur, guest lecture at the ETHZ, 5th November 1997 (much of Eisenman's line of argument can later be traced back in his own article: The Time of Serra's Space: Torquing Vision, in ANY, How the Critic Sees, No.21, pp.56-62)
25 Project: Torqued Ellipses by Richard Serra, at the Dia Centre for the Arts, New York City, from September 1997 through June 1998
Figure III-5 Two views of Richard Serra's sculptural art installation 'Torqued Ellipses'.
(ANY: How the Critic Sees, No.21, p.60)
In his view, Serra's objects are the opposite of many examples of classical architecture,
which is generally concerned with relating the axis of the upright human figure to the
symmetrical axis of a building. This displacement, however, still took place in a narrative,
classical time. It was not until Henri Bergson's Matter and Memory that the possibility of
a difference between 'the time of the object' and 'the time of experience of the subject'
was proposed. Bergson suggested that there were two different kinds of time:
chronological time, concerning a difference in degree, and the time of duration,
proposing a difference in kind. Architecture is usually experienced in chronological time,
as the subjects walk in and around the space and understand its structure through a
process and sequence of individual perceptions.
This is fundamentally different from Richard Serra's work, which draws its energy from a
certain disjunction in time. The way in which these apparently static objects seem to
have a certain duration in time is accomplished by just this difference between
understanding and experiencing their space. Consequently, this sculptural installation is
an early example of an architectural phenomenon that requires the individual to
experience the space of the object in time. Although people can walk around these
pieces, one can never say one is inside that space. Moreover, due to the effect of
torquing the steel plates, combined with the overall scale and height, the plan of the form
cannot be seen, nor can it be conceptualised by a subject walking in and around the
object. One cannot 'see' the top plane or even draw its plan. Time is condensed and
spun fast. The torquing raises another issue, as the vertical axis of the human body and
that of the architectural enclosure become separated. Furthermore, part of the
perceived lack of stability comes from the absence of conventional structure and the
extreme thinness of the steel plates in comparison to their height.
In conclusion, it is such an architecture, possessing the characteristic of realising a
time of duration through the separation of understanding and experience, that Eisenman
foresees emerging from the architects of the future. Meanwhile, it is certain that he
himself will apply this principle in his future projects, as he is already convinced that it is
apparent in some of the projects of Rem Koolhaas (Kunsthal, Rotterdam) and Frank O.
Gehry (Guggenheim, Bilbao).
III.3.4 Virtual House
A German design competition held in 1996 took the conceptual notion of using
computers in architectural practice much further, as participants were asked to design an
inhabitable 'virtual house'26. Many famous architects participated, but Eisenman's
proposal contained some architecture-related notions that, remarkably enough, return
in more sophisticated form in many other approaches in the field of the
cyberspatial and electronic realm, some of which are mentioned throughout this
chapter. In his theoretical view of this project, the form is a becoming expression of the
virtual.
Figure III-6 The Virtual House, Eisenman Architects, Berlin, Germany, 1996-1997
(Dialogue, No.9, November 1997, p.60)
First, the program of the virtual house consisted of the spatial concept of a house
designed earlier by Eisenman. This particular project was then abstracted into nine
cubes, in which a potential field of relations and conditions of interconnectivity was
expressed by means of vectors. These vectors were then visualised by showing the
effect they have on lines within their (arbitrary) range of influence; a simplified sketch of
this rule follows below. Each vector hereby produces movements and interrelations
that are considered constraints which influence its location, direction, and repetition.
The condition of each vector, either unconstrained or constrained, is in addition
recorded within the space as a series of traces. Consequently, each element of the
system has the additional potential to be affecting as well as effecting, thus respectively
acting as constraint or effect. "The manifestation becomes effectuation; it has an effect
on something, becomes an active participant within the process. In this regard, the
virtual carries the idea of multiple potentials for new connections or unseen relations."27
III.4 Liquid Architecture
The term 'liquid architecture' was in fact coined by Marcos Novak. He is the founder of
RealityLab28, the Laboratory for Immersive Virtual Environments at the Advanced
Design Research Program (ADRP) of the School of Architecture at the University of
Texas at Austin, the first faculty devoted to the study of virtual space as autonomous
architectural space. Novak is an architect, artist, composer, and theorist investigating
actual, virtual and mutant intelligent environments. Furthermore, his personal research
is situated in the field of algorithmic composition, cyberspace, and the relationship of
architecture to music.
26 Herzog and de Meuron's entry consists of an electronic space inspired by the notion of the MUD, and was developed in collaboration with the CAAD Department of ETH-Zürich. The resulting project can still be found at http://virtualhouse.ch
27 EISENMAN ARCHITECTS, The Virtual House, in Dialogue, No.9, November 1997, pp.56-60
28 More information about this lab can be found at http://ww.aud.ucla.edu/~marcos/marcos.html
III.4.1 Introduction
In the view of Novak, cyberspace is the ideal realm in which the benefits of digitally
separating data, information, and form can be intensively maximised. The core of his
argument leads back to the possibility that by reducing selves, forms, objects and
processes to data, new and unsuspected phenomena can be investigated. This
representation, reduced to primarily binary streams, could thus permit the discovery of
previously invisible relations in the data, simply by modifying the applied mapping
techniques.
Cyberspace
Consequently, Marcos Novak refers to cyberspace as a habitat for the imagination, as
well as a habitat of the imagination. Furthermore, he investigates the consequences
that arise when cyberspace is considered to be an inevitable development in the
interaction of humans with computers. Then the image of the relationship of humans
to information will be inverted, as humans will now be placed within the information
itself. But then also, these manifestations of ‘landscapes’ and scattered ‘objects’ are
considered as an architectural problem. In short, throughout his arguments Novak
proves to be convinced that conceptually, cyberspace is architecture, cyberspace has
architecture and cyberspace contains architecture. Hereby, the traditional conception of
these terms changes considerably. Architecture, which is normally understood in the
context of the city and all its implied metaphors, shifts towards the abstract structure of
relationships, connections and associations of appearances and accommodations.
Liquid
Novak uses the term 'liquid' to mean animistic, animated, metamorphic, as well as
crossing many categorical boundaries. Animism suggests that entities have a 'spirit'
that tries to guide their own behaviour. Animation in turn means the capability to change
location through time. Metamorphosis adds the change of form, through time or
space.29 In an interview30, he tries to clarify this concept further as follows: "It can take
different forms. Its essence is not invested in a particular form. It can 'adjust'."
Furthermore, the nature of this virtual world is considered to be information, and the art
of the world is consequently the investigation of the data. In this space, the boundaries
of 'how' information can be perceived are thoroughly investigated in an architectural
manner. Therefore, various different types of media are processed and combined
through an electronic algorithmic translator, generating form towards as many human
sensorial modalities as possible. This leads to the fact that, for the first time in history, the
architect is called upon to design not the object, but the principles by which the object is
generated and varied in time. And Novak's expectations of this abstract definition are
huge. His liquid architecture requires much more than just variations on a theme, as it
calls for the invention of 'something' equivalent to a 'grand tradition' of architecture. This
should then lead to "a continuum of edifices, smoothly evolving in both space and time.
Judgements of a building's 'performance' become akin to the evaluation of dance and
theatre." Even the comparison to 'a symphony in space' seems not to be sufficient, as
liquid architecture can never be repeated but instead continues to develop.
29 NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.250
30 Cyber23 (real author unknown), Virtual Architecture: Liquid Architectures, Interview with Marcos Novak, http://www.best.com/~cyber23/virarch/novak.htm
III.4.2 Virtual Poetics
“…and we can in our Thought and Imagination contrive perfect Forms of Buildings entirely separate
from Matter, by settling and regulating in a certain Order, the Disposition and conjunction of the Lines
and Angles.”
(Leon Battista Alberti - The Ten Books of Architecture)
Poetics
Novak thinks it is possible that a poetic composition could be the structuring system for
the generation of form. Poetic systems such as music, dance, or lyrics are taken and
transformed into the generators of form in a synthesised virtual world. In this way,
poetics is not only seen as something applied to words, but is understood as a kind of
structuring that evolves the ways in which works of art can be made. Ultimately, the
generation of meaning can then be investigated in relation to the items through which the
meaning was manifested. Furthermore, the existing concepts of traditional artefacts
can be questioned. Structures and primitives of the communicated representation of
many different media types could thus be 'morphed' and transformed into a single
space and place, as a sort of immersive symbolic database.
Music and Cinema
As an extension of these theories of interchangeable media, the generation of form is also
applied to music and cinema, resulting in the phenomena of navigable music and
habitable theatre. The structuring of a certain database can be influenced by themes in
music, so that forms can visually abstract their originating composition. Several
characteristics of sounds can be mapped, out of which functions and variables can be
created that subsequently produce simple multi-dimensional figures. Such a
conception of architectural space has the advantage of being extremely compact, since
a single mathematical expression can be expanded to become a fully formed chamber.
Music is then understood as a single object in time: it has a beginning and an end, a
plan or a section can be sorted out, and it can even be graphed. This leads to the
idea that a whole musical composition can be seen as a landscape, capable of
supporting many unique trajectories through it. The whole organisation is then based
on a matrix of infinite possibilities and is promised to evoke any sort of emotion that
conventional music could, because it is based on the discovery of interesting nodes in
the matrix. First, architecture existed as a separate category, known as the art of space.
Time was also considered a category, and music was the art of time. Combining the two
by the former principle thus results in a new art of space-time, which can be
called archiMusic.
As any medium can be mapped in this way, its meaning can be multiplied and
compressed into one single representation. Another possible example is the medium of
cinema, which possesses the same sort of linearity as music. Inhabitable cinema in
turn implies the possibility of artificial and discontinuous environments, turning the
cinema of the future into a landscape of opportunity. Furthermore, the same could be
done with dance, by digitally recording movement, after which this data could be
morphed into a definable construct. All this should then lead to the notion that the
landscape of data is an abstraction that makes the information much more
comprehensible.
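As an illustration only, the following minimal Python sketch (my own, not Novak's code) shows how such a mapping might work: the pitch and loudness of a note sequence are translated into the height and mass of a small landscape of forms.

```python
# Hypothetical mapping of musical characteristics onto form: pitch becomes
# terrain height, loudness becomes the width of each block of the landscape.
notes = [(60, 0.8), (64, 0.6), (67, 0.9), (72, 0.4)]   # (MIDI pitch, loudness)

landscape = [
    {
        'x': i,                      # position along the composition's time axis
        'height': pitch - 59,        # pitch mapped onto terrain height
        'width': loudness * 2.0,     # loudness mapped onto the mass of the form
    }
    for i, (pitch, loudness) in enumerate(notes)
]
for block in landscape:
    print(block)
```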
III.4.3 Transmitting Architecture
In an essay called Transmitting Architecture: The Transphysical City31, Novak
investigates more deeply the consequences of his definition of liquid architecture.
Time
He hereby considers architecture to be transmittable, now that habitable and interactive
spaces and places can finally be distributed by electronic means. This leads to the direct
conclusion that theory, practice, and education alike are confronted with questions
without any precedent within the discipline itself, as ”Learning from software supersedes
learning from Las Vegas, the Bauhaus or Vitruvius…”
Hereby he points out that the architect is not only obliged to take an interest in the motion
of the user through the environment, but also has to pay attention to the structure itself,
as it is now able to change its position, attitude, or attributes. Ultimately it is even
considered capable of breathing and transforming. This means that the design of
mechanisms and algorithms of animation and interactivity is required for every act of
architecture. Consequently, the concept of time must mathematically be added to the list of
active parameters of which architecture is a function. This notion brings Novak back to
the already mentioned theory of Bergson, in which objects out of place, time, or plot are
able to colour a scene with their probable histories or futures.
Sampling
For Novak, the world is until today solely understood through the process of sampling,
as even the cognitive mechanisms of the body’s nervous system have to translate the raw
input of numerous sources into some kind of recognisable and meaningful pattern.
‘Reality’ is thus segmented into intervals and then reconstituted to fit a
human understanding, in fact creating a continuous illusion. The concept of sampling
furthermore implies the existence of a field to be sampled, a sample rate or frequency,
and a sampling resolution or sensitivity. Looking at the world as a field is completely
different from understanding it in terms of dialectic, solids, or voids.
Here, the distinction of existence is not considered binary, but made by the concept
of degree. Capturing an object’s boundary is then simply the reconstructed contour of
an arbitrarily chosen value out of the collection of all possible data points. As an
essential characteristic, changing the three features of the sampling mechanism or the
source of the data replaces the shaped and perceived world with completely new ones.
The concept of the plan is furthermore considered dead and inappropriate for capturing
the dynamic flows of the new trajectories, waves, and holes. Indeed, we
already inhabit an invisible world of shapes, an architecture of latent information, ready
to be seen, captured, and creatively visualised.
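A minimal sketch of this sampling argument, under the stated assumptions of a one-dimensional field, a sample rate and a sensitivity, could look as follows; changing either parameter indeed yields a differently reconstructed ‘world’.

```python
# Sampling a continuous 'field': the reconstruction depends entirely on the
# chosen sample rate (frequency) and sensitivity (resolution).
import math

def sample(field, rate, sensitivity):
    """Sample a 1D field at a given rate, quantised to a given sensitivity."""
    return [round(field(t / rate) / sensitivity) * sensitivity
            for t in range(int(rate))]

field = lambda t: math.sin(2 * math.pi * t) + 0.3 * math.sin(20 * math.pi * t)

print(sample(field, rate=8, sensitivity=0.5))    # coarse world: detail vanishes
print(sample(field, rate=64, sensitivity=0.01))  # fine world: ripples appear
```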
Transmission
The astonishing capacity of the electronic net surrounding the planet to carry
information is only just being grasped. Meanwhile, however, its potential is still being
restricted by the present limits of bandwidth. It is thus unlikely, and contrary to the insights of
distributed computing, that a central computer system would be implemented to manufacture one
reality for many participants. The concept that will emerge is quite the opposite: each
user will receive an electronic and compressed description of the world and information
about the state and actions of all other participants. Each participant’s local machine will
then synthesise a version of the shared reality that is similar to, but not necessarily
identical with, the one the others perceive. Each location is thus considered
independent, and yet necessary to make a larger reality possible. Obviously, to
31 NOVAK, MARCOS, Transmitting Architecture, http://www.ctheory.com/a34-transmitting_arch.html
accomplish this task the technique of simple compression is insufficient, since it
imposes the same limit of resolution on all participants, regardless of their
computational and communicational resources. Instead, it is not the object itself, but its
genetic code that will be transmitted, as it possesses all the information for its generation
regardless of location or resources.
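The principle can be sketched in a few lines of Python (an illustration, not an actual cyberspace protocol): only a handful of numeric ‘genes’ cross the network, and each participant’s machine expands them locally, at whatever resolution its own resources allow.

```python
# Transmit the generative code, not the object: the receiver reconstructs
# the geometry locally from a compact description.
import math

def generate(genes, resolution):
    """Expand a compact code into a list of 3D points (a spiralling 'chamber')."""
    amplitude, frequency, height = genes
    return [(math.cos(frequency * t) * amplitude,
             math.sin(frequency * t) * amplitude,
             height * t / resolution)
            for t in range(resolution)]

genes = (2.0, 0.3, 5.0)                          # all that crosses the network
fast_client = generate(genes, resolution=1000)   # fine version of the world
slow_client = generate(genes, resolution=50)     # coarse, but the same world
print(len(fast_client), len(slow_client))
```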
III.4.4 Conclusion
Marcos Novak’s liquid architecture is clearly a dematerialised architecture, an
architecture designed as much in time as in space, changing interactively as a function
of duration, use, and external influence, and described in a compact coded notation.
He deliberately sees architecture as much more than the process of building alone, in his
long search for architectural sign systems that should be both spatial and
encompassing.
Finally, this technique should lead to the application of architectural typologies that will
influence the notion of how people will use future virtual spaces. The theories of John
Frazer, described in the next paragraph, can be situated along almost the same line of
reasoning. He is also able to generate architectural form out of separately observed
phenomena, although he strongly emphasises the notion of electronically building
structures that are still meaningful and useful in the physical world. For this purpose, he
is obliged to struggle with the technique of inserting knowledge and formal constraints
into the growing construction itself. His evolutionary architecture is consequently not
founded on the characteristics of abstract art, but uses many concepts derived
from nature.
III.5 Evolutionary Architecture
“Modern builders need a classification of architectural factors irrespective of time and country, a
classification by essential variation. Some day we shall get a morphology of the art by some
architectural Darwin, who will start from the simple cell and relate to it the most complex structures.”
(William Lethaby, Architecture: an Introduction to the History and Theory of the Art of Building, 1911)
John Frazer is unit master of Diploma Unit 11 at the Architectural Association in London.
He was also a lecturer at the University of Cambridge and was awarded a personal chair
at the University of Ulster in 1984. In his book An Evolutionary Architecture, he
investigates some fundamental computer-based form-generating processes in
architecture. He proposes the model of nature as the generating force for architectural
form, as the concept of architecture is here primarily considered a form of artificial
life, subject to principles of morphogenesis, genetic coding, replication and selection.32
32 FRAZER, JOHN, An Evolutionary Architecture, E.G.Bond Ltd., London, 1995, p.117, http://www.gold.net/ellipsis/evolutionary/evolutionary.html
III.5.1 Nature
The next paragraph will describe some of Frazer’s metaphors and arguments in which
he recognises many characteristics of nature as important and instructive phenomena
in the concept of architecture.
§ Analogy. Throughout history, architectural form and structure have frequently been inspired by concepts from nature. For instance, Sullivan, Wright and Le Corbusier all employed some kind of biological analogy, although in the case of Frazer the primary inspiration is not that of the image. In his case, nature is the generator and instructive example of fundamental formative processes and information systems.
§ Intentionality. It can be argued that the system of evolution operates without any pre-knowledge of what is to come, which means it has no notion of design. In the case of Frazer, the architect is very clear about his or her intentions, but is meanwhile considered ‘blind’ to the eventual outcome of the process that is being created.
§ Inspiration. The perfect and balanced variety of natural forms is the result of the continuous experimentation of evolution. Although vernacular architecture might occasionally share this characteristic, the vast majority of buildings today most certainly do not. Consequently, not only the concept of natural selection should be carried over into architectural development; other aspects of evolution, such as the tendency towards self-organisation, are equally or even more significant.
§ Generation. In Frazer’s view, the technique of ‘shape grammars’ or elemental combinatorial systems for generating architectural design has too many limitations. Not only does this approach require the syntax and grammar of a particular formal language in advance; this ‘kit of parts’ approach, influenced by general problem-solving processes, also seems too complex to serve as a suitable description of architectural terms, while producing an almost unmanageable quantity of permutations as well.
§ Environment. John Frazer promises that his approach to architectural design will reflect much more the changing demands of society, the realities of the construction industry and the pressing need for environmentally responsible buildings. In his view, buildings would act much more like natural ecosystems do: they recycle their materials, permit change and adaptation, and make efficient use of ambient energy. Hereby the design is drawn ‘beyond the object’, as it focuses on user experiences rather than on physical form.
§ Economics. In designing the artificial generating system, clear and economical concepts for the individual logical operations had to be used. Furthermore, just as in natural forms of code such as DNA, information had to be compressed in an extreme manner. Just like the natural ecosystem, these principles resulted, in both hardware and software, in a complex hierarchy built up from the simplest functions.
III.5.2 History
It is no coincidence that the development of computing has been shaped by the building
of computer models for simulating natural processes. For instance, Alan Turing, an
important figure in the development of the concept of the computer, was interested in
morphology and its simulation by computer-based mathematical processes. Von
Neumann, another significant key figure, was explicitly searching for a theory that
would encompass both natural and artificial biologies, with the notion that the basis of
life was information.
The Turing Machine
In 1935, Alan Turing conceived an abstract experiment in which, for the first time in
history, a universal computing machine33 was described. This re-programmable, digital
machine worked with an endless paper tape and a head that could move freely along
the tape, reading, writing, or erasing symbols. The then revolutionary idea was that any
computable process could thus be performed by following a set of logical instructions
on the tape. It would take until the Second World War before Turing’s designs were built
and these machines were used to break the German Enigma code. By this time, Turing
had moved on and had already proposed the notion of artificial intelligence. Later, Turing
would mainly use the computer for modelling morpho-genetic processes, research that
would occupy him for the rest of his life.
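Turing’s scheme is simple enough to sketch directly; the following toy simulator (an illustration, not Turing’s own formulation) runs a rule table over an unbounded tape.

```python
# A minimal Turing machine: a head moves along an 'endless' tape, reading,
# writing and erasing symbols according to a table of logical instructions.
def run(tape, rules, state='start', head=0, max_steps=100):
    tape = dict(enumerate(tape))             # sparse dictionary as endless tape
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = tape.get(head, '_')         # '_' marks a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape))

rules = {                                    # a toy machine that flips bits
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'R', 'halt'),
}
print(run('1011', rules))                    # prints '0100_'
```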
The von Neumann Machine
Starting from Turing’s concept of the universal computing machine, John von Neumann
developed the foundation of the serial computer, defining the three basic elements of
central processor, memory, and control unit. Although he went on to build the first
American computers, his work on self-replicating automata would be more significant. In
this field, he considered the Turing machine to represent a class of universal automata
that could solve an infinity of logical problems. Furthermore, he began to investigate the
possibility of one automaton taking some raw materials and building another
automaton. Hereby, the feasibility of such an automaton physically replicating itself into
more complex forms was examined.
Ultimately, this important investigation of a self-building, evolving automaton resulted in
an immensely complicated and essentially unbuildable project, requiring some 200,000
cells in any one of twenty-nine states. Logically, this remained a paper exercise, part
cellular automaton, part robot, having only a conceptual robot arm. It was thus
von Neumann who recognised that life depends upon reaching the critical level of
complexity at which items are able to self-organise and self-reproduce into more
complicated objects. Furthermore, it seems that life exists on the edge of chaos, and in
fact this is the point of departure for Frazer’s new model of architecture.
III.5.3 Generative Systems
In this paragraph, some of the techniques described in An Evolutionary
Architecture are further investigated. Although Frazer himself does not mention the
phenomenon of cyberspace, it can easily be imagined that these techniques would be
used in the electronically transmittable realm, as it will be shown that a maximum
impact of materialised form can be created out of a strict minimum of computer code.
The role of the Computer
“A digital computer is, essentially, the same as a huge army of clerks, equipped with rule books,
pencil and paper, all stupid and entirely without initiative, but able to follow exactly millions of
precisely defined operations… In asking how the computer might be applied to architectural design,
we must, therefore, ask ourselves what problems we know of in design that could be solved by such
an army of clerks… At the moment, there are few such problems.”
(Christopher Alexander, The Question of Computers in Design, 1967)
33 John Frazer refers to TURING, A.M., On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 1937
But actually, the evolutionary approach is ‘exactly’ the sort of problem that could be
given to such an army of clerks, as the difficulty lies much more in developing and
prescribing the ‘rule book’. It should furthermore be noted that the use of a computer is not
without dangers. ‘Imaginative use’ is translated into ‘using the computer’ to
compress information in such a way that complex architectural form is able to develop.
This is reflected in a rather radical change of the applied human intuition, perception,
and imagination, although the first step still relies on the human skills of its creator. Or,
as Frazer himself puts it: ”The prototyping, modelling, testing, evaluation and evolution
all use the formidable power of the computer, but the initial spark comes from human
creativity.”
The Concepts
Throughout the following projects, it was more than necessary to develop and design
essential new tools: computer software, computer languages and even prototype
computer hardware, as there was a clear shortage of suitable and efficient resources in
these fields. The following list sums up the most important efforts made by
Frazer’s research team.
§ Data-structures are important to contain the graphic representation and the results of the various transformations. Commonly this takes the form of a matrix, which can be multiplied by the matrices of the transformations.
§ Transformations such as scaling, translating, reflecting, differential scaling, rotation and shearing in the three dimensions were programmed. Furthermore, orthographic, axonometric, isometric and perspective projections were added.
§ Symmetry operations are extensively used in design and architecture, although only the simplest procedures, such as reflection and rotation, are generally available. Instead of implementing the seventeen 2D possibilities, a tool was created that made all 230 symmetry operations in three dimensions possible.
§ Shape processing graphical programs had to be defined, of which the interface offers the user the capability to transmit the required commands for computable evaluation. A minimal sketch of the underlying matrix technique follows this list.
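As announced in the list, the matrix technique can be sketched in a few lines (assuming numpy, and using illustrative names): geometry is stored as a matrix of homogeneous coordinates and multiplied by transformation matrices.

```python
# Geometry as a matrix of homogeneous coordinates, multiplied by the
# matrices of transformations such as rotation and differential scaling.
import numpy as np

points = np.array([[0, 0, 0, 1],     # one homogeneous coordinate per row
                   [1, 0, 0, 1],
                   [1, 1, 0, 1]]).T

def rotate_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0, 0],
                     [np.sin(a),  np.cos(a), 0, 0],
                     [0,          0,         1, 0],
                     [0,          0,         0, 1]])

def scale(sx, sy, sz):                # differential scaling
    return np.diag([sx, sy, sz, 1])

transformed = rotate_z(np.pi / 2) @ scale(2, 1, 1) @ points
print(np.round(transformed.T, 2))
```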
III.5.4 The Tools
In the earliest phase, solar-geometry programs were developed, capable of evaluating
environmental performance and showing the related shadow and sun-path
movement on the associated architectural projects. They were not only able to perform
analysis, but assisted with the processes of design as well. It was only later that the
research emphasis shifted to the generation of architectural form. Hereby Frazer tried to
imagine a model in which architecture exhibits the characteristics of metabolism,
self-production, and mutability, all elements that can be considered essential
requirements of life as well.
The Generator
In 1979, the first working model of a self-replicating computer was built. It consisted of
a collection of three-dimensional cubes, able to explore their neighbouring sides.
After the development of communication, this resulted in electronic processes of
self-inspection and transmission of various messages. All this work was then
concentrated in the so-called Generator project. It was thought possible to
physically build a certain ‘intelligent structure’, which was able to “learn from the
alterations it made to its own organisation, and coach itself to make better suggestions.”
This environmental control system would even register ‘boredom’ if the building were
not changed enough, and ultimately possess a kind of artificial consciousness.
The Universal Constructor
This early idea was further investigated in the Universal Constructor project, developed
in 1990. The cubes now additionally received the capability to output messages to the
surrounding environment by illuminating combinations of eight LED lights.
This finally enabled modifying actions such as ‘take me away’ or ‘add a cube on top’, which
could now be performed by the physically present participant. The model as a whole
also possessed a common computer program for interrogation, message-passing, and
screen display, which permitted the mapping of different codes in the
physically flat base onto height in the computer model of the virtual contoured
landscape. A problem could thus be implemented by adding some environmental
features with the coded cubes. The application program would then respond by the
addition or removal of cubes representing its coded and diagrammatic response.
The range of possible applications included three-dimensional cellular automata
responding to a complex site problem, a Fibonacci curve-fitting program, and an
encoding of a suitable landscape for a particular dance performance. This application
can thus clearly be understood as a powerful tool for the explanation and demonstration
of a radically new design process, able to represent an expression of logic in space.
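The rule-based intuition behind such a constructor can be suggested with a small cellular automaton sketch; the rule below is hypothetical and far simpler than Frazer’s actual system, but it shows how cells added and removed by purely local neighbour rules can make form emerge.

```python
# Cells on a grid are added or removed according to the states of their
# neighbours, so that global form emerges from local, coded rules.
def step(cells):
    """One generation; cells is a set of occupied (x, y) positions."""
    candidates = {(x + dx, y + dy) for x, y in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    nxt = set()
    for c in candidates:
        n = sum((c[0] + dx, c[1] + dy) in cells
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0))
        # hypothetical rule: survive with 1-3 neighbours, be born with exactly 2
        if (c in cells and 1 <= n <= 3) or (c not in cells and n == 2):
            nxt.add(c)
    return nxt

cells = {(0, 0), (1, 0), (2, 0)}      # a simple seed
for generation in range(4):
    cells = step(cells)
print(sorted(cells))
```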
Polyautomata
“Polyautomata is a branch of computational theory concerned with a multitude of
interconnected automata acting in parallel to form a larger automaton,” notes
John Frazer. These systems were investigated for their potential as generative
devices and for their simplicity, which makes them appropriate for exploring rule-based
systems. Out of this theoretical field, the technique of the genetic algorithm was chosen
for communicating the compressed information necessary for the generation and
optimisation of form. Genetic algorithms are a class of highly parallel, evolutionary,
adaptive search procedures, characterised by string-like structures equivalent to the
chromosomes of nature. They actually consist of a coded form of parameters, and
are considered highly parallel because they search using populations of potential
solutions rather than performing this task randomly. Furthermore, they are described as
adaptive because they reach optimal solutions through gradual changes within the
population over several generations.
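A genetic algorithm of this kind can be sketched very compactly; the following is a generic textbook version, not Frazer’s implementation, and its fitness function is a deliberately trivial stand-in for real architectural criteria.

```python
# Evolving a population of chromosome-like bit strings by selection,
# one-point crossover and mutation.
import random

def evolve(fitness, length=16, pop_size=30, generations=50, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # toy objective: maximise the number of 1-bits
print(best, sum(best))
```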
III.5.5 The Evolutionary Model
In nature, the genetically coded information consists of manufacturing instructions, but
its precise expression is environmentally dependent. Frazer’s model of architecture,
on which work started as early as 1968, was initially conceived as a form of artificial life
and contained such a code-script, which was in fact the sole element able to
‘evolve’. But many more conceptual elements were needed to achieve an ideal
evolutionary model.
§ A genetic code-script
§ Rules for the development of the code
§ The mapping of the code to a virtual model
§ The nature of the environment for the development of the model
§ The criteria for selection
Figure III-7 The different evolution levels of a materialised genetic code script.
(http://www.gold.net/ellipsis/evolutionary/evolutionary.html)
Architectural Style
In order to create a genetic description, it was considered important to develop and
code a certain architectural concept in a generic and universal form. This approach
should then be capable of being expressed in a variety of structures and spatial
configurations in response to different environments, a strategy also followed by
individual architects who apply their generally immediately recognisable,
personal formal language to all of their projects.
“What we are now proposing is a technique applicable to a wide range of
architectural concepts and geometries, all conceived as generative systems
susceptible to development and evolution, all possessing that quality
characterised by Viollet-le-Duc as ‘style’: ‘the manifestation of an ideal established
upon a principle’.”34
This style should then represent the alternative to the approach of most existing
CAAD-software, in which intensive modelling and simulation are rather difficult to perform
during the designing phase itself, and in which objective evaluation of different alternative
approaches is still not widely implemented. The model should thus be
adapted iteratively in the computer in response to the feedback from the applied
evaluation.
The genetic algorithms act as seeds out of which abstract representations of
structure, spaces, and surfaces are calculated. Two kinds of information have to
be stored in the overall framework: the coded, conceptual model of the building
information, and a description of the actual components and details for the output stage.
This information is different from the information derived from the seeds themselves and is
thus necessary when the generative technique is started. The resulting seed is then
compared to the user’s database of requirements for a particular building, after which
the seed is grown and stretched until it conforms to these requirements. These
modifications are processed in two ways. First, optimisation routines search and
evaluate alternative strategies, after which the program adjusts itself to the most
successful ones. Secondly, the user evaluates a series of solutions in terms of
non-quantifiable criteria, including aesthetic judgements.
34 FRAZER, JOHN, An Evolutionary Architecture, E.G.Bond Ltd., London, 1995, p.67
The Universal Interactor
The main structure of this program, developed in 1992, contained a possible solution to
the problem of providing suitable environmental factors for computable evaluation.
Experimental antennae were developed which were either transmitters or receivers of
information. The receivers used different technical means to sense several kinds of
movement, sound, colour, wind patterns, and touch. The transmitters were able to send
out sound, light, and even movement, which in fact represented the emotional state of
the system, hereby encouraging human participation by experimenting with the notions
of conflict and co-operation. Here the form of the data-structure is based on a direct
analogy with DNA, and is derived by measuring properties on the sides of the cubes. In
this technique, environmental cues and internal rules determined the seed’s response,
which resulted in a proportional evaluation that was then passed to the genetic
algorithm to select the cells for the next stage. Unfortunately, the drawback of the
system was its limitation to the three-dimensional geometry of the specific components
on which it was based.
The Universal State Space Modeller
The computer environment for this developed model, also described as the
‘architectural genetic search space’, was called the Universal State Space Modeller.
This technique was capable of modelling any structure or space, as both the environment
and the structures themselves were evaluated in exactly the same way. Each point carries all
possible information about its identity and its neighbours (properties, history, location and
much more), plus a complete copy of the architectural genetic code and the instructions
needed for the generation of forms. Hereby, the data-structure itself is the program, in
the sense that “only the whole knows about the whole”.
In this way, information travels through the groupings of cells concentrically in the form
of logic fields. This means that the form of the model can thus evolve in response to the
environment, and that it is able to learn which rules are successful in developing and
modifying form. In this instance, both the conceptual model and the structure itself
can consequently be called intelligent. It was even discovered that individual cells
gradually take on specialist functions in the generation of the structure. Consequently,
the visible form of the process-driven virtual model can be considered a by-product of
cellular activity, in fact representing in-form-ation in the most literal sense. Furthermore,
the model has a greater capacity to generate buildable forms than ordinary CAAD-models,
since it understands the process of manufacture and construction, and because
most environmental components (including site, climate, users, cultural context, …) are
here viewed as simple logical states in the data-structure.
III.5.6 Conclusion
Naturally, this approach implies drastic changes in some of the architect’s working
methods. Architects now have to determine the criteria for evaluating an idea, and even
have to be prepared to accept the concept of client and user participation in the
process. The design responsibility hereby changes radically to one of overall concept
and embedded detail, as Frazer is convinced that the role of the architect will be enhanced.
In his view, it becomes possible to seed far more complex generations of new designs
than could be individually supervised, possessing a complexity that would otherwise be
impossible.
Architects would now be transformed into the creators of rich genetic ideas, while “the
role of the mass of imitators would be more efficiently accomplished by the machine.”
Furthermore, the building process could be incorporated into the model by the
application of various nature-based scientific insights into biological and physical
construction. Many socio-economic advantages could arise, as this evolutionary
technique is primarily based on the equilibrium between the architectural concept and the
influences of the environment. Ultimately, Frazer foresees the unpredictable future
evolution of the seeds themselves as well. Taking the example of natural selection,
which has ‘superb tactics but no strategy’, architectural life could then emerge out of
nothing, with no preconceptions, even with no design at all. This should even lead to the
application of self-constructing physical buildings, already imagined by many other
authors.
“Our new architecture will emerge on the very edge of chaos, where all living
things emerge, and it will inevitably share some characteristics of primitive life
form. And from this chaos will emerge order: order not particular, peculiar, odd, or
contrived, but order generic, typical, natural, fundamental and inevitable – the
order of life.”35
III.6 Conclusion
Discussions about the relationships between the actual and the virtual have proven to
polarise very easily. Nevertheless, it seems that the city of the future will be
intensively filled with forms of intelligence. Sensors and effectors will be considered
normal and will be linked everywhere to information utilities, just like running water.
Following Marcos Novak, it is a certainty that urbanism will alter, since cities will become
the alternative interfaces to the net. This non-local urbanism, freed from a fixed
geometry, will not be the post-physical city; it will be a transmutation of the known,
standing alongside as well as being interwoven into the contemporary reality of the city.
Architecture itself is already being influenced by the newest insights of those architects
who are exploring the electronic frontiers of programmable form-generating
techniques. Although the spatial effects of the dynamic and digital society are becoming
clearer in some emerging contemporary architectural projects, much can still
be done in the development and invention of new principles and applications in this
field. It should furthermore not be forgotten that it has never been stated that the whole
field of architecture should be devoted to the phenomenon of the virtual. Moreover,
many other formally recognisable movements are now emerging as well, such as
minimalism, which in fact stands for quite the opposite view to the principles
mentioned in this chapter. It is thus clear that the notion of cyberspace architecture
should not be interpreted as the new saviour of architecture.
35 FRAZER, JOHN, An Evolutionary Architecture, E.G.Bond Ltd., London, 1995, p.103
IV Information Architecture
“In cyberspace, the real is hyper-real and reality becomes virtual. In this space that is no
place and yet is not everywhere, what does it mean to build?”
(Mark C. Taylor – in Any, No.3, Nov/Dec 1993, p.24)
In this last chapter, the attention will be focused much more on one particular aspect of
the relationship between cyberspace and architecture. In fact, the field of information
visualisation can be seen as an important and rapidly developing scientific field of its
own. It is thus certainly not the intention to examine thoroughly the technical aspects or
detailed concepts of the applications based on the insights of cognitive perception and
efficient programming. On the contrary, it is much more interesting, perhaps, to pursue
the future possibilities architecture could create when cyberspace is understood as a
primarily informational tool. Furthermore, it is noticeable that many authors mention or
are convinced that cyberspace also possesses some valid rules and principles which its
designers should follow. It is remarkable, however, that not many dare or are able to
theorise them and write them down. In this view, the principles Michael Benedikt himself
has proposed are considered more than challenging, certainly from the point of view
that a part of the future of architecture is foreseen to be built in the virtual realm.
IV.1 Introduction
“Cyberspace is an invented world. As a world, it requires ‘physics’, ‘subjects’,
‘objects’, and ‘processes’, a whole ecosystem. This is made possible by the
representation techniques and calculation power of contemporary electronic
technology. This digital technology has implemented dissociation between data,
information, form, and appearance. Form seems now to be governed by
representation, while data is a binary stream, and information is pattern perceived
in the data after the data has been seen through the expectations of a general
representation scheme or code.”1
To date, cyberspace is already motivating research projects in science, art, business,
and architecture. For Michael Benedikt, the ‘cyberspace program’ should begin
experimentally, probably by first creating ‘crude’ and ‘fragile’ cyberspaces with a limited
number of users, out of which the most essential lessons should be learned. He
foresees this process taking decades, meanwhile offering various spin-offs into
many areas in the field of computing, such as hardware, software, telecommunications,
and interface design. Furthermore, thousands of engineers, programmers, designers,
and managers will be working to make this visionary cyberspace a fact of reality,
investigating and exploring this tool meant to increase the productivity of many
companies and agencies.
“Because the design, institution, and management of cyberspace will be a task of
immense scale and complexity, it can simply be argued that ‘it is never too soon
to begin’”2
IV.2 The Information Revolution
“The information sector of our economy is enormous – including mass media
(newspapers, magazines, books, online services, movies, radio, and television),
information systems, educational institutions, and more. No industry or enterprise
is untouched by the persuasive influence of the information revolution.
Understanding this revolution requires an examination of the determinants and
sources of the value of information and the impact of that value on the
organisational infrastructure of business and commerce.”3
IV.2.1 Cyberspace as an Information Tool
David Whittle is convinced that the right information (at the right time) can have an
enormous value and consequently command a high price. In that respect,
information is anything but a commodity, as its value varies from person to person and
its price often has little to do with its value. But as the supply of information drastically
exceeds the actual demand, and since the cost of sharing is very low, the price quickly
approaches zero. It is a fact that information available on the Internet is characterised by
its immediacy and sheer breadth and scope.
Today, there are databases of all kinds, varying from high-priced, real-time
financial and stock market information to more hidden but free collections of high-school
students’ papers. Already, this electronic network can deliver any digital good
imaginable, although illegal copying of electronic content is still a serious problem.
1 NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, p.234
2 BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London, 1991, p.189
3 WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996, p.306
Furthermore, the primary known disadvantages of Internet cyberspace as a commercial tool, in fact based upon the thoughts of David Whittle, are listed and briefly explained below.
§ Difficulty of access. Most users often run into a stubborn and overcomplicated, sometimes even over-designed user interface, which is often characterised by a great amount of text and links asking for immediate attention, or a search button on the first web page. Of course, the extreme opposite design should not be considered really user-friendly either.
§ Bandwidth constraints. As the technology itself is still in a phase of fast and radical development, high-bandwidth fibre wires are still not a common good for ordinary modem users. Consequently, telephone and coaxial wire will probably still be used in most homes for the next decade.
§ Wide disparity in the quality and applicability of information. The amount of similar information present is often astonishing. Serious efforts are being undertaken to implement various techniques of filtering and searching, which should adapt to the user’s needs over a period of time. Much is expected from the technology of agents4 in this matter. The latter should be able to perform delegated standard tasks and even make simple decisions on the user’s behalf. For instance, actions should be made possible such as screening and removing junk e-mail, searching for and negotiating the cheapest air travel, and scanning news services, picking items that the user has proved to be interested in, and so on.
§ Lack of real security. As the most economically oriented problem, this lack of privacy received a technical solution quite quickly. The technology used for this matter is known as Pretty Good Privacy (PGP). In short, based on the characteristics of very large prime numbers, it offers the possibility of encrypting messages with the so-called unique public key of a certain individual user, who is then the only person able to decrypt them upon arrival by means of his own, complementary, secret private key. A toy sketch of this public-key principle follows below.
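As announced in the last item, the public-key principle can be illustrated with a toy version of the RSA scheme on which such systems rest; the primes below are absurdly small, whereas real systems such as PGP rely on numbers of hundreds of digits, which is precisely what makes them secure.

```python
# Toy public-key encryption (RSA-style; Python 3.8+ for pow(e, -1, phi)).
def make_keys(p, q, e):
    """Return (public, private) keys from two primes and a public exponent."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # modular inverse: d * e == 1 (mod phi)
    return (n, e), (n, d)

def crypt(message_int, key):
    n, exponent = key
    return pow(message_int, exponent, n)

public, private = make_keys(p=61, q=53, e=17)
cipher = crypt(42, public)         # anyone may encrypt with the public key
plain = crypt(cipher, private)     # only the private key recovers the message
assert plain == 42
```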
IV.2.2 The Value of Online Information
To clarify the role and the characteristics of cyberspace as a delivery mechanism of
information, six different key factors in this matter have been recognised and will be
explained.
Convenience
Netsurfing is more convenient for quick access to a huge amount of knowledge, while
printed media are better suited for portability, content, and permanence. This is so
because cyberspace can certainly be considered to be in a constant state of flux. Although it
can be argued that this is an easy way to keep information current, it also comes at the
expense of history and permanence. For instance, it is likely that
by the time you read this text, some of the cyberspace sources will have disappeared.
Quality
With the millions of pages currently in existence on the World Wide Web, the novelty of
looking at Web pages will surely give way to a demand for quality content. Often, only the
raw material of ideas is found in cyberspace, where reviewed, useful, stimulating and
well-reasoned writings would be more appropriate.
4 The concept of the agent embodies a help tool for humans in which expertise is mixed with knowledge of the user. This idea can easily be compared to the intrinsic characteristics of travel agents, real estate agents and so on.
Granularity
Granularity stands for the size of the pieces of information that are still of value to
the consumer. For instance, when a user requires a certain series of magazine articles,
he generally needs to take out a subscription for a whole year, which can be considered
a rough granularity of the offered information. With traditional, physically limited
information, the finer the granularity, the more expensive that information is to deliver
and the narrower the market. But now, cyberspace promises a big step forward in
targeting and delivery technologies, as the granularity can increase drastically. Imagine
the opportunity of an Internet subscription to your favourite newspaper cartoon, for one
cartoon a day: the individual member price would be about one cent a day (100,000
subscribers at one cent a day gross roughly $1,000 a day), while the cartoonist himself,
with a public of 100,000 online users, earns over $250,000 a year.5 In short, the
granularity phenomenon should result in a more efficient information flow, while the
time and effort spent gathering it will either decrease or become more productive. The
impacts will be far reaching, if not immediate.
Accessibility
Accessibility is the ease with which information can be obtained and understood. As
dramatic as the total growth of the Internet phenomenon may be, and no matter how
well its potential is understood, it will not become an everyday part of most lives until it
becomes as accessible as, for instance, cable TV. Furthermore, questions about the
universal access to cyberspace can be raised again, as not all social classes possess
the opportunity to ‘go online’.
Suitability
Information is more valuable when it is suited to the needs of the consumer. This
presents a challenge, because the advantages of convenience and granularity should not
be lost through lack of quality and suitability. In an attempt to solve this very problem,
so-called search engines with filtering capabilities have been created. What can be noted is
the remarkable lack of good, well-known, reputable catalogues, critics, commentary,
and best-seller lists for the Internet. In short, finding the information best suited to
a certain well-defined target is often more than a challenge, although once found, it
has the advantage of being continuously available.6
Scarcity
Scarcity affects the economic value of information. As more and more information
becomes available at lower cost, people will become more educated, which is considered of
great value in any society. As the value of such information itself is inherent and thus
not based on its price, it can be argued that this phenomenon could trigger growth of the
economy as a whole, as information and knowledge increase productivity and the
effective use of other forms of capital.7
“We commonly mistake data for information. Information starts with data, but data is not information. It is
a source of information.”
(Ramesh Jain, in Elements of Hypermedia Design, 1995)
5 NEGROPONTE, NICHOLAS, Being Digital, Hodder & Stoughton, London, 1995, p.152
6 Comment: This should obviously be interpreted very carefully, as web pages have the characteristic of disappearing rapidly and without any notice, leaving the user behind with a ‘File Not Found’ notice.
7 It should be noted, of course, that this is a typically American point of view, from a country in which the development of the Internet is intensively supported by the federal government.
IV.2.3 Search Engines
As the aspect of retrieving information is a rather important issue when vast electronic
networks such as the Internet are used, some of the best-known and most-used search
engines will be further investigated. As some aspects will be used later, it might be
interesting as well to know how the digital techniques of these online applications
actually work; a minimal sketch of the underlying indexing technique follows the list. It
should be noted that the following list8 is only an indication of the general characteristics
of the techniques used, as this technology could also be considered a substantial
researchable informational field of its own.
§ Alta Vista (http://www.altavista.digital.com) is developed by Digital Corp. and is one of the most powerful and flexible search engines on the Web today. Each day, database entries are gathered by a web crawler which enters a WWW-site and thoroughly indexes the page contents. The frequencies and proximities of significant words are tallied and form the basis of the order of display in the search results provided by the engine. Furthermore, an objective investigation presented in March ’98 at the 7th International World Wide Web Conference (WWW7) stated that AltaVista was in fact the biggest search engine at that time, having indexed an estimated forty percent of the available pages on the Web (which in turn would consist of 275 million distinct pages).
§ Excite! (http://www.excite.com) is developed by Excite Inc. It uses an artificial intelligence technology to establish relationships among the terms that the web crawler finds on indexed pages. The search engine handles entered phrases and finds the closest matches using fuzzy logic9, while a relative relevance is established for closeness of fit to the query.
§ HotBot (http://www.hotbot.com) was developed by Inktomi Corp., which in fact does not exist any more, and was formerly part of the Network of Workstations Project at the University of Berkeley. The power of this engine lies in its ability to use artificial intelligence to record geographic information, URLs and domain names as well as file names and types, such as JavaScript, VRML, embedding features, etc.
§ InfoSeek (http://www.infoseek.com) is produced by the Infoseek Corp. and shares most characteristics of the other engines. One advanced feature is considered special, as terms can be defined to be required or excluded.
§ Lycos (http://www.lycos.com) was developed at Carnegie Mellon University but is now independent. A certain feature allows the users to define the closeness of fit (distance, number of times, etc.) among the terms entered.
§ WebCrawler (http://www.webcrawler.com) started its existence at the Department of Computer Science and Engineering at the University of Washington and was later purchased by Excite Corp. It was the very first full-text search engine available on the Internet. User submissions as well as the input of the web crawler are used to build the database.
§ Yahoo! (http://www.yahoo.com) builds its search indexes primarily from user submissions. It presents a highly structured, hierarchical subject directory, as it is the outgrowth of one of the earliest attempts at categorising information found on the Internet at Stanford University, and is still considered a superb starting point.
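As announced above, the indexing idea these engines share can be sketched in a few lines; this is a schematic illustration, not the code of any engine mentioned.

```python
# An inverted index: word frequencies are tallied per page, and a query
# ranks pages by how often its terms occur in them.
from collections import defaultdict

pages = {
    'page1.html': "liquid architecture in cyberspace",
    'page2.html': "cyberspace architecture and the city architecture",
}

index = defaultdict(lambda: defaultdict(int))   # word -> page -> frequency
for url, text in pages.items():
    for word in text.lower().split():
        index[word][url] += 1

def search(query):
    """Rank pages by the summed frequency of the query terms."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url, freq in index.get(word, {}).items():
            scores[url] += freq
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("cyberspace architecture"))        # page2 ranks first (3 hits vs 2)
```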
8 Much of the information is retrieved from an intensive investigation and comparison of the most used search engines done by RALPH, D. RANDY, Search Engines: Indexes, Directories and Libraries, http://www.netstrider.com/search/directory.html, April 1997
9 This is particularly easy for novice users, as the technique compensates for the input of poorly formed queries.
Figure IV-1 The information which can be retrieved out of a typical search result.
(search result is retrieved from http://www.lycos.com )
IV.3 3D Information Visualisation
IV.3.1 Information Quantity
It has already been noted that computer networks such as the World Wide Web are growing
in a rapid, almost truly exponential manner. Coupled with the increasing processing
power and storage capacity of contemporary computer systems and the decreasing
price of this high-performance computing hardware for the general public, it is only
logical that this results in an ‘information big bang’, although it can easily be argued that
information is maybe not the right word to describe this huge mass of meaningless
‘noise’. Some critical researchers, Ziauddin Sardar for instance, describe the situation of
the Internet today as follows.
“The net, in fact, provides us with a grotesque soup of information: statistics, data
and chatter from the military, academia, research institutions, purveyors of
pornography, addicts of Western pop music and culture, right-wings extremists,
lunatics who go on about aliens, paedophiles and all those contemplating sex with
a donkey. A great deal of this stuff is obscene; much of it is local; most of it is
deafening noise.”10
In this chaotic pile of information, an increasing part of which is becoming purely
commercial, it is a fact that although the amount of data has grown, the amount of
information certainly has not. Meanwhile, the problems of finding relevant information
have shifted thoroughly. At first, information was not easily accessible or searchable,
although this has changed since the emergence of so-called search agents and search
engines. However, these possess the characteristic of most ordinary software, namely
to ignore any possible context or question in which the information is being asked.
The problem thus no longer lies in getting the information as such, but in
finding it and sorting out the one useful record from the more than a hundred similar
items.
Researchers are investigating various tools and techniques to offer the user
comfortable and economical applications to get past this problem. Algorithms and methods
for intelligent data evaluation are being developed to automate information filtering, able
to recognise the elements and records relevant to the user. On the other hand, research
into visualisation techniques is receiving much attention. This concept puts the task of
retrieving relevant data more on the side of the users themselves. It even allows the
users to make use of their cognitive, perceptual and intuitive skills to find data which
may be of interest but could be missed by search algorithms, for instance because it is
not directly relevant to the query. Before some principles and examples of the
architecturally inspired techniques in this area are more closely and thoroughly
10 SARDAR, ZIAUDDIN, alt.civilisations.faq: Cyberspace as the Darker Side of the West, in SARDAR, ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the Information Superhighway, Pluto Press, London, 1996
investigated, it can be useful to describe the variety of now existing 3D information
visualisation techniques.
IV.3.2 Visualisation Techniques
The growth of specially designed graphical user interfaces (GUIs) started at the Xerox Palo
Alto Research Centre (PARC) in 1971. This research daughter of the photocopying
company Xerox put much research into the future of the so-called ‘paperless’
office, the natural opposite of the company’s actual business.11 The approach that the PARC
researchers adopted was to show the computer’s resources graphically, so that the user could
explore and discover everything that could be done with the computer.
The best-known application to emerge from this laboratory’s research is the comfortable,
user-friendly Apple Macintosh graphical desktop, which still receives much merit even today.12
Within the concept of cyberspace, these GUI technologies enable the user to
experience control (‘cyber-’) as a projection of self, out of one’s own centre and will,
into a field of activity which can be characterised as space. This space is real because
it is independent of the potential user; it is even more real because it is able to
respond and interact. In this view, the challenge for the future computer industry is not to
deliver better and faster hardware technology, but to develop a completely different
concept of user interface, one that recognises the presence and needs of its user
through more than the keyboard alone. Still, while part of this research is
concentrating on the communication level of this task (such as speech recognition and
other artificial agents capable of learning the user’s preferences from experience),
much work is being done to visualise and offer all kinds of information
in a more intuitive and human-friendly manner. Most efforts in this matter focus
intensely on the possibilities of three-dimensional representation techniques, resulting in
the applications that will be explained in the next paragraph. Most of the
techniques described can be classified as belonging to one of the following three
groups.
Classification of 3D Visualisation Techniques
§ Mapping techniques: use some aspect, property, or value of the data-elements to produce a mapping onto objects within the visualisation.
§ Presentation techniques: concentrate on the appearance, accessibility, and usability of the data, which should result in a user-friendly and intuitive interface.
§ Dynamic techniques: enrich the visualisation with behaviour and dynamic properties, able to respond automatically to changes in the data or actions by the user.
The next classification is largely taken from the book Elements of Hypermedia Design13
and from an investigation of three-dimensional information visualisation by Peter
Young14, which was found on the Internet. Of course, this classification is merely
conceptual, and a degree of overlap can easily be noticed in some of the following
cases.
11 WOOLLEY, BENJAMIN, Virtual Worlds: a Journey in Hype and Hyperreality, Blackwell Publishers, Oxford, 1992, p.147
12 In order to get hold of 100,000 Apple shares, Xerox offered Apple access to PARC’s research achievements. This deal proved expensive when Apple’s co-founder Steve Jobs took one particular project (for free) and developed it into the Apple Lisa, a technology which has meanwhile proved to be worth millions of dollars.
13 GLOOR, PETER, Elements of Hypermedia Design: Techniques for Navigation and Visualisation in Cyberspace, Birkhäuser, Berlin
14 YOUNG, PETER, Three Dimensional Information Visualisation, 1996, (also published in Computer Science Technical Report, No. 12/96), http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/
1. Surface Plots
Surface plots can be considered the most familiar extension of standard 2D
graphs. These visualisations are constructed by plotting data triples: the X and Z axes
usually contain standard sets with a regular structure, while the height along the Y-axis
represents the variable data. After the resulting points are netted and coloured, the
visualisation resembles a landscape with relatively easily detectable patterns or
irregularities.
2. Cityscapes
Cityscapes are created in a similar way to surface plots, by mapping scalar data values
onto the height of 3D vertical bars or blocks placed on a uniform 2D
horizontal plane. Several actions and features can be implemented on this visualisation
to search for and clarify certain perceivable patterns. A minimal plotting sketch of both
techniques follows.
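Both techniques are easily sketched (assuming numpy and matplotlib): scalar values on a regular grid become either a netted surface or a city of blocks.

```python
# Surface plot and cityscape of the same grid of stand-in data values.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3D projection

x, z = np.meshgrid(np.arange(20), np.arange(20))
data = np.sin(x / 3.0) * np.cos(z / 3.0)        # stand-in for real data

fig = plt.figure()
ax1 = fig.add_subplot(121, projection='3d')
ax1.plot_surface(x, z, data)                    # surface plot: netted landscape

xs, zs = x.ravel(), z.ravel()
ax2 = fig.add_subplot(122, projection='3d')
ax2.bar3d(xs, zs, np.zeros(xs.size),            # cityscape: bars on a flat plane
          0.8, 0.8, data.ravel() + 1)
plt.show()
```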
3. Fish-eye Views
The name is taken from the similar view produced by a very wide-angle ‘fish-eye’ lens.
This technique results in a view in which the objects of greatest detail and
magnification appear in the middle, whereas objects on the periphery
are distorted in a way that shows less detail. This allows a detailed study of objects of
interest, while maintaining a view of context or position with respect to the other objects.
The technique has already received widespread implementation, and has proved
very useful in visualising large graphs containing many interconnected nodes.
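One common formulation of this distortion (after Sarkar and Brown’s graphical fish-eye views; the constant d below is an assumed magnification factor) can be sketched as follows.

```python
# Fish-eye mapping: points near the focus are magnified, distant points
# are compressed towards the periphery.
import math

def fisheye(point, focus, d=3.0):
    """Map a 2D point; d controls the strength of the distortion."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    r = math.hypot(dx, dy)
    if r == 0:
        return point
    r_new = (d + 1) * r / (d * r + 1)   # magnifies small r, saturates large r
    return (focus[0] + dx * r_new / r, focus[1] + dy * r_new / r)

print(fisheye((0.1, 0.0), (0.0, 0.0)))  # near the focus: pushed outward (~0.31)
print(fisheye((0.9, 0.0), (0.0, 0.0)))  # at the periphery: barely moved (~0.97)
```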
4. Benediktine Space
The term ‘Benediktine space’ can be traced back to Michael Benedikt’s research into the
structure of cyberspace. He has intensively investigated the possibilities of mapping
attributes of an object onto certain intrinsic and extrinsic spatial values, based on the
two principles of exclusion and maximal exclusion. This very concept will be the subject
of a thorough investigation and will be further explained later in this chapter.
5. Perspective Walls
Perspective walls are able to represent large, linearly structured information, allowing
the user to view and navigate freely while maintaining some degree of location or
context. They fold the linear structure into a 3D space, for example forming a cylindrical
shell with the data mapped onto the interior surface.
6. Cone Trees and Cam Trees
The aim of both techniques is to display a larger amount of information that can be
navigated in an intuitive manner. To accomplish this, more of the cognitive load of
comprehending the represented information has to be shifted to the human
perceptual system. Child nodes are placed at equal distances along the
base of their parent node’s cone. This process is repeated for every node in the
hierarchy, while the base diameter of the cones is reduced at each descending level. It
should be noted that cam trees are identical to cone trees except that they grow
horizontally as opposed to vertically.
Originally produced at Xerox PARC, these trees could be rotated smoothly to bring
any particular node into focus. The smooth animation was found to be critical in relation
to the cognitive capabilities of the human senses, as sudden changes in orientation
caused severe disorientation of the user. A minimal layout sketch follows the figure below.
Figure IV-2 Picture of a cone tree visualisation, representing a UNIX-file store.
(http://www.crg.cs.nott.ac.uk/research/applications/cones/)
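The geometry of such a layout is straightforward to sketch; the following minimal fragment (an illustration, not the PARC code) places each node’s children at equal angles around a base circle one level down, shrinking the radius at each level.

```python
# Cone-tree layout: every node is the apex of a cone whose children sit on
# the cone's base; base radii shrink at each descending level of the tree.
import math

def layout(node, children, x=0.0, z=0.0, level=0, radius=4.0, height=2.0):
    """Return {node: (x, y, z)} positions for a tree given as a dict."""
    positions = {node: (x, -level * height, z)}
    kids = children.get(node, [])
    for i, child in enumerate(kids):
        angle = 2 * math.pi * i / len(kids)
        positions.update(layout(child, children,
                                x + radius * math.cos(angle),
                                z + radius * math.sin(angle),
                                level + 1, radius * 0.4, height))
    return positions

tree = {'root': ['a', 'b', 'c'], 'a': ['a1', 'a2']}
for name, pos in layout('root', tree).items():
    print(name, tuple(round(v, 2) for v in pos))
```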
7. Sphere Visualisation
Informational objects are mapped onto the surface of a sphere. Highly related objects are placed close to a pre-selected object of interest (OOI). This results in a fish-eye view wherein unrelated objects become less emphasised and move round to the opposite side of the sphere. The overall view consists of a number of nested spheres, each representing a different level of information. Navigation is accomplished by rotating the main sphere to bring objects of interest into view and by traversing the available links to darker, lower-level spheres.
8. Rooms
Xerox PARC's so-called 'rooms' concept is a powerful three-dimensional extension of the familiar desktop metaphor encountered in computing today. In this technique, several rooms, and the navigation between them, are used to organise and structure several kinds of documents and applications. Within each room, information sources such as 3D objects or wall projections may be present, while a floor plan is used for efficient overall navigation and comfortable cognitive orientation. To be able to 'carry' information, objects, or important items while working in this environment, a helpful 'pockets' metaphor was created.
9. Emotional Icons
Emotional icons are objects that can perform certain actions related to the presence of a user or of other icons. For instance, icons may come closer, retreat, grow, change colour, or become animated depending on the user's proximity and pre-defined interest in the data they represent. Icons of a similar nature might also move together when they sense each other's presence. In short, this concept could provide an important step towards a so-called 'living' data environment.
10. Self Organising Graphs
Conventional layout techniques generally possess some function or routine that tries to fulfil certain aesthetic criteria or heuristics on a given graph in order to produce a suitable layout. Self-organising graphs, however, model the layout as an unstable physical system that tries to reach a state of equilibrium. Objects can be represented as rings and springs, where the springs exert a repulsive or attractive force depending on whether they are compressed or extended. The whole network of these objects and forces starts in a high-energy state; the system then attempts to reach equilibrium over a number of iterations, after which the layout is complete.
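As an illustration of this relaxation process, the following sketch (with an invented five-ring graph and arbitrary constants) iterates a pairwise repulsion between the rings and a spring force along the edges until the layout settles towards equilibrium:

    #!/usr/bin/perl
    # Sketch of a self-organising (spring-embedder) layout on invented data.
    use strict;

    my @pos   = ([0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.2]);   # ring positions (2D)
    my @edges = ([0, 1], [1, 2], [2, 3], [3, 4], [4, 0]);        # the springs
    my ($rest, $step) = (1.0, 0.02);                             # arbitrary constants

    for (1 .. 300) {
        my @f = map { [0, 0] } @pos;                  # forces this iteration
        for my $i (0 .. $#pos) {                      # repulsion between all pairs
            for my $j ($i + 1 .. $#pos) {
                my ($dx, $dy) = ($pos[$i][0] - $pos[$j][0], $pos[$i][1] - $pos[$j][1]);
                my $d2 = $dx * $dx + $dy * $dy || 1e-6;
                $f[$i][0] += $dx / $d2;  $f[$i][1] += $dy / $d2;
                $f[$j][0] -= $dx / $d2;  $f[$j][1] -= $dy / $d2;
            }
        }
        for my $e (@edges) {                          # spring force along each edge
            my ($i, $j) = @$e;
            my ($dx, $dy) = ($pos[$j][0] - $pos[$i][0], $pos[$j][1] - $pos[$i][1]);
            my $d = sqrt($dx * $dx + $dy * $dy) || 1e-6;
            my $k = ($d - $rest) / $d;   # attracts when extended, repels when compressed
            $f[$i][0] += $k * $dx;  $f[$i][1] += $k * $dy;
            $f[$j][0] -= $k * $dx;  $f[$j][1] -= $k * $dy;
        }
        for my $i (0 .. $#pos) {                      # small relaxation step
            $pos[$i][0] += $step * $f[$i][0];
            $pos[$i][1] += $step * $f[$i][1];
        }
    }
    printf "ring %d at (%.2f, %.2f)\n", $_, @{ $pos[$_] } for 0 .. $#pos;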
11. The Information Cube
The information cube technique extends two-dimensional tree maps into the three-dimensional realm. In this concept, nested, semi-transparent cubes represent hierarchical information. Transparency and shading are used to manipulate the depth and amount of information being visualised, as they control the degree of visibility of a cube's content and its children. This reduction of the amount of presented information makes the model much more intuitively understandable for the user, while other 3D visualisations or information can be represented within the cubes.
Figure IV-3 On the left: the Information Cube; on the right: the Sphere Visualisation.
(http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/)
IV.3.3 Overview
§ Mapping techniques: surface plots, cityscapes, Benediktine space, spatial arrangement.
§ Presentation techniques: perspective walls, cone trees & cam trees, rooms.
§ Dynamic techniques: fish-eye views, emotional icons, self-organising graphs.
While concentrating on the different techniques and approaches, the original relation with cyberspace must not be forgotten. Visualisation techniques are, after all, very dependent on the applied technology. Ultimately, it has to be noted that some visionary thoughts foresee these applications of information visualisation becoming the core of what cyberspace is about: offering a close union between users and their virtual representation as a true state of human-machine 'symbiosis'. In this view, future environments will become accessible as if 'real', making the user interface ultimately disappear as the user is immersed in the universe of information. And, as noted earlier, the promises of this universe are great, as a realm is envisaged that is far richer than the physical one, still only dimly perceived in the imagination.
IV.3.4 Spatial Arrangement of Data
When a three-dimensional information environment is created, a certain mapping concept generally has to be followed. These rules should translate the abstract data into a corresponding recognisable representation and also into a certain location of the object within the information terrain. The resulting spatial configuration is then interpretable, and properties of data items can be read from the relative position and unique presentation of the objects. Within the research into these mapping techniques, four different approaches can be recognised.
1. Benediktine Cyberspace
In electronic cyberspace, mapping is a strong and already widespread technique. A stream of bits, initially formless, is given form by a representation scheme, and information emerges through the interaction of data with the representation. Ultimately, appearance can then be considered a late after-effect, simply a consequence of many sunken layers of patterns acting upon patterns, some behaving as code, some as data. It should be noted that this theory can be useful for both two-dimensional (e.g. hypertext) and three-dimensional representations, and thus for both forms of cyberspace discussed earlier, although different emphases can of course be noticed. The three-dimensional consequences of Michael Benedikt's technique will be further explained later, but it is useful to be aware of the existence of other mapping techniques first.
2. Statistical Clustering and Proximity Measures
Statistical methods are applied to analyse large database contents. Items are grouped according to their semantic closeness to the searched item. A commonly known and widespread application in this area is the search engine, which automatically groups results according to how well they match user-definable keywords. Further analysis results in 'scores' for the separate items, which can then be used to create a suitable mapping into Benediktine space. Systems that adopt this approach include VIBE, further extended into three dimensions to produce VR-VIBE.15
3. Hyper-Structures
Hyper-structures are created from a body of information consisting of a number of data objects with any number or arrangement of explicit relationships between them. The best-known example is of course the hypertext structure of documents used in the World Wide Web, which was explained more thoroughly in Chapter II.
4. Human Centred Approaches
This approach is normally used to create abstract real-world metaphors such as cities, buildings, and rooms, in which the user is able to manipulate the data in more familiar surroundings. The main problem with this approach is that the generation of such models is difficult and time-consuming, because it does not yet lend itself to automatic computation. Creating an abstract model that matches the real-world environment as well as the appropriate structure and representation of the data itself remains a demanding task.
IV.3.5 Examples
Most examples are built on the premise that navigation through an information space can be very effective when applied to retrieve useful information. Analogies and familiar concepts such as location and motion are used to support the human, intuitive comprehension of abstract facts. Although there are many examples in the field of three-dimensional information visualisation, some are picked out in this paragraph to show some of the creative and convincing possibilities of the techniques known to date. Furthermore, they illustrate part of the inspiration that was drawn upon when the author's own program was designed.
15
See paragraph ‘Examples’ for more information.
VR-VIBE
The three-dimensional VR-VIBE can be considered part of the concept called 'Populated Information Terrain' (PIT). In a PIT environment, multiple users can inhabit and work simultaneously, and thus co-operatively, within the data as opposed to merely with the data. This means that users are aware of each other's presence and actions, resulting in a true sharing of information.
Figure IV-4 Two screenshots of the VR-VIBE system visualising a bibliography with 1081 entries
and five keywords.
(http://www.crg.cs.nott.ac.uk/research/technologies/visualisation/vrvibe/)
The visualisations in VIBE are constructed from a pre-defined set of 'Points of Interest' (POIs) containing certain keywords, which are then used in a query. A full-text search is performed on a number of documents, after which each one receives a relevance score for each POI. With these results, the visualisation is calculated and the representing objects are placed inside the space. Two separate methods are possible within the concept of VR-VIBE. In the first, nodes are placed on a 2D plane with respect to the proportion of relevance attributed to each POI. For instance, with two POIs A and B, one document may have a score of 4 to POI A and 3 to POI B. This document is then placed at 4/7 of the distance along the line joining A and B. A vertical displacement is then introduced for each object to represent its degree of relevance. The second method allows the objects to be placed freely at any point in the three-dimensional space, but within the confines of the POIs. Intrinsic dimensions such as shape and colour are used to distinguish documents that happen to have the same set of scores. Users are able to navigate freely through the structure. Meanwhile they can select documents, perform queries, apply filtering, or request additional information. The interactive power of the model is demonstrated by the possibility of adding or removing POIs, thus trying different configurations, and even of moving POIs to see which documents get pulled after them.
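A minimal Perl sketch of the first placement method for the two-POI case described above (the coordinates and scores are invented, and using the sum of the scores as the vertical displacement is an assumption, not a documented detail of VR-VIBE):

    #!/usr/bin/perl
    # Sketch: place one document between two POIs as described in the text,
    # 4/7 of the way along the line from A to B for scores 4 and 3.
    use strict;

    my @A = (0, 0);                         # 2D positions of the two POIs
    my @B = (7, 0);
    my ($score_a, $score_b) = (4, 3);

    my $t = $score_a / ($score_a + $score_b);        # 4/7 of the way A -> B
    my @p = ($A[0] + $t * ($B[0] - $A[0]),
             $A[1] + $t * ($B[1] - $A[1]));
    my $y = $score_a + $score_b;            # assumed measure of overall relevance

    printf "document placed at (%.2f, %.2f) with height %.1f\n", @p, $y;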
Vineta
Vineta is a visualisation system that can be compared with 3D visualisations like VR-VIBE, although it has one fundamental distinction, namely the number of dimensions or terms the model is able to present. Within this technique, an object's position in the navigation space depends on the semantic relevance between documents, terms, and the user's interests. Furthermore, multidimensional analysis and numerical linear algebra are used to map documents and terms into the three-dimensional space. All this results in spatial proximity (close together/far apart) representing semantic relationships (similarity/distinction) between the elements.
Figure IV-5 Vineta: On the left, the landscape and on the right the galaxy metaphor.
(http://www.crg.cs.nott.ac.uk/research/technologies/visualisation/vineta/)
Two metaphors were programmed for producing the visualisations. First, the galaxy metaphor represents the space as a collection of stars, in which documents are fixed stars and terms are represented by 'shooting stars'. Here too, semantic similarity is encoded in the proximity of the stars. The other implemented concept, the landscape metaphor, proved to be more intuitive and easier to comprehend. The space is represented as a flat, textured surface containing flowers with stems and petals. The surface itself is a useful feature that offers the user an indication of the depth and distance of the many objects. Indeed, "the inclusion of the ground plane was encouraged by the study of ecological optics which emphasises that perception of objects should never be considered apart from a textured ground surface."16 Furthermore, flowers nearer to the user are of more relevance, while the stems aid the perception of their actual location on the ground plane. Finally, the directions as well as the colours of the petals on the flowers represent the search terms and their relevance to each document.
Informationslandschaften
CAAD I: Informational Landscapes17 is the name of a CAAD course taught in the first semester at the Architecture Department of the Swiss Federal Institute of Technology (ETH) in Zürich. The course starts with a large blank electronic surface that is divided into a number of equal rectangles. Each of these smaller rectangles is assigned to a pair of students, who are asked to draw certain signs, representing all kinds of information or unique ideas, within this 'domain'.
16
YOUNG, PETER, Three Dimensional Information Visualisation, 1996, (also published in Computer Science Technical Report, No. 12/96), http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/
17
http://alterego.arch.ethz.ch/informationslandschaft/
Figure IV-6 The final result of the abstract data-field representation after months of individual creative adaptation processes by the students.
(http://alterego.arch.ethz.ch/informationslandschaft/)
In the second phase, the students were required to take the already visible manifestations of the eight neighbouring rectangles (four orthogonal, four diagonal) into consideration. This means that a kind of graphical form of communication was established, between the related groups in particular and among the collection of groups globally, to exchange meaningful and thus informational elements. Some elements might be taken over, continued, stopped, or ignored, depending on the characteristics and concepts of the groups. In the next phase, a higher level of communication was introduced, as the groups were now allowed to use email to express and share their intentions with the others. After the whole abstract surface had grown and finally adapted itself to its own intrinsic elements, another layer of information was added: the most remarkable signs now had to carry links, searched for and chosen by the students. The resulting surface was thus not only the product of many processes in time, but became a navigable map of links. Ultimately, this map can be used as a three-dimensional texture or surface in a virtual reality environment, or can simply be presented as a kind of mental map on a navigable two-dimensional web page. The following list summarises the things implemented and learned by the students while designing this original digital information visualisation.
§ Software: PhotoShop, Netscape, UNIX, etc.
§ Interface: the communication mechanism between the hierarchical educational levels and the global underlying and shared structure of the course.
§ Communication: email, talk-command, tele-presence, abstract representations, etc.
§ Search techniques on the Internet: online search engines.
§ Information architecture: the concept and its potential future.
§ Designing: creatively working in a group while visualising abstract ideas.
ZIP-CUBE
ZIP-CUBE18, developed in VRML by Paul Meyer at ETH Zürich, is a convincing example of the visualisation of a building as an important shared information source. It tries to clarify the huge amount of information that becomes available when a project is under development, while offering its content to several potential users. The three axes represent conceptual variables: the domains, functions, and phases of a certain building project. Although this objective is certainly not considered
18
SCHMITT, GERHARD, Architektur mit dem Computer, Vieweg, Wiesbaden, 1996, p. 82. Information can also be found at http://caad.arch.ethz.ch/research/ZIPBau/ or http://caad.arch.ethz.ch/visits/zipbau.html
revolutionary, its implementation as an interactive, three-dimensional computer model widely accessible on the Internet obviously is.
Figure IV-7 Two views of the ZIP-CUBE. Two orthogonal surfaces divide the cube's axes and data-objects for further investigation, while documents open up when approached.
(http://caad.arch.ethz.ch/research/ZIPBau/)
In this model, information is represented by box-like volumes, each of them able to show gradually more detailed data whenever the user approaches its surroundings. When the symbols are clicked, dynamic presentations are transmitted and displayed, which can vary from simple linear HTML documents to three-dimensional interactive models. For instance, different schemes of the project can be seen when successive points are chosen on the phase axis, resulting in time-based navigation. In short, this technique is a powerful aid for navigation in the building process and for the representation of different shared objects in a databank.
Archaeology of the Future City: TRACE
TRACE19 is the title of an exhibition held in the Museum of Contemporary Art in Tokyo in July 1996. The project takes the similarity between the concept of the urban city and the new, unexplored realm of information as its primary theme. The relation between natural systems, such as the city, and virtual systems, such as the Internet, is certainly not well defined yet; for this reason, virtual environments should take on the role of intermediate elements, able to create a common language of understanding.
TRACE, programmed by Florian Wenz at ETH Zürich, tries to accomplish this by building the model out of a number of spaces in which the activities of users are registered, interpreted, and then in turn represented back into the environment. This means that, as in a physical city, the environment is actually generated by the users themselves. They explore the surroundings and in doing so leave 'traces' behind them, while reading the traces left by former visitors to this world. The rooms receive their representation through the close relation between a database that continuously stores all the traces and a so-called geometry generator, which translates the traces into specific forms. In this way, the global space is dynamic and thus able to change constantly as visitors use it.
19
SCHMITT, GERHARD, Architektur mit dem Computer, Vieweg, Wiesbaden, 1996, p. 177. The project can be found at http://caad.arch.ethz.ch/trace
Figure IV-8 The left image presents a view of the public_out.world; on the right, the private_in.world. (http://caad.arch.ethz.ch/trace/)
Conceived from the concept of an urban surrounding, both a public and a private zone are created: a so-called 'public_out.world' and a 'private_in.world'. Furthermore, it can be noticed that TRACE is represented in a rather abstract architectural syntax, which depends only on these two complementary immersive situations. The user enters TRACE in the out.world, a navigation space containing closed volumes (blobs) and some so-called NURBS surfaces (Non-Uniform Rational B-Splines). The user is able to move on top of and around these objects, which are created out of an equilibrium of imaginary forces that symbolise Internet sites. The containers and interwoven network of the private_in.worlds, on the contrary, which actually represent the traces of former users, received a simpler and more specific design. The user is strictly captured in this labyrinth, where he finds his own movements to be bounded, but where access to the multimedia files and links present seems to be completely free.
Legibility for Abstract Data Spaces: LEADS
This last interesting example, developed at Nottingham University, applies concepts from physical city planning to the design of the overall model. It tries to prove the notion that the legibility of such environments can be improved greatly by the careful design of certain key features. To this end, Kevin Lynch's theory is introduced from his book The Image of the City, in which he identifies five major elements used in constructing cognitive models of an urban environment. Inhabitants of various cities were questioned about the city they lived in; these interviews, written descriptions of routes through the city, and drawn maps resulted in the five features, which were in turn implemented by a number of algorithms to structure the data in the LEADS model.
§ Districts are distinct areas, characterised by some form of commonality or character, and generally identifiable by the nature of the buildings within them, such as residential or commercial areas. The creation of districts in an abstract space is accomplished by first determining the similarity between data items, after which the results are grouped together and placed in a particular area. The appearance of such an area can be made fairly distinct by displaying its objects in different colours or shapes.
§ Edges provide distinctive borders to districts, such as rivers and motorways. Edges are positioned by finding intersections between districts, which can be done by three different methods (see the sketch after this list). The first method is to identify the two nearest data items between districts and place the edge between them. Secondly, a hull or bounding box could be determined that encloses a district, after which it can act as an edge. In the third and last method, a common hull of two neighbouring districts is found, in which an edge can be calculated by interpolating points along the adjoining edges of the districts. The latter two methods were not chosen because they are computationally expensive and rather time-demanding.
§ Landmarks are static and easily recognisable features, giving a sense of location, such as distinctive buildings or structures. Here as well, three methods have been recognised. Landmarks can be placed in the centre of districts, although this method ignores the size or density of the surrounding area, which could make them almost invisible or useless. Otherwise, landmarks could be placed at intersections between three or more districts. Finally, landmarks can be placed by triangulation between the centres of any three adjacent districts.
§ Nodes and paths are the lowest-level elements, more individual, dynamic, and interchangeable in time. In the abstract model, paths are proposed to be links between nodes, which in turn represent individual informational objects within the visualisation. Usage information of the model is stored and used for complementary visualisation. Metrics such as frequency of access could be used to identify and create new nodes, whereas frequent successive accesses could define new paths, while old ones fade over time.
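As an illustration, the following Perl sketch implements the first, chosen edge-placement method on invented district data: it finds the two nearest data items belonging to different districts and puts the edge marker midway between them.

    #!/usr/bin/perl
    # Sketch of nearest-pair edge placement between two districts.
    use strict;

    my %district = (
        residential => [[0, 0], [1, 2], [2, 1]],
        commercial  => [[5, 5], [4, 3], [6, 4]],
    );

    my ($best, @pair) = (1e30);
    for my $a (@{ $district{residential} }) {
        for my $b (@{ $district{commercial} }) {
            my $d2 = ($a->[0] - $b->[0])**2 + ($a->[1] - $b->[1])**2;
            ($best, @pair) = ($d2, $a, $b) if $d2 < $best;
        }
    }
    my @edge = (($pair[0][0] + $pair[1][0]) / 2,
                ($pair[0][1] + $pair[1][1]) / 2);
    printf "edge marker at (%.1f, %.1f)\n", @edge;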
IV.4 VR/search
The next paragraphs are based on a virtual reality application that was programmed to demonstrate some aspects and problems characteristic of the design of a cyberspace environment.
Introduction
The VR/search-program described here is meant to clarify and three-dimensionally visualise the information provided by an Internet search result. This VRML representation should utilise the intuitive cognitive capabilities of users in their process of finding the most suitable link, the one that meets most of their individual requirements. It is hereby assumed that the user wants immediate visible access to some parts of the data, such as the title, size, date, ordering number, and relevance of the provided links, and wants to use this information cognitively when making decisions. In this virtual application, a surface is laid out on which links are represented by box-like objects onto which the necessary data is mapped. First, the technical side of the program will be explained; the process of mapping itself will be described in the next paragraph.
Figure IV-9 Two visualisations of the same information. The left image shows the standard
result page of the search engine Lycos, on the right the three-dimensional world of VR/search.
Tools
In order to understand the application, the three main programming tools used will now be briefly explained. The principles of CGI are applied to run an executable application on the server after the user (client) has entered a personally chosen query in an embedded HTML form on a web page. A Perl program on the server then reads the input values and submits this query to a search engine (in this case the search engine Lycos was chosen). It waits for the returned results and retrieves them, after which the Perl program starts to scan the text of the provided HTML page. During this scan, usable information is stored in variables that will be passed to related values in the structure of a VRML program. Only when all this is accomplished and the whole structure is assembled is the VRML file, which contains all the requested values in the form of heights, coordinates, texts, etc., finally sent back to the user. On this so-called client side of the connection, the type of file is subsequently recognised and interpreted by the user's browser as VRML content, after which the three-dimensional world is displayed.
Figure IV-10 Simplified scheme of the techniques applied in VR/search after a user requests a search item, here for example the term 'asro'.
IV.4.1 CGI
It should be noted that the Common Gateway Interface (CGI) is not a programming language, but a standard along the lines of TCP/IP and HTTP. In this view, CGI is a standard for incorporating executable programs into Web documents by way of the server. In other words, it is a way of providing dynamic output as opposed to the static output of a normal HTML document. When a URL request for a CGI program is sent, the server where the program is physically located will execute it in real time, after which its output is sent back directly to the user who requested it. Both the server and the client know how to interpret the transmitted data (HTML, VRML, GIF, MPG, …) thanks to extra information included at the beginning of the messages exchanged at both the input and the output level. A CGI application can be written in any computer language available on the server's system. There are two categories of CGI languages. The first consists of scripting, or interpreted, languages like Perl, Tcl, the UNIX shell languages, and Python. Compiled languages like C and C++ comprise the second. The difference between the two lies in the fact that the latter category of programs needs to be compiled into machine code before it can be made available to the web.
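To make this round trip concrete, here is a minimal sketch, not the actual VR/search code, of a Perl CGI program: it reads a form field (here assumed to be named 'query'), announces the type of the returned data in the header, and answers with a trivial VRML scene:

    #!/usr/bin/perl
    # Minimal CGI sketch: form field in, typed VRML document out.
    use strict;
    use CGI;                                       # standard form-parsing module

    my $cgi   = CGI->new;
    my $query = $cgi->param('query') || 'asro';    # fall back to a default term

    # The header is the 'extra information' mentioned above: it tells the
    # client's browser that VRML, not HTML, follows.
    print "Content-type: x-world/x-vrml\n\n";

    print "#VRML V2.0 utf8\n";
    print "# world generated for the query '$query'\n";
    print "Shape { geometry Box { size 2 2 2 } }\n";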
IV.4.2 PERL
The Practical Extraction and Report Language (Perl) is an interpreted language that is well suited for scanning text files, extracting information from them, and printing reports based on that information. It is designed to be easy to use and efficient, and combines elements of C and UNIX scripting languages. Several reasons are mentioned in the book Using VRML20 that explain the relative popularity of Perl as the programming language of most CGI applications on the Web.
§ Perl is derived from existing languages, so people with some programming knowledge are already familiar with many aspects of it.
§ Potential programmers do not have to know everything about Perl in order to use it. Only a small part of the language must be learned for efficient and practical use.
§ Perl uses a sophisticated pattern-matching system that is very quick and efficient and is able to scan large amounts of text, as the sketch below illustrates.
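The following sketch illustrates that pattern-matching strength on the kind of task VR/search performs: pulling the URL and title out of every anchor tag of a result page. The regular expression is a simplified illustration, not the original program's:

    #!/usr/bin/perl
    # Sketch: scan an HTML page read from standard input and report the
    # URL and title of each anchor tag found.
    use strict;

    my $html = join '', <STDIN>;
    while ($html =~ m{<a\s+href="([^"]+)"[^>]*>(.*?)</a>}gis) {
        my ($url, $title) = ($1, $2);
        printf "%-50s %s\n", $url, $title;
    }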
IV.4.3 VRML
The Virtual Reality Modelling Language is a portable, open scene- and object-description language. Its development has been greatly influenced by the characteristics of the Internet and by the highly specialised newsgroups it contains.
The Origin
The combination of three different threads became the force that created the foundation on which the VRML language was built. First, the visionary thoughts of William Gibson's cyberspace were still apparent in the minds of many researchers and programmers. From that point, the graphics target was set: to create a shared, realistic, simulated environment, based on virtual reality and the interaction between users over a network.
The second development started in 1992, with the introduction of the Inventor graphics toolkit from Silicon Graphics. Inventor allows programmers to quickly develop interactive 3D graphics programs of all sorts, based on concepts of scene structure and object description. Although Inventor had nothing in particular to do with networks, it would become the technical basis for VRML.
20
MATSUBA, STEPHEN & ROEHL, BERNIE, Using VRML, Que Corp., Indianapolis, 1996
The last spark brought everything together rapidly. At the first annual World Wide Web conference in Geneva in 1994, Mark Pesce and Tony Parisi proposed their idea of a standard scene-description virtual reality interface that could be used in conjunction with the Web.21 The term VRML then stood for 'Virtual Reality Markup Language', after the HyperText Markup Language (HTML) discussed earlier, although the word 'Markup' was later changed to the more specific and accurate 'Modelling'.
The Development
Within a week, the mailing list on the development of a specification for VRML grew to over a thousand members. As most participants proposed to adapt an existing modelling language, it was agreed that a draft specification should be proposed within five months. Several proposals were contemplated and discussed on the email list, and the Silicon Graphics proposal won the general vote. This meant that VRML would be based on the Open Inventor file format (the meanwhile non-proprietary development of Inventor), after which the VRML 1.0 specification was set in 1994. Many browsers and tools were written that were able to interpret and display VRML scenes, such as QvLib and WebSpace Navigator. In 1996, the VRML community read and discussed a number of proposals for the next VRML 2.0 version, including the Moving Worlds project from Silicon Graphics, HoloWeb from Sun Microsystems, ActiveVRML from Microsoft, Out of This World from Apple, and others. After much revision and reshaping by the community itself, 70 percent of the overall votes designated Moving Worlds as the specification for VRML 2.0. While VRML 1.0 allowed only the creation of static worlds containing hyper-linked objects, the new VRML 2.0 offered four new features.22
VRML 2.0 New Features
1. Enhanced static worlds. New objects were added, like the elevation grid, extrusion, background, fog, and many other nodes. Also, multimedia standards such as audio and movie applications were included, to be mapped onto the objects.
2. Interaction. Sensors are specified that wait until a particular event occurs and then do something in response to that event.
3. Animation and behaviour scripting is now supported. This consists on the one hand of so-called interpolators, which allow keyframe animation between two or more pre-defined situations. On the other hand, scripts offer the ability to perform simple logic or complex analyses of user and environmental events in the scene and to respond in some intelligent way (see the sketch below).
4. Prototyping. This feature allows the creation of a user-defined node, which can consist of many complex objects. This single object can easily be reused by the programmer, who is then able to change certain characteristics of these objects when desired.
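As a sketch of features 2 and 3, printed from Perl in the way VR/search prints its worlds, the following fragment routes a TouchSensor through a TimeSensor to an OrientationInterpolator, so that clicking the box makes it perform one two-second turn. The node and field names follow the VRML 2.0 specification; the scene itself is a made-up minimal example:

    #!/usr/bin/perl
    # Sketch: emit a VRML 2.0 scene combining a sensor (interaction) with
    # an interpolator (keyframe animation).
    use strict;

    print <<'VRML';
    #VRML V2.0 utf8
    DEF Spinner Transform {
      children [
        DEF Clicker TouchSensor { }
        Shape { geometry Box { size 2 2 2 } }
      ]
    }
    DEF Timer TimeSensor { cycleInterval 2 }
    DEF Turn OrientationInterpolator {
      key      [ 0, 0.5, 1 ]
      keyValue [ 0 1 0 0, 0 1 0 3.14, 0 1 0 6.28 ]
    }
    ROUTE Clicker.touchTime      TO Timer.set_startTime
    ROUTE Timer.fraction_changed TO Turn.set_fraction
    ROUTE Turn.value_changed     TO Spinner.set_rotation
    VRML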
21
MATSUBA, STEPHEN & ROEHL, BERNIE, Using VRML, Que Corp., Indianapolis, 1996, p. 142
22
HARTMAN, J. & WERNECKE, J., The VRML 2.0 Handbook: Building Moving Worlds on the Web, Addison-Wesley, Amsterdam, 1996, p. 8
Characteristics of VRML Worlds
§ Immersion. Generally, the user enters the three-dimensional world built up on the computer screen and explores it as an almost-real inhabitant. This means that each person can choose a different and unique course through the model.
§ User control. The local browser allows the user to explore the VRML world in a personal manner. The computer does not provide a fixed set of choices or paths, although the original author of the world may have suggested some recommendations. Consequently, the possibilities are virtually unlimited.
§ Interactivity. Objects have the capability of responding to one another and to external events caused by the user. The user can 'reach into' the scene and change the characteristics of the elements.
§ Blending. A VRML world blends 2D and 3D objects, animation, and multimedia effects into a single medium.
IV.5 Mapping Information in Cyberspace
Some of the techniques and principles explained by Michael Benedikt in his book Cyberspace: First Steps23 will now be investigated. It should be remembered that these rules are in fact meant to be applied by the so-called cyberspace architects of the future. In this Benediktine cyberspace, both the space and the geometry carry meaning. This means that the cyberspace is built in such a way that spatial metaphors like up or down, left or right, closeness or distance all have some sort of informational and interpretable significance. To accomplish such a virtual realm, Benedikt defines seven principles in relation to those of natural, physical space, and classifies these under four essentially topological rubrics: dimensionality, continuity, limits, and density. Furthermore, to demonstrate and clarify the VR/search-program already mentioned, some of Benedikt's principles will be briefly compared to the concepts of this visualisation.
IV.5.1 Dimensionality
The rubric of dimensionality describes the possible solutions that can be used when more than three dimensions have to be visualised in a designed, immersive, virtual world.
Extrinsic and Intrinsic Values
First, it is assumed that a set of N different kinds of measurements should be visualised. Benedikt then describes how the problem can be solved when more than three variables have to be presented. Two different approaches can be followed to describe the state of the system. First, of course, the designer can simply decide which dimensions to work with and drop the others, so that many different, incomplete representations of the system can be produced. Secondly, certain dimensions, called extrinsic, can be assigned to perform 'coordinate duty', while the others, called intrinsic, are assigned to describe the character of a point in the coordinate space. Thus, unlike a Euclidean point, a point-object might have a colour, a shape, a weight, a size, a spin, etc., all intrinsic qualities that are logically independent of its position in space. In this way, any N-dimensional state of a system can be represented in the data space by point-objects having n spatio-temporally locating, extrinsic dimensions and m intrinsic dimensions:

N = n + m    (N > 0, 0 < n < 5)
23
BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp. 119-224
This technique makes certain animated actions conceivable. Since intrinsic dimension data in fact only exist at address points, it is for instance possible for an object to change its size and shape as it moves, revealing the data embedded in each address in the data space. It should also be noted that there is some freedom in how the partition into m intrinsic and n extrinsic dimensions is chosen. Provided that no information is lost, such representations are mathematically equivalent, but not necessarily functionally equivalent. It is certainly not obvious how to create a three-dimensional view containing mapped data that is immediately understandable by the user, and in this sense a 'good' visualisation is one that conveys more of the information.
Figure IV-11 Two legends represent the size and date values, which are mapped onto the X- and Z-axes.
Extrinsic Dimensions in VR/search
§ The size and the date of a linked web page, provided by the search engine, determine the X- respectively Z-coordinate of any presented three-dimensional object. It should be noted that in this text the Y-axis is placed vertically, and the Z-axis comes 'out of the screen', as specified by the VRML code.

Intrinsic Dimensions in VR/search
§ The main intrinsic dimension is the relevance or rating of a provided link, which determines the height (Y-dimension) of the link object.

Other (Intrinsic) Variables in VR/search
§ The title of the link.
§ The abstract or comment of the link.
§ The number by which the links are ordered, varying between one and ten.
§ The URL-address of the link.
§ Again, the date and size of the web page, but now interpreted as a text string instead of a real value.

It should be noted that the so-called 'other variables' could also be understood as potential intrinsic dimensions, as they too determine the generally perceived view and character of the object. For instance, the lengths of the vertically placed titles lend themselves to additional visual interpretation (similarity, effectiveness, …) if the user finds this issue important, and are particularly well suited for finding similar links that exist on different servers: size and title will then appear the same, while the date can be slightly shifted.
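A minimal sketch of this partition as applied in VR/search (the example data and field names are invented): the extrinsic values size and date become the X- and Z-coordinates of a link-object, the intrinsic value relevance becomes its height, and the remaining variables travel along as text:

    #!/usr/bin/perl
    # Sketch: map one hypothetical search result to a VRML box whose
    # position carries the extrinsic values and whose height carries the
    # intrinsic relevance.
    use strict;

    my $link = {
        title     => 'ASRO home page',       # hypothetical result
        url       => 'http://www.example.org/',
        size_kb   => 14,                     # extrinsic -> X
        date_days => 120,                    # extrinsic -> Z (age in days)
        relevance => 0.82,                   # intrinsic -> height (Y)
    };

    my ($x, $z) = ($link->{size_kb}, $link->{date_days});
    my $height  = $link->{relevance} * 10;   # scale relevance to world units

    print "#VRML V2.0 utf8\n";
    print "Transform {\n";
    print "  translation $x ", $height / 2, " $z\n";   # box sits on the ground
    print "  children Shape { geometry Box { size 1 $height 1 } }\n";
    print "}\n";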
1. Principle of Exclusion (PE)
Benedikt first defines three terms to describe the situation of any two objects existing in his data space. These objects are said to be identical if they have the same values on the same intrinsic dimensions, similar if they have different values on the same intrinsic dimensions, and different if they do not have the same intrinsic dimensions.24 Logically, some unpredictable problems arise when two non-identical objects have, at some point in time, the same values on their extrinsic dimensions. Here the Principle of Exclusion,25 commonly understood as "you cannot have two things in the same place at the same time", clearly states that this is, in fact, forbidden.
At the same time, it should be noted that this first principle is already rejected by the other architectural voice in the book, Marcos Novak. He states that although in physical space two objects cannot occupy the same space at the same time, this restriction is not necessary in cyberspace. He motivates this statement with two arguments: "to allow a poetic merging of objects into evocative composites, and second, to keep the implementation of cyberspaces as simple as possible."26 In his view, the programmer should not have to take any decisions to enforce this principle; he foresees it as the characteristic task of the cyberspace deck that visualises the two objects to resolve this conflicting situation according to its computational capabilities and the representations chosen.
VR/search
At the implementation level of the program, the Principle of Exclusion is not enforced, for much the same reasons Novak describes. There is no way to equip VRML objects with some kind of programmable sensor that is triggered by the proximity of other VRML objects, so that they could perform certain actions. It should be noted, however, that on the specialised newsgroups some authors foresee this feature in one of the next versions of VRML.
2. Principle of Maximal Exclusion (PME)
Given any N-dimensional state or phenomenon, and all the values (actual and possible) on those N dimensions, a designer has to choose the set of extrinsic dimensions that will minimise the number of violations of the Principle of Exclusion.
This 'fundamental' principle is meant as a helpful rule for a cyberspace designer who has to decide which dimensional partition to apply to the offered data. The table below summarises the possible relations between two data objects.
objects.
Intrinsic
Extrinsic
same dimensions +
same values
same dimensions +
different values
same dimensions +
same values
same dimensions +
different values
self-same
identical
PE and PME excluded
similar
24
Actually, Benedikt goes much deeper when he investigates objects that have different extrinsic dimensions, and thus in fact exist in 'different spaces'. To describe this phenomenon, the terms super-identical, super-similar, and wholly different are respectively defined.
25
This principle is named after a similar postulate in quantum mechanics that says that no two electrons
belonging to the same atom can have the same quantum numbers.
26
NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace:
First Steps, MIT Press, London, 1991, p. 239
VR/search
Since the program contradicts the first principle, it is in fact in direct conflict with the second one as well. This raises the question of whether the visualisation would carry more significant and interpretable information if one of the following approaches were adopted.
§ Approach 1. The number by which the links are ordered by the search engine could be used as an extrinsic dimension, but this ordering is in fact already partly incorporated in the order of the relevance percentages.
§ Approach 2. The values of the extrinsic dimensions could be sorted and then, instead of being linearly interpolated, placed onto the fixed squares of a divided grid. No overlapping would then be possible, as each row and column would contain one single object. It should be noted, however, that a large part of the perceived relationships (closeness, proximity, …) would then be lost, replaced by arbitrary, interchangeable, and very difficult to perceive connections.
Both PE and PME are considered very effective for the future, when cyberspace will increase in complexity and content, as the representations have the capability to adapt to the new situation. In this case, each controllable aspect of the world, such as its overall size, the number of dimensions, and the amount of mapped information, could be increased until the new situation and the visible representation find some kind of new equilibrium.
Size and Shape
Benedikt is convinced that object size is generally not a good variable, because an extremely large item might crowd out other objects, while some sub-features of its shape (corners, edges, …) might simultaneously be misinterpreted as having some significance. Moreover, it should be noted that not all the surfaces of an object are visible at all times. However, simply letting the user zoom in could solve many of these problems. The enlarged object then becomes isolated from the overall context, while some of the intrinsic dimensions expand in inner detail and behave more like extrinsic dimensions.
Figure IV-12 A link-object (here possessing the information of the first link), first viewed from further away; the right image shows the change of behaviour as more detailed information is displayed when the object is examined more closely.
Another possible technique is that of unfolding. When an object unfolds, its intrinsic dimensions open up to form a new coordinate system, in fact resulting in a new three-dimensional space.27
27
This hierarchical scheme is the same technique that, for instance, the Windows operating system uses when a small icon is clicked and opens into a larger window or data field containing new icons.
VR/search
In the case of the VR/search-program, 'clicking' the top part of a link-object can be interpreted in the same 'unfolding' way. This action causes a new browser window to appear, which however does not contain a new virtual world, but the requested linked web page represented and 'possessed' by the clicked object.
Figure IV-13 The top part of the link-object is 'clicked' by the user: the requested website appears in a new browser window, ready for approval by the user. The link-object then gradually changes its colour, making the occurrence of this user interaction visible.
IV.5.2 Continuity
The X, Y, Z (and T) axes of any ordinary rectangular coordinate system are each understood to have the character of the real number line: they are infinitely divisible, monotonic, and support ordinary arithmetic operations. This number-line character generally forms the intuitive and functional basis for any representational mapping technique. But it must not be forgotten that, in order to determine the values of the mapped coordinates of any object, most of the available data first have to be ordered so that these variables can be treated as spatially interpretable number-line dimensions. The applied ordering techniques can be arbitrary, although the three best-known are naturally the alphabetical, geographical, and chronological classification systems. These more or less 'arbitrary' sets of aspects should be arranged skilfully if the cyberspace designer wants to create a comfortable, active, and navigable three-dimensional database.
VR/search
As already mentioned, the values of file size and file date determine the two-dimensional coordinates of each link-object. First, the two extreme maximum (coordinate 100) and minimum (coordinate 0) size and date values of the whole collection are found, after which the remaining eight values are interpolated between them. The ten sizes and dates are thus ordered by magnitude and chronology respectively. This approach results in logical and spatial relationships between the final locations of the objects, instead of the arbitrary ones that would result if, for instance, sorted values were mapped onto the intersections of a fixed, regular grid (coordinates 0-10-20-30-…).
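A small sketch of this interpolation step (the file sizes are invented; dates are treated identically):

    #!/usr/bin/perl
    # Sketch: normalise ten values so that the minimum maps to coordinate 0,
    # the maximum to 100, and the rest are linearly interpolated.
    use strict;
    use List::Util qw(min max);

    my @sizes = (3, 120, 45, 8, 77, 15, 230, 52, 19, 96);   # invented values (kB)
    my ($lo, $hi) = (min(@sizes), max(@sizes));

    my @x = map { 100 * ($_ - $lo) / ($hi - $lo) } @sizes;
    printf "size %4d kB -> X = %6.2f\n", $sizes[$_], $x[$_] for 0 .. $#sizes;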
IV.5.3 Limits
"Will cyberspace have edges to blackness, walls of final data? Or will it be endless? If the latter, how? … Might it be possible to present cyberspace phenomenally as a four-dimensional sphere, where striking out in any (three-dimensional) direction brings one eventually back to where one started?"28
Benedikt himself proposes conceptualising cyberspace as an abstractly glued two-torus. This rather difficult term describes the fact that, first, the vertical dimension is seen as open-ended, and secondly, that any continuous movement in the horizontal plane will ultimately return to the initial starting position. The user, however, never sees the torus itself but perceives instead a terrestrial geometry of a plain, a horizon, and a sky.
VR/search
The language of VRML is fundamentally specified to represent an infinite space. This means that any traveller is able to move everywhere, and thus endlessly, without any standard pre-programmed restriction of the application. On the other hand, certain programmable nodes can be specified to represent a (never reachable) background that can be seen above an abstract ground surface, which in fact also constitutes the horizon. The data space built by the VR/search-program, however, is understood to represent a very small part of the unimaginably large collection of possible Internet links, and the user thus has the freedom to wander infinitely far away from the visualisation. Furthermore, the underlying created surface is conceived a little bigger than the floating data grids within which the link-objects are allowed to be placed. Although this feature is not implemented in the program, it is designed this way so that additional search requests could be represented as well, next to or within the first visualisation. In this way, a whole landscape of data grids containing personally preferred links could be made, and each surface could then represent certain subjects and personal interests.
IV.5.4 Density
How much space is there in space? Benedikt uses a theoretical insight to clarify and quantify this particular problem. First, he defines space_o (space-over), the space of varying amount, and space_u (space-under or space-uniform), a certain underlying, absolute and homogeneous space. The density of a three-dimensional 'space-in-space' can then be written as:

D(3) = space_o / space_u
As cyberspace becomes more and more complex, the range of scales at which a user can operate could be increased, so that the density of information per volume unit of (cyber)space would expand as well. Increasing density, however, cannot be accomplished without some technical difficulties, which nearly all experienced virtual reality users have most probably noticed before. For instance, any dense virtual environment is surrounded by a strange phenomenon that could effectively be described as a 'reverse gravity field': when a user approaches a group of complex objects, the speed of his or her motion becomes gradually slower.
28
BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London,
1991, p. 152
The explanation can naturally be found in the finite computational speed, which depends on the frame display rate, the level of detail displayed, and the amount of 'information increase' with each frame. When a maximum of smoothness and imitation of nature has to be achieved, the solution of adaptive refinement is chosen. This means that the level of detail of an approached object automatically and gradually increases. However, applying this technique as a norm would violate the next principle.
3. Principle of Indifference (PI)
The felt realness of any world depends on the degree of its indifference to the presence
of a particular ‘user’ and on its resistance to his/her desire.29
This principle is primarily based on a simple observation of situations in reality, where mysterious complexity is often characterised by ignorance of, and indifference to, the actions of its perceiver. It can be described as the characteristic commonly known as 'life goes on whether or not you are there'. It is, in fact, one of the powers of everyday reality, in which people become curious about certain developments and have to adapt themselves to the flow of data, transactions, and situations. Ultimately, it is the argument for the independent existence of virtual worlds as in the thoughts of William Gibson, in which the user is finally able to believe in the relevance and continuity of the witnessed actions and events. Nevertheless, this principle should be used with care, and a balance must be found to design cyberspaces both indifferent and responsive, both beyond and for the individual.
Figure IV-14 Left image: a dark surface cuts the link-object. This view offers the user an idea of the relevance this object possesses (which is obviously a little higher than the corresponding number that appears in the lower left corner of the screen, here 50%). On the right, the user moves underneath the surface and is now able to conclude that links 8 and 9 have a lower relevance than the level at which the surface is set (70%), while link 1 obviously has a higher one.
VR/search
Most importantly, the user is able to request any desired combination of search items, so that the resulting VR/search-world becomes individually determined. But even an unchanged input does not necessarily produce one single defined world, as the resulting links obtained by the search engine are not fixed in time themselves. In the world itself, some small or continuous actions take place without any triggering by the user. On the other hand, the user has the capability to control parts of the world by means of a provided user interface. Moreover, the user is able to move a horizontal flat surface in the environment, which visually tries to clarify the relationship between the vertical dimension and the relevance of the represented links.
29
BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London,
1991, p. 160
Questions can be raised about which solution would prove most suitable if the user, instead of the system's sensors, were to govern the level of detail, or if the movement velocity were held constant, thereby releasing it from the so-called gravitational grip. Then, limiting the amount of new object information per frame could result in a consistent realm where 'phenomenal immensity follows information density', and where certain laws of information begin to create a new spatio-temporal physics. This last solution is stated in the next principle.
4. Principle of Scale
The maximum (space_o) velocity of user motion in cyberspace is an inverse, monotonic function of the complexity of the world visible to him.30
Benedikt compares this principle with the characteristics of a traditional Japanese garden, in which miniaturised elements reveal their detail only from very close by. Partial views direct the offering of new information, while various spatial elements (bridges, stones, obstructions, …) slow down the movement of the viewer. Nevertheless, the viewer feels powerful, as the motion itself makes a difference to what can be seen. This is in contrast to, for instance, the enormous, empty halls favoured by the Romans, which, with their visual simplicity and lack of visual change, represent a near-zero gain of information.
VR/search
In this program, some precautions are taken to prevent too many moving objects, complex textures, proximity sensors, touch buttons, and other elements from appearing at the same time, an event that would demand a very large amount of computing time. For instance, the user interface, which actually contains many sensors, only exists (and is thus only computed) in the programmed scene structure once the user has touched a certain button. Furthermore, the link-objects reveal an extra amount of more detailed information only when the user approaches them closely enough to perceive it. Meanwhile, other informational elements that become too large to be understood correctly disappear from the user's view. This technique is conceived for two reasons: to decrease the number of objects to be calculated by the computer, and to avoid any oversupply or overlapping of the data seen by the user. On the other hand, it should be noted that standard VRML nodes (such as Box, Sphere, ElevationGrid, IndexedLine, …) were used as much as possible, instead of complex geometries that can only be generated and programmed via uneconomic, large data files derived from non-intelligent data translators.31
IV.5.5 The Remaining Principles
5. Principle of Transit (PT)
Travel between two points in cyberspace should occur phenomenally through all
intervening points, no matter how fast, and should incur costs to the traveller
proportional to some measure of distance.32
The concept of 'cost' in this context is left open to some interpretation (for example loss of resolution, loss of range of view, loss of smoothness of motion, …), although it seems reasonable to identify it with the notion of time. This Principle of Transit may then seem rather unnecessary, as one of the main advantages of network computing is certainly
30
BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London,
1991, p. 162
31
For instance, AutoCAD-to-VRML translators change a rather simple file structure into a very large amount of abstract and inefficient numerical data.
32
BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London,
1991, p. 168
the almost instant access possible to every file, document, and program one is interested in. However, Benedikt offers four different considerations to argue for the validity of this principle.
§ Access is never really instant. It always takes some time to search and locate information and to send it to the distant user. These delays could thus be made proportional to a determinate ‘distance’ in cyberspace, and even be used to show and experience the route in between.
§ Navigating around file structures, selecting paths, accessing different and distant computers, and so on constitutes a good deal of the pleasure of computing. This activity in fact demands human imagination, and ultimately creates a kind of (for computer fans) addictive ‘environment’, for which various mental geographies can be designed and physically managed. Benedikt clearly wants to preserve the orienting, world-building tendency possessed by human beings, together with the fundamental concepts of distance and velocity. Such techniques should then be implemented in place of abstract structures such as menus, hierarchies, and graphs, which force users to remember codes and consult manuals in order to ‘hop’ (instead of ‘slide’) from one location to another (a VRML sketch of such a ‘slide’ follows after this list).
§ Being in transit for significant periods of time in relatively public areas can be considered useful. Benedikt is convinced that, in between tasks both spatially and temporally, a person is open to ‘accident’ and ‘incident’. He refers hereby to coincidental meetings, from hallways to airports, which can be considered essential when an interpersonal network is being formed.
§ The process of progressive revelation inherent in closing the distance between self and object, and the narrative of travel itself, are important. Without them, destinations would all be ‘certain’, and notions such as time, history, and the unfolding of situations would collapse, existing thereafter in the physical world only.
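As announced above, a minimal sketch of a ‘slide’ in VRML 2.0, with hypothetical node names and coordinates: clicking a destination link starts a timed interpolation of the viewpoint through all intervening points, and the TimeSensor’s cycleInterval plays the role of Benedikt’s transit ‘cost’, to be chosen proportional to the distance covered.

#VRML V2.0 utf8
# A ‘slide’ instead of a ‘hop’: the viewer is moved through all
# intervening points rather than teleported.
DEF DEST_LINK TouchSensor { }                # grouped, in a full scene,
                                             # with the link's geometry
DEF CLOCK TimeSensor { cycleInterval 8.0 }   # 8 s of transit ‘cost’
DEF PATH PositionInterpolator {
  key      [ 0, 1 ]
  keyValue [ 0 1.6 0,  40 1.6 -30 ]          # origin and destination
}
DEF VIEW Viewpoint { position 0 1.6 0 }
ROUTE DEST_LINK.touchTime    TO CLOCK.set_startTime
ROUTE CLOCK.fraction_changed TO PATH.set_fraction
ROUTE PATH.value_changed     TO VIEW.set_position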
Figure IV-15 Two alternative representations of links inside the VR/search world.
On the other hand, Benedikt proposes that certain regions in cyberspace might have a number of designated transfer stations. These elements should offer users a very quick, blind, and easy means of transportation in which the Principle of Transit is suspended. When such a cyberspace is entered, the user should appear in a so-called port of entry, which should resemble its real-world counterparts such as airports and train stations. These ports should function as geographic, cultural, and economic landmarks, offering users an orientation point in their exploration of the virtual world. For Benedikt, there should also be a special kind of port, namely the gateway, capable of connecting users to parallel, perhaps proprietary, cyberspaces. The three systems mentioned each have their own sets of protocols and are able to overlap, coincide, grow, and connect as required. Finally, leaving cyberspace could be accomplished more or less instantly, or by means of a kind of ‘autopilot’, which offers the user a quick view of the logic of cyberspace on the way back to the port of entry.
Navigation Data and Destination Data
Navigation data is that class of information that orients users in time and space, in location and direction, by means of addresses, instructions, or warnings; it is, in effect, the tool that organises users in spatio-temporal terms. Destination data is that class of information that in some sense satisfies, as it answers some kind of question or promise. It may take the form of text, an image, a face to speak to, a piece of music or code, a diagram, and so on: it is that information which has to be judged by the user. Navigation and destination data always appear together, and they can even be transformed into one another. The amount of available information of each category varies over time and with the user’s actions. For instance, continuously choosing items on a menu-driven interface generally decreases the amount of displayed navigation data, until destination data fills the screen.
Figure IV-16 In the left image, the user has to judge a large amount of navigation data. When a certain object is finally reached, a majority of destination data is offered, which ‘waits for’ or ‘expects’ further actions from the user.
VR/search
§ Navigation data: the X- and Z-axes are clarified by means of a clickable legend on the ground surface, so that an object-link’s coordinates (and thus its size and date) can be roughly estimated. Furthermore, an always-visible text informs the user about the vertical position of the adjustable horizontal cutting surface, so that the object-link’s height (and thus its relevance) can be read spatially from these two relationships. It should also be noted that the conceptual ground surface was chosen for better orientation and cognitive estimation of the distances between the objects, as users seemed to encounter some difficulties when doing this in a completely black and empty space.
§ Destination data: when the user approaches an object-link closely enough, or moves the pointer over it, detailed information appears on its surface and on the VRML interface respectively (a minimal sketch of this mechanism follows below). This information is then completely available to be judged against the user’s requirements.
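The destination-data mechanism can be sketched with standard VRML 2.0 nodes as follows; this is not the actual VR/search source, and the sensor region and sample values are hypothetical:

#VRML V2.0 utf8
# Destination data on approach: a ProximitySensor around an
# object-link toggles a Switch holding the detailed information.
DEF NEAR ProximitySensor { size 8 8 8 }      # hypothetical region (m)
DEF INFO Switch {
  whichChoice -1                             # hidden until approached
  choice Shape {
    # sample destination data; real values come from the search result
    geometry Text { string [ "42 kB - 12.03.1998 - relevance 0.87" ] }
  }
}
DEF SHOW Script {
  eventIn  SFBool  visible
  eventOut SFInt32 choice
  url "javascript: function visible(v) { choice = v ? 0 : -1; }"
}
ROUTE NEAR.isActive TO SHOW.visible
ROUTE SHOW.choice   TO INFO.whichChoice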
6. Principle of Personal Visibility (PPV)
(1) individual users in/of cyberspace should be visible, in some non-trivial form, and at
all times, to all other users in the vicinity, and (2) individual users may choose for their
own reasons whether or not, and to what extent, to see/display any or all of the other
users in the vicinity.33
Although this principle may seem a direct threat to the notion of privacy, Benedikt is convinced that it is not, since he has in mind only a very minimal degree of visibility. For instance, small coloured spheres might represent persons in cyberspace, each indicating merely its position, its movement and, of course, its presence (a minimal sketch of such a presence follows below). No restrictions are mentioned regarding the channel that should be used for interpersonal contact, such as voice, video, text, gesture, VR-touch, etc. User-identity information is obviously not an essential part of this minimal presence, and anonymity is thus considered acceptable. The first rule of the principle is not only aimed at potential hackers and dangerous ‘sys-ops’, but is actually based on the assumption of democracy and accountability, or, in Benedikt’s words, on the need “to be there, in some deep sense, for others.” Other, less hypothetical reasons are also mentioned: the possible social phenomenon called grouping behaviour, for instance, or the way the appearance of groups of many little spheres might draw the attention of many other users. A good part of the information available in cyberspace then becomes apparent in people, and even is people. The second rule of this principle is meant to give the user the capability to work, and feel, alone when other persons behave in some distracting manner. The user might even possess the power to select who should be made visible, by criteria such as proximity, task orientation, sex, origin, interest, and so on.
33 BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London, 1991, p.177
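Such a minimal presence is easily expressed in VRML 2.0. The sketch below uses hypothetical names and values, and assumes some multi-user protocol, outside the current VRML standard, to update the position:

#VRML V2.0 utf8
# A user's ‘minimal presence’: a small coloured sphere indicating
# position, movement, and presence only. Identity stays anonymous.
DEF USER_42 Transform {
  translation 12 1.6 -4          # would be updated as the user moves
  children Shape {
    appearance Appearance {
      material Material { diffuseColor 0.9 0.2 0.2 }   # arbitrary colour
    }
    geometry Sphere { radius 0.3 }
  }
}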
VR/search
As this program is built up entirely of VRML objects, it is also bound by their restrictions. Not only did this mean that the programmer had to deal with some instabilities of this new programming language, but it also meant that a VRML world shared by multiple users is still impossible in the available standard. It should be noted, however, that this field of research is developing at a rapid pace and that some commercial applications already exist that overcome this problem. Most probably, this will be addressed in the next VRML specification.
7. Principle of Commonality
Virtual places should be ‘objective’ in a circumscribed way for a defined community of
users.34
In other words, this principle requires that all users in a certain domain at a given time see and hear largely the same things, or at least subsets of them. Obstructions (think of shadows, for example) might differ between the views of two viewers, and it is permitted to bring other features into the world as well, depending on the feeling, experience, and knowledge brought into the situation. For instance, one user might reach for a cigarette that is, in another’s world, in fact a pen; one might sit in a leather chair that is, for another user, an ordinary wooden bench. In short, the two worlds of the users A and B only have to be subsets of an overall domain D. The things experienced by both users are then called common, and occur only within the intersection of the worlds perceived by A and B.
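Stated in set notation (a direct transcription of the rule, not Benedikt’s own formulation): if W_A and W_B denote the worlds perceived by A and B, then W_A ⊆ D, W_B ⊆ D, and Common(A, B) = W_A ∩ W_B.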
Figure IV-17 A simple representation to clarify the Principle of Commonality.
34 BENEDIKT, MICHAEL, Cyberspace: Some Proposals, in Cyberspace: First Steps, MIT Press, London, 1991, p.180
Benedikt broadens this theoretical principle to encompass multiple users and the technical aspects of the possible communication between them, individually or in groups. Furthermore, he theoretically investigates the consequences of not allowing users to see worlds of which they are not part, thereby creating the capability to make ‘private’ worlds, as opposed to the always visible and commonly shared public worlds. Moreover, he recommends a monotonic relationship between the relative volume of space commonly visible to any two users and the bandwidth of the possible communication between them.
IV.5.6 Conclusion
It should now be obvious that, even if Benedikt’s rules were strictly followed, many different solutions to a given visualisation problem would remain possible. Benedikt clearly imagines a peaceful and commonly shared cyberspace in which all users possess the same protocols. He seems to believe in all the positive opportunities that future cyberspaces will offer their users and designers, although it is certainly remarkable how he consistently avoids mentioning most of the drawbacks that would result if his principles were implemented integrally. It should consequently be noted that most probably not all of Benedikt’s recommendations will be realised in the cyberspaces of the future, as most of these rules require some common controlling protocol that everyone has to agree upon. It would, for instance, be more than valuable to investigate the positive or negative consequences that could arise if other cyberspace travellers were to apply different principles in this vast and shared realm. Nevertheless, most of his insights remain highly interesting, mainly because he was one of the first who dared to write down initial cyberspace design rules in such a direct manner.
IV.6 Conclusion
Many different approaches to, and original ideas about, how information could be visualised three-dimensionally have been discussed. It is obvious that if architecture were applied further in this field, the new materials of the cyberspace architect would indeed consist of code, bandwidth, and many other electronic tools. In this view, the VR/search program also served as an important argument, demonstrating both Benedikt’s cyberspatial principles and the still rather visionary character of that statement. For instance, as the computation speed of the program rose gradually and considerably during its development, the first, and maybe most important, design constraint was almost within reach: the usability of a cyberspace application built with the technology available today.
“Architects of the twenty-first century will shape, arrange, and connect spaces
(both real and virtual) to satisfy human needs. They will still care about the
qualities of visual and ambient environment. They will still seek commodity,
firmness, and delight. But commodity will be as much a matter of software
functions and interface design as it is of floor plans and construction materials.
Firmness will entail not only the physical integrity of structural systems, but also
the logical integrity of computer systems. And delight? Delight will have
unimagined new dimensions.”35
35 MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press, Massachusetts, 1995, p.105
Conclusion
And indeed, we may certainly be considered “witnesses to an extraordinary era that will no doubt be remembered in history as an appropriately revolutionary development to accompany the new millennium”, as was already promised in one of the very first quotes of this text. Hopefully, that statement has gained at least some credibility through the wide-ranging content presented here.
Program
The proposed VR/search program should certainly not be considered a final statement on the three-dimensional visualisation of search results. On the contrary, many different approaches are possible to represent the various layers possessed by the offered information. Furthermore, this program was conceptualised more as an intuitive exploration than as a truly ‘workable’ application. Some of VR’s characteristics were found remarkable, such as the fact that the spatial implications of certain digitised design decisions could be experienced almost immediately. As the platform used, consisting of VRML and Netscape Communicator 4.04, could certainly not be considered ‘stable’, criteria other than the aesthetic one, such as workability, user speed, and computing effort, had to be strictly evaluated as well. Moreover, this program also demonstrates, to a certain degree, the importance of coding knowledge when a design has to be made in a virtual environment and, for instance, dynamic actions have to be included. This means that, despite the existence of user-friendly form-generating VRML applications such as SGI’s Cosmoworlds, the implementation of animated and triggered behaviour, as well as the objects themselves, had to be programmed ‘by hand’, which required a complete understanding of the scene hierarchy and object features. Thus, although these ‘interactive user interface development’ programs will certainly develop further into more sophisticated versions, it cannot be ignored that an understanding of programming code will most probably always be an advantage to architecturally inspired minds concerned with the design of cyberspace.
Future
It seems only logical that the future of cyberspace cannot be predicted with great certainty. As it is built up almost completely out of dynamic information, its input, as well as the interrelated output, will change continuously. Cyberspace is then the resulting ‘hyper-dimensional’ framework left behind as a fluid, floating, and unstable representation of the numerous abstract thoughts of those people who try to conceptualise, imagine, and visualise it. Undeniably, architecture plays an essential role in the whole shift towards this informational and cyberspatial realm. Although it is bound to the laws of the marketplace and to the principles of its own history and theory, it still retains the capacity to innovate within a margin of action that is free from standardisation and regulation. The aim of this text was thus to show part of the wide range of opportunities that architectural practice and theory could grasp if the traditional notion of architecture were broadened to include the structures and expectations of the digital future. Ultimately, this new movement could then provide and inspire society with a new kind of imagination, at least the spirit of which this text was meant to convey.
“ARCHITECTS OF ALL DIMENSIONS, THERE IS AN IMMENSE AMOUNT OF WORK TO BE
DONE!”
(Ole Bouman - RealSpace in QuickTimes, p.55)
References
Books
BENEDIKT, MICHAEL (Ed.), Introduction & Cyberspace: Some Proposals, in Cyberspace: First
Steps, MIT Press, London, 1991, pp.1-25 & pp.119-224
BEAUBIEN, P. MICHAEL, Playing at Community: Multi-User Dungeons and Social Interpretation
in Cyberspace, in STRATE LANCE, JACOBSON RONALD & GIBSON B. STEPHANIE (Ed.),
Communication and Cyberspace: Social Interaction in an Electronic Environment, Hampton Press,
New Jersey, 1996, pp.179-188
BOUMAN, OLE, RealSpace in QuickTimes: Architecture and Digitization, Rosbeek, Nuth, 1996,
pp.61, http://www.nai.nl/www_riq/RiQ_essay.html
BOUMAN, OLE, Quick Space in Real Time, in Archis, No.4 1998, pp.52-55
BRICKEN, MEREDITH, Virtual Worlds: No Interface to Design, in BENEDIKT, MICHAEL (Ed.),
Cyberspace: First Steps, MIT Press, London, 1991, pp.363-381
TOMAS, DAVID, Old Rituals for New Space: Rites de Passage and William Gibson’s Cultural Model of Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp.31-47
DUISBERG, CHRISTOPHER & GUIHAND, MARC, …growing Buildings out of Data Fields?, in
transForm, No.2, January 1998, pp.65-69
EISENMAN ARCHITECTS, The Virtual House, in Dialogue, No.9, November 1997, pp.56-60
EISENMAN, PETER, The Time of Serra’s Space: Torquing Vision, in ANY, How the Critic Sees, No.21, pp.56-62
ESCOBAR, ARTURO, Welcome to Cyberia: Notes on the Anthropology of Cyberculture, in
SARDAR, ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the
Information Superhighway, Pluto Press, London, 1996, pp.111-137
FRAZER, JOHN, An Evolutionary Architecture, E.G.Bond Ltd., London, 1995, pp.117,
http://www.gold.net/ellipsis/evolutionary/evolutionary.html
GIBSON, WILLIAM, Biochips, (Dutch translation of Count Zero, 1986), in De Cyberpunkromans,
Meulenhoff-*M, Amsterdam, 1994, pp.277-542
GIBSON, WILLIAM, Neuromancer, (first published: Victor Gollancz Ltd., London, 1984), HarperCollinsPublishers, London, 1995, pp.320
GIBSON, WILLIAM, Virtueel Licht, (Dutch translation of Virtual Light), Meulenhoff-*M,
Amsterdam, 1994, pp.318
GIBSON, WILLIAM, Academy Leader, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp.27-29
GLOOR, PETER, Elements of Hypermedia Design: Techniques for Navigation and Visualisation in Cyberspace, Birkhäuser, Berlin
GRAHAM, STEPHEN & AURIGI, ALESSANDRO, Urbanising Cyberspace? The Nature and
Potential of the Virtual Cities Movement, in CITY, Nr.7, May 1997, pp.18-39
HARTMAN J. & WERNECKE J., The VRML 2.0 Handbook: Building Moving Worlds on the Web,
Addison-Wesley, Amsterdam, 1996, pp.412, http://vrml.sgi.com/handbook
HEIM, MICHAEL, The Erotic Ontology of Cyberspace, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp.59-80
IEEE 1997, Virtual Reality Annual International Symposium, IEEE Computer Society Press, Brussels, 1997
KINNEY, JAY, Is There a New Political Paradigm Lurking in Cyberspace?, in SARDAR,
ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the Information
Superhighway, Pluto Press, London, 1996, pp.138-153
KOEKEBAKKER, OLOF, Gevormde Ruimte en Bewegende Huid, in Items, No.5, 1996, pp.50-54
KOOLHAAS, REM, SMLXL, 010 Publishers, Rotterdam, 1995
LECUYER, ANNETTE, Building Bilbao, in Architectural Review, No.12, December 1997, pp.42-45
LYNN, GREG, Blobs, or why Tectonics is Square and Topology is Groovy, in ANY, No.14, 1996, pp.
59-61
MATSUBA, STEPHEN & ROEHL, BERNIE, Using VRML, Que Corp., Indianapolis, 1996
MITCHELL, WILLIAM J., City of Bits: Space, Place, and the Infobahn, MIT Press,
Massachusetts, 1995, http://mitpress.mit.edu/e-books/City_of_Bits/
MORNINGSTAR, CHIP & FARMER, F. RANDALL, The Lessons of Lucasfilm’s Habitat, in
BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp.273-300
NEGROPONTE, NICHOLAS, Being Digital, Hodder & Stoughton, London, 1995, pp.243
NOVAK, MARCOS, Liquid Architectures in Cyberspace, in BENEDIKT, MICHAEL (Ed.),
Cyberspace: First Steps, MIT Press, London, 1991, pp.225-254
PHELAN, JOHN M., CyberWalden: The Inner of Interface, in STRATE LANCE, JACOBSON
RONALD & GIBSON B. STEPHANIE (Ed.), Communication and Cyberspace: Social Interaction in
an Electronic Environment, Hampton Press, New Jersey, 1996, pp.39-48
SANDERS, KEN, The Digital Architect: a common-sense Guide to Using Computer Technology in
Design Practice, John Wiley & Sons, New York, 1996
SALA, LUC & BARLOW, JOHN P., Virtual Reality: De Metafysische Kermisattractie, SALA
Communications, Amsterdam, 1990
SARDAR, ZIAUDDIN, alt.civilisations.faq: Cyberspace as the Darker side of the West, in SARDAR,
ZIAUDDIN & RAVETZ, JEROME R. (Ed.), Cyberfutures: Culture and Politics on the Information
Superhighway, Pluto Press, London, 1996, pp.14-41
SCHMITT, GERHARD, Architectura et Machina : Computer Aided Architectural Design und
Virtuelle Architektur, Vieweg, Wiesbaden, 1993
SCHMITT, GERHARD, Architektur mit dem Computer, Vieweg, Wiesbaden, 1996,
http://caad.arc.ethz.ch/projects/acm
SKEATES, RICHARD, The Infinite City, in CITY, Nr.8, December 1997, pp.6-21
STONE, ALLUCQUERE ROSANNE, Sex, Death and Architecture, in Any, No.3, Nov/Dec, 1993,
pp.34-40
STONE, ALLUCQUERE ROSANNE, Will the Real Body Please Stand Up?: Boundary Stories about
Virtual Cultures, in BENEDIKT, MICHAEL (Ed.), Cyberspace: First Steps, MIT Press, London,
1991, pp.81-118
STRATE LANCE, JACOBSON RONALD & GIBSON B. STEPHANIE (Ed.), Surveying the
Electronic Landscape: An Introduction to Communication and Cyberspace, in Communication and
Cyberspace: Social Interaction in an Electronic Environment, Hampton Press, New Jersey, 1996,
pp.1-22
TAYLOR, C. MARK & SAARINEN, ESA, Imagologies: Media Philosophy, Routledge, London,
1994
TAYLOR, C. MARK, De-signing the Simcity, in Any, No.3, Nov/Dec, 1993, pp.10-18
TOLHURST W., PIKE M., BLANTON K., Using the Internet, Special Edition, Que Corp.,
Indianapolis, 1994
TSCHUMI, BERNARD, Ten Points, Ten Examples, in Any, No.3, Nov/Dec, 1993, pp.40-44
TUFTE, EDWARD R., Envisioning Information, Graphics Press, Cheshire, 1990
VACCA, R. JOHN, VRML: Bringing Virtual Reality to the Internet, AP Professional, London
VAN BERKEL, BEN & BOS, CAROLINE, Mobile Forces, Monograph, Ernst & Sohn, Berlin 1994
WEXELBLATT, ALAN, Giving Meaning to Place: Semantic Spaces, in BENEDIKT, MICHAEL
(Ed.), Cyberspace: First Steps, MIT Press, London, 1991, pp.255-271
WHITTLE, DAVID B., Cyberspace: The Human Dimension, W.H. Freeman Co., New York, 1996
WOOLLEY, BENJAMIN, Virtual Worlds: a Journey in Hype and Hyperreality, Blackwell
Publishers, Oxford, 1992, pp.255
ZAMPI, GIULIANO & MORGAN, LLOYD CONWAY, Virtual Architecture, McGraw-Hill,
London, 1995
WWW
BELL, JONATHAN, Communication Technology and Architecture, 1996,
http://ctiweb.cf.ac.uk/dissertations/virtual_architecture/contents.html
BOYER, M. CHRISTINE, Cybercities, http://kubrick.ethz.ch/fake_space/reader/cybercities1.html
CAMPBELL, A. DACE
• Vers une Architecture Virtuelle..., 1994,
http://www.hitl.washington.edu/people/dace/portfoli/arch560.html
• Design in Virtual Environments using Architectural Metaphor,
http://ftp.hitl.washington.edu/publications/campbell/document/
CASPERSEN, TORBJOERN, Architecture of Cyberspace, 1995,
http://kit.trdkunst.no/~casper/diplom/thesisindex.html
CHANG, TERRENCE, A Model for Organizing Architectural Design Information, 1995,
http://www.caad.ed.ac.uk/~salih/thesis/thesis.htm
ETHZ
• Babylon S M L XL, http://caad.arch.ethz.ch/~wenz/babylon
• TRACE, http://caad.arch.ethz.ch/trace
• AQUAMICANS, http://caad.arch.ethz.ch/projects/aquamicans
• !hello_world?, http://caad.arch.ethz.ch/hello_world
• Informationslandschaft, http://alterego.arch.ethz.ch/infotmationslandschaft
• Raumgeschichten, http://alterego.arch.ethz.ch/raumgeschicht
• Virtual House, http://virtualhouse.ch
• ZIPBau, http://caad.arch.ethz.ch/visits/zipbau.html
• ZIPBau, http://caad.arch.ethz/research/ZIPBau/
TANAKA, JUN, From (im)possible to Virtual Architecture,
http://ziggy.c.u-tokyo.ac.jp/files/Virtual.html
McMILLAN, KATE, Architecture and the Broader Community, May 1994,
http://www.arch.unsw.edu.au/subjects/arch/specres2/mcmillan/
NOVAK, MARCOS
• Transmitting Architecture, http://www.ctheory.com/a34-transmitting_arch.html
• Trans Terra Form,
http://www.t0.or.at/~krcf/nlonline/nonMarcos.html,
http://flux.carleton.ca/SITES/PROJECTS/LIQUID/Novak1.html
OFLUOGLU, SALIH, Cyberspace Architecture: Slouching towards Babylon, 1995,
http://www.accentgrave.com/thesis.html
RALPH, D. RANDY, Search Engines: Indexes, Directories and Libraries, April 1997,
http://www.netstrider.com/search/directory.html
YOUNG, PETER, Three Dimensional Information Visualisation, 1996, (also published in Computer
Science Technical Report, No. 12/96), http://www.dur.ac.uk/~dcs3py/pages/work/documents/lit-survey/IV-Survey/
AUTHOR UNKNOWN
• The Success Story between Gehry and Dassault Systèmes’ Software Program CATIA,
http://www.catia.ibm.com/custsucc/sufran.html
• Cyberspace: the new Jerusalem,
http://marlowe.winsey.com/~rshand/streams/gnosis/cyber.htm
• Cyber23: Virtual Architecture: Liquid Architectures, Interview with Marcos Novak,
http://www.best.com/~cyber23/virarch/novak.htm
Recommended Sites
Architecture and Technology, http://www.archi.org
ETHZ CAAD Research Group, http://caad.arch.ethz.ch
Javascript Tutorial, http://caad.arch.ethz.ch/~stouffs/javascript/
Marcos Novak, http://www.aud.ucla.edu/~marcos/marcos.html
MIT MediaLab, http://www.media.mit.edu/
Nederlands Architectuur Instituut (Nai), http://www.nai.nl
Post-Graduate Projects at ETHZ, http://caad.arch.ethz.ch/teaching/nds
SGI’s Cosmoworlds, http://www.cosmo.sgi.com
Phase(x), http://space.arch.ethz.ch/ws97/
VRML Specification, http://webspace.sgi.com/moving-worlds/spec/
VRML Repository, http://www.sdsc.edu/vrml_repository/
VRML Version of a MUD, http://www.cybertown.com
and much more…