Frontiers in Conceptual Navigation for Cultural Heritage
Kim H. Veltman
Toronto: Ontario Library Association, 2000

Transcription
Table of Contents1                                            ii
Foreword                                                      iii
Introduction                                                  iv
Acknowledgements                                              vii

Chapter
1   Libraries                                                 1-18
2   Digital Reference Rooms                                   19-27
3   Search Strategies                                         28-40
4   Cultural Interfaces                                       41-91
5   New Knowledge                                             92-131
6   Epilogue                                                  132-46
List of Plates                                                147-48

Appendix
1   Key Elements of the SUMS-SUMMA Model                      149
2   Taxonomy of Information Visualization User Interfaces     150-51
3   Key Individuals in Human Computer Interface (HCI)         152-53
4   Information Visualization                                 154-59
5   Libraries                                                 160-76
6   Museums and Museum Networks                               177-87
7   Education                                                 188-96
8   Application Fields                                        197
9   Glossary                                                  198-99
10  Metadata in Individual Disciplines                        200-04
Notes                                                         205-61

1 This title page reflects the pagination of the original edition.
Foreword
This book is a collection of five papers, four of which began as keynote lectures at
conferences, the other as a workshop. In November 1996, the author was honoured to
give the fourth annual Cummings lecture. The ideas explored on this occasion were
developed for an Inaugural Address of the Ontario Library Association (OLA)
Superconference (Toronto, February 1997), which appears as chapter one in this
collection and explores some of the new roles for libraries in the digital age. The second
paper began as a keynote at the Second International Conference Cultural Heritage
Networks Hypermedia (Milan, September 1997), sponsored by the European
Commission, and specifically addresses a proposal to develop digital reference rooms as a
fundamental step for improving search strategies.1
A more detailed study of these search strategies is found in the third chapter, which grew
out of a keynote to the German chapter of the International Society of Knowledge
Organisation (ISKO, Berlin, October 1997). This paper appeared in abridged form in the
conference proceedings and as a regular article in the ISKO journal Knowledge
Organization, under the heading “Frontiers in Conceptual Navigation.” It appears here in
slightly modified form. The second part of that paper was developed for a workshop on
Cultural Interfaces at the Advanced Visual Interfaces Conference in L’Aquila (May
1998) and appears in slightly modified form as chapter four. The fifth essay was
prompted by an inaugural address at an international conference: Digital Euphoria? held
at the Siemens Nixdorf Museum (Paderborn, 28-30 October 1998), devoted specifically
to how computers are changing our approaches to knowledge. The ideas expressed in
these essays are also partly a reflection of ongoing studies on new developments for
Nortel’s Bell Northern Research Division. A small subset thereof is listed in the
appendices.
Postscript re: the 2004 online edition

When it was published the book served two purposes:
1) to introduce some new ideas about the organisation of knowledge in the digital
age. These ideas remain of interest and are the main reason for this online version
of the book.
2) to make professionals in the library world in Canada aware of the scope of
activities on the international scene. This explains the large number of web sites.
Since I am working on a new way to convey this information through a dynamic
database using the SUMS principles, there seemed no sense in updating the
original references to Internet sites. Hence many of these are likely to be outdated.

To avoid copyright problems I have not reproduced the 20 illustrations (pp. 199-218) in
the original publication.
Introduction
Many persons use computers as if they were electronic typewriters. Many persons still
assume that the so-called computer revolution is really only about scanning in books and
pictures so that we can see them on screen. These essays argue that much more is
happening and they provide some idea of the scope of international projects already
underway. Earlier knowledge and culture in libraries, museums and galleries are being
translated into digital form. This is leading to new roles for these institutions and will
potentially make an enormous heritage accessible to individuals anywhere. All this is
transforming our definitions of learning, transforming the ways we know and the very
meaning of knowledge itself.
These developments are fraught with many dangers. Some claim that there is now so
much information that we can no longer make sense of it. In this view, the age-old quest
for universal knowledge and understanding is a naive illusion. Some large corporations
assume that libraries, museums and galleries are merely another variant of companies: if
they are interesting they can simply be bought. This uninformed view overlooks the
reality that museums and libraries reflect the investment and collective memory of
hundreds, in some cases, thousands of years of collecting. The treasures of world cultural
heritage are infinitely more valuable than even the largest corporations. While
acknowledging these dangers, the essays that follow offer some concrete suggestions for
mastering the problem. For instance, the European Commission developed a
Memorandum of Understanding for Multimedia Access to Europe’s Cultural Heritage,
which in its latest form will be called MEDICI. This is designed to provide concrete
examples for a global initiative foreseen by G7 (8) pilot project five: Multimedia Access
to World Cultural Heritage.
Implicit in all these developments is the need for governments to make fundamental
policy decisions to keep culture in the public domain, a matter of great concern to the
European Commission and the European Parliament; and also the need to develop coherent,
international copyright agreements, a focus of the World Intellectual Property
Organization (WIPO). Some of these issues are explored in the first essay.
The essays also consider other problems: of standards for interoperability, the need for
tools for searching, structuring, using and presenting knowledge and questions of new
kinds of interfaces. Many assume that we shall soon be working entirely in
three-dimensional spaces. These essays argue that we need tools which permit us to move
systematically from two-dimensional to three- and ultimately n-dimensional spaces.
Some characteristics of a prototype System for Universal Multi Media Access (SUMMA)
are outlined. An essential characteristic of these new tools is that they will be linked with
a new kind of digital reference room.
From the time of the great library at Alexandria there has been a vision of collecting all
(written) knowledge in a single place. Recent attempts such as the British Library and the
Bibliothèque de France have demonstrated the limitations of this approach. At the other
extreme, a completely distributed system, as is presently the case with the World Wide
Web, is equally problematic. If everyone has their own rules for organizing knowledge,
and there is no common framework for translating and mapping among these rules, then
the whole is only equal to the largest part rather than to the sum of the parts.
Without a translation method, someone who hears a Dutch person speak of Luik, a
Belgian speak of Liège and a German speak of Lüttich, has no way of knowing that these
are all the same place. Nor will it occur to them that what an English person classes under
perspective might fit under drawing (dessein) in a French mind, or grasp something as
simple as the fact that San Giuseppe is Saint Joseph. Hence we need standardized authority lists of
names, subjects and places. This may require new kinds of meta-data.
Reference rooms contain the sources for such standardized authority lists. Reference
rooms are the traditional equivalents of search engines and structuring tools. Indeed the
reference rooms of libraries have served as civilization’s cumulative memory concerning
search and structure methods through classification systems, dictionaries, encyclopaedias,
book catalogues, citation indexes, abstracts and reviews. Hence, digital reference rooms
offer keys to more comprehensive tools. Thus the second essay focusses on such digital
reference rooms, which remain a recurrent theme throughout this book.
Our concern is neither a full inventory of the problems nor a magic box of solutions. The
chief purpose of these essays, rather, is to provoke thought about implications of these
changes. While very conscious that enormous efforts will be required, we are optimistic
in our assessment of the equally great potential benefits that loom ahead. The revolution
is about much more than great amounts of new information, very large databases of facts,
great repositories of new answers. It entails asking questions in new combinations and
even questioning our most basic assumptions.
For instance, already in Antiquity, again in the Renaissance, throughout the industrial
revolution and to the present, there have been basic assumptions about progress. In
technology it was assumed that comparative was preferable and superlative was the goal.
Machines that were bigger, faster, hotter were also better. Machines that were biggest,
fastest and hottest were best.
It is true that a cold car will not start. To this extent a warmer car is better. But if a car
becomes too hot, the carburetor overheats and under extreme conditions the car will melt.
So unless one is a wrecker trying to melt the cars for scrap metal, hottest is not always
better than hot. Similarly, in technology it was assumed that faster is better. Accordingly
an automobile that goes 100 kilometers/hour should be twice as “good” as an automobile
that goes 50 kilometers an hour. Hence, an auto that goes 200 should be four times as
good and a machine that goes 400 km/hr should be eight times as good and so on. What
this overlooks, of course, is that the value of speed depends almost entirely on context. In
a parking lot 400 km/hr would be disastrous.
The twentieth century has also shown that a small boat, which is safe, is better than a
Titanic that sinks, even though the latter may be more successful at the box office. A
small aircraft, which is safe, is better than some monster plane by Hughes that is unable
to fly properly. In an overcrowded metropolis where parking is difficult a small
Volkswagen is often better than a Lincoln Continental. Ants can often go where elephants
cannot. In the 1970’s such discoveries prompted a famous book, Small is Beautiful.
Ironically, most persons remain unaware of a seminal book written over a half century
earlier by D’Arcy Wentworth Thompson, who produced a fundamental framework for
these issues in his classic, On Growth and Form.2
Thompson observed, for instance, that an animal’s weight increases as the cube of its
linear dimensions, while the strength of its limbs grows only as the square. So a dog twice the size weighs eight times as much, which
makes it clear that there are natural limits to the size of legged animals and thus helps to
explain the problem of the dinosaurs. Thompson demonstrated that form and function
were intimately connected, long before the Bauhaus made form and function famous. He
showed that Nature was very adaptive in all its constructions. He implied that we have
much to learn from this.
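Thompson’s square-cube reasoning can be sketched in a few lines (a minimal illustration of the scaling argument, not code from the original; the function name is an invention for this sketch):

```python
# Square-cube law: scaling a body's linear size by a factor k multiplies
# cross-sectional (limb) strength by k**2 but volume, and hence weight, by k**3.

def scale(k, strength=1.0, weight=1.0):
    """Return (strength, weight) after scaling linear size by a factor k."""
    return strength * k ** 2, weight * k ** 3

strength, weight = scale(2)   # a dog "twice the size"
print(weight)                 # 8.0 -- eight times as heavy
print(weight / strength)      # 2.0 -- each unit of limb strength now bears twice the load
```

The widening gap between the cubed and squared terms is why legged animals face natural size limits.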
Nonetheless, many of the champions of the emerging global networks continue to follow
the old assumptions about progress. They assume that any advance will necessarily be a
bigger pipeline, bigger routers so that we can have faster connections with bigger
pipelines with more bandwidth. They happily assume that bigger is always better and
unhappily forget the stories of David and Goliath or Jack and the Beanstalk. To be sure
we do need faster machines and connections. But that is not enough. If it were, then
everything would be solved once we had search engines fast enough to check the full text
of everything. In fact, such a search engine would find us millions of uses of “freedom”
in the Library of Congress, but it would not make us free. Keywords give
isolated information out of context.
Access to knowledge, which deals with claims about information, requires more than
keywords. Systematic access requires integrating tools for searching, structuring, using
and presenting knowledge linked with digital reference rooms in order that one has:
a) standardized (authority files for) names, subjects, places with their variants;
b) knowledge in context; c) multicultural approaches through alternative classifications;
d) geographic access with adaptive historical maps; e) views of changing terms and
categories of knowledge in order to access earlier collections; f) common interfaces for
libraries, museums and knowledge in general; g) adaptive interfaces for different levels
of education and degrees of experience as well as h) seamless links with learning tools.
While the essays that follow are not exactly a blueprint for such adaptive tools, they
provide a context for understanding why such tools are a necessity if we are truly to reach
new levels of knowledge and understanding at a time when information is expanding so
rapidly. In the tradition of Montaigne these essays frequently resemble questions more
than they do answers. In a world where our daily lives are often spent in (computer) bug
warfare, these essays offer visions of what could happen if all the machines, all the bits
and bytes were doing what we would like them to. Where would that bring us?
Hopefully, to a whole new set of questions about the potentials of structuring knowledge
such that even the limits of our intelligence can be augmented. In a globally distributed
knowledge network many new things are possible.
Acknowledgements
I am particularly grateful to the Ontario Library Association for providing me with an
office and for the honour of giving the fourth Cummings Lecture (November 1996),
which provided an initial stimulus for putting to paper the ideas in chapter one. Particular
thanks go to those who kindly read this and subsequent chapters, offering their
encouragement and helpful comments: Brian Bell, Larry Moore, Jeff Gilbert, who also
provided the statistics for figure 4, and Keith Medley. The preliminary statistics on
computers were generously provided by Linda Kempster, a world expert in the field of
storage technology. Chapter two benefited from the kind observations and
encouragement of Professore Alfredo Ronchi (Politecnico di Milano) and Mario Verdese
(DGXIIIb).
The third chapter is based on a paper dedicated to Professor Dr. Ingetraut Dahlberg, who
has very generously encouraged the author for the past fifteen years. I am very grateful
also to Deans Wiebe Bijker (Maastricht) and Hans Koolmees (IDM, Maastricht) for
provocative questions during a visit, which helped me to shape the discussions about
access versus presentation. I am grateful to Dr. Anthony Judge (UIA) and Heiner
Benking (Ulm) for challenging me to think more thoroughly about problems in moving
from two-dimensional to three-dimensional navigation. Mr. Benking kindly read the
manuscript, offered suggestions and provided a number of references. With respect to the
fourth chapter I am grateful to Professors Stefano Levialdi and Tiziana Catarci (Rome, La
Sapienza) for their generous encouragement. As mentioned above the fifth chapter grew
out of an opening keynote in Paderborn, for which honour I am grateful to Dr. Harald
Kraemer. I am grateful to Laurie McCarthy of the Lockheed Martin Advanced
Technology Center (Palo Alto), Dr. Flaig of the Fraunhofer Gesellschaft (Darmstadt) and
John T. Edmark of Lucent Technologies (Holmdel, NJ) for kindly sending videos of their
work.
The larger framework of this book has grown out of discussions with friends and
colleagues such as Dr. Rolf Gerling, Dipl. Ing. Udo Jauernig, Eric Dobbs, John Orme
Mills, O.P., and Professor André Corboz and many years of experience at research
institutions including the Warburg Institute (London), the Wellcome Institute for the
History of Medicine (London), the Herzog August Bibliothek (Wolfenbüttel), where Frau
Dr. Sabine Solf played an essential role, the Getty Center for the History of Art and the
Humanities – until recently the Getty Research Institute (Santa Monica), the McLuhan
Program in Culture and Technology at the University of Toronto and the Ontario Library
Association, where Larry Moore has been most encouraging. I am grateful to the
individuals at all of these centres. Finally I am grateful to members of my own team, who
have been both generous and supportive, notably, Rakesh Jethwa, Andrew McCutcheon,
Greg Stuart, Hugh Finnegan, John Bell, Elizabeth Lambden and John Volpe. The fine
diagrams in chapter three were patiently produced by Mr. Hameed Amirzada. This essay
has grown partly out of work for Eric Livermore (Nortel, Advanced Networks), and
Stuart McLeod (CEO, Media Linx). I thank both the individuals and their companies for
their support. Finally, I am profoundly grateful to the Ontario Library Association for
deciding to publish these essays as a book.
Chapter 1
Libraries
1. Introduction
A revolution is underway. It is inevitably linked with computers, with Internet, Intranet
and now Extranet. Much of this is fueled by hype to the extent that one might need to
revise the saying from Scripture: many are claimed, but few are chosen (to work). There
are many extremes. Some see these developments as a new panacea, acting as
technophiles driven to techno-lust. Some have gained fame by decrying the so-called
Silicon snake oil3, while others raise questions whether we can ever afford the process.
With respect to libraries some predict that digitizing collections will make them obsolete.
This paper takes a different view. It begins with some anecdotal ball park figures to
provide some idea of the magnitude of the changes at hand. A main thrust of the paper
outlines ways in which new electronic media can open up new roles for libraries and new
relations to museums and education. It challenges an assumption popular in political
circles that libraries and museums should be entirely privatized and would be more
efficient if they were run as businesses. Some fundamental differences between culture
and business are analysed. Some dangers and possibilities are explored.
2. Ball Park Figures
A few simple statistics4 provided by Linda Kempster may help to give some idea of the
magnitude of the phenomenon. Most of us are familiar with the basic terms of electronic
storage (fig.1). It is useful to relate these seemingly abstract concepts to more concrete
facts. One megabyte of disk space equals approximately .0423 of a tree worth of paper.
One gigabyte equals 42.3 trees worth of paper. One terabyte equals 42,300 trees worth of
paper. To take a slightly different measuring stick, there are roughly 499 megabytes in a
file cabinet full of paper. It takes 2.1 terabytes or 4,286 file cabinets to fill one football
field. As of 1996 there are an estimated 250,000 terabytes of information on-line, or
11,904 football fields full of file cabinets. By 2000, i.e. within four years, the amount
of on-line data is expected to increase to 600 petabytes, the
equivalent of 28,569,600 football fields full of file cabinets. We are told, however, that
this on-line material will represent only five percent of the material which has actually been
scanned. Hence, within four years there will be 12,000 petabytes in digital form, which
amounts to roughly 571,392,000 football fields of file cabinets or 507,600,000,000 trees
worth of paper (fig. 2).
1000 bytes     = 1 kilobyte
1000 kilobytes = 1 megabyte
1000 megabytes = 1 gigabyte
1000 gigabytes = 1 terabyte
1000 terabytes = 1 petabyte
1000 petabytes = 1 exabyte

Figure 1. Basic terms of size in electronic storage.
Digital Amount       Physical Amount
1 megabyte           .0423 tree
1 gigabyte           42.3 trees
1 terabyte           42,300 trees
499 megabytes        1 file cabinet
2.1 terabytes        4,286 file cabinets
2.1 terabytes        1 football field of cabinets
250,000 terabytes    11,904 football fields of cabinets
600 petabytes        28,569,600 football fields of cabinets
12,000 petabytes     571,392,000 football fields of cabinets

Figure 2. Some basic relations between bytes, trees, file cabinets and football fields full
of file cabinets.
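The ratios in figures 1 and 2 chain together mechanically. A short sketch (using the decimal units of figure 1 and the per-megabyte constants quoted above; the function names are inventions for this sketch) reproduces the tree figure for the projected 12,000 petabytes:

```python
# Conversion constants quoted in the text (1996-era estimates).
TREES_PER_MEGABYTE = 0.0423        # one megabyte ~ .0423 of a tree worth of paper
MEGABYTES_PER_CABINET = 499        # one file cabinet holds ~499 megabytes

MEGABYTES_PER_PETABYTE = 1000 ** 3  # decimal prefixes, as in figure 1

def petabytes_to_trees(pb):
    """Trees worth of paper equivalent to pb petabytes of data."""
    return pb * MEGABYTES_PER_PETABYTE * TREES_PER_MEGABYTE

def petabytes_to_cabinets(pb):
    """File cabinets of paper equivalent to pb petabytes of data."""
    return pb * MEGABYTES_PER_PETABYTE / MEGABYTES_PER_CABINET

# The 12,000 petabytes projected to exist in digital form by 2000:
print(f"{petabytes_to_trees(12_000):,.0f} trees")   # ~507,600,000,000 trees, as quoted above
```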
Storage capacities are expanding enormously. In the 1950s, IBM’s RAMAC stored 4.4
megabytes per unit and 50 such units could be stored together. At that time 220
megabytes represented the frontier. Four decades later, many of today’s desktops are
beginning with a gigabyte, i.e. more than four times that capacity, and two gigabyte discs
are quite common. Such progress has not quite kept pace with the hype. It is sobering to
remember that full motion video in uncompressed form requires 1 gigabyte per minute
and that the 83 minutes of Snow White digitized in full colour amount to 15 terabytes of
space. Fortunately new technologies are underway. Holograms, sugar cube storage and
ion etching offer a range of new possibilities. Some basic statistics concerning their
capacities are listed in figure 3.
Developments at CERN (Geneva) provide some idea of the immensity of new
information requirements. Their new Large Hadron Collider (LHC) will entail 1 billion
collisions of protons per second. One megabyte is required to record each collision,
which means that 1 petabyte of storage space per second is required, although ultimately
only 100 of these collisions per second may prove of lasting interest. Using traditional
methods this would require 1.6 million CD-ROMs annually, amounting to a pile that is 3
kilometers high.5
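The arithmetic behind these LHC figures is easy to restate (a sketch using only the quantities quoted above; the seconds-per-year value and the derived annual figure are assumptions of the sketch, not numbers from the text):

```python
# Quantities quoted in the text.
COLLISIONS_PER_SECOND = 1_000_000_000  # ~1 billion proton collisions per second
MEGABYTES_PER_COLLISION = 1            # ~1 megabyte to record each collision
KEPT_COLLISIONS_PER_SECOND = 100       # only ~100 per second prove of lasting interest

MB_PER_PETABYTE = 1000 ** 3

# Raw recording rate: 1e9 MB/s, i.e. one petabyte per second.
raw_pb_per_second = COLLISIONS_PER_SECOND * MEGABYTES_PER_COLLISION / MB_PER_PETABYTE
print(raw_pb_per_second)   # 1.0

# Even the filtered stream is substantial over a year of continuous running.
SECONDS_PER_YEAR = 365 * 24 * 3600
kept_tb_per_year = (KEPT_COLLISIONS_PER_SECOND * MEGABYTES_PER_COLLISION
                    * SECONDS_PER_YEAR) / 1000 ** 2
print(round(kept_tb_per_year))   # ~3154 terabytes per year of kept data
```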
3. Libraries
These developments are transforming our libraries in some obvious ways: cataloguing
practice, on-line file cards and catalogues, interlibrary loan, and full text retrieval. They
are also changing the roles of a library and librarians. Some speak of this as information
ecology.6
Magnetic hard disk storage   375 megabytes per square inch
Holograms                    125 gigabytes per cubic inch
Sugar cube storage           125 gigabytes per cubic centimeter
10 micron ion etching        50 terabytes per cubic inch
3 micron ion etching         190 terabytes per cubic inch

Figure 3. Some new technologies and their storage capacities.
Cataloguing
In the past there were basic cataloguing rules such as Anglo-American or the Prussian
Instructions which were then interpreted differently by each local library in cataloguing
its own collection. Typically a scholarly institution such as the British Library would
provide detailed records noting peculiarities in the individual copy, while a small public
library might opt for a minimal description of a given book. Some sense of this variety is
provided by the RLIN system, which allows libraries to provide alternative entries if they
wish. The advent of on-line catalogues has, however, introduced a quite different trend.
Once a book has been catalogued by a national or major library most other libraries
simply adopt that format rather than providing their own independent entry for that book
or title. This has the great advantage of establishing a sense of uniformity and
standardisation across the system. Unless the individual variants of names and titles of
books are explicitly kept, these means of access are lost in the process.7
On-Line File Cards and Catalogues
This automation process has affected users as much as cataloguers. By the 1970's it
became the fashion to automate library cards. The University of Toronto Library
Automated System (UTLAS, now part of AutoGraphics Canada8) effectively became one
of the first automated National Union Catalogues and now has approximately 65
gigabytes of data. In Washington, the Library of Congress established a Machine
Readable Cataloguing (MARC) format. This was adopted by the Research Libraries Information
Network (RLIN), now known as the Research Libraries Group (RLG), which, in the past
few years, has expanded the scope of the MARC format to include archival materials, art
(paintings, architecture) and museum objects. RLG has over 100 million records.
Significantly the RLG network now includes a number of the major European research
libraries and is adding over a million titles from Europe each year. The MARC format is
also used by the Online Computer Library Center (OCLC, a network linking
universities mainly in the United States), which now has over 30 million records online.
In the United States, there are also regional networks such as the Washington Library
Network (WLN) on the west coast and those linking multi-campus universities, notably in
California (the MELVYL System) and Colorado (CARL).
In addition, the Library of Congress and the National Library of Canada have
championed the use of a protocol for the interchange of electronic information (Z39.50,
see below p. 105), a standard based on the exchange of original MARC records which
has been adopted by over 200 libraries, mainly in the United States and Europe, for
their World Wide Web sites. This protocol has had a considerable impact on the museum
world through organizations such as Computer Interchange of Museum Information
(CIMI). While criticized by some for the heavy programming effort it demands, Z39.50
is destined to become more important because IBM and others have been working
on GIS extensions to this basic format.
In Europe, two alternative approaches are emerging: one fee based, the other to provide
universal Online Public Access Catalogues (OPACS). In Britain, for example, a fee based
system originating from the British Library (BLAISE) dominates the scene. In France, a
project for a national PanCatalogue is underway which will require a subscription. The
Netherlands and sections of Northern Germany are connected by the Pica System, which
is subscription based. Meanwhile, other parts of Germany such as Bavaria have their
regional catalogues accessible free of charge on the World Wide Web (www). This is
also the case with countries such as Austria and Norway which have an on-line national
catalogue freely available today. Sweden will soon be added to this list. In addition there
are hundreds of libraries on line via telnet at present, many of which are planning to
switch to www. At the European level there is an initiative to create a Gateway to
European National Libraries (GABRIEL). The European Commission is supporting a
number of initiatives which support these developments, notably the ONE project which
aims to provide a common interface for all the major European libraries.
At the world level the G7 countries have also included libraries as one of their eleven
pilot projects, namely, number 4: Bibliotheca Universalis. Thus far, this project headed
by France has focussed on standardising author names in the national libraries of Britain,
France, Belgium, Spain and Portugal. Meanwhile, Japan has its own approach to the
electronic libraries project and has been developing a prototype, which includes High
Definition Television (HDTV). As a result one can look at copies of books on high
resolution screens. Major corporations are also entering the field. IBM's Digital Library
Project offers a comprehensive approach to these problems. Xerox, through its research
facilities in Grenoble is developing an alternative set of solutions.
The consequences of this automation in file cards are already enormous, although it will
take years, perhaps decades before the full impact thereof is felt. In the past one was
restricted largely to the contents of the library in which one happened to work. The
spread of published library catalogues changed this somewhat but ironically these were
typically only available in the greatest libraries where there was already a great range of
books. The evolution of electronic catalogues which are standardized means that one can
now check the locations of a book from the comfortable location of one's personal (or
network) computer terminal in one's office, at a library or at home. One can search for
copies around the world while sitting at one's desk.
Access to on-line library catalogues is but one dimension of this process. National book
catalogues and publishers catalogues such as Books in Print are also becoming available
in electronic form. So users can interchangeably search for books and explore whether
they wish to buy them for their own collections.
Interlibrary Loan
In the past it took up to a year to order a book by interlibrary loan. As the networks
expand, interlibrary loan is increasingly being automated such that a user can enter their
identification number and order a book from their desktop.
Full Text Retrieval
Most of us are aware that full text versions of major works such as the Bible, Dante's
Divine Comedy or the Works of Shakespeare are already available on-line. Initiatives
such as the Gutenberg Project aim to make the major writings of western culture freely
accessible in electronic form. Less well known are the growing electronic repertoires. In
France, there is the database of French classics which has a mirror site in Chicago. In
Britain, there is the Oxford Text Archive which is linked with the Text Encoding
Initiative (TEI). In the United States a Coalition for Networked Information is speaking
of entering ten million books in full texts.
While some view such projects as futuristic, the Bibliothèque nationale de France is
presently engaged in scanning in 400,000 books in full text versions. IBM, through its
Digital Libraries Project, has scanned in 10 million images at the Edo Museum in
Tokyo, is scanning in 50,000 manuscripts at the Luther Library in Wittenberg and,
thanks to funding from Rio de Janeiro, has begun scanning in the full texts of the 150,000
manuscripts at the Vatican Library. At present there are only eight test sites in the world
for this particular project, including one in Canada, namely, the Perspective Unit. Even so
glimpses of where this is leading are already available. The Royal Libraries in the Hague
and Stockholm have each made 100 pages of illustrated manuscripts available. The
Vatican Library has made several hundred pages available through an on-line exhibition
at the Library of Congress while the Bibliothèque de France has made available 1000
pages of illustrated manuscripts on-line. The French examples are particularly striking
because they illustrate the potentials of tracing thematic developments such as papal
visits or royal coronations over time. Such potentials will be greatly enhanced as they are
co-ordinated with electronic versions of specialized classification systems such as
Iconclass or the Art and Architectural Thesaurus.
Roles of Libraries
Eventually all books now in manuscript and printed form will be translated into
electronic form and made available on-line. This process is analogous to that which took
place after Gutenberg introduced printing to the West, when everything had to be
translated from written to printed form. That process took nearly two centuries. No one
knows how long the electronic equivalent will take: much shorter or even longer? In a
sense it does not matter. Already now and increasingly in the future the roles of libraries
are changing as a consequence of these developments.
In the past, libraries were places for storing books but it was primarily their role as places
for reading books, which gained attention. In addition great libraries served as an
important meeting place for scholars. A drawback was that scholars had to travel long
distances to reach a major library and spend considerable resources while they lived in
the city in which the library was located. One of the motivations behind IBM's Digital
Libraries Project is to save scholars the cost of travel and accommodation by providing
them with manuscripts and published rare books on demand. If this model were pushed to
the limit, then libraries might in future be reduced to specialized storage houses.
There are several reasons why libraries are likely to remain important in spite of, or perhaps because of, digitization. Firstly, some aspects of books and manuscripts cannot be conveyed through electronic versions or even facsimiles: the manner in which a book is bound, its feel, whether it is well worn or almost untouched. While such aspects
can theoretically be replicated in holographic or three-dimensional laser images,
historians of the book and publishing will need continued access to the originals.
Second, although it is foreseen that there will be universal access to the Internet in
developed countries, it is generally assumed that this will entail relatively slow speed
connections. The notion of ATM at everyone's desktop is still a long way off and may not
happen at all. Meanwhile, experts have suggested that ATM or analogous high speed
connections will evolve in the context of service centres9.
Given the traditional role of libraries as focal points for their communities, they are
ideally suited to take on the role of such service centres. While it may be impractical to
connect every home with ATM, it is perfectly feasible to connect all the major libraries
and even the lesser libraries throughout the country. Figure 4 provides some idea of the
scope of such an enterprise. Linking 5060 institutions in a province with high speed
connections is simple compared to the challenge of trying to provide over 10 million
individuals with a direct high-speed connection. Connections within the institutions
might in turn be at different speeds. Public and university libraries might be at OC12
speed, whereas schools might be linked at T1 speed.
The manuscripts and rare books, which are presently being scanned in, are typically 30-50 megabytes per page. Paintings range from 50-100 megabytes at the low level to 1.4
gigabytes per square meter at the high level. On regular modems these would be entirely
unwieldy. On the other hand, reading rooms with high-speed connections would make
consultation of such works entirely feasible. Lecture rooms with high speed connections
would make feasible new kinds of on-line lectures. Such facilities would in turn serve to
revitalize the role of libraries as a focal point in the community. Students in schools could
go their local public and/or university libraries in order to consult not only books but also
the latest high level technologies.
Kind of Library       Number
Public                410 (with 1100 points of presence)
University            75
Elementary School     3700
Secondary School      775
Private School        c. 100
Total                 5060

Figure 4. Approximate number of libraries in the province of Ontario.
For example, Infobyte (Rome) is using virtual reality to reconstruct major historical sites
such as the entire Vatican complex. Thus far this includes only Saint Peter's Basilica, a
version of the historical Basilica, and most of the Stanze of Raphael. There are plans to
include the Sistine Chapel, Vatican Library and the Vatican Museums. As noted earlier
IBM is scanning in the full text of the manuscripts of the Vatican Library. Hence it will
be possible to walk through the Vatican in virtual reality, find a particular book and then
consult the contents of that book.
In future, other possibilities are feasible. The present arrangement of the library is recent.
The position of the collection has changed over the centuries. Using the old catalogues
and other documentary evidence, it is possible to reconstruct the former states of the
Vatican library and museums. Potential visitors could then experience the historical
development of this and other famous libraries in a simulation of time travel. To achieve
this will require a great deal of scholarly study and interpretation. As the technologies
become available it would therefore make sense to integrate such reconstructions within
the school curriculum with the high level versions being done at universities. This will
result in a whole new corpus of materials to collect, archive and to display, for which
libraries are naturally suited. Among the new activities for libraries in the future will be to showcase such reconstructions in a high-speed networked environment, such that readers can make cross-cultural comparisons on-line while at the same time having access in electronic form to more traditional forms of documentation, notably books, manuscripts and archival materials.
In the past, libraries were much more than collections of books. A major collection such
as that of the Duke of Lower Saxony (Herzog August Bibliothek, Wolfenbüttel)
contained books, prints, paintings and scientific instruments. Over time there was an
increasing specialization whereby each type of object found its way into separate
institutions, libraries for books only, drawing cabinets for drawings, art galleries for
paintings and history of science museums for scientific instruments. In the process it has
frequently been forgotten that all of these seemingly disparate objects are reflections of a
single culture. It has taken figures such as Lord Kenneth Clark, Jacob Bronowski and James Burke to remind us of those connections. The advent of networked systems introduces new possibilities for re-integrating these disparate strands. This raises new roles for libraries as venues for integrating the resources of other institutions such as museums and offering special resources for education and training.
4. Meta-Data
Recently there has been increasing attention to the term meta-data, which is often used as if it were a panacea, frequently by persons who have little idea precisely what the term means. In its simplest form, meta-data is data about data: a way of describing the
containers or the general headings of the contents rather than a list of all the contents as
such. Some of the interim measures listed above could be seen as efforts in this direction.
More specifically there are a number of serious efforts within the library world. (These
are discussed below at greater length in chapter 4). The Library of Congress is heading work on the Z39.50 protocol, designed to give inter-platform accessibility to library
materials. This is being adopted by the Gateway to European National Libraries
(GABRIEL) and the Computer Interchange of Museum Information (CIMI) group.
A number of meta-data projects are underway. For instance, the Defense Advanced Research Projects Agency (DARPA), in co-ordination with the National Science Foundation (NSF), NASA and Stanford University, is working on meta-data in conjunction with digital library projects. DARPA itself is working on Knowledge Query and Manipulation Language (KQML) and Knowledge Interchange Format (KIF). The Online Computer Library
Centre (OCLC) has led a series of developments in library meta-data (Dublin Core,
Warwick Framework). In essence these projects have chosen a core subset of the fields in
library catalogues and propose to use these as meta-data headers for access to the
complete records. An alternative strategy is being developed by the Institut für
Terminologie und angewandte Wissensforschung (Berlin). They foresee translating the
various library schemes such as the Anglo-American Cataloguing Rules and the Preussische Regeln into templates using Standard Generalized Markup Language (SGML). This approach will allow interoperability among the different systems without
the need for duplicate information through meta-data headers.
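The core-subset idea behind such meta-data headers can be illustrated with a small sketch. The field names below are standard Dublin Core elements; the record values and the rendering function are invented for illustration, and real systems embed such records in more elaborate ways.

```python
# A minimal sketch of a Dublin Core-style meta-data header for a
# library record. Only a subset of the fifteen core elements is shown;
# the values are illustrative, not an actual catalogue record.
record = {
    "title": "Frontiers in Conceptual Navigation for Cultural Heritage",
    "creator": "Veltman, Kim H.",
    "publisher": "Ontario Library Association",
    "date": "2000",
    "type": "Text",
    "language": "en",
}

def to_meta_tags(rec):
    """Render a record as HTML <meta> tags with a DC. prefix,
    in the manner of early Dublin Core embeddings in web pages."""
    return "\n".join(
        f'<meta name="DC.{field}" content="{value}">'
        for field, value in sorted(rec.items())
    )

print(to_meta_tags(record))
```

The point of such a header is that a remote search service can read these few fields without retrieving the complete catalogue record behind them.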
Each of the above initiatives is laudable and useful in its own right. They will all
contribute to easier access to materials and to efficiencies in that users can sometimes
rely on overviews, excerpts and abbreviations rather than needing to consult the whole
database in the first instance. But all of these remain short term solutions in that they do
not solve questions of how one determines variant names, places etc. Meanwhile some
members of the computer industry continue to argue that the troubles surrounding the
Internet are merely a passing phase; that although connectivity and search engines were initially too slow, as soon as these hindrances are resolved, all will be well. While
rhetorically attractive, such reassurances are not convincing for several reasons.
First, there is a simple question of efficiency. A local database may have only local
names. The name for which one is searching may only exist in specialized databases.
Going to a typical database does not guarantee finding the name. Going to all databases just to identify the name is highly inefficient. The same problem applies to subjects,
places, different chronological systems etc. It applies also to different media. If I am
looking for one particular medium such as video then it makes sense to look at sites with
video, but not all sites in the world. Searches to find anything, anywhere, anytime should
not require searching everything, everywhere, every time. As the number of on-line
materials grows apace with the number of users, the inefficiencies of this approach will
become ever greater.
A second reason is more fundamental. Even if computer power were infinite and one
could search everything, everywhere, every time, this would not solve the problems at
hand. Names of persons and places typically have variants. If I search for only one
variant the computer can only give me material on that variant. If, for example, I ask for
information about the city of Liège, the computer can at best be expected to find all
references to Liège. It has no way of knowing that this city is called Luik in Dutch,
Lüttich in German and Liegi in Italian. This is theoretically merely a matter of translation.
But if every place name has to be run through each of the 6,500 languages of the world
each time a query is made, it would be an enormous burden to the system. And it would
still not solve the problem of historical variants. For instance, Florence is known as
Firenze in modern Italian but was typically written as Fiorenza in the Renaissance. It
would be much more practical if every advanced search for a place name went through a
standard list of names with all accepted variants. Such a standardised list acting as a
universal gazetteer needs to be centralised.
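The gazetteer described above can be sketched as a simple data structure: one canonical entry per place, with every accepted variant (foreign-language or historical) indexed back to it. The entries here are illustrative examples from the text, not an actual authority file.

```python
# A toy sketch of a centralised gazetteer of place-name variants.
# Each canonical place maps to its accepted variants.
GAZETTEER = {
    "Liège": ["Liège", "Luik", "Lüttich", "Liegi"],
    "Florence": ["Florence", "Firenze", "Fiorenza"],
}

# Invert the lists once into a variant -> canonical index, so that a
# search for any variant resolves immediately, without running a name
# through thousands of languages or scanning every database.
VARIANT_INDEX = {
    variant: canonical
    for canonical, variants in GAZETTEER.items()
    for variant in variants
}

def resolve(name):
    """Return the canonical place name and all accepted variants."""
    canonical = VARIANT_INDEX.get(name)
    if canonical is None:
        return None, []
    return canonical, GAZETTEER[canonical]

print(resolve("Fiorenza"))  # the Renaissance spelling resolves to Florence
```

A search front-end would then expand a query for any one variant into a query for all of them, which is precisely the efficiency a centralised authority list provides. The same structure serves equally for variant names of authors and artists.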
The same basic principle applies to variant names of authors, artists etc. If I have only
one standard name, the computer finds that name but it can never hope to find all the
variants. Sometimes these variants will be somewhat predictable. Hence the name Michel
de France, will sometimes be listed under de France, sometimes under France, Michel
de. In other cases the variants are more mysterious. Jean Pélerin, for instance, is known
as Viator, which is a Latin equivalent of his name, but other variants include Le Viateur,
and Peregrinus. Neither simple translation nor even a fuzzy logic programme can be expected to come up with all the actual variants of names. What is needed is a central repository to ensure that these variants can be found efficiently. In the case of artists' names, for instance,
Thieme-Becker’s Allgemeine Künstler Lexikon offers a useful starting point, as do the
great library catalogues (e.g. National Union Catalogue, British Library and Bibliothèque
Nationale). These lists need to be collated to produce one authority list with all known
variants, much in the way that the Getty found it needed in the case of its (in house)
Union List of Artist Names (ULAN). The problem applies also to subjects,10 as anyone who has
tried to find things in foreign versions of the Yellow Pages will know. In Toronto, for
example, a person wishing to know about train schedules will find nothing under Trains,
but needs to look under Railroads. A person looking for a paid female companion will
find nothing under Geisha, Call Girl or Prostitute, but will find 41 pages under the
heading Escort Service.
Hence a fully distributed model for housing collections may be extremely attractive
because it means that museums, galleries and other cultural institutions can remain in
control of the databases and information pertaining to their own collections. The
disadvantage is that there are already hundreds and there will soon be tens of thousands
of individual repositories and if every user around the world has to visit all of these sites
for every search they do, this approach will become hopelessly inefficient.
An alternative is to link this distributed model of individual collections with a centralized
repository for meta-data. The basic idea behind such a repository is to use the methods
established by thousands of years of library experience as a general framework for
searching libraries, museums, galleries and other digitized collections. This centralized
meta-database will have three basic functions:
First, it serves as a master list of all names (who?), subjects (what?), places (where?),
calendars, events (when?), processes (how?) and explanations (why?). This master list
contains all typical variants and versions of a name, such that a person searching for
Vinci, Da Vinci or Leonardo da Vinci, will be directed to the same individual.
Second, this master list contains a high-level conceptual map of the parameters of all
major databases in cultural and other institutions. Hence, in the case mentioned above of
the user searching for Chinese art of the Han dynasty, the master list will identify which
databases are relevant. Recent initiatives in site mapping and content mapping will aid
this process.
Third, this master list of names and subjects is linked to indexes of terms (classification
systems), definitions (dictionaries), explanations (encyclopaedias), titles (bibliographies),
and partial contents (reviews, abstracts, and citation indexes). Thus this centralized
database effectively serves as a digital meta-reference room which links to distributed
contents in libraries, museums, galleries and other institutions. This process of contextualisation of otherwise disparate information enables the centralized source to act as a service centre in negotiating among distributed content sources.
Libraries have long ago discovered the importance of authority lists of names, places and
dates. In addition to the efforts of national libraries such as the Library of Congress and
the National Library of Canada, a number of international organizations have been
working in this direction during the past century. These include the International
Federation of Library Associations, Office Internationale de Bibliographie, Mundaneum,
the International Federation on Documentation (FID11), the Union of International Associations (UIA12), branches of the International Organization for Standardization (e.g. ISO TC 37, along with Infoterm13) as well as the joint efforts of UNESCO and the International
Council of Scientific Unions (ICSU) to create a World Science Information System
(UNISIST). Over 25 years ago, the UNISIST committee concluded that: “a world wide
network of scientific information services working in voluntary association was feasible
based on the evidence submitted to it that an increased level of cooperation is an
economic necessity”.14 Our recommendation is that this world-wide network should
include both cultural and scientific information.
As a first step one would combine the lists of names already available in RLIN, OCLC,
BLAISE, PICA, GABRIEL, with those of the Marburg Archive, the Allgemeine Künstler
Lexikon, Iconclass, the Getty holdings (ULAN, Thesaurus of Geographic Names), and
the lists owned by signatories of the MOU. This will lead to a future master list which is
essential for all serious attempts at a meta-data approach to cultural heritage and
knowledge in general. Because such a list represents a collective public good it is
important that it should be placed in safekeeping with UNESCO. Senior officials at
UNESCO already support this idea. It would make sense to link this list with related
bodies such as UNISIST or ICSU. A series of copies will be replicated in various centres
around the world.
The basic framework for such a digital reference room might come under the combined
purview of the European Commission’s Memorandum of Understanding in its next phase
and the G8 pilot projects 5 (Multimedia Access to World Cultural Heritage) and 4
(Bibliotheca Universalis). A series of national projects can then add country specific
information. These national projects can be organized by consortia of industry and
government. By contributing lists from a given country, that country receives access to
the centralized meta-data base. An outline of the structure is provided in Appendix 1.
5. Museums, Galleries, Drawing and Engraving Cabinets
As in the case of libraries, initiatives are underway to produce electronic images of the art
and artifacts in galleries and museums. At the local level many museums already have
projects to make some or all of their collections available on-line over the Internet. A
number of collections are already available in electronic form including: the National
Galleries of Canada, Britain and the United States, the Louvre in Paris, and the Uffizi in
Florence. At a national level the Canadian Heritage Information Network (CHIN,
founded in 1974) was the first electronic network connecting all the major museums of a
country. At the European level the European Commission has produced a Memorandum
of Understanding (MOU) concerning Multi-Media Access to Europe's Cultural Heritage,
the signatories of which now include 282 museums and cultural institutions, 25
governments and regional government organisations, 10 communications
service/software companies, 2 telecom/CATV operators, 22 IT-telecom equipment
companies, 22 new media companies and 24 non-governmental organisations. A stated
aim of the MOU is that fifty percent of the collections of the museums concerned will be
available in electronic form by the year 2000. The MOU is leading to the MEDICI
framework in October 1998.15
Meanwhile at the G7 level, one of the eleven pilot projects has been dedicated to this
theme, namely, pilot project 5: Multimedia Access to World Cultural Heritage. An initial
presentation at the Information Society and Developing Countries (ISAD) Conference
(Midrand, May 1996) included sections on methods to capture (the 3D laser camera of
the National Research Council of Canada), archive (the integrated multimedia system of
the Museum for the History of Science in Florence), display (the virtual reality
reconstruction of the tomb of Nefertari by Infobyte/ENEL in Rome) and navigate (the
System for Universal Media Searching, SUMS, Toronto). Through working group one of
the MOU there is a framework for co-operation between initiatives at the European level
and those of G7.
6. Education and Training
In the past the educational resources available to a student depended almost entirely on
the location of their school. Someone in a major city with great libraries and museums
had available to them very different resources than a student in an isolated rural village.
When the resources in great museums and libraries are on-line, students in both cities and
rural areas can have access to the same cultural heritage.
There are many initiatives in this direction. Individual schools are becoming connected.
In Canada, Schoolnet has a project which foresees all schools throughout the country
being on line. Analogous albeit less comprehensive projects exist in a number of
countries. The European Commission, in collaboration with the Pegasus Foundation, has
begun a project which makes cultural monuments available on-line within the framework
of schools. Plans exist to integrate materials from libraries, galleries and museums within
this framework. At the global level, G7 pilot project 3 focusses specifically on education.
Thus far such educational initiatives have focussed on translating traditional resources
into electronic form such that they can be shared on the Internet. These resources include
traditional lesson plans, curricula, outcomes and exams. A next challenge will lie in co-ordinating and integrating these materials such that one can relate items on an exam to a
specific text, course, curriculum and the corpus of knowledge in that field. This will have
two fundamental consequences. It will expose students and teachers alike to an
immensely greater corpus of materials from which to learn. It will also re-contextualize
this existing knowledge.
Ironically there has been relatively little attention as to how the new technologies will
introduce new methods and activities to learning. Some educators point to e-mail and
collaborative learning groups but this is almost incidental compared to that which is now
possible. For our purposes, a few examples may suffice. In the past students learned
about the religious revivals in the Middle Ages which brought the development of Romanesque and Gothic art. Already today it is technically possible to consult a map of Europe and watch how these new movements spread geographically over time,
influenced in no small part by the existing pilgrimage routes. A student thus sees how and
why the cathedrals at Cologne in Germany and Burgos in Spain are related. They can
choose a motif, such as the Last Judgement and trace its evolution on the tympanums
over the west portals of churches. Or they can choose a theme such as Lives of the Saints
and trace the evolution of major narrative cycles over the centuries, while being able to
see these in context as they wish and to consult written sources such as Voragine's
Golden Legend (Legenda Aurea) as appropriate. Or there is the brilliant example of the
engineering students at the Technical University in Dresden. During the Second World War, the famous Church of Our Lady (Frauenkirche) was reduced to a rubble heap of stones.
the late 1980's and early 90's students recorded photogrammetrically and numbered every
stone in this heap. Each stone was then recreated using a Computer Aided Design (CAD)
programme, then reassembled in order to produce a complete re-construction in virtual
reality which is being used in turn as a basis for rebuilding the original church. This
project, accomplished in conjunction with IBM, was a star attraction of the 1994 CeBIT exhibition in Hanover.
Some of these connections have been well established and merely need to be presented in
multi-media form. Many such connections have never been made because the material
has been too widely dispersed: one wing of a triptych may be in its original place, while
a second wing is in some European museum and a third wing is in an American or
Japanese museum. Creating reconstructions of these physically dispersed art works could
become a task for students.
In other cases the evidence is lacking, equivocal, or in any case open to multiple
interpretations. Students would then produce alternative reconstructions or simulations of
what a former work of art, monument or church might have looked like. Such efforts
could begin in elementary school and become progressively more complex through
secondary school, university and throughout a scholar's professional career. Not every product of these exercises will be memorable, just as not every child's school notes are obvious archival material. Resource centres in elementary schools and libraries in other institutions of learning would have the challenge of sifting and collecting the better examples and making these generally accessible. Just as galleries
now sometimes feature the work of children, galleries, museums and libraries could
feature multi-media products from all levels of the educational system. In so doing
libraries would become repositories for new as well as old knowledge.
7. Public vs. Private
Throughout the Middle Ages libraries and museums were typically the outgrowth of
personal collections by noblemen. In the Renaissance, rich bankers such as the Medici
and the Fuggers and other merchants also amassed collections which often became the
property of the city state or province. In the nineteenth century the growth of national
libraries, often inspired directly by Panizzi's model at the British Museum, introduced
new levels in the universality of collections. It was assumed, quite rightly some would
insist even today, that such public collections could offer a wealth of cultural heritage far
beyond the scope of even the richest patron. Implicit in this assumption of a public heritage, open to be enjoyed by all, was the principle that everyone would contribute in a small way to its upkeep and expansion by way of tax monies.
Over the past decades, as business interests have continued to gain power, the business model has been extended to many other areas. It is assumed that business operates efficiently and
therefore other bodies would be more efficient if they were run along the lines of a
business. Public structures, the rhetoric goes, would be more effective if only they could
be privatized. According to this reasoning, the same should hold true for culture, i.e.
culture would be more effective if it were linked with business. For some this means
simply that culture should look to business for sponsorship. Others would go further to
argue that business is the only model for success, and that culture should therefore
emulate the supposed efficiencies of business.
Some assume that the aims of business and culture are effectively synonymous and speak
calmly about the business of culture, the cultural industries etc. In so doing they overlook
some fundamental differences between business and culture. Business is concerned with
selling. Culture is concerned with collecting. If a business collects too many things its
warehouse overhead becomes prohibitive and the business fails. This model, if applied to
cultural institutions, would be disastrous. Imagine the Library of Congress, the British Museum, the Louvre or the Vatican putting all their items up for sale. No doubt they would
sell well and, yes, they would receive a great deal of money. But what would happen the
next day? No tourists would wish to come to an empty museum. No readers would come
to an empty library. The institutions would have lost their raison d'être. The great
libraries and museums are valuable and significant only to the extent that they do not sell
their contents.
This is not to say, of course, that libraries and museums should have no business sense. It
is fully reasonable for them to sell reproductions of their images in various sizes from
postcards through to large posters. In an electronic network they can make low level
versions of their collections available free of charge for study purposes and then provide
postcards on demand at a charge. Given the latest developments in stereolithography
there can even be sculpture on demand. Hence, libraries and museums should not have
business as their primary and central concern. Yet they can very reasonably have
ancillary business activities.
In short, the efficiencies of libraries and museums are basically different from those of
business because their goals are fundamentally different. Business is concerned with
turnover of goods and amassing of wealth. Libraries and museums are concerned with
amassing of goods in the form of precious books, paintings, objects etc. The time frames
are also very different. Business is judged in terms of short term gains, how much profit
they make in a specific quarter or a given year. Cultural institutions are judged on the
amount they collect in the course of a century or even a millennium.
From a narrow business viewpoint such cultural institutions are inevitably non-profitable. The better they are, the more they collect, and to collect they need to spend money. From a
larger viewpoint these same institutions can nonetheless contribute greatly to business
interests by indirect means. A great collection attracts tourists who spend money on
hotels, restaurants, shopping, transportation etc. A great collection also brings a public
good because it raises the cultural dimensions of a city. For these reasons, it is simplistic
to speak of culture as business as if the two were synonymous. It is also misleading and
wrong to assume that long-term public interests will be more efficient if they are reduced
to the short term aims of business in the narrow sense of making money.
8. National-Local-International
A failure to distinguish clearly between long term public interests and short term private
goals helps to explain some important political trends of the past decade. Rhetoric would
have us believe that governments, which have traditionally been the supporters of cultural institutions and education, will be more efficient if they are privatized and their various assets run as businesses. In response to this rhetoric, federal governments typically
privatize important national assets. In Britain, this has included not only industries such
as telecommunications, coal and hydro but even national research laboratories, with the
result that long term core research has almost entirely been destroyed. Elsewhere, there
has also been a tendency to transfer federal responsibilities to provinces, who in turn try
to shift them to municipalities. The challenge is to recognize that the whole gamut of interests has economies of scale. Some local libraries and museums are just that; others
play an integral part in defining our sense of what our country is. To sell our National
Gallery would diminish our sense of Canada. To sell the Louvre would diminish our
concept of France.
Culture, Janus like, has two seemingly contradictory expressions. One is in the present, in
capturing the genius of the passing moment which finds expression in the performing
arts: music (from the bravura of a soloist to the range of the symphony), ballet, opera and
theatre. This is the time bound side of culture. The other is in finding links with lasting
values, which are unaffected by the passing whims and fashions of the day, which
somehow link us to the eternal. This is the role of libraries and museums. The physical
books and objects are only the surface of these institutions. Libraries and museums house
a community's, a country's spiritual memories and achievements. They store, protect and
nurture a cumulative awareness of the possible. As such they are also the centres of the
imagination and hopes of a country. By showing us our past in ever new ways in the
present they effectively shape the potentials of the future.
Governments and countries are not unlike the libraries and museums they support. They
can sell off their assets and have excellent profits for a given quarter but in so doing they
destroy their long term value. They become like libraries without books or museums
without paintings.
Ironically, leading individuals in the business world have recognized that the narrow business viewpoint of maximal profits in the short term is counter to long term business interests16. They have recognized that unless companies take into account environmental issues such as global warming, they may not be viable in a few decades' time. Yet politicians are seeking to impose on governments and cultural institutions such as libraries and museums a narrow rhetorical model of business which has been dismissed by the leaders of industry.
Paradoxically this shift from national to regional and local village concerns has been
paralleled by a contrary movement to consolidate local municipalities into megacities or megalopolises. This dual trend has only gradually come into focus. A generation ago
Marshall McLuhan coined the phrase global village to characterize the phenomenon of
how television was connecting persons all around the world. The implication seemed to
be that new media such as television would introduce a general homogenization of
experience. More recently analysts such as Barber have noted that computers are
introducing a more complex picture. On the one hand, there is still a trend towards
internationalisation, which Barber characterizes with the phrase McWorld. On the other
hand, there is an opposite trend towards what used to be called balkanization, a tendency
to focus on one's local realities as if they were all that existed, which Barber terms jihad
to underscore how it is frequently linked with religious fanaticism. In the past it was easy
for persons in a given village to imagine that their problems were unique. As mass media
make us aware of commonalities with persons in villages throughout the world, there is a
greater incentive to re-examine and redefine the particular characteristics of our village in
order to re-assert its uniqueness. Thus globalization simultaneously teaches us how we
are the same and challenges us to discover wherein our uniqueness actually lies.
9. Dangers
This twin tendency towards internationalisation (McWorld) and localization (jihad)
points to one of the greatest dangers of the emerging global network of information. The
same network which could bring to light the magnificent richness and diversity of world
cultural heritage in libraries and museums could serve only to highlight certain narrow
stereotypes, as if all Iraqis were as CNN portrayed them in the Gulf War and so on.
Librarians and museum curators must take great care to ensure that the new media
convey the riches of their collections and not simply their symbolic pieces such as
Leonardo's Mona Lisa in the Louvre and Botticelli's Venus in the Uffizi. Most museums
have 95% of their collections in their basements and storage rooms. It is such riches
which the Internet needs to make visible.
Closely connected with this is the problem of standards. If libraries, museums and other
content holders are to share records, they need common standards. However, this
quest for standards that ensure interoperability holds within it the danger of a common
plateau that is less rich than the sum of its parts. The standards must ensure access to the
diversity and not just the lowest common denominator.
There is great concern as to who will pay for all these developments. In a climate of
decreasing funding, museums and galleries need all their existing budgets simply to
continue their traditional responsibilities of buying, collecting and preserving books,
paintings and other objects. They cannot afford to pay for scanning images, creating
databanks and maintaining servers. It is assumed that industry will somehow help with
this process. This is an excellent idea. Libraries and museums need the advice and co-operation of computer and telecommunications companies to arrive at the most suitable
technical solutions to their needs. On the other hand, if industry is allowed a completely
unfettered hand, there is a great danger that they will treat this strictly from the viewpoint
of short term profit. This has already occurred in some of the early experiments (e.g.
France Télécom, under the guise of its subsidiary Télésystèmes) which would have led to
excessive user fees, greatly widened the gulf between the rich and the poor and ultimately
destroyed the concept of public institutions existing as a reflection of national interest. In
some countries it is striking to find ministries of Industry more active in these fields than
ministries of Culture. Culture is seen more as a possible inroad for e-commerce, than as
an inherent set of traditions, which must be defended in the interests of national
awareness and possibly even the survival of such a national awareness. Here the French
notion of culture linked with inalienable rights goes deeper than some American
tendencies.
The need for co-operation between vendors of technology and users in libraries is
rendered more complex by other factors. Salespersons frequently do not understand the
long term needs of these institutions and even when they do, they may find that these
long term needs conflict directly with their short term sales goals. An individual selling
storage technologies, for example, will want to sell new technological solutions as
frequently as possible, rather than providing a permanent solution. Some become so
caught up with this mentality that they look upon the records themselves as short term
disposables. This may sound an exaggeration. At a recent imaging conference an
international representative from Kodak cited the case of a U.S. lawyer who had used the
evidence of a company's archives to sue the company, in order to recommend that
companies should destroy their records as soon as is legally permitted. When the present
author pointed out that if this mentality had been succcessful in the Renaissance there
would be no studies of that period today, the salesperson was visibily surprised and
22
annoyed. He had not considered that his sales pitch was destroying the sources of future
historians, another case of conflicts between short-term business sales and long term
cultural accumulation.
10. Possibilities
The possibilities posed by the new electronic media are nearly boundless. Traditional
libraries involve real books, which have to be organized physically using a single
classification scheme. Electronic libraries can have multiple classification schemes as
points of entry or access to the virtual records of those books. Each classification scheme
serves as a mental map of a particular culture at a given time. So multiple classification
schemes become a method of representing multiple mind-sets, multiple entry points into
what the French have termed histoire des mentalités.
As noted above, electronic libraries can be rearranged to reflect different stages in the
evolution of a major collection. One can also create virtual libraries and museums in the
sense of collections which do not or perhaps could not exist physically in one place, for
example, the complete works of a genius such as Leonardo, combined with all the
secondary literature thereon, and all the paintings and objects somehow connected with
that person now scattered around the world. Of great importance here is the integration of
library and museum materials, reconstructions of actual physical objects and plans within
a single framework.
At the annual conference of the International Institute of Communications (Munich,
October 1996), there was a special panel on the future of public broadcasting. A chief
from Nigeria noted that outside the major cities, where it was impossible to have
telephones and televisions in every home, the tribal hut of the village chief, which had
these technologies, served as a meeting place for the community. The same hut also
served as the village library. At that same meeting, it was striking that Michael McEwen,
representing the Canadian Broadcasting Corporation (CBC), defended his institution on the
grounds of its role as a public meeting place which helps create a national sense of
community. For the reasons outlined earlier we would argue that libraries and museums
are also integral elements in creating this national sense of community. They reveal to us
more than any propaganda, the enduring values of a culture, expressions of the spirit
which have stood the test of time. Electronic versions of libraries and museums should
not, must not replace the physical sources on which they are based but they can give to a
widespread populace a coherent vision of an otherwise scattered heritage.
In a sense libraries and museums are the repositories of the collective memory of a
culture. Just as individual memory is the richer if it is refreshed, this collective memory is
the richer if libraries and museums become centres for its ongoing interpretation and
reappraisal. In this way they become more than records of past deeds. They become the
source for present discussion about future directions, hopes, and dreams.
11. Conclusions
This paper began with an outline of recent developments in technology and their
consequences for libraries. Some obvious effects, such as automated cataloguing and on-line
inter-library loan, were mentioned in passing. The main thrust of the paper explored
ways in which the new technologies are changing and will eventually transform the roles
of libraries. The possibilities of meta data were outlined. It was noted that some see these
new roles of libraries and museums in terms of a business model, as if this were the key
to their future efficiency. The pitfalls of this view were analysed. Fundamental
differences in the goals of business and libraries were noted. Some dangers and
possibilities posed by the new technologies were outlined.
Libraries and museums are much more than storehouses of physical books and objects.
They serve as centres which collect and nourish the collective conscious and unconscious
memory of a country variously described as our heritage or culture. For this reason they
play an essential role as centres of community which is more than community centres in
the usual sense. The new technologies will make aspects of this heritage accessible on-line such that it can be shared by individuals throughout the country and not only in the
large centres. Yet paradoxically, because the most dramatic new technologies, such as
virtual reality, are so expensive, these products will need to be limited to specialized
institutions such as libraries and museums and thus provide a new foundation for their
role as centres of community.
Chapter 2
Digital Reference Rooms
0) Background
G8 pilot project 5 is devoted to Multimedia Access to World Cultural Heritage. It began
with a narrow focus on the leading industrial countries, which was then greatly expanded
through the Information Society and Developing Countries (ISAD) conference in
Midrand (May 1996) by including 42 countries from all over the world. The European
Commission’s Memorandum of Understanding (MOU) concerns Multimedia Access to
Europe’s Cultural Heritage. These two projects began quite separately because they were
very different in scope, one global, the other regional. In the interests of efficiency,
discussions in the past year have turned to greater co-operation between the global efforts
of G8 and those of the European Commission. At the first Milan Congress on Cultural
Heritage (September 1996), the author outlined some subjunctive possibilities for such a
co-operative framework between G8 and the Commission.17
More recently the European Commission hosted a meeting in Brussels (June 1997) to
discuss the future of its Memorandum of Understanding. Several speakers noted the
desirability of closer co-operation between G8 and the MOU. Among them was the
author, who outlined nine needs and challenges which could further this goal: 1) an open
distributed processing environment (such as TINA); 2) open demo rooms serving as
prototype service centres; 3) toolboxes; 4) strategies for digitizing museum content; 5)
library connections; 6) applications to education; 7) a meta-data reference “room”; 8)
search, access and navigation interfaces (such as SUMS and SUMMA); and 9) a self-learning environment.18
A subsequent smaller meeting, sponsored by the Commission, under the auspices of
Arenotech and the French Ministry of Culture in Paris (July 1997), led to a draft
statement concerning Common Goals of G7 Pilot Project 5 and MOU which was
submitted to the steering committee for approval on 23 September 1997 and approved.
This draft calls for “scenarios for use of new technologies in public sector areas such as
education and commercial applications” and foresees two steps: 1) increased co-ordination
with other R&D projects of the European Commission and 2) open demo
rooms (as outlined in need two above) which can serve as information centres and
prototype service centres for museums. It is proposed that:
In the first instance these pilot centres will be based in representative cities of the
G8 and connected at an ATM level using emerging global standards. These pilot
centres will then be connected with practical trials being initiated by the telephone
companies to demonstrate applicability with real users. A next phase will connect
these centres directly with museums and other public institutions.
To make the connectivity between the demo rooms a reality requires co-operation
between major telephone companies which, in other contexts, may be in competition with
one another. Fortunately a few projects within the ACTS programme have laid the
foundations for the connectivity required. For example, the MUSIST project links Italy’s
Telecom Italia and Italtel with Germany’s Deutsche Telekom. The VISEUM project is
working at links between Germany and Canada: i.e. Deutsche Telekom, Teleglobe and
Bell Canada. These connections thus provide stepping stones for a proposed initial link
between Rome and Toronto and/or Ottawa. As the museums section of the Trans
European Networks (TEN) programme, MOSAIC is an obvious choice for co-ordinating
such efforts.
As a first concrete step it was suggested that there might be two demo rooms linking Italy
(Rome) and Canada by an Asynchronous Transfer Mode (ATM) connection. These
rooms would be linked to Asymmetric Digital Subscriber Line (ADSL) trials of two
telephone companies, namely, Telecom Italia and Bell (Medialinx). A preliminary
meeting on 17 September, 1997, hosted by the Canadian Embassy, focussed specifically
on issues of connectivity in linking Italy and Canada and on commercial dimensions.
The Ministero dei Beni Culturali is exploring these possibilities and is examining related
issues of cultural content, a more formal governmental framework and an infrastructure
for handling of rights management (copyright, smart cards etc.).
A second phase will add three more centres in Berlin, Paris and London. To this end,
projects such as AQUARELLE and VASARI offer obvious starting points, with others such
as MENHIR and the Canadian project AMUSE as further partners. A third phase will
expand the range of the centres to include the other G8 cities, namely, Washington,
Tokyo and Moscow. As these initiatives unfold it is foreseen that both the Ministero dei
Beni Culturali and the Commission will include a number of other projects within this
joint framework, those with clearly international implications still under the aegis of G8,
while others remain under the aegis of the MOU. While the co-operation will bring a
sharing of resources and goals, the two organisations will, nonetheless, continue in an
independent and inter-dependent fashion.
As was noted in the Brussels meeting, such prototype service centres represent but one
of nine challenges which need to be addressed as the Memorandum of Understanding and
the Commission in general move into a next phase. Rather than attempt to explore all of
these, this paper focusses specifically on the seventh of the above challenges, namely, the
need for a meta-data reference room. This idea is one of the most wide-reaching and will
require considerable co-ordination among cultural institutions of many kinds. Taken
together with the other challenges it offers a long term goal for the Commission’s MOU
which is consonant with the broader aims of G8 and at the same time answers a recent
call for a coherent cultural policy for Europe by the Council of Ministers of Culture (30
June 1997). Only a carefully planned long term solution will protect us from the fate
foreseen by the doomsayers, who predict that we will drown in excess information as in a
second flood, rather than benefiting from the positive visions of an information society.
1) Introduction
Models for knowledge organization have tended towards two extremes of a spectrum. At
one extreme, traditionally, there has been a vision of centralized contents. At the other
extreme, more recently, the rise of the Internet has favoured a model whereby everything
is distributed. A number of recent developments in meta-data reflect such a model. This
paper calls for a new intermediary model which links centralized meta-data with
distributed contents.
2) Centralised versus Distributed Contents
At least since the time of the library at Alexandria there has been a dream of collecting
the whole of human knowledge within a single enormous library. Panizzi revived this
idea in the nineteenth century with the library of the British Museum, which soon
became a model for national libraries throughout the world. While such libraries have
many advantages as major repositories of knowledge, they suffer from one major flaw:
the number of new books grows more rapidly than the space available to house them.
Recent experiences with the new buildings of the British Library and the Bibliothèque
nationale de France confirm this. Both buildings will be too small to house all aspects of their
collections even before they are fully operational. Hence while the quest to have
everything under a single roof is noble, it is simply not practical.
As an interim measure, major libraries moved their extra books to nearby buildings or
elsewhere. In the case of the British Library, by the 1970s some of these depots were so
far away that it took as long as a week for a book to be moved from the remote location
to a reader's desk in the main library.
The advent of the Internet seemed to promise a solution to such problems. In theory one
could digitise titles and contents of books on any site and connect them via a network,
thus leading to a completely distributed system. Two factors combined to cloud this
picture. First, in terms of content providers, in addition to libraries and professional
institutions, many individuals without training in information management placed their
materials on the Internet. Second, on the user side, many of those searching for
information had no clear ideas about how to ask questions. As a result the distributed
model typically produced enormous amounts of general responses but seldom precise
answers.
3) Interim Measures
To deal with the chaotic state of information distributed throughout the net, a number of
initiatives are underway. These include: i) domain names; ii) MIME types; iii) site
mapping; iv) content mapping; v) abstracts; vi) rating systems; vii) content negotiation;
and viii) agents.
i) Domain Names, URL, URN, and URI
Present search tools typically rely on the domain names or the Uniform Resource
Locators (URLs) to find things. The Internet Society has formed a consortium which will
greatly expand the number of top-level domain names (e.g. com, gov, edu) such that
these can be linked with country codes to provide search strategies by topic and region.
Meanwhile, the W3 Consortium is working on Uniform Resource Names (URNs) and
Uniform Resource Identifiers (URIs) which will complement existing URLs and provide
more subtle versions of the above. They are also working on meta-data tags to be added
to the next generation of Hypertext Markup Language (HTML), called dynamic HTML,
and a new subset of Standard Generalized Markup Language (SGML), called Extensible
Markup Language (XML).
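The strategy of linking domain names with country codes can be illustrated with a minimal Python sketch. The URLs below are invented examples, not actual sites; the point is simply that restricting a search by domain suffix lets one select a region before fetching any content.

```python
# Hypothetical sketch: narrowing a search by region using domain suffixes.
# The URLs in this list are invented for illustration only.
from urllib.parse import urlparse

URLS = [
    "http://www.example-louvre.fr/",
    "http://www.example-beniculturali.it/",
    "http://www.example-museums.uk/",
    "http://www.example-library.ca/",
]

def matches(url, suffix):
    """Return True if the host of `url` falls under the given domain suffix."""
    host = urlparse(url).hostname or ""
    return host == suffix or host.endswith("." + suffix)

# Restrict a query to Italian (.it) sites before visiting anything.
italian_sites = [u for u in URLS if matches(u, "it")]
```

The same filter could equally combine a topical top-level domain with a country code, which is the combination the consortium's proposal envisages.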
ii) MIME Types
Those at the forefront (e.g. Larry Masinter, Xerox) of the next generation of Hypertext
Transfer Protocol (HTTP) are working on tags for different Multipurpose Internet Mail
Extensions (MIME) types, which will allow one to identify whether a message contains
audio, video, text etc. This
will add a further parameter to one’s search criteria such that one can discover which
URLs (and URNs) contain video before scanning through all the contents of a site.
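The gain in efficiency can be sketched in a few lines of Python. The catalogue below is an invented sample in which each URL declares the MIME types of its holdings; a user seeking video then visits only the sites that declare it.

```python
# Hypothetical sketch: selecting sites by declared MIME type before
# scanning their contents. The catalogue entries are invented.
catalogue = {
    "http://example-museum.it/tour": ["video/mpeg", "text/html"],
    "http://example-library.ca/catalogue": ["text/html"],
    "http://example-archive.fr/recordings": ["audio/wav", "text/html"],
}

def sites_with(major_type, entries):
    """Return the URLs whose declared MIME types include the given major type."""
    return [url for url, types in entries.items()
            if any(t.startswith(major_type + "/") for t in types)]

# Which URLs contain video? Only one site need then be visited.
video_sites = sites_with("video", catalogue)
```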
iii) Site Mapping
A new technique invented at Georgia State and now being developed at Xerox PARC
allows one to visualize the structure of a web site, i.e. to see how many layers the site
has and which pages are cross-referenced to which others, such that one can recognize
crucial points in the structure. A similar idea is evident in Apple's HotSauce. Microsoft
is also working on a similar feature.
iv) Content Mapping
Major research labs such as Lucent (the former Bell Labs now linked with Philips), are
working on defining the parameters of databases in terms of basic questions. This is
being done on an inductive basis using sources with databases which are considered
reliable. A combination of manual techniques and agent technologies is used to
determine the parameters of the contents, such that one can know, for instance, whether
the database contains video, and if so, which artists and from which year to which year.
v) Abstracts
Microsoft Word includes an AutoSummarize function among its tools. Companies such as
Apple are creating software (Vespa) which allows for automatic summaries of a page, a
paragraph or a single sentence. Hence one will be able to check these summaries first
instead of having to search through entire documents at the outset.
vi) Rating Systems
One of the problems with the Internet at present is that it is often very difficult to
establish the quality or reliability of a given site. The W3 Consortium is developing a
Platform for Internet Content Selection (PICS) which will allow the rating of sites. They
are also developing a concept of digital signatures which will introduce the equivalent of a
peer rating initiative for web sites.
vii) Content Negotiation
A number of models are being developed for content negotiation such as that of the
TINA Consortium. This includes rights management, licensing fees and secure
transactions. Others (e.g. IBM, Fraunhofer) are developing visible and invisible
watermarking methods for copyright protection. These will increase the precision with
which materials on the web can be handled.
viii) Agents
A great deal of research is being done on agents. Recently, Leonardo Chiariglione
(CSELT), one of the key individuals responsible for the MPEG-4 and MPEG-7 standards,
has initiated the Foundation for Intelligent Physical Agents (FIPA), which promises to be
an international meeting ground for developments in this field. Many thinkers (e.g.
Negroponte, Laurel) assume that agents will serve primarily as electronic butlers
producing, as it were, tailor made selections of newspapers and other sources in keeping
with our particular interests.
4) Recent Developments in Meta-Data
More recently there has been increasing attention to the term meta-data, which is often
used as if it were a panacea, frequently by persons who have little idea precisely what the
term means. In its simplest form, meta-data is data about data, a way of describing the
containers or the general headings of the contents rather than a list of all the contents as
such. Some of the interim measures listed above could be seen as efforts in this direction.
More specifically, there are a number of serious efforts within the library world. The
Library of Congress is heading work on the Z39.50 protocol, designed to give inter-platform
accessibility to library materials. This is being adopted by the Gateway to
European National Libraries (GABRIEL) and the Computer Interchange of Museum
Information (CIMI) group.
A number of meta-data projects are underway. For instance, the Defense Advanced
Research Projects Agency (DARPA), in co-ordination with the National Science
Foundation (NSF), NASA and Stanford University, is working on meta-data in
conjunction with digital library projects. DARPA itself is working on Knowledge Query
and Manipulation Language (KQML) and Knowledge Interchange Format (KIF). The
Online Computer Library Center (OCLC) has led a series of developments in library
meta-data (Dublin Core,
Warwick Framework). In essence these projects have chosen a core subset of the fields in
library catalogues and propose to use these as meta-data headers for access to the
complete records. An alternative strategy is being developed by the Institut für
Terminologie und angewandte Wissensforschung (ITAW, Berlin). They foresee
translating the various library schemes, such as the Anglo-American Cataloguing Rules
and the Preussische Regeln, into templates using Standard Generalized Markup Language
(SGML). This approach will allow interoperability among the different systems without
the need to duplicate information in meta-data headers.
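The core-subset idea behind Dublin Core can be made concrete with a small Python sketch. The catalogue record below is invented, but the field names follow five of the fifteen Dublin Core elements (Title, Creator, Subject, Date, Publisher); the full record is reduced to a meta-data header that can be published for access to the complete record.

```python
# A minimal sketch of the Dublin Core approach: reduce a full catalogue
# record to a core subset of fields serving as a meta-data header.
# The record itself is invented for illustration.
DUBLIN_CORE_SUBSET = ["title", "creator", "subject", "date", "publisher"]

full_record = {
    "title": "Trattato della pittura",
    "creator": "Leonardo da Vinci",
    "subject": "Painting -- Early works to 1800",
    "date": "1651",
    "publisher": "Langlois, Paris",
    "shelfmark": "V.1651.L",            # local detail, not in the header
    "acquisition_note": "Purchased 1907",
}

def metadata_header(record, core_fields=DUBLIN_CORE_SUBSET):
    """Extract only the core fields to publish as a meta-data header."""
    return {field: record[field] for field in core_fields if field in record}

header = metadata_header(full_record)
```

The header travels with the resource; the full record, with its local fields, remains in the institution's own catalogue.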
5) Number Crunching or the Limits of Brute Force
Each of the above initiatives is laudable and useful in its own right. They will all
contribute to easier access to materials and to efficiencies in that users can sometimes
rely on overviews, excerpts and abbreviations rather than needing to consult the whole
database in the first instance. But all of these remain short term solutions in that they do
not solve questions of how one determines variant names, places etc. Meanwhile some
members of the computer industry continue to argue that the troubles surrounding the
Internet are merely a passing phase: that although connectivity and search engines were
initially too slow, as soon as these hindrances are resolved, all will be well. While
rhetorically attractive, such reassurances are not convincing for several reasons.
First, there is a simple question of efficiency. A local database may have only local
names. The name for which one is searching may exist only in specialized databases.
Going to a typical database does not guarantee finding the name. Going to all databases
just to identify the name is highly inefficient. The same problem applies to subjects,
places, different chronological systems etc. It applies also to different media. If I am
looking for one particular medium such as video then it makes sense to look at sites with
video, but not all sites in the world. Searches to find anything, anywhere, anytime should
not require searching everything, everywhere, every time. As the number of on-line
materials grows apace with the number of users, the inefficiencies of this approach will
become ever greater.
A second reason is more fundamental. Even if computer power were infinite and one
could search everything, everywhere, every time, this would not solve the problems at
hand. Names of persons and places typically have variants. If I search for only one
variant the computer can only give me material on that variant. If, for example, I ask for
information about the city of Liège, the computer can at best be expected to find all
references to Liège. It has no way of knowing that this city is called Luik in Dutch,
Lüttich in German and Liegi in Italian. This is theoretically merely a matter of translation.
But if every place name has to be run through each of the 6,500 languages of the world
each time a query is made, it would be an enormous burden to the system. And it would
still not solve the problem of historical variants. For instance, Florence is known as
Firenze in modern Italian but was typically written as Fiorenza in the Renaissance. It
would be much more practical if every advanced search for a place name went through a
standard list of names with all accepted variants. Such a standardised list acting as a
universal gazetteer needs to be centralised.
The same basic principle applies to variant names of authors, artists etc. If I have only
one standard name, the computer finds that name but it can never hope to find all the
variants. Sometimes these variants will be somewhat predictable. Hence the name Michel
de France, will sometimes be listed under de France, sometimes under France, Michel
de. In other cases the variants are more mysterious. Jean Pélerin, for instance, is known
as Viator, which is a Latin equivalent of his name, but other variants include Le Viateur,
and Peregrinus. Neither simple translation nor even a fuzzy logic programme can be expected
to come up with all the actual variants of names. Needed is a central repository to ensure
that these variants can be found efficiently. In the case of artists' names, for instance,
Thieme-Becker’s Allgemeine Künstler Lexikon offers a useful starting point, as do the
great library catalogues (e.g. National Union Catalogue, British Library and Bibliothèque
Nationale). These lists need to be collated to produce one authority list with all known
variants, much in the way that the Getty found necessary in the case of its (in-house)
Union List of Artist Names (ULAN). The problem applies also to subjects,19 as anyone who has
tried to find things in foreign versions of the Yellow Pages will know. In Toronto, for
example, a person wishing to know about train schedules will find nothing under Trains,
but needs to look under Railroads. A person looking for a paid female companion will
find nothing under Geisha, Call Girl or Prostitute, but will find 41 pages under the
heading Escort Service.
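The mechanism being argued for here, a central authority list that maps every accepted variant to one standard heading, can be sketched in a few lines of Python. The entries below are a tiny invented sample drawn from the examples above, not an actual authority file.

```python
# A toy authority list: every accepted variant of a name points to a
# single standard heading, so a search under any variant finds the
# same person or place. The entries are a small invented sample.
AUTHORITY = {
    # place-name variants (a universal gazetteer in miniature)
    "Liège": "Liège", "Luik": "Liège", "Lüttich": "Liège", "Liegi": "Liège",
    "Florence": "Florence", "Firenze": "Florence", "Fiorenza": "Florence",
    # person-name variants
    "Jean Pélerin": "Pélerin, Jean", "Viator": "Pélerin, Jean",
    "Le Viateur": "Pélerin, Jean", "Peregrinus": "Pélerin, Jean",
}

def standard_form(query):
    """Resolve any known variant to its standard heading, or None."""
    return AUTHORITY.get(query)

# A search under the Latin pseudonym and under the French name
# reach one and the same heading.
assert standard_form("Viator") == standard_form("Jean Pélerin")
```

The point of centralising such a list is that the variants need be collated once, rather than recomputed, or missed, at every one of thousands of distributed sites.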
Hence a fully distributed model for housing collections may be extremely attractive
because it means that museums, galleries and other cultural institutions can remain in
control of the databases and information pertaining to their own collections. The
disadvantage is that there are already hundreds, and there will soon be tens of thousands,
of individual repositories; if every user around the world has to visit all of these sites
for every search they do, this approach will become hopelessly inefficient.
6) Centralized Meta-Data
An alternative is to link this distributed model of individual collections with a centralized
repository for meta-data. The basic idea behind such a repository is to use the methods
established by thousands of years of library experience as a general framework for
searching libraries, museums, galleries and other digitized collections. This centralized
meta-database will have three basic functions:
First, it serves as a master list of all names (who?), subjects (what?), places (where?),
calendars, events (when?), processes (how?) and explanations (why?). This master list
contains all typical variants and versions of a name, such that a person searching for
Vinci, Da Vinci or Leonardo da Vinci, will be directed to the same individual.
Second, this master list contains a high-level conceptual map of the parameters of all
major databases in cultural and other institutions. Hence, in the case mentioned above of
the user searching for Chinese art of the Han dynasty, the master list will identify which
databases are relevant. Recent initiatives in site mapping and content mapping will aid
this process.
Third, this master list of names and subjects is linked to indexes of terms (classification
systems), definitions (dictionaries), explanations (encyclopaedias), titles (bibliographies),
and partial contents (reviews, abstracts, and citation indexes). Thus this centralized
database effectively serves as a digital meta-reference room which links to distributed
contents in libraries, museums, galleries and other institutions. This process of
contextualisation of otherwise disparate information enables the centralized source to act
as a service centre in negotiating among distributed content sources.
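The first two functions together can be compressed into a short Python sketch of how such a digital reference room might mediate a query: resolve the variant through the master list, then use the high-level map of database parameters to route the query only to relevant distributed collections. All names and database identifiers below are invented for illustration.

```python
# Hypothetical sketch of the centralized meta-data model: a master list
# resolves name variants, and a conceptual map of database parameters
# routes the query to relevant collections only. All entries invented.
MASTER_LIST = {
    "Vinci": "Leonardo da Vinci",
    "Da Vinci": "Leonardo da Vinci",
    "Leonardo da Vinci": "Leonardo da Vinci",
}

# High-level map: which distributed databases hold material on whom.
DATABASE_MAP = {
    "Leonardo da Vinci": ["uffizi_paintings", "ambrosiana_manuscripts"],
    "Botticelli, Sandro": ["uffizi_paintings"],
}

def route_query(name):
    """Resolve a variant, then return only the databases worth searching."""
    standard = MASTER_LIST.get(name, name)
    return standard, DATABASE_MAP.get(standard, [])

who, targets = route_query("Da Vinci")
# The user is directed to two relevant collections rather than having
# to search every database, everywhere, every time.
```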
Libraries long ago discovered the importance of authority lists of names, places and
dates. Indeed, a number of international organizations have been working in this direction
during the past century, including the Office International de Bibliographie, the
Mundaneum, the International Federation for Documentation (FID20), the Union of
International Associations (UIA21), branches of the International Organization for
Standardization (e.g. ISO TC 37, along with Infoterm) as well as the joint efforts of
UNESCO and the International Council of Scientific Unions (ICSU) to create a World
Science Information
System (UNISIST). Over 25 years ago, the UNISIST committee concluded that: “a world
wide network of scientific information services working in voluntary association was
feasible based on the evidence submitted to it that an increased level of cooperation is an
economic necessity”.22 Our recommendation is that this world-wide network should
include both cultural and scientific information.
As a first step one would combine the lists of names already available in RLIN, OCLC,
BLAISE, PICA, GABRIEL, with those of the Marburg Archive, the Allgemeine Künstler
Lexikon, Iconclass, the Getty holdings (ULAN, Thesaurus of Geographic Names), and
the lists owned by signatories of the MOU. This will lead to a future master list which is
essential for all serious attempts at a meta-data approach to cultural heritage and
knowledge in general. Because such a list represents a collective public good it is
important that it should be placed in safekeeping with UNESCO. Senior officials at
UNESCO already support this idea. It would make sense to link this list with related
bodies such as UNISIST or ICSU, and to replicate copies in various centres
around the world.
The basic framework for such a digital reference room might come under the combined
purview of the European Commission’s Memorandum of Understanding in its next phase
and the G8 pilot projects 5 (Multimedia Access to World Cultural Heritage) and 4
(Bibliotheca Universalis). A series of national projects can then add country specific
information. These national projects can be organized by consortia of industry and
government. By contributing lists from a given country, that country receives access to
the centralized meta-data base.
7) Conclusions
Models for knowledge organization have ranged on the one hand from dreams of a single
centralised source for all contents (e.g. the Library at Alexandria), to a fully distributed
model on the other. We have shown that, although they may be conceptually attractive,
both of these extremes are impractical, and that these problems will not be
resolved as a result of a) recent innovations on the Internet, b) new initiatives with respect
to meta-data or c) even through the advent of nearly infinite computing power which
promises to increase greatly the possibilities of number crunching using brute force. In
the end, all of these solutions are piecemeal and short term.
This paper outlines an alternative model, entailing centralised meta-data in the form of a
digital reference room and distributed content sources. This digital reference room will
combine in virtual space the resources of famous reference collections such as the British
Library, the Bibliothèque Nationale and the Vatican Library, and thus serve as an entry
point for digital libraries of primary and secondary sources on a global scale.23 It would
be fitting if the European Commission, working in tandem with the G8 pilot projects and
UNESCO, were to co-ordinate such a project in conjunction with consortia of industry and
governments. Centralised meta-data in a digital reference room will present considerable
challenges, but it offers a long term answer to the problems of an information age on a
global scale.
Chapter 3
Search Strategies
1. Introduction
The digitisation of knowledge is provoking a wide range of responses. On the one hand,
optimists paint scenarios of an information society with knowledge workers, and even
electronic agents who will do our work for us by consulting digital libraries and
museums, making learning an almost automatic process. They describe a world of
seamless connectivity across different operating systems on computers and also among
the communications devices in myriad shapes: from televisions, video-cassette recorders,
CD-ROM players and radios at home, to faxes and photocopiers in the office as well as
telephones, cellular phones, and other gadgets. The advent of nomadic computing24 will
make ubiquitous computing a reality. The result, they promise, will be a world where we
have access to anything, anywhere, anytime, where there are self-learning environments
and the world is a much better place.
On the other hand, thinkers such as Pierre Lévy25 argue that the new computer age is
bringing a second flood, whereby we risk being drowned in the massive amounts of
information. In their view systematic approaches to knowledge are a thing of the past.
Their pessimistic view is that there is simply too much knowledge to cope with. If they
were right, one could be driven to the conclusion that the high goals of the Information
Society, as articulated in the Bangemann Report and illustrated in the G7 exhibitions and
pilot projects, have only created new problems rather than long term solutions. This paper takes
a more positive view. It acknowledges that the challenges are more formidable than some
of the technophiles would have us believe; that these challenges cannot be solved by
simple number crunching, but can be resolved with strategies that will lead to new
insights in the short term and potentially to profound advances in understanding in the
long term. The hype about anything, anytime, anywhere makes it sound as if the only
advance is in terms of convenience. This paper claims that much more is possible and
thus concludes on a note of restrained optimism.
By way of introduction there is a brief discussion of paradoxical links between access,
content and presentation. Next, basic distinctions are made between different kinds of
knowledge: ephemeral and long term; static and dynamic. It is shown that decisions
whether information is stored locally or remotely affect how knowledge is handled. A
series of strategies for access to knowledge are then outlined: the role of purpose,
questions, spatial and temporal co-ordinates, multiple classifications, authority files. It is
claimed that these strategies depend in the long term on a new kind of digital reference
room. The final section of the paper turns to emerging methods in three-dimensional
navigation and explores some potential applications in terms of visualising connections,
seeing invisible differences and predicting by seeing absence.
2. Access, Content and Presentation
The differences between knowledge in books and in digital versions are more
fundamental than they might at first appear. Books are linear, which means that the
method of presentation is linked with the content in a fixed way. Any decision to change
presentation requires re-publication of the book. In electronic versions these constraints
may apply, as in the case of CD-ROMs or HTML pages, but they need not apply. At the
level of programming, the last decades have brought an increasing awareness that it is
useful to separate the transport of content from its presentation26. For example, at the high
level, Standard Generalized Markup Language (SGML) fully separates the rules for
delivery of a document from the ways in which it is presented and viewed by different
users. XML, which is a simplified subset of SGML, attempts to make these high level
principles accessible to a larger audience.27 This same philosophy pertains to databases.
Once the contents have been organized into fields, then any number of subsets can be
viewed and presented without having to re-write, or even re-organize the original content.
Solutions such as SGML, XML and databases, mean that one can write once and view in
many different ways with only a minimum of effort. At the level of programming they
require a strict separation between access to content and presentation of content.28
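The write-once, view-many principle described above can be sketched in a few lines. The record below is invented and the rendering expressions are illustrative, not part of any standard; the point is only that the markup says what each element *is*, while any number of presentations can then be computed from the same marked-up content:

```python
import xml.etree.ElementTree as ET

# A hypothetical bibliographic record: the markup describes content, not appearance.
record = ET.fromstring(
    "<book><author>Alberti</author><title>De pictura</title>"
    "<year>1435</year></book>"
)

# Two different presentations derived from one stored record,
# without rewriting or reorganizing the content itself.
citation = f"{record.findtext('author')}, {record.findtext('title')} ({record.findtext('year')})"
catalogue_line = f"{record.findtext('title')} / {record.findtext('author')}"

print(citation)        # Alberti, De pictura (1435)
print(catalogue_line)  # De pictura / Alberti
```

A third or fourth view (a chronological list, a short-title index) would be one more expression over the same record, which is precisely the economy the separation buys.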
One might expect that what applies at the database and programming level should also
apply at the viewing level, namely that the best way to make progress in access to
knowledge is to separate this entirely from questions of presentation. This assumes that
one can employ raw computing power to searches and not have to worry how things look.
According to this approach search engines are the domain of serious programmers, while
presentation is quite a different thing, the domain of users and incidental at best. Hence
search engines are about computational speed and theoretically require teams of
professionals, whereas presentation can be relegated to a little section on user preferences
in a browser, as if all that were involved were matters of taste such as the colour of
screens or the image on a screensaver.
The problem goes deeper. Searching, it is assumed, requires one set of tools, an active
search engine such as Altavista, Yahoo or Hotbot, which then presents its results in a
passive browser such as Netscape or Microsoft Internet Explorer. Presentation requires another
set of tools such as Powerpoint at the everyday level or more advanced tools such as
Toolbook, Macromedia Director or Authorware. It is assumed that they are two very
different problems: passive viewing of materials that have been accessed through a search
engine, or active editing of materials for presentation.29 Hence if we are interested in both
problems, we need to download the findings from our browser, via a notepad, to a
presentation tool.
Alas, these assumptions account for some of the important limitations of access methods
for the moment. Different views of information are much more than a matter of taste.
They are crucial to understanding materials in new ways and therefore an essential
element in access strategies. To take a concrete example: I am searching for titles by
Alberti on perspective in the Library of Congress. The Library has all its records in a
complex database with many fields. A typical query presents materials from some of
those fields in a static form whereby the dynamic properties of the database form are
effectively lost to me as a user. Hence if the Boolean feature is working, I can search on
Alberti and perspective and this will give me a random set of titles answering those
criteria. The latest version of a search service such as Yahoo allows me to arrange this list
alphabetically. The titles are listed in their original language such that the same text
which has as its original title, De pictura, is found at different points of the list, under B
as in Buch von der Malerei in German, D as in Della pittura in Italian, M as in (O)
Malarstwie or O as in On Painting in English. The MARC records have a field for the
standard title, but this is typically not available in searches. Downloading a full MARC
record would provide me with the basic ingredients for the answer I need. But what I
actually want is to have the materials from the remote database in the Library of
Congress or some other institution loaded into a database within my local browser such
that I can have different views of this material on the fly without needing each time to
make a new search from the remote source. If I wished, for instance, to look at these
same titles chronologically I could do so on the spot. This functionality becomes essential
if I am making a search, which produces several hundred or even several thousand titles
as, for instance, with the Ameritech software on the web. Hence, what is needed is a
search engine and browser, with editing functions to provide multiple views of materials.
We have software for isolated functions. We need integrating software that allows
us to move seamlessly from searching and access to editing and presentation. Access and
presentation are connected.
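The kind of integration this paragraph calls for can be sketched as follows. The `fetch_records` function and its record fields are hypothetical stand-ins for a single remote catalogue search (there is no such Library of Congress API implied here); the point is that once the results are held locally, every new view costs nothing:

```python
def fetch_records(query):
    """Stand-in for one remote search; imagine full MARC-style records
    returned from a catalogue such as the Library of Congress."""
    return [
        {"uniform_title": "De pictura", "title": "On Painting", "year": 1966},
        {"uniform_title": "De pictura", "title": "Della pittura", "year": 1547},
        {"uniform_title": "De pictura", "title": "Buch von der Malerei", "year": 1877},
    ]

# One remote search, one local copy of the results.
results = fetch_records("Alberti AND perspective")

# Different views computed on the fly, without re-querying the remote source.
alphabetical = sorted(results, key=lambda r: r["title"])
chronological = sorted(results, key=lambda r: r["year"])
by_uniform_title = {r["uniform_title"] for r in results}  # variant titles collapse to one work

print([r["year"] for r in chronological])  # [1547, 1877, 1966]
```

With the uniform-title field available locally, the German, Italian and English editions of the same text group together instead of scattering across the alphabet.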
There are other cases when connections between access and presentation become even
more vital. I am searching for a term. It results in a text containing words which I do not
understand. I want to click on one of these terms, then click on a dictionary function
which takes me to a copy of Webster's or the Oxford English Dictionary and provides me
with an online definition of the term in question. At present we have to interrupt our
search, look for a dictionary, search for the term in a dictionary and then go back to the
text we were reading. To expand a search it would be useful to know a term's synonyms
antonyms. In this case we want to click on the word, choose one of these alternatives and
go on line to Roget’s Thesaurus to find the answers. Without a close coupling of access
and presentation this contextualizing of searches is difficult if not impossible. Thus, while
the programming level requires a clear separation of access strategies from presentation
methods, the user level requires the re-integration of these two domains in order to
achieve more effective conceptual navigation.
The above examples entail an approach to searching which allows a user to have access
simultaneously to a series of basic reference tools that are the electronic equivalent of a
scholar writing at a desk with standard dictionaries, encyclopaedias and other reference
works within arm's reach. This is one of the underlying assumptions in the prototypes for
a System for Universal Media Searching (SUMS, ©1992-1997). A senior scholar
working in a major library would have access to a large reference room such as the
Reading Room in the British Library. An electronic improvement would be a digital
reference room, which offered the cumulative resources of all the major reference rooms.
This would give every user desktop access to the combined materials available in the
British Library, the Bibliothèque de la France, the Vatican and other great collections.
Such a tool is foreseen in the System for Universal Multi-Media Access (SUMMA,
©1996-1997, cf. Appendix 1).
Implicit in the above is a new approach to search strategies. An old model assumed
that everything could be collected in a single place. This has been a dream since the
Library of Alexandria. At the other end of the spectrum, a more recent Internet model
assumes full distribution. The proposed method offers a middle way between
these two extremes in that it foresees a centralized digital reference room30 (which could
of course have a number of mirror sites) serving as an electronic repository for global
meta-data and pointing to distributed repositories of contents around the world.31
One fundamental idea underlying this new search strategy is very simple. Libraries, in
particular their reference sections, have constructed enormous lists of materials, notably
their (now electronic) catalogues of author names, subject headings, keywords, titles,
place names, call numbers and the like. These catalogues are presentation methods to
gain better access to their own collections of books, and other materials. At the same
time, these catalogues are effectively authority files with standard and variant versions of
authors’ names, place names etc. and as such can be used as instruments for navigating
through materials elsewhere. As a library catalogue, each name, subject, title and place,
may point to only one specific object in that particular library. As a list of names, the
same catalogue can help me refine my search for a J. Taylor in general to John Taylor of
Cambridge in particular. As a list of subjects the same catalogue can help me refine my
search from medicine in general to osteoporosis, cardio-vascular research or other
particular branches the names of which I might have forgotten or never have known until
I looked at the systematic branches of the field. Alternatively, as a list of titles, each
single title has a discrete call number and so I can go to that call number to discover other
titles classed in the same areas. Or browsing near that number, I discover related
problems and fields. Thus the presentation tools for a given library collection can become
search and access tools for other collections, and even help in refining questions about
other materials which may not have been organised as carefully as the library materials. Thus
past efforts at organising knowledge can help us in present efforts and future searches.32
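The idea of a catalogue doubling as an authority file can be sketched with a toy mapping. The entries below are invented for illustration, not taken from any actual authority list; the mechanism is simply that variant forms resolve to one standard heading, and the standard heading in turn yields the variants needed to search other collections:

```python
# Illustrative authority file: variant name forms map to one standard heading.
authority = {
    "Taylor, J.": "Taylor, John (Cambridge)",
    "Taylor, John": "Taylor, John (Cambridge)",
    "Alberti, L.B.": "Alberti, Leon Battista, 1404-1472",
    "Alberti, Leone Battista": "Alberti, Leon Battista, 1404-1472",
}

def standard_form(name):
    """Refine a vague query to the standard heading, if one is known."""
    return authority.get(name, name)

def variants_of(standard):
    """All catalogued variants of a heading: useful for searching
    collections that record the name in a different form."""
    return [variant for variant, std in authority.items() if std == standard]

print(standard_form("Taylor, J."))
print(variants_of("Alberti, Leon Battista, 1404-1472"))
```

The same pattern applies unchanged to place names, subjects and titles: a list built for one library's presentation becomes a navigation instrument for everyone else's holdings.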
To take a specific and unexpected example of this idea: Roget’s Thesaurus of English
Words and Phrases is typically used as a standard reference work for finding related
words. On closer inspection we find that this was much more than a handy list of words.
Roget set out to divide the whole field of meaning into 1000 headings (now reduced in
Longman’s edition to 990), subsumed under six basic classes (figure 1). A few years ago
(1992), a scholar named Day33 used this classification system as a finding tool for
Biblical verses, pointing out that this is often much more effective than either topical
indexes (which are frequently redundant because there is no critical hierarchy of terms)
or standard concordances based on actual words (which are usually linked to a specific
edition of the Bible and thus less universal in their application).
This approach, which could potentially be applied to any book, again suggests a basic
principle: classes used to order the world, can be used to find materials in the world:
presentation methods are keys for access methods. The WWW Virtual Library has begun
to exploit this principle, although it limits itself to the upper levels of the classes and uses
them only to find web sites rather than as an integrated means of finding web sites,
library titles and other materials.
Class
1. Abstract Relations
2. Space
3. Matter
4. Intellect
5. Volition
6. Emotion
Figure 1. Six basic classes used in Roget’s Thesaurus, under which he organized all the
fields of meaning into 1000 headings. These have since been used as categories for
finding Biblical passages and could potentially be applied to any book.
In one mode these searches will be manual. In another they will be aided by voice
technology such that one is taken directly to the appropriate points in a list. In other
modes, agent technologies will record one’s search habits in order to gain a profile of the
user’s interests. These agents would then use this profile to search other sources. In
addition, they could take a list of the user’s subject interests, determine closely related
subjects, use this to explore potentially relevant material and suggest these as being of
possible interest to the user.34 Parallel to these activities of agents at a personal level will
be their role in systematically making a series of authority lists of names, places, events
etc.35 Eventually, access strategies will include a whole range of choices including:
agents, filters, languages, levels of education, machine configurations, personal profiles,
relations, special needs, structures and viewers.
3. Learning Filters and Knowledge Contexts
In any given field the complete corpus of knowledge is enormous. This corpus, which is
the sum total of materials available in the world’s libraries and research institutes is
seldom understood by more than a handful of experts around the world. These experts
and their colleagues use subsets of this corpus to write the curricula and subsets thereof
for standard textbooks in their fields which then become the basis of courses. The courses
at various levels of education from elementary school to the post-graduate level are
subsets of these textbooks. Exams, in turn, are further subsets of the courses.
Traditionally, students are expected to recognise the links between exam, text and course;
teachers recognise further links with the curriculum and only a handful of experts can
recognise all the precise links between a particular exam question, course, textbook,
curriculum, and the corpus. Given the advent of computers these mental links can be
translated into electronic hot links, such that even a beginning student will be able to
trace the whole range of links between an exam (question) and the corpus (answer on
which it is based), and thereby understand the full context. Conversely, one could begin
with any part of the corpus and trace the (exam) questions it entails.
Digital libraries are about making available in electronic form the corpus of knowledge.
Ministries of education are translating curricula and learning outcomes into digital
form. Faculties of education and institutions of learning as a whole are making individual
courses available in electronic form. Once this process is complete, the links between
these efforts can be made and there will be a whole range of subsets of every corpus
corresponding in each case to a different level of education. A pre-school child will begin
with the smallest of these subsets. Their list of persons (Who?) will be very short. Once
they have mastered this list, they can go to the next level which has a few more names,
and they can continue in this fashion until they have reached the full list of all names at
the research level.36 The same principle applies to subjects, key words (What?), places
(Where?), events, times, chronologies (When?), methods, procedures, processes (How?)
and explanations (Why?). The genius of the system lies in the fact that it does not have to
create all this content. It uses the presentation materials of existing knowledge
organisations, particularly libraries, museums and schools as the starting point for new
access strategies. The novelty lies in making new use of the old through integration rather
than trying to re-invent the wheel as so many search engines assume to be the case.
Today’s search engines aim to find a title we want. Research is about finding what we did
not know existed. We need to develop re-search engines.
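The nesting of these learning subsets can be sketched directly. The names and levels below are invented for illustration; the property that matters is that each level's list is contained in the next, all the way up to the research-level corpus:

```python
# Illustrative learning filters: each level of education sees a subset
# of the full research-level list of persons (Who?).
levels = {
    "pre-school": ["Leonardo da Vinci"],
    "secondary": ["Leonardo da Vinci", "Botticelli", "Alberti"],
    "research": ["Leonardo da Vinci", "Botticelli", "Alberti",
                 "Piero della Francesca", "Pelerin (Viator)"],
}

def visible_names(level):
    """The subset of the corpus a learner at this level works with."""
    return set(levels[level])

# The defining property of the filters: every level is nested in the next.
assert visible_names("pre-school") <= visible_names("secondary") <= visible_names("research")
```

The same nesting would apply to subjects (What?), places (Where?), times (When?), methods (How?) and explanations (Why?), so that mastering one level simply unlocks the next larger subset.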
4. Levels of Knowledge
From a global point of view reference rooms in libraries contain mainly five kinds of
materials, namely, 1) terms (classification systems, subject headings, indexes to
catalogues); 2) definitions (dictionaries, etymologies); 3) explanations (encyclopaedias);
4) titles (library catalogues, book catalogues, bibliographies); 5) partial contents
(abstracts, reviews, citation indexes). All of these are pointers to the books in the rest of
the library, or 6) full contents which can conveniently be divided into another four
classes, 7) internal analyses (when the work is being studied in its own right); 8) external
analyses (when it is being compared or contrasted with other works); 9) restorations
(when the work has been altered and thus has built into it the interpretations of the
restorer) and 10) reconstructions (when the degree of interpretation is accordingly larger).
From this global point of view the first six of these categories are objective,37 while the
last four (7-10) are increasingly subjective. The first category is also the simplest
(isolated terms), and each successive level enters into greater detail: i.e. dictionary
definitions range from a phrase to a few sentences; encyclopaedia explanations extend
from a few sentences to a few pages etc. Once again the physical arrangement of major
libraries serves as a starting point for the conceptual system; the presentation system of
libraries offers another key to access into its electronic version. The heritage of
experience in organising the known provinces of knowledge, offers the departure points
into its unknown lands.
At the same time the electronic version offers more possibilities than its physical
counterpart. For instance, the classification system of a library changes over time. In print
form this is documented by successive editions of the Dewey, Library of Congress and
other systems. These print forms are static and it would require my having the various
editions in front of me all opened at the same section in order to trace their evolution. In
electronic form the various editions can be linked to a time scale such that the categories
change dynamically as I slide the time scale. This offers new means of understanding the
history of a field. In the longer term one will be able to go from a term in any given
classification scale to the same term or its nearest equivalent in other classification
schemes. If one entered all the major systems traced by Samurin38 in his standard history
of classification systems, shifts along the time scale would allow one to see the gradual
branching of knowledge over time and trace how a general category such as medicine has
led to thousands of specialty topics.
5. Questions as Strategy: Purpose as Orientation
There is the popular game of twenty questions. In simple cases we can learn a great deal
with only six questions. The first is the purpose, or Why? Secondly, we ask about the
means, or How? Thirdly, we determine the time of its occurrence or When? Fourthly we
ask whether it is local, regional, national or international or Where? Fifthly, we ask about
the precise subject, or What? Sixthly, we ask about the persons involved or Who? This
sequence of questions is part of the method. There is the well-known episode in Alice in
Wonderland where she asks the cat for directions only to be told that this depends on
where she wants to go. To navigate effectively the totality has to be reduced to navigable
subsections. To this end questions offer a strategy.39
Knowing the purpose in terms of basic categories such as everyday, business, health, law,
religion and leisure determines the main thrust of the information and knowledge we are
seeking. Everyday includes classifieds, news, sports, traffic and weather. If our goal is
business oriented we will be concerned with a very different subset of knowledge about a
city than a person visiting it for leisure as a tourist. The purpose (Why?) thus provides a
first way of determining the scope of the search and thus narrows the field. Knowing the
means by which the goal is to be accomplished (How?) further narrows this scope. For
instance, if our interest in everyday news is limited to television, then we can ignore radio
and newspapers. Knowing the temporal boundaries provides further limits (the old
terminus ante quem and post quem or simply When?) Knowing the geographical
boundaries of our interests (Where?) further limits the scope of the search. If we are
interested in leisure and tourism specifically in Italy or India, then we can ignore the
leisure information for all the rest of the world. Knowing the precise subject (What?)
and/or the precise persons (Who?) provide final refinements to this process of narrowing
the scope.
This means that, in the case of basic searches, a simple series of six simple choices can
guide a user from a vague intention to a quite specifically defined question. This slightly
tedious, highly structured procedure is appropriate for beginners and some members of
the general public who wish to use their search engines in the manner that they use their
remotes. Accordingly the number of choices are sufficiently limited such that they could
fit onto their remote: they can do question hopping as easily as channel hopping. A slight
advance introduces longer lists of choices elsewhere on the screen to increase the number
of alternatives.
The Internet has introduced the notion of Frequently Asked Questions (FAQ). These are
typically listed without any particular order. The questions methodology foresees that
these are organized by question type. Hence, having chosen a topic, by pressing Who?
the user would receive all FAQs concerning persons, by pressing When? they would
have all FAQs concerning temporal attributes. By pressing How? they would have all
FAQs concerning function. These lists could in turn be viewed from multiple
viewpoints: alphabetically, temporally, etc.
A next stage in sophistication introduces the questions as headers, each of which has a box
below it into which users can define more precisely the details of their search. For
instance, under Who, they might write, Leonardo da Vinci; under What they might write,
optics; under Where they might write Milan, under When they might write 1505-1508 and
under Why they might write Education. They are not constrained to enter information
under every question, except that every omission results in a more general answer. If they
leave out When they would get all his optical works written in Milan. If they leave out
Where they would get all his optical works and if they leave out What they would get his
complete works. The elegance of this approach is that a simple word typed in by the user
is in each case automatically translated into a question linked with appropriate fields of a
database. The user’s one word statements on the surface are translated effortlessly to
formal programming queries below the surface.
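The translation from filled-in question boxes to a formal query can be sketched as follows. The records and field names are invented for illustration; the mechanism is that each filled box becomes a field constraint, and omitting a question simply drops its constraint and widens the answer set:

```python
# Illustrative catalogue of works, with one field per question type.
works = [
    {"who": "Leonardo da Vinci", "what": "optics", "where": "Milan", "when": 1505},
    {"who": "Leonardo da Vinci", "what": "optics", "where": "Florence", "when": 1503},
    {"who": "Leonardo da Vinci", "what": "anatomy", "where": "Milan", "when": 1510},
]

def ask(**boxes):
    """Translate the user's one-word entries into a query: keep every
    record that matches all the boxes the user actually filled in."""
    return [w for w in works
            if all(w.get(field) == value for field, value in boxes.items())]

print(len(ask(who="Leonardo da Vinci", what="optics", where="Milan")))  # 1
print(len(ask(who="Leonardo da Vinci", what="optics")))                 # 2 (Where omitted)
print(len(ask(who="Leonardo da Vinci")))                                # 3 (complete works)
```

Leaving out Where broadens the optics results from Milan alone to all cities; leaving out What as well returns the complete works, exactly the graceful degradation the text describes.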
Further stages of sophistication transform these random entries of general subjects to
successively more comprehensive lists based on a) the user’s personal list of subjects; b)
headings in database fields; c) the subject lists of the Library of Congress; d) the Library
of Congress classification list; e) multiple classification lists. Once again the order
established in libraries and other institutions (partly for presentation purposes) helps to
refine the subtlety of the access strategies and render elegant the navigation principles.40
6. Media Choices
Librarians have long been concerned with careful distinctions about the medium of the
message, and they have typically separated objects accordingly: books go to one place on
the shelves, maps to another, prints to another. In great collections, such as the British
Library, maps and prints have their own rooms. Similarly music recordings go to a
special section. Very different media have gone to fully separate institutions including
film libraries, television archives, art galleries and museums. Each of these has
developed their own specialised techniques for cataloguing and recording. So having
used the experience of libraries to create cumulative authority lists of names, subjects and
places for books and related media, the records of these other institutions can be used to
arrive at a more comprehensive picture of known materials. One important aspect of
recent meta-data trends is the identification of media types as an essential aspect of their
description. Thus a simple list of media choices serves as yet another tool in refining
one’s potentially infinite search of all things, to a manageable subset covering some
thing(s) in some particular media.41
The net result of such strategies is equally interesting because it means that hitherto
dispersed information about different media can now be accessed at once rather than
having to rush from a book catalogue to a print catalogue, map catalogue etc. Hence one will be
able to see at a glance how one subject inspired many media whereas others have been
limited to a few or even an isolated medium. McLuhan focussed on the medium as the
message. But there are clearly histories to be told of messages which inspired a whole
range of media.
7. Quality, Quantity, Questions
Until recently many persons measured search engines by their ability to find things.
Today the problem with most search engines is that they find too much. The problem has
more to do with quality than quantity, which depends not so much on the power of the
engine as on the precision with which the engine is given direction. If we ask for everything
we get everything. Even in the case of something fairly specific such as Smith,
everything about all the Smiths in the world is still a great deal more than we usually
want to handle. Most of us have no idea that there is a great deal more available than we
suspect, so when we begin searches on a global scale we need a crash course in
specificity.
Bits of this are common sense. If we are searching about the latest things happening in
Sydney, Australia it is wise to look first in databases in Sydney rather than searching
through every site on the other continents. If we do not find it there, we can do a global
search and perhaps discover that an Australian expatriate now living in Los Angeles or
London, has a database on precisely what we need. If we are looking specifically for
financial materials relating to banks it makes no sense to search databases of legal firms,
health centres or tourist spots. As the IBM advertisement states: it is better to fish where
the fish are. To this end, our course, Specificity 101, would state, as was noted above,
that in the case of simple searches, a strategic sequence of Why?, How?, When?, Where?,
What? and Who?, is often enough to narrow the scope of our general interest into
something sufficiently specific to result in a manageable number of answers.
If the number of answers is still too large then we need further means to make them
smaller. In the physical world when the number of applicants to a university is too high,
standards are raised, which means that only those of a certain quality are chosen. The
same principle applies in the electronic world. Until recently there was a problem that in
many electronic documents the precise level of quality was undefined. A number of
initiatives are underway to alleviate this, including a) new conventions for high level
domain names so that one can tell at a glance whether the site is basically concerned with
education, business, or some other subject; b) the W3 Consortium’s Platform for Internet
Content Selection (PICS) and c) their initiatives in the direction of digital signatures. The
latter of these is perhaps the most far reaching. It places the responsibility of description
on the content producer, although their accuracy can then be checked by others, such that
a digital signature effectively functions as a submission to peer review. Quality articles
can only gain from the fact that their value has been confirmed by others, so those who
refuse to provide digital signatures will effectively be disqualifying themselves.
As in the world of books, more substantial electronic projects and websites have reviews
and contexts which further identify quality and remarkable achievement. Thus lists of
basic qualities become additional parameters for paring down the many choices to a
manageable small list. The same is true for lists of quantitative features. These too act as
tools for refining the parameters of a search.
In some cases an iterative use of the basic questions will be useful. For instance, if my
primary interest is biographical then my original question would have been in terms of
Who? I might have started with a generic term for Who? such as Artists and then
narrowed this universal list to subsets of Italian then Renaissance, then Florentine in
order to arrive at Sandro Botticelli. To know more about his colleagues I would again ask
Who? To know more about the subjects connected with Botticelli I would again ask
What? And so on. Hence, as we know from celebrity interviewers,
investigators and psychologists, a careful sequencing of a small number of precise
questions often brings better answers than many imprecise stabs in the dark.
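The iterative narrowing just described can be sketched in a few lines of code. The records, field names and values below are invented for illustration; the point is only that each precise question adds one constraint and shrinks the answer set.

```python
# A toy sketch of the iterative narrowing strategy described above.
# All records and field names are invented for illustration.

records = [
    {"who": "Sandro Botticelli", "nation": "Italian", "period": "Renaissance", "city": "Florence"},
    {"who": "Albrecht Duerer", "nation": "German", "period": "Renaissance", "city": "Nuremberg"},
    {"who": "Leonardo da Vinci", "nation": "Italian", "period": "Renaissance", "city": "Florence"},
    {"who": "Claude Monet", "nation": "French", "period": "Impressionism", "city": "Paris"},
]

def narrow(items, **criteria):
    """Keep only the items matching every criterion given."""
    return [r for r in items if all(r.get(k) == v for k, v in criteria.items())]

# Each question adds one precise constraint, shrinking the answer set.
artists = narrow(records, nation="Italian")        # Who? -> Italians
artists = narrow(artists, period="Renaissance")    # When? -> Renaissance
artists = narrow(artists, city="Florence")         # Where? -> Florence
print([r["who"] for r in artists])
```

Three questions reduce four candidates to the two Florentine Renaissance artists; a fourth question (a name) would reduce them to one.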
8. Maps, Projections and Space
Geography is an important tool for navigation both in its literal and metaphorical
applications. In its literal uses, points on a map are linked with information about those
points. As Tufte42 has shown, this has a long tradition in cultures such as Japan. In the
West, these techniques have evolved rapidly through the advent of Geographical
Information Systems (GIS), which are being linked with Global Positioning Systems
(GPS) at the world level and Area Management/Facilities Management (AM/FM) at the
local level such that one can zoom from a view in space to a detail on earth, which may
be static or dynamic.43 Among the more interesting projects in this context are T(erra)Vision of Art+Com44 and Geospace at MIT45.
As one descends from a view in space one can begin in two dimensions and change to
three dimensions as the scale increases.46 The detail on earth may be a building, a room, a
painting, a museum object or some other item. Each item takes one via its author, artists
(Who?), subjects, title, theme (What?), place (Where?) and time (When?) to lists of the
author’s works and related authors’ works, copies, works done on the same and related
themes, other works produced in the same place and other works produced at the same
time. Once again the presentation scheme of the gallery or museum serves as a starting
point for an access strategy and navigation method.
In the above example one is going from a geographical mode to the textual mode of
database materials. The converse is equally interesting. Any place name one encounters
in a list is recognised as such, and its position is located and presented on a
map. This is another aspect of the digital reference room. I am reading a text and come
across the name Uzuncaburc and may not remember exactly where in Turkey this city is.
I highlight the name and am taken west of Tarsus, just north of Silifke, to a spot with
an impressive Greek temple on a hillside. Calling on another source in the digital
reference room, I can get a full description of the town from the equivalent of a Guide
Michelin or Baedeker for Turkey.
Temporal Maps, Boundaries and Buildings
A map is a record of conditions and boundaries at a given time. These boundaries change
with time and are therefore dynamic. Viewed dynamically with a temporal filter the
boundaries of the Roman empire expand and then contract. The provinces of Italy
change. The city limits of Rome increase, recede and then expand anew. In some cases
these boundaries are a matter of controversy. Tibet's view of its boundaries with China
may be very different from China's view of Tibet's boundaries. Hence, for any given
place there are as many versions of the maps of boundaries as there are competing
interpretations. This principle can be applied along the whole gamut of scales from global
views of a country’s boundaries to views of cities, complexes, and even individual
monuments and buildings. The reconstruction of such sites, particularly ancient ones, is
rapidly becoming an industry in itself.47
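The idea of temporal maps with competing interpretations can be sketched as a simple data structure: every boundary record carries a validity interval and the source that asserts it, so that rival versions coexist and a query by place and year returns all of them. The records below are illustrative only.

```python
# A minimal sketch of a temporal map store with competing interpretations.
# Places, intervals and sources are invented for illustration.

boundaries = [
    # (place, valid_from, valid_to, source, description)
    ("Roman Empire", -27, 117, "source A", "expansion to greatest extent"),
    ("Roman Empire", 117, 285, "source A", "gradual contraction"),
    ("Tibet", 1912, 1951, "interpretation A", "de facto independent borders"),
    ("Tibet", 1912, 1951, "interpretation B", "borders as a Chinese region"),
]

def views(place, year):
    """Return every asserted version of a place's boundaries for a given year."""
    return [(src, desc) for p, t0, t1, src, desc in boundaries
            if p == place and t0 <= year < t1]

# Two competing interpretations for the same place and date:
print(views("Tibet", 1930))
```

Applying a temporal filter is then just sliding the year parameter; displaying controversy is simply refusing to collapse the returned list to a single answer.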
Individuals
Such temporal maps can also serve to trace the movements of an individual artist or
scientist from their birthplace, to their places of study and work, including their travels
until the time of their death. In the case of a famous artist such as Leonardo, such
temporal maps can trace the history of ownership of a given painting from the place
where it was originally painted through the various collections in which it was found.
There may be debates about the movements of the artists or the history of one of their
paintings. Hence each interpretation becomes an alternative map. Standard interpretations
are indicated as such. The reliability of interpretations is dealt with under the heading of
quality (see section 7 above).
Concepts
Such maps can equally be applied to concepts such as Romanesque churches or Gothic
architecture, although in this case there are two temporal dimensions to consider. First,
the number of buildings included in the corpus varies from author to author and will tend
to become more detailed as we approach the present. Secondly, these authors will have
different theories concerning the historical development of a given church. For instance,
author A may believe that the sequence of Gothic churches was St. Denis, Chartres,
Naumburg, whereas author B may claim that there was a different sequence in the history
of the buildings.48 These alternatives are again available under lists by author, year, place
etc.
Virtual Spaces
In Antiquity virtual spaces were used as a tool in the art of memory49. One imagined a
house with different rooms, associated different things with each room and then returned
in one's mind's eye to the house and a given room when one wished to remember a given
fact. The imaginary worlds of Virtual Reality Modelling Language (VRML) are
effectively translating this tradition of mental spaces into visual spaces50. A fundamental
difference is that whereas the mental space was private and limited to only one person the
visual space is potentially public and can be shared by any number of persons. There is
much rhetoric about the importance of collaborative environments. The whole question of
when such an environment is more effective and when it is not requires much further
study.
9. Multi-Temporal Views
In traditional knowledge environments we typically assume a single method of time
reckoning. A European expects to find a Gregorian calendar in everyday life. Exceptions
to this rule are Jewish, Muslim, Hindu and other persons who follow different
calendars for religious purposes. Temporal navigation will allow for conversion
between different calendars and chronologies, similar to the conversion tools which already exist
for major areas of physics such as measurement, power, force, light and heat. Hence, if I
am studying a Christian event in 1066 or during the Crusades, I might wish to see what
the equivalent date was in the Muslim calendar and, more significantly, what were the main
events at the time from the viewpoint of their religion or culture.
As in other cases above, lists of the categories of time, hours of the day, days of the week,
months of the year, historical periods, geological periods, all the categories which have
been developed for presentation of our knowledge of temporal events can be used as
means for gaining access to temporal materials.
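The conversion envisaged above is mechanical once dates are reduced to a common day count. The sketch below maps a Gregorian date to a Julian Day Number (JDN) and from there to the arithmetic ("tabular") Islamic calendar; this is one standard reckoning, not the only one. Religious calendars based on lunar observation can differ from it by a day or two, and European dates before 1582 are normally recorded in the Julian, not the Gregorian, calendar.

```python
# A hedged sketch of calendar conversion via Julian Day Numbers.

def gregorian_to_jdn(y, m, d):
    """Gregorian calendar date to Julian Day Number (integer arithmetic)."""
    a = (14 - m) // 12
    y2 = y + 4800 - a
    m2 = m + 12 * a - 3
    return d + (153 * m2 + 2) // 5 + 365 * y2 + y2 // 4 - y2 // 100 + y2 // 400 - 32045

def islamic_to_jdn(y, m, d):
    """Tabular Islamic date to JDN. Civil epoch: 1 Muharram AH 1 = JDN 1948440."""
    return d + (59 * (m - 1) + 1) // 2 + (y - 1) * 354 + (3 + 11 * y) // 30 + 1948439

def jdn_to_islamic(jdn):
    """JDN to tabular Islamic (year, month, day)."""
    y = (30 * (jdn - 1948440) + 10646) // 10631
    x = jdn - 29 - islamic_to_jdn(y, 1, 1)
    m = min(12, (2 * x + 58) // 59 + 1)   # ceil(x / 29.5), capped at month 12
    return y, m, jdn - islamic_to_jdn(y, m, 1) + 1

# 20 October 1066 (Gregorian) falls in late AH 458 on the tabular reckoning:
print(jdn_to_islamic(gregorian_to_jdn(1066, 10, 20)))
```

A digital reference room could wrap such routines so that highlighting any date offers its equivalents, and from there the events recorded under that date in other chronologies.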
10. Integrating Tools
As noted earlier the software industry began by creating a host of tools for writing,
drawing, graphing, designing and editing, each of which appeared in a separate shrink-wrapped
package. Often they were incompatible with one another. More recently a
number of these have been bundled into office suites such that they can be used together.
A problem remains that each of these typically retains its own interface such that every
operation requires a change of style. Within individual products such as Microsoft Word
we find a function called Tools, which includes a series of functions including spelling
and grammar, language, word count, auto summarize, autocorrect and look up reference
(which could be seen as a first step in the direction of a digital reference room). The
challenge is to extend this notion of tools within Word to include the whole spectrum of
tools used in all the other software packages: to have multiple operations within a single
presentation framework; a common look and feel for one’s search, and access, creating,
editing, and presentation tools.51
Such a set of tools can be listed by general categories: mathematical, scientific, simulation,
verbal and visual, each of which breaks down into further functions. For instance, verbal
tools include: class, create, discuss, edit, input, output, search and translate. Input tools
include e-mail, fax, media, scan, telephone, xerox. This is convergence at another level.
Whether all these tools are on one’s local hard drive or on a remote server through a thin
client will not alter this basic principle.
11. Conclusions
Recent advances in technology assume a separation of content and presentation with
respect to data structures. In terms of access, however, there are important reasons for
relating content and presentation (different views, perspectives). The paper has outlined
some fundamental concepts underlying a prototype for a System for Universal Media
Searching (SUMS), namely: learning filters and knowledge contexts; levels of
knowledge; questions as strategy; purpose as orientation; media choices; quality;
quantity; maps, projections and space; multi-temporal views; and
integrating tools. It foresees how such a system, linked with the equivalent of a digital
reference room, will provide the basis for a System for Universal Multimedia Access
(SUMMA). A second part of this paper, included below under the heading of Cultural
Interfaces, will explore recent developments in three-dimensional interfaces and claim
that these are particularly suited for certain tasks such as visualising connections in
conceptual spaces; seeing invisible differences as well as comprehension and prediction
by seeing absence. It will also suggest ways in which two- and three-dimensional
interfaces can be used as complementary methods.
Chapter 4
Cultural Interfaces
1. Introduction
The enormous rise in new information has been paralleled by an equally extraordinary
rise in new methods for understanding that information, new ways of translating data into
information, and information into knowledge. New fields are emerging. For instance, at
the frontiers of science and in the military, scientific visualization is a thriving discipline
with close connections to virtual reality, augmented, enhanced and mixed reality. In
business, database materials are being linked with spreadsheets to produce new three-dimensional visualisations of business statistics (e.g. companies such as Visible
Decisions). In industry, data mining is emerging as an important new field. In the field of
culture, where immediate profit is less obvious, these techniques remain largely
unknown.
Interestingly enough, standard books on human computer interface by Shneiderman52 do
not give a complete picture of techniques now available or in development, nor even
recent books with promising titles.53 There are a few journals, organizations54 and some
conferences55 devoted to the subject of which the present is the most prestigious.
Meanwhile, on the World Wide Web itself, there are a series of useful sites, which offer
the beginnings of a serious overview into these developments. For instance, Martin
Dodge (Centre for Advanced Spatial Analysis, University College, London), has
produced a useful Atlas of Cyberspace,56 with examples of at least four basic map(ping)
techniques, namely, conceptual, geographic, information (landscapes and spaces), and
topology (including ISP and web site). A more thorough survey is provided in an
excellent study by Peter Young, (Computer Science, Durham University), on Three
Dimensional Information Visualisation.57 Here he lists twelve basic techniques: surface
plots, cityscapes, fish-eye views, Benediktine space, perspective walls, cone trees and
cam trees, sphere-visualisation, rooms, emotional icons, self-organising graphs, spatial
arrangement of data and information cube. He also has a very useful list of research
visualization systems. Chris North (University of Maryland at College Park), has also
produced a useful and important Taxonomy of Information Visualization User
Interfaces58 (see Appendix 1. Cf. the list of individuals in Appendix 2). Pat Hanrahan59
(Stanford) has made a taxonomy of information visualization, while Mark Levoy
(Stanford) also has a taxonomy of scientific visualization techniques.60
The significance of emerging interface technologies will be considered, namely, voice
activation, haptic force, mobile and nomadic, video activation, direct brain control, brain
implants, and alternative methods. A problem with such taxonomies and the technologies
which they class, is that they are mainly from the point of view of the technology’s
capabilities, as if we were dealing with solutions looking for a purpose.
In order to arrive at a taxonomy of users’ needs, a deeper understanding of their potential
purposes is required, the whys? This paper offers preliminary thoughts in that direction. It
begins with an outline of five basic functions relating to cultural interfaces, namely,
virtual guides, virtual museums, libraries and spatial navigation, historical virtual
museums, imaginary museums and various kinds of cultural research. The role of
metadata is considered briefly. Particular attention is given to the realms of research,
since it is felt that the new technologies will transform our definitions of knowledge. The
conclusion raises some further questions and challenges.
Bibliographical references to Human Computer Interaction61 specifically with respect to
Graphic User Interfaces (GUI)62 and Network Centred User Interfaces (NUI)63 are
provided in the notes. The appendices provide a taxonomy of information visualization
user interfaces by data type (Appendix 1), a list of individuals and their contributions
(Appendix 2) and a survey of other developments mainly in Canada, Germany, Japan, the
United Kingdom and the United States (Appendix 3).
2. Emerging Interface Technologies
It is generally assumed that the two-dimensional spaces of current computers are largely
a reflection of hardware limitations, which will soon be overcome. Hence there are
numerous initiatives to create three-dimensional spaces. Alternative interfaces and input
devices are also being developed.
Three Dimensional Spaces
Dr. Henry Lieberman (MIT) is exploring the use of very large three-dimensional
navigation spaces, with new techniques which allow “zooming and panning in multiple
translucent layers.”64 Silicon Graphics Inc. (SGI) foresees the use of landscapes.65 Dr.
Stuart Card (Xerox PARC) and his team have been working on a series of tools for
visualizing retrieved information using techniques such as a galaxy representation, spiral
calendar, perspective wall, document lens and cone tree.66 There is an analogous project
at the Gesellschaft für Mathematik und Datenverarbeitung (GMD) in Darmstadt called
Lyberworld.67 This takes a concept, searches for related terms, links these with the
concept in question and presents them spatially in a cone. Alternatively the concept in
question is positioned in the centre while various related terms are placed along the
circumference of a circle where they exercise the equivalent of a centrifugal gravitational
force. If all these surrounding terms are equal in strength they exercise an equal force on
the central concept. As one of the terms becomes more significant it exercises a greater
force on the central concept. Another GMD project, SEPIA, foresees a hypermedia
authoring environment with four concurrent spaces: a content space, planning space,
argumentation space and rhetoric space.68
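The LyberWorld arrangement described above — related terms around a circle, each pulling on the central concept with a force proportional to its relevance — can be illustrated with a toy computation. This is not GMD's actual code; the terms and weights are invented, and the "force" is simply a weighted sum of unit vectors.

```python
# A toy sketch of the relevance-circle idea: terms placed evenly around a
# unit circle pull on the central concept in proportion to their weights.
# Terms and weights are invented for illustration.

import math

def displacement(terms):
    """Sum the pulls of terms placed evenly around a unit circle."""
    n = len(terms)
    dx = dy = 0.0
    for i, (name, weight) in enumerate(terms):
        angle = 2 * math.pi * i / n
        dx += weight * math.cos(angle)
        dy += weight * math.sin(angle)
    return dx, dy

balanced = [("perspective", 1.0), ("projection", 1.0), ("geometry", 1.0), ("optics", 1.0)]
print(displacement(balanced))   # approximately (0, 0): equal forces cancel

skewed = [("perspective", 3.0), ("projection", 1.0), ("geometry", 1.0), ("optics", 1.0)]
print(displacement(skewed))     # pulled towards "perspective" on the x-axis
```

When all surrounding terms are equal in strength the central concept stays put; raising one weight drags it visibly towards that term, which is precisely the visual cue such interfaces exploit.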
At the level of abstract ideas a series of new products are being developed. For instance, a
group at Rensselaer Polytechnic is developing an Information Base Modelling System
(IBMS)69 which allows one to visualize relationships in parallel perspective. At the
Pacific Northwest National Laboratory70 (Richland, Washington) a team led by Renie
McVeety is developing a Spatial Paradigm for Information Retrieval and Explanation
(SPIRE,71 cf. Themescape72), while John Risch is developing Text Data Visualisation
Techniques as part of the Starlight project. At Carnegie Mellon University the
Visualization and Intelligent Interfaces Group73 is creating a System for Automated
Graphics and Explanation (SAGE) and related methods (Sagebrush, Visage74) for
Selective Dynamic Manipulation75 (SDM). At the Sandia National Laboratory, Chuck
Myers, Brian Wylie and a team at the Computational Sciences, Computer Sciences and
Mathematical Center are working on three dimensional Data Analysis76 Data Fusion and
Navigating Science,77 whereby frequency of articles can be visualized as hills in an
information landscape. This is part of their Advanced Data Visualization and Exploration
initiative called EIGEN-VR. Another project at Sandia is an Enterprise Engineering
Viewing Environment78 (EVE). This:
multi-dimensional user-oriented synthetic environment permits components of the
model to be examined, manipulated and assembled into sub-systems and/or the
final structure. A movable clipping plane allows internal structure examination.
Craft wall displays provide schematic or cut-away views of an assembled model.
The Sandia team is also working on Laser Engineered Net Shaping (LENS) and has been
exploring the implications of these techniques for modelling and simulation in
manufacturing and medicine. The implications thereof for culture are no less impressive
as will be suggested in the section on research and knowledge (see below section 7).
Alternative Interfaces and Input Devices
While monitors controlled by a mouse remain the most popular form of navigation at the
moment, a number of other alternatives are being developed. Bill Buxton79 has, for
instance, produced what appears to be the most thorough list of existing input devices.
This includes: aids for the disabled, armatures, bar code readers, boards, desks and pads,
character recognition, chord keyboards, digitizing tablets, eye and head movement
trackers, foot controllers, force feedback ("haptic") devices, game controllers, gloves,
joysticks, keyboards and keypads, lightpens, mice, MIDI controllers and accessories,
miscellaneous, motion capture, speech recognition,80 touch screens, touch tablets and
trackballs. A full assessment of the pros and cons and philosophical implications of all
these devices would be a book in itself. For our purposes, it will suffice to refer to some
of the main alternative human web interaction systems.
Video Interaction
One very innovative technique entails using video cameras to capture human movements
and use these as cues for manipulating virtual environments. For instance, David Rokeby,
in the Very Nervous System, links human movements such as dance to acoustic
environments. As one moves more slowly or quickly, a different range of sounds is
produced.
Vincent John Vincent and the Vivid Group have developed other aspects of this approach
in their Mandela software, such that the video camera and a blue screen essentially allow
the user’s movements in the real world to be transposed to the virtual space within the
screen. This permits a person to interact as a player in a virtual space on screen. For
example, at the Hockey Hall of Fame in Toronto one can stand in a real goal, see oneself
standing in a virtual goal on screen and interact with other virtual players there. This
complex software requires customized programming for each site or special event. By
contrast, the Free Action and Human Object Reactor software of a new company called
Reality Fusion,81 offers simplified versions of this approach allowing persons “to
interact on screen with the body using video cameras”.
Such techniques are potentially of great interest not just for physically challenged
persons. One could imagine a museum or gallery carefully equipped with video cameras
such that one needed only to point to an object, or part of a painting and one’s notebook
computer would give one an explanation at the level desired. Hence, if one had identified
oneself as a grade school child at the outset there would be an elementary explanation,
whereas a research student would be given a much more thorough description.
Voice Activated Interfaces and Visualization Space
In the 1960s there was considerable fanfare about dictation machines which, it was
claimed, would replace the need for secretaries. After more than thirty years of hype, the
first reliable products for the general public have been made available in the past year
through companies such as Dragon Systems82 and IBM. Such systems presently entail
vocabularies of 10,000 to 20,000 words, but will soon expand to vocabularies of 100,000 words
and more. At the same time, researchers such as Mark Lucente83 (IBM Watson), working
in conjunction with MIT have been developing futuristic scenarios whereby a person can
control a wall-sized computer screen using voice commands.
There are related projects elsewhere. The Gesellschaft für Mathematik und
Datenverarbeitung (GMD) has an Institut für Integrierte Publikations und
Informationssysteme (IPSI), which is working on a Co-operative Retrieval Interface
based on Natural Language Acts (CORINNA).84 Such methods are attracting attention
within the cultural community. In the United States, the Information Infrastructure Task
Force (IITF) has created a Linguistic Data Consortium85 to develop a Spoken Natural
Language Interface to Libraries.
Voice activation clearly opens many new possibilities. For instance, many lists are tree-like
hierarchies, which means that choices inevitably require burrowing down many
levels until one has the set of choices one seeks. If these choices are voice activated then
one can go directly to the appropriate branch of a decision tree and skip the levels in
between. The effectiveness of the technique will, however, depend very much on the
situation. In the case of public lectures, voice commands can heighten dramatic effect. In a
classroom, if everyone were talking to their computers, the results might border on chaos.
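The level-skipping advantage can be made concrete with a small sketch. In a menu tree, a mouse-driven visitor must make one choice per level, whereas a spoken label can be matched anywhere in the tree and the intermediate levels skipped. The menu below is invented for illustration.

```python
# A sketch of why voice commands flatten tree navigation: a spoken label
# jumps straight to the matching node instead of descending level by level.
# The menu tree is invented for illustration.

menu = {
    "collections": {
        "paintings": {"renaissance": {}, "baroque": {}},
        "sculpture": {"classical": {}, "modern": {}},
    },
    "services": {"tickets": {}, "tours": {}},
}

def find(tree, label, path=()):
    """Depth-first search for a spoken label; returns the path to it."""
    for key, subtree in tree.items():
        here = path + (key,)
        if key == label:
            return here
        hit = find(subtree, label, here)
        if hit:
            return hit
    return None

# A single utterance replaces three successive menu choices:
print(find(menu, "baroque"))
```

The returned path shows the three levels a single utterance has collapsed; the same lookup could equally serve typed keywords, which is why the gain is in the directness of the command, not in speech as such.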
Meanwhile, there is increasing study of the ways in which visual and auditory cues can
be combined. For instance, a team at the Pacific Northwest National Laboratory86
(Richland, Washington) is working on the Auditory Display (AD) of Information “to take
advantage of known strengths of both visual and auditory perceptual systems, increasing
the user's ability to glean meaning from large amounts of displayed information”:
An Auditory Display Prototype adding non-speech sound to the human-computer
interface opens a new set of challenges in the system's visual design; however,
there are many reasons why one would want to use auditory display. The human
auditory system has acute temporal resolution, a three-dimensional eyes-free
`orienting' capacity, and greater affective response than the visual system.
Especially promising for analysis applications is the natural ability to listen to
many audio streams simultaneously (parallel listening) and the rich quantity of
auditory parameters (pitch, volume, timbre, etc.) that are intuitively apparent to
musicians and non-musicians alike. Current software leaves the potential of audio
at the interface almost completely unused, even while visual displays (subject to
well-understood limitations) are increasingly cramped. Auditory display poses a
way to expand the human-computer interface by taking advantage of innate
properties of the human perceptual system.87
Such combinations of visual and auditory cues are also being studied by Richard L.
McKinley88 (Wright Patterson Air Force Base) in the context of the new field of bio-communications. If we truly learn so much better when we see and hear things in
combination or at least in certain combinations then we clearly need to find ways of
incorporating such experiences within the learning framework.
Haptic Force and Tactile Feedback
Research into artificial arms and limbs, by pioneers such as Professor Steven J.
Jacobsen89 (University of Utah, Salt Lake City) has led to new awareness of haptic force
and tactile feedback as potential aspects of input systems. Corde Lane and Jerry Smith90
have made a useful list of a number of these new devices. Grigore Burdea,91 in a recent
book, offers a very useful survey of this emerging field, showing that present applications
are limited mainly to the military (combat simulation, flight simulator), medicine (eye
surgery and arthroscopy training simulator) and entertainment (virtual motion three-dimensional platform).
In the military, these principles are leading to tele-presence in the sense of tele-manipulation or tele-operation, whereby one can carry out actions at a distance. In the
case of a damaged nuclear reactor, for instance, from a distance a person could safely
control a robot, which would enter a space lethal for humans and do a critical repair. In
medicine, these same principles are leading to tele-surgery.92
In the field of culture such haptic force and tactile feedback mechanisms could well lead
one day to new types of simulated conservation experiments. Before trying to restore the
only extant example of a vase or painting, one creates a model and has various
simulations before attempting to do so with the actual object. Not infrequently, there will
only be one or two experts in the world familiar with the techniques. These could give
tele-demonstrations, which advanced students could then imitate.
In the eighteenth century, the Encyclopédie of Diderot and D’Alembert attempted to
catalogue all the known trades and crafts. Within the next generations it is likely that
these will be recorded in virtual reality complete with haptic simulations. These
techniques will continue to change with time, such that in future one could, for instance,
refer back to how things were being done at the turn of the twentieth century.
Mobile and Nomadic Interfaces
The advent of cellular telephones and Personal Digital Assistants (PDA’s) such as the
Apple Newton or the PalmPilot has introduced the public to the general
idea of mobile communications, an emerging field, which involves most of the major
industry players93. At the research level the Fraunhofer Gesellschaft (Darmstadt) is
working on Mobile Information Visualization,94 which includes Active Multimedia Mail
(Active M3) and Location Information Services (LOCI).
To understand more fully the larger visions underlying mobile communications it is
useful to examine Mark Weiser’s (Xerox PARC) vision of ubiquitous computing.95 This
goes far beyond the idea of simply having a portable phone or computer. Instead of
thinking of the computer as an isolated machine, he sees all the technological functions of
a room integrated by a whole series of co-ordinated gadgets, which are effectively
miniature computers. Employee A, for instance, might always like a big lamp shining at
their desk, have their coffee promptly at 10:30 a.m. each morning and not take calls from
2-3 p.m. because that is a time when the person writes letters. Assuming that the room
could “recognize” the person, say through their badge, all of these technology “decisions”
could be activated automatically, without employee A needing to turn on the big lamp at
8:30, the coffee machine just before 10:30 and turn on the answering machine from 2-3
p.m. In Weiser’s vision this recognition process would continue outside one’s own office.
Hence, if employee A had walked down the hall and was visiting the office of employee
C, the telephone would “know” that it should not ring in their now empty office and ring
instead in C’s office for employee A, using a special ring to link it with A. Such
challenges are leading to an emerging field of adaptive and user modelling.96
In the military, where mobile computing is frequently called nomadic computing, this
vision is taken to greater extremes. Here one of the leading visionaries is a former
director of the Defense Advanced Research Projects Agency (DARPA), Professor Leonard
Kleinrock97 (University of California at Los Angeles). In his vision, a computer should
simply be able to plug into a system without worrying about different voltage (110, 220,
240) or needing new configurations of IP addresses. A soldier on the ground with their
view obstructed by a hill, could communicate with an aircraft overhead, which would
then relay to the soldier a bird’s eye view of the situation. Companies such as Virtual
Vision98 are exploring some of the non-military implications of this approach.
While museums and galleries are far removed from the life-threatening aspects of the
battlefield, one can readily see how the greatly increased interoperability of devices
being developed in a military context, has enormous implications for museums and
galleries. Imagine a notebook computer that “knows” which painting is in front of one,
and thus downloads the appropriate information without needing to be asked. Imagine a
computer that immediately sought the information one might need for a city the moment
one arrived in that city. Hence, on landing in Rome, it would download an appropriate
map of Rome, complete with information about the relevant museums and their
collections.
Direct Brain Control and Brain Implants
Those concerned with universal access for persons with various disabilities99 have
developed various devices such that one can, for instance, control computers simply by
eye movements or other minimal motions.100
A number of projects are moving towards direct brain control whereby intermediary
devices such as a mouse are no longer necessary. In Germany, the main work is occurring
at the International Foundation of Neurobionics in the Nordstadt Hospital (Hanover),101 at
the Institute for Biomedical Technique (St. Ingbert)102 and at the Scientific Medical
Institute of Tübingen University (Reutlingen).103 In Japan, Dr.
Hinori Onishi104 (Technos and Himeji Institute of Technology) has produced a Mind
Control Tool Operating System (MCTOS). In the United States, Masahiro Kahata (New
York) has developed an Interactive Brainwave Visual Analyser (IBVA).105 At the Loma
Linda Medical Center work is being done on controlling computers with neural
signals.106
Dr. Grant McMillan107 (Wright Patterson Airforce Base) has been exploring the
potentials of brain waves (namely, Alpha, Beta, Theta, Delta and Mu) on control
mechanisms. For example, a pilot may be in a flight simulator and find themselves flying
upside down. Every time one thinks, the brain produces electric pulses. By harnessing
these waves a pilot has only to think and the resulting waves can act as a command to
return the simulator to an upright position.
A more futuristic and potentially disturbing trend entails direct brain implants in a
manner foreseen in the film Strange Days. Part seven of a BBC television series Future
Fantastic,108 presented by Gillian Anderson and entitled Brainstorm, discusses the work on
brain implants by Dr. Dick Norman and Dr. Christopher Gallen.109
Given such developments, phrases such as “I see what you mean”, “sharing an idea”,
“look at it from my viewpoint” or “giving someone a piece of one’s mind” may one day
be more literal than we now imagine. As noted above, it is already possible to activate
certain commands simply by eye movement or through bands which measure one’s
thought waves. In future, instead of voice activation, there might well be thought
activation. Dictation would then simply require thinking the words which could
conceivably lead some to forget how to speak properly. Will we be able to let others into
our dreams and daydreams? Such questions lead quickly beyond the scope of this essay
and yet the problems they entail may well become central to interface design sooner than
we think. In order to assess more realistically the potentials of such applications it will be
useful to step back and explore some basic functions of cultural interfaces.
3. Virtual Guides and Physical Museums
At the simplest level, one can imagine a physical museum endowed with different kinds
of virtual guides. Instead of a traditional tour guide trying to shepherd a group of
twenty or thirty visitors through various rooms, standing around a painting and having to
shout to make themselves heard above the noise of the crowd, a visitor could simply rent
a walkman-like device and listen to descriptions of paintings as they go. At the Museum
in Singapore, for instance, such a device is already available. Certain displays and
paintings are specially marked and for these a virtual guided tour is available. In Italy, the
National Research Council (CNR110) is developing a similar device, which will function
much like a push-button dial on a telephone. However, instead of dialing a telephone
number, one will key in the painting or monument number to receive the desired
description. In Germany, the GMD is developing a system called Hyper Interaction
within Physical Space (HIPS),111 which allows visitors to listen to information using
earphones and make notes on a Personal Digital Assistant (PDA). This system will be
tested in the Museo Civico of Siena. In Japan, Rieko Kadobayashi (Kyoto), is working
on a meta-museum which would link visitors with specialists on various topics.112
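The CNR-style device described above amounts to a lookup from a keyed-in number to a stored description. A minimal sketch in Python, in which the exhibit numbers and titles are invented for illustration:

```python
# Hypothetical catalogue mapping exhibit numbers to descriptions;
# the numbers and entries are invented, not drawn from any real system.
DESCRIPTIONS = {
    101: "Room 4: Botticelli, Primavera, tempera on panel, c. 1482.",
    102: "Room 4: Botticelli, The Birth of Venus, tempera on canvas, c. 1485.",
    205: "Room 15: Leonardo da Vinci, Annunciation, oil and tempera, c. 1472.",
}

def describe(keyed_number):
    """Return the description for a keyed-in exhibit number,
    much as one would dial a number on a push-button telephone."""
    return DESCRIPTIONS.get(keyed_number, "No description available for that number.")

print(describe(205))
```

In an on-line version the dictionary would be replaced by a request to a remote catalogue, but the visitor-facing behaviour remains the same.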
It is foreseen that these descriptions will be on-line. Hence, when a tourist arrives in a
new city such as Rome for the first time, they will simply download the appropriate tours
for that city, not unlike the way one now buys cultural videos of the city in question,
except that all this will be on-line over the Internet. Given new electronic billing
procedures, the “rental” of the tour can be arranged to allow only one hearing, or be
limited to a series of hearings, or to tours within a set time-frame of a day, a week or a
month.
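The rental constraints just described, a limited number of hearings, a fixed time-frame, or both, can be sketched as a simple access policy. The class and field names below are invented for illustration; no actual billing system is implied.

```python
from datetime import datetime, timedelta

class TourRental:
    """Hypothetical rental record for an on-line audio tour, limited by
    a maximum number of hearings, an expiry period, or both."""

    def __init__(self, max_hearings=None, valid_for=None, start=None):
        self.max_hearings = max_hearings
        self.hearings_used = 0
        self.start = start or datetime.now()
        self.expires = self.start + valid_for if valid_for else None

    def can_play(self, now=None):
        now = now or datetime.now()
        if self.expires and now > self.expires:
            return False  # the day, week or month has elapsed
        if self.max_hearings is not None and self.hearings_used >= self.max_hearings:
            return False  # the permitted hearings are used up
        return True

    def play(self, now=None):
        if not self.can_play(now):
            raise PermissionError("rental exhausted or expired")
        self.hearings_used += 1

# A single-hearing rental:
single = TourRental(max_hearings=1)
single.play()
print(single.can_play())  # the one permitted hearing is now used up
```

A week-long rental would be created with `valid_for=timedelta(days=7)` and no hearing limit.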
The walkman-like guide is but one possibility. As notebook computers move increasingly
towards electronic versions of notepads113 (e.g. IBM’s CrossPad; cf. the Newton and the
Palmtop), much more than a pleasant description of a painting or monument is possible.
The notepad computer can give a visitor images of related paintings. For instance,
standing in front of Uccello’s Battle of San Romano in the National Gallery of England
(London), the viewer can be reminded exactly how it differs from the two other versions
by Uccello in the Louvre and the Uffizi respectively. More advanced viewers could use
this technology to compare minute differences between originals, versions by students of
the painter, members of their workshop, copies and so on.
Those not able to visit an actual painting would still be able to do such comparative study
from their desktops even if these were far from major centres of culture. To be sure,
seeing the original has been and always shall be preferable to seeing surrogates. But in the past
those in remote areas were typically doomed to seeing nothing other than occasional,
and usually poorly reproduced, images in books. Now at least they will potentially have access
to an enormous array of heritage wherever they happen to be.
For those able to visit the famous museums there are still numerous barriers to seeing the
painting as directly as one might wish. In extreme cases such as the Mona Lisa the work
resides in a cage behind a solid sheet of glass which often refracts light in a way that
hinders careful viewing. In most cases there are ropes or other barriers to keep one from
getting very close to a picture. Even if one could get as close as one would like, many of
the most intriguing aspects of paintings are invisible to the naked eye. Often, for
example, there are subtle variations beneath the surface (pentimenti) as a result of a
painter having changed their mind: changes in the position of a figure, or sometimes its
complete removal. In the past, the only way of studying such changes was by means of x-ray photographs, which were only seldom available to a general viewer. Recently (1997),
a new method called infrared reflectography has allowed one actually to see the different
layers of paint beneath the surface. For instance, in Leonardo da Vinci’s Adoration of the
Magi (Florence, Uffizi) there are elephants, which he drew and were subsequently
painted over. It is likely that future tourists will rent a notepad computer, which allows
them to see all the layers beneath the surface, thus giving new meaning to the concept of
looking closely at pictures.
The role of virtual guides is, of course, not necessarily limited to the interfaces of hand
held devices as one goes around a real museum. They can be adapted for virtual and
imaginary museums. IBM’s pioneering reconstruction of Cluny Abbey had such a virtual
guide or avatar, in the form of a mediaeval nun, who took one around the virtual reality
model of the famous church. If Philippe Quéau’s visions of tele-virtuality come about,
then we shall, in the near future, be able to choose the kind of avatars we wish and have
them take us around whichever monuments may interest us.
In the past, a day at a museum often ended with a visit to the museum shop, where one
bought postcards or posters of the images which one particularly liked. Those available
were typically a small selection of the holdings of a museum, and often it seemed that
these invariably omitted the ones one wanted. In future all the images of a museum can
be available on-line and can be printed on demand. These images will include three-dimensional objects. At the National Research Council of Canada (Ottawa), a laser
camera has been developed which produces fully three-dimensional images, which can be
rotated on screen. Using stereo-lithography, three-dimensional copies of such objects can
be “printed” on demand.
Virtual reality permits one to create full-scale three-dimensional simulations of the
physical world. Augmented reality goes one step further, allowing one to superimpose on
that reconstruction additional information or layers of information. There are a number of
such projects around the world. For instance, at Columbia University, Steve Feiner114 has
been exploring the architectural implications of augmented reality in the context of
various projects.115 One is termed Architectural Anatomy.116 This allows one to view a
virtual reality version of a room and then see the position of all the wires, pipes and other
things hidden behind the walls.
A second is called Urban Anatomy and entails a method aptly termed X-Ray Vision.117
Here one can look at a virtual reality view of a street or a whole neighbourhood and see,
superimposed upon it or, more precisely, underlying it, the various layers of
plumbing, wires and tunnels that one would find in a Geographical Information System
(GIS), except that, in this case, it is as if the earth were fully transparent and one can see
precisely how they are collocated with the actual space. Similar techniques are being
developed by researchers such as Didier Stricker at the Institut für Graphische
Datenverarbeitung118 (IGD, Munich) which is linked with the Fraunhofer Gesellschaft’s
Zentrum für Graphische Datenverarbeitung e.V. (ZGDV, Darmstadt). In this case
augmented reality is being used to superimpose on real landscapes, proposed designs of
bridges and other person-made constructions. Other projects at the same institute are
working on Multimedia Electronic Documents (MEDoc) and Intelligent Online Services
to create Multimedia Extension[s] (MME).
Of even greater direct interest for cultural applications are the research experiments of
Jun Rekimoto at Sony (Tokyo). Using what he terms augmented interaction, he has
created a Computer Augmented Bookshelf,119 with the aid of Navicam. This “is a kind of
palmtop computer, which has a small video camera to detect real-world environments.
This system allows a user to look at the real world with context sensitive information
generated by a computer.” Hence, looking at a shelf of magazines, the system can point
out which ones arrived today, in the last week and so on. A related invention of Dr.
Rekimoto120 for use in theatres is called the Kabuki121 guidance system:
The system supplies the audience with a real time narrative that describes the
drama to allow a better understanding of the action without disturbing overall
appreciation of the drama. Synchronizing the narration with the action is very
important and also very difficult. Currently, narrations are controlled manually,
but it is possible for the system to be automated.
Applied to libraries, versions of such a system could essentially lead a new user through
the complexities of a major collection. In the case of a regular reader, it could remind
them of the location of books previously consulted. The reader might know they were
there last year in June and that the book was somewhere in section C. The system could
then identify the books in question.
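The reader-history lookup described here can be sketched as a filter over consultation records matching whatever partial memories the reader supplies. The records and field names below are invented for illustration.

```python
# Hypothetical consultation history for one library reader.
HISTORY = [
    {"title": "Atlas of Cyberspace", "section": "C", "month": "June", "year": 1999},
    {"title": "Powers of Ten",       "section": "F", "month": "June", "year": 1999},
    {"title": "Virtual Worlds",      "section": "C", "month": "March", "year": 1998},
]

def recall(history, section=None, month=None, year=None):
    """Return the titles of previously consulted books matching the
    reader's partial recollections (e.g. 'section C, last June')."""
    def matches(rec):
        return all([
            section is None or rec["section"] == section,
            month is None or rec["month"] == month,
            year is None or rec["year"] == year,
        ])
    return [rec["title"] for rec in history if matches(rec)]

print(recall(HISTORY, section="C", month="June", year=1999))  # prints ['Atlas of Cyberspace']
```

Each criterion is optional, so the vaguer the memory, the longer the list of candidates returned.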
This approach also introduces new possibilities in terms of browsing. Instead of just
perusing the titles on a shelf, a person could ask their notepad computer for abstracts and
reviews with respect to the book in question using an interface of the System for
Universal Media Searching (SUMS)122. Alternatively, if a person were tele-browsing
from their home computer they could call up these features while sitting at their desk at
home.
Computers are also changing the kinds of information conveyed concerning museum
contents. In the past, museums typically saw themselves as spokespersons for
authoritative knowledge about the works in their collections, thus providing introductory
captions for individual paintings and objects; more extensive descriptions in their general
catalogues and detailed information in their exhibition catalogues, research papers and
other publications. Complementary to this, professional museum guides typically
provided ancillary stories about the paintings. As a result, museum visitors were very
much passive recipients of authoritative, received knowledge. In this approach, the rich,
anecdotal knowledge of the local population or the potential insights of the viewers
themselves played no role.
Recent projects, such as Campiello,123 are attempting to redress this balance. The goal is
to integrate the formal, authoritative views of research conveyed via the museum
with the richness of local knowledge, often based on legend, hearsay or even gossip. A
further goal is to create new kinds of information systems that permit an integration of
both passive use of established knowledge and active entry of a visitor’s own impressions
-- an analogue version of which has been part of experiments at the Art Gallery of
Ontario for some years. This invariably introduces problems of distinguishing among
different levels of authority. Once established, some may very well choose to limit
themselves to the more objective information provided by museums. Others will
welcome the richness provided by a spectrum of different voices. Implicit herein is a
whole new concept of what is “published”: not just the standard, scholarly view, but also
many other views, some of which have no assurance of being reliable. This greater range
of access will invariably call for new filters for different aspects of that range.
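One such filter could simply distinguish annotations by their declared level of authority, letting each visitor choose between the curated view and the full spectrum of voices. The levels and entries below are invented for illustration.

```python
# Illustrative mixed stream of museum annotations; the sources and
# texts are invented, not drawn from Campiello or any real system.
ANNOTATIONS = [
    {"text": "Painted c. 1438-1440 for a Florentine patron.",        "source": "curator"},
    {"text": "Locals say the left-hand horse was repainted twice.",  "source": "visitor"},
    {"text": "One of three panels depicting the battle of 1432.",    "source": "scholar"},
]

AUTHORITATIVE = {"curator", "scholar"}

def filter_by_authority(annotations, authoritative_only=True):
    """Keep only formally authoritative annotations, or pass everything
    through when the visitor welcomes the full range of voices."""
    if not authoritative_only:
        return annotations
    return [a for a in annotations if a["source"] in AUTHORITATIVE]

for note in filter_by_authority(ANNOTATIONS):
    print(note["text"])
```

Finer gradations (e.g. ranking rather than excluding sources) would follow the same pattern with a scoring function in place of the set membership test.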
4. Virtual Museums and Libraries
Complementary to the above scenarios, are cases where virtual museums and libraries
create digital versions of their physical spaces. Perhaps the earliest example of such an
experiment was the Micro-Gallery at the National Gallery of England (London), a small
room within the physical gallery with a handful of computers, where one could view
images of the paintings in the collection and plan a tour in keeping with one’s particular
interests. This approach has since been copied at the National Gallery in Washington and
is being adapted by the Rijksmuseum at Amsterdam.
Some of the early experiments in the field of cultural heritage pursued one metaphor to
the exclusion of others. For instance, the Corbis CD-ROM of the Codex Leicester fixed
on the image of a virtual museum for both paintings and books, such that the manuscripts
appeared on the walls as if they were paintings. While optically appealing, such attempts
were unsatisfactory because they eliminated many of the essential characteristics of
books. Physical books give important clues as to thickness, size, age and so on. Their
surrogates in terms of virtual books also need to convey these characteristics.
Present research is actively engaged in creating such surrogates. For instance, Professor
Mühlhauser (Johanneum Research, Graz), is working on virtual books, which will
indicate their thickness. Dr. Stuart Card and colleagues (Xerox PARC), are exploring the
book metaphor in virtual space and developing ways of moving from representations of
concrete books to visualisations of abstract concepts which they contain. Companies such
as Dynamic Diagrams124 have created a simulation of file cards in axonometric
perspective for the Britannica Online site and for IBM’s web site. IBM (Almaden,125
Visualization Lab) has developed views of pages in parallel perspective as part of their
Visualization Data Explorer, such that one can trace the number of occurrences of a given
term in the course of a text.
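The underlying computation is a per-page occurrence profile of the term, which the visualization then plots alongside the page images. A minimal sketch, with invented sample pages:

```python
def occurrences_per_page(pages, term):
    """Count case-insensitive occurrences of a term on each page, giving
    the per-page profile such a text visualization might display."""
    term = term.lower()
    return [page.lower().count(term) for page in pages]

# Invented sample pages for illustration.
pages = [
    "The museum holds many paintings. Each painting is catalogued.",
    "Sculpture dominates this room; no paintings hang here.",
    "Painting, in the Renaissance, was often a workshop affair.",
]
print(occurrences_per_page(pages, "painting"))  # prints [2, 1, 1]
```

Note that simple substring counting also matches "paintings"; a production system would tokenize and perhaps stem the text first.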
Such virtual museums and libraries can exist at various levels of complexity and their
viewing need not, of course, be limited to some ante-room of the actual museum. As
noted above, a number of museums include Quick-Time Virtual Reality (VR) tours on
CD-ROMs of their collections. Meanwhile, others, such as the Uffizi, have recreated
online a version of their entire museum complete with simple Quick Time VR models of
each room, such that one can look around to each of the walls as if one were there. These
relatively simple images reflect the present day limitations of Internet connectivity, which
will probably be overcome within the next decades.126
At the frontiers, an Italian company, Infobyte, is developing software called Virtual
Exhibitor, which will allow museums to create such virtual galleries with a minimum of
effort. Although this presently requires a Silicon Graphics machine, within two years
regular PCs will be powerful enough to perform the same tasks. This software, along with
SUMS127 are part of the European Commission’s Museums over States in Virtual Culture
(MOSAIC) project in the context of their Trans European Networks (TEN) initiative.
Such virtual visits can go much further than simply visiting the rooms of museums ahead
of time. In Tarkovsky’s famous film (1972) of Stanislaw Lem’s Solaris (1961), the
viewers of Breughel’s Winter Landscape (Vienna, Kunsthistorisches Museum) enter into
the painting and walk around in the landscape. Professor Robert Stone (VR Solutions,
Salford), in a project called Virtual Lowry, uses virtual reality to take viewers through the
spaces of Lowry’s painting. In Infobyte’s version of Raphael’s Stanze, viewers are able to
view the School of Athens and then enter into the space and listen to lectures by famous
ancient philosophers and mathematicians. Museums and galleries typically have one or
more rooms where visitors can watch slide-shows, videos, or attend lectures pertaining to
some aspect of their collections. In future such virtual visits could reasonably occur in
such rooms or halls.
In the context of museums, a series of cultural interfaces thus present themselves. In the
equivalent of an ante-room, viewers are able to prepare for tours using monitors or more
elaborate technology. For on-site tours there will be computer notepads. Monitors linked
to printers in the museum shop will allow one to print postcards and full size posters on
demand. For research purposes, visits will occur sometimes on a computer screen, a large
display panel, an IMAX type screen (which will probably be available on-line in the next
generation128), on planetarium ceilings129 or in entirely immersive Cave Automatic
Virtual Environments (CAVE), where each of the four walls serves as a projection screen
(cf. below in section 7), within the museum or gallery rooms. In future as bandwidth
increases these materials will become available on-line such that visitors (children and
adults alike) can prepare for visits to museums and galleries by studying some of their
highlights or their detailed contents ahead of time, either at school or in the comfort of
their homes.
Museums and galleries have traditionally been famous for their “do not touch” signs.
Many visitors, especially children, want to know how things feel. This is an area where
virtual reality reconstructions of objects, linked with haptic feedback, could be of great
help, thus adding experiences to museum visits which would not be possible in the case
of original objects. Prostheses of sculptures, statues, vases and other objects can provide
visitors with a sense of how they feel without threatening the original pieces.
In most cases, these museum interfaces increase interest in seeing the original. Their
purpose is to prepare us to see the “real” artifacts. Only in the case of special sites such as
the caves at Lascaux or the Tomb of Nefertari, will the new technologies serve as a
substitute for seeing the actual objects in order to protect the originals. By contrast, in the
case of library interfaces, virtual libraries130 will very probably replace many functions of
traditional libraries. Instead of using card catalogues to find a title and then searching the
shelves for related books on a given topic, readers will use on-line catalogues and then do
tele-browsing. Having found a book of interest, they will print it on demand.
The continuing role of libraries will be defined in part by the kind of information being
sought. Much of the time readers are searching for a reference, a fact, a quote or a passage.
Such cases can readily be computerized and replaced by on-line facilities. On the other
hand, historians of palaeography and of the book are frequently concerned with the feel of
the cover, details of the binding or subtle aspects of hand-painted miniatures. In such
cases, electronic facsimiles may help them answer preliminary questions, but consultation
of the actual manuscript or book will remain an important part of their profession which
only libraries can fulfill.
Why even print when one can read on screen? Physiological experiments have shown
that one sees about a third less when light comes to the eye directly from a monitor
screen rather than being reflected from the surface of a page.131 Hence, while computer
monitors are an excellent interim measure, they are not an optimal interface for detailed
cultural research. A new kind of device, similar to a slide or film projector, is needed that
projects images onto a solid surface.
Spatial Navigation
Knowing how to get there, spatial navigation, is one of the fundamental concerns in the
organization and retrieval of all knowledge including culture. The use of maps for this
purpose is almost as old as civilization itself. Since the Renaissance there have been
enormous advances in relating different scales of maps. In the past decades rapid
developments in Geographical Information Systems (GIS) have begun linking these
scales electronically (as vector images). Parallel with this has been a linking of scales of
satellite and aerial photographs (as raster images). The 1990s have seen increasing
translation between raster and vector images such that there is a potential interoperability
between maps and photographs (figure 1).
Projects such as Terravision in the United States and T-Vision in Germany can be seen as
first steps in this direction. This means a potentially seamless integration of all spatial
data such that we could move at will from views in space down to any painting on a wall
or sculpture in a room. In Powers of Ten, a famous film by Charles and Ray Eames, a
viewer was taken along one such visionary set of connections using photographs alone.
Today it is technically feasible to do this interactively with any object in the world.
Scales of Abstract        Scales of Concrete
Map of -                  Satellite Photos of - World
                                              - Continent
                                              - Country
                                              - Province
Plan of -                 Aerial Photos of    - City
                                              - Building (GIS)
                          Quick Time VR of    - Room
                                              - Objects in Room

Figure 1. Basic scheme of scales of abstract images (maps and plans) and concrete
images (satellite photographs, aerial photos and Quick Time VR images).
Implicit in these breakthroughs is a reconciliation of methods which earlier generations
perceived as different and even potentially incompatible. For instance, Gombrich (1975),132
in his Royal Society lecture distinguished between the mirror (photographs) and the
map. Dodge, in his Atlas of Cyberspace, distinguishes between topological maps and the
photographic type maps of information landscapes. While such distinctions may
continue, the breakthroughs mentioned above will increasingly permit us to move
seamlessly between categories, such that we can switch from viewing a topological map
to a topographical map or an aerial photograph. By this same principle it will be possible
to move seamlessly between photographs of physical rooms and Computer Aided Design
(CAD) reconstructions of those same rooms used for Area Management/Facilities
Management (AM/FM). This will bridge many earlier oppositions between abstract and
concrete, making it clear that both can be correlated with the same reality. This has
implications also for temporary and imaginary tours discussed below.
Thus far only isolated aspects of this integrated vision have been adopted in the cultural
context. For instance, city guides on the Internet are beginning to list maps with major
museums and galleries. CD-ROMs of galleries such as the Louvre, Pushkin, or the Uffizi
typically have Quick Time VR views of the individual rooms. The technology exists to
link together all these individual elements.
Virtual reality allows complete reconstructions of objects, archaeological sites and
historical monuments in three-dimensions. Some of the best examples of these
possibilities are being created by Infobyte (Rome). These include reconstructions of the
Upper Church of San Francesco (Assisi), Saint Peter’s Basilica (Vatican) and more
recently the Rooms (Stanze) of Raphael as part of an ongoing project which may one day
recreate the whole of the Vatican museum complex and become integrated with IBM’s
Vatican Library project. The enormous number of such reconstructions, listed in a very
useful book by Maurizio Forte,133 attests that such examples are part of a much larger
phenomenon and that some of the cultural implications are clearly appreciated.
Many of these reconstructions are typically viewed on a computer monitor. Sometimes
glasses are used to permit stereoscopic viewing of the images. Sometimes this effect is
achieved using a Binocular Omni-Oriented Monitor (BOOM). It is of course possible to
make this experience fully immersive by projecting images on all the walls of a room as
in the case of CAVE environments. Alternatively, one could project them onto the hemispherical surface of a planetarium using multiple projectors to create a fully immersive
effect as Infobyte is doing by working in conjunction with the Japanese firm GOTO.
Under discussion is the possibility that Infobyte’s reconstructions could be projected onto
IMAX screens.
One of the leading pioneers in the field of virtual reality is the German Gesellschaft für
Mathematik und Datenverarbeitung (GMD, Sankt Augustin), which has a section on
Visualisation and Media System Design.134 Among its many projects is Virtual Xanthen.
Besides its well-known mediaeval church, Xanthen has a famous Roman archaeological
site. The GMD project transforms a regular projection screen into an entire wall. A
viewer standing in front of the wall sees an entire landscape from a bird’s eye view. The
small platform on which they are standing serves as a navigating instrument, permitting
one to “fly” higher above the landscape or get closer to the earth. This adds a whole new
dimension to virtual visits.
Traditional blue screens permit an actor to stand in front of a screen and be projected into
a scene with a completely different background, as happens, for example, with the
weatherman after the evening news on television. A limitation of this technique is that the
backdrop is two-dimensional whereas the actor typically moves in a three-dimensional
space. The GMD’s Distributed Video Project (DVP) takes these principles considerably
further.135 The blue screen is transformed into a blue room and the actor’s movements in
three-dimensional space are accompanied by three-dimensional perspectival adjustments
in the background. Some of the obvious applications of this new technique are in the field
of television and film production. Suppose for example, that one wished to do a film
about the Sahara desert. Instead of needing to take a crew out to extreme conditions of
the North African desert, one could simply digitize views of the desert and project them
onto the equivalents of four walls and then use the blue room technique for actors to be
virtually transported to the Sahara. The implications of this approach for culture are
considered below in section 5.
Viewpoints
Museums typically give an official mode of presentation (and implicitly an official
interpretation) to a collection of objects. This presentation ranges from a classic ordering
in which objects dominate, such as the Cairo Museum, to newer approaches where the
instruments of interpretation are almost at par with the original objects as in the Heinz
Nixdorf Museum Forum (Paderborn). The collections of a number of museums can be
collected together in on-line networks as is the case, for instance, in the European
Commission’s Museums Over States in Virtual Culture (MOSAIC) project and is also
part of the G7’s vision in pilot project five: Multimedia Access to World Cultural
Heritage. One of the challenges of such projects is to offer a coherent interface for all
museums while at the same time reflecting the idiosyncrasies of individual museums.
The methods outlined above lend themselves not only to entire museums but also to
special exhibitions within museums. This includes blockbuster exhibitions of famous
artists such as Picasso and movements such as the Impressionists. These can re-enact
earlier classic exhibitions and serve as ways of looking at sections of collections through
the eyes of a given school of interpretation, a distinguished scholar or an outstanding
curator. This interpretation could potentially cover the entire history of a field as in the
case of Rene Berger’s World Treasures of Art.136
By the same token, such virtual collections can represent the personal view of a scholar
or an artist on an epoch, an event, a concept or their own work as, for example, with
George Legrady’s Anecdoted Archive from the Cold Wars.137 This approach can be
expanded to include multiple interpretations or viewpoints of a given work. In the case of
Shakespeare, for instance, both IBM138 and MIT139 have produced versions of the plays
which include readings of the same passage by a series of different actors. Implicit in the
development of such multiple viewpoints is a need for more detailed methods to identify
the levels of competency and authority of those involved – a challenge which should be
solved by the emerging Resource Description Format (RDF) of the World Wide Web.
There is also a need to classify different types of virtual museums, a project being
pursued by a group at the Academy of Media Arts (Cologne).140
While such a translation of physical into virtual space constitutes the most obvious
application of the new technologies, it is in a sense the least exciting. For the cultural
field the most fascinating challenges lie in a new series of combinations of real and
virtual, some of which will now be considered.
5. Historical Virtual Museums
In the case of major museums and galleries, one virtual museum will not suffice. The
buildings of the Louvre, for example, have existed on the premises of the present
museum since at least the eleventh century. So one will need historical virtual museums,
reconstructions, which help us to understand how what began as a mediaeval fortress
gradually evolved into one of the world’s great picture galleries. These reconstructions
will trace not only the physical growth of various rooms and galleries but also help to
trace the changing arrangements of the permanent and temporary collections of paintings
therein. Where was Mona Lisa hanging in the eighteenth and nineteenth centuries, as
opposed to today and what do these changing configurations tell us about the history of
exhibitions, taste and so on? Such digital versions of earlier spaces and former versions
will allow simulations of temporal travel.
This principle is also being applied to urban landscapes to create historical virtual cities.
For example, CINECA141, as part of the MOSAIC project, is reconstructing the
mediaeval city of Bologna such that one can trace the growth of the city and changes in
its basic structure in the course of several centuries. This reconstruction, using Virtual
Reality Modelling Language (VRML), allows one to walk through the streets and watch
how they change as if one were in a time machine. Traditionally, some historians have
claimed that Bologna developed an elaborate water and sewage drainage system during
the Middle Ages. Other historians have challenged this. The model is sufficiently detailed
that it can be used to check the validity and veracity of such claims. In such cases
reconstructions of cultural heritage become significant for social and even economic
history.
Similarly, in the case of archaeological sites, this approach offers further possibilities.
Today, a major museum typically has a photograph and/or a small model of the Acropolis
at Athens. Students studying Greek history at school typically only have access to poor
black/white images. Those with Internet access can, of course, consult a number of colour
images through the Perseus Project.142 Much more is technically possible. Most
European countries have their own archaeological schools in Athens and have developed
their own theories about the Acropolis. So one could theoretically call up photographs of
the site as it exists today. One could then call up various historical photographs and
drawings in order to appreciate how it looked before Elgin took his marbles and what the
Greek temple looked like when it became a Muslim mosque, and compare these with how it
looks today. One could then view various reconstructions by Greek, American, British,
French, German and other archaeologists. Instead of just looking at buildings as static
entities, one could examine how they change in the light of different cultural and
scholarly traditions. Such reconstructions could be available on-site using notepad
computers, as well as on-line for study at school and home.
Professor Iwainsky143 has explored further potentials in his reconstruction of the
Pergamon Altar, complementing the virtual reality reconstruction of the altar now in
Berlin with filmed video clips of the original landscape in Pergamon, thus helping
viewers to see how it would have looked in its original context. The GMD’s Distributed
Video Production initiative introduces new techniques to develop this approach. One can,
for instance, theoretically film views of and from the Acropolis, then, using a blue room,
combine this with virtual reality reconstructions of the Parthenon and other buildings
such that one could walk through the buildings as they might have been and have realistic
views of the landscape. Given sufficient bandwidth such reconstructions can be on-line,
permitting students and others around the world to get a realistic sense of sites long
before they have a chance to actually visit the original.
6. Imaginary Virtual Museums
Imaginary museums144 in a true sense will show paintings, sculptures and other artifacts,
which never physically existed together, as coherent collections. Renaissance painters
such as Botticelli, Leonardo and Raphael were typically commissioned to paint works for
a church, monastery or a private patron with the result that their works were dispersed
from the outset. To overcome this, art historians developed the catalogue raisonné, but
the high costs of printing typically meant that these catalogues offered only black-and-white
images of paintings and often poor ones at that. Imaginary museums will allow one to see
the collected paintings of an artist. This can happen in different contexts. In an actual
museum such as the Uffizi, using one’s notebook computer one could stand in front of a
Madonna and Child by Raphael and ask for other images by that painter on the same
theme in other collections. This same principle can be extended to apply to thematic
study also. Standing in front of a Baptism of Christ by Piero della Francesca (London,
National Gallery), one could ask for Baptisms within a given temporal or geographical
framework. Alternatively one will acquire the equivalent of a CD-ROM which allows
one to make these comparisons on one’s computer at home or at school. Increasingly
these materials will be available on-line and future equivalents of hard disks will function
in the manner of personal libraries today. They will have subsets on topics that are of
particular interest to a given scholar, amateur or member of the general public.
Imaginary museums can also offer viewers a whole range of interpretations concerning
the structure and history of paintings. Standing in front of Piero della Francesca’s
Flagellation of Christ (Galleria Nazionale delle Marche, Urbino), one could see the
different interpretations of its perspectival space. Standing in front of Leonardo’s Last
Supper (Santa Maria delle Grazie, Milan), one could compare it with other copies, see
alternative reconstructions of its perspectival space and impressions of what it once
looked like as well as having access to details of restorations concerning individual
figures.
Major collections such as the National Gallery (London) have an Art in Focus series,
which is effectively a sequence of special exhibitions, each offering an in-depth analysis
of special effects or characteristics of a given painting. Today such materials are typically
available in an exhibition catalogue, which soon goes out of print. In future, such
analyses could be available using notebook computers such that one could see such
special characteristics at any time. This will give extended life to the concept of special
exhibitions and indeed change their significance. A series of basic functions for cultural
interfaces thus emerge. A first is virtual guides in physical museums; a second is virtual
museums; a third is virtual historical museums and a fourth is imaginary museums. A
fifth basic function of cultural interfaces entails research. Before exploring this and its
implications for changing definitions of knowledge, a brief note on metadata is necessary.
7. Metadata
In seeking to find data, information and knowledge on the web, system designers and
scholars have devoted increasing attention to metadata145 in the sense of data about
data.146 Initial efforts in this direction were focussed on identifying basic keywords and
minimal descriptors (such as those being developed in the context of the Dublin Core) in
order to permit identification of an article or book. The World Wide Web Consortium
introduced a potential for rating quality through their Platform for Internet Content
Selection (PICS), the scope of which is being extended within their Resource Description
Framework (RDF) to include Intellectual Property Rights and Privacy Management. In a
recent keynote (Brisbane, April 1998), Tim Berners-Lee outlined a considerably more
ambitious goal of adding a veracity parameter to information. His vision is to develop a
global reasoning system within the world wide web, whereby individual knowledge
elements function as distributed axioms, which can be combined to reach truth
statements.
5. Rights (Copyright)
4. Privacy (Agreements)
3. Quality (Ratings, Reviews)
2. Veracity (Truth Value, Axioms)
1. Summaries (Keywords, Descriptors)
0. Contents (Facts, Claims)

Figure 2. Schema showing basic levels of metadata.
Implicit in the above is a new approach to information and knowledge whereby facts and
claims will no longer exist in isolation. Knowledge packages will be surrounded by five
layers of metadata (figure 2). On the basis of such added parameters it will be possible to
search for various subsets of materials. If one were searching for adventure films, one
could ask, for instance, for all five, four, three, two, and one star examples separately or
all films irregardless of their rating and then explore what percentage belong to each of
the categories. One could compare the percentages in other fields. Are there more five
star films, relatively speaking, in the mystery, thriller, childrens or other category?
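This sort of rating-based subsetting can be sketched in a few lines of Python; the film titles, genres and star ratings below are invented purely for illustration.

```python
# Hypothetical catalogue entries (title, genre, stars) -- invented for illustration.
films = [
    ("Film A", "adventure", 5),
    ("Film B", "adventure", 3),
    ("Film C", "mystery", 5),
    ("Film D", "mystery", 5),
    ("Film E", "thriller", 2),
    ("Film F", "adventure", 5),
]

def percent_with_stars(films, genre, stars=5):
    """Percentage of a genre's films carrying a given star rating."""
    in_genre = [f for f in films if f[1] == genre]
    if not in_genre:
        return 0.0
    matching = sum(1 for f in in_genre if f[2] == stars)
    return 100.0 * matching / len(in_genre)
```

One could then compare such percentages across genres, as suggested above, to ask whether five-star films cluster in one category rather than another.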
One could also begin mapping relationships of texts to commentaries about texts,
statements of objective truth versus subjective claims about those truths. Levels of
certainty might be depicted in different diaphanous colours, such that one could visualize
a given truth with the claims surrounding it receding further as their levels of
certainty decrease. Not all materials will be certified. So one can choose whether one
wishes only certified, officially sanctioned, materials or all materials pertaining to a given
book, painting or other cultural object. A cultural object will no longer be a single entity;
it will have associated with it a series of attributes defining not only its physical
characteristics but also its quality. In terms of object oriented programming there will be
objects of objects.
It is important to recognize that the increasing importance of metadata is part of a larger
shift whereby there is a separation between knowledge and views of that knowledge. The
1960s, for example, saw the rise of databases. These allowed one to enter basic facts into
fields of information, which could then be called up in a number of different ways as
reports without needing to alter the original facts. The rise of Standard Generalized
Markup Language (SGML) took this approach considerably further by effectively
devising one set of tags for the original content and a second set of tags for ways of
viewing that content. To put it slightly differently, an SGML knowledge object has a
“content” section and a “views” section. The evolving Extensible Markup Language
(XML)147 uses exactly the same principle with the exception that it is designed for less
complex situations than SGML and accordingly is easier to use. In the case of both
SGML and XML one can change or add to the “views” section without altering the basic
content. This is fundamentally different from the print world where the content and
layout become inextricably mixed to the extent that any decision to alter layout requires
all the work of a new edition.
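The separation of content from views that SGML and XML introduce can be illustrated with a minimal sketch in Python: one fragment of tagged content is rendered in two different ways without the underlying facts ever being altered. The element names are invented for illustration and belong to no actual document type definition; the date is sample data.

```python
import xml.etree.ElementTree as ET

# The "content" section: facts tagged once. Element names are invented.
content = ET.fromstring(
    "<painting>"
    "<artist>Botticelli</artist>"
    "<title>Birth of Venus</title>"
    "<year>1486</year>"
    "</painting>"
)

# Two "views" of the same content; adding or changing a view
# never requires touching the content itself.
def citation_view(p):
    return "{}, {} ({})".format(
        p.findtext("artist"), p.findtext("title"), p.findtext("year"))

def label_view(p):
    return "{}\n{}, {}".format(
        p.findtext("title"), p.findtext("artist"), p.findtext("year"))
```

In the print world, by contrast, changing either "view" would have required a new edition of the content itself.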
Figure 3. Systematic approach to a museum or library, beginning with a groundplan, a
view of a room, a wall, and finally a painting or book as if in file-card form with basic
descriptive information.
Figure 4. a) Visualization of editions as a list, as a graph, as a circle or as an
undulating inverted cone. b) Visualization of related terms as a list, as a series of
surrounding terms (cf. HotSauce), as a series of intersecting circles or as other undulating
inverted cones.
Figure 5. Lists of editions of a work, reviews, and commentaries thereof, translated into
spatial locations and then into inverted cones. In this approach unimportant writings
become narrow cylinders and influential works become broad cones.
Metadata, in the sense that it is being used by the World Wide Web Consortium, takes the
basic approach of databases and SGML another significant step forward. It continues the
distinction between content and views, but adds to the content section basic parameters
concerning veracity and value. Facts remain constant. The ways of viewing them, using
them, presenting them change. This opens the way to reusable knowledge in a new sense.
The same repository can be used to tailor views for a beginner and an expert, without
needing to rewrite the repository each time.
Marshall McLuhan characterized the history of the West as a constant shift in emphases
among the three elements of the trivium: grammar (substance or structure), dialectic
(logic) and rhetoric (effect). Does the multimedia world of metadata mark a return to the
structural dimensions of grammar or does it mark an entirely new chapter in the evolution
of knowledge? One thing is certain. As will become clear in the pages that follow,
metadata is changing the nature of knowledge and the horizons of study.
8. Research and Knowledge
Quick Reference
With respect to research, appropriate interfaces will also depend very much on the
purpose at hand. Often a reader or visitor is interested in questions of Who?, What? and
Where? In such cases requiring quick access to basic facts, they will need on-line access
to the digital reference room outlined in part one of this paper. At the simplest level, this
will give them factual information about persons, places and things. Standing in front of
Botticelli’s Birth of Venus (Uffizi, Florence), or looking at an image of that painting on
the Internet, a viewer might want to have biographical facts. This can range from wanting
the most elementary listing of when he was born and when he died, through a one-page
synopsis of his life, to reading a standard biography. Standing in front of Leonardo’s
Ginevra de’ Benci (National Gallery, Washington), one could ask about the history of
previous owners in order to learn how it got from Florence via Liechtenstein to
Washington.
Today paintings are in galleries. Cultural artifacts are in museums. What we know about
those paintings and artifacts is usually in articles and books in libraries, particularly in the
reference section of libraries which contain terms (classification systems), definitions
(dictionaries), explanations (encyclopaedias), titles (bibliographies, catalogues), and
partial contents (abstracts, reviews). Given a universally accessible digital reference
room, viewers and readers will readily be able to find definitions and explanations
without having to run to dictionaries, encyclopaedias, bibliographies and the like. Such
searches can readily happen on a regular PC or a portable notebook computer. In these
cases simple lists and paragraphs in a coherent interface will usually suffice.148
Objects and Subjects
This type of quick reference, or hunting after basic facts about objects and subjects, is
most elementary level of conceptual navigation which interests us. It sounds very
straightforward and yet to achieve even this will require an enormous reorganization of
knowledge. It would, for instance, be highly inefficient, and very time consuming, if
everyone who wanted to know about an artist such as Botticelli had to search every
database around the world. Even searching through every database relating to art would
take too long.
Using the principles of object-oriented programming, we need to develop objects of
objects, a richer kind of metadata, which will contain key information concerning
them.149 In the case of Botticelli, for instance, this will include his variant names, his date
of birth and death, where he worked, and a list of all his drawings, paintings and other
works. This will build on authority files for artists’ names, such as the Thieme-Becker
and the Allgemeines Künstlerlexikon (AKL),150 and those of museums and libraries. In addition to this
key information about his primary works there will be a reference to all secondary
sources about Botticelli, in books, refereed journals and elsewhere. To achieve this will
require the development of individualized agents which seek out extant materials and
gather them, the results then being vetted by the leading experts on that artist, author or individual. The
net result of such efforts will be a Botticelli “object,” with all the metadata pertaining to a
traditional “complete works.”
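Such an artist “object of objects” might be sketched as a simple data structure; the field names and sample values below are assumptions for illustration, not an actual authority record.

```python
from dataclasses import dataclass, field

@dataclass
class ArtistObject:
    """Minimal 'knowledge object' for an artist: authority data plus
    pointers to primary works and to the secondary literature."""
    name: str
    variant_names: list = field(default_factory=list)
    born: int = 0
    died: int = 0
    works: list = field(default_factory=list)      # primary works
    secondary: list = field(default_factory=list)  # books, refereed articles

# Sample instance; the lists are deliberately abbreviated.
botticelli = ArtistObject(
    name="Sandro Botticelli",
    variant_names=["Alessandro di Mariano Filipepi"],
    born=1445, died=1510,
    works=["Adoration of the Magi", "Primavera", "Birth of Venus"],
)
```

Combining such objects, a virtual-museum interface could answer a question such as “other paintings by this artist” without searching every database in the world.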
In the case of a given painting this will include preparatory drawings, versions by
students, members of the workshop, school and followers; copies, different owners,
restorers and details of their restorations; their locations and dates. In the case of a
manuscript this will entail all copies, all book versions, quantities published, editions,
locations, and dates.151
Once these “knowledge objects” of artists, books, paintings, sculptures and other cultural
heritage exist, they can be combined in new ways. If, for example, one were beginning
from the context of a virtual museum, one might zoom in from a view of the world, to the
continent of Europe, to the country of Italy, the city of Florence, the ground plan of the
Uffizi, to a wall in the Botticelli room, and focus on his Adoration of the Magi (figure
3a). This would bring up basic information about the painting. One could then choose to
see preparatory drawings, copies, other versions, other paintings by the same artist, or
other paintings on the same theme by different artists.
Three-dimensional navigation spaces are particularly valuable for such contextualisation
of knowledge. A two-dimensional title or frontispiece of a book tells us nothing about its
size. A three-dimensional scale rendering helps us to recognize at once whether it is a
pocket sized octavo or a folio sized book: a slender pamphlet or a hefty tome152. Hence,
having chosen a title one will go to a visual image (reconstruction) of the book; see, via
the map function, where it appears on the shelf of a library, do virtual browsing of the
titles in its vicinity or wander elsewhere in the virtual stacks (cf. figure 3b)153.
In the case of such a book, one might choose to see various editions in a chronological
list. One could then choose to see the same list of editions as a graph showing
fluctuations over time. Alternatively one might visualize the original edition as a small
circle linked with successive editions in the form of an inverted cone which sometimes
contracts and then expands further (figure 4a). Or one might begin with all the keywords
related to a given edition, render these spatially either as a series of concepts surrounding
the original, or as circles intersecting a central circle in the manner of a Venn diagram,
each of which can in turn be visualized as inverted cones (figure 4b).
Value
In the excursus on metadata we mentioned a trend toward creating objects of objects,
which will describe their physical characteristics and their quality. There will be
numerous ways of visualizing quality. An author’s primary literature could define a
circle, surrounded by a larger circle of secondary literature. Influential authors would
have large surrounding circles. Unimportant authors would have only their initial circle: a
visualization of “no comment.” Alternatively, one could have a small circle for an
edition, surrounded by larger circles for reviews, commentaries and citations: effectively
a section of the cone in figure 5. In some cases there will be specific ratings such that one
can identify specifically the grade or rating and not just the popularity of a book, painting
or cultural artifact. Not all materials on the Internet will be certified. So one can visualise
an object as a circle, surrounded by a certified circle and a larger uncertified circle.
Combinations of these approaches are possible, such that one might discern which
portions of reviews, commentaries and citations are certified or uncertified.
In itself the creation of such circles may seem a rather fatuous exercise. If, however, they
are produced on a specific scale and applied systematically to a subject, a field, a region,
a country, a period, a movement or a style, or combinations of these, then the approach
can be very useful in helping us to see new patterns of development. What correlations
are there between the most influential books and the most important books? Does the
production of important books in a field change over time? Does it shift from country to
country? Can the reasons for the shift be determined?
The attentive reader will have perceived that the systematic approach here outlined does
not pretend that computers will use artificial intelligence (AI) or other algorithms to
arrive at new insights in isolation. Rather the claim is that their systematic treatment of
data and information will expand the range of questions which can reasonably be
answered. By providing comprehensive treatment of the four basic questions: Who?,
What?, Where?, and When?, they will prepare the ground for new answers to questions
of How and Why? In this sense computers will help in intelligence augmentation (IA
rather than AI in the senses of Doug Engelbart and Fred Brooks).
Transformation of Knowledge
This quest to achieve objects of objects which contain information concerning all the
physical and qualitative characteristics of the original is analogous to the quest for
determining the structure of DNA and the mapping of nature in the human genome
project. It is much more than just another cataloguing project. It is a quest which will
transform the very meaning of knowledge.
On a seemingly quite different front, companies such as Autodesk have extended the
notion of object-oriented programming to the building blocks of the person-made world
through what they term industry foundation classes. A door is now treated as a dynamic
object which contains all the information pertaining to doors in different contexts. Hence
if one chooses a door for a fifty-storey skyscraper, the door object will automatically
acquire certain characteristics which are very different from those of a door for a cottage
or a factory warehouse. This is leading to a revolution in architectural practice because it
means that architects designing buildings will automatically have at their disposal the
"appropriate" dimensions and characteristics of the door, window or other architectural
building block which concerns them. There is a danger that this could lead to stereotyped
notions of a door or window, a McWorld effect, whereby buildings in one country are
effectively copies of those in other countries, and travel loses its attractions because
everywhere appears the same.
This same technology can be used with very different consequences if one extends the
concept of foundation classes to include cultural and historical dimensions. If this occurs,
an architect in Nepal wishing to build a door, in addition to the universal principles of
construction applying to such objects, will be informed about the particular
characteristics of Nepalese doors, perhaps even of the distinctions between doors in
Kathmandu or those near Annapurna. Similarly an Italian restorer will be informed about
the particular characteristics of doors in Lucca in the fifteenth century.
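The idea of a door object that adapts to both building type and cultural context can be sketched as follows; the dimensions, ratings and regional variants are invented placeholders and do not reflect actual Industry Foundation Classes definitions.

```python
def door_spec(building_type, region=None):
    """Door characteristics adapted to context; all values are illustrative."""
    base = {
        "cottage":    {"width_cm": 80,  "fire_rating": None},
        "skyscraper": {"width_cm": 90,  "fire_rating": "EI30"},
        "warehouse":  {"width_cm": 120, "fire_rating": "EI60"},
    }[building_type]
    spec = dict(base)  # copy so the universal template is never mutated
    # The cultural layer: regional variants extend the universal principles.
    regional = {
        "Nepal":     {"ornament": "carved lintel"},
        "Lucca-15c": {"ornament": "panelled oak"},
    }
    if region in regional:
        spec.update(regional[region])
    return spec
```

An architect choosing a skyscraper door automatically inherits the appropriate characteristics, while adding the region supplies the local dimension argued for above.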
All this may seem exaggerated and unnecessary. During the Second World War, however,
some of the key historical houses with elaborate ornamental carvings in Hildesheim (e.g.
the Knochenhaueramtshaus or Butchers’ Guild Hall) were bombed and it
took a small group of carpenters several decades to reconstruct the original beam by
beam, carving by carving. They did so on the basis of detailed records (photographs,
drawings etc.). If this knowledge is included in the cultural object-file of Hildesheim
doors, windows and houses, then rebuilding such historical treasures will be much
simpler in future.
At stake is something much more than an electronic memory of cultural artefacts, which
would serve as a new type of insurance against disaster. The richest cultures are not
static. They change with time gradually transforming their local repertoire, often in
combination with motifs from other cultures. The Romanesque churches of Northern
Germany adopted lions from Italy for their entrances. The church of San Marco in Venice
integrated Byzantine, Greek and local motifs. The architecture of Palermo created a
synthesis of Byzantine, Norman, Arabic and Jewish motifs. The architects in Toledo and
at Las Huelgas near Burgos created their own synthesis of Jewish, Arabic and Christian
motifs.154 A comprehensive corpus of variants in local heritage thus leads to much more
than a glorification of local eccentricities and provincialism. It can prove an inspiration to
multi-culturalism in its deepest sense.
The same principle, which applies to doors and windows, applies equally to ornament,
decoration, various objects such as tables and stools and different building types: temples,
colosseums, monasteries, cathedrals, and churches. This transforms the meaning of
knowledge. According to Plato, knowledge of a temple was to recognize in some
particular manifestation the “idea” of some ideal temple. Knowledge did not require
knowing the exact dimensions of the Parthenon or any other temple. According to
Aristotle knowledge lay in the precise characteristics of a temple such as the Parthenon.
Plato was interested in a universal concept, Aristotle in a particular example. Their
mediaeval successors became embroiled in philosophical debates whether knowledge lay
in universals or particulars. Even in schoolbooks today155 this problem has not been
resolved. History texts typically refer to one example, the Colosseum in Rome, as if it
were the only example, as if the particular were synonymous with the universal class.
The new object oriented approach to knowledge is very different from all of these
precedents. For a “temple object” will not only contain within itself the precise
description of the Parthenon, but also the exact descriptions of all the other temples
including those at Segesta, Selinunte, Agrigento, and Syracuse in Sicily, at Paestum and
Rome in Italy, at Ephesus, Miletus, and Uzuncaburc in Turkey and elsewhere.156 This
new definition of knowledge resolves the age-old opposition between universal and
particular, for it can describe the essential characteristics which all the temples have in
common (universal) and yet render faithfully all their individual differences (particular).
Knowledge now lies in a combination of the two. The Platonic idea of a temple reduced
individual complexity to common characteristics, destroyed individual differences and
thereby the notion of uniqueness. The modern “temple object” centres knowledge on the
fundamental significance of differences. Thus temples gain universal value through the
richness of their local variation. The universal becomes a key to particular expression.
Knowledge lies not in recognizing how good a copy it is but rather in how well it has
created a variation on the theme.
Spatial
The Colosseums in Rome (Italy) and El-Djem (Tunisia) were built in the same style.
Nonetheless their effect is profoundly different due to their spatial settings, one in the
midst of the Roman Forum, the other in a near desert setting. Hence knowledge of spatial
location, the co-ordinates familiar to Geographical Information Systems (GIS) and
Automated Mapping/Facilities Management (AM/FM), will also be an essential part of a
“colosseum (knowledge) object”.
Temporal
The Colosseum in Rome was built at a given time. It was not, however, a static building,
in the sense that it remained exactly the same in the course of the centuries. We know, for
instance, that a large portion of it was dismantled in the Middle Ages to construct other
buildings. Hence a “colosseum (knowledge) object” will need to include all our
knowledge about changes over time: i.e. its complete history, including all restorations
and interventions for its conservation. Knowledge now includes time as well as space.
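A “colosseum (knowledge) object” combining spatial co-ordinates with a temporal record could be sketched like this; the structure is an assumption for illustration, the co-ordinates are approximate and the dates rounded.

```python
from dataclasses import dataclass, field

@dataclass
class MonumentObject:
    """Knowledge object holding both space (GIS-style co-ordinates)
    and time (a history of construction and later interventions)."""
    name: str
    lat: float
    lon: float
    history: list = field(default_factory=list)  # (year, event) pairs

colosseum = MonumentObject(
    name="Colosseum, Rome",
    lat=41.8902, lon=12.4922,  # approximate co-ordinates
    history=[(80, "amphitheatre completed"),
             (1349, "south side collapsed; stone reused for other buildings")],
)

def events_before(monument, year):
    """Everything recorded as happening to the monument before a given year."""
    return [event for y, event in monument.history if y < year]
```

Querying the object by date then yields a view of the building at any moment of its history, rather than as a static entity.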
A New Encyclopaedia
Some will say that this new approach to knowledge is merely a revival of an age-old
encyclopaedic tradition. This is potentially misleading because the encyclopaedic
tradition itself has undergone fundamental shifts in its goals. Aristotle was encyclopaedic
but his quest was to create summaries, which were subsets of the originals such that the
originals could be abandoned. That is why we have what Aristotle said about many of the
ancient authors rather than the ancient authors themselves. Their works were allowed to
be lost because it was assumed that the Aristotelian summary replaced them. Vitruvius
was also encyclopaedic in this sense, except here there was an added goal of making the
subset readily memorizable, an aide-mémoire, rather than creating a record of all that
existed.
Such decisions were not only guided by profound philosophical reflections. They were
partly pragmatic reflections of the available storage media. If knowledge is writ on stone
tablets, the burden of knowledge is truly great. The advent of parchment, manuscripts and
then printing expanded those horizons considerably. Ironically the same Renaissance
which introduced the medium of printing, introduced also a tendency to use media to
separate knowledge157: books were put into libraries, pictures into galleries, drawings into
drawing cabinets (cabinets de dessins), engravings into print rooms (Kupferstichkabinett),
maps into map rooms and cultural objects into museums. Knowledge was
decontextualized.
The 18th century Encyclopédistes re-introduced a vision of encompassing all knowledge.
But as the growth of knowledge continued to accelerate, even the organizers of the
Encyclopaedia Britannica decided, after 1911, to abandon the quest for universality.
Recent innovations in terms of Macropaedia and Micropaedia sections have neither
re-contextualized knowledge nor re-introduced a quest for an inclusive encyclopaedic
approach.
The new “knowledge objects” distinguish themselves from earlier efforts in several ways.
First, computers remove both the barriers of storage capacity and any need to separate
knowledge on the basis of media. Second, the “knowledge objects” require a new kind of
encyclopaedic approach: one that is globally inclusive of all the variants rather than
merely a local summary thereof. This will change the meaning of “objects” inasmuch as
we shall have collected in one place all quantitative and qualitative information about an
object.
Multiple Views
In the past scholars typically spent a majority of their time trying to locate basic facts
about an object: Who painted it? Where was it made? When was it finished? For the next
few generations scholars will likely be pre-occupied with assuring that the new
“knowledge objects” are reliable and as comprehensive as they claim. Once all such
information is at our fingertips, will scholars find themselves redundant in the face of
automation as in the case of many traditional manufacturing jobs? The answer is
definitely not. It is simply that the tasks will be different. In the Middle Ages it took one
hundred monks ten years of full-time effort to create an index to the works of Saint
Thomas Aquinas. Today that same task can theoretically be performed in minutes by a
computer. Having an index spares us the need of reading the complete works every time
we are looking for some particular thought, argument, or fact. But this does not remove
the challenge of choosing thoughts, arguments and facts and deciding how or why to
apply them. The process of thinking remains.
Once the basic facts have been arranged, scholars will find themselves devoting more
attention to presentation. Professors will become view masters in a new sense. Their
challenge may lie less in conveying basic facts, than in teaching students to look at facts
and concepts in new ways: as a list, a chart, on a map or more abstractly. To take a simple
example, any book has a series of key words associated with it, which provides us with
some clues concerning its scope. Few keywords indicate a specialized title. Many
keywords suggest a title with many applications. Such keywords can be visualized as
sides of regular and semi-regular shapes and solids. In this configuration, specialized
authors produce points, lines and triangles. Generalists produce increasingly many-sided
solids. This introduces new possibilities for looking at the authors in a given field, or of a
certain distinction. Were Nobel laureates in 1908 mainly generalists or specialists? Were
there significant differences between the arts and science? Did geography play an
obvious role? For instance, were the Nobel laureates from Europe more specialized than
those from America, or conversely? Did this pattern change over time? Implicit in such
activities is a shift from questions about substance (Who? What? Where? When?) to
those of function (How?) and purpose (Why?). The old notion of scholars as philosophers
may witness a revival.
Presentation is much more than deciding whether to use overheads or slides, whiteboards
or virtual reality. It is ultimately concerned with new methods of structuring knowledge,
not just individual objects, but also the larger frameworks into which they can be
arranged. This is the terra incognita of future scholarship. Knowledge organization will
become as important as knowledge acquisition and raise many new challenges for
cultural interfaces.
One aspect of this structuration process will lie in integrating hitherto disparate
knowledge elements. For instance, to continue with our earlier example, the “colosseum
(knowledge) object” will entail all the physical characteristics of the colosseum at Rome
and those of all the other colosseums at Arles, Nîmes, El-Djem, Pula and elsewhere in the
empire. Using a map one will be able to see where all these places are. Linking a time
line with this map one will be able to trace the order of their appearance. Were there close
connections between the rise of colosseums and theatres? If so were these connections
geographical and chronological or only one of the above? Or were the rise of colosseums
and theatres two quite distinct phenomena? Similar questions could be posed with respect
to the rise of monasteries, churches and cathedrals.
One can imagine scholars devoting their energies to posing what they think could be
fruitful or significant questions. One can also imagine a future generation of agents
automatically generating questions and comparisons and only reporting on cases where
some significant correlation emerged. In which case the challenge of scholarship would
focus less on finding patterns and more on explaining their significance. There would be
various levels of patterns, some local, others regional or national, a few international or
even global. These patterns will lead to new attempts to characterize periods, movements
and styles.
How will these differ from the periods of traditional historical studies? Because they
encompass a much larger sample of evidence they will frequently come to very different
conclusions. By way of illustration, it is useful to cite the case of the Renaissance. In
traditional studies, ancient Greece marked a period of enlightenment. During the “Dark
Ages”, the story goes, the lights went out. Then around 1400, someone found the light
switch and there was a Renaissance, literally a rebirth. The light switch, we are told, lay in
the rejection of the Dark Ages and a return to the wondrous insights of Antiquity. Thus
far the received wisdom.
In this stereotypical view, Renaissance art is usually reduced to the achievements of a
handful of remarkable painters including Botticelli, Leonardo, Raphael and
Michelangelo, and museums such as the Uffizi and San Marco are of central importance.
And although passing reference is made to the importance of Assisi and the Arena Chapel
of Giotto, standard books tend to ignore the predominant role played by fresco cycles in
all the major churches of the Renaissance, not only in Florence and Venice, but equally in
Castiglione d’Olona, Milan, Montefalco, Perugia, San Gimignano, Sansepolcro, Siena,
Rome and lesser known centres such as Atri. A careful examination of these cycles
reveals that they focussed on the lives of the saints, from Christ’s contemporaries such as
Saint John the Baptist and the Apostles (such as Peter and Paul), through the early
martyrs (Stephen and Lawrence) and church fathers (Augustine, Jerome), right through the
Middle Ages (including more recent saints such as Thomas Aquinas and Saint Francis of
Assisi). Seen as a whole this corpus points to some very different conclusions about what
was happening in the Renaissance. The artists of the Renaissance discovered an
uninterrupted continuity between the time of Christ and their own period provided by the
lives of the saints. Far from rejecting entirely a so-called Dark Ages, one could argue that
they recognized for the first time its essential role. In short the entire history of what
happened in the Renaissance needs to be rewritten.
In the longer term there is a larger challenge of finding ways to show how the complexity
of cultural activities in the period 1300-1600 could be so reduced as to make the myth
about rejecting the Middle Ages a temporarily convincing misrepresentation of the truth.
This is another manifestation of the relationship between content and views. Let us posit,
hypothetically, that there were 10,000 buildings of cultural interest during the
Renaissance. Every major art historian such as a Berenson, a Chastel, or a Gombrich
focusses on some subset thereof. So we should have interfaces which show us how
schools of scholarship in a given country bring some aspects into focus while at the
same time obscuring many others. We need to make visible the way secondary
literature functions as a prism, explaining some parts while leading us to overlook the
complexity of the rest.
Figure 6. A term in a classification system shown as a two-dimensional list. This list is
folded ninety degrees to a plane at right angles to the screen. Another classification
system is introduced in a plane parallel to the first. A third classification system is
introduced in like manner. This is used to visualize links between the term in the three
systems.
Classification
An important dimension of knowledge structuration lies in classification systems. The
major international systems are relatively few. They include Bliss, Dewey, Göttingen,
Library of Congress, Ranganathan, Riders International for books as well as the Art and
Architectural Thesaurus and Iconclass for art. To a certain extent these reflect national
differences. The United States has the Library of Congress and Dewey. Germany has the
Göttingen system and others. India has the Ranganathan system. Among lesser
systems, or systems specialized in some particular field, there are at least 950 others. Each
of these presents different ways of classing the world, with different branches, facets,
alternative associations, different broader and narrower terms. These systems also change
over time. Ranganathan initially had little about art compared to western systems, yet a
great deal about consciousness and higher states of awareness.
When we find a cultural object it can be classed in many ways. Traditionally museums
have developed one way of classing, art galleries another and libraries another again. Yet
a given painting may well represent an object which exists physically in a museum and
about which there is written material in a library. This is why we need meta-data and
meta-databases in order to discover the commonalities required to create integrated
knowledge objects.
To study a cultural object systematically we need authority lists that give its standard
names and identify their variants. Classification systems reveal how that
object has been classed as a subject, topic, theme, field, discipline and so on. Such
systems also reveal the hierarchies or trees within which objects have been placed. These
structures change with time. So we need ways of visualizing equivalences either
geographically, chronologically or both. We might begin, for example (figure 6), by
treating the term on the screen as a plane, making this transparent and rotating it downwards by
90 degrees such that it becomes the top surface of a transparent box. The x-axis now
becomes the time-axis such that we can literally trace the connections between various
subjects. Such an example outlines a means of moving seamlessly from two-dimensional
lists to three-dimensional conceptual maps of subjects with their related topics and also
offers a whole new way of seeing interdisciplinarity.
One of the challenges in moving between different cultures lies in knowing where to look
for equivalent terms. So a person from Canada familiar with the Library of Congress
(LC) might choose a series of Library of Congress Subjects. If they were interested in
India, the system would then find the closest related terms in Ranganathan and use these
to search other catalogues and lists. At a next stage this set of terms can be expanded into
a cluster of closely related terms, which can then be used for searching.
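The cross-scheme search just described can be sketched in a few lines of code. The equivalence table and all codes below are invented placeholders, not actual Library of Congress, Dewey or Colon Classification entries:

```python
# Illustrative sketch: a term chosen in one classification scheme is mapped
# to its closest equivalents in other schemes, and the resulting cluster is
# used to search several catalogues at once. All codes are hypothetical.

EQUIVALENTS = {
    ("LC", "Painting"):  {("Ranganathan", "N-PTG"), ("Dewey", "750")},
    ("LC", "Sculpture"): {("Ranganathan", "N-SCL"), ("Dewey", "730")},
}

def search_cluster(scheme, term):
    """The chosen term plus its closest equivalents in other schemes."""
    return {(scheme, term)} | EQUIVALENTS.get((scheme, term), set())
```

A term with no known equivalents simply falls back to itself, so the search degrades gracefully rather than failing.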
Related Objects and Subjects
As noted above the quest for equivalent terms leads almost inevitably to a search for
related terms, objects and subjects, much in the way that browsing in a library while
looking for one book frequently leads us to find others that are as relevant as, or even
more relevant than, the book we originally set out to find. Classification systems
provide another means of contextualising our search: i.e. seeing relations between one
subject and another. When we are studying a subject, we typically want to know about
related subjects.
In the past we went to a library catalogue, found a title and saw the related topics at the
bottom of the card. Electronic versions thereof exist. Recent software such as Apple’s
Hotsauce allows us to go from a traditional two-dimensional list of terms, choose one,
and then see all the related topics arranged around it. These related subjects evolve with
time, so with the help of a simple time scale we can watch the evolution of a field’s
connections with other subjects. This idea can easily be extended if we translate the main
topic into a circle and the related subjects into other (usually smaller) circles intersecting
the main one to create a series of Venn diagrams. This visualisation allows us to choose
subsets common to one or more related fields, which is important if one is trying to
understand connections between fields (figure 4b).
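The Venn-diagram view described here reduces, computationally, to set operations: each subject is the set of works classed under it, and the overlapping regions of the circles are intersections. A minimal sketch, with invented subject names and work identifiers:

```python
# Each subject maps to the (hypothetical) works classed under it.
subjects = {
    "optics":      {"w1", "w2", "w3", "w5"},
    "perspective": {"w2", "w3", "w4"},
    "geometry":    {"w3", "w4", "w6"},
}

def common_subset(*names):
    """Works shared by all the named subjects -- the region where
    their circles overlap in the Venn diagram."""
    sets = [subjects[n] for n in names]
    out = sets[0].copy()
    for s in sets[1:]:
        out &= s
    return out
```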
Relators
Classification systems typically take us to broader and narrower terms in our quest for
related terms. But as thinkers such as Perrault158 and Judge159 have noted there are
numerous other means to acquire related terms including: alternatives, associations,
complementaries, duals, identicals (synonyms as in Roget’s Thesaurus), opposites
(antonyms), indicators, contextualizers and logical functions such as alternation,
conjunction, reciprocal, converse, negative, subsumptive, determinative and ordinal. It is
feasible that these will eventually become part of the “knowledge objects,” such that if
one has a term, one can see its synonyms without needing to refer to a thesaurus. All
these kinds of relations thus become different hooks or different kinds of net when one is
searching for a new term and its connections.
Ontologies
Such classification systems are the most familiar, important efforts at bringing order to
the world in terms of subjects. But subjects in isolation are still only somewhat ordered
information160. Meaning, which brings knowledge and wisdom, requires more, namely a
systematic ordering of these subjects in terms of their logical and ontological relations.
Efforts in this direction go back at least to the I Ching. Aristotle, Thomas Aquinas,
Ramus, Francis Bacon and Roget were among the many contributors to this tradition. In
our generation, Dr. Dahlberg presented these fundamental categories in a systematic
matrix161. More recently these have been adapted by Anthony Judge into a matrix of ten
columns and ten levels (figure 7), which generates a chart of subjects coded from 00 to 99. These range
from fundamental sciences (00), astronomy (01) and earth (02) to freedom, liberation
(97) and oneness, universality (99)162. Anthony Judge is using this as an “experimental
subject configuration for the exploration of interdisciplinary relationships between
organizations, problems, strategies, values and human development”.
    Matrix columns                   Matrix levels
9   Condition of the whole           Experiential (modes of awareness)
8   Environmental manipulation       Experiential values
7   Resource redistribution          Innovative change (context strategies)
6   Communication reinforcement      Innovative change (structure)
5   Controlled movement              Concept formation (context)
4   Contextual renewal               Concept formation (structure)
3   Differentiated order             Social action (context)
2   Organized relations              Social action (structure)
1   Domain definition                Biosphere
0   Formal pre-conditions            Cosmosphere/Geosphere
Figure 7. An integrative matrix of human preoccupations by Anthony Judge (Union
Internationale des Associations) adapted from Dr. Ingetraut Dahlberg.
Heiner Benking builds upon the framework of Dahlberg and Judge (as in figure 7
above) to produce his conceptual superstructure or cognitive Panorama Bridge, which is
the basis of his Rubik’s Zauberwürfel [Cube of Ecology or Magic Cube].163 He argues
that one can use planes in order to see patterns in thought. These planes, he claims, can
include continua between the animate and the inanimate on one axis and between micro-,
macro- and meso-scales on another axis. Planes, he claims, can be used to compare
different viewpoints at a conceptual as well as a perceptual level; to see relations among
different actions, options and strategies.
Seen in this context, it becomes evident that our discussion thus far has been rather
narrow. It has dealt primarily with physical objects in the cosmosphere/geosphere (level
0) although the comments on classification have touched briefly on concept formation
(level 4). From this point of view the amount of knowledge structuration that remains to
be done is staggering indeed. Scholars are not about to be without work.
If we were trying to achieve a truly big picture involving the interplay of two or more of
the planes in this matrix, then a three-dimensional interface with the kinds of planes
outlined earlier will be essential (cf. figure 6). Parallel planes can be used to see different
levels of abstraction. A challenge remains how precisely we are to navigate between such
conceptual landscapes and the knowledge structures of libraries, which have been a main
focus of this paper. At a programming level this should be relatively straightforward.
Each of the ninety-nine subjects is tagged with its equivalents in the main classification
schemes. At the user level, this and similar matrices then become a natural extension of
the system. When we use these categories as a filter to look at publications in the
Renaissance or research trends in the late twentieth century, we have another means to
comprehend which areas were the focus of attention and which were abandoned, or even
forgotten. Search and access systems must help us to see absence as well as achievement,
and possibly provoke us to look more closely at the spaces which are being ignored.
Were they truly dead ends, have they surfaced in a new guise or do they now require new
study?164
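Using such a matrix as a filter to see absence as well as achievement can be sketched minimally. The cell codes and publication tags below are invented for illustration, not actual subject data:

```python
from collections import Counter

# A (hypothetical) subset of the 00-99 matrix cells, and a small catalogue
# of publications each tagged with the cell it falls under.
MATRIX_CELLS = ["00", "01", "02", "97", "99"]
publications = ["01", "01", "02", "01", "97"]

def coverage(pubs, cells):
    """Publication count per matrix cell, zeros included."""
    counts = Counter(pubs)
    return {cell: counts.get(cell, 0) for cell in cells}

def neglected(pubs, cells):
    """Cells with no publications at all -- the absences that may
    provoke closer study."""
    return [c for c, n in coverage(pubs, cells).items() if n == 0]
```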
Visualising Connections in Conceptual Spaces
The third dimension has many uses beyond producing such electronic copies of the
physical world. Pioneers of virtual reality such as Tom Furness III,165 when they were
designing virtual cockpits, realised that pilots were getting too much information as they
flew at more than twice the speed of sound. The challenge was to decrease the amount of
information, to abstract from the myriad details of everyday vision in order to recognise
key elements of the air- and land-scape such as enemy planes and tanks.
Figure 8. Visualisation of an author’s activities whose specialist activities touch on four
fields (three of which are closely related) and whose more generalist activities are limited
to one major field. This can be linked, in turn, with the matrix of human preoccupations.
Further layers could be added to show how the same concepts recur in different places in
various classification systems.
Figure 9. Venn diagram of a subject and its related subjects, shown as intersecting circles.
In addition to regular searches by subject, this visualisation allows a user to choose
subsets common to one or more related fields, which is important if one is trying to
understand interdisciplinary relationships. Cf. fig. 4.
Figure 10. In this diagram the large circles again represent two fields and the smaller
circles represent branches of these fields. The lines joining them represent work linking
hitherto different branches. These lines thicken as the amount of articles and other
research activities increase and thus become a new means of tracing the growth of an
emerging field.
Plane 1: Problems Identified; Plane 2: Problems Financed; Plane 3: Problems Solved;
Plane 4: Problems Predicted.
Figure 11. Using spatial arrangements of concepts to map problems identified and to
visualise which subsets thereof were financed as research projects, which were solved in
the sense of producing patents, inventions and products and which led to new predictions
in the form of hypotheses and projections.
Adding these fields together leads to an alphabetical list of that author’s intellectual
activities. Producing such a list in electronic form is theoretically simple. This principle is
equally important in knowledge organisation and navigation. A library catalogue gives
me the works of an author. Each catalogue entry tells me under how many fields a given
article or book is classed. What we need, however, is a conceptual map. To what extent
did an author work as a generalist in large subject fields and to what extent as a
specialist? This lends itself to three dimensions. Author A is in one plane and the subject
headings of their works are on other planes. These are aligned to relative positions in
classification systems such that one can see at a glance to what extent this person was a
generalist or a specialist, and linked with the matrix of human preoccupations to discern
how their work relates to it (figure 8). This principle can be extended to comparing the
activities of two authors.
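The generalist/specialist question can be given a crude numerical form: map each work to its top-level field and measure how concentrated the author's output is. The works and field labels below are hypothetical:

```python
# Sketch of a generalist/specialist measure. A high dominant-field share
# suggests a specialist; a low one, a generalist.

def profile(works):
    """works: list of (title, top_level_field) pairs.
    Returns the sorted distinct fields and the share of the dominant field."""
    fields = [field for _, field in works]
    distinct = sorted(set(fields))
    dominant_share = max(fields.count(f) for f in distinct) / len(fields)
    return distinct, dominant_share
```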
Social
This approach can in turn be generalised for purposes of understanding better the
contributions of a group, a learned society or even a whole culture. Scholars such as
Maarten Ultee have been working on reconstructing the intellectual networks of sixteenth
and seventeenth century scholars based on their correspondence. A contemporary version
of this approach would include a series of networks: correspondence, telephone and e-mail,
which would help us in visualising the complexities of remarkable individuals, be it
in the world of the mind, politics or business.
The geographical aspects of these intellectual networks can be visualised using maps.
Conceptually the subjects of the letters (and the e-mails, to the extent that they are kept),
can be classed according to the layers outlined above such that one gains a sense of the
areas on which they focussed. For instance, what branches of science were favoured by
members of the Royal Society? Did these change over time? It is a truism that
Renaissance artists were also engineers and scientists.
Figure 12. Diagram relating to metadatabase research at Rensselaer Polytechnic in
conjunction with Metaworld Enterprises entailing a Two Stage Entity Relationship
(TSER) in the context of an Information Base Modelling System.
What particular fields did they favour? Can one perceive significant differences between
artist-engineers in Siena, Florence, Rome and Venice? We could take the members of a
learned society or some other group and trace how many layers in the classification
system their work entailed and then study how this changed over time. Are the trends
towards specialisation in medicine closely parallel to those in science or are there
different patterns of development? Alternatively by focussing on a given plane of
specialization we could trace which authors contributed to this plane, study what they had
in common in order to understand better which individuals, networks of friends and
which groups played fundamental roles in the opening of new fields. Such trends can in
turn be linked with other factors such as research funding or lack thereof. In addition to
universities, major companies now have enormous research labs. Nortel, for instance, has
over 17,000 researchers. Hitachi has over 30,000. We need maps of who is doing what,
and where. In our century we could also trace where the Nobel and other prizes have
gone both physically and conceptually. Navigation provides virtual equivalents of
journeys in the physical world. It is also a means of seeing new patterns in the conceptual
world through systematic abstraction from everyday details in order to perceive new
trends.
If we were trying to trace the evolution of a new field, we could begin by using a
dynamic view of classification systems described above. We could also use combinations
of these intersecting Venn diagrams. For example, the last generation has seen the
emergence of a new field of bio-technology. This has grown out of two traditional fields,
biology and technology. These could be represented as large circles surrounded by
smaller ones representing, in this case, their related branches and specialties. Any
academic work would be represented in the form of a line, which thickens in proportion
as the connections increase. These connections are of differing kinds. Initially they tend
to be in the form of sporadic articles, conferences, or isolated research projects, which
have no necessary continuity.
Later there are books, journals, professorships, research institutes and spin-off companies
which bring a conscious cumulative growth to the new field. Each of these phases could
be identified with different colours so arranged that one can distinguish clearly between
sporadic and cumulative activities (figure 10). We can integrate these circles within the
context of frames as described above. For example, the two fields of biology and
technology could be on one plane. Major universities could be on a second plane. We
could then trace which universities are producing the most papers in these two fields and
specifically in which sub-sections thereof. On another plane we could list the major
research institutes in order to determine other trends. Are these areas being studied more
in the United States than Europe or Japan? If so what are the percentages? Which
companies dominate the field? What links are there between research institutes and
universities? Are these increasing or decreasing?
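The thickening line joining the two circles can be read off a yearly count of works classed under both fields. A minimal sketch, with invented records rather than actual bibliographic data:

```python
from collections import Counter

def link_strength(records, field_a, field_b):
    """records: iterable of (year, set_of_fields) pairs.
    Returns, per year, the number of works classed under both fields --
    the 'thickness' of the line joining them in figure 10."""
    per_year = Counter()
    for year, fields in records:
        if field_a in fields and field_b in fields:
            per_year[year] += 1
    return dict(per_year)
```

A rising count over the years would mark the shift from sporadic articles to the cumulative growth of an emerging field such as bio-technology.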
Experiments in the realm of metadatabase research at Rensselaer Polytechnic166 provide a
preliminary idea of how a concept at one level can be linked via planes with a series of
concepts at another level. Such a notion of planes can be extended to see further patterns.
Plane one can list all the known problems or potential research areas in a given area of
science. Plane two lists which subset of these problems is presently being studied. Plane
three shows which problems have been solved, or rather have some solutions in the form
of inventions, patents, trademarks and products. Plane four lists a further subset for which
solutions are predicted or which have hypotheses for their solution (figure 11).
Such comparative views can help scientists and decision-makers alike to understand more
clearly trends in rapidly developing fields. Such matrices of problems can in turn be
submitted to problem structuring methodologies whereby technical, practical and
emancipatory dimensions are mapped onto frameworks in order to discern where they fit
into what some have called a Methodology Location Matrix.167
Returning for a moment to the framework outlined in figure 7, one can envisage the
direction which a future encyclopaedia will take. For instance, level seven in this
framework outlines context strategies including logic, philosophy, security, community,
peace and justice. These will be related to the context of concept formation (level 5)
and its structure (level 4), the context of social action (level 3) and its structure (level 2).
Earlier we discussed the spread of ancient temples, of mediaeval monasteries, churches
and cathedrals. These would be linked with the growth of religious ideas and the religious
orders which followed from them. Which were the ideas that led to mainstream religions?
Which were the ideas that led to peripheral sects? Which ideas led to the development of
significant groups, organizations, parties, political movements? Earlier we outlined the
development of objects in spatio-temporal terms. The history of ideas will need to be
explored in spatio-temporal-socio-conceptual terms, each represented by levels in the
third-dimension, which can be translated back to two-dimensional lists and descriptions
as appropriate.
Seeing Invisible Differences
During the Renaissance the discovery of linear perspective brought new skill in
visualising the physical world, but it began by illustrating episodes from the lives of
saints, which none of the artists had witnessed personally. Hence it helped simultaneously
in expanding the horizons of the visible world of nature and the invisible world of the
mind. This dual development continues to our day. Three-dimensional visualisations,
especially using virtual reality help to illustrate both the visible and invisible, and to
introduce many new possibilities.
If, for instance, we take the Library of Congress classification, as above, and link each
layer in its hierarchy with a different plane, then we arrive at a truncated pyramidal shape
beginning with twenty initial topics at the top and increasing to many thousands as we
descend. Say we are interested in total publications in the Library of Congress. At the top
level, these publications can be linked to each of the twenty basic fields, such that each
major subject is represented proportionately as a square or circle. We can thus see at a
glance to what extent the number of books on science is greater than those in the fine arts.
By going down a level in the hierarchy we can see how those figures break down, e.g. to
what extent there are more books on physics than chemistry or conversely. At another
level we can see whether, and if so to what extent, astro-physics has more publications
than bio-physics or quantum physics. We are thus able to see patterns in knowledge
which we could not see simply by looking at the shelves, although even shelves can give
us some hint that one topic has more books than another.
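The truncated pyramid described here amounts to summing leaf counts up a classification hierarchy, so that each level shows its share of the whole. A minimal sketch, with an invented tree and invented counts rather than actual Library of Congress figures:

```python
# Hypothetical fragment of a classification hierarchy and leaf-level
# publication counts, rolled up recursively.

TREE = {
    "Science": ["Physics", "Chemistry"],
    "Physics": ["Astrophysics", "Biophysics"],
}
LEAF_COUNTS = {"Astrophysics": 120, "Biophysics": 40, "Chemistry": 90}

def total(node):
    """Publications under a node: its own leaf count plus all descendants."""
    return LEAF_COUNTS.get(node, 0) + sum(total(c) for c in TREE.get(node, []))
```

Adding a year to each count would let the same roll-up trace how these proportions shift over time.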
A slightly more refined version would link this approach to book catalogues such that we
can trace how these trends in publications change over time. From a global point of view
we could witness the rise of the social sciences in the nineteenth century. At a greater
level of detail we could see the rise of psychology as a field. This same approach can also
be applied to usage patterns as studied by scholars in reception theory.168 In future, usage
patterns of on-line readers will become important for scholars as well as for those doing
market studies.
In our quest to see significant patterns it will sometimes prove useful to have agents
examine trends and draw our attention only to cases where there are considerable
changes of, say, 10 or 20%. This will be another way to discover emerging subjects. This
same methodology has a whole range of other applications including marketing,
advertising, stock markets and even network management. Say, for example, that we
want to monitor potential trouble spots on the system. Agent technologies measure usage
at every major node of the system in terms of a typical throughput and usage. When these
ratios change significantly the system identifies where they occur, and introduces
standard adjustment measures. If these fail, the system visualises relevant points in the
neighbourhood of the node such that operators can see remotely where the problem might
lie and take appropriate action. Hence, instead of trying to keep everything visible at all
times, the system only brings to our attention those areas where trouble could occur: an
electronic equivalent of preventative medicine. Such strategies will no doubt be aided by
the further development of sensory transducers whereby significant changes in heat
within the system would also be rendered visible. Seeing the otherwise invisible is a key
to navigating remotely through complex environments.
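The agent idea reduces, in essence, to comparing each node's current usage against its typical baseline and reporting only the significant deviations. A minimal sketch, with invented node names, figures and threshold:

```python
# Flag only the nodes whose usage ratio has shifted beyond a threshold,
# leaving everything else invisible -- the monitoring principle sketched
# above. Baselines and readings are hypothetical.

def flag_nodes(baseline, current, threshold=0.20):
    """Return the nodes whose usage has changed by more than `threshold`
    relative to their typical value."""
    flagged = []
    for node, typical in baseline.items():
        change = abs(current[node] - typical) / typical
        if change > threshold:
            flagged.append(node)
    return flagged
```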
Comprehension and Prediction by Seeing Absence
In the early days of the scientific revolution there was great emphasis on the importance
of inductive as opposed to deductive research, which entailed an emphasis on experience
and experiment, often on a trial-and-error basis. As scientists gained a better understanding of
the field to the extent that they were able to create physical and conceptual maps of their
objects of study, it became possible to deduce what they had not yet observed. For
example, from a careful observation of the motions of the known planets, astronomers
were able to predict the location of Neptune and subsequently other planets. In
combination with induction, deduction regained its role as an important ingredient in
science. The same proved true in chemistry. Once there was a periodic table, chemists
found that the known elements helped them to chart the positions of as yet undiscovered
elements. Once we have a matrix we can see where there is activity and where activity
is missing. By now, chemistry is expanding with enormous speed. It is estimated that
every week over 14,000 new chemical combinations are discovered. As in the case of
pilots flying at twice the speed of sound we need methods for abstraction from the day to
day details, new ways of seeing patterns. Access, searching and navigation are not just
about seeing what we can find, but also about strategies such that we see the appropriate
subsets at any given time.
Until a generation ago mainframe computers typically relied on punch cards with holes.
Each hole represented a specific configuration of a subject or field. Rods or wires were
then used to determine which cards had the same fields. Early versions of neural
networks adopted a virtual version of the same principle by shining virtual lights through
configurations of holes. When the holes co-incided the subjects were the same. Database
indexes effectively accomplish the same thing with one fundamental difference: we see
the results but have no means of seeing the process.
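The punch-card principle survives in the inverted index: each field maps to the set of records "punched" for it, and matching cards is simply set intersection. A small sketch, with invented cards and fields:

```python
# Each card lists the fields it is punched for; the inverted index turns
# this around, mapping each field to its cards. Matching several fields
# is then intersection -- where the rods would pass through every card.

def build_index(cards):
    """cards: dict of card_id -> set of fields (the holes)."""
    index = {}
    for card, fields in cards.items():
        for f in fields:
            index.setdefault(f, set()).add(card)
    return index

def matching(index, *fields):
    """Cards punched for all the given fields."""
    result = index[fields[0]].copy()
    for f in fields[1:]:
        result &= index[f]
    return result
```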
To see a bigger picture we need to see how the tiny details fit into the larger categories of
human endeavour so that we can discern larger patterns. Roget as we saw had six basic
classes (figure 1). Dewey had ten: 0) generalities; 1) philosophy and related disciplines;
2) religion; 3) social science; 4) language; 5) pure sciences; 6) technology; 7) the arts; 8)
literature; 9) general geography and history. The Library of Congress has twenty such
fundamental classes. Beneath these universal headings are many layers of subordinate
categories hierarchically arranged. If we treat each of these layers as a plane, and have a
way of moving seamlessly from one plane to the next, then operations performed at one
level can be seen at various levels of abstraction.
Suppose, for example, that we have been searching for Renaissance publications by
Florentine authors. Moving up to the highest level we can see on which fields they wrote:
religion, science, art and so on. Moving back down a few levels we can identify which
particular branches of science and art concerned them most. Going back to the top level
we can also see that there were many topics which they did not discuss. The Renaissance
view was supposedly universal in its spirit. In practice, it often had distinct limitations. If
we have access to multiple classification systems, then we can see how these patterns
change as we look at them say, through the categories of Duke August and Leibniz at
Wolfenbüttel or through the categories of Ranganathan’s system. These approaches
become the more fascinating when we take a comparative approach. How did German
Lutheran cities differ from Italian Catholic or Swiss Calvinist cities in terms of their
publications? How does religion influence the courses of study and research? What
different cultural trends are evident from a comparison of publications in India, China,
Turkey, Russia and Japan?
In the GALEN project (Manchester), magic lenses are being used to determine where
there are gaps in relations between parts of the body, and their functions. One could
imagine how the planes, which were outlined above, would be imbued with the qualities
of lenses in order to develop their systematic potentials.
9. Challenges
Most discussions of challenges today focus on input, capacity and transmission. How can
the vast materials be scanned in as efficiently and quickly as possible? How can we
develop storage facilities capable of dealing with thousands of exabytes of material? How
can we develop bandwidth capable of dealing with such vast quantities?
These are hardware problems, which are being overcome and will soon dwindle to
everyday maintenance problems. In our view the deeper challenges lie elsewhere,
namely, problems of translating verbal claims to visual viewpoints, questions of
advanced navigation in terms of scale and problems of cultural filters.
Pictures and Words
The quest to develop systematic ways of comparing objective dimensions with different
subjective views is important and potentially very useful for cultural interfaces. However,
the integration of verbal subjects and objects into a visual scheme may be more
problematic than it at first appears due to fundamental differences between words and
pictures.
There is a long tradition of comparisons between pictures and words. Already in
Antiquity, Horace made comparisons between the pictures of painting and the words of
poetry169. In our century, famous art historians such as my mentor, Sir Ernst Gombrich,
began by assuming that pictures and words were effectively interchangeable. His famous
Art and Illusion began as a series of Mellon lectures entitled The Visible World and the
Language of Art. He gradually reached the conclusion that there were very significant
differences between the two.
One of the fundamental differences between pictures and words is that pictures can use
space systematically in a way that words cannot. Pictures are potentially perspectival,
words are not. Gombrich attempted to express this through his distinction between the
“What and the How”170 and in his essay on the “Visual Image”.171 Pictures can show
what happened and precisely how it happened. Words can only convey what happened,
e.g. that a given person was shot from a particular position, not all the details of how it
happened.
Notwithstanding this fundamental difference between pictures and words, there have
been surprising parallels between the growth of depicting stories in pictures and the quest
to tell them in words. The rise of narrative in painting and in literature is connected.
Attempts to show pictures from a specific point of view and efforts to tell stories from a
given “viewpoint,” in the sense of first-person narrative, also seem to be connected.
Metaphorically, the perspectives of pictures are closely linked with those of words.
Luther, referring to his dogmatic position, wrote, “Here I stand.” The great philosopher
Kant wrote an essay on “standpoint” as a fundamental philosophical act. Today we
typically speak of literary viewpoints and even literary perspectives. We even speak of
seeing a person’s point of view after listening to their story. It is essential to remember,
however, that all these are metaphorical acts rather than literal ones. Herein lies a key to
understanding why it is easy to speak verbally of seeing another person’s viewpoint, but
almost impossible to depict this verbal viewpoint pictorially. We may speak of entering
into or sharing another’s space, but this is hardly the same as actually seeing or depicting
the world as they see it.
Words are about universals. The noun “dog” refers to all dogs. Pictures are about
particulars. One may attempt to depict all dogs, but if the picture is precise, it shows a
given dog, such as the neighbour’s three-month-old brown pet, rather than some abstract,
universal concept of dogginess.
For this reason, the moment we try literally to represent pictorially a metaphorical verbal
position or viewpoint, we encounter enormous difficulties. A verbal description is
almost always much less precise than a visual depiction and is therefore open to a whole
series of alternative possibilities. This does not necessarily mean that the quest is futile.
One solution would be the direct brain interfaces being explored by scientists
today (cf. above p. 8*). Until these become available, an interim solution is to create
alternative reconstructions, from which the author of a position can choose in deciding
which is an accurate visual translation of their verbal description.
As noted earlier, in terms of virtual museums, Infobyte has already developed Virtual
Exhibitor software, which allows museum directors and curators to explore a series of
hypothetical arrangements of paintings in designing the layout for their regular museum
and for special exhibitions. In terms of verbal viewpoints, perhaps we need a variant of
this software, a type of Virtual Verbal Viewpoint Exhibitor, to help bridge the gap
between metaphorical and literal sharing of viewpoints. It is quite possible, of course, that
we shall, on closer reflection, conclude that there are profound reasons for keeping these
viewpoints metaphorical and not translating them into potentially banal literal versions.
Or it may well become a matter of choice, just as a number of persons prefer to remain
silent in difficult situations rather than spelling out the situation in boring detail. The
ability not to use functionalities is both a prerequisite of culture and one of its highest
expressions.
Scale
In the film Powers of Ten, viewers were taken from a person lying on the beach upwards
by tenfold scales to the edge of the universe and then back down to the microcosmic
level. More recently, Donna Cox adapted this principle for the IMAX film Cosmic
Voyage, which used both real photographs and computer simulations. A project at the
Sandia Labs is creating a Dynamic Solar System to scale:
The scale model of the solar system covers a spatial range of 10 km with an
individual positioning resolution of ~20km. It contains 73 objects, each with
appropriate motion. Tethering or locking permits a viewer to attach to an object
and travel with it, duplicating all or part (e.g., center of mass) of its inertial
motion while retaining the ability to move independently. Here, tethering also
triggers a search of available NASA data. Photographs and associated text
information are displayed on the craft wall while tethered.172
Recently, thinkers such as Ullmer173 have speculated on how one could use similar
principles of changing scale for navigation in extremely large data spaces. Proper
contextualization of knowledge requires being able to move seamlessly from the
nano-structures of atomic particles to the macro-structures of galaxies at the cosmic
level. Being able to do so without getting “lost in space” is truly a challenge.
Cultural Filters
Getting at the essential facts concerning objects of culture is a worthy and important goal.
More subtle and elusive are the challenges of interfaces reflecting a variety of cultures:
problems of learning to see things in different ways, through the eyes of different
cultures. This entails a whole range of challenges including terms, languages, symbols,
narratives, values and questions.
Terms
As noted earlier, one of the great challenges in research lies in finding terms equivalent
and/or related to the topic which interests us. Classification systems offer one method.
Synonyms, antonyms and indicators offer another. Such terms vary culturally and often
do not lend themselves to simple translation: a public house or pub in English is quite
different from a maison publique in French. Burro in Italian (butter) is very different
from burro in Spanish (donkey). We need new methods for mapping systematically
between different classification systems in order to continue finding equivalent terms
when they are classed in very different places.
Languages
Cultural filters can potentially provide translations from one language to another. At the
most obvious level this could entail taking a virtual tour in English and translating it into
French, German or some other language. In other cases, it might entail looking at a
painting with a Latin or Chinese caption. Given the rapidly evolving field of optical
character recognition, one could attach a simple video camera to one’s notebook
computer and point it at the caption in question; the computer would relay the caption to
an on-line databank, which would provide a summary translation.
Within a major language there are many levels of expression ranging from formal,
through informal, to dialect and slang. Cultural filters will eventually need to provide
translations in both directions. Sometimes, for instance, there will be an expression in
dialect or slang for which one wants to have the formal equivalent, as when Dante or
Shakespeare use colloquial terms which require explanation. At other times, a
particularly formal turn of phrase by a Proust would need explication in a less formal
style. In traditional publications, standard editions of a famous play or novel typically
relegate such explanations to footnotes. In future, these could also be offered on demand
either as visual captions or as verbal commentaries.
Symbols
At the level of symbolism, cultural filters are more obviously important. In Europe, white
is a symbol of purity, the spirit, and life. In China, white is typically a symbol of death
and mourning. On the other hand, in Europe a white calla lily is a symbol of death,
whereas in other parts of the world it has a more joyous meaning. As a result, an interface
with colours designed for one culture may well have unexpected effects on persons from
another culture. Having identified one’s culture, the interface should “know” the
appropriate colours and adjust itself accordingly.
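Such a colour-adjusting cultural filter could be sketched as a simple lookup. The palette tables below are illustrative assumptions that encode only the white/mourning example from the text, not a researched mapping of cultural colour symbolism.

```python
# Minimal sketch of a "cultural filter" for interface colours.
# The tables are toy assumptions based on the examples in the text.

MOURNING_COLOURS = {"europe": "black", "china": "white"}
CELEBRATION_COLOURS = {"europe": "white", "china": "red"}

def palette_for(culture: str) -> dict:
    """Return interface colours adjusted to the viewer's declared culture,
    falling back to neutral defaults for cultures not in the tables."""
    return {
        "mourning": MOURNING_COLOURS.get(culture, "grey"),
        "celebration": CELEBRATION_COLOURS.get(culture, "blue"),
    }

print(palette_for("china"))   # mourning shown in white, not black
print(palette_for("europe"))  # mourning shown in black
```

The point of the sketch is that the interface, having identified the viewer's culture once, can resolve every symbolic colour choice through the same table.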
Colours are but one small aspect of very complex traditions of symbols. In Germany, the
swastika is associated with all the horrors of Nazi fascism. In the Far East the swastika is
sometimes a symbol of the sun or of the Buddha’s heart. In Chinese, the swastika is a pun
on the word for ten thousand, and the bat a pun on the word for happiness. Hence a bat
with a swastika dangling from its mouth means “may you have happiness ten
thousand-fold.”
As an extension of the quick reference provided by the digital reference room, one would
thus be able to choose a symbol and explore its meanings in different cultures. This
assumes, of course, that one knows the name of the item in one’s own culture. Once
again, given rapid developments in pattern recognition with software such as IBM’s
Query by Image Content (QBIC), new solutions are likely to present themselves in the
near future. Using a video camera attached to one’s notebook computer, as described
above, one would point the camera at the symbol in question, which would be relayed via
the computer to an on-line databank. This would identify the object and provide the
viewer with its multiple meanings according to various cultures.
Narratives
Often cultural and especially religious symbols entail much more than some isolated
object. They typically entail narratives, stories, based on a sacred text (e.g. the Bible or
the Mahabharata) or epic literature (e.g. Homer’s Iliad or Dante’s Divine Comedy).
Persons within an established culture take these narratives for granted and frequently
define themselves in terms of familiarity with that corpus. Outsiders find these narratives
confusing or meaningless. For example, a Catholic standing in front of a painting of
Moses in the Desert unconsciously calls to mind the appropriate text in the Old
Testament, or at least the gist of the event. Similarly, in viewing Christ Walking on Water
they call to mind the appropriate New Testament passage. In seeing a Saint Sebastian
they recall Jacobus de Voragine’s Golden Legend or some more simplified Lives of the
Saints. To a non-Christian unfamiliar with these sources, images of a person walking on
water, or of a man remaining calm while being pierced by multiple arrows, may well
seem curious, confusing or simply incomprehensible. Similarly, a Christian unaware of
Buddhist traditions will be at a loss when confronting Tibetan thangkas or Chinese
scrolls.
The digital reference room serving as a cultural Baedeker will thus offer tourists much
more than a geographical map of sites and artifacts. It will provide access to the
narratives underlying all those otherwise enigmatic titles of paintings, sculptures, and
dances such as Diana and Actaeon, Majnun and Leila, or Rama and Krishna. This may,
in turn, have fundamental implications for battles in other areas of academia. In the
context of deconstructionism and its various branches, for instance, there have been
enormous debates concerning the viability or non-viability of speaking about a canon of
literature. Whereas earlier generations were fully confident in their ability to define the
“greats” and “classics”, many would argue that these lists have become so fluid that they
are almost meaningless. In Canada, for instance, only a generation ago, the Bible and
Shakespeare would have been seen as fundamental titles in such a canon.
Today, a number of persons would argue that no single canon is possible: that instead we
need to speak of canons for black, feminist, queer and other literatures, rather than a basic
heritage shared by all civilized persons. For those who define culture and civilization in
terms of a common heritage, abandoning the idea of a shared corpus implies the loss of a
shared heritage by means of which we feel at ease with one another. Meanwhile, others
argue that a true corpus can no longer be Euro-centric. It cannot be limited to Homer,
Virgil, Dante, Shakespeare and Goethe. It must include the great literature of India,
China, Persia, and other cultures. Here another problem looms. The corpus will become
so large that no one will have time to master it unless they make this their sole profession.
From all this it will again be apparent that the question of “viewpoints” is much more
complex than is generally imagined. Viewpoints are not just about comparing
abstractions. They are also about different bodies of knowledge, which are an essential
ingredient of culture. An Englishman sees the world through the prisms of Shakespeare
and Milton, an Italian through the verses of Dante, and a German through the poetry of
Goethe and Schiller.174 Each of these geniuses did more than create poetry: they launched
a heritage of associations which are shared by every cultured person in that tradition,
such that there is a manageable corpus of phrases mutually recognised by all members. In
order better to comprehend these shared traditions in the case of cultures completely
foreign to us, it may prove useful to develop agents familiar with all the standard
literature of those cultures, such that they can help us to recognise quotes which are so
familiar to natives that they are expressed as ordinary phrases, e.g. To be or not to be
(Shakespeare), Every beginning is difficult (Goethe), One must eat to live, not live to eat
(Molière), and yet evoke a wealth of associations which the outside visitor could not
usually expect.
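An agent of the kind just suggested could, at its simplest, match phrases in a visitor's text against an index of canonical quotations. The CANON dictionary and function below are illustrative assumptions; a real agent would index the whole standard literature of a culture, not two lines of it.

```python
# Toy sketch of a quote-recognising "agent": find canonical phrases
# embedded in ordinary text. CANON is an illustrative assumption.

CANON = {
    "to be or not to be": "Shakespeare, Hamlet",
    "every beginning is difficult": "Goethe",
}

def recognise_quotes(text: str) -> list:
    """Return the sources of any canonical phrases found in the text,
    so an outside visitor can be told what a native hears in passing."""
    lowered = text.lower()
    return [source for phrase, source in CANON.items() if phrase in lowered]

print(recognise_quotes("He murmured 'to be or not to be' and left."))
# -> ['Shakespeare, Hamlet']
```

Substring matching is of course far too crude for real literatures; the sketch only shows the shape of the service: text in, attributions out.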
Here, at the end, we can only touch upon this most elusive aspect of navigation, which
concerns not only what a culture says or writes. It is about what one culture asks and
another does not, about what one culture discusses while the other is silent, about that for
which one culture has a hundred terms (snow among the Inuit) and of which another
culture has no experience (a nomad in parts of the Sahara).
Values
More elusive than any of these are problems of cultural values. Anthony Judge175 of the
Union of International Associations has drawn attention to nine Systems of Categories
Distinguishing Cultural Biases. Maruyama (1980),176 for instance, identifies four
epistemological mindscapes. Geert Hofstede (1984)177 outlines four indices of
work-related values: power distance, uncertainty avoidance, individualism, and
masculinity. Mushakoji (1978)178 focusses on four modalities through which the human
mind grasps reality: affirmation, negation, affirmation-and-negation, and
non-affirmation-and-non-negation. Will McWhinney (1991)179 uses four modes of reality
construction: analytic, dialectic, axiotic and mythic. Pepper (1942)180 expresses four
world hypotheses: formism, mechanism, organicism, and contextualism. Mary Douglas
(1973)181 employs four systems of natural symbols: the body conceived as an organ of
communication; the body as a vehicle of life; practical concern with possible uses of
bodily rejects; and life seen as spiritual with the body as irrelevant matter. Gardner
(1984)182 relies on six forms of intelligence: linguistic, musical, logical/mathematical,
spatial, bodily-kinaesthetic, and personal. Jones (1961)183 uses seven axes of
methodological bias: order vs. disorder, static vs. dynamic, continuity vs. discreteness,
inner vs. outer, sharp focus vs. soft focus, this world vs. other world, spontaneity vs.
process. Meanwhile, Todd (1983)184 identifies eight family types with different
socio-political systems. A complete analysis of these systems would take us beyond the
scope of the present essay. What interests us here is that each of these authors has chosen
between four and eight concepts in order to explain fundamental orientations in thought.
The challenge is: how can these alternative approaches be visualized in such a way that
they can be integrated into the system?
Questions
The subtle aspects of culture extend not only to the kind of answers one gives but also to
the questions one asks. In some older cultures it is not polite to ask what a person’s father
does, the assumption being that if the person is properly established the question would
be redundant, and if they are not properly established then the question could lead to an
embarrassing result. A person from such a culture may feel they are being polite in not
asking, only to find themselves accused of indifference in another culture. As usual, these
variations find various humorous expressions. It is said that the English always know
with whom one sleeps but would never think to ask what one did, while the French will
happily supply detailed descriptions of what they did without ever asking with whom.
Such pleasantries aside, there are always topics which can be discussed and questions
which can be raised in one culture that are quite taboo in others. In Irish polite society
one may find persons asking detailed questions about politics, including for whom one
voted, questions which would be considered indiscreet or completely taboo even in some
other parts of the Anglo-Saxon tradition. The Internet has drawn attention to frequently
asked questions (FAQs). We need new means to examine how such questions vary
culturally, and new interfaces to help persons discover which questions should or should
not be asked where.
Taking into account all these factors could readily leave us with a fear of being
overwhelmed. In the field of training, such a threat of being overwhelmed reached
critical proportions in the late 1960s. By way of a solution, manuals and training
materials were put on-line and made accessible as and when they were needed, under the
slogan of “learning on demand.” A cultural Baedeker as outlined above would use
technology to provide “cultural learning on demand.” Some may object to the idea of
“just in time culture” as being uncivilized. However, if the alternative is being uncivilized
pure and simple, then surely this is the preferable way, especially if it can save us from
undue feelings of inadequacy when faced by many different cultures as we travel around
the world. This is not to say that we should abandon our efforts to read the great literature
of our own culture and of as many other cultures as possible. Rather, we need to discover
that although the world may be shrinking in terms of physical access, its horizons
continue to expand in keeping with our capacity.
10. Two, Three and Multiple Dimensions
The above analysis suggests that the question of appropriate cultural interfaces is
considerably more complex than might at first be apparent. It depends largely on
function. In the case of virtual guides in physical museums, for instance, two-dimensional
lists will typically be appropriate. Such lists will also serve well in the case of research
involving quick reference. In the case of virtual, historical virtual, and imaginary virtual
museums and libraries, three-dimensional reconstructions will usually be appropriate,
whether these are perspectival representations or virtual reality versions complete with
walkthroughs. In moving from an image of a painting on a wall to a record outlining its
basic characteristics, one will wish to move from a three-dimensional space back to a
two-dimensional electronic equivalent of a file card, with an ability to return to the
three-dimensional space as desired.
With respect to research involving maps, one will typically move from two-dimensional
maps, as in the case of satellite images, to three-dimensional scenes as one approaches
images of the physical landscape. Conceptual research will frequently begin with
two-dimensional lists of persons, subjects, or objects, some item of which is then shifted
to a plane, thence to be treated in the third dimension. Such analyses typically become
four-dimensional when these planes are, in turn, subjected to temporal variations. Hence
a cultural interface needs to move seamlessly into and out of a number of dimensions.
One of the important innovations of the Virgilio project at the GMD (Darmstadt) has
been the linking of object-relational (Informix) databases with the Virtual Reality
Modelling Language (VRML), such that three-dimensional answer spaces are generated
on the fly on the basis of queries as they are made. In this approach it is assumed that
metaphors such as halls and rooms are useful means of presenting positive results from
queries. One could, however, imagine how the same technology could be used to
generate different information displays in keeping with the complexity of the results. If
there were fewer than ten choices, they could be generated simply in the form of a
SUMS185 meter. If there were hundreds of choices, they would be generated as a
traditional list. If there were thousands of choices, they would appear in the form of
three-dimensional planes in order to provide an overview before examining details.
Hence, the choice of interface generated by the system would itself provide clues about
the complexity of the results.
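The idea of letting the size of a result set determine the presentation can be sketched in a few lines. The thresholds and interface names follow the text (a SUMS-style meter, a traditional list, three-dimensional planes); the function itself is an assumption for illustration, not the Virgilio implementation.

```python
# Sketch: choose a presentation mode whose complexity matches the
# size of the query's result set. Thresholds follow the text.

def choose_interface(result_count: int) -> str:
    """Map a result count to a display metaphor, so that the chosen
    interface itself signals the complexity of the answer."""
    if result_count < 10:
        return "meter"      # a SUMS-style meter for a handful of choices
    elif result_count < 1000:
        return "list"       # a traditional list for hundreds of choices
    else:
        return "3d-planes"  # planes giving an overview of thousands

print(choose_interface(7), choose_interface(250), choose_interface(50000))
# -> meter list 3d-planes
```

In a system like the one described, this decision would run server-side and the chosen scene (meter, list, or VRML planes) would be generated on the fly.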
Historically, the advent of three-dimensional perspective did not lead artists to abandon
two-dimensional (re-)presentations entirely. There were many cases, such as cityscapes,
where three dimensions were very useful; others where two-dimensional solutions
remained a viable and even preferable alternative. Text is an excellent example, which
helps explain why text-based advertisements remain predominantly two-dimensional. If
we are searching for a simple name (Who?), subject (What?), place (Where?), event
(When?), process (How?) or explanation (Why?), two-dimensional lists are likely to
remain the most effective means for searching and access. As suggested earlier, long lists
benefit from alternative presentation modes, such that they can be viewed alphabetically
(Who?), hierarchically in tree form (What?), geographically (Where?), and
chronologically (When?) as appropriate. A complex spatial interface may be
technologically attractive. The challenge, however, lies in integration with historically
relevant interfaces, in being able to encompass earlier structuration methods rather than
merely replace them with unfamiliar ones.
11. Conclusions
This paper opened with a brief outline of taxonomies of information visualization user
interfaces by data type (cf. Appendix 1) and a survey of emerging interface technologies,
namely, voice activation, haptic force, mobile and nomadic, video activation, direct brain
control, brain implants, and alternative methods. It was claimed that while such
technological solutions in search of applications are of some interest, a more thorough
understanding of interface problems requires an analysis of user needs. The main body of
the paper addressed this challenge with respect to culture. An outline was given of five
basic functions relating to cultural interfaces, namely, 1) virtual guides, 2) virtual
museums, libraries and spatial navigation, 3) historical virtual museums, 4) imaginary
museums and 5) various kinds of cultural research. The implications of these functions
for cultural interfaces were explored.
This led to a brief consideration of metadata and of how these new developments are
transforming our concepts of knowledge. Knowledge objects will include not only basic
characteristics but also information about their quality and veracity. The Platonic idea
destroyed individual differences and thereby the notion of uniqueness. The new concept
of objects centres knowledge on the fundamental significance of differences. The
universal becomes a key to particular expression. The value of knowledge lies not in how
good a copy something is but rather in how well it has created a variation on the theme.
This will transform the scope and horizons of knowledge.
The paper ended with an outline of further challenges such as problems of translating
verbal claims into visual viewpoints, questions of scale and cultural filters. It will be a
long time before all these challenges are overcome. Yet if we recognise them clearly,
there is no need to be overwhelmed by them. We must continue the process of sense
making and ordering the world, which began with our first libraries and schools and shall
continue, we hope, forever. For in this lies our humanity. Technology may offer many
solutions looking for an application. Nonetheless, cultural interfaces still pose many
applications looking for solutions.
Chapter 5
New Knowledge186
1. Introduction
As media change so also do our concepts of what constitutes knowledge. This, in a
sentence, is a fundamental insight that has emerged from research over the past sixty
years.187 In the field of classics, Eric Havelock188 showed that introducing a written
alphabet, shifting from an oral towards a written tradition, was much more than bringing
in a new medium for recording knowledge. When claims are oral, they vary from person
to person. Once claims are written down, a single version of a claim can be shared by a
community and is then potentially open to public scrutiny and verification.189 The
introduction of a written alphabet thus transformed the Greek concept of truth (episteme)
and their concepts of knowledge itself. In the field of English Literature, Marshall
McLuhan,190 influenced also by historians of technology such as Harold Innis,191 went
much further to show that this applied to all major shifts in media. He drew attention, for
example, to the ways in which the shift from handwritten manuscripts to printed books at
the time of Gutenberg had both positive and negative consequences on our world-view.192
In addition, he explored how the introduction of radio and television further changed our
definitions of knowledge. These insights he distilled in his now famous phrase: “the
medium is the message.”
Pioneers in technology, such as Vannevar Bush,193 Douglas Engelbart,194 and visionaries
such as Ted Nelson,195 have claimed from the outset that new media such as computers
and networks also have implications for our approaches to knowledge. Members of
academia and scholars have become increasingly interested in such claims, leading to a
spectrum of conclusions. At one extreme, individuals such as Derrick de Kerckhove,196
follow the technologists in assuming that the overall results will invariably be positive.
This group emphasizes the potentials of collective intelligence. This view is sometimes
shared by thinkers such as Pierre Lévy197 who also warn of the dangers of a second flood,
whereby we shall be overwhelmed by the new materials made available by the web.
Meanwhile, others have explored more nuanced assessments. Michael Giesecke,198 for
instance, in his standard history of printing (focussed on Germany), has examined in
considerable detail the epistemological implications of printing in the fifteenth and
sixteenth centuries and outlined why the advent of computers invites comparison with
Gutenberg’s revolution in printing. Armand Mattelart,199 in his fundamental studies, has
pointed out that the rise of networked computers needs to be seen as another step towards
global communications. He has also shown masterfully that earlier steps in this process,
such as the introduction of the telegraph, telephone, radio and television, were each
accompanied by more global approaches to knowledge, particularly in the realm of the
social sciences.
The present author has explored some implications of computers for museums,200
libraries,201 education202 and knowledge in general.203 In the context of museums seven
elements were outlined: scale, context, variants, parallels, history, theory and practice;
abstract and concrete; static and dynamic. Two basic aspects of these problems were also
considered. First, computers entail much more than the introduction of yet another
medium. In the past, each new innovation sought to replace former solutions: papyrus
was a replacement for cuneiform tablets; manuscripts set out to replace papyrus; and
printing set out to replace manuscripts. Each new output form required its own new input
method. Computers introduce a fundamentally new dimension into this evolution by
introducing methods of translating any input method into any output method. Hence, an
input in the form of an oral voice command can be output as speech (as in a tape
recording), but can equally readily be printed, rendered in manuscript form or potentially
even in cuneiform. Evolution is embracing, not replacing. Second,
networked computers introduce a new cumulative dimension to knowledge. In the past,
collections of cuneiform tablets, papyri, manuscripts and books were stored in libraries,
but the amount of accessible knowledge was effectively limited to the size of the largest
library. Hence knowledge was collected in many parts but remained limited by the size of
its largest part. In a world of networked computing the amount of accessible knowledge
is potentially equal to the sum of all its distributed parts.
In deference to the mediaeval tradition, we shall begin by expressing some doubts
(dubitationes) concerning the effectiveness of present-day computers. In a fascinating
recent article, Classen assessed some major trends in new media.204 He claimed that
while technology was expanding exponentially, the usefulness205 of that technology was
expanding logarithmically, and that these different curves tended to balance each other
out to produce a linear increase of usefulness with time. He concluded i) that society was
keeping up with this exponential growth in technology, ii) that in order to have
substantial improvements, especially in education, “fortunes have to be spent on R&D to
get there,” and finally iii) that “we in industrial electronics research can still continue in
our work, while society eagerly adopts all our results.”206
Dr. Classen’s review of technological progress and trends is brilliant, and we would fully
accept his second and third conclusions. In terms of his first conclusion, however, we
would offer an alternative explanation. He claims that the (useful) applications of
computers have not kept up with the exponential expansion of technology because of
inherent limits imposed by a law of usefulness. We would suggest a simpler reason: the
technology has not yet been applied. In technical terms, engineers and scientists have
focussed on ISO layers 1-6207 and have effectively ignored layer 7: applications.
Some simple examples will serve to make this point. Technologists have produced
storage devices which can deal with exabytes at a time (figure 1 in chapter 1). Yet all
that is available to ordinary users is a few gigabytes at a time. If I am only interested in
word processing this is more than sufficient. As a scholar I have a modest collection of
15,000 slides, 150 microfilms, a few thousand books and seven meters of photocopies.
For the purposes of this discussion we shall focus only on the slides. If I wished to make
my 15,000 slides available on line, even at a minimal level of 1 MB per slide, that would
be 15 gigabytes. Following the standard used at the National Gallery in Washington of 30
megabytes per image, that figure would rise to 450 gigabytes. Accordingly, a colleague in
Rome, who has a collection of 100,000 slides, would need either 100 gigabytes for a
low-resolution version or 3 terabytes for a more detailed version.
In Europe, museums tend to scan at 50 MB/image, which would bring the 100,000 slides
to 5 terabytes, while research institutions such as the Uffizi are scanning images at 1.4
gigabytes per square meter. At this resolution 100,000 images would amount to 1,400
terabytes or 1.4 petabytes. There are no machines available at a reasonable price in the
range of 450 gigabytes to 1.4 petabytes. The net result of this exercise in arithmetic is
thus very simple. As a user I cannot even begin to use the technology, so it might as well
not exist. There is no mysterious law of usefulness holding me back, simply lack of
access to the technology. If users had access to exabytes of material, then the usefulness
of these storage devices would probably go up much more than logarithmically. It might
well go up exponentially and open up undreamed-of markets for technology.
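The storage arithmetic above can be worked through in a few lines. The figures follow the text (15,000 or 100,000 slides at 1 MB, 30 MB or 50 MB per image); decimal units (1 GB = 1,000 MB) are assumed, since that is how the text's totals are reckoned.

```python
# The text's storage arithmetic, worked through explicitly.
# Decimal units assumed: 1 GB = 1,000 MB; 1 TB = 1,000,000 MB.

GB = 1_000      # megabytes per gigabyte
TB = 1_000_000  # megabytes per terabyte

def collection_size_mb(n_images: int, mb_per_image: float) -> float:
    """Total size of an image collection in megabytes."""
    return n_images * mb_per_image

# 15,000 slides at 1 MB each -> 15 GB
print(collection_size_mb(15_000, 1) / GB)    # -> 15.0
# 15,000 slides at 30 MB each (National Gallery standard) -> 450 GB
print(collection_size_mb(15_000, 30) / GB)   # -> 450.0
# 100,000 slides at 50 MB each (European museum practice) -> 5 TB
print(collection_size_mb(100_000, 50) / TB)  # -> 5.0
```

The calculation makes the text's point concrete: even modest personal collections land in the hundreds-of-gigabytes to terabytes range that, at the time of writing, no affordable machine could hold.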
Two more considerations will suffice for this brief excursus on usefulness. Faced with the
limitations of storage space at present, I am forced as a user to employ a number of
technologies: microfilm readers, slide projectors, video players (sometimes in NTSC,
sometimes in PAL), televisions, telephones, and the usual new technologies of fax
machines, computers and the Internet. All the equipment exists. It is almost impossible to
find all of it working together in the same place, and even when it does, it is well-nigh
impossible to translate materials available in one medium into another medium.
We are told of course that many committees around the world are busily working on the
standards (e.g. JPEG, MHEG, MPEG) to make such translations among media simple and
nearly automatic. In the meantime, however, all the hype in the world about
interoperability does not help me one iota in my everyday efforts as a scholar and
teacher. The net result again is that many of these fancy devices are almost completely
useless, because they do not address my needs. The incompatibility of American,
European and Japanese devices may serve someone’s notion of positioning their
country’s technology, but it does not help users at all. Hence most of us end up not
buying the latest device. And once again, if we knew that they solved our needs, their
usefulness and their use might well rise exponentially.
Finally, it is worthwhile to consider the example of bandwidth. Technologists have
recently demonstrated the first transmission at a rate of a terabit per second. A few
weeks ago at the Internet Summit a very senior figure working with the U.S. military
reported that they are presently working with 20-40 gigabits a second, and that they are
confident they can reach terabit speeds for daily operations within two years.
Meanwhile, attempts by G7 pilot project five to develop demonstration centres to make
the best products of cultural heritage accessible on an ATM network (a mere 622
Mbit/second) have been unsuccessful. A small number of persons in large cities now have
access to ADSL (1.5 Mbit/sec), while others have access to cable modems (.5
Mbit/second). Even optimistic salesmen specializing in hype are not talking about having
access to ATM speeds directly into the home anywhere in the foreseeable future. Hence,
most persons are limited to connectivity rates of .028 or .056 Mbit/second (in theory,
while the throughput is usually much lower still), which is a very long way from the
1,000,000 Mbit (i.e. a terabit) that is technically possible today.
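The gulf between these connection speeds becomes vivid when translated into transfer times, for instance for the 15 gigabytes of slides discussed above. The following sketch (an illustration added in editing; the nominal rates are those cited in the text) does the conversion:

```python
# Hours needed to transfer a payload at the nominal line rates
# mentioned in the text (megabits per second).
LINKS_MBIT_S = {
    "56k modem": 0.056,
    "ADSL": 1.5,
    "ATM": 622,
    "terabit demonstration": 1_000_000,
}

def transfer_hours(gigabytes, mbit_per_s):
    """Hours to move a payload at a given nominal rate (1 GB = 8000 Mbit)."""
    megabits = gigabytes * 1000 * 8
    return megabits / mbit_per_s / 3600

for name, rate in LINKS_MBIT_S.items():
    print(f"15 GB over {name}: {transfer_hours(15, rate):.4f} hours")
```

At modem speeds the 15 GB collection takes weeks of continuous transfer; at terabit rates it takes a fraction of a second.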
With bandwidth as with so many other aspects of technology, the simple reality is that
use in real applications by actual users has not nearly kept pace with developments in
technology. If no one has access to and chances to use the technology, if there are no
examples to demonstrate what the technology can do, then it is hardly surprising that the
so-called usefulness of the technology lags behind. We would conclude therefore that there
is no need to assert logarithmic laws of usefulness. If technology is truly made available,
its use will explode. The Internet is a superb example. The basic technology was there in
the 1960’s. It was used for over two decades by a very select group. Since the advent of
the World Wide Web, when it was made available to users in general, it has expanded
much more each year than it did in the first twenty years of its existence.
So what would happen if all the technological advances in storage capacity, processing
power, bandwidth were available for use with complete interoperability? What would
change? There would be major developments in over thirty application fields (Appendix
8). Rather than attempt to examine these systematically, however, this paper will focus
instead on some of the larger trends implicit in these changes. I shall assert that
computers are much more than containers for recording knowledge, which can then be
searched more systematically. They introduce at least seven innovations, which are
transforming our concepts of knowledge. First, they offer new methods for looking at
processes, how things are done, which also helps in understanding why they are done in
such ways. Second, and more fundamentally, they offer tools for creating numerous
views of the same facts, methods for studying knowledge at different levels of
abstraction. Connected with this is a third innovation: they allow us to examine the same
object or process in terms of different kinds of reality. Fourth, computers introduce more
systematic means of dealing with scale, which some would associate with the field of
complex systems. Fifth, they imply a fundamental shift in our methods for dealing with
age-old problems of relating universals and particulars. Sixth, they transform our
potential access to data through the use of meta-data. Seventh and finally, computers
introduce new methods for mediated learning and knowledge through agents. This paper
explores the positive consequences of these innovations and examines some of the
myriad challenges and dangers posed thereby.
2. Processes
Media also affect the kinds of questions one asks and the kinds of answers one gives to
them. The oral culture of the Greeks favoured the use of What? and Why? questions. The
advent of printing in the Renaissance saw the rise of How? questions. As storage devices,
computers are most obviously suited to answering questions concerning biography
(Who?), subjects (What?), places (Where?) and chronology (When?). But they are also
transforming our understanding of processes (How?) and hence our comprehension of
relations between theory and practice. In the past decades there has, for instance, been a
great rise in workflow software, which attempts to break down all the tasks into clearly
defined steps and thus to rationalize the steps required for the completion of a task. This
atomization of tasks was time consuming, expensive and not infrequently very artificial
in that it often presented isolated steps without due regard to context.
Companies such as Boeing have introduced augmented reality techniques to help
understand repair processes. A worker fixing a jet engine sees, superimposed on a section
of the engine, the steps required to repair it. Companies such as Lockheed are going
further: reconstructing an entire workspace of a ship’s deck and using avatars to explain
the operating procedures. This contextualization in virtual space allows users to follow all
the steps in the work process.208
More recently companies such as Xerox209 have very consciously developed related
strategies whereby they study what is done in a firm in order to understand what can be
done. In the case of Her Majesty’s Stationery Office, for example, they used VRML
models to reconstruct all the workspaces and trace the activities on the work floor. As a
result one can examine a given activity from a variety of different viewpoints: a manager,
a regular employee or an apprentice. One can also relate activities at one site with those
at a number of other sites in order to reach a more global view of a firm’s activities.
Simulation of precisely how things are done provides insights into why they are done in
that way.
In the eighteenth century, Diderot and D’Alembert attempted to record all the professions
in their vast encyclopaedia. This monumental effort was mainly limited to lists of what
was used with very brief descriptions of the processes. The new computer technologies
introduce the possibility of a new kind of encyclopaedia, which would not only record
how things were done, but could also show how different cultures perform the same tasks
in slightly or even quite different ways. Hence, one could show, for instance, how a
Japanese engineer’s approach is different from that of a German or an American
engineer. Instead of just speaking about quality one could actually demonstrate how it is
carried out.
Computers were initially static objects in isolation. The rise of networks transformed
these isolated terminals into a connected World Wide Web. More recently there
have been trends towards mobile or nomadic computing. The old notion of computers as
large, bulky objects dominating our desks is being replaced by a whole range of new
devices: laptop computers, palmtop and even wearable computers.210 This is leading to a
new vision called ubiquitous computing, whereby any object can effectively be linked to
the network. In the past each computer required its own Internet Protocol (IP) address. In
future, we are told, this could be extended to all the devices that surround us: persons,
offices, cars, trains, planes, telephones, refrigerators and even light bulbs.
Assuming that a person wishes to be reached, the network will be able to determine
whether they are at home, in their office, or elsewhere and route the call accordingly. If
the person is in a meeting the system will be able to adjust its signal from an obtrusive
ring to a simple written message on one’s portable screen, with an option to have a
flashing light in urgent cases. More elaborate scenarios will adjust automatically room
temperatures, lighting and other features of the environment to the personal preferences
of the individual. Taken to its logical conclusions this has considerable social
consequences,211 for it means that traditionally passive environments will be reactive to
users’ needs and tastes, removing numerous menial tasks from everyday life and thus
leaving individuals with more time and energy for intellectual pursuits or pure diversion.
At the international level one of the working groups of the International Standards
Organization (ISO/IEC JTC1/WG4), the SGML standards committee, is devoted to
Document Description and Processing Languages. At the level of G8, a consortium spearheaded
by Siemens is working on a Global Engineering Network (GEN).212 Autodesk is leading
a consortium of companies to produce Industry Foundation Classes, which will
effectively integrate standards for building parts such as doors and windows. Hence,
when someone wishes to add a window into a design for a skyscraper, the system will
“know” what kind of window is required. In future, it will be desirable to add to these
foundation classes both cultural and historical dimensions such that the system can
recognize the differences between a Florentine door and a Sienese door of the 1470’s or
some other period.
The Solution Exchange Standard Consortium (SEL) consists of 60 hardware, software,
and commercial companies, which are working to create an industry specific SGML
markup language for technical support information among vendors, system integration
and corporate helpdesks. Meanwhile, the Pinnacles Group, a consortium which includes
Intel, National Semiconductor, Philips, Texas Instruments and Hitachi, is creating an
industry specific SGML markup language for semiconductors. In the United States, as
part of the National Information Infrastructure (NII)213 for Industry with Special
Attention to Manufacturing, there is a Multidisciplinary Analysis and Design Industrial
Consortium (MADIC), which includes NASA, Georgia Tech, Rice, NPAC and is
working on an Affordable Systems Optimization Process (ASOP). Meanwhile,
companies such as General Electric are developing a Manufacturing Technology Library,
with a Computer Aids to Manufacturing Network (ARPA/CAMnet).214 ESI Technologies
is developing Enterprise Management Information Systems (EMIS).215 In the automotive
industry the recent merger of Daimler-Benz and Chrysler points to a new globalization. A
new Automotive Network eXchange (ANX)216 means that even competitors are sharing
ideas, a process which will, no doubt, be speeded by the newly announced automotive
consortium at MIT. A preliminary attempt to classify the roles of different interaction
devices for different tasks has recently been made by Dr. Flaig.217
As Mylopoulos et al.218 have noted, in the database world, this tendency to reduce reality
to activities and data goes back at least to the Structured Analysis and Design Technique
(SADT). It is intriguing to note that the quest for such an approach has a considerable
history. In the United States, where behaviorism became a major branch of psychology,
Charles S. Peirce claimed that: “The only function of thinking is to produce acting
habits.”219 Such ideas have been taken up particularly in Scandinavia. For instance,
Sarvimäki (1988),220 claimed that there is a continuous interaction between knowledge
and action; that knowledge is created through and in action. These ideas have more
recently been developed by Hjørland (1997).221 Some would see this as part of a larger
trend to emphasize subjective dimensions of reality in terms either of purpose
(Hjelmslev)222 or interest (Habermas).223 Meanwhile, Albrechtsen and Jacob (1998),224
have attempted to analyse work from a descriptive rather than a normative point of view.
Building on the ideas of Davenport,225 Star226 and Law,227 they have outlined an activity
theory in terms of four types of work, namely, industrialized bureaucratically regulated
work, learning network organization, craft type of individualised work and semiindependent market-driven result units.
If activities are seen as one aspect of the human condition, such an activities-based
approach makes perfect sense. If, however, such activities are deemed to be the sole area
to be studied, then one encounters the same problems familiar from a number of Marxist
theoreticians. While claiming that reality must be reduced to the visible dimensions of
practical, physical activities, they wish, at the same time, to create a conceptual,
theoretical framework which goes beyond those very limits on which they insist.
3. Views and Levels of Abstraction
One of the fundamental changes brought about by computers is increasingly to separate
our basic knowledge from views of that knowledge. Computer scientists refer to such
views as contextualization, and describe them variously in terms of modules, scopes and
scope rules.228 The importance of these views has increased with the shift towards
conceptual modelling.229 In the case of earlier media such as cuneiform, manuscripts and
books, content was irrevocably linked with a given form. Changing the form or layout
required producing a new edition of the content. In electronic media this link between
form and content no longer holds. Databases, for instance, separate the content of fields
from views of that content. Once the content has been input, it can be queried and
displayed in many ways without altering the content each time. This same principle
applies to Markup Languages for use on the Internet. Hence, in the case of Standard
Generalized Markup Language (SGML) and Extensible Markup Language (XML), the
rules for content and rules for display are separate. Similarly in the case of programming,
the use of meta-object protocols is leading to a new kind of open implementation
whereby software defined aspects are separated from user defined aspects (figure 1). An
emerging vision of network computers foresees a day when all software will be available
on line, and users will need only to state their goals to find themselves with
personally adapted tools. Linked with this vision are trends towards reusable code.230
Software Defined        User Defined
Base Program            Meta Program
Base Interface          Meta Interface
Figure 1. Separation of basic software from user defined modalities through meta-object
protocols in programming.
Related to the development of these different views of reality is the advent of
spreadsheets and data-mining techniques, whereby one can look at the basic facts in a
database from a series of views at different levels of abstraction. Once a bibliography
exists as a database, it is easy to produce graphs relating publications to time, by subject,
by city, country or by continent. In the past any one of these tasks would have comprised
a separate study. Now they are merely a different “view.”
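The idea that a grouping which once required a separate study becomes merely a different "view" can be sketched in a few lines of Python (an illustration added in editing; the bibliography records are invented):

```python
# One set of records, many "views": the same bibliography grouped by
# different fields without altering the underlying content.
from collections import Counter

bibliography = [
    {"title": "A", "year": 1997, "city": "Milan"},
    {"title": "B", "year": 1997, "city": "Berlin"},
    {"title": "C", "year": 1998, "city": "Milan"},
]

def view(records, field):
    """Count publications grouped by an arbitrary field."""
    return Counter(r[field] for r in records)

print(view(bibliography, "year"))   # publications per year
print(view(bibliography, "city"))   # the same content, a different view
```

Nothing in the stored records changes between the two queries; only the view does.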
One of the serious problems in the new electronic methods is that those designing the
systems are frequently unfamiliar with the complexities of historical knowledge. An
excellent case in point is the entity-relationship model, developed by Chen,231 which is
the basis of most relational databases and object-oriented approaches. On the surface it is
very simple. It assumes that every entity has various relationships in terms of attributes.
Accordingly a person has attributes such as date of birth, date of death and profession. In
the case of modern individuals this is often sufficient. In historical cases, however, the
situation may be much more complex. For instance, there are at least five different
theories about the year in which the painter Titian died, so we need not only these
varying dates but also the sources of these claims. Although entity-relationship models do
not cope with this, other systems with conceptual modelling do. We need new attention to
the often implicit presuppositions232 underlying software and databases, and we need to
bring professionals in the world of knowledge organisation up to date concerning
developments in databases.
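The difference between a single-valued attribute and a conceptual model that records competing claims can be sketched as follows (an editorial illustration; the class names are our own, and the dates and source labels are placeholders, not the actual five theories about Titian's death):

```python
# A conceptual-modelling sketch: instead of one "date of death"
# attribute, each claim is stored together with its source, so
# conflicting historical theories can coexist.
from dataclasses import dataclass, field

@dataclass
class AttestedFact:
    value: str    # the claimed date
    source: str   # who makes the claim

@dataclass
class Person:
    name: str
    death_dates: list = field(default_factory=list)

titian = Person("Titian")
titian.death_dates.append(AttestedFact("1576", "source A"))
titian.death_dates.append(AttestedFact("1577", "source B"))

# A plain entity-relationship attribute could hold only one of these.
for claim in titian.death_dates:
    print(f"{titian.name} died in {claim.value} according to {claim.source}")
```

The point is structural: the schema itself must allow plural, sourced values where historical evidence is plural.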
4. Scale
These developments in views and different levels of abstraction are also transforming
notions of scale. Traditionally every scale required a separate study and even a generation
ago posed serious methodological problems.233 The introduction of pyramidal tiling234
means that one can now move seamlessly from a satellite image of the earth (at a scale of
1:10,000,000) to a life-size view of an object and then through a spectrum of microscopic
ranges. These innovations are as interesting for the reconstruction of real environments
such as shopping malls and tourist sites as they are for the creation of virtual spaces such
as Alpha-World235. Conceptually it means that many more things can be related.
Systematic scales are a powerful tool for contextualization of objects.
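The mechanics of pyramidal tiling can be illustrated briefly (an editorial sketch; the tile size and function name are assumptions, since the text does not specify an implementation):

```python
# Pyramidal tiling stores an image at successive power-of-two
# reductions, so a viewer can zoom from a global overview down to
# full detail.  This counts the zoom levels needed for a given width.
from math import ceil, log2

def pyramid_levels(full_width_px, tile_px=256):
    """Number of zoom levels from a single tile up to full resolution."""
    return ceil(log2(full_width_px / tile_px)) + 1

# e.g. a 65,536-pixel-wide scan viewed through 256-pixel tiles:
print(pyramid_levels(65536))
```

Each level halves the resolution of the one below it, which is what makes the seamless movement between satellite, life-size and microscopic scales computationally cheap.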
These innovations in co-ordinating different scales are particularly evident in fields such
as medicine. In Antiquity, Galen’s description of medicine was limited mainly to words.
These verbal descriptions of organs were in general terms such that there was no clear
distinction between a generic description of a heart and the unique characteristics of an
individual heart. Indeed the approach was so generic that the organs of other animals
such as a cow were believed to be interchangeable with those of an individual.
During the Renaissance, Leonardo added illustrations as part of his descriptive method.
Adding visual images to the repertoire of description meant that one could show the same
organ from a number of different viewpoints and potentially show the difference between
a typical sample and an individual one. However, the limitations of printing at the time
made infeasible any attempts to record all the complexities of individual differences.
Today, medicine is evolving on at least five different levels. The GALEN project is
analysing the basic anatomical parts (heart, lung, liver etc.) and systematically studying
their functions and inter-relationships at a conceptual level. The Visible Human project is
photographing the entire human body in terms of thin slices, which are being used to
create Computer Aided Design (CAD) drawings at new levels of realism.
Conceptual      GALEN236
Physical        Visible Human237
Structural      OP 2000 Medically Augmented Immersive Environment (MAIE)238
Bio-Chemical    Human Genome239
Molecular
Atomic
Figure 2. Different levels of scale in the study of contemporary medicine.
In Germany, the Gesellschaft für Mathematik und Datenverarbeitung (GMD) and three
Berlin hospitals, dedicated to radiology (Virchow), pathology (Charité) and surgery
(RRK) respectively, are developing the Medically Augmented Immersive Environment
(MAIE), with models for showing structural relations among body parts
in real time. This system includes haptic simulation based on reconstructed tomographic
scans. Other projects are examining the human body at the molecular and atomic level
(figure 2). At present these projects are evolving in tandem without explicit attempts to
co-relate them. A next step will lie in integrating all this material such that one can move
at will from a macroscopic view of the body to a study of any microscopic part at any
desired scale.
In the past, anatomical textbooks typically provided doctors with a general model of the
body and idealized views of the various organs. The Visible Human is providing very
detailed information concerning individuals (three to date), which can then serve as the
basis for a new level of realism in making models. These models can then be confronted
with x-rays, ultra-sound and other medical imaging techniques, which record the
particular characteristics of individual patients.
Elsewhere, in the Medical Subject Headings (MeSH) project, a semantic net includes five
relationship classes: identity, physical, spatial, conceptual and functional, with tree
category groupings for anatomic spaces, anatomic structural systems, anatomic
substances and diseases.240 Potentially such projects could lead to a systematic linking of
our general knowledge about universals and our specialized knowledge about particulars
(see section 7 below).
A somewhat different approach is being taken in the case of the human genome project.
Individual examples are studied and on the basis of these a “typical model” is produced,
which is then used as a set of reference points in examining other individual examples.
Those deviating from this typical model by a considerable amount are deemed defective
or aberrant, requiring modification and improvement. A danger in this approach is that if
the parameters of the normal are too narrowly defined, it could lead to a new version of
eugenics, seriously decreasing the bio-diversity of the human race.241 If we are not careful
we shall succumb to believing that complexity can be resolved through the regularities of
universal generalizations rather than in the enormously varying details of individuals.
Needed is a more inductive approach, whereby our models are built up from the evidence
of all the variations.
Reality                       Nature, Man-Made World
Virtual Reality               Sutherland, Furness
Augmented Reality             Feiner, Stricker
Augmented Virtuality          Gelernter, Ishii
Double Augmented Reality242   Mankoff243
Figure 3. Basic classes of simulated reality and their proponents.244
5. Kinds of Reality
Another important way in which computers are changing our approach to knowledge
relates to new combinations of reality. In the 1960’s the earliest attempts at virtual reality
created a) digital copies of physical spaces, b) simplified digital subsets of a more
complex physical world or c) digital visualizations of imaginary spaces. These
alternatives tended to compete with one another. In the latter 90’s there has been a new
trend to integrate different versions of reality to produce both augmented reality and
augmented virtuality. As a result one can, for instance, begin with the walls of a room,
superimposed on which are the positions of electrical wires, pipes and other fixtures.
Such combinations have enormous implications for training and repair work of all kinds.
Recently, for instance, a Harvard medical doctor superimposed an image of an internal
tumour onto the head of a patient and used this as an orientation method for the
operation. (This method is strikingly similar to the supposedly science fiction operation
of the protagonist’s daughter in the movie Lost in Space). As noted elsewhere, this basic
method of superimposition can also be very fruitful in dealing with alternative
reconstructions of an ancient ruin or different interpretations of a painting’s spatial
layout. Other alternatives include augmented virtuality, in which a virtual image is
augmented, and double augmented reality, in which a real object such as a refrigerator has
superimposed on it a virtual list which is then imbued with further functions.245 (cf. figure
3).
Other techniques are also contributing to this increasing interplay between reality and
various constructed forms thereof. In the past, for instance, Computer Aided Design
(CAD) and video were fundamentally separate media. Recently Bell Labs have
introduced the principle of pop-up video, which permits one to move seamlessly between
a three-dimensional CAD version of a scene and the two-dimensional video recording of
an identical or at least equivalent scene.246 Meanwhile, films such as Forrest Gump
integrate segments of “real” historical video seamlessly within a purely fictional story.
This has led some sceptics to speak of the death of photographic veracity,247 which may
well prove to be an overreaction. Major bodies such as the Joint Photographic Experts
Group (JPEG) are working on a whole new framework for deciding the veracity of images,
which will help to resolve many of these fears.
On the positive side, these developments in interplay among different kinds of reality
introduce immense possibilities for the re-contextualization of knowledge. As noted
earlier, while viewing images of a museum one will be able to move seamlessly to CAD
reconstructions of the rooms and to videos explaining particular details. One will be able
to move from a digital photograph of a painting, through images of various layers of the
painting to CAD reconstructions of the painted space as well as x-rays and electron-microscope
images of its micro-structures. One will be able to study parallels and many
aspects of the history of the painting. A new integration of static and dynamic records
will emerge.
6. Complex Systems
The systematic mastery of scale in the past decades has lent enormous power to the zoom
metaphor, to such an extent that one could speak of Hollywoodization in a new sense.
Reality is seen as a film. The amount of detail, the granularity, depends on one’s scale.
As one moves further away one sees larger patterns; as one comes closer one notices new details.
Proponents of complex systems such as Yaneer Bar-Yam,248 believe that this zoom
metaphor can serve as a tool for explaining nearly all problems as one moves from
atomic to molecular, cellular, human and societal levels. Precisely how one moves from
physical to conceptual levels is, however, not explained in this approach.
Complex systems entail an interdisciplinary approach to knowledge, which builds on
work in artificial neural networks to explain why the whole is more than the sum of its
parts. The director of the New England Complex Systems Institute (NECSI) believes
that this approach can explain human civilization:
One system particularly important for the field of complex systems is human
civilization: the history of social and economic structures and the emergence of an
interconnected global civilization. Applying principles of complex systems to
enable us to gain an understanding of its past and future course is ultimately an
important objective of this field. We can anticipate a time when the implications
of economic and social developments for human beings and civilization will
become an important application of the study of complex systems.249
Underlying this approach is an assumption that the history of civilization can effectively
be reduced to a history of different control systems, all of them hierarchically structured.
This may well provide a key to understanding the history of many military, political and
business structures, but can hardly account for the most important cultural expressions. If
anything the reverse could well be argued. Greece was more creative than other cultures
at the time because it imposed less hierarchical structures on artists. Totalitarian regimes,
by contrast, typically tolerate considerably less creativity, because most of these
expressions are invariably seen as beyond the parameters of their narrow norms. Hence,
complex systems with their intriguing concepts of emergence, may well offer new
insights into the history of governments, corporations, and other bureaucracies. They do
not address a fundamental aspect of creativity, which has to do with the emergence of
new individuals and particulars, non-controlled elements of freedom, rather than products
of a rule based system.
7. Individuals and Particulars
As was already suggested above, one of the central questions is how we define
knowledge. Does knowledge lie in the details of particulars or in the universals based on
those details? The debate is as old as knowledge itself. In Antiquity, Plato argued for
universals; Aristotle insisted on particulars. In the Middle Ages, the debate continued,
mainly in the context of logic and philosophy. While this debate often seemed as if it
were a question of either/or, the rise of early modern science made it clear that the
situation is more complex. One needs particular facts. But in isolation these are merely
raw data. Lists of information are one step better. Yet scientific knowledge is concerned
with laws, which are effectively summaries of those facts. So one needs both the
particulars as a starting point in order to arrive at more generalized universals, which can
then explain the particulars in question.
Each change in media has affected this changing relationship between particulars and
universals. In pre-literate societies, the central memory unit was limited to the brain of an
individual and oral communication was limited to the speed with which one individual
could speak to another. The introduction of various written media such as cuneiform,
parchment, and manuscripts meant that lists of observations were increasingly accessible.
Printing helped to standardize this process and introduced the possibility of much more
systematic lists. The number of particular observations on which universal claims and
laws could be established thus grew accordingly. While there were clearly other factors
such as the increased accuracy of instruments, printing made Tycho Brahe’s observations
more accessible than those made at the court of Alphonse the Wise and played its role
in making Kepler’s new planetary laws more inclusive and universal.
The existence of regular printed tables greatly increased the scope of materials which
could readily be consulted. Recognizing patterns in the data and reaching new levels of
synthesis, however, still depended entirely on the memory and integrating power of the
individual human brain. Once these tables are available on networked computers,
the memory capacities are expanded to the size of the computer. The computer can also
be programmed to search both for consistencies and anomalies. So a number of the
pattern discoveries, which depended solely on human perception, can now be automated
and the human dimension can be focussed on discerning particularly subtle patterns and
raising further questions.
In the context of universities, the arts and sciences have traditionally been part of a single
faculty. This has led quite naturally to many comparisons between the arts and the
sciences, and even references to the art of science or the science of art in order to
emphasize their interdependence. It is important to remember, however, that art and
science differ fundamentally in terms of their approach to universals and particulars.
Scientists gather and study particulars in order to discern some underlying universal and
eternal pattern. Artists gather and study examples in order to create a particular object,
which is unique, although it may be universal in its appeal. Scientists are forever revising
their model of the universe. Each new discovery leads them to discard some piece or even
large sections of their previous attempt. Notwithstanding Newton’s phrase that he was
standing on the shoulders of giants, science is ultimately not cumulative in the sense of
keeping everything of value from an earlier age. Computers, which are only concerned
with showing us the latest version of our text or programme, are a direct reflection of this
scientific tradition.250
In this sense, art and culture are fundamentally different in their premises. Precisely
because they emphasize the uniqueness of each object, each new discovery poses no
threat to the value of what came before. Most would agree, for instance, that the Greeks
introduced elements not present in Egyptian sculpture, just as Bernini introduced
elements not present in Michelangelo, and he, in turn, introduced elements not present in
the work of Donatello. Yet it would be simplistic to deduce from this that Bernini is
better than Michelangelo, or he in turn better than Donatello. If later were always better, it
would be sufficient to know the latest artists’ work in the way that scientists feel they
only need to know the latest findings of science. The person who knows about the
Egyptians, Greeks, Donatello, Michelangelo and Bernini is much richer than one who
knows only the latest phase. Art and culture are cumulative. The greatest scientist
succeeds in reducing an enormous range of particular instances to the fewest possible laws,
which, to the best of their knowledge, are unchanging. The most cultured individual
succeeds in bringing to light the greatest number of unique examples of expression as
proof of creative richness of the human condition. These differing goals of art and
science pose their own challenges for our changing understanding of knowledge.
Before the advent of printing, an enterprising traveller might have recorded their
impressions of a painting, sculpture or other work of art they encountered in the
form of a verbal description or, at best, a fleeting sketch. In very rare cases they might
have attempted a copy. The first centuries after Gutenberg saw no fundamental changes
to this procedure. In the nineteenth century, lithographs of art gradually became popular.
In the late nineteenth century, black and white photographs made their debut.251 In the
latter part of the twentieth century colour images gradually became popular.
Even so it is striking to what extent the horizons of authors writing on the history of their
subject remained limited to the city where they happened to be living. It has often been
noted, for example, that Vasari’s Lives of the Artists focussed much more on Florence
than other Italian cities such as Rome, Bologna, Milan or Urbino. At the turn of the
century, art historians writing in Vienna tended to cite examples found in the
Kunsthistorisches Museum, just as others since living in Paris, London or New York
have tended to focus on the great museum that was nearest to home. The limitations of
printing images meant that they could give only a few key masterpieces by way of
example. From all this emerged a number of fascinating glimpses into the history of art,
which were effectively summaries of the dominant taste in the main halls of the great
galleries. They did not reflect the up to 95% of collections that are typically in storage. Nor
did they provide a serious glimpse of art outside the major centres.
A generation ago scholars such as Chastel252 pointed to the importance of studying the
smaller cities and towns in the periphery of such great cities: to look not only at Milan
but also at Pavia, Crema, Cremona, Brescia and Bergamo. Even so, in the case of Italy,
for instance, our picture is still influenced by Vasari’s emphases from over four centuries
ago. Everyone knows Florence and Rome. But who is aware of the frescoes at Bominaco
or Subiaco, of the monasteries at Grottaferrata and Padula, or the architecture of Gerace,
Urbania or Asolo? The art in these smaller centres does not replace, nor does it even
pretend to compete with, the greatest masterpieces, which have usually made their way to
the world’s chief galleries. What they do, however, is to provide us with a much richer
and more complex picture of the variations in expression on a given theme. In the case of
Piero della Francesca, for example, who was active for much of his life in San Sepolcro,
Arezzo and Urbino, we discover that his masterpieces actually originated in smaller
centres although they are now associated with great cities (London, Paris, Florence). In
other cases we discover that the smaller centres do not simply copy the great
masterpieces. They adapt familiar themes and subjects to their own tastes. The narrative
sequences at San Gimignano, Montefalco, Atri add dimensions not found even in
Florence or Rome.
To be sure, some of this richness has been conveyed by the medium of printing, through
local guidebooks and tourist brochures. In these, however, the works of art are typically
shown in isolation, without any reference to more famous parallels in the major centres.
Computers will fundamentally change our approach to this tradition. First they will make
all these disparate materials accessible. Hence a search for a theme such as the Virgin and
Child will not only bring up the usual examples by Botticelli or Raphael but also those in
museums in L’Aquila, Padua, and Volterra (each of which was a centre in a
previous era). Databases will allow us to study developments in terms of chronology as
well as by region and by individual artist. Filtering techniques will allow us to study the
interplay of centre and periphery in new ways.
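A minimal sketch of such a database search, using Python's built-in sqlite3 module; the schema, table name and records are invented for illustration.

```python
# A hedged sketch of the kind of search described above: retrieving all
# treatments of a theme in chronological order, so that works from smaller
# centres appear beside the famous examples. All data here is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE artworks
                (artist TEXT, theme TEXT, city TEXT, region TEXT, year INTEGER)""")
conn.executemany("INSERT INTO artworks VALUES (?, ?, ?, ?, ?)", [
    ("Botticelli", "Virgin and Child", "Florence", "Tuscany", 1480),
    ("Raphael",    "Virgin and Child", "Rome",     "Lazio",   1505),
    ("Anonymous",  "Virgin and Child", "L'Aquila", "Abruzzo", 1350),
    ("Anonymous",  "Virgin and Child", "Volterra", "Tuscany", 1420),
])

rows = conn.execute(
    "SELECT artist, city, year FROM artworks WHERE theme = ? ORDER BY year",
    ("Virgin and Child",)).fetchall()
for artist, city, year in rows:
    print(year, city, artist)
```

Adding a `WHERE region = ?` clause would give the filtering by region mentioned above; the point is that centre and periphery sit in one searchable table.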
More importantly, we shall be able to trace much more fully the cumulative dimensions
of culture, retaining the uniqueness of each particular object. In the past, each of the
earlier media precluded serious reproductions of the original objects. As noted above,
colour printing has only been introduced gradually over the past half-century. Even then,
a single colour image of a temple or church can hardly do justice to all its complexities.
The advent of virtual and augmented reality, and the possibility of stereo-lithographic
printing, means that a whole new set of tools for understanding culture is emerging. They
will not replace the value, and sometimes the absolute necessity, of studying some of the
originals in situ; but if we always had to visit everything we wished to study in its
original place, the scope of our study would be very limited indeed.
Earlier media typically meant that one emphasized a single example, often forgetting that it
represented a much larger phenomenon. The Coliseum in Rome is an excellent case in
point. History books typically focus on this amphitheatre and tell us nothing of the great
number of amphitheatres spread throughout the Roman empire. Networked computers
can make us aware of all known examples from Arles and Nîmes in France to El-Djem in
Tunisia and Pula in Croatia. This new encyclopaedic approach means that we shall have a
much better understanding of how a given structure spreads throughout a culture to form
a significant element in our cultural heritage such as the Greek temple, the Romanesque
and Gothic Church, and the Renaissance villa. It means that we shall also have a new
repertoire of examples for showing that, even as these styles spread, each new execution of the
principle introduced local uniqueness. Hence the cathedrals at St. Denis, Chartres, Notre
Dame, Cologne, Magdeburg, Bamberg, Ulm and Burgos are all Gothic, and yet none is a
simple copy of another.
A generation ago when Marshall McLuhan coined the phrase “the global village”, some
assumed that the new technologies would invariably take us in the direction of a world
where every place was more or less the same: where Hiltons and McDonalds would
spread throughout an increasingly homogenized planet. This danger is very real. But as
critical observers such as Barber have noted,253 the new technologies have been
accompanied by a parallel trend in the direction of regionalism and new local awareness.
The same technologies that make global corporations possible are also fuelling
tremendous new efforts in the realms of citizen participation groups and
local democracy. Networked computers may link us together with persons all over the
world as if we were in a global village but this does not necessarily mean that every
village has to look the same. Indeed, the more the mass-media try to convince us that we
are all inhabitants of a single interdependent ecosystem, the more individuals are likely to
articulate how and even why their particular village is different from others. In this
context, the new access to individuals and particulars introduced by networked
computers, becomes much more than an interesting technological advance. It provides a
key to maintaining the cultural equivalent of bio-diversity, which is essential for our well-being and development in the long run.
In themselves the particulars are, of course, only lists and as such merely represent data
or, at best, information. Hence they should be seen as starting points rather than as results
per se. Their vital importance lies in vastly increasing the sample, the available sources
upon which we attempt to draw conclusions. The person who has access to only one book
in art history will necessarily have a much narrower view than someone who is able to
work with the resources of a Vatican or a British Library. In the past, scholars have often
spent much more time searching for a document than actually reading it. In future,
computers will greatly lighten the burden of finding. Hence, scholarship will focus
increasingly on determining the veracity of sources, weighing their significance,
interpreting and contextualizing sources, and learning to abstract from the myriad details
which they offer, some larger patterns of understanding. Access to vast new quantities of
particulars will lead to a whole new series of universal abstractions.
Implicit in the above discussion are larger issues of knowledge organization that go far
beyond the scope of this paper. We noted that while the arts and sciences typically share
the same faculty and are in many ways interdependent, there are two fundamental ways
in which they differ. First, the sciences examine individual facts and particulars in order
to arrive at new universal summaries of knowledge. The arts, by contrast, are concerned
with creating particulars, which are unique in themselves. They may be influenced by or
even inspired by other particular works, but they are not necessarily universal
abstractions in the way that the sciences are. Second, and partly as a result thereof, the
sciences are not cumulative in the same way that the arts and culture are. In the sciences
only the latest law, theory, postulate etc. is what counts. In the arts, by contrast, the
advent of Picasso does not make Rubens or Leonardo obsolete, any more than they made
Giotto or Phidias obsolete. The arts and culture are defined by the cumulative sum of our
collective heritage, all the particulars collected together, whereas the sciences are
concerned only with the universals abstracted from the myriad particulars they
examine.254
It follows that, while both the arts and sciences have a history, these histories ultimately
need to be told in very different ways. In the arts, that history is about how we learned to
collect and remember more and more of our past. Some scholars have claimed, for
instance, that we know a lot more about the Greeks than Aristotle himself. In the
sciences, by contrast, that history is at once about how scientists developed ever better
instruments with which to make measurable that which is not apparent to the naked eye,
and how they used the results of their observations to construct ever more generalized,
universal, and at the same time, testable theories. To put it simply, we need very different
kinds of histories to reflect these two fundamentally different approaches to universals
and particulars, which underlie fundamental differences between the arts and sciences.
With the advent of networked computers the whole of history needs to be rewritten: at
least twice, a process that will continue in future.
8. Now and Eternity
Not unrelated to the debates concerning particulars and universals are those connected
with the (static) fine arts versus (semi-dynamic) arts such as sculpture and architecture255
and (dynamic) performance arts such as dance, theatre, and music. Earlier media such as
manuscripts or print were at best limited to static media. They could not hope to
reproduce the complexities of dynamic performance arts. Even the introduction of video
offered only a partial solution to this challenge, insomuch as it reduced the three-dimensional field to a single point of view projected onto a two-dimensional surface.
Hence, if a video captured a frontal view of actors or dancers, their backs were necessarily
occluded. These limitations of recording media have led perforce to a greater emphasis
on the history of fine arts such as painting than on the semi-dynamic arts such as
sculpture and architecture or the dynamic arts such as dance and theatre.256
These limitations have had both an interesting and distorting effect on our histories of
culture. It has meant, for instance, that we traditionally knew a lot more about the history
of static art than dynamic art: a lot more about painting than about dance, theatre or
music. It has meant that certain cultures, such as the Hebrew tradition, which emphasize
the now of dynamic dance and music over the eternal static forms of sculpture and
painting, were under-represented in traditional histories of culture. Conversely, it has
meant that the recent additions of film, television, video and computers have focussed
new attention on the dynamic arts, to the extent of undermining our appreciation of the
enduring forms. Our visions of eternal art are being replaced by a new focus on the now.
From a more global context these limitations have also had a more general, subtle,
impact on our views of world culture. Those strands, which focussed on the static, fine
arts were considered the cornerstones of world cultural development. Since this was more
so in the West (Europe, the Mediterranean and, more recently, North America),
sections of Asia Minor (Iran, Iraq, Turkey), and certain parts of the Far East (China,
Japan and India),257 these dominated our histories of art. Countries with strong traditions
of dance, theatre and other types of performance (including puppet theatre, shadow
theatre and mime) such as Malaysia, Java and Indonesia were typically dismissed as
being uncultured. The reality of course was quite different. What typically occurred is
that these cultures took narratives from static art forms such as literature and translated
them into dynamic forms. Hence, the stories of an Indian epic, the Ramayana, made their
way through Southeast Asia in the form of theatre, shadow puppet plays, dances and the
like. Scholars such as Mair258 have rightly drawn attention to the importance of these
performance arts (figure 4).
Ultimately, however, the challenge goes far beyond simple dichotomies of taste, namely,
whether one prefers the static, eternal arts of painting to the dynamic, now, arts of dance
and music. A more fundamental challenge will lie in re-writing the whole of our history
of art and culture to reflect how these seeming oppositions have in fact been
complementary to one another. In the West, for instance, we know that much
Renaissance and Baroque art was based on Ancient mythology, either directly via
books such as Ovid’s Metamorphoses or indirectly via Mediaeval commentaries on these
myths. We need a new kind of hyper-linking to connect all these sources with the
products, which they inspired. Such hyperlinks will be even more useful in the East,
where the same mythical story may well be translated into half a dozen art forms ranging
from static (sculpture and painting) to dynamic (dance, mime, shadow theatre, puppet
theatre, theatre). From all this there could emerge new criteria for what constitutes a
seminal work: for it will become clear that a few texts have inspired works over the
whole gamut of cultural expression. The true key to eternal works lies in those which
affect everything from now to eternity.
9. Meta-Data
How is the enormity of this challenge to be dealt with in practice? It is generally assumed
that meta-data offers a solution. The meta concept is not new. It played a central role in
the meta-physics of Aristotle. In recent years, with the rise of networked computing,
meta has increasingly become a buzzword. There is much discussion of meta-data, meta-databases, and meta-data dictionaries. There is a Metadata Coalition,259 a Metadata
Council260 and even a Metadata Review.261 Some now speak of meta-meta-data in ways
reminiscent of those who spoke of the meaning of meaning a generation ago.
Etoki (Japan)
Par (India)
Parda Da (Iran)
Pien Wen (China)
Wayang Beber (Malaysia)

Figure 4. Examples of narrative based performance art in various countries.
The shift in attention from data to meta-data262 and meta-meta-data is part of a more
fundamental shift in the locus of learning in our society. In Antiquity, academies were the
centres of learning and repositories of human knowledge. In the Latin West, monasteries
became the new centres of learning and remained so until the twelfth century, when this
locus began to shift towards universities. From the mid-sixteenth to the mid-nineteenth
centuries universities believed they had a near monopoly on learning and knowledge.
Then came changes. First, there was a gradual shift of technical subjects to polytechnics.
New links between professional schools (e.g. law, business) and universities introduced
more short-term training goals while also giving universities a new lease on life.
The twentieth century brought corporate universities of which there are now over 1,200.
It also brought national research centres (NRC, CNR, GMD), military research
laboratories (Lawrence Livermore, Los Alamos, Argonne, Rome), specialized institutes
(such as Max Planck and Fraunhofer in Germany) and research institutes funded by large
corporations (AT&T, General Motors, IBM, Hitachi, Nortel). Initially the universities
saw themselves as doing basic research. They defined and identified the problems the
practical consequences of which would then be pursued by business and industry. In the
past decades all this has changed. The research staffs of the largest corporations far
outnumber those of the greatest universities. AT&T’s Lucent Technologies has 24,000 in
its Bell Laboratories alone and some 137,000 in all its branches. Hitachi has over 34,000,
i.e. more researchers than the number of students at many universities. Nortel has over
17,000 researchers. The cumulative information produced by all these new institutions
means that traditional attempts to gather (a copy of) all known knowledge and
information in a single location are no longer feasible. On the other hand, a completely
distributed framework is not feasible either. A new framework is needed, and meta-data
seems to be the new holy grail. Gaining some understanding of this topic and of the scope
of the international efforts already underway will require a detour that entails near-lists of
information. Those too impatient with details are invited to skip the next twelve pages, at
which point we shall return to the larger framework and questions.
It is generally accepted that meta-data is data about data,263 or key information about
larger bodies of information. Even so, discussions of meta-data are frequently confusing
for several reasons. First, they often do not define the scope of information being
considered. In Internet circles, for instance, many authors assume that meta-data refers
strictly to Internet documents, while others use it more generally to include the efforts of
publishers and librarians. Secondly, distinctions need to be made concerning the level of
detail entailed by the meta-data. Internet users, for instance, are often concerned only
with the most basic information about a given site.
In extreme cases, they believe that this can be covered through Generic Top Level
Domain Names (GTLD), while publishers are convinced that some kind of unique
identifying number will be sufficient for these purposes (see figure 5). Present-day search
engines such as AltaVista and Lycos also use a minimal approach to these problems,
relying only on a title and a simple tag with a few keywords serving as the meta-data.
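The minimal approach of such engines can be illustrated with Python's standard html.parser: extracting just the title and the keywords tag from a page. The sample page below is invented.

```python
# A sketch of the minimal meta-data that early search engines indexed:
# the page title and the keywords META tag. The sample page is invented.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.keywords = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "keywords":
            self.keywords = [k.strip() for k in attrs.get("content", "").split(",")]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

page = """<html><head><title>Frescoes of Bominaco</title>
<meta name="keywords" content="fresco, Abruzzo, mediaeval art"></head>
<body>...</body></html>"""

parser = MetaExtractor()
parser.feed(page)
print(parser.title)     # Frescoes of Bominaco
print(parser.keywords)  # ['fresco', 'Abruzzo', 'mediaeval art']
```

Everything else on the page is, for such an engine, invisible; hence the pressure for richer description schemes below.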
Basic Description

Internet and Computer Software
Generic Top Level Domain Names (GTLD)264
Hypertext Transfer Protocol (http)
Multipurpose Internet Mail Exchange (MIME)
Uniform Resource Name (URN)
Uniform Resource Locator (URL)265

International Standards Organization (ISO)
International Standard Book Numbering, ISO 2108:1992 (ISBN)266
International Standard Music Number, ISO 10957:1993 (ISMN)267
International Standard Technical Report Number (ISRN)268
Formal Public Identifiers (FPI)269

National Information Standards Office (NISO)
Serial Item and Contribution Identifier (SICI)
International Standard Serials Number (ISSN)270

Publishers
Confédération Internationale des Sociétés d’Auteurs et Compositeurs (CISAC)271
Common Information System (CIS)
International Standard Works Code (ISWC)
Works Information Database (WID)
Global and Interested Parties Database (GIPD)
International Standard Audiovisual Number (ISAN)272
International Federation of the Phonogram Industry (IFPI)
International Standard Recording Code (ISRC)273
Cf. Other Standard Identifier (OSI)274
Universal Product Code (UPC)
International Standard Music Number (ISMN)
International Article Number (IAN)
Serial Item and Contribution Identifier (SICI)
Elsevier
Publisher Item Identifier (PII)275
Corporation for National Research Initiatives and International DOI Foundation
Digital Object Identifier (DOI)276

Libraries
Persistent Uniform Resource Locator (PURL)277
Handles

Universities
Uniform Object Identifier (UOI)278
Object ID

Figure 5. Major trends in meta-data with respect to basic identification.
Summary Description

Internet
W3 Consortium
Hyper Text Markup Language: Header META Tag279 (HTML Header)
Hyper Text Markup Language Appendage (HTML Appendage)
Resource Description Format (RDF)
Extensible Markup Language (XML)
Protocol for Internet Content Selection (PICS)
Uniform Resource Identifier (URI)
Uniform Resource Characteristics (URC)
Universally Unique Identifiers280 (UUID)
Globally Unique Identifiers (GUID)
Whois++ Templates
Internet Anonymous FTP Archives Templates (IAFA)281
Linux Software Map Templates (LSM)
Harvest Information Discovery and Access System
Summary Object Interchange Format (SOIF)282
Netscape
Meta Content Framework (MCF)283
Microsoft
Web Collections284

Libraries
International Federation of Library Associations (IFLA)285
International Standard Bibliographic Description (ISBD)286
Electronic Records ISBD (ER)
Dublin Core
Resource Organization and Discovery in Subject Based Services (ROADS)
Social Science Information Gateway (SOSIG)
Medical Information Gateway (OMNI)287
Art, Design, Architecture, Media (ADAM)

Full (Library Catalogue Record) Description

Libraries
Z.39.50
Machine Readable Record288 with many national variants (MARC)289
Other Catalogue formats summarized in Eversberg290 (e.g. PICA, MAB)

Full Text

Libraries and Museums
Standard Generalized Markup Language (SGML)291
Library of Congress Encoding Archival Description (LC EAD)292
Text Encoding Initiative (TEI)
Consortium for Interchange of Museum Information (CIMI)

Figure 6. Major trends in meta-data with respect to more complete description.
Others, particularly those in libraries, feel that summary descriptions, full library
catalogue descriptions or methods for full-text description are required. Meanwhile, some
are convinced that, while full-text analysis or at least proper cataloguing methods are very
much desirable, it is not feasible to subject the enormous quantity of materials available on the web
to rigorous methods requiring considerable professional training. For these
the Dublin Core is seen as a pragmatic compromise (figure 6). As can be inferred from
the lists above, there are a great number of initiatives with common goals, often working
in isolation, sometimes even ignorant of the others’ existence. Nonetheless, a number of
organizations are working on integrated solutions for meta-data. What follows is by no
means comprehensive. Gilliland-Swetland, for instance, has recently identified five
different kinds of meta-data: administrative, descriptive, preservation, technical and
use.293 We shall begin by examining four crucial players. While they are presented separately, it is
important to recognize that there are increasing synergies among these players
and their solutions, which are to a certain extent competing with one another.
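The pragmatic appeal of the Dublin Core lies in its small, fixed element set, which non-specialists can fill in. A sketch in Python of checking a record against the fifteen standard elements; the sample record itself is invented.

```python
# A sketch of the Dublin Core compromise: fifteen simple elements.
# The element names are the standard set; the sample record is invented.

DUBLIN_CORE_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}

def validate(record):
    """Return any keys in a record that are not Dublin Core elements."""
    return sorted(set(record) - DUBLIN_CORE_ELEMENTS)

record = {
    "title": "Frontiers in Conceptual Navigation for Cultural Heritage",
    "creator": "Kim H. Veltman",
    "publisher": "Ontario Library Association",
    "date": "2000",
    "shelfmark": "Z699.V45",  # invented key, not a Dublin Core element
}

print(validate(record))  # ['shelfmark']
```

The simplicity is deliberate: a record this small can be produced without professional cataloguing training, which is precisely the compromise described above.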
i) Internet Engineering Task Force (IETF)
The IETF, which is directly linked with the Internet Society, is active on a great number
of fronts. At present, sites on the World Wide Web typically have a Uniform Resource
Locator (URL). These suffer from at least two basic problems: i) they often change
location and ii) there may be several mirror sites for the same material. The IETF has
been working on a more comprehensive approach:
Resources are named by a URN (Uniform Resource Name), and are retrieved by
means of a URL (Uniform Resource Locator). Describing the resource for
purposes of discovery, as well as making the binding between a resource's name
and its location(s) is the role of the URC (Uniform Resource Characteristic).
The purpose or function of a URC is to provide a vehicle or structure for the
representation of URIs [Uniform Resource Identifiers] and their associated meta-information.294
The precise meaning of these terms is not as clear as one might wish. Weider,295 for
instance, calls Uniform Resource Names (URNs)296 the equivalent of an ISBN
for electronic resources, whereas Iannella calls them a naming method. As for Uniform
Resource Characteristics (URC), Iannella calls them meta-data, whereas Ron
Daniels297 gives them quite a different take. Similarly, the exact nature and function of the
Uniform Resource Identifiers (URI) has been the subject of considerable debate, and at a
meeting in Stockholm (September 1997) the IETF URI committee was officially
disbanded. Subsequently, the W3 Consortium has taken up the problem (see below).
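The intended division of labour among these terms can be sketched as a toy registry: the URN names a resource, the URLs locate its copies, and a URC-like entry binds the name to its locations and meta-information. The registry contents and function names below are invented.

```python
# A sketch of the URN/URL/URC division of labour described in the
# quotation above. The registry, URNs and URLs are all invented examples.

registry = {
    "urn:example:isbn:0-00-000000-0": {
        "meta": {"title": "Sample Monograph", "format": "text/html"},
        "urls": [
            "http://mirror-a.example.org/monograph.html",
            "http://mirror-b.example.net/monograph.html",
        ],
    },
}

def resolve(urn):
    """URN-to-URL mapping: return all known locations for a name."""
    entry = registry.get(urn)
    return entry["urls"] if entry else []

def describe(urn):
    """The URC role: return the meta-information bound to a name."""
    entry = registry.get(urn)
    return entry["meta"] if entry else {}

print(resolve("urn:example:isbn:0-00-000000-0"))
print(describe("urn:example:isbn:0-00-000000-0")["title"])
```

A name that survives even when its mirror sites move or multiply is the point of the exercise: only the registry entry changes, never the URN.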
Meanwhile, URNs still need to be mapped back to a series of disparate URLs. To this end
the IETF is exploring at least four methods of URN to URL Mapping (Resource
Discovery) and URC298 using http:
i) Domain Name Server (dns)299
ii) x-Domain Name Server 2 (x-dns-2) with trivial URC syntax300
iii) SGML designed to interoperate with the trivial URC scenario301
iv) Path, same as 2 above except that it is hierarchically arranged.302
A fifth method, Handle, is being explored by ARPA. Ultimately the technical details of
these competing schemes are less important than the result that they promise: a framework
which will allow various sources to interoperate. It is noteworthy that institutions around
the world are working on these challenges. The Distributed Systems Technology Centre (DSTC)
in Brisbane has a Basic URN Service for the Internet (BURNS) project303 and The URN
Interoperability Project304 (TURNIP), while Earth Observation at the Joint Research
Centre (JRC) has a URN Resolver Experiment305 as part of its European Wide Service
Exchange (EWSE) initiative. Meanwhile the IETF is exploring Uniform Resource
Agents (URA's)306:
as a means of specifying composite net-access tasks. Tasks are described as
"composite" if they require the construction and instantiation of one or more
Uniform Resource Locators (URL's) or Uniform Resource Names (URN's),
and/or if they require transformation of information returned from instantiating
URL's/URN's.307
Precisely how all these initiatives should be integrated is still a matter of conjecture. For
example, the Internet Anonymous File Transfer Protocol Archives Working Group
(IAFA),308 initially worked on Templates for Internet data. This became a new group
called Integration of Internet Information Resources Working Group (IIIR).309 This group
also worked toward Query Routing Protocol (QRP), which they abandoned in favour of
working on a Structured Text Interchange Format (STIF).310 More significantly, they also
set out to integrate WAIS, ARCHIE, and Prospero into a Virtually Integrated Information
Service (VUIS). To this end they introduced four Requests for Comments.311 Of these,
the Integrated Internet Information Service (IIIS) foresees the integration of some of the
major types of information used on the internet (figure 7):
[Diagram: clients such as Gopher, WAIS, WWW, Archie and others connect through a
resource discovery system, perhaps based on Whois++, and a Uniform Resource Name to
Uniform Resource Locator mapping system, perhaps based on Whois++ or X.500, to
individual resources, each paired with a transponder.]

Figure 7. Basic scheme from RFC 1727 showing how various protocols would be
integrated using Whois++ and X.500.
[Diagram: a client speaks a front-end protocol (Whois++, PH or LDAP) to a
corresponding indexing object (Whois++, PH or LDAP), which in turn reaches a database
backend through an indexing or query protocol such as SQL, an indexer API, Z39.50 or
GNU DBM (GDBM).]

Figure 8. Basic scheme concerning the Common Indexing Protocol (CIP).
Another attempt by the IETF at creating an integrated strategy for meta-data on the
internet is their Common Indexing Protocol312 (CIP), which foresees a combination of
four elements: a client, a protocol for the front-end, an indexing object and a database
backend or query protocol (figure 8; cf. Appendix 9, which provides a glossary of some of
the key technical terms). While undoubtedly essential, such attempts are focussed mainly on
information available on the Internet and do not yet address more complex challenges of
other knowledge available in museums and libraries.
Work is also progressing on an Application Configuration Access Protocol (ACAP, RFC
2244).313 Meanwhile other groups within the IETF are addressing more wide-ranging
solutions. One group, for instance, is working on World Wide Web Distributed
Authoring and Versioning314 (WebDAV), which will deal with meta-data, name space
management, overwrite prevention and version management, and has become part of the
W3’s Resource Description Framework (RDF, see below).
ii) World Wide Web Consortium (W3) 315
If the IETF is the chief body concerned with developing a pipeline for the Internet, the
W3 Consortium is the main body devoted to integrating meta-data with respect to content
on the Internet. It is, for instance, developing a convention for embedding meta-data in
HTML.316 When the IETF committee working on Uniform Resource Identifiers (URI)
was disbanded for want of agreement, the problem was taken up by the W3C, which is
tackling all the existing addressing schemes.317 The result of these efforts will be to create
a universal solution for the stopgap measures outlined above in figure 7.
One of the key activities of the W3 Consortium has been in the context of markup
languages. As was noted earlier, languages such as Standard Generalized Markup
Language (SGML),318 helped the aims of meta-data by separating form from content:
separating different views or presentation methods from the underlying information. The
advent of Hyper Text Markup Language (HTML)319 as an interim pragmatic solution
temporarily obscured this distinction. Since then the consortium has been working on a
subset of SGML, which is adequate for dealing with simpler documents and re-establishes the distinction between form and content. This Extensible Markup Language
(XML)320 is also being submitted to the ISO (10179:1996).
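The separation of form from content that XML re-establishes can be illustrated with Python's standard xml.etree.ElementTree: the same element tree can be rendered in different ways by different "style sheets". The record and element names below are invented.

```python
# XML keeps content separate from presentation: one record, many views.
# A minimal sketch using the standard library; the record is invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<artwork>
  <artist>Piero della Francesca</artist>
  <title>Baptism of Christ</title>
  <origin>San Sepolcro</origin>
</artwork>
""")

# One "view" of the content: a citation line. A second style sheet could
# render the same element tree as a table row or a catalogue card.
citation = "{}, {} ({})".format(
    doc.findtext("artist"), doc.findtext("title"), doc.findtext("origin"))
print(citation)  # Piero della Francesca, Baptism of Christ (San Sepolcro)
```

The markup says nothing about fonts or layout; deciding how the record looks is left entirely to the style sheet, which is the distinction HTML had temporarily obscured.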
It is foreseen that XML will form a basis to which one will add Cascading Style Sheets
(CSS)321 as part of a Document Style Semantics and Specification Language (DSSSL).322
Similarly one can then add specialized markup languages, description languages and
formats (figure 9).
Markup Languages
  Chemical Markup Language (CML)
  Handheld Device Markup Language (HDML)
  Mathematical Markup Language (MML)
  Precision Graphics Markup Language (PGML)323
Description Languages
  Hardware Description Language (HDL)
  Web Interface Description Language (WIDL)
Formats
  Channel Definition Format (CDF)
  Resource Description Format (RDF)

Figure 9. Special markup and description languages and formats linked with XML.
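The separation of form from content that XML re-establishes can be illustrated schematically: one and the same structured record can be given quite different presentations. The document and the two renderings below are invented for illustration.

```python
# Sketch: the same XML content rendered two ways, because structure
# and presentation are kept separate. The record is invented.
import xml.etree.ElementTree as ET

record = """<book>
  <title>Underweysung der Messung</title>
  <creator>Albrecht Duerer</creator>
</book>"""

tree = ET.fromstring(record)

# Two "style sheets": each maps the same elements to a different view.
as_citation = f"{tree.findtext('creator')}, {tree.findtext('title')}"
as_heading = tree.findtext("title").upper()

print(as_citation)
print(as_heading)
```

A different view of the same information requires only a different rendering rule; the underlying content is untouched.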
XML will serve as the underlying structure for a comprehensive scheme,324 which
includes Protocol for Internet Content Selection (PICS), Digital Signatures (Dsig),
Privacy Information (P3P) within a Resource Description Framework (RDF). PICS
initially began as a means of restricting access for children to pornographic and other
dangerous contents. PICS is evolving into a common platform for labelling online
resources and a system for describing content using a restricted vocabulary. The PICS
labels (metadata) for Internet resources325 have five aims: 1) Resource Descriptive
Schemas; 2) Organizational Management; 3) Discovery and Retrieval; 4) Intellectual
Property Rights; and 5) Privacy Protection Tasks.
PICS entails three kinds of metadata:326 i) embedded in content; ii) along with, but
separate from content and iii) provided by an independent provider (label bureau). In a
next phase PICS will become part of a larger Resource Description Framework327 (RDF),
which aims at machine understandable assertions of web resources in order to achieve:
1. Resource Discovery
2. Cataloging (Catalogue Information)
3. Intelligent Software Agents
4. Content Rating and Endorsement Information [PICS]
5. Intellectual Property Rights
6. Digital Signatures and Information about Sets of Documents [Dsig]
7. Privacy Information [P3P]
8. Information About Sets of Documents and Document Management [WebDAV]
RDF will have at least three vocabularies, namely a Protocol for Internet Content
Selection (PICS) rating architecture; the Dublin Core (DC) elements for digital libraries
and Digital Signatures (Dsig) for authentication. RDF uses a Document Object Model328
(DOM), and a Resource Description Messaging Format329 (RDMF). Implicit in this
approach is the possibility of mapping a subject in the Dublin Core Framework, with
subjects in one of the main classification schemes (e.g. Library of Congress, Dewey,
Göttingen) and a version in everyday language.
XML will thus serve as an underlying structure for simple web documents, while SGML
continues to be used for complex information such as the repair manuals for aircraft
carriers or large jets.330 It is important to recognize that the W3’s approach to meta-data is
constantly evolving and is likely to change considerably in the course of the next few
years.331 For instance, the director of the W3 Consortium, Tim Berners-Lee, in a keynote
to WWW7 (Brisbane, April 1998), outlined his vision of a global reasoning web,
whereby every site would also be classed in terms of its veracity or truth value.
iii) Z39.50332
Complementing these efforts of the Internet community are those of the library world,
which have focussed almost exclusively on interoperability among libraries and have left
aside the more complex elements of Internet information. Chief among these is Z39.50,
the American National Standards Institute (ANSI) standard for information retrieval. It is based on
two ANSI-NISO documents (1992333 and 1995334), which led to a network protocol,335
that is session oriented and stateful, in contrast to http and gopher, which are stateless. An
early version ran on WAIS. The new version runs over TCP/IP. It uses an Object
Identifier (OID). Z39.50 has the following six attribute sets:
Bibliographic 1 (Bib-1)336
Explain (Exp-1)
Extended Services (Ext-1)
Common Command Language (CCL-1)
Government Information Locator Service (GILS)
Scientific and Technical Attribute Set (Superset of Bib-1) (STAS)
In addition it offers six record syntaxes, namely:
Explain
Extended Services
Machine Readable Cataloguing, including national variants (MARC)
Generic Record Syntax (GRS-1)
Online Public Access Catalogue (OPAC)
Simple Unstructured Text Record Syntax (SUTRS)
The Library of Congress has become the central library site for Z39.50 developments.
The solution is being used in the European Commission’s OPAC Network (ONE), a
project, which includes the British Library (BL), the Danish National Library (DB), the
Dutch Electronic libraries project (PICA), an Austrian initiative (Joanneum Research)
and the Swedish National Library. It is also being used in the Gateway to European
National Libraries (GABRIEL).
Meanwhile the Z39.50 protocol has been accepted as a basic ingredient by the
Consortium for the Interchange of Museum Information (CIMI), which in turn has been
supported as a part of the European Commission’s Memorandum of Understanding for
Access to Europe’s Cultural Heritage. Hence, while some technologists may lament that
the solution lacks elegance, it has the enormous advantage of having been accepted by
virtually all the leading players in the international library and museum scene and thus
needs to be considered as one of the elements in any near future solution.
iv) Dublin Core
Major libraries and museums typically have highly professional staff and therefore
assume that records will be in a MARC format, or possibly with more complex methods
such as SGML or EAD, or the variations provided by TEI and CIMI. Smaller libraries
cannot always count on access to such resources. To this end, the Online Computer
Library Center (OCLC), based in Dublin, Ohio, in conjunction with the National Center for
Supercomputing Applications (NCSA), sponsored an initial Metadata Workshop (1-3
March 1995),337 at which the fifteen elements of the Dublin Core (DC),338 also known as
the Monticello Core Elements (Mcore), were proposed (see figure 10 below), as well as three
types of qualifiers,339 namely, language, scheme and type. Since this was the first of a
series of meetings it is frequently referred to as Dublin Core 1.
A second meeting (Dublin Core 2), which took place in Warwick, England, produced the
Warwick Framework.340 This provided containers for aggregating packages of typed
meta-data and general principles of information hiding. A third meeting (Dublin Core 3)
held in Dublin, Ohio focussed on images.341 A fourth meeting (Dublin Core 4) took place
in Canberra342 and a fifth (Dublin Core 5) in Helsinki.343
Title, Creator, Subject, Description, Publisher, Contributors, Date, Type, Format,
Identifier, Source, Language, Relation, Coverage, Rights

Figure 10. List of the fifteen Dublin Core (DC) or Monticello Core (Mcore) elements,
seen as a basic subset of more complex records such as MARC, SGML, TEI etc.
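How the fifteen elements act as a least common denominator between rich and simple records can be sketched as follows; the sample record and its local ShelfMark field are invented for illustration:

```python
# Sketch: reducing an arbitrary record to the Dublin Core subset that
# every co-operating system understands. Field names follow figure 10.
DUBLIN_CORE = {
    "Title", "Creator", "Subject", "Description", "Publisher",
    "Contributors", "Date", "Type", "Format", "Identifier",
    "Source", "Language", "Relation", "Coverage", "Rights",
}

def to_dublin_core(record):
    """Keep only the fields shared by all exchange partners."""
    return {k: v for k, v in record.items() if k in DUBLIN_CORE}

marc_like = {
    "Title": "Frontiers in Conceptual Navigation",
    "Creator": "Kim H. Veltman",
    "ShelfMark": "Z699.V45",   # local field, dropped in exchange
}
print(to_dublin_core(marc_like))
```

A richer MARC or SGML record loses nothing locally; only the exchanged view is reduced to the common core.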
The Dublin Core has nine working groups: rights management, sub-elements, data model,
DC Data, DC and Z39.50; relation type, DC in multiple languages, coverage, format and
resource types. The Dublin Core is being applied to the Nordisk Web Index and the
European Web Index (NWI/EWI). One of the reasons why it is so significant is because it
is being linked with a number of other meta-data formats, namely, HTML 2.0/3.2 META
Elements, WHOIS ++ Document Templates, US MARC, SGML and possibly MCF.344
These meta-data records may be bibliographic, but can also relate to administration,
terms/conditions as well as ratings (figure 11).
MD Bibliographic: MD Dublin Core, MD MARC
MD Administration
MD Terms/Conditions
MD Ratings

Figure 11. Basic scheme showing how meta-data (MD) pertaining to bibliographic
records can be linked with administration, terms/conditions and ratings.
The true power of this approach is that it can readily be expanded into a more
general method for handling, interchange and ultimately marketing of information and/or
knowledge packages, which helps to explain why firms such as IBM have become very
seriously interested in and supportive of this approach. It offers a new entry point for
their e-business vision of the world (figure 12).
Digital Object
  Handle
  Metadata Container
    Metadata Package
  Content Package
    Content Container
      Content Element
      Metadata Container
    Content Container
      Content Element
      Metadata Container

Figure 12. A more generalized scheme showing relations of meta-data sets to their
various parts.345
As the above figures reveal, it is foreseen that the Dublin Core elements from personal
sites and smaller institutions will interact with the more elaborate formats of major
institutions (MARC etc.). Hence while the Dublin Core may, at first glance, appear to be
merely a quick and dirty solution to a problem, it actually offers an important way of
bridging materials in highly professional repositories with those in less developed ones.
Moreover, while the Dublin Core in its narrow form is primarily a method for exchanging
records about books and other documents, within this more generalized, expanded
context, it offers a method for accessing distributed contents.
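The nesting of digital objects, containers and packages described above can be sketched as a simple data structure; the class and field names below are illustrative, not those of any particular vendor's implementation:

```python
# Sketch: a digital object as nested containers of metadata and content,
# in the spirit of figure 12. Names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataContainer:
    packages: List[dict] = field(default_factory=list)

@dataclass
class ContentContainer:
    elements: List[str] = field(default_factory=list)
    metadata: MetadataContainer = field(default_factory=MetadataContainer)

@dataclass
class DigitalObject:
    handle: str
    metadata: MetadataContainer
    content: List[ContentContainer]

obj = DigitalObject(
    handle="hdl:example/123",
    metadata=MetadataContainer([{"Title": "Sample", "Rights": "restricted"}]),
    content=[ContentContainer(["page1.tif", "page2.tif"])],
)
print(obj.handle, len(obj.content[0].elements))
```

Because rights and terms travel in their own metadata packages alongside the content, the same structure can carry a free document or a marketed knowledge package.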
How will the extraordinary potentials of the technologies outlined above be developed?
Any attempt at a comprehensive answer would be out of date before it was finished. For
the purposes of this paper it will suffice to draw attention to a few key examples. One of
the earliest efforts to apply these new tools is the Harvest Information Discovery and
Access System.346 The Harvest method uses the Summary Object Interchange Format
(SOIF),347 which employs the Resource Description Message Format (RDMF), in turn a
combination of IAFA templates and BibTex348 which is part of the Development of a
European Service for Information on Research and Education (DESIRE)349 project linked
with the European Commission’s Telematics for Research Programme. It has been
applied to Harvest, Netscape, and the Nordisk Web Index (NWI). This includes a series
of attributes,350 a series of template types351 and other features.352 While this method is
limited to Internet resources, it represents an early working model.
The challenge remains as to how these tremendously varied resources can be integrated
within a single network, in order that one can access both new web sites as well as classic
institutions such as the British Library353 or the Bibliothèque de la France. One possible
solution is being explored by Carl Lagoze354 in the Cornell Digital Library project.
Cornell is also working with the University of Michigan on the concept of an Internet
Public Library.355 Another solution is being explored by Renato Iannella356 at the
Distributed Systems Technology Centre (DSTC). This centre in Brisbane, which was one of the
hosts of the WWW7 conference in 1998, includes a Resource Discovery Unit. In addition
to its Basic URN Service for the Internet (BURNS) and The URN Interoperability Project
(TURNIP), mentioned earlier, it has an Open Information Locator Project Framework357
(OIL). This relies heavily on Uniform Resource Characteristics (including Data,358 Type,
Create Time, Modify Time, Owner). In the Uniform Resource Name (URN), this method
distinguishes between a Namespace Identifier (NID) and Namespace Specific String
(NSS). This approach is conceptually significant because it foresees an integration of
information sources, which have traditionally been distinct if not completely separate,
namely, the library world, internet sources and telecoms. (figure 13).
urn:isbn:……………        publishing          ISBN no.
inet:dstc.edu.au…….     internet servers    listname
telecom:……………        telecom             telephone no.

Figure 13. Different kinds of information available using the Open Information Locator
Project Framework (OIL).
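The distinction between a Namespace Identifier (NID) and a Namespace Specific String (NSS) can be sketched with a small parser; the example identifiers below are invented for illustration:

```python
# Sketch: splitting a URN into its Namespace Identifier (NID) and
# Namespace Specific String (NSS). Example values are invented.
def parse_urn(urn):
    scheme, nid, nss = urn.split(":", 2)
    if scheme != "urn":
        raise ValueError("not a URN")
    return nid, nss

print(parse_urn("urn:isbn:0-0000-0000-0"))        # the library world
print(parse_urn("urn:inet:dstc.edu.au:listname")) # internet servers
```

The NID selects which world a name belongs to (publishing, internet servers, telecoms); the NSS is interpreted only within that world, which is what allows traditionally separate naming systems to coexist in one scheme.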
Yet another initiative is being headed by the Object Management Group (OMG).359 This
consortium of 660 corporations has been developing a Common Object Request Broker
Architecture (CORBA),360 which links with an Interoperable Object Reference (IOR).
One of its advantages is that it can sidestep some of the problems of interaction between
hyper text transfer protocol (http) and Transmission Control Protocol (TCP). It does so by
relying on the Internet Inter-ORB Protocol (IIOP). It also uses an Interface
Repository (IR) and Interface Definition Language (IDL, ISO 14750).361 CORBA has
been adopted as part of the Telecommunications Information Networking Architecture
(TINA).362
Some glimpse of a growing convergence is the rise of interchange formats
designed to share information across systems. The (Defense) Advanced Research
Projects Agency's (ARPA's) Knowledge Interchange Format (KIF) and Harvest's
Summary Object Interchange Format (SOIF) have already been mentioned. NASA has a
Directory Interchange Format (DIF). The Metadata Coalition has a Metadata Interchange
Specification363 (MDIS).
At the university level, Stanford University has a series of Ontology Projects.364 The
California Institute of Technology (Caltech) has a project called Infospheres concerned
with Distributed Active Objects.365 Rensselaer Polytechnic has a Metadatabase which
includes an Enterprise Integration and Modeling Metadatabase,366 a Visual Information
Universe Model,367 a Two Stage Entity Relationship Metaworld (TSER) and an
Information Base Modelling System (IBMS).368
Meanwhile, companies such as Xerox have produced Metaobject Protocols369 and Meta
Data Dictionaries to Support Heterogeneous Data.370 Companies such as Data Fusion (San
Francisco), the Giga Information Group (Cambridge, Mass.), Infoseek (Sunnyvale,
California),371 Intellidex372 Systems LLC, Pine Cone Systems373 and NEXOR374 are all
producing new software and tools relevant to metadata.375
Vendors of library services are also beginning to play a role in this convergence. In the
past, each firm created its own electronic catalogues with little attention to their
compatibility with other systems. In Canada, thanks to recent initiatives of the Ontario
Library Association (OLA), there is a move towards a province wide licensing scheme to
make such systems available to libraries, a central premise being their compatibility and
interoperability.
10. Global Efforts
Technologists engaged in these developments of meta-data on the Internet are frequently
unaware that a number of international organizations have been working on meta-data for
traditional sources for the past century. These include the Office Internationale de
Bibliographie, Mundaneum,376 the International Federation for Information and
Documentation (FID377), the Union of International Associations (UIA378), branches of the
International Standards Organization (e.g. ISO TC 37, along with Infoterm), as well as
the joint efforts of UNESCO and the International Council of Scientific Unions (ICSU) to
create a World Science Information System (UNISIST). Indeed, in 1971, the UNISIST
committee concluded that:
“a world wide network of scientific information services working in voluntary
association was feasible based on the evidence submitted to it that an increased
level of cooperation is an economic necessity”.379
In 1977, UNISIST and NATIS, UNESCO's concept of integrated national information
concerned with documentation, libraries and archives, were merged into a new
Intergovernmental Council for the General Information Programme (PGI).380 This body
continues to work on meta-data.
Some efforts have been at an abstract level. For instance, the ISO has a subcommittee on
Open systems interconnection, data management and open distributed processing
(ISO/IEC JTC1/SC21). The Data Documentation Initiative (DDI) has been working on a
Standard Generalized Markup Language (SGML) Document Type Definition (DTD) for
Data Documentation.381 However, most work has been with respect to individual
disciplines and subjects including art, biology, data, education, electronics, engineering,
industry, geospatial and Geographical Information Systems (GIS), government, health
and medicine, library, physics and science. Our purpose here is not to furnish a
comprehensive list of all projects, but rather to indicate priorities thus far, to name some
of the major players and to convey some sense of the enormity of the projects already
underway. More details concerning these initiatives are listed alphabetically by subject in
Appendix 10.
The most active area for meta-data has been in the field of geospatial and Geographical
Information (GIS).382 At the ISO level there is a Specification for a data descriptive file
for geographic interchange (ISO 8211),383 which is the basis for the International
Hydrographic Organization’s transfer standard for digital hydrographic data (IHO
DX-90).384 The ISO also has standards for Geographic Information (ISO 15046)385 and for
Standard representation of latitude, longitude and altitude (ISO 6709),386 as well as a
technical committee on Geographic Information and Geomatics387 (ISO/TC 211),
with five working groups.388 At the international level the Fédération Internationale des
Géomètres (FIG) has a Commission 3.7 devoted to Spatial Data Infrastructure. The
International Astronomical Union (IAU) and the International Union of Geodesy and
Geophysics (IUGG) have developed an International Terrestrial Reference Frame
(ITRF).389
At the European level, geographical information is being pursued by two technical
committees, European Norms for Geographical Information (CEN/TC 287)390 and
European Standardisation Organization for Road Transport and Traffic Telematics
(CEN/TC 278),391 notably working group 7, Geographic Data File (GDF).392 At the
national level there are initiatives in countries such as Canada, Germany, and Russia. The
United States has a standard for Digital Spatial Metadata,393 a Spatial Data Transfer
Standard (SDTS)394 and a Content Standard for Digital Geospatial Metadata395 (CSDGM).396
Meanwhile, major companies are developing their own solutions, notably Lucent
Technologies,397 IBM (Almaden),398 which is developing spatial data elements399 as an
addition to the Z39.50 standard, Arc/Info, Autodesk and the Environmental Systems
Research Institute (ESRI).
Related to these enormous efforts in geospatial and geographical information have been a
series of initiatives to develop meta-data for the environment. At the world level, the
United Nations Environmental Program (UNEP) has been developing Metadata
Contributors.400 In the G8 pilot project dedicated to environment, there is a
Metainformation Topic Working Group401 (MITWG) and Eliot Christian has developed a
Global Information Locator Service (GILS).402 There is a World Conservation
Monitoring Centre,403 a Central European Environmental Data Request Facility
(CEDAR). Australia and New Zealand have a Land Information Council Metadata404
(ANZLIC). In the United States, the Environmental Protection Agency (EPA) has an
Environmental Data Registry.405 Efforts at harmonization of environmental measurement
have also occurred in the context of G7 and UNEP.406
In the field of science, the same Environmental Protection Agency (EPA) has a Scientific
Metadata Standards Project.407 The Institute of Electrical and Electronic Engineers
(IEEE)408 has a committee on (Scientific) Metadata and Data Management. In the fields
of physics and scientific visualisation, the United States has a National Metacenter for
Computational Science and Engineering409 with the Khoros410 project. In biology there are
initiatives to produce biological metadata411 and the IEEE has introduced a Biological
Metadata Content Standard. In the United States there is a National Biological
Information Infrastructure412(NBII) and there are efforts at Herbarium Information
Standards.
In industry, the Basic Semantic Repository413 (BSR) has recently been replaced by
BEACON,414 an open standards infrastructure for business and industrial applications. In
engineering, there is a Global Engineering Network (GEN) and, as was noted above there
are a number of consortia aiming at complete interoperability of methods. In the United
States, which seems to have some meta-association for almost every field, there is a
National Metacenter for Computational Science and Engineering.415 In the case of
electronics, the Electronic Industries Association has produced a CASE Data Interchange
Format (CDIF).
In the field of government, Eliot Christian’s work in terms of the G7 pilot project on
environment has inspired a Government Information Locator Service416 (GILS). In
health, the HL7 group has developed a HL7 Health Core Markup Language (HCML). In
education, there is a Learning Object Metadata Group,417 a Committee on Technical
Standards for Computer Based Learning (IEEE P1484) and Educom has a Metadata Tool
as part of its Instructional Management Systems Project. In art, the Visual Resources
Association (VRA) has produced Core Categories Metadata.418
Not surprisingly, the library world has been quite active in the field of metadata. At the
world level, the International Federation of Library Associations (IFLA) has been
involved, as has the Text Encoding Initiative (TEI), the Network of Literary Archives
(NOLA), and the Oxford Text Archive (OTA). At the level of G8, it is a concern of pilot
project 4, Biblioteca Universalis.419 At the European level there is a list of Library
Information Interchange Standards (LIIS).420 In Germany, there is a Metadata Registry
concerned with metadata and interoperability in digital library related fields.421 In the
United States, there is an ALCTS Taskforce on Metadata and a Digital Library Metadata
Group (DLMG).
In the United Kingdom, the Arts and Humanities Data Service (AHDS) and the United
Kingdom Office for Library and Information Networking (UKOLN)422 have a Proposal
to Identify Shared Metadata Requirements,423 a section on Metadata424and for Mapping
between Metadata Formats.425 They are concerned with Linking Publishers and National
Bibliographic Services (BIBLINK) and have been working specifically on Resource
Organization and Discovery in Subject Based Services (ROADS)426 which has thus far
produced gateways to Social Science Information (SOSIG), Medical Information
(OMNI)427 and Art, Design, Architecture, Media (ADAM). They have also been active in
adopting basic Dublin Core elements. A significant recent study by Rust has offered a vision
provided by an EC project, Interoperability of Data in E-Commerce Systems (INDECS),
which proposes an integrated model for Descriptive and Rights Metadata in
E-Commerce.428 This concludes the detour announced twelve pages ago.
Standing back from this forest of facts and projects, we can see that there are literally
hundreds of projects around the world all moving towards a framework that is immensely
larger than anything available in even the greatest physical libraries of the world. Tedious
though they may seem, these are the stepping stones for reaching new planes of
information, which will enable some of the new scenarios in knowledge explored earlier.
They are also proof that the danger of a second flood of information, as foreseen
by authors such as Pierre Lévy, is not being met only with fatalistic, passive resignation.
Steps have been taken. Most of the projects thus far have focussed on the pipeline side of
the problem. How do we make a database in library A compatible with that of library B
such that we can check references in either one, and then, more importantly, compare
references found in various libraries joined over a single network? Here the Z39.50
protocol has been crucial. As a result, networks are linking the titles of works in a number
of libraries spread across a country, across continents and potentially around the world.
Examples include the Online Computer Library Center (OCLC) and the Research Library
Information Network (RLIN) based in the United States, and PICA based in the
Netherlands. The ONE429 project, in turn, links the PICA records with other collections
such as Joanneum Research and the Steiermärkische Landesbibliothek (Graz, Austria),
the Library of the Danish National Museum, Helsinki University Library (Finland), the
National Library of Norway, LIBRIS (Stockholm, Sweden), Die Deutsche Bibliothek
(Frankfurt, Germany), and the British Library. Some of these institutions are also being
linked through the Gateway to European National Libraries Project (GABRIEL).430 The
German libraries are also working on a union catalogue of their collections. In the
museum world there are similar efforts towards combining resources through the
Museums Over States in Virtual Culture (MOSAIC)431 project and the MEDICI
framework of the European Commission. In addition, there are projects such as the
Allgemeine Künstler Lexikon (AKL) of Thieme-Becker, and those of the Getty Research
Institute: e.g. Union List of Author Names (ULAN) and the Thesaurus of Geographic
Names (TGN)432.
What are the next steps? The Maastricht McLuhan Institute, a new European Centre for
Knowledge Organization, Digital Culture and Learning Technology, will focus on two.
First, it will make these existing distributed projects accessible through a common
interface using a System for Universal Media Searching (SUMS). The common interface
will serve at a European level for the MOSAIC project and at a global level as part of G8,
pilot project five: Multimedia access to world cultural heritage.
A second step will be to use these resources as the basis for a new level of authority lists
for names, places and dates. In so doing it will integrate existing efforts at multilingual
access to names as under development by G8 pilot project 4, Biblioteca Universalis, and
earlier efforts of UNEP, to gain new access to variant names. In the case of terms, it will
make use of standard classifications (e.g. Library of Congress, Dewey, Göttingen and
Ranganathan433), as well as specialized classification systems for art such as Iconclass434
and the Getty Art and Architectural Thesaurus.435 As such the research project will in no
way be in competition with existing projects. Rather it will integrate their efforts as a first
step towards a new kind of digital reference room.436
Access to knowledge, which deals with claims about information, requires more than
keywords in free text and natural language. Systematic access to knowledge requires a)
authority files for names, subjects, places with their variants as outlined above; b) maps
of changing terms and categories of knowledge in order to access earlier knowledge
collections; c) systematic questions. If one takes basic keywords, translates these into
standardized subject terms (what?), and combines these questions with those of space
(where?), time (when?) and process (how?), one has a simple way of employing the
Personality, Matter, Energy, Space and Time (PMEST) system of Ranganathan. With
some further division these questions also allow a fresh approach to Aristotle’s
substance-accident system (figure 14). In very simple terms: isolated questions provide
access to data and information. Combinations of questions provide access to structured
information or knowledge.
Question    Ranganathan (PMEST)    Aristotle
Who?        Personality (P)        Being, Substance
What?       Matter (M)             Matter, Quantity, Quality, Relation
How?        Energy (E)             Activities437
Where?      Space (S)              Position, Dimension, Place
When?       Time (T)               Time

Figure 14. Six basic questions related to the five key notions of Ranganathan’s PMEST
system and the ten basic categories of Aristotle’s substance-accident system.
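The translation of question words into Ranganathan's facets can be sketched as a simple lookup; the pairing follows the text's alignment of questions with facets, and the sample query is invented:

```python
# Sketch: translating basic questions into Ranganathan's PMEST facets.
PMEST = {
    "who": "Personality (P)",
    "what": "Matter (M)",
    "how": "Energy (E)",
    "where": "Space (S)",
    "when": "Time (T)",
}

def facets(questions):
    """A combination of questions yields a combination of facets."""
    return [PMEST[q] for q in questions]

# Combining what? + where? + when? structures a search for knowledge:
print(facets(["what", "where", "when"]))
```

A single question retrieves data or information; as the text argues, it is the combination of questions, and hence of facets, that retrieves structured knowledge.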
One of the major developments over the past thirty years has been a dramatic increase in
different kinds of relations. Perrault438 in a seminal article introduced a method of
integrating these systematically within UDC. The Medical Subject Headings (MESH) has
five kinds of relations. Systems such as Dewey are too primitive to allow a full range of
relations. Nonetheless, if the Dewey subjects are mapped to the UDC system where these
connections have been made, then one can integrate relations within the search
strategies.439 Thus relations such as broader-narrower offer further search stratagems.
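Such broader-narrower stratagems can be sketched with a toy hierarchy; the terms and their arrangement are invented for illustration:

```python
# Sketch: broader-term relations as a search stratagem. A failed or
# too-narrow search is widened by walking up the hierarchy.
BROADER = {
    "linear perspective": "perspective",
    "perspective": "geometry",
    "proportion": "geometry",
}

def broaden(term):
    """Walk up the broader-term chain, widening the search."""
    chain = [term]
    while chain[-1] in BROADER:
        chain.append(BROADER[chain[-1]])
    return chain

print(broaden("linear perspective"))
```

A search engine armed with such relations can offer the user each broader class in turn, instead of simply reporting no matches.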
In order to ensure that the scope of the project becomes more universal than merely
universally daunting, the digital reference room will begin with a subset of the whole,
creating the cultural section of a future comprehensive reference room. The research
function of the Institute will focus initially on extending the web of co-operation with
other cultural institutions in order to prevent duplication of efforts and reinvention of the
wheel. On this basis the cultural digital reference room will gradually be expanded to
include links to corresponding digital texts from the great institutions. The institute itself
will not attempt to replicate physically any of these collections. Rather it will serve as a
centralized list of authority names, places and dates linked with a distributed collection of
reference sources.
This seemingly narrow focus on art and culture will lead quite naturally to other fields.
Paintings typically entail narratives. Hence the reference room must expand to include
literature. As was already noted, to study the location of paintings and other museum
objects requires systematic treatment of scale, and thus the reference room will expand
to include the fields of geo-spatial and geographical information systems. In a subsequent
phase, research will turn to expanding the scope of the digital reference room from this
focus on culture, from the arts to the sciences, to the full range of human knowledge. As
this occurs the common interface will be linked with the digital reference room to
produce a System for Universal Multi-Media Access (SUMMA).
11. Emerging Scenarios
These authority lists of names, places and dates will, in the first instance, serve as the
basis for a new level of interoperability among collections, at the content level as
opposed to the basic pipeline connectivity. This entails considerably more than simple
access to titles or even the full contents of materials listed in contemporary author and
subject catalogues of libraries. On the one hand, it entails links to dictionaries and
encyclopaedias, which will provide searchers with related terms. It also involves crossreferences to citation indexes, abstracts and reviews.
Reference rooms, as the collective memory of civilization’s search methods, also contain
a fundamental historical dimension. To take a concrete example: today a book such as
Dürer’s Instruction in Measurement (Underweysung der Messung) is typically classed
under perspective. In earlier catalogues this book was sometimes classed under
proportion or more generally under geometry. As digital library projects extend to
scanning in earlier library and book publishers’ catalogues, a new kind of retrospective
classification can occur, whereby titles eventually have both their old classes and their
modern ones. This will radically transform future historical research, because the
catalogues will then lead scholars into the categories relevant for the period, rather than
to those that happen to be the fashion at the moment. Links to on-line versions of
appropriate historical dictionaries will be a next step in this dimension of the digital
reference room. Eventually there can be the equivalents of on-line etymologies on the fly.
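Retrospective classification can be sketched as a catalogue record that keeps both its historical and its modern classes; the data structure is illustrative, with the example drawn from the text:

```python
# Sketch: a title carrying both its modern and its historical classes,
# so catalogues can lead scholars into the categories of a period.
catalogue = {
    "Underweysung der Messung": {
        "modern": ["perspective"],
        "historical": ["proportion", "geometry"],
    }
}

def classes_for(title, period="modern"):
    """Return the classes relevant for the chosen period."""
    return catalogue[title][period]

print(classes_for("Underweysung der Messung", "historical"))
```

A historian querying under "proportion" would then find Dürer's treatise exactly where a sixteenth-century reader expected it.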
There are, of course, many other projects concerning digital libraries. Some, such as the
Web Analysis and Visualization Environment (WAVE) specifically attempt to link
interoperable meta-data with facetted classification.440 This project is important because
it links methods from traditional library science (e.g. classifications) with those of
mathematics (concept analysis). Even so this and other systems are focussed on access to
contemporary information. What sets the MMI project apart from these initiatives is that
it sets out from a premise of concepts and knowledge as evolving over time, as an
historical phenomenon.
It will take decades before the digital library and museum projects have rendered
accessible in electronic form all the documents and artifacts now stored in the world’s
great libraries, museums and galleries. By that time the enormous growth in computing
power and memory, will make feasible projects that most would treat as science fiction or
madness today. In the past decades we have seen the advent of concordances for all the
terms in the Bible, Shakespeare and other classic texts. A next step would be to transform
these concordances into thesauri with formally defined terms, such that the relations and
hierarchies therein become manifest. This principle can then gradually be extended to the
literature of a school, a particular decade, a period or even an empire.
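The step from concordance to thesaurus described above can be sketched in a few lines of code. The terms and hierarchy below are hypothetical illustrations, not drawn from any actual catalogue; the sketch merely shows how a positional index and broader-term (BT) relations might be combined so that hierarchies "become manifest."

```python
from collections import defaultdict

def concordance(text):
    """Build a simple concordance: for every word, record the
    positions at which it occurs in the text."""
    words = text.lower().split()
    index = defaultdict(list)
    for pos, word in enumerate(words):
        index[word.strip(".,;:")].append(pos)
    return index

class Thesaurus:
    """A thesaurus goes one step further than a concordance:
    terms are linked by broader-term (BT) relations."""
    def __init__(self):
        self.broader = {}                    # term -> its broader term
        self.narrower = defaultdict(list)    # broader term -> narrower terms

    def add(self, term, broader_term):
        self.broader[term] = broader_term
        self.narrower[broader_term].append(term)

    def hierarchy(self, term):
        """Climb from a term to its broadest ancestor."""
        chain = [term]
        while chain[-1] in self.broader:
            chain.append(self.broader[chain[-1]])
        return chain

# Hypothetical miniature example in the spirit of the chapter:
index = concordance("the measure of proportion is the measure of geometry")
thes = Thesaurus()
thes.add("perspective", "geometry")
thes.add("proportion", "geometry")
thes.add("geometry", "mathematics")
print(index["measure"])               # positions at which "measure" occurs
print(thes.hierarchy("perspective"))  # the term and its broader terms
```

Applied to the literature of a school or a period, the same two structures would let one ask where a term occurs and under which broader concepts it was once classed.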
This will allow us to look completely afresh at our past and ask whole new sets of
questions. Rather than speaking vaguely of the growth of vernacular languages such as
English or Italian, we can begin to trace with some quantitative precision, which were the
crucial periods of growth. This will lead to new studies as to why the growth occurred at
just that time. We shall have new ways of studying the history of terms and the changing
associations of those terms. We shall move effectively to a new kind of global citation
index.
It is said that, in the thirteenth century, it took a team of one hundred Dominican monks
ten years of full time work to create an index of the writings of St. Thomas Aquinas.
With a modern computer that same task can theoretically be accomplished in a few
minutes. (Cynics might add that this would be after having spent several months writing
an appropriate programme and a few weeks debugging it). In the past, scholars also
typically spent days or months tracing the sources of a particular passage or crucial text.
Indeed, scholars such as Professor M. A. Screech, who sought to trace the sources of
major authors such as Montaigne or Erasmus, discovered that this was a lifetime’s work.
In the eyes of some this was the kind of consummate learning that epitomized what
knowledge in the humanities was all about. For a reference to Aquinas might lead to a
reference to Augustine, who alluded to Plotinus, who was drawing on Plato. To
understand a quote thus took one into whole webs of cumulative philosophical, religious
and cultural contexts, which make contemporary hypertext efforts look primitive indeed.
If we can trace quotes, we should also be able to trace stories and narrative traditions.
Ever since the time of Frazer’s Golden Bough,441 we have been aware of the power of
recurrent themes in poems, epics, legends, novels and other writings. Indeed, much of the
academic study of literature is based on little else: “trace the theme of x through this
author or that period” often dominates assignments and exams. If tracing these themes
were automated, it would open up new approaches in much more than literature. For
instance, if we were standing in front of Botticelli’s Story of Griselda (London, National
Gallery), and were unfamiliar with the story, we could just point our notepad computer
and have it show and/or tell us the story.
In the case of direct quotations, machines can theoretically do much of this work today.
Often, of course, the references are more indirect than direct or they are allusions that
could lead to twenty other passages. It is well known that each of us has their favourite
terms, words, which are imbued with special significance, as well as preferred words or
sounds that serve as stopgaps in our thought. (Many of us, for instance, have met an
individual who bridges every sentence or even phrase with an “um,” or peppers their
speech with an “actually,” “indeed” or some other semantically neutral stopgap). An
average person often gets by with a vocabulary of only about a thousand words. By
contrast there are others in the tradition of Henry Higgins with vocabularies in the tens of
thousands of words. Will we some day have the equivalent of fingerprints for our
vocabularies, such that we can identify persons by their written and spoken words? Will
the complexities of these verbal maps become a new way for considering individual
development? Will we begin studying the precise nature of these great verbalizers? Some
languages are more substantive (in the sense of noun based), whereas others, such as
Arabic, are more verbal (in the sense of verb based). Will the new “verbal” maps help us to
understand cultural differences in describing the world and life? Will such maps become
a basic element of our education?442
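A crude version of such a vocabulary “fingerprint” can be sketched as follows. The sample sentences are invented, and serious stylometry would of course require far subtler measures; the sketch only shows the underlying idea of comparing relative word-frequency profiles.

```python
import math
from collections import Counter

def fingerprint(text):
    """Relative-frequency profile of a speaker's vocabulary."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def similarity(fp_a, fp_b):
    """Cosine similarity between two vocabulary fingerprints:
    1.0 for identical profiles, 0.0 for no shared vocabulary."""
    shared = set(fp_a) & set(fp_b)
    dot = sum(fp_a[w] * fp_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in fp_a.values()))
    norm_b = math.sqrt(sum(v * v for v in fp_b.values()))
    return dot / (norm_a * norm_b)

# Invented samples: two utterances in one "voice", one in another.
a = fingerprint("actually I think, actually, that this is indeed so")
b = fingerprint("I think that this is so")
c = fingerprint("the storm dispersed the waste in the bay")
print(similarity(a, b) > similarity(a, c))  # True: a and b share a voice
```

On such a basis one could begin to ask, quantitatively, whether two texts are likely to come from the same pen, which is precisely the sense in which vocabularies might one day serve as fingerprints.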
In the past, famous individuals wrote guidebooks and travelogues, which provided maps
of their physical journeys, and they wrote autobiographies to offer maps of their mental
and emotional journeys. In the past generation, personalities such as Lord Kenneth Clark
produced documentaries such as Civilization to accomplish this in the medium of film. At
the Sony laboratories in Paris, Dr. Chisato Namaoka443 is engaged in a Personal
Experience Repository Project, which aims to record our memories as they occur while
we visit a museum or significant tourist site, and to use that captured information for
further occasions. Individuals such as Warren Robinett or Steve Mann have gone much
further to speculate on the possibility of having a wearable camera that records
everything one ever did in one’s life: another take on the scenarios presented in the movie
The Truman Show (1998). Such developments could readily transform our conceptions of
diaries and other memory devices.
They also introduce possibilities of a new kind of “experience on demand” whereby any
visit to a tourist site might be accompanied with the expressions of famous predecessors.
In the past, the medium determined where we could have an experience: books tended to
take us to a library, films to a cinema, television to the place with a television set. In
future, we can mix any experience, anywhere, anytime. How will this change our patterns
of learning and our horizons of knowledge?
All this assumes, of course, that computers can do much more than they can today. This
is not the place to ponder at length how soon they will be able to process semantic and
syntactical subtleties of language to the extent that they can approach deep structure and
elusive problems of meaning and understanding. Nor would it be wise to speculate in
great detail or to debate about what precisely will be the future role of human
intervention in all this.
Rather, our concern is with some more fundamental problems and trends. One of the
buzzwords about the Internet is that it is bringing “disintermediation,”444 which is used
particularly in the context of electronic commerce to mean “putting the producer of goods
or services directly in touch with the customer.” Some would go further to insist that
computers will soon allow us to do everything directly: order books via sites such as
Amazon.com without needing to go to bookstores; go shopping on-line without the
distractions of shopping-malls. In this scenario, computers will make us more and more
active and we shall end up doing everything personally. At the same time, another group
claims that computers will effectively become our electronic butlers, increasingly taking
over many aspects of everyday life. In this scenario, computers will make us more and
more passive and we shall end up doing less and less personally. Indeed, some see this as
yet another move in the direction of our becoming complete couch potatoes.
In our view, there is no need to fear that computers will necessarily make us exclusively
active or passive. That choice will continue to depend on the individual, just as it does
today. Nonetheless, it seems inevitable that computers will increasingly play an intermediating role, as they become central to more and more aspects of our lives. In the past
decade, the concept of agents has evolved rapidly from a near science fiction concept to
an emerging reality. There is now an international Foundation for Intelligent Physical
Agents (FIPA).445 There are emerging fields devoted to user-modelling and user adapted
interaction, entailing person-machine interfaces, intelligent help systems, intelligent
tutoring systems and natural language dialogues.446
Leading technologists such as Philippe Quéau, have predicted the advent of televirtuality,447 whereby avatars448 will play an increasing role as our virtual representatives
in the Internet. Recently, in Paris, there was a first international conference on Virtual
Worlds (July 1998), attended by those at the frontiers of two, hitherto quite separate
fields: virtual reality and artificial life. Some predict that self-evolving artificial life forms
will soon be integrated into avatars. Some of the early virtual worlds began simply by
reconstructing actual cities such as Paris449 or Helsinki.450 Others such as Alpha World451
are creating a new three-dimensional virtual world based on elements familiar from the
man-made environment. Potentially these worlds could be synthetic ones, or purely
imaginary ones, no longer subject either to the physical laws or even the spatial
conditions of planet earth. At Manchester, Professor Adrian West,452 has begun to
explore the interactions of virtual worlds, parts of which are subject to different laws of
physics.
In a world where the Internet Society is planning to become interplanetary,453 assigning
addresses for different planets, the solar system and eventually other galaxies, the
question of navigation is becoming much more acute and “Where in the world?” is
becoming much more than a turn of phrase. We shall need new methods to discern
whether the world we have entered is physically based or an imaginary construct;
whether our avatar has embarked on a “real” trip or almost literally into some flight of
phantasy. In the past generation, children have grown up expecting the realism of videogames to be considerably poorer than that of realistic films or the real world. Within the
next generation, such easy boundaries will increasingly blur and then disappear almost
entirely. Is it reality? Is it a game? Is it playful reality or realistic playfulness? Such
questions will become ever more difficult to answer.
In light of all this, some activities of scholars will certainly remain: reflecting on what
sources mean, weighing their significance, using them to gain new insights and to outline
new analyses, goals, dreams, visions, even utopias. Meanwhile, it is likely that many of
the activities which preoccupied scholars for much of their lives in the past will become
automated within the next generations, namely, hunting for texts, tracking down quotes
and looking for sources.
At the same time many new activities will emerge. Before the advent of space travel and
satellites no one could imagine precisely what it would be like to look at the earth from
space. Within a single generation we have developed methods for zooming systematically
from such satellite images down to a close up of an object on earth in its original scale,
and even how to descend to microscopic levels to reveal biological, molecular and atomic
properties. We need to create the equivalents of such zooms for our conceptual worlds,
moving systematically from broader terms to narrower terms. We need new ways of
visualizing how the horizons of our conceptual worlds grow. At the simplest level this
entails demonstrating how we have shifted from a Ptolemaic to a Copernican worldview.
Much more elusive and difficult is to find ways of showing how our mental horizons
have expanded over time. What impact did major changes in travel such as the crusades,
pilgrimages, and the grand tour, have on vocabularies or inventions? Most of the future
questions to be asked cannot yet be formulated because we cannot yet see ways of
collecting, ordering and making sense of the vast materials that would be required to
formulate them.
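The conceptual zoom from broader to narrower terms might, at its simplest, be sketched as follows. The miniature hierarchy is purely illustrative; a real system would draw on the classed catalogues and thesauri discussed earlier.

```python
# Hypothetical hierarchy of broader and narrower terms.
TERMS = {
    "knowledge": ["science", "art"],
    "science": ["astronomy", "biology"],
    "astronomy": ["ptolemaic system", "copernican system"],
}

def zoom(term, depth=0):
    """Render a term and its narrower terms, indented by level,
    like zooming from a satellite view down to ground level."""
    lines = ["  " * depth + term]
    for narrower in TERMS.get(term, []):
        lines.extend(zoom(narrower, depth + 1))
    return lines

print("\n".join(zoom("knowledge")))
```

Each level of indentation corresponds to one step of the conceptual zoom; historical layers could be added by recording, for each term, the periods in which a given broader-narrower relation held.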
At present, the frontiers of scientific visualization are focussed on helping us to see
phenomena such as the flow of air in a jet at supersonic speeds, the development of
storms and tornadoes, the dispersal of waste in Chesapeake Bay, changes in the ozone
layer, and many other events that we could not begin to know until we had methods for
seeing them. Computers are transforming our knowledge because they are helping us to
see more than we knew possible. The physical world opens as we make visible its unseen
dimensions.454 The mental world awaits a similar journey and as with all journeys we
must remember that what we see is but a small part of the immensity that is to be known,
experienced, sensed or somehow enters our horizons.
12. Conclusions
This paper began from the premise that every new medium changes our definitions of,
approaches to and views of knowledge. It claimed that networked computers (as enabled
by the Internet), cannot be understood as simply yet another medium in a long evolution
that began with speech and evolved via cuneiform, parchment, manuscripts to printed
books and more recently to radio, film, and video. Computers offer a new method of
translating information from one medium to another, wherein lies the deeper meaning of
the overworked term multimedia. Hence, computers will never create paperless offices.
They will eventually create offices where any form of communication can be transformed
into any other form.
In the introduction we raised questions about an excellent article by Classen concerning
major trends in new media.455 He claimed that while technology was expanding
exponentially, the usefulness456 of that technology was expanding logarithmically and
that these different curves tended to balance each other out to produce a linear increase of
usefulness with time. In our view, simpler explanations are possible. First, technologists
have been so concerned with the pipeline aspects of their profession (ISO layers 1-6 in
their language), that they have ignored the vast unexplored realms of applications (ISO
layer 7). Second, phrases such as “build it and they will come” may sound rhetorically
attractive, but unless what is built actually becomes available, it can neither be used nor
useful. Rather than seek elusive limits to usefulness, it is much more effective to make
things available. In short, a more effective formulation might be: let it be useable and
used and usefulness will follow.
Any attempt at a systematic analysis of future applications (cf. Appendix 8) would have
required at least a book length study. For this reason the scope of the present paper was
limited to exploring some of the larger implications posed by the new media. We claimed
that there are at least seven ways in which networked computers are transforming our
concepts of knowledge. First, they offer new methods for looking at processes, how
things are done, which also helps in understanding why things are done in such ways.
Second, and more fundamentally, they offer tools for creating numerous views of the
same facts, methods for studying knowledge at different levels of abstraction. Third, they
allow us to examine the same object or process in terms of different kinds of reality.
Fourth, computers introduce more systematic means of dealing with scale.
Fifth, they imply a fundamental shift in our methods for dealing with age-old problems of
relating universals and particulars. Analysis thereof pointed to basic differences between
the arts and sciences and the need for independent historical approaches to reflect these,
all the more so because computers, which are only concerned with showing us the latest
version of our text or programme, are a direct reflection of this scientific tradition. We
need a richer model that also shows us layered, cumulative versions. Sixth, computers
transform our potential access to data through the use of meta-data. Seventh and finally,
computers introduce new methods for mediated learning and knowledge through agents.
While the main thrust of the paper was focussed on the enormous potentials of networked
computers for new approaches to knowledge, some problems were raised. These began
with some of the limitations in the technology that is actually available today, with
respect to storage capacity, processor speeds, bandwidth and interoperability. The
dangers of making normative models, which then affect the future evidence to be
considered, as in the case of the human genome project, were touched upon. So too were
the dangers underlying some of the rhetorically attractive, but equally misleading
assumptions behind some contemporary approaches to complex systems.
At the outset of the paper, mention was also made of the dangers articulated by Pierre
Lévy, that we are in danger of a second flood, this time in the form of a surfeit of
information, as a result of which we can no longer make sense of the enormity of
materials descending upon us. Partly to counter this, a section of the paper entered into
considerable detail on worldwide efforts concerning meta-data as a means of regaining a
comprehensive overview of both the immense resources that have been collected already
and the ever increasing amounts which are being added daily. Sense-making tools are an
emerging field of software.
A half century ago pioneers such as Havelock, Innis and McLuhan recognized that new
media inevitably affect our concepts of what constitutes knowledge. The mass media
epitomized this with McLuhan’s pithy phrase: “The medium is the message.” Reduced
and taken in isolation, it is easy to see, in retrospect, that this obscured almost as much as
it revealed. The new media are changing the way we know. They are doing so in
fundamental ways and they are inspiring, creating, producing, distorting and even
obscuring many messages. New machines make many new things possible. Only humans
can ensure that what began as data streams and quests for information highways become
paths towards knowledge and wisdom.
Epilogue
Good detective stories have very precise endings. When one has finished the book all the
loose ends and puzzles have been completely solved. There is something similar in the
case of standard works on an individual or the history of some invention. Once one has
reached the final reference, one has arrived at a comprehensive treatment. It is not so in
the case of technology. There are forever new inventions, new trends, new possibilities.
While writing these essays in the course of the past two years there were constantly new
details to add to some section and there was a temptation that the epilogue merely
become a list of the latest things found while the work was on its way to the press. Since
there is no hope of being comprehensive in this fast changing field, the epilogue looks
briefly at some trends evidenced through long-term research projects: the scale of
investments, the rise of global networks, new ways towards standards, ubiquitous and
mobile computing; miniaturisation, new meta-data, agents, new interfaces, evaluation,
requirements engineering and new debates about precision vs. fuzziness.
Scale of Investments
A first glimpse into the scale of changes underway is provided by a simple glance at the
annual sales of some of the major players (figure 1).
Microsoft          14.4
Nortel             18
Fujitsu            37.7
AT&T               53
Hitachi            71.2
Philips            76.4
IBM                78.5
Siemens            105.9
General Motors     164
Figure 1. Annual earnings of some major players in 1997, in billions of U.S. dollars.
The British Government has introduced a new National Grid for Learning with a budget
of c. $ 1 billion U.S. (500 million pounds). This is a very large amount. At the same time
a single Japanese firm such as Hitachi has 17,000 researchers in 35 laboratories around
the world and invests $ 4.9 billion annually for research. The German firm, Siemens,
invests $ 8.1 billion annually for research. In short major corporations are spending more
on research within a single company, than whole nations are spending on their
educational programmes.
The number of individuals involved also continues to increase. Nortel has some 17,000
researchers. AT&T’s Bell Laboratories have 24,000 researchers. This in turn is but a
small fraction of the 137,650 persons involved in research in the corporation as a whole (figure 2).
Motorola University has over 100,000 students. A new virtual university of British
Telecom is scheduled to have some 120,000 students. There are now over 1000 industry
based universities in the United States alone. These combined with the research institutes
of the great corporations are rapidly becoming larger than the traditional universities
which have led western scholarship and research since their introduction by Abelard in
the middle of the twelfth century.
When the universities complain that their budgets are diminishing, we are usually not told
that industry continues to invest ever greater amounts for research. There are two
fundamental problems with this shift. One is a move away from fundamental research
towards ever more immediate applications, ever more dependent on fluctuations in the
quarterly stocks. As a result, large problems requiring long term solutions tend to lose out
to the short term quick fixes.
Bell Laboratories                          24,000
Business Communications Systems Group      29,000
Communications Software Group               3,000
Data Networking Systems Group               3,000
Global Service Provider Business           21,000
Intellectual Property Division                300
Microelectronics Group                     12,000
Network Products Group                     13,000
New Ventures                                  350
Optical Networking Group                    9,000
Switch and Access Systems Group            13,000
Wireless Networks Group                    11,000
Total                                     137,650
Figure 2. Employees in research-related positions at AT&T.
A second problem is more subtle. Unlike the university, etymologically derived from the
universe of studies (universitas studiorum) and the idea of sharing all that was, is and can
be known, the new corporate research centres are often so secretive that, notwithstanding
intranets, divisions within a corporation often are unaware of work elsewhere in the
company. This has led to sayings such as “If Siemens knew what Siemens
knows…,” a dictum that could readily be applied to IBM and other giants. Here a new
challenge looms: not so much of making new discoveries, but rather of finding
frameworks for sharing what has been discovered. The rapid evolution of knowledge
management as a new buzzword reflects corporations’ newfound awareness of this
problem. Typically they pretend that this is an entirely new problem, blithely overlooking
several thousand years of efforts by philosophers and those in more specialised
fields such as epistemology, logic and, in recent centuries, library science.
Notwithstanding these problems there are trends towards ever greater co-ordination. The
European Commission, in its Fifth Framework Programme, plans to spend some 13.5
billion ecu. When matched by funds from industry this will amount to some $40 billion
for research in the course of the five year period, 1999-2004. Characteristics of this new
framework are a far greater co-ordination of projects, the rise of new networks and new
approaches to standards.
Networks
In its first phase, the Internet began as a means of networking major institutes connected
with the U.S. military research. In its second phase, co-ordinated by the efforts of the
Centre Européen de Recherche Nucléaire (CERN) it became a means of networking the
efforts of nuclear physicists globally.
In the past decade this principle has gradually spread to other disciplines. Chemical
Abstracts in conjunction with Hoechst offers global databases relating to chemistry.
Medline offers an international database for medicine. A Global Engineering Network
(GEN), initiated by Siemens, aims to gather all knowledge concerning engineering
principles. A related consortium, led by the Computer Aided Design (CAD) firm,
Autodesk, is working on Industry Foundation Classes, whereby all knowledge concerning
basic architectural parts such as doors and windows is being collected together.
Within the context of the European Commission’s ESPRIT programme there are
twenty-one networks of centres of excellence (figure 3). Some of these are very large.
The Requirements Engineering Network of International Co-operating Research
Groups457 (RENOIR), involves nearly 60 major institutes in Europe and North America.
1. Agent-based Computing (AgentLink - 27225)
2. Computational Logic (COMPULOG-NET - 7230)
3. Computer Vision Network (ECV NET - 8212)
4. Distributed Computing Systems Architectures (CaberNet - 21035, 6361)
5. Evolutionary Computation (EVONET - 20966)
6. Field of Mesoscopic Systems (PHANTOMS - 7360)
7. High-Performance Computing (HPCNET - 9004)
8. High-Temperature Electronics (HITEN - 6107)
9. Information and Data on Open Media (IDOMENEUS - 6606)
10. Intelligent Control, Integration of Manufacturing Systems (ICIMS - 9251)
11. Intelligent Information Interfaces (i3net - 22585)
12. Language and Speech (ELSNET - 6295)
13. Machine Learning (MLnet - 7115)
14. Model-Based and Qualitative Reasoning Systems (MONET - 22672)
15. Multifunctional Microsystems (NEXUS - 7217)
16. Neural Networks (NEURONET - 8961)
17. Organic Materials for Electronics (NEOME - 6280)
18. Physics and Technology of Mesoscopic Systems (PHANTOMS II - 21945)
19. Requirements Engineering (RENOIR - 20800)
20. Superconducting (SCENET - 22804)
21. Uncertainty Techniques for Information Technology (ERUDIT - 8193)
Figure 3. Networked Centres of Excellence in the context of the European Commission’s
ESPRIT programme.
Elsewhere within the Commission there are related initiatives. There is, for instance, a
network to create a Global Electronic and Multimedia Information Systems for Natural
Science and Engineering458 (GLOBAL-INFO), and the fifth framework is aiming at a
global information ecology.
Such networks are also emerging in many fields. For example, the MEDICI Framework,
which constitutes a next phase in the MOU on cultural heritage, includes a European
Network of Centres of Excellence in Cultural Heritage. The High Level Group on
Audiovisual Policy has recommended the establishment of a network of European film
and television schools.459 A study by the Council of Europe has noted that there are some
eighty networks in the field of culture alone (Appendix 6). One of the challenges of the
MEDICI framework is to co-ordinate these efforts within a larger framework. An obvious
next step is for the networks of excellence in the scientific and technological field to
develop solutions which strengthen some aspects of the cultural networks with respect to
multimedia communication. In the near future it is likely that each major field will
develop its own network. These networks are one of the new ways to standards.
New Ways to Standards
In the past, standards went through a steady process that led first to the national standards
bodies and then to the International Standards Organisation (ISO) or International
Telecommunications Union (ITU). This traditional process was reliable but slow. It often
took five to seven years to arrive at an ISO standard. In new technologies, where the life
span of a product such as a storage disk or a Random Access Memory (RAM)
chip is often only five years, this process is too slow. As a result some critics have urged
that ISO be abandoned altogether.
To meet the challenges of a rapidly evolving system, the Internet Society’s Internet
Engineering Task Force (IETF) has developed Requests for Comments (RFCs), which
serve as a first important step toward consensus building. One of the more important
initiatives of the European Commission has been to launch Memoranda of Understanding
(MOU), which serve as informal consortia in bringing together major players in a field.
There have been and/or are, for instance, MOUs in Multimedia Access to Europe’s
Cultural Heritage, Electronic Commerce, Digital Video Broadcasting (DVB), a Global
System for Mobile (GSM) Communication and more recently, Learning Technology.
Increasingly these agreements go beyond the bounds of Europe. The European Union has
been sponsoring an Annotatable Retrieval of Information And Database Navigation
Environment (ARIADNE) for the development of information content metadata.460 There
is co-operation between this project and the efforts of the National Learning
Infrastructure Initiative461 (NLII), to produce an Instructional Management System462
(IMS) "to enable an open architecture for online learning". Similarly in the case of
library systems, there is an International Digital Libraries Research Program, which links
the British Joint Information Systems Committee (JISC) with the efforts of the American
National Science Foundation (NSF).463
The MOUs are part of a larger phenomenon which includes the rise of networks outlined
above as well as new consortia, fora and (task) groups. The concrete case of developing a
Universal Mobile Telecommunications System464 (UMTS) shows how these efforts are
often complementary. Initially there was a consortium of six key industry players
(Alcatel, Bosch, Italtel, Nortel, Motorola and Siemens),465 who agreed to co-operate in
developing third generation mobile networks which would combine Time-Division
Multiple Access (TDMA) and Code-Division Multiple Access (CDMA). In addition
there was an MOU, a Universal Mobile Telecommunications System (UMTS) Forum, a
Special Mobile Group of the European Telecommunications Standards Institute (ETSI
SMG) and a Task Group for Mobile Services of the Association of the European
Telecommunications and Professional Electronics Industry (ECTEL TMS). Organisations
such as the European Telecommunications Standards Institute (ETSI) work closely with
such groups to identify which technologies have the necessary qualities and guide these
through fast track standardisation within the ISO or ITU. In this way the five-year process
can frequently be reduced to one or two years.
In the past there was a quest for a single international standard in a domain. This remains
important in areas such as mobile telecommunications (UMTS) which entail bandwidth
which could interfere with other interests. In other areas such as digital signatures,
individual companies invariably produce their own competing solutions such that a single
standard is not feasible. This realisation has led to a subtle shift within the standards
community: from a quest for a single answer to an open framework wherein several
compatible alternatives can be used. In Europe, for instance, the Open Information
Interchange (OII) provides a context for such compatible alternatives. The new way to
standards is true interoperability with the existing frameworks. Thus there can be more
than one solution but they all need to function within the network.
Ubiquitous and Mobile Computing
The computer began as a static object, the size of a room, which slowly diminished until
it became an object on a desktop. The visions of two pioneers are changing that
paradigm. Mark Weiser (Xerox) has introduced the idea of ubiquitous computing,
whereby all the objects around us have sensors and can function as computers. Leonard
Kleinrock (of ARPA fame) has a vision of mobile computing, whereby there will one day
be no difference between the computational power accessible from the computer at my
desk or from a mobile device. His immediate concern is with conditions in a battlefield
but the same principles could be applied anywhere.
Many initiatives are underway, which are leading to a merger of the Public Switched
Telephone Network (PSTN) and the Public Land Mobile Network (PLMN). In the United
States, the IETF has a Mobile IP Working Group466 (RFC 2002). Within the European
Commission (DGXIIIb ACTS), there is a domain for Mobility, Personal and Wireless
Communications. This has produced its own vision for Future Public Land Mobile
Telecommunications Systems (FPLMTS),467 which includes a Digital Mobile System
(DSM), a Global System for Mobile Communications (GSM), plans for Digital European
Cordless Telecommunications (DECT) and a Pan European Paging System (ERMES).
Specific ESPRIT projects include Flexible Information and Recreation for Mobile Users
(FLIRT); High Performance Protocol Architecture (HIPPARCH),468 and the development
of a Mobile Digital Companion469 (Moby Dick). This is leading to many new mobile
devices.470
One of the significant consequences of this move towards mobile computing is a
redefinition or transformation of our basic communications devices. In the past we had
telephones for interactive one to one voice, radio for passive one to many voice,
television for passive one to many video, computers for interactive text and specialised
devices such as fax for sending and receiving text. The distinctions between these clearly
defined devices are blurring. Televisions increasingly have the capabilities of computers,
while computers are acquiring the functionalities of both televisions and radios.
Telephones and fax machines are now typically a single device. Mobile telephones
increasingly have panels with Internet access (e.g. Nokia and Nortel). There was once a
fundamental difference between regular fixed and mobile telephones. Some new devices
automatically switch between mobile and fixed numbers (e.g. Duet471 by the Danish
telecoms supplier Tele Danmark). Telephones are also being combined with Personal Digital
Assistants (PDAs such as the Palm Pilot), which were designed as address books and
schedule organisers (e.g. Qualcomm).472 Other devices combine the functions of cell-phone,
address book, e-mail and fax (e.g. Philips Accent).473 A consortium (Ericsson,
Motorola, Nokia and Psion) has formed a joint venture called Symbian, based on Psion’s
EPOC operating system.474 A next generation of mobile devices will include a Universal
Mobile Telecommunication System (UMTS,475 e.g. Siemens, Ericsson, Lucent, Nokia and
Nortel + Matsushita, i.e. Panasonic).476
Integrally connected with this transformation of electronic devices is the transformation
of their controls. In the case of computers, the control has typically been a mouse
connected by a wire to the computer. Already there are wireless and remote mice, and
there is a trend towards voice activation and gesture technologies. In the case of
television, the original controls on the set itself were replaced by a wireless remote
control. Hundreds of such devices were developed. A new version, called an Electronic
Programming Guide (EPG), is evolving. The need for a regulatory framework477 for such
devices has already been raised.
A further trend is that devices combining the functions of mobile phones and Personal
Digital Assistants (PDAs) will also become a new form of remote control, one which
includes the functions of an Electronic Programming Guide (EPG). In the future a device
which used to be a telephone will be our interface for controlling televisions, computers
and the flat-panel displays used in presentations. As yet there is no
definitive name for this new combination of devices. Some speak of a Personal
Handyphone System (PHS). Others speak of a Service Creation Environment (SCE) or a
Virtual Home Environment (VHE) and believe that this can be achieved with a Virtual
Environment Dialogue Architecture (VEDA).478
In the past six months there has been extraordinary activity in this domain. There has
been an accord whereby Sun’s Jini software (a subset of Java) will be used in conjunction
with the new Home Audio Visual Interoperability (HAVI)479 architecture introduced by a
consortium led by Philips, Sony and six other home consumer electronics companies.
This promises that consumers could control all the appliances in a networked home from
a personal computer, but could also use a television or even an all-in-one infrared
remote control device. This initiative has become the more significant because it is linked
with the Open Services Gateway Initiative (OSGI),480 which, in turn, links Philips
with a number of other key players (IBM, Sun, Motorola, Lucent, Alcatel, Cable &
Wireless, Enron, Ericsson, Network Computer, Nortel, Oracle, Sybase and Toshiba).
MIT and Motorola have announced a competing technology.481
Meanwhile there is the Bluetooth Consortium482 (consisting of IBM, Toshiba, Ericsson,
Nokia and Puma Technology), which aims at a single synchronisation protocol to address
end-user problems arising from the proliferation of mobile devices, including
smart phones, smart pagers, handheld PCs and notebooks. Also relevant is the Wireless
Application Protocol (WAP),483 which builds on many of the latest technologies, including
the Global System for Mobile Communications (GSM 900, 1800 and 1900 MHz).484
Finally there is the Voice eXtensible Markup Forum (VXML), in which AT&T, Lucent
Technologies, Motorola and seventeen other companies are working on a standard for
voice- and phone-enabled Internet access.
Whatever its eventual name, some combination or version of this set of devices is likely
soon to become the new battlefield for what are presently called the browser wars,
especially as these devices become equipped with voice commands and gesture
technologies. In the longer term these interfaces will very probably become direct neural
connections with the brain.
Implicit in all these developments is an unspoken assumption that wireless
communication is preferable to wired because it is more convenient. Some environmentalists
and researchers have suggested that wireless communications pose health risks: that users of
cellular telephones are more prone to cancer than those who use old-fashioned
telephones. These claims have not been irrefutably established, but they deserve very
close study. Otherwise there is a danger that we will create an extraordinary wireless
network that is as brilliantly convenient as it is lethally dangerous.
Miniaturisation
The general pattern from room-sized computers to personal and portable notebook type
computers has already been mentioned. The advent of holographic storage methods will
take this a considerable step forward. An emerging field is in the area of bio-computing
where living cells are part of the computational process. Meanwhile, there have been
breakthroughs at the atomic level. Research teams of IBM (Zurich and Watson) have
managed to write the letters of IBM in atoms and there is a growing conviction that
computation at the atomic level is possible. Oxford University has founded a Quantum
Computing Lab. The European Commission has a long term ESPRIT project (21945) on
the Physics and Technology of Mesoscopic Systems (PHANTOMS II). To be sure, some
remain sceptical of the feasibility of atomic computing. If it is successful, however, it will
mean that most of the physical encumbrances posed by contemporary computers will
disappear. Not only will wearable computers become an everyday experience; it will also
be possible to have computers everywhere, and ubiquitous, mobile computing will become a
reality. At this stage, questions of the best means to use the new technologies
will become paramount.
New Meta-Data
Recent developments in meta-data have been reviewed in the main text above, as have a
few suggestions for going further. In recent months there has been a growing concern
within the European community that the solutions promised by initiatives such as the
Dublin Core are short-term measures which do not sufficiently reflect the complexities
of the cultural and historical dimensions of knowledge.
There is a sense of need for long-term projects which will address fundamental questions
about space, time, individuals, concepts and narratives. In all of these, the interest
lies in arriving at a new, dynamic sense of knowledge which reflects cultural and
historical differences. In the case of space, for instance, it is not enough to have a static
map of a country. The boundaries of Tibet for someone from Tibet may be
different from Chinese claims concerning those boundaries. Moreover, the boundaries of
a country such as Poland change with the centuries and sometimes by the decade. We
need a new kind of meta-data which reflects such changes and which is reflected in our
standard software.
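By way of illustration only, a time- and viewpoint-qualified metadata record might take the following shape. The field names, the year-level date handling and the Polish examples are sketched for this book, not drawn from any existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class BoundaryClaim:
    # Hypothetical record: one viewpoint's claim about a territory's
    # boundary, valid only for a stated interval of years.
    territory: str
    claimant: str      # whose view this is (a cultural or political viewpoint)
    valid_from: int    # year the claim begins to apply
    valid_to: int      # year the claim ceases to apply
    boundary: str      # reference to a boundary description or map

@dataclass
class PlaceRecord:
    name: str
    claims: list = field(default_factory=list)

    def boundaries_in(self, year: int) -> list:
        """All claims applicable in a given year -- possibly several,
        one per viewpoint, rather than a single 'true' boundary."""
        return [c for c in self.claims if c.valid_from <= year <= c.valid_to]

poland = PlaceRecord("Poland")
poland.claims.append(BoundaryClaim("Poland", "Second Republic", 1918, 1939, "map:1921"))
poland.claims.append(BoundaryClaim("Poland", "People's Republic", 1945, 1989, "map:1945"))
print([c.claimant for c in poland.boundaries_in(1930)])
```

The essential point is that a query for a place and a year returns a set of claims rather than one static map, so that differing cultural and historical viewpoints coexist in the record.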
Agents
Central to such questions of use is the role of the individual in the process. In the case of
physical tasks, it is generally assumed that everything that can physically be done will be
automated and replaced by machines. In this context it is more than somewhat ironic that
the so-called “new” approach to learning, unwittingly inspired by Dewey’s pragmatism,
places ever more emphasis on the need for “learning by doing.” With such a goal we
could be doing ourselves out of more than a job.
This pragmatic Dewey-doing is frequently linked to problem-oriented learning and with
the constructivist school of educational theory, which in turn has its roots in the
theories of American psychologists of the 1950s and 1960s who, drawing on the
assumptions of the behaviourist school, emphasised problem solving. Implicit in problem
solving is that the individual responds to an existing problem. The agenda is actively set
by the problem, to which the individual passively responds. Learning to solve other
people’s problems does not necessarily prepare one to identify problems, set agendas,
give new directions or develop visions. Our educational system is preparing persons to be
good employees, at best good executives, i.e. persons who execute someone else’s plans.
But how do we learn to develop independent plans? How do we learn to articulate a
vision?
In the physical world, the advent of automation brought robots, which were initially
envisaged as metal, hardware butlers to help with various tasks. Gradually robots are
taking over ever more physical tasks. Agents can be seen as software versions of hardware
robots: butlers for mental spaces rather than for living-rooms. Their precise functions will
vary with our conceptions of knowledge. If all of our intellectual activities are seen as
limited to problem solving, then there is a hope that mental butlers will do all our work.
But is this a hopeful scenario? In our view it is not. If agents solve given problems, this
does not solve the challenge of identifying the problems to be given, of thinking about new
problems and of reflecting on possible areas of research.
Those engaged in Human Computer Interaction (HCI) long ago recognised the
significance of sociology, psychology and even philosophy in solving some of these
challenges. These domains will become much more significant in the decades to come as
the initial technical challenges are overcome. Soon the question will no longer be what
can the machine do but rather: if machines can do everything what then should humans
do? What is our role in a world in which we have automated all our past functions?
Fortunately, some of the leading projects are taking an interim pragmatic approach which
leaves aside some of the more thorny philosophical issues underlying agents. For
example, the Foundation for Intelligent Physical Agents (FIPA) is working on standards
for at least seven aspects of agents: 1) agent management; 2) agent-to-agent
communication language; 3) agent/software integration; 4) personal travel assistance; 5)
personal assistant; 6) audio-visual entertainment; and 7) network management or
provisioning. An ESPRIT project (28831),485 led by the Swedish Institute of Computer
Science (SICS), deals with Multimedia Access through Personal Persistent Agents
(MAPPA).
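To suggest what an agent-to-agent communication language involves, here is a small sketch in the general style of FIPA’s Agent Communication Language. The Python rendering, the field names and the trivial travel-assistance behaviour are illustrative assumptions, not FIPA’s actual specification:

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    # Sketch of an agent-communication message in the general style of an
    # agent communication language (field names are illustrative).
    performative: str   # the communicative act, e.g. "request" or "inform"
    sender: str
    receiver: str
    content: str        # the actual request or assertion
    ontology: str       # the shared vocabulary the content is expressed in

def handle(msg: AgentMessage) -> AgentMessage:
    """A trivial travel-assistance agent: answer flight requests, decline the rest."""
    if msg.performative == "request" and "flight" in msg.content:
        return AgentMessage("inform", msg.receiver, msg.sender,
                            "flight booked", msg.ontology)
    return AgentMessage("not-understood", msg.receiver, msg.sender,
                        msg.content, msg.ontology)

reply = handle(AgentMessage("request", "user-agent", "travel-agent",
                            "book flight Toronto-Milan", "travel"))
print(reply.performative)   # the reply is itself a well-formed message
```

Standardising the message structure, rather than the agents themselves, is what allows agents from different vendors to interoperate.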
In the past there was a quest to create autonomous robots. This quest continues in projects
such as those of the Autonomous Undersea Systems Institute (AUSI).486 Recently there
has been a trend towards mobile agents and, not surprisingly, there is also a
corresponding quest to produce autonomous agents. For instance, a long-term ESPRIT
project (20185) is devoted to the Navigation of Autonomous Robots via Active
Environmental Perception (NARVAL).
Starlab, a leading-edge laboratory, is exploring the potentials of agent landscapes,
focussing on future generations of software agent concepts, technologies and
applications, with special attention to three elements: 1) scaling issues: from software
agent to virtual societies; 2) communityware: agents as a tool for enhancing social life;
3) wearable interfaces for co-habited mixed realities. Such innovations in agents are, in
turn, leading to a new preoccupation with interfaces.
New Interfaces
New interfaces have been a recurrent theme of the essays in this book. Chapter three
focussed particularly on new interfaces for cultural heritage and questions of moving
from two- to three-dimensional spaces. A recent development has been the rise of
social interfaces. For instance, the Mitre Corporation has a series of projects dedicated to
Intelligent Multimedia Interfaces.501 More significantly, the European Commission has
initiated a series of projects under the heading Intelligent Information Interfaces (I3,
Icube, figure 3).

AMUSEMENT487: virtual amusement space; games; representation of individuals and
crowds, avatars and social interaction techniques.
CAMPIELLO488: dynamic exchange of information and experiences between the
community of people living in historical cities of arts and culture, their local cultural
resources and foreign visitors.
COMRIS489 (Co-habited Mixed Reality Inhabited Systems): wearable assistant (‘parrot’)
to enhance participation in large-scale events (conference, trade fair) occurring in
mixed-reality spaces that are co-habited by real and software agents.
CO-NEXUS490: dynamic database that can be approached by a personal agent based on
real-life experiences.
ERENA491 (Electronic Arenas for Culture, Performance, Art and Entertainment):
electronic arenas; multi-user virtual environments, social interaction, agents, crowds,
mixed reality, navigation, virtual spaces, networking.
ESCAPE492: heterogeneous large-scale landscapes capable of allowing a wide range of
different spaces to coexist.
HIPS493 (Hyper-Interaction within Physical Space): directed at tourists, allowing them to
navigate “a physical space and a related information space at the same time, with a
minimal gap between the two.”
LIME494 (Living Memory): community, collective memory, social interaction, ubiquitous
access, intelligent agents, software agents, representation and presentation of
information space, sociological and ethnographic studies.
MAYPOLE495: taking children to school, “connecting people, places and objects to
perform a shared activity.”
MLOUNGE496 (Magic Lounge): whiteboard, web-based tools, multi-party
communication, for “a virtual meeting place for the members of a geographically
distributed community to share knowledge and experience.”
PERSONA497 (PERsonal and SOcial Navigation): an approach to navigation based on a
personalised and social navigation paradigm with socially-based interaction, individual
differences, user analysis, agents, narratives, social navigation.
POPULATE498: avatar kiosks to automatically build avatars for a large number of people.
PRESENCE499: new interaction paradigm with pleasurable multi-modal input/output
devices; support assistance, communications and accessing mobility; elderly people.
Figure 3. Thirteen projects within the EC’s Intelligent Information Interfaces (I3,
Icube)500 initiative.
These thirteen initial projects have recently been expanded to include twelve others,
especially in the context of children’s education, such as Kidslab, Kidstory, Playground
and Stories, under the rubric of I3 Experimental School Environments (ESE) projects.502
Striking in these projects is how mobile techniques and agents are being combined with
navigation principles in new ways. This trend is also evidenced in a project at Starlab
called Technology for Enabling Awareness (TEA), which specifically aims:
to develop and distribute an easily integrable component for GSM phones,
Personal Digital Assistants, palmtops and portable computing devices in general,
which will allow such devices to get a useful degree of context-awareness.…
Context awareness, as defined by TEA, is obtaining knowledge about the user's
and IT device's state, including surroundings, situation, activity and to a lesser
extent, location, through integrating a variety of external sensors.
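The kind of sensor integration TEA describes can be suggested with a small sketch. The sensor names, thresholds and context categories below are invented for illustration; they are not TEA’s actual design:

```python
def infer_context(readings: dict) -> str:
    """Combine several crude sensor readings into a single guess about the
    user's and device's situation -- the essence of context awareness.
    Thresholds and categories here are invented for illustration."""
    light = readings.get("light", 0)            # ambient light, lux
    motion = readings.get("acceleration", 0.0)  # rough measure of movement
    noise = readings.get("noise", 0)            # ambient sound, dB
    if motion > 1.0 and noise > 60:
        return "walking outdoors"
    if motion < 0.1 and light < 10:
        return "device in pocket or bag"
    if noise > 70:
        return "in a meeting or crowd"
    return "stationary, device in use"

# A dark, still, quiet reading suggests the device is put away.
print(infer_context({"light": 5, "acceleration": 0.05, "noise": 40}))
```

No single sensor is decisive; it is the combination of cheap, individually unreliable readings that yields a useful degree of context awareness.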
Two ESPRIT Preparatory Measures, Connected Community (22603 - C2, augmented
reality) and Inhabited Spaces for Community, Academy, Polity and Economy (INSCAPE,
22642),503 are also exploring aspects of these problems. In these scenarios agents will
not replace humans. Rather, they will enter into an interactive “relationship” with users.
In the past, relationships, especially social relationships, have been entirely between and
among individual humans. In the past decades there has emerged a field of human
computer interaction. New in these recent developments is the assumption that avatars
and agents are more than software imitations of human characteristics: that they can
generate not just emoticons or sporadic emotions, as witnessed with Tamagotchi toys, but
a complete spectrum of relationships. If this is truly so, then we have entered an entirely
new era of human computer interaction, which will bring forth new branches of
sociology.
Evaluation
Another of the basic trends in the past years has been an ever greater effort to evaluate
the consequences of new technologies. The European Commission, for instance, has
developed a series of methods used in Requirements Engineering and Specification in
Telematics504 (RESPECT), which are linked with their Information Engineering Usability
Support Centres505 (INUSE). These include: 1) Measuring Usability of Systems in
Context506 (MUSiC); 2) Toolbox Assisted Process (MAPI); 3) Usability Context
Analysis507 (UCA); 4) Performance Measurement Method508 (PMM); 5) Software
Usability Measurement Inventory509 (SUMI); and 6) Diagnostic Recorder for Usability
Measurement510 (DRUM). There is now a Systematic Usability Evaluation511 (SUE).
There are also Methods and Guidelines for the Assessment of Telematics Application
Quality512 (MEGATAQ), which include three basic features: MEGATAQ Assessment
Reference Checklist513 (MARC); MEGATAQ Usage Scenario Checklist (MUSC) and
MEGATAQ Anticipated Consequences Checklist (MACC). In addition there are a
number of specific evaluation tools (figure 4).
In Austria, there is a Center for Usability Research and Engineering (CURE),514 which
is linked with Java development. In addition there are two long-term ESPRIT projects,
Design for Validation515 (DEVA) and Resource Allocation for Multimedia
Communication and Processing Based on On-Line Measurement516 (MEASURE). Yet
another related ESPRIT project, TYPES, is devoted to the development of proofs. If
one can determine the veracity of claims, this is a further step towards reliable evaluation.
Context-of-Use Checklist                          (MCUC)
Time Line Analysis                                (TLA)
Human Reliability Assessment                      (HRA)
Diagnostic Recorder for Usability Measurement     (DRUM)
Subjective Mental Effort Questionnaire            (SMEQ)
NASA-Task Load IndeX                              (NASA-TLX)
Software Usability Measurement Inventory          (SUMI)
Heart Rate Variability                            (HRV)
Network Performance Quality                       (NetPerf)
Measuring the Usability of Multi-Media Systems    (MUMMS)
Multimedia Communication Questionnaire            (MCQ)
The Job Content Questionnaire                     (JCQ)
The Extended Delft Measurement Kit                (EDMK)
Computer Logging                                  (CL)
Critical Incidents                                (CI)
Figure 4. List of specific evaluation tools.
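To give a flavour of what such performance measures compute, here is a sketch loosely following the MUSiC idea that effectiveness combines the quantity and quality of task output, while efficiency relates effectiveness to time. The formulas and figures are simplified illustrations, not the official method:

```python
def effectiveness(quantity: float, quality: float) -> float:
    """Task effectiveness as a percentage: how much of the task was done
    (quantity) and how well (quality), both expressed as proportions 0..1."""
    return quantity * quality * 100

def efficiency(quantity: float, quality: float, minutes: float) -> float:
    """Effectiveness achieved per unit of task time."""
    return effectiveness(quantity, quality) / minutes

# A user completes 90% of a task at 80% quality in 12 minutes;
# an expert completes all of it perfectly in 6 minutes.
user = efficiency(0.9, 0.8, 12)
expert = efficiency(1.0, 1.0, 6)
relative_user_efficiency = 100 * user / expert
print(round(relative_user_efficiency, 1))   # user works at 36.0% of expert efficiency
```

The attraction of such measures is that they turn vague impressions of “usability” into numbers that can be compared across systems and across versions of a system.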
Requirements Engineering
In theory, the advent of object-oriented programming introduced the possibility of
reusable code. An important article by Nierstrasz, Tsichritzis, de Mey and Stadelman
(1991) outlined a vision of where this could lead:517
We argue that object-oriented programming is only half of the story. Flexible,
configurable applications can be viewed as collections of reusable objects
conforming to standard interfaces together with scripts that bind these objects
together to perform certain tasks. Scripting encourages a component-oriented
approach to application development in which frameworks of reusable
components (objects and scripts) are carefully engineered in an evolutionary
software life-cycle, with the ultimate goal of supporting application construction
largely from these interchangeable, prefabricated components.…We conclude
with some observations on the environmental support needed to support a
component-oriented software life-cycle, using as a specific example the
application development environment of ITHACA, a large European project of
which Vista is a part.518
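The argument — reusable objects conforming to standard interfaces, bound together by scripts — can be suggested in a short sketch. The component names and the single process() interface are invented for illustration; this is not ITHACA’s or Vista’s actual code:

```python
# Reusable components conforming to a standard interface: each exposes
# a single process(text) -> text operation.
class Uppercase:
    def process(self, text: str) -> str:
        return text.upper()

class Reverse:
    def process(self, text: str) -> str:
        return text[::-1]

class Truncate:
    def __init__(self, n: int):
        self.n = n
    def process(self, text: str) -> str:
        return text[:self.n]

def script(components, text):
    """The 'script': a thin layer that binds prefabricated components
    together to perform a task, without modifying any of them."""
    for c in components:
        text = c.process(text)
    return text

# Two applications assembled from the same parts, differing only in the script.
print(script([Uppercase(), Truncate(5)], "heritage"))   # HERIT
print(script([Reverse(), Uppercase()], "heritage"))     # EGATIREH
```

The components never change; new applications arise from new scripts, which is precisely the component-oriented life-cycle the article describes.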
A consequence of this article was new research into components, scripts and applications,
the concept of active compound documents and principles of scientific workflow. In all
of this, requirements have become ever more important. Since 1994 there has been a biennial
International Conference on Requirements Engineering (ICRE). Through the ESPRIT
Network of Excellence (20800) the European Commission has introduced a
Requirements Engineering Network of International Co-operating Research Groups519
(RENOIR), which includes some sixty institutions in Europe and elsewhere with a
specific goal:
to provide a framework for co-ordinated joint research in requirements engineering
related to industrial needs; to support the diffusion of requirements engineering
research; to provide requirements engineering research training; to support
technology transfer in requirements engineering.
To this end RENOIR brings together all the key European research teams from
industry, academia, and research centres. RENOIR focuses on a set of shared
technical goals relating to: the context in which the requirements engineering
process takes place; the groundwork necessary for requirements engineering; the
acquisition of the "raw" requirements; rendering these requirements usable through
modelling and specification; analysis of the requirements; measurement to control
the requirements and systems engineering process; communication and
documentation of the results of requirements engineering. RENOIR will combine
process and artefact-centred approaches to requirements engineering and will draw
on experimental, conceptual and formal research methods.
Requirements engineering is the branch of systems engineering concerned with the
real-world goals for, services provided by, and constraints on a large and complex
software-intensive system. It is also concerned with the relationship of these factors
to precise specifications of system behaviour, and to their evolution over time and
across system families. Put crudely requirements engineering focuses on
improvements to the front-end of the system development life-cycle. Establishing
the needs that have given rise to the development process and organising this
information in a form that will support system conception and implementation.
Requirements engineering provides the tools, concepts and methods which mediate
between information technology service and product providers and information
technology service and product users or markets. It is difficult to overstate the
importance of requirements engineering to industrial and commercial
competitiveness or to the provision of societal services. An information technology
product or service which does not meet the requirements of users, or which cannot
be identified with the requirements of a market sector, will not be used, sold or
yield social benefit.
This consortium is providing a framework for new levels of evaluation and for a
cumulative set of building blocks: software elements on demand. Some research is
already moving more explicitly in this direction, such as the WISE project, an ARPA-funded
initiative in conjunction with the ETH (the Swiss Federal Institute of Technology) in
Zurich, or the Aurora520 project at Bellcore in the United States.
The U.S. Army, which spends $50 billion on software annually, has become very
interested in these problems of reusable code. For instance, the US Army Research Office
has a Mathematics and Computer Science Division with a long term project devoted to
“the Army after next: 2020-2035.” Here the emphasis is on Software and Knowledge
Based Systems521 (SKBS), an automated compiler system, software prototyping
development and evolution (SPDE), a concurrency workbench, an iterative rapid
prototyping process and distributed iterative simulation.
At present there is a great deal of hype about network computers.522 Unlike personal
computers, which require considerable software locally, the network computer, we are
told, will have a minimum of local capacity and call upon software packages from the
server side. The revolution in requirements engineering will eventually take this principle
many stages further. An individual or company will state their needs, i.e. list the
operations which they would like their software to accomplish. This request will then
bring together the relevant components of existing code and combine them in ways
suited to the users’ needs. This is a rather different world from the view from Mr Gates’
windows but, in the context of a cumulative object-oriented approach, it is entirely possible.
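A minimal sketch of this “software on demand” idea — a registry of prefabricated components queried by stated needs — might look as follows. The registry, its vocabulary of needs and the toy components are invented for illustration:

```python
# A registry mapping stated needs to prefabricated components (here, plain
# functions on text). A user lists needs; matching components are assembled.
REGISTRY = {
    "spell-check": lambda text: text.replace("teh", "the"),
    "word-count": lambda text: f"{text} [{len(text.split())} words]",
}

def assemble(needs):
    """Return a composite application built only from the requested parts."""
    missing = [n for n in needs if n not in REGISTRY]
    if missing:
        raise KeyError(f"no component for: {missing}")
    steps = [REGISTRY[n] for n in needs]
    def application(text):
        for step in steps:
            text = step(text)
        return text
    return application

app = assemble(["spell-check", "word-count"])
print(app("teh digital age"))   # the digital age [3 words]
```

The user never sees the components, only the assembled behaviour: the list of needs stands in for the requirements specification, and the registry stands in for the cumulative pool of reusable code.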
Precision and Fuzziness
The cumulative view is not the only one. Some see the advent of new technology more as
a revolution, a watershed, a paradigm shift. They would claim that the traditions of
classing and ordering knowledge are artefacts of an age of the printed book which is now
passé: the mania for precision, for ordering and classing knowledge, is no longer necessary.
The advent of scatter-gather techniques makes possible new kinds of searching which no
longer require expertise on the part of the searcher. Precise questions and queries are no
longer needed. Organisations such as the International Standards Organisation (ISO), the
International Society for Knowledge Organisation and the International Federation of
Documentation, the purveyors of the major classification systems, may be well
intentioned but represent an effort-to-result ratio which is no longer defensible. It is
enough to type in something vaguely approaching the problem and the machine will do
the rest. Algorithms will search and find all possible related terms. Such claims are, for
instance, made concerning the Aqua browser. Fuzzy logic, we are told, can replace
traditional classing efforts and offer us a richer palette of associations.
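The contrast can be made concrete with a sketch: an exact, controlled-vocabulary lookup beside a fuzzy match that tolerates vague input but returns only a ranked list, with no criterion beyond a similarity score. The catalogue entries and threshold are invented for illustration:

```python
from difflib import SequenceMatcher

CATALOGUE = ["perspective", "perspectiva", "proportion", "projection"]

def exact_lookup(term: str):
    """Controlled-vocabulary search: a precise term or nothing."""
    return [t for t in CATALOGUE if t == term]

def fuzzy_lookup(term: str, threshold: float = 0.6):
    """Fuzzy search: rank everything vaguely similar to the input."""
    scored = [(t, SequenceMatcher(None, term, t).ratio()) for t in CATALOGUE]
    return sorted([(t, round(s, 2)) for t, s in scored if s >= threshold],
                  key=lambda pair: -pair[1])

print(exact_lookup("perspective"))   # precise term: one certain answer
print(fuzzy_lookup("perspectve"))    # misspelt term: plausible candidates, ranked
```

The fuzzy method rescues the misspelt query, which the exact method would simply reject; but it offers only scores, not the disciplined distinctions between near-identical terms (perspective versus perspectiva) that a controlled vocabulary provides.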
One of the recurrent themes of this book has been that the new technologies can be used
to harness traditional methods of knowledge organization in new ways. Our view is that
evolution is embracing not replacing: that advances come through integrating traditional
methods with new technology. Hence natural language may be very attractive and useful
in some circumstances, but this does not diminish the power of careful distinctions
between words and concepts, between random terms and controlled vocabularies. There
are times when fuzzy questions and fuzzy answers are possible and useful. At other times
precision is preferable. Questions such as “How many children do you have?” or “How
much do I owe the bank?” become more disturbing as they become more fuzzy and
imprecise. The discipline of indexers and classers of knowledge needs somehow to be
reflected in the digital age. It cannot simply be replaced by automatic methods. We are
richer if we can do more, rather than replacing certainty with approaches which give us a
range of plausible answers and no criteria for deciding among the alternatives.
Conclusion
Too often in the past, and too frequently even today, we find ourselves in situations where
even what seems to us a relatively simple task is not possible because the programme
does not allow it. The programmer did not think of it; therefore it is not possible. This is
changing. Last year the head of Nortel set out a vision that we need Webtone:
something for the web that will be as predictable, self-evident and reliable as the dial
tone on a regular telephone. Within a few months that same goal was on the web site of
Sun Microsystems as one of their chief goals. The need for reliable approaches is
becoming universally supported. The enormous rise of fora, consortia, memoranda of
understanding and other consensus building measures confirms that the major firms have
recognised the fundamental importance of standards.
Until recently the new electronic devices were modelled almost entirely on the
assumptions underlying the problem solving paradigm. Whenever there was a task, one
defined the problem it entailed and designed another piece of software to solve that
problem. In the past years both hardware and software have begun to acquire
“intelligence”: to function autonomously without the need for human intervention.
Intelligence in this sense simply means the ability to carry out predefined commands, to
achieve someone else’s goals. But ultimately these are simply tools. Human intelligence
is about something more: about setting goals, deciding whether they are worth pursuing,
choosing among alternatives we ourselves have created. Will we try to create machines to
do this also or will we decide that it is wiser to keep this domain for ourselves?
List of Plates
1. Schema showing a basic shift from Graphical User Interfaces (GUI) to Tangible
User Interfaces (TUI) from the Tangible Bits project in the Things That Think section of
the Massachusetts Institute of Technology (MIT), Cambridge, Mass.
http://tangible.www.media.mit.edu/groups/tangible/papers/Tangible_Bits_html/index.html
2. Virtual Society project showing video conferencing and virtual reality from
Sony’s Computer Science Lab (CSL), Tokyo.
http://www.csl.sony.co.jp/project/US/index.html
3. Smart Room Interface showing a virtual dog from MIT.
http://vismod.www.media.mit.edu/vismod/demos/smartroom/
4. Illustration of gesture technology in Dream Space by Mark Lucente at IBM.
http://www.research.ibm.com/natural/dreamspaceindex.html
5. MetaDESK Tangible User Interface (TUI) from the Things That Think Lab at
MIT (as in 1 above).
http://tangible.www.media.mit.edu/groups/tangible/papers/Tangible_Bits_html/index.html
6. The NaviCam Augmented Reality System from Sony’s CSL. The computer on
the left has a camera which views a shelf of books. Superimposed on this image (right) is
a list of books which have been added today.
http://www.csl.sony.co.jp/projects/ar/navi.html
7. Responsive workbench of the Gesellschaft für Mathematik und
Datenverarbeitung (GMD), Sankt Augustin, applied to an architectural scene.
http://www-graphics.stanford.edu/projects/RWB
8. Responsive workbench as in 7 above used to display a human skeleton in virtual
reality.
9. Visualisation of information in the form of an InfoCube by Jun Rekimoto from
Sony’s CSL, Tokyo.
http://www.csl.sony.co.jp/person/rekimoto/cube
10. Interconnections of words on different pages of a book using IBM’s Data
Visualizer.
http://www.tc.cornell.edu/Visualization/contrib/cs490-95to96/tonyg/language.vis1.html
11. Three-dimensional landscape reflecting the frequency of themes in different papers
from Sandia National Labs, Albuquerque, New Mexico.
http://www.cs.sandia.gov/VIS/science.html
12. A related method using the SPIRE software from Pacific Northwest Labs.
http://www.multimedia.pnl.gov:2080/showcase/pachelbel.cgi?/it-content/spire.node
13. Hierarchically related concepts displayed as cone-trees in the LyberWorld Graphical
User Interface of the GMD, Darmstadt, based on the cone-tree perspective methods
of Stuart Card, Xerox, Palo Alto.
http://www.darmstadt.gmd.de/oasys/projects/lyberworld
14. The System for Automated Graphics and Explanation (SAGE) from Carnegie
Mellon University.
http://www.cs.cmu.edu/Groups/sage/project/samples/sdm/figure1.gif
15. Three-dimensional visualisation of three item rules using the IBM Data
Visualizer.
http://www-i.almaden.ibm.com/cs/quest/demo/assoc/general.html
16. Virtual Environments for Training (VET) developed by Lockheed-Martin’s Palo
Alto Visualisation Lab, the University of Southern California and the U.S. Navy.
http://vet.parl.com/~vet/vet97images.html
17. Another image from the Virtual Environments for Training (VET) project.
18. Aerial view of the Stanze (Rooms) by Raphael at the Vatican reproduced in
virtual reality by Infobyte.
http://www.infobyte.it/catalogo/indexuk.html
19. Reconstruction in virtual reality by Infobyte, Rome, of a fresco showing the
Incendio nel Borgo (Fire in the Suburb) in the Stanze by Raphael.
http://www.infobyte.it/catalogo/indexuk.html
20. Reconstruction in virtual reality by Infobyte, Rome, of the space represented in
the fresco showing the Incendio nel Borgo.
http://www.infobyte.it/catalogo/indexuk.html
154
Appendix 1 Key Elements of the SUMMA Model (©1997) as a Framework
for a Digital Reference Room
Access (User Choices)
1. Cultural Filters
2. Access Preferences Views
3. Level of Education
4. Purpose
5. Preliminary Search Tools
1. URI, URL, URN
2. MIME Types
3. Site Mapping
4. Content Mapping
5. Abstracts
6. Strategies
1. Random terms
2. Personal lists
3. Data base fields
4. Related terms
5. Subject Headings
6. Standard Classifications
7. Multiple Classifications
Content Negotiation (e.g. Copyright)
Rating System
e.g. Protocol for Internet Content Selection (PICS)
Library Meta-Data A: Dublin Core Fields
Warwick Framework
Schema of Subject Headings Language
Library Meta-Data B: Content Pointers
Who What Where When How Why
1. Terms
Classifications
2. Definitions
Dictionary
3. Explanations Encyclopaedias
4. Titles
Card Catalogues, National Catalogues, Bibliographies
5. Partial Contents Abstracts, Reviews, Citation Indexes
Contents of Digital Reference Room
1. Terms
Classifications
2. Definitions
Dictionary
3. Explanations Encyclopaedias
4. Titles
Card Catalogues, National Catalogues, Bibliographies
5. Partial Contents Abstracts, Reviews, Citation Indexes
Contents of Digital Library, Museum Primary Sources Facts, Paintings
6. Full Contents
Contents of Digital Library, Museum Secondary Sources Interpretations
7. Internal Analyses
8. External Analyses
9. Restorations
10.Reconstructions
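The Dublin Core fields named under Library Meta-Data A above can be made concrete. A minimal sketch in Python: the fifteen element names come from the Dublin Core Metadata Element Set (version 1.1); the sample record and the function name are our own illustrations, not part of the SUMMA model itself.

```python
# The 15 elements of the Dublin Core Metadata Element Set (version 1.1).
DUBLIN_CORE_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}

def validate_dc(record):
    """Return the set of keys that are not valid Dublin Core elements."""
    return set(record) - DUBLIN_CORE_ELEMENTS

# A hypothetical record for a digitised fresco in a digital reference room.
record = {
    "title": "Incendio nel Borgo",
    "creator": "Raphael",
    "type": "Image",
    "format": "image/jpeg",
    "language": "it",
}

assert validate_dc(record) == set()              # all fields are legal DC elements
assert validate_dc({"colour": "red"}) == {"colour"}
```

Such a check is the simplest form of the content negotiation envisaged above: a reference room can refuse or flag records whose fields fall outside an agreed element set.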
155
Appendix 2. Taxonomy of Information Visualization User Interfaces by Data Type
Chris North, University of Maryland at College Park523
Title | Institution/Author | Links to pages, publications

Temporal (i.e. Timelines, histories)
LifeLines | HCIL-Maryland | Homepage
LifeStreams | Yale | Homepage, Company
MMVIS: Temporal Dynamic Queries | U Michigan: Hibino | Thesis
Perspective Wall | Xerox | CHI91, (Information Visualizer)
VideoStreamer | MIT | Homepage

1 Dimensional (i.e. Linear data, text, lists)
Document Lens | Xerox | (see Info. Visualizer)
Fractal Views | UEC-Japan: Koike | TOIS'95
SeeSoft | Lucent / Bell Labs | HomePage, Brochure, IEEE Computer 4/96
Tilebars | Xerox | CHI'95, (see Information Visualizer)
WebBook | Xerox | (see Info. Visualizer)

2 Dimensional (i.e. Planar data, images, maps, layouts)
ArcView | ESRI | Homepage
Fisheye/Distortion views | | Resource Page
GroupKit | Calgary | HomePage
Information Mural | GVU-GeorgiaTech | Homepage
Pad++ | New Mexico | Homepage
Powers of Ten | | Homepage

3 Dimensional (i.e. Volumetric data, 3D images, solid models)
The Neighborhood Viewer | Minnesota | Homepage
Visible Human Explorer (VHE) | HCIL-Maryland | Homepage
Volvis | SUNY-SB | Homepage
Voxelman | IMDM-Hamburg | Homepage

Multi-Dimensional (i.e. Many attributes, relational, statistical)
Filter-Flow | HCIL-Maryland | Paper
Dynamic Queries, Query Previews (HomeFinder, FilmFinder, EOSDIS) | HCIL-Maryland | HomePage
Influence/Attribute Explorer | Imperial College | HomePage
LinkWinds | JPL-NASA | HomePage
Magic Lens | Xerox | Homepage
156
Parallel Coordinates | | Thesis
Selective Dynamic Manipulation (SDM) | CMU | Homepage
Spotfire | IVEE Development | Homepage
Table Lens | Xerox | CHI94, (Info. Visualizer)
Visage | CMU | Homepage
VisDB | Munich | Homepage
Worlds Within Worlds | Feiner | UIST90
XGobi | AT&T Labs, Bellcore | Homepage

Hierarchical (i.e. Trees)
Cone/Cam-Trees | Xerox | CHI91, (Info. Visualizer)
Elastic Windows | HCIL-Maryland | Homepage, Report
Fractal Views | UEC-Japan: Koike | VL'93
Hyperbolic Trees | Xerox | CHI95
Info Cube | Sony | Homepage
TreeBrowser (Dynamic Queries) | HCIL-Maryland | Abstract
TreeMap / WinSurfer | HCIL-Maryland | Viz91, Homepage, Winsurfer, Widget
WebSpace | U Minnesota | Homepage

Network (i.e. Graphs)
Butterfly Citation Browser | Xerox | CHI'95, (Info. Visualizer)
Fisheye | | Paper
Galaxy of News | MIT | Description
Graphic History Browser | GVU-GaTech | HomePage
IGD (Interactive Graphical Documents) | Columbia: Feiner | Homepage
Intermedia | Brown | Homepage
Multi-Trees | Furnas | Homepage
Navigational View Builder | GVU-GaTech | HomePage, CHI'95
NETMAP | ALTA Analytics, Inc. | Homepage
RMM | Isakowitz | Homepage
SemNet | Bellcore | Paper
Themescape / SPIRE | PNL | Homepage, Abstract

WorkSpaces
CASCADE | Pittsburgh | Paper
Information Visualizer / 3D Rooms / Web Forager | Xerox | CG&A, DLib, Paper
Pad++ | New Mexico | Homepage
Personal Role Managers | HCIL-Maryland | Homepage
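Read as a table, the taxonomy above maps each data type to example systems and their institutions. A small sketch of how such a taxonomy could be queried (the entries are abridged from the list above; the dictionary and function names are our own):

```python
# Abridged version of North's taxonomy: data type -> (system, institution) pairs.
TAXONOMY = {
    "temporal": [("LifeLines", "HCIL-Maryland"), ("LifeStreams", "Yale")],
    "1-dimensional": [("SeeSoft", "Lucent / Bell Labs"), ("Document Lens", "Xerox")],
    "hierarchical": [("Cone/Cam-Trees", "Xerox"), ("TreeMap", "HCIL-Maryland")],
    "network": [("SemNet", "Bellcore"), ("Themescape / SPIRE", "PNL")],
}

def systems_for(data_type):
    """List the example systems recorded for a given data type."""
    return [name for name, _ in TAXONOMY.get(data_type, [])]

assert systems_for("hierarchical") == ["Cone/Cam-Trees", "TreeMap"]
assert systems_for("unknown") == []
```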
157
Appendix 3. Key Individuals in Human Computer Interface (HCI) and
Visualization
Name | Institution | System(s)
Aalbersberg, IJsbrand Jan | | SIRRA
Ahlberg, Christopher524 | Chalmers University of Technology | Filmfinder
Arents, Hans Christiaan525 | Katholieke Universiteit Leuven | Cube of Content
Bardon, Didier526 | IBM, Austin |
Belew, Richard K.527 | University of California, San Diego |
Benford, Steve528 | Nottingham | VR-VIBE
Bier, Eric529 | Xerox PARC | Magic Lenses530
Boyle, John531 | University of Aberdeen | Amaze
Bryson, Steve532 | NASA | Virtual Windtunnel
Bulterman, Dick533 | Vrije Universiteit, Amsterdam |
Buxton, Bill534 | Alias/Wavefront, Toronto |
Card, Stuart535 | Xerox PARC | Web Forager
Catarci, Tiziana536 | Rome, La Sapienza |
Chalmers, Matthew537 | Ubilab, Zurich538 | Bead-point cloud, Bead-landscape
Church, Ken539 | AT&T | Dotplot
Citrin, Wayne540 | University of Colorado, Boulder |
Colebourne, A.541 | Lancaster University |
Crouch, Donald B.542 | | Component Scale Display
Cruz, Isabel | Tufts543 |
Dieberger, Andreas544 | Emory University | Vortex545
Dix, Alan546 | Staffordshire University |
Eick, Stephen | AT&T |
Faieta, Baldo | | Social Insect
Fairchild, Kim Michael547 | Singapore National University |
Foley, James D.548 | Georgia Inst. of Technology, Mitsubishi | User Interface Design Env.
Fox, Edward A.549 | Virginia Tech | Envision
Fowler, Richard H.550 | Panamerican University | Information Navigator
Garg, Ashim551 | Brown University |
Glinert, Ephraim P.552 | University of Washington |
Gray, Peter M.D.553 | Aberdeen University |
Gray, Philip554 | University of Glasgow |
Grudin, Jonathan555 | University of California, Irvine |
Hearst, Marti556 | Berkeley | Cougar, Tilebars
Helfman, Jonathan | AT&T | Dotplot
Hendley, Bob557 | Birmingham University |
Hemmje, Matthias558 | GMD, Darmstadt | Lyberworld
Hollan, James D.559 | University of New Mexico | Pad++
Ingram, Rob560 | Nottingham University |
Ioannidis, Yannis E.561 | University of Wisconsin |
Jacob, Rob562 | Tufts |
John, Bonnie E.563 | Carnegie Mellon |
Johnson, Brian564 | University of Maryland, Synopsys |
Kimoto, Haruo | NTT565 |
158
Keim, Daniel A.566 | Munich, now Halle | (VisDB)
Kling, Ulrich567 | GMD Darmstadt |
Korfhage, Robert568 | University of Pittsburgh | (BIRD)569
Krohn, Uwe | Wuppertal | VINETA
Kurlander, David570 | Microsoft |
Lin, Xia571 | University of Kentucky | Reading Room HyperLibrary
Lunzer, Aran572 | Glasgow University | Reconaissance
Mariani, John | Lancaster University | TripleSpace573, QPIT574
Mendelzon, Alberto575 | University of Toronto | Hy+
Munzner, Tamara576 | Stanford |
Myers, Brad577 | Carnegie Mellon |
Navathe, Shamkant B. | | Tkinq
Nuchprayoon578 | Pittsburgh | GUIDO
Olsen, Dan R.579 | Carnegie Mellon | VIBE
Peeters, E. | Philips |
Pejtersen, Annalise Mark | Centre for Human Machine Interaction580 | Bookhouse581
Rao, Ramana582 | Xerox PARC, Inxight |
Rekimoto, Jun | Sony | Information Cube
Rose, Daniel583 | Apple | Piles, AIR, SCALIR
Salton, Gerard584 | Cornell University | Text
Shieber, Stuart M.585 | Harvard |
Shneiderman, Ben586 | University of Maryland |
Snowdon, Dave587 | Nottingham University | VR Vibe
Spoerri, Anselm588 | AT&T | Info Crystal
Stasko, John | Georgia Tech589 |
Strong, Gary590 | NSF, Arlington |
Veerasamy, Aravindan591 | | Tkinq
Walker, Graham592 | BT Labs |
Wickens, Chris593 | University of Illinois, Urbana-Champaign |
Williamson, Chris594 | | Dynamic Home Finder595
Wittenburg, Kent | Bellcore |
Zhang, Jiajie596 | Ohio State | DARE, TOFIR
159
Appendix 4. Major Projects in Information Visualisation
mostly additional to those discussed by Young (1996).
Europe
European Community Joint Research Centre (JRC), Ispra
Institute for Systems, Informatics and Safety (ISIS)
Advanced Techniques for Information Analysis
Data Visualization Group (DVG)597
European Computer Industry Research Centre (ECRC)598, Munich
Combination of Bull, ICL and Siemens
Advanced Information Management Systems
Distributed Computing
User Interaction and Visualisation Group599
Canada
National Research Council, Ottawa600
Institute for Information Technology
(IIT)
Al Hladny [email protected]
Tel. 613-993-3320
Human Computer Interaction
Integrated Reasoning
Interaction with Modelled Environments
Interactive Information
Seamless Personal Information
Visual Information Technology
Germany
Fraunhofer Gesellschaft
Institut für Graphische Datenverarbeitung601 (IGD), Darmstadt
Document Computing
Multimedia Electronic Documents
(MEDoc)
Intelligent Online Services
Multimedia Extension
(MME)
Mobile Information Visualization602
Active Multimedia Mail
(Active M3)
Location Information Services
(LOCI)
Visual Computing
Augmented Reality
Virtual Table
Abteilung Visualisierung und Virtuelle Realität, Munich
Gudrun Klinker603
Data Visualisation
Professional Television Weather Presentation (TriVis)
Gesellschaft für Mathematik und Datenverarbeitung (GMD)
Integrated Publication and Information Systems Institute (IPSI)
160
Co-operative Retrieval Interface
based on Natural Language Acts (CORINNA)604
Japan
Nara Institute of Science and Technology (NAIST)
Image Processing Lab605
Image Recognition
Image Sensing
Information Archaeology
Restoration of Relics using VR
Sony Computer Science Laboratory, Tokyo
Jun Rekimoto606
Katashi Nagao and Jun Rekimoto, Agent Augmented
Reality: A Software Agent Meets the Real World607
Trans-Vision Collaboration Augmented Reality Testbed608
Augmented Interaction
Navicam
Computer Augmented Bookshelf609
Virtual Society Information Booth610
University of Electro-Communications, Chofu, Tokyo
School of Information Systems
Information Visualization Lab611
Hideki Koike
Bottom Up Project Visualization
Enhanced Desk
Fractal Views612
Fractal Approaches to Visualizing Huge Hierarchies613
Vogue
University of Tokyo
Department of Information and Communication Engineering
Harashima and Kaneko Laboratory614
Professor Hiroshi Harashima and Masahide Kaneko
Takeshi Naemura Grad. Student
Cyber Mirage Virtual Mall with Photo-realistic Product Display
Integrated 3-D Picture Communication
(3DTV)
Interactive Kansei, Face Impression Analysis
Intelligent Hyper Media Processing
Multi-Modal Advanced Human-Like Agent
Information Filter
Department of Mechano-Informatics
Hirose Lab615
Professor Michitaka Hirose
161
Haptic Display
Image Based Rendering
Immersive Multiscreen Display
Portugal
University of Lisbon
Virtual Reality Lab616
Scientific Visualization
Sensory Ecosystems
Spatial Information Systems
United Kingdom
Cambridge University617
Rainbow Graphics Group618
Active Badges
Animated Paper Objects
Autostereoscopic Display
Mobile Computing
Multiresolution Terrain Modeling
Net White Board
Video User Interfaces
Loughborough University
Telecommunications and Computer Human Interaction Research Centre (LUTCHI)619
Advanced Decision Environment for Process Tools
(ADEPT)
Agent Based Systems
Development of a European Service for Information
(DESIRE)
on Research and Education
Digital Audio Broadcasting and GSM
(DAB)
Focussed Investigation of Document Delivery Option
(FIDDO)
Intelligent User Interfaces
Multi-Layered Knowledge Based Interfaces for Professional Users (MULTIK)
Multimedia Environment for Mobiles
(MEMO)
Resource Organisation and Discovery in Subject Based Services (ROADS)
Manchester Computing Centre620
Infocities
G-MING Applications
Janus Visualisation Gateway Project
Knowledge Based Interface for National Data Sets (KINDS)
Parallel MLn Project
Super Journal with Joint Information Systems Committee (JISC)
Manchester Visualization Centre621
University of Huddersfield
HCI Research Centre622
Xerox Europe, Cambridge
162
Context Based Information Systems (CBIS)623
United States
Georgia Institute of Technology (Georgia Tech)
Graphics Visualization and Usability Center624
Virtual Environments
Information Mural625
School of Civil and Environmental Engineering
Interactive Visualizer Project626
Scientific Visualization Lab627
Information Visualization (IV)
Qiang Alex Zhao628
IBM
Visualization Space: Natural Interface629
Marc Lucente630
Visualization Data Explorer631
L3 Interactive Inc., Santa Monica
Net Cubes632
Lucent Technologies
Visual Insights633
Stephen K. Eick [email protected]
Live Web Stationery634
Massachusetts Institute of Technology (MIT)
Visible Language Workshop635
Founded by Muriel Cooper636
Student: David Small637
NASA, Ames
Scientific Visualization638
MITRE639
Collaborative Virtual Workspace
Data Mining
Information Warfare
Nahum Gershon
Orbit Interaction640
Palo Alto
Jim Leftwich
Infospace: A Conceptual Method for Interacting
with Information in a 3-D Virtual Environment
163
Pacific Northwest National Laboratory,641 Richland, Washington
Auditory Display of Information
Automated Metadata Support for Heterogeneous Information Systems
James C. Brown
Spatial Paradigm for Information Retrieval and Explanation (SPIRE)
cf. Themescape642
(Irene) Renie McVeety
Starlight
Text Data Visualisation Techniques
John Risch
Rutgers University
Center for Computer Aids for Industrial Productivity643 (CAIP)
Grigore Burdea, Director
Multimedia Information System Laboratory
Adaptive Voice
Multimodal User Interaction
Sandia National Laboratories, Albuquerque, New Mexico; Livermore, California
Enterprise Engineering Viewing Environment644 (EVE)
Laser Engineered Net Shaping (LENS)
Synthetic Environment Lab645 (SEL)
Data Analysis
Data Fusion
Manufacturing
Medical
Modelling
Simulation
Advanced Data Visualization and Exploration
EIGEN-VR
Silicon Graphics Incorporated (SGI), Mountain View
File System Navigator646 (FSN)
Visual and Analytical Data Mining647
Ronny Kohavi
University of Illinois, Chicago (UIC)
Electronic Visualization Laboratory648 (EVL)
+ Interactive Computing Environments Lab (ICEL) = NICE
4D Math
Cave Applications649
Caterpillar: Distributed Virtual Reality650
CAVE to CAVE communications
Information Visualization, Pablo Project651
Biomedical Visualization652
164
University of Illinois, Urbana Champaign (UIUC)
National Center for Supercomputing Applications (NCSA)
Digital Information System Overview653
Electronic Visualization Lab
CAVE Applications654
Distributed Virtual Reality655
Visualization and Virtual Environments656
Information Technology657
Virtual Environments Group658 (VEG)
Cave Automatic Virtual Environments (CAVE)
Infinity Wall (I-Wall)
ImmersaDesk (I-Desk)
Renaissance Experimental Lab (REL)
Virtual and Interactive Computing Environments659 (VICE)
Beckman Institute Visualization Facility660
Virtual Reality Lab
World Wide Laboratory
Laser Scanning Confocal Microscopes (LSCM)
Magnetic Resonance Imaging (MRI)
Chickscope661
Scanning Tunneling Microscope (STM)
Transmission Electron Microscope (TEM)
University of Pittsburgh
Department of Information Science and Telecommunications, Pittsburgh
Michael Spring
Multilevel Navigation of a Document Space662
Docuverse
Landmarks
Mural
Tilebar
Webview
University of Texas-Pan American663
Document Explorer
Information Navigator
Semantic Space View
Xerox PARC
Cone Tree
Document Lens
Information Visualiser
Perspective Wall
Table Lens664
165
Appendix 5 Library Projects665
In the early phases this process was referred to as library automation or electronic
libraries. Now the term digital libraries is most frequently used.
A. ISO
International Standards Organization
(ISO/TC46/SC9)666
Information and Documentation
Presentation, Identification and Description of Documents
Bibliographic References to Electronic Documents
(ISO 690-2)
B. International
G7 Pilot project 4: Bibliotheca Universalis
Gateway to European National Libraries
(GABRIEL)667
G7 Pilot project 4: (Japan)668
Electronic Library System (Ariadne)
[Figure: the schema groups data retrieval functions (definition for source data; keyword retrieval; logical expression retrieval; retrieval using system hierarchy; hypertext, multimedia data and intelligent retrieval) and optional functions (thesaurus, keyword translation, machine translation, HDTV cover image display, memorandum service, marking).]
Figure 10. Schema of key elements in Japan's model for the G7 pilot project on libraries.
G7 Pilot Project 6 Environmental Natural Resources Management (ENRM)
Earth Observation Resources
Digital Library Reference System669
National Oceanic and Atmospheric Administration Satellite 10
(NOAA 10)670
Larry Enemoto
Tel. +1 301 457 5214
Fax +1 301 736 5828
E-mail: [email protected]
Fédération Internationale d’Information et de Documentation (FID)671
(International Federation for Information and Documentation)
International Council on Archives (ICA)
Ad Hoc Committee on Descriptive Standards
International Association of Digital Libraries (IADL)
Millennium project re: new world
166
International Federation of Library Associations
(IFLA)
International Institute for Electronic Library Research
De Montfort University672
International Research Library Association
(IRLA)
Text Encoding Initiative (TEI)
Michael Sperberg-McQueen
University of Illinois at Chicago
Computer Center (M/C 135)
1940 West Taylor Street, Room 124
Chicago, Illinois 60612
Tel. 1-312-413-0317
Lou Burnard
Oxford University Computing Services
13 Banbury Road
Oxford OX 2 6NN
+44-1865-273238
United Nations
United Nations Bibliographic Information System
(UNBIS)673
United Nations Educational, Scientific and Cultural Organisation (UNESCO)
Memory of the World674
World Heritage List675
Interdeposit Digital Number
(IDDN)676
C. Multinational
European Commission
Telematics for Libraries677
Themes: children's page, distance learning, journals,
metadata, music libraries, software
EC Libraries Projects (include):
5601
European Forum for Implementors of Library Automation (EFILA)678
4012
European Forum of Implementors of Library Applications (EFILA+)679
4034
Linking Publishers and National Bibliographic Services
(BIBLINK)680
167
3052
Automatic Information Filtering and Profiling
(BORGES)681
3063
Catalogue with Multilingual Natural Language Access
/ Linguistic Server (CANAL/LS)682
5649
Controlled Access to Digital Libraries in Europe (CANDLE)683
4058
A Cooperative Archive of Serials and Articles (CASA)684
4100
Computerised Bibliographic Record Actions Plus (COBRA)685
Preservation and Service Developments
for Electronic Publication
3033
Billing System for Open-Access Networked
Information Resources (COPINET)686
Peter Bennett
Mari Computer Systems Ltd.
Unit 22 Witney Way, Boldon Business Park
Boldon Colliery
UK-Tyne and Wear NE 35 9PE
Tel. +44-191-519-1991
3078
Delivery of Copyright Material to End Users (DECOMATE)687
Hans Geleinsje
Tilburg University Library
Warandelaan 2
PO Box 90153
NL 5000 Tilburg
Tel. +31-13 66 21 46
[email protected]
4000
European Copyright User Platform688 (ECUP)
Emanuella Gaivara
European Bureau of Libraries Information
and Documentation Associations (EBLIDA)
PO Box 43300
NL 2504 Hague
Tel. +31-70-3090-608
1011
168
Electronic Data Interchange for Libraries and
Booksellers in Europe (EDILIB II)689
4005
Electronic Library Image Service in Europe - Phase II
(ELISE II)690
5609
European Libraries and Electronic Resources
in Mathematical Sciences
Michael Jost
FIZ Karlsruhe
(EULER)691
[email protected]
2062
European SR-Z39.50 Gateway
(EUROPAGATE)692
5604
Heritage and Culture through Libraries in Europe
(HERCULE)693
1015
Hypertext interfaces to library information systems
(HYPERLIB)694
4039
Integrated Library Information Education
and Retrieval System
(ILIERS)695
5612
Manuscript Access through Standards
for Electronic Records
(MASTER)696
3099
Online Public Access Catalogue in Europe
(ONE)697
5643
Online Public Access Catalogue for Europe- II
(ONE II)698
3038
Advanced Tools for Accessing Library Catalogues
(TRANSLIB)699
4022
Large Scale Demonstrators for Global, Open
Distributed Library Services
(UNIVERSE)700
1054
Visual Arts Network for the Exchange
of Cultural Knowledge
(VAN EYCK)701
169
5618
Virtual library (VILIB)702
ESPRIT
Esprit Working Group 21057 (DELOS)
ERCIM Digital Library703
Prof. Jean-Michel Chasseriaux
European Research Consortium for Informatics
and Mathematics GEIE
Domaine de Voluceau
B.P. 105
F-78153 Le Chesnay Cedex
Tel: +33/1 3963 5111
Fax: +33/1 3963 5888
e-mail: [email protected]
Includes Elsevier
Semantic Index System: Thesaurus Management System (SIS-TMS)
cf. Thesaurus Management System for
Distributed Digital Collections
Martin Doerr,
Institute of Computer Science
Foundation for Research and Technology (FORTH)
PO Box 1385, GR 711 10
Heraklion, Crete
[email protected]
ESPRIT 22142704
Platform Independent and Inter-Platform Multimedia
Reference Applications (REFEREED)
Nicholas Manoussos,
CompuPress, Athens
Tel. +30-1-922550
Linked with McGraw Hill
Dictionary of Art
Fact Extraction
Macmillan
ESPRIT Project
INFO2000
Palaces and Gardens of Europe: Baroque and Classicism
Denise Verneray-Laplace
Kairos Vision
Tel.+33-1-4212-0706
Multimedia Dictionary of Twentieth Century Architecture (TOTEM)
170
Eric Hazan
Tel.+33-14-44-11700
Great Composers:
Multimedia Reference on European Classical Music Heritage
Aissa Derrouaz
Tel.+33-1-44-53-9500
Multimedia Codices of Leonardo da Vinci: Flight of Birds
Leonardo Montecamozzo
Giunti
Tel.+39-2-8393-374
Includes Museo di storia della scienza and Klett
Multilingual Multimedia Encyclopedia
of Ecology in 2000’s Europe (ECO2000)
INFMM 5033
Leonardo Montecamozzo
Giunti
Tel.+39-2-8393-374
World Electronic Book of Surgery (WEBS)
European Institute of Tele-Surgery (EITS)
Professor Jacques Marescaux
Information Context for Biodiversity705
Union Internationale des Associations
Anthony Judge
Tel.+32-2-640-1808
Les Croisades
INFMM 1064
Emmanuel Olivier
Index plus
Tel.+33-1-4929-5151
DGXIII
MEDICI Framework
Mario Verdese DGXIIIb
[email protected]
Professore Alfredo Ronchi
Politecnico di Milano
Via Bonardi 15
20133 Milano, Italia
Tel.+39-02-2399-6040
Fax.+39-02-2399-6020
[email protected]
171
Maastricht McLuhan Institute
European Network of Centres of Excellence in Cultural Heritage
Kim H. Veltman
[email protected]
Digital Reference Room
Council of Europe
Daniel Thérond
BP 431 R6
F-67006 Strasbourg, Cedex
France
Tel+33-88-41-20-00
European Digital Library Consortium706
Guide to Open Systems Specifications (GOSS)707
Information Structure and Representation
Library Applications
International Collaboration on Internet Subject Gateways (IMESH)708
Research Libraries Group (RLG)
Archives and Manuscripts Taskforce on Standards
World Wide Web Consortium709 (W3)
James S. Miller, IBM
D. National
Australia
Australian National Library, Canberra
Australian Cooperative Digitisation Project 1840-1845710
Canada
Bureau of Canadian Archivists
Planning Committee on Descriptive Standards
Rules for Archival Description (RAD)
Cf. ISBD (G)
National Library, Ottawa
Canadian Initiative on Digital Libraries711 (CIDL)
Working Groups
Advocacy and Promotional Issues
Creation and Production Issues
Organisational and Access Issues (Metadata)
Digital Projects712
Early Canadiana Online713
172
Virtual Canadian Union Catalogue714
Directory of Z39.50 targets715
Z39.50 website716
Carrol Lunau717
Virtual Visit718
Rogers New Media
156 Front Street W, Suite 400
Electric Library719
Denmark
Denmark's Electronic Research Library (DEF)
Jens Thorhage
France
Bibliothèque Nationale de France
Gallica720
MEMORIA (see Consortia below)
Germany
Arbeitsgemeinschaft Sammlung Deutscher Drucke
Verteilte Digitale Forschungsbibliothek
Kompetenz Zentrum Digital Library sponsored by DFG
Contact: Herr Norbert Lossau
tel. +49-(0)551-39-5217
Bertelsmann721
Thomas Middelhoff, chairman722
Gesellschaft für Mathematik und Datenverarbeitung (GMD)
Integrated Publication and Information Systems Institute (IPSI)723
Professor Erich Neuhold
Reginald Ferber
Distributed Processing of Multimedia Objects
Global Electronic and Multimedia Information
Systems for Natural Science and Engineering724 (GLOBAL-INFO)
Physics (PhysDoc)
Computer Science (McDoc)
Mathematics (MathNet)
Natural Sciences and Technology (eprint)
Database Systems and Logic Programming (dblp)
Trier
Computer Science Bibliographies
Karlsruhe
Advanced Retrieval Support for Digital Libraries725 (DELITE)
173
Institut für Terminologie und angewandte Wissensforschung (ITAW)
Am Köllnischen Park 6/7, 10179 Berlin
or
Postfach 540/ PA 14
D-10149 Berlin
Dr Johannes Palme
Tel. +49-30-30862088
Fax +49-30-2791808726
From Text to Hypertext
Börsenverein des Deutschen Buchhandels
Eugen Emmerling
Schlütersche Verlagsanstalt
Günther Warnecke
Olms Verlag
Dr. Eberhard Mertens
Bertelsmann Club GmbH
Frank Pallesche
Deutscher Bibliotheksverband e. V.
Zentrum für Kunst und Medientechnologie
Technische Universität Braunschweig
Universitätsbibliothek
Bernhard Eversberg, Was sind und was sollen
Bibliothekarische Datenformate, Braunschweig, 1994.727
Italy
Intelligent Digital Library (IDL)
M.F. Costabile
Dipartimento di informatica
Università di Bari
[email protected]
Biblioteca Italiana Telematica728 (CIBIT)
Eugenio Picchi
Istituto di Linguistica Computazionale, CNR
Via della Faggiola 32
Pisa, Italy
Japan
Digital Library Network (DLnet)
University of Library and Information Science
Tsukuba Science City729
Netherlands
Digitale Encyclopedie Nederland (DEN)
174
Elsevier
The University Licensing Program730 (TULIP)
United Kingdom
Bath Information and Data Services (BIDS)731
Contact: Terry Morrow
British Library
Cataloguing and Retrieval of Information
Over Networks Applications
(CATRIONA)
Digital Library Programme732
Beowulf
Initiatives for Access
Magna Carta
Treasures Digitisation
Research and Innovation Centre
Digital Library Research Programme733
Towards the Digital Library, ed. Leona Carpenter et al., London: British Library,
1997.
Electronic Library Programme734 (eLib)
Access to Network Resources
Digitisation
Electronic Document Delivery
Electronic Journals
Electronic Short Loan Projects
Images
On Line Publishing
Preprints
Quality Assurance
Supporting Studies
Training and Awareness
National Council of Archives (NCA)
Richard Sargent
Tel. +44-171-242-1198
Scottish Cultural Resource Access Network735
(SCRAN)
United Kingdom Pilot Site Licence Initiative
(UKPSLI)
United States
US G7 Biblioteca Universalis736
National Science Foundation (NSF)
International Digital Libraries Collaborative Research737
175
$1 million
linked with Joint Information Systems Committee (JISC)
Performing Arts Data Service738 (PADS)
is seeking partners
Carola Duffy
Tel. +44-(0)-141-330-4357
Fax +44-(0)-141-330-3801
[email protected]
National Information Infrastructure (NII)
NII Virtual Library739
Information Infrastructure Task Force (IITF)
IITF Application Programs
Linguistic Data Consortium
Spoken Natural Language Interface to Libraries
BBN, SRI, MIT, CMU
John J. Godfrey
Tel. 703-696-2259
Computer and Information Science and Engineering (CISE)
Information and Intelligent Systems (IIS)
Advanced Networking Infrastructure and Research (ANIR)
Experimental And Integrative Activities (EIA)
1. Advanced Mass Storage
2. Electronic Capture of Data
3. Software for Multimedia Processing
4. Intelligent Knowledge Processing
5. User Training
6. Friendly Interfaces
7. Collaborative Problem Solving Tools
8. Standards and Economic issues
9. Experimental Prototypes
Department of Energy (DOE)
Comprehensive Epidemiologic Data Resource (CEDR)
Socio-Economic Environmental
Demographic Information System (SEEDIS)
Carbon-Dioxide Information Analysis Center (CDIAC)
Data Base of Scientific Mathematical Software
Research on Digital Libraries
Library Server for Manufacturing Applications
General Electric R&D Center
Current Economic Statistics
Guide to Available Mathematical Software (GAMS)
176
American Heritage Project740
American Heritage Virtual Archive Project
Duke University
Stanford
University of California, Berkeley
University of Virginia
American Society for Information Science
(ASIS)741
ARPA Research Program on National Scale Information Enterprises
Alexandria Digital Library742
Berkeley Digital Library see: University of California, Berkeley
Carnegie Mellon University see: National Science Foundation
Center for Networked Information Discovery and Retrieval743
(CNIDR)
Committee on Institutional Co-operation (CIC)
Center for Library Initiatives
Virtual Electronic Library744 (VEL)
Cornell University
Consortium for University Printing and
Information Distribution (CUPID)
Making of America745 (MOA)
with University of Michigan
Flexible and extensible Digital Object and
Repository Architecture746 (FEDORA)
Sandra Payette, Carl Lagoze
Digital Library Integrated Task Environment (DLITE)747
Steve Cousins
Digital Preservation Consortium748
Florida International University Libraries
Everglades National Park
Everglades Digital Library749
Harvard Information Infrastructure Project750
(HIIP)
Johns Hopkins University Press
Project Muse751
177
Library of Congress752
Z39.50753 (ISO 23950)
American Memory754
Ameritech Digital Library:
1. Brown University
2. Denver Public Library
3. Duke University
4. Harvard University
5. Library of Congress
6. New York Public
7. North Dakota State
8. University of Chicago
9. University of North Carolina
10. University of Texas at Austin755
Digital Librarian
Norma Davik
301-688-7353
Digital Library Federation756 (DLF)
Machine Assisted Realization of the Virtual Electronic Library (MARVEL)
National Science Foundation (NSF)757, DARPA and NASA758
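The Z39.50 protocol (ISO 23950) mentioned above expresses searches as attribute-value queries against a server's database; the widely used Bib-1 attribute set assigns numeric "use" attributes to access points such as author and title. A schematic sketch in Python (the dictionary and helper function are illustrative, not a real Z39.50 client; the attribute numbers come from the Bib-1 registry):

```python
# A few Bib-1 "use" attributes from the Z39.50 attribute registry.
BIB1_USE = {
    "personal-name": 1,
    "title": 4,
    "isbn": 7,
    "author": 1003,
    "any": 1016,
}

def build_query(access_point, term):
    """Build a schematic type-1 query: one use attribute plus a search term."""
    return {"use": BIB1_USE[access_point], "term": term}

q = build_query("title", "Bibliotheca Universalis")
assert q == {"use": 4, "term": "Bibliotheca Universalis"}
```

Because every conforming catalogue maps the same numeric attributes to its own indexes, a single query of this shape can be broadcast to many libraries, which is what makes projects such as the virtual union catalogues listed above feasible.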
Stanford Digital Libraries Project now called Digital Libraries Initiative759
1. Carnegie Mellon University
Infomedia: Digital Video Library760
2. Stanford University
Integrative Mechanisms among
Heterogeneous Services761
3. University of California, Berkeley Environmental Planning and GIS762
4. University of California, Santa Barbara Alexandria Project
Spatially Referenced Map Information763
5. University of Michigan
Intelligent Agents and Information Location764
Advanced User Interface765
Glossary: Terms, Organizations766
6. University of Illinois
Federating Repositories of Scientific Literature767
Social Science Team768
Semantic Research769
Interspace Prototype770
Includes CS Quest
Automatic Indexing
Concept Space Generation
Visualisation Fisheye View
Systems Software Research Group
University of Illinois Related Projects
1. Astronomy Digital Image Lab (ADIL)771
2. Getty Museum Education Site Licensing Project (MESL)772
3. Horizon Project (NASA)773
4. The Daily Planet (TM)774
178
Networked Computer Science Technological Reports Library775
(NCSTRL)
Pharos776
Texas A&M University
Center for the Study of Digital Libraries777
University of California, Berkeley
Digital Library Research and Development778
Advanced Papyrological Information System779 (APIS)
American Heritage (see above)
California Heritage780
Cheshire II Search Service781
Search for Information using Z39.50 protocol
Digital Page Imaging and SGML782
Electronic Binding DTD (ebind)
Encoded Archival Description783 (EAD)
Finding Aids for Archival Collections784
Index Morganagus785
Full Text Index of Library Related Electronic Journals
Scholarship from California on the Net786
(SCAN)
See You See a Librarian787
Berkeley Multimedia Research Center788
Information People Project789
University of Maryland, College Park
Digital Library Research Group (DLRG)
University of Michigan, Cornell University
Internet Public Library790 (IPL)
Making of America791
In conjunction with Cornell University
University of North Carolina, Chapel Hill
Gary Marchionini
Sharium792
E. Consortia
European Libraries Consortium
Contact: Dr. Lotte Hellinga
Multimedia Electronic MemORIes At hand (MEMORIA)793
Accessing, retrieving, structuring writing
This includes:
179
1. Cap Gemini
2. Oxford Text Archive
3. AIS
4. Bibliothèque Nationale
5. Institut de recherche en Informatique de Toulouse
6. Consorzio Pisa Ricerche
F. Major Companies
Bell Communications Research (Bellcore)
Digital Libraries794
IBM795
Technology partners include:
VIP
Takes care of digitization, data modeling and loading
Elias
Amicus: a library automated system, marketed in the US by CGI
Contact: Willie Chiu
Steve Mills, General Manager IBM Software Solutions:
IBM “already mapped out plans to team with at least 100 different
content partners of the Digital Library.”
IBM Digital Library Collection Treasury: Volume I:
-Java based loader for speeding up process of adding large amounts
of content to the digital library
-new data model with attributes and object classes for enhanced
navigation and indexing
IBM Digital Library Collection Treasury: Volume II: (in preparation)
-more tools in the areas of digitizing art work and enhancing storage,
search and on-line access, including:
- visible and invisible watermarking (encompassing Java and ActiveX)
- cryptolopes
- fine grained access rights in Lotus Notes and Domino.
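Invisible watermarking of the kind listed here typically hides identifying bits in the least significant bits of pixel values. The following is a minimal illustrative sketch of that general technique, not IBM's actual implementation; all names are hypothetical.

```python
def embed_watermark(pixels, bits):
    """Hide a sequence of 0/1 bits in the least significant bit of
    successive 8-bit pixel values; the change is imperceptible."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear the LSB, then set it
    return marked

def extract_watermark(pixels, length):
    """Read the hidden bits back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

image = [200, 201, 199, 198, 197, 196]
stamped = embed_watermark(image, [1, 0, 1, 1])
print(extract_watermark(stamped, 4))   # -> [1, 0, 1, 1]
```

Real products combine such embedding with cryptographic signing and error correction so the mark survives compression and cropping.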
Work on the digital libraries is officially focussed on four areas:
1. Media and Entertainment
CareerPath.com
EMI’s KPM Music Library
Institute for Scientific Information
2. Higher Education
Case Western Reserve
Indiana School of Music
Marist College (New York) New Deal Web Site796
3. Government
4. Cultural Institutions
Archivo General de Indias
180
Franklin D. Roosevelt Presidential library
Lutherhalle Wittenberg Museum
Vatican Library
Fabio Schiatterella
Other projects include
Edo Museum (Tokyo)
10 million images
Digital Library Project China797 which focusses on
hierarchical storage management
DVD storage subsystems
text mining
data management
Contact: George Wang
tel. 86 10 6298 2449
[email protected]
Library of Congress
Federal Theater Project (FTP) archives798
Beinecke Rare Book and Manuscript Library (Yale University)
State Hermitage Museum (St. Petersburg)
Image Creation Center for access from museum kiosks
and online.
Among the products being developed in this context are:799
Query by Image Content
(QBIC)
IBM Natural Language Query
Visible Random Brightness Alteration
(VRBA)
Available as Photoshop plug-in to select,
prepare and apply masks used in visible watermarks
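Among the products listed above, QBIC (Query by Image Content) retrieves images by visual properties such as colour distribution rather than by keywords. A minimal sketch of the underlying idea, comparing normalised histograms by histogram intersection (an illustration only, not IBM's actual algorithm):

```python
from collections import Counter

def histogram(pixels, bins=4):
    """Quantise 8-bit intensity values into a few bins and
    normalise so the bin weights sum to 1."""
    counts = Counter(p * bins // 256 for p in pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(bins)]

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions,
    0.0 means no overlap at all."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = histogram([10, 20, 30, 240, 250, 245])   # half dark, half bright
dark = histogram([5, 15, 25, 35, 45, 55])        # all dark
print(similarity(query, query))   # -> 1.0
print(similarity(query, dark))    # -> 0.5
```

QBIC itself used colour, texture and shape features together, but the query-by-feature-vector principle is the same.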
Xerox800
Xerox Palo Alto Research Center
(PARC)
A survey of their work is provided in two articles by Marti Hearst, Research in
Support of Digital Libraries at Xerox PARC.801
Per–Kristian Halvorsen is a key contact in this field.
His team is linked with three of the digital libraries
projects of the NSF/DARPA/NASA listed above.
They have a multimedia project called Gita Govinda which integrates dance,
music, and theatre with texts, images, etc.
Daniel Brotsky tel. 415-812-4709 and Ranjit Makkuni
The Xerox approach entails four domains:
1. Infosphere
2. Workspace
3. Sensemaking Tools
4. Document
Superbook
Inxight is a spin-off company dealing with tools for these domains.
181
Grenoble
This centre is working on two main projects:
Navigateur pour Bibliothèques Electroniques
(NAVIBEL)802
Callimaque/Callimachus803
This uses the Xerox Document On Demand
(XDOD)
for scanning, archiving and indexing
of an object.
This digital library project has links to
INRIA/IMAG
MEMORIA project of the Bibliothèque Nationale
The centre's activities include:
Transaction Aided Network Services
(TANS)
Multilingual Retrieval
Terminology Extraction
Co-ordination Language Facility
(CLF)
Knowledge Broker
Lockheed Martin
Rapid Access Electronic Library System
(RAELS)
Virtual Libraries
Biosciences804
Chemistry805
Control Engineering806
Journals
D-Lib807
Amy Friedlander
William Y. Arms
Digital Library News808
Initiatives in Digital Information, University of Michigan809
International Journal on Digital Libraries, Rutgers University810
Journal of Electronic Publishing, University of Michigan811
National Digital Library Program, Library of Congress812
RLG DigiNews813
(DLN)
Publishers
High Wire Press814
Stanford University
Niche Products
My Virtual Reference Desk815
(MVRD)
182
Appendix 6.
Museums and Museum Networks
The original term was virtual museums.816 The French prefer the term Musée imaginaire
(imaginary museum).
B. International
G7 Pilot Project 5: Multimedia Access to World Cultural Heritage817
Connected with the
Istituto Centrale per il Catalogo e la Documentazione818
(ICCD)
Architetto M.-L. Polichetti
Arch. Dr. Francesco Prosperetti
Ministero degli Affari Esteri
Was Anna Blefari Schneider. Now Giulio Tonelli
International Council of Museums
(ICOM)819
Contact: Dr. Cary Karp820
Committees821
International Council of Museums: Conservation
(ICOM-CC)822
Comité International de Documentation
(CIDOC)
Comité International des Musées
et Collections d'Instruments de Musique
(CIMCIM)823
Audiovisual And New Technologies
(AVICOM)
Committee for Education and Cultural Action
(CECA)824
International Committee on Archaeology and History
(ICMAH)825
International Committee for Museology
(ICOFOM)826
International Committee on Monuments and Sites
(ICOMOS)827
Tourism
International Committee for Costume Museums and Collections (ICCMC)
Comité International pour la Documentation de l'Art
Multimedia Working Group
Introduction to Multimedia in Museums
Editorial Committee: David Bearman, Jennifer Trant,
Jan van der Starre, Tine Wanning828
(CIDOC)
Comité International d'Histoire de l'Art
Thesaurus Artis Universalis
Marilyn Schmitt
[email protected]
(CIHA)
(TAU)
International Confederation of Architectural Museums
International Council on Archives
Committee on Automation
183
United Nations Educational Scientific and Cultural Organisation
World Heritage Information Network
World Heritage Web
HERitage NETwork
Prof. Ing. Renzo Carlucci832
IMAG
CROMA
ARCHEO
LAND
DISAB
(UNESCO)
(WHIN)829
(WHB)830
(HERINET)831
Communication, Information and Informatics Sector833
1, rue Miollis
75732 Paris Cedex 15
tel. 33-1-45-684320
Henrikas Yushkiavitshus
Assistant Director-General
[email protected]
Includes:
Bibliotheca Alexandrina
International Informatics Programme
(IIP)
International Program for the Development of Communication (IPDC)
Memory of the World Program
World Information Report
World Heritage
Alistair McLung
Tel. 0033-1-45684995
Fax. 0033-1-45685739
A. Multi-National
European Parliament
First European Community Framework Programme in Support of Culture834
This will integrate the existing projects such as:
Kaleidoscope835
Ariane836
Raphael837
Raphael Programme (98/C 342/09)
City of Culture838
2000 (there will be nine cities, of which five have projects) including:
Bologna: ARCE net project, a feasibility study regarding the creation of a
virtual museum, which will use new technology to create a
network of cultural infrastructures (art and heritage institutions in the
partner cities) and will offer improved public access to museum
and exhibition collections.
Media II: Audiovisual Policy839
184
European Commission
CULTURE, THE CULTURAL INDUSTRIES AND EMPLOYMENT
Commission staff working paper
SEC (98)837840
1st REPORT ON THE CONSIDERATION OF
CULTURAL ASPECTS IN EUROPEAN COMMUNITY ACTION841
Culture and European Integration
Foundations of the European Community's cultural activities
Speech by Mr Marcelino Oreja,
Member of the European Commission
Ferstel Palace, Vienna, 6th March 1997842
Intergovernmental Conference on Cultural Policies for Development
Preparatory Paper IX
Culture and the New Media Technologies,
Sally Jane Norman843
DG XIIIb
Memorandum of Understanding for Multimedia
Access to Europe’s Cultural Heritage
Next phase is called:
Multimedia EDucation and employment
through Integrated Cultural Initiatives
This will have a European Network of Centres
of Excellence in Cultural Heritage which will
integrate existing efforts such as:
MIDAS NET845
Info 2000
Mediterranean Multi Media Support Centre
for culture and arts
Esprit 22266846
Antonella Fresa
CESVIT, Firenze
Tel. +39-0554-294-240
Museums over States in Virtual Culture847
Part of the Trans European Networks Project
Virtual Museum International
Multimedia European Network for High
Quality Images Registration
ESPRIT 24378
Dominique DeLouis
Museums On Line
Tel.+33-1-42453299
Remote Access to Museum Archives
(MOU)
(MEDICI)844
(M.Cube)
(MOSAIC)
(TEN)
(VISEUM)
(MENHIR)
(RAMA)
185
Network of Art Research Computer
Image Systems in Europe
(NARCISSE)
Sharing Cultural Heritage through Multimedia Telematics (AQUARELLE)848
INFO.ENG. IE2005
Alain Michard
INRIA-Domaine de Voluceau
BP 105
F-78153
Tel.+33-1-39-635472
[email protected]
Cultural Heritage and EC Funding849
ESPRIT
Galleries Universal Information, Dissemination and Editing System (GUIDE)
ESPRIT 23300 + Personal Digital Assistant
(PDA)
Alex Geschke
Compart GmbH, Berlin
Tel. +49-30-4211219
A Multimedia Project for Cultural Organisations
(CAMP2)
ESPRIT 29147
Jürgen Bürstenbinder
Pixelpark, Berlin
Tel.+49-30-349-81-505
Co-Operative Server for Exalting and Promoting Plastic Arts (COSEPPA)
ESPRIT 29176
Jean François Boisson
Euritis
Tel.+33-1-30-1200-71
INFO2000
Artweb
Lewis Orr
Bridgeman Art Library Limited
Tel.+44-171-727-4065
Includes MDA, RMN and Bildarchiv Preussischer Kulturbesitz
Cultural Heritage of long Standing Legacy in Open Network (CHAMPOLLION)
Dirk van der Plas
Utrecht
Tel.+31-30-253-1982
Multimedia Dictionary of Modern and Contemporary Art
Fernand Hazan
Ediciones AKAL
Thames and Hudson
186
INFO2000 Multimedia Content
Cultural Heritage and Archaeology Multimedia Project
INFMM 1047
Stephan Pelgen
Mainz
Tel.+49-(0)-6131-287-4722
(CHAMP)
Source Vive: a platform for the sharing of knowledge
INFMM 1201
Jacqueline Chiffert
Delta Image, Paris
Tel.+33-1-42-60-00-03
Includes BNF and British Library
Cultural Heritage and Arts Information Network
INFMM 1214
Rune Rinnan Telenor Venture AS
Oslo
Tel.+47-2277-9910
(CHAIN)
DGX
Audiovisual Policy
M. Marcelino Oreja850
Possible European Museum Information Institute
Cf. Museum Documentation Association
Louise Smith
(EMII)
(MDA)
Council of Europe
Division for Cultural Heritage851
Consortium for Computer Interchange of Museum Information
(CIMI)852
John Perkins
New testbed where 10 museums share 150,000 records in 10 weeks.
(CHIO)
Cultural Heritage Information On-Line853
Exhibition Catalogue Document Type Description (CHIO DTD)854
Standards Framework
SGML for Cultural Heritage Information855
Full Text Document Type Description V4.0856
(FT DTD)
Pay Services
Museums On-Line857
Implementation of the Menhir Project
AMICO
Corbis
187
RMN858 Artois
Art Web
Viscountess Bridgeman
Now includes RMN
National
Austria
Virtual Real Estate
Christian Dögl
Breitegasse 3/2
A-1070 Vienna
Austria
Tel. 011-43-1-526-2967
Fax. 011-43-1-526-296711859
Australia
Australian Cultural Network860
Canada
Canadian Heritage Information Network861
(CHIN)
France
Inventaire général des monuments et des richesses artistiques de la France862
Hôtel de Vigny 10
Rue du Parc Royal
F-75003 Paris
France
Tel. +33-1-40-15-75-50
Database of Fine Arts and Decorative Arts
Access in 1999 according to Bernard Schotter,
Administration, Ministère de la Culture
Direction, Musées de France
(JOCONDE)
Database re conservation and restoration
(MUSES)
Réunion des Musées Nationaux museum shops
Yves Brochen
Cf. M. Coural Louvre863
(RMN)
Germany
Fraunhofer Gesellschaft Berlin, ISST
Lebendiges Museum On-Line
Lutz Nentwig
(LEMO)
Marburger Archiv864
188
Marburger Informations-Dokumentations
und Administrations-System
Professor Lutz Heusinger
Saur Verlag
Allgemeine Künstlerlexikon865
(MIDAS)
(AKL)
Universität Karlsruhe
Ontobroker866
Italy
Cultural Heritage Assisted Analysis Tools
(CHAAT)
Romania
Information Centre for Culture and Heritage
(CIMEC)
Spain
Multimedia Information Remote Access867
Jose M Martinez
Universidad Politecnica de Madrid
(MIRA)
[email protected]
United Kingdom
Arts and Humanities Data Service868
Museum Documentation Association869
(MDA)870
Jupiter House
Station Road
Cambridge CB1 2JD
United Kingdom
Tel. +44 1865 200561
Fax. +44 1223 362521
Co-ordinators of Term IT
Contact: Matthew Stiff
Louise Smith re: European Museum Information Institute
National Council on Archives
Information Technology Standards Working Group
Richard Sargent
Tel. +44-171-242-1198
(NCA)
(ITSWG)
Royal Commission on the Historical Monuments of England871
(RCHME)
Scottish Cultural Resources Access Network
(SCRAN)872
Eurogallery
189
Proposed project linking National Gallery (London), Réunion des musées
nationaux (Paris), Van Gogh Museum (Amsterdam)
United States
American Association of Museums
Museum Digital Licensing Consortium873
American Association for State and Local History
Common Agenda for History Museums
(MDLC Inc)
(AASLH)
Archives and Museum Informatics
5501 Walnut Street, Suite 303
Pittsburgh PA 15232-2311
Tel. 1-412-683-9775
Fax. 1-412-683-7366
David Bearman and Jennifer Trant
Association of Art Museum Directors
(AAMD)
(AMICO)
Art Museum Image Consortium874
Maxwell Anderson
Coordinated by Archives and Museum Informatics
David Bearman and Jennifer Trant
Coalition for Networked Information875
Clifford Lynch
Frick Art Gallery
10 East 71st Street
New York, New York 10021
Clearinghouse on Art Documentation and Computerization
Patricia Barnett
Tel. 212-288-8700 #416
Getty Trust
Getty Information Institute
Formerly Getty Art History Information Program
1200 Getty Center Drive, Suite 300
Los Angeles, California 90049
Art Information Task Force
Thesaurus of Geographic Names
Union List of Artist Names
(GII)876
(AHIP)
(AITF)
(TGN)
(ULAN)
Museum Computer Network877
(MCN)
Museum Informatics Project878
University of California, Berkeley
(MIP)
190
National Endowment for the Humanities879
(NEH)
National Initiative for Networked Cultural Heritage880
(NINCH)
President’s Committee for the Arts and the Humanities
Society of American Archivists
Standards Board881
Working Group on Standards for Archival Description
Committee on Archival Information Exchange
Encoded Archival Description Working Group882
(WGSAD)
(CAIE)
(EADWG)
Major Companies
Hitachi
Viewseum883
Intel
Virtual Gallery884
Sony (Paris)
Personal Experience Repository Project
To revisit virtually places one has been, together with one's memories of them;
to use captured information on later occasions; Chisato Namaoka885
Xerox (Grenoble)
Campiello Project (Part of i3 or Icube)
Tourism for locals, tourists and cultural managers: Venice, Crete and Chania
Uses Knowledge Broker meta-searcher
Knowledge Pump
recommendation system
Individuals
Michael Spring
The Virtual Library, Explorations in Information Space886
The Document Processing Revolution887
George P. Landow
Hyper/Text/Theory, Baltimore: Johns Hopkins University Press, 1994
Benoit de Wael
Thesis: Museums and Multimedia, [email protected]
Christos Nikolaou, Constantine Stephanides, ed.
Research and Advanced Technology for Digital Libraries. Second
European Conference ECDL ’98, Berlin: Springer 1998.
Lars Fimmerstad
The Virtual Art Museum888
Richard Leachman889
Cognition and Cultural Change
Contra post-modernism
191
6.b Museum and Cultural Networks in Europe
Performance art (theatre, music, opera):
Académie Européenne des arts de geste
(Les Transversales)
Baltic Music Network
(BMN)
Bancs d'Essai Internationaux
(BEI)
Consortium pour la Coordination d'Études
Européennes sur le Spectacle et le Théâtre
(CONCEPTS)
Convention Théâtrale Européenne
(CTE)
Dance Network Europe
(DNE)
European Concert Halls Organisation
(ECHO)
Euro Festival Junger Artisten
(EFJA)
European Forum of Worldwide Music Festival
(EFWMF)
European Network of Info. Centres for Performing Arts
(ENICPA)
European Music Office
(EMO)
Europe Jazz Network
(EJN)
Fête Européenne de la Musique
(FEM)
Informal European Theatre Meeting
(IETM)
Intercultural Production House for the Performing Artists
(IPHPA)
Music Organisations of Europe
(MORE)
Network Dance Web
(NDW)
Performing Arts Research Training Studios
(PARTS)
Red Espanola de Teatros, Auditorios y Circ. de Comun. Autonomas (RETACCA)
Réseau Printemps
(RP)
Réseau Européen des Centres d'Information du Spectacle Vivant (RECISV)
Réseau européen des services éducatifs des maisons d'opéra
(RESEMO)
Théâtre Contemporain pour le Jeune Public
(Archipel)
Union des Théâtres de l'Europe
(ETE)
Visual and Multimedia art:
Cartoon Arts Network
Conseil Européen des Artistes
European Children's Television Network
European Network of Paper Art and Technology
European Textile Network Strasbourg
European Video Services
Germinations
Réseau Pandora
(CAN)
(ECA)
(ECTN)
(ENPAT)
(ETN)
(EVS)
(RP)
Books and Reading:
Collège Européen des Traducteurs Littéraires...
(CEATL)
European Bureau of Library, Information and Documentation Associations (EBLIDA)
European Libraries Cultural Network
(ELCN)
Réseau Européen des collèges de Traducteurs Littéraires
(RECT)
Réseau Européen des Centres de traduction de la poésie contemporaine (RECTPC)
Women's Art Library-International Network
(WALIN)
192
Cultural Heritage, Conservation and Museums:
European Commission on Preservation and Access
(ECPA)
European Confederation of Conservator-Restorators' Organisations (ECCO)
European Cultural Foundation
(ECF)
European Environmental Bureau
(EEB)
European Forum of Heritage Associations
(EFHA)
European Network to Promote the Conservation
and Preservation of the European Cultural Heritage
(ENPCPECH)
Telematic Network for Cultural Heritage-Patridata
(TNCHP)
Europa Nostra united with the International Castles Institute (EUROPA NOSTRA-IBI)
European Heritage Group
(EHG)
Network of European Museums Associations
(NEMO)
Union of European Historic Houses Associations
(UEHHA )
Cultural Administration, Management and Policy:
European Network of Cultural Managers
Centre Européen de la Culture
Council of European Artists
Gulliver Clearing House
European Forum for the Arts and Heritage
European Network of Cultural Administration Training Centres
Réseau des Administrateurs Culturels Brussels
(ENCM)
(CEC)
(ECA)
(GCH)
(EFAH)
(ENCATC)
(ORACLE)
Several Cultural Fields at the same time:
Association des centres culturels de rencontre
(ACCR)
Réseau des centres culturels-monuments historiques
Association of European Cities and Regions for Culture
(Les Rencontres)
Association internationale des centres de résidences d'artistes
(RES ARTIS)
Banlieues d'Europe
Collège Européen de coopération culturelle
(CECC)
Développement de l'Action Culturelle
Opérationelle en Région - Nord-Pas-de-Calais
(DACOR)
Eurocities
European Computer Network for the Arts
(ECNA)
European League of Institutes of the Arts
(ELIA)
European Network of Cultural Centres Brussels
(ENCC)
European Network of Centres of Culture and Technology
(ENCCT)
Network of European Cities of Culture
(NECC)
Réseau Pépinières européennes pour jeunes artistes
(Pépinières)
Réseau Européen des Centres Culturels et
Artistiques pour l'Enfance et la Jeunesse or
European Network of Art Organisations for Children and Young People (EU.NET.ART)
Réseau des Villes européennes des grandes découvertes
(RVED)
Réseau Euro-Sud des Centres Culturels
(RESCC)
Trans Europe Halles
(THE)
193
Appendix 7. Education and Training Projects
Distance Education/ Distance Learning
Courses
Learning, Instructional Design and Instructional Media890
Applications of Educational Technology891
One trend is towards User Modelling and User Adapted Interaction,892
which includes:
Person Machine Interfaces
Intelligent Help Systems
Intelligent Tutoring Systems
Natural Language Dialogs
Cf. also the section on Interfaces above
Individuals
Peter Brusilovsky893 With Marcus Specht and Gerhard Weber, “Towards
Adaptive Learning Environments”, GISI, 1995 pp. 322-329.
A. ISO
Virtual Training Centre (for Distance Learning)894
Renato Cortinovis
(ITU/BDT)
B. International
Global Telecommunication University
Global Telecommunication Training Institute
Virtual Training Center895
(GTU)
(GTTI)
Global Campus896
IBM Global Campus897
Global Learning Consortium898
(GLC)
International Society for Technology in Education899
(ISTE)
Federation for Audio-Visual Multimedia in Education
(FAME)
Council of European Informatics Societies900
European Computer Driving Licence901
(CEPIS)
(ECDL)
I*Earn902
[email protected]
Technology for Training903
List of Organisations and Institutes including:
The Association for Computer Based Training904
(TACT)
194
C. Multinational
European Commission
White Paper on Education and Training905
Educational Multimedia Task Force initiative906
Memorandum of Understanding for partnership
and standardization in the field of learning technology
(EMTF)
(MOU Learning)907
(EDUCOM)
EDUCAUSE908
(NLII)
National Learning Infrastructure Initiative909
Instructional Management System910
(IMS)
"to enable an open architecture for online learning".
IMS is co-operating with the European Union
Annotatable Retrieval of Information And Database
Navigation Environment
(ARIADNE)
on development of information content metadata 911.
IEEE
(LTSC) (P1484)
Learning Technology Standards Committee912
Technical Standards for Learning Technology. Working Group P1484.1
"Architecture and Reference Model" of a learning environment.
There is a close co-operation with IMS.
Department of Defense and the White House913
Office of Science and Technology Policy
Advanced Distributed Learning
IMS is part of it.
Aviation Industry CBT Committee914
number of recommendations, guidelines,
white papers and technical reports
about a common learning environment model
(OSTP)
(ADL)
(AICC)
Ortelius
Database on higher education in Europe915
European Schoolnet916
Charlotte Linderoth
UNI-C
Olof Palmes Alle 38
DK 8200 Aarhus
Web For Schools917
European Education Partnership
(EEP)918
195
Josiane Morel
EuRopean Multimedia Educational Software Network
ESPRIT 24111
Mario Capurso
Technopolis, Bari
Tel.+39-080-8770309
(ERMES)
Multimedia Educational Innovation Network
ESPRIT 24294
Nikitas Kastis
Lambrakis Foundation
Athens
Tel.+30-1-33-11-848-
(MENON)
Conferences
Virtual Reality in Education919
D. National
Belgium
Les Rencontres de Péricles920
Canada
Schoolnet921
Canadian Electronic Scholarly Network
Online Toolkit for Publication
Online Editions of Scholarly Journals
(CESN)
(VIRTUOSO)
Alberta Research Council
Learning Technologies922
Simon Fraser University923
Ontario Institute for Studies in Education
Canadian Education Research Institute924
Penny Milton
(OISE)
(CERIS)
University of New Brunswick, Fredericton
Electronic Text Project
Telecampus925
List of 9000 courses in conjunction with Schoolnet
University of Toronto
Knowledge Media Design Institute926
130 St. George St. Toronto M5S 1A5
(KMDI)
196
tel. 416-978-3011
Gale Moore, Associate Director
France
Centre National d’Enseignement à Distance
(CNED)927
Germany
Mathematics928
Edu-Box, Tele-Student
Professor Dr. Wilfried Hendricks
Technische Universität
Virtual University
Professor Dr. Hans Jörg Bullinger
Fraunhofer-Institut für Arbeitswirtschaft und Organisation
Stuttgart
Zentrum für Kunst und Medientechnologie929
Salon Digital
Cf. Fraunhofer-Gesellschaft, Karlsruhe
Inter Communication Center, Tokyo
Ars Electronica, Linz
Stanford University
Cambridge University
Verlag Klett-Kotta
Bildarchiv Foto Marburg
(ZKM)
Schule und Museum im Datennetz930
Japan
Toshiba
A "virtual reality" based CAE system speeds evaluations of
main control room designs in nuclear power stations and
supports operator training.931
Norway
Kidlink932
United Kingdom
Open University
Computer Based Learning Environments933
Knowledge Based Systems934
(OU)
University Activities
197
Joint Information Systems Committee
Electronic Libraries Programme
Higher Education Digitisation Service
Arts and Humanities Data Service
Technical Advisory Services
(JISC)
(eLib)
(HEDS)
(AHDS)
(TASI)
United States
Individuals
Who’s Who in Instructional Technology935
Collins, Marie936
Driscoll, Marcy937
Duffy, Thomas M.938
Jonassen, David939
Handbook of Task Analysis Procedures
Merrill, M. David
Adult Learning Satellite Service
Apple Classroom of Tomorrow
David Dwyer
Cargill Corporation, the largest private corporation
Coastline Community College
Community Learning and Information Network
Consortium for Language Teaching and Learning
Executive Director
Peter C. Patrikis
New Haven, CT
Custom Express Train
CBT for Local Business by Sony
Distance Education Clearinghouse940
Edsat Institute
Edunet Educational Network
Multimedia Assisted Learning System
EDventure Holdings942
Esther Dyson, President
Global Schoolnet Foundation943
Global Summit on Distance Education
Johns Hopkins University
Virtual Laboratory944
Michael Karweit
Internet Roadmap
Patrick Crispen
Mission Research Corp945
(ACOT)
(CLIN)
(MASLM941)
Electronic Monastery
198
Monastery of Christ in the Desert946
National Education Telecommunications Organisation
National Aeronautics and Space Administration
(NASA)947
National Technological University
University of Phoenix
University On Line
(UOL)
University of Houston948
Virtual University Campus
University of North Dakota, Department of Space Studies, M.S. at a Distance
Contact: Chuck Wood
Western Governors University949
Competency Based Education
Sir John Daniel's Megauniversity
Transquest
Delta Airlines
University of Illinois, Chicago
4-D Classroom950
Jim Costigan
University of Illinois at Urbana-Champaign
Cyber Prof951
Contact: Michael Lam
United States Distance Learning Association
(USDLA)
Electronic Performance Support Systems
United States Navy952
Self Learning, Training
Institutions
Global Engineering Network953
Utah State University, Professor Merrill
Electronic Trainer ()954
0. Information Containers
1. Authoring Systems
2. Models and Widgets
3. Built in Instructional Strategies
4. Adaptive Instructional Strategies
Companies with Self-Paced Learning Products955
CBT Systems956
DigitalThink957
199
Ibid958
Keystone Learning959
LearnIT960
Mastering Computers961
Mindworks Professional Education Group962
Productivity Group International963
Quickstart Technologies964
Scholars.com965
Systems Engineering Clinic966
Teletutor967
Transcender968
Internet Integration Services Companies
Amdahl969
Andersen Consulting970
ANS971
Web site, intranet, extranet, e-commerce
AT&T Solutions972
Bain and Co.973
BBN974
Bell Atlantic Network Integration975
Web site, intranet, extranet
Booz, Allen and Hamilton976
Bull-Integris977
Cap Gemini978
CBT Systems979
Compuserve980
Intranet, extranet
Coopers and Lybrand981
CSC Index982
Deloitte and Touche983
Web site, intranet, extranet, e-commerce
Digital Equipment984
EDS985
Entex Information Systems986
Ernst and Young987
Intranet, extranet, e-commerce
Global Learning Network988
Hewlett Packard989
Web site, intranet, extranet, e-commerce
IBM990
Inventa991
Intranet, extranet, e-commerce
KPMG Peat Marwick992
Lockheed Martin993
Lucent Technologies
200
Web site, intranet, extranet, e-commerce
MCI Systemhouse994
Web site, intranet, extranet, e-commerce
McKinsey995
Price Waterhouse996
Web site, intranet, extranet, e-commerce
Sapient997
Web site, intranet, extranet, e-commerce
Sun Professional Services998
Web site, intranet, extranet, e-commerce
Unisys999
Vanstar1000
International Business Machines
Global Campus1001 links with
National Technological University
Open University
Automated Training Systems1002
(IBM)
Intelligent Computer Assisted Instruction1003
Multimedia Self Training for SMEs1004
International Society for Technology in Education
(ISTE)
Rensselaer Writing Center1005
List of writing centers in U.S.
Course
Learning Activity Design Case Library (Tom Carey, University of Waterloo)1006
This has links to themes such as:
concept mapping
teaching and learning
how to think about how to learn
evaluation
Knowledge Forum1007
Tools
Tools for Automated Knowledge Engineering
Hypertext, Hypermedia and Learning1009
(TAKE)1008
Reference
Books
Bates, Tony, Technology, Open Learning and Distance Education, London: Routledge,
1995.
Dills, Charles R. and Alexander J. Romiszowski, Instructional Development Paradigms,
Englewood Cliffs: Educational Technology Publications.
Keegan, D., Foundations of Distance Education, London: Routledge, 1996
201
McLellan, Hilary, ed., Situated Learning Perspectives, Englewood Cliffs: Educational
Technology Publications.
Merrill, M. David and David G. Twitchell, ed. Instructional Design Theory, Englewood
Cliffs: Educational Technology Publications, 1994.
Smith, John B., Collective Intelligence in Computer Based Collaboration, Erlbaum
Associates.1010
Tiffin, John, In Search of the Virtual Class
Willis, B., Distance Education: Strategies and Tools, Englewood Cliffs: Educational
Technology Publications, 1994.
Wilson, Brent G., ed., Constructivist Learning, Englewood Cliffs: Educational
Technology Publications.
Sites
Theories of Learning
Knowledge Hierarchy Robert Gagné1011
Schema1012
202
Appendix 8. Basic Application Areas
Content Supplier
1. Describe, Analyse
2. Class
3. Index
4. Scan, Capture
5. Create
6. Database
7. Display
8. Store, Archive
9. Preserve
Broker
10. Retrieve
11. Restore
12. Translate
13. Transform, Morph
14. Encrypt
15. Copyright (Legal)
16. Transact (Financial)
17. Administer
18. Service Bureaus
User Task
19. Work
20. Design
21. Manufacture
22. Output, Broadcast
23. Forecast
User Discipline
24. Libraries
25. Museums
26. Military
27. Industry
28. Government
29. Education
30. Health
31. Entertainment
32. Evaluate
203
Appendix 9. Glossary of Key Elements in Internet Metadata
Client
Whois++
PH
Lightweight Directory Access Protocol
(LDAP)
Middleware Platform
Common Object Request Broker Architecture
(CORBA)
JAVA
Remote Method Invocation
(RMI)
A Java-CORBA alliance is in the works (May 18, 1998)1013
DCOM
Protocol Front End
a) Protocols
1) Hyper Text Transfer Protocol
(HTTP)
There is discussion within the W3C that the OMG's work on CORBA's
IIOP might be integrated with that of HTTP.
2) Internet Inter-ORB Protocol
(IIOP)
b) Directory Services
1. Finger
Queried only one database at a time
2. Whois
Stateless query response protocol
Queried only one database at a time
3. X.500
User Friendly Names as URN
uses an abstract naming system.1014
4. Whois++1015
TCP based protocol, which stems from whois
Centroids: collection of attributes and unique values
for these attributes taken from contents of server =
lightweight full-text index to determine whether
server has any relevant info.
Generated automatically from gopher, www, ftp
via IAFA templates
Harvested by index servers
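The centroid described above can be read as a per-server map from attribute names to the distinct values occurring on that server, consulted by an index server before forwarding a query. A sketch under that reading (all names are illustrative, not the Whois++ wire format):

```python
def build_centroid(records):
    """Collect, for each attribute, the set of unique values that
    occur anywhere in this server's records -- a lightweight
    full-text index in the spirit of a Whois++ centroid."""
    centroid = {}
    for record in records:
        for attr, value in record.items():
            centroid.setdefault(attr, set()).add(value.lower())
    return centroid

def may_hold(centroid, attr, value):
    """An index server consults the centroid to decide whether a
    server could answer a query at all (no false negatives)."""
    return value.lower() in centroid.get(attr, set())

records = [
    {"title": "Digital Libraries", "author": "Lesk"},
    {"title": "Hypertext in Context", "author": "McKnight"},
]
centroid = build_centroid(records)
print(may_hold(centroid, "author", "Lesk"))      # -> True
print(may_hold(centroid, "author", "Veltman"))   # -> False
```

Because the centroid records values without saying which record holds them, it can rule servers out but never rule a matching record in.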
5. WebPh1016
(PH)
is a client which scans strings of text which appear to be
204
http, gopher, ftp or e-mail and converts these on the fly
into clickable hyperlinks.
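The behaviour described for WebPh can be sketched as a simple text scanner that wraps URL-like and address-like strings in HTML anchors (an illustration of the idea, not WebPh's actual code):

```python
import re

# Toy version of the behaviour described above: scan plain text for
# strings that look like URLs or e-mail addresses and convert them
# on the fly into clickable hyperlinks.
PATTERN = re.compile(r'\b((?:http|gopher|ftp)://\S+|[\w.]+@[\w.]+\.\w+)')

def linkify(text):
    """Wrap each URL-like or address-like token in an HTML anchor."""
    def wrap(match):
        target = match.group(1)
        href = target if "://" in target else "mailto:" + target
        return '<a href="%s">%s</a>' % (href, target)
    return PATTERN.sub(wrap, text)

print(linkify("See http://example.org or write to me@example.org"))
```

A production scanner would also handle trailing punctuation and malformed addresses, which this sketch ignores.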
6. Lightweight Directory Access Protocol1017
Accesses an LDAP directory service or
one backed by X.500
(LDAP)
7. Simple Object Lookup Protocol
(SOLO)1018
8. Simple Discovery Protocol
(SDP)
a) nameservers use SDP to communicate with one another
b) clients use SDP directly.
c) Get
1. Wide Area Information Server
2. Verity
3. Fulcrum
4. Gopher
5. Z39.50
(WAIS)
d) Put/Post
1. File Transfer Protocol
2. Network News Transfer Protocol
(FTP)
(NNTP)
Indexing Object
Indexer API
Database Backend or Query Protocol
Structured Query Language
(SQL)
Z39.50
GNU DBM
GDBM
IBM DB2
Oracle
Sybase
205
Appendix 10. Metadata in Individual Disciplines
Please Note: Only items not specifically documented in the text have footnotes in the
following list.
Art
Visual Resources Association
(VRA)
Core Categories Metadata
Biology
Biological Metadata
IEEE
Biological Metadata Content Standard
Herbarium Information Standards
National Biological Information Infrastructure
Current Status
(NBII)
Schemas
Terrestrial Biological Survey Data, Australia
Data
Data Documentation Initiative
SGML DTD for Data Documentation
(DDI)
Education
Learning Object Metadata Group
Committee on Technical Standards for Computer Based Learning (IEEE P1484)
Educom
Instructional Management Systems Project
Metadata Tool
Electronics
Electronic Industries Association
CASE Data Interchange Format
(EIA)
(CDIF)
Engineering
Global Engineering Network
(GEN)
National Metacenter for Computational Science and Engineering
Industry
Basic Semantic Repository
(BSR)
206
Replaced by
BEACON
Open standards infrastructure for business
and industrial applications
Environment
United Nations
United Nations Environmental Program
Metadata Contributors
Environmental Information Systems
Environmental Database Systems
(UNEP)
Environmental Protection Agency
Environmental Data Registry
(EPA)
Central European Environmental Data Request Facility
(CEDAR)
World Conservation Monitoring Centre1019
Australia
Australia New Zealand Land Information Council Metadata
Environment Resources Information Network
(ANZLIC)
(ERIN)
Geospatial and Geographical Information
A. International Organization for Standardization
(GIS)
(ISO/IEC)
Open systems interconnection, data management
and open distributed processing
(ISO/IEC JTC1/SC21)
Specification for a data descriptive file for geographic interchange (ISO 8211)
Basis for International Hydrographic Organization
(IHO DX-90)
transfer standard for digital hydrographic data
Geographic Information
Standard representation of latitude, longitude and altitude
(ISO 15046)
(ISO 6709)
Geographic Information and Geomatics
WG1. Framework and reference model
WG 2. Geospatial data models and operators
WG 3. Geospatial data administration
WG 4. Geospatial Services
WG 5. Profiles and Functional Standards
(ISO/IEC/TC 211)
B. International
Fédération Internationale des Géomètres (FIG)
Commission 3.7 Spatial Data Infrastructure
International Terrestrial Reference Frame (ITRF)
C. Multi-National
European Standardisation Organization for Road Transport and Traffic Telematics (CEN/TC 278)
WG 1. Framework for standardisation
WG 2. Models and Applications
WG 3. Data Transfer
WG 4. Location Reference Systems
WG 7. Geographic Data File (GDF)
European Norms for Geographical Information (CEN/TC 287)
European Policy Framework for Geographical Information
European Geographical Information Infrastructure
GRIdded Binary (GRIB): standard set of codes for the storage and transmission of meteorological data
S-571020
SQL-MM1021
Geographic Tag Image File Format (GeoTif)1022
DG XIII Open Information Interchange (OII)
GIS Standards1023
European Umbrella Organisation for Geographical Information (EUROGI)
Open Geographical Information Systems Consortium Inc (OGC)
Open Geographic Data Committee
Open Geodata Interoperability Specification (OGIS)
D. National
Canada
Geographic Data BC, Ministry of Environment, B.C.1024
Spatial Archiving and Interchange Format (SAIF)1025
Germany
Deutscher Dachverband für Geoinformation (DDGI)1026
Russia
Standards List1027
United States
American Society for Testing and Materials (ASTM)1028
Digital Spatial Metadata1029
Spatial Data Transfer Standard (SDTS)1030
Federal Geographic Data Committee (FGDC)1031
Content Standard Digital Geospatial Metadata1032 (CSDGM)1033
Metadata Standards1034
Institute for Land Information (ILI/LIA)
Global Positioning Systems (GPS)
John Abate1035
F. Major Companies
ARC/INFO
ArcView
Autodesk
ARX Object Technology
Bell Laboratories, Lucent Technologies
Environmental Systems Research Institute (ESRI)
IBM Almaden
Z39.50 + spatial data elements
Government
Government Information Locator Service (GILS)
Cf. Global Information Locator Service (GILS)
Health and Medicine
HL7 Health Core Markup Language (HCML)
Library
ALCTS Taskforce on Metadata
Digital Library Metadata Group (DLMG)
Library Information Interchange Standards (LIIS)
Network of Literary Archives (NOLA)
Oxford Text Archive (OTA)
Text Encoding Initiative (TEI)
United Kingdom
Arts and Humanities Data Service (AHDS) and
United Kingdom Office for Library and Information Networking (UKOLN)
Proposal to Identify Shared Metadata Requirements
Metadata
Mapping between Metadata Formats
Linking Publishers and National Bibliographic Services (BIBLINK)
Nordic Metadata Project (NMP)
Physics (Scientific Visualisation)
National Metacenter for Computational Science and Engineering
Khoros
Notes on Paper
Science
Environmental Protection Agency (EPA)
Scientific Metadata Standards Project
Institute of Electrical and Electronic Engineers (IEEE)
(Scientific) Metadata and Data Management.
NOTES
Chapter 1 Libraries in the Digital Age
1 A development of these ideas concerning virtual reference rooms will be presented at
the next meeting of the Internet Society (San Jose, June 1999).
2 Originally published: Cambridge: Cambridge University Press, 1917.
Cf. http://www-groups.dcs.st-and.ac.uk/~history/Miscellaneous/darcy.html.
3 Clifford Stoll, Silicon Snake Oil, New York: Doubleday, 1995.
4 For more on this topic see a standard survey of the latest techniques, by Linda Kempster.
5 I am grateful to Professor Chris Llewellyn Smith, Director of CERN, who provided these statistics in a lecture at the INET 98 Summit (Geneva) on 22 July 1998.
6 Thomas H. Davenport with Laurence Prusak, Information Ecology: Mastering the Information and Knowledge Environment, Oxford University Press, 1997.
7 On this problem see the author’s "Past Imprecision for Future Standards: Computers and New Roads to Knowledge", Computers and the History of Art, London, vol. 4.1, (1993), pp. 2-3, 47-53.
8 It was originally sold to Thomson, then to CARL, IBM and ISM.
9 This, for example, is the approach taken by the Civita Consortium (Rome).
10 For an example of this problem in the context of historical studies see the author’s: "Past Imprecision for Future Standards: Computers and New Roads to Knowledge", Computers and the History of Art, London, vol. 4.1, (1993), pp. 17-32.
11 Based on its French name: Fédération Internationale de la Documentation
12 Based on its French name: Union Internationale des Associations
13 “ISO 639 contains much other information about the use of language symbols, registration of new symbols, etc. The language codes of ISO 639 are said to be "devised primarily for use in terminology, lexicography and linguistics, but they may be used for any application requiring the expression of languages in coded form." The registration authority for ISO 639 is given as Infoterm, Österreiches Normungsinstitut (ON), Postfach 130, A-1021 Vienna, AUSTRIA.” See: http://liberty.uc.wlu.edu/~hblackme/sgml/sgmlbib.html.
14 UNISIST. Synopsis of the Feasibility Study on a World Science Information System, Paris: UNESCO, 1971, p. xiii.
15 See http://www.medici.org
16 For an insightful analysis see: Climate Change and the Financial Sector. The Emerging Threat-The Solar Solution, ed. Jeremy Leggett, Munich: Gerling Akademie Verlag, 1996.
Chapter 2 Digital Reference Rooms
17 "World Access to Cultural Heritage: An Integrating Strategy", Acts of Congress: Beni Culturali. Reti Multimedialità, Milan, September 1996, Milan, 1997, (in press).
18 See the author’s: “The Future of the Memorandum of Understanding (MOU) for Multimedia Access to Europe’s Cultural Heritage,” Draft Document of the Memorandum of Understanding.
19 For an example of this problem in the context of historical studies see the author’s: "Past Imprecision for Future Standards: Computers and New Roads to Knowledge", Computers and the History of Art, London, vol. 4.1, (1993), pp. 17-32.
20 Based on its French name: Fédération Internationale de la Documentation
21 Based on its French name: Union Internationale des Associations
22 UNISIST. Synopsis of the Feasibility Study on a World Science Information System, Paris: UNESCO, 1971, p. xiii.
23 This digital reference room will also be a fundamental resource for a new software system which is truly universal in its scope, namely a System for Universal Multi-Media Access (SUMMA), provisionally to be developed at Maastricht.
Chapter 3 Search Strategies
24 See the work of Leonard Kleinrock: http://www.lk.cs.ucla.edu.
25 Lévy, Pierre, (1996), The Second Flood-Report on Cyberculture, Strasbourg: Council of Europe, CC-CULT, 27D. Cf. Lévy, Pierre, (1990), Les Technologies de l’intelligence, Paris: La Découverte.
26 For a further discussion of this trend see: Veltman, Kim H. (1997), Why Computers are Transforming the Meaning of Education, ED-Media and ED-Telecom Conference, Calgary, June 1997, ed. Müldner, Tomasz, Reeves, Thomas C., Charlottesville: Association for the Advancement of Computing in Education, vol. II, 1058-1076, available electronically at http://www.web.net/~akfc/sums/articles/Education.html
27 Recently, XML has become part of a more complex architecture strategy of the W3 Consortium which includes the Resource Description Framework (RDF), the Protocol for Internet Content Selection (PICS 2.0) and privacy initiatives (P3P). See: http://www.w3.org/TR/NOTE-rdfarch.
28 On this topic see: http://www.parc.xerox.com/spl/projects/mops/existing-mops.html and http://www.parc.xerox.com/spl/projects/oi/default.html
29 By this reasoning, each exercise requires its own software. Hence writing requires its own software, e.g. WordPerfect or Microsoft Word; drawing requires another, such as Corel Draw or Adobe Photoshop; design requires other software again, e.g. Alias-Wavefront or Softimage.
30 For a more detailed discussion of this concept see: Veltman, Kim H. (1997), Towards a Global Vision of Meta-data: A Digital Reference ‘Room’, 2nd International Conference. Cultural Heritage Networks Hypermedia, Milan, September 1997, Milan: Politecnico di Milano (in press).
31 Such a methodology has implications for hardware and network strategies. It means, for instance, that users will typically engage at least two connections simultaneously: one to the location they are searching, a second to the on-line digital reference room. In the case of everyday searches where a smaller set of reference materials would suffice, it is perfectly possible to imagine these on the hard drive for ready reference. At present the great debates of personal computers versus network computers are ostensibly about purely technical questions: whether one uses a regular hard drive with resident software or a thin client which relies mainly on software on a remote server, the assumption being that users could readily have everything on remote servers and thus effectively do without hard drives. Perhaps we need more on the home front than some programmers suspect, and more connectivity than they foresee.
32 A sceptic may rightly object that there is a fundamental flaw in this approach, namely, that these ordered lists in libraries can never pretend to cover the whole of knowledge. We would agree that this point is well taken, but insist that this does not diminish the legitimacy of using the assets of libraries to the extent that they are applicable. These limitations result in part from different kinds of knowledge. Libraries traditionally focus on books of enduring value or long-term knowledge, which is relatively static in nature. By contrast, materials of passing interest were often classed under Ephemera, as a way of avoiding the problems of materials whose categories were dynamic and constantly changing.
It is instructive to note that when the Association Internationale de Bibliographie was founded in the late nineteenth century, it soon split into two organisations: one, which became ISO TC 37, focussed on the categories and classing of established knowledge, whereas the Union Internationale des Associations (UIA) focussed its efforts on classing fields that were emerging and not yet clearly defined. It is therefore no coincidence that the Director of Communication and Research of the UIA, Mr. Anthony Judge, is such a pioneer in the classing of nascent subjects such as world problems. See, for instance, Benking, Heiner, Judge, Anthony J. N., (1994), Design Considerations for Spatial Metaphors: Reflections on the Evolution of Viewpoint and Transportation Systems, Position Paper: ACM-ECHT94 Workshop on Spatial Metaphors. Workshop at the European Conference on Hypermedia Technology, Edinburgh, 18-23 September 1994, available electronically at http://www.lcc.gatech.edu/~dieberger/ECHT94.WS.Benking.html. For a more thorough listing, see the homepages of the authors at http://newciv.org/members.benking and http://www.uia.org and more specifically under Research on Transdisciplinary Representation and Conceptual Navigation at http://www.uia.org/uiares/resknow.htm.
The Internet is producing electronic versions of both the enduring content found in books and ephemeral materials, but unlike libraries there is not yet a coherent method for distinguishing between them. A number of initiatives have been undertaken to remedy this situation, including high-level domain addresses, more precise URLs, URNs and URIs, meta-data tagging in http protocols, and other meta-data schemes which have been reviewed by the author in the article cited in note 1. As these new methods bring greater discipline to materials on the net, they will become more amenable to the methods used by libraries. In the meantime, the emphasis on sheer number crunching, which some assume to be a complete solution for all electronic knowledge, should perhaps be applied particularly to these undisciplined portions of the net.
In some cases, on-line ephemera have characteristics which are rarely found in libraries and are valuable precisely and only because they are available to some persons hours, minutes or sometimes even seconds before they are available to others: notably stock exchange figures, sports and race-track information and the like involving bets. Some of this material is so fleeting that it loses almost all its monetary value within twenty-four hours. Search strategies for such ephemera are predictably different from those for eternal truths.
33 Day, A. Colin, (1992), Roget’s Thesaurus of the Bible, San Francisco: Harper San Francisco.
34 Such strategies are, of course, not without their dangers. One has to be very careful to distinguish the user’s interests as a professional from their leisure interests. A nuclear physicist might well do searches on isotopes and quarks in one capacity and turn to sports or sex in the other. If such professional and leisure modes were mixed, the resulting search strategies might be more than mixed.
A more important potential role for agents lies in translating general lists of search terms
to more controlled lists which can be co-ordinated with subject lists and classification
systems. In studying a topic we typically make a list of terms or keywords which interest
us. For instance, a user may be interested in adaptive modelling, complex adaptive
modelling and conceptual modelling and write these terms sometimes in this form,
sometimes in reverse as modelling, adaptive etc. An agent would recognize that the user
is interested in modelling, adaptive, complex adaptive and conceptual. It would create
authority lists with controlled vocabularies, check which terms are found in subject lists
of standard classifications and thus arrive gradually at a distinction between those terms
which link directly with recognized fields of enduring knowledge, reflected in library
catalogues, and those terms which represent new areas of study to date perhaps only
recorded in citation indexes. In so doing one would create bridges between simple lists,
thesauri and classification systems. The discipline of these more controlled lists could
then be used to call up synonyms, broader, narrower and other related terms.
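The two steps sketched in this note, recognising inverted forms such as "modelling, adaptive" and grouping a user's keywords under a shared head term, can be illustrated in a few lines of Python. The sample terms, and the simple last-word heuristic for finding the head term, are our own illustrative assumptions, not part of any system described in the text.

```python
# Sketch of an agent's first step: normalise inverted keyword forms and
# group a user's terms under shared head terms. Terms are illustrative.

def normalise(term: str) -> str:
    """Turn inverted forms like 'modelling, adaptive' into direct order."""
    if "," in term:
        head, _, qualifier = term.partition(",")
        return f"{qualifier.strip()} {head.strip()}"
    return term.strip()

def group_by_head(terms):
    """Group normalised terms under their final (head) word."""
    groups = {}
    for term in terms:
        direct = normalise(term)
        head = direct.split()[-1]          # e.g. 'modelling'
        groups.setdefault(head, []).append(direct)
    return groups

user_terms = ["adaptive modelling", "modelling, complex adaptive",
              "conceptual modelling", "modelling, adaptive"]
print(group_by_head(user_terms))
# every variant is gathered under the head term 'modelling'
```

The resulting controlled groups could then be matched against subject lists and classification schedules, as the note describes.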
35 Some will argue that the making of lists is an outdated exercise because the power of computers is now so great that it is easier to search everything available through brute-force number crunching than to bother with the niceties of lists. As is so often the case in life, brute force has limitations which are overshadowed by intelligence.
36 The philosophy behind this aspect of the system is simple. Names remain the same, but it makes little sense to overwhelm children with a list of millions of names (such as the Library of Congress authority list) when they are still learning to read their first names. Hence beginners are given minimal subsets which are gradually increased as their horizons expand. This is effectively a simulation in electronic form of traditional experience. A child going to the resource centre or library in a kindergarten would have a very small list of names. In an elementary and secondary school the catalogue would increase accordingly. In a university library the list of names would be larger again, and would continue to expand as the research student was introduced to the catalogues of the world’s great libraries (National Union Catalogue, British Library and Bibliothèque Nationale).
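The progression described in this note can be sketched as a simple lookup from a user's level to the size of the authority subset served. The level names, cut-off sizes and stand-in name list are invented for the example; a real authority file would be far larger and ordered by editorial policy.

```python
# Illustrative sketch: serve each user only the slice of a name authority
# list appropriate to their level. All levels and sizes are invented.

LEVEL_LIMITS = {
    "kindergarten": 10,
    "elementary": 100,
    "secondary": 1_000,
    "university": 100_000,
    "research": None,            # None = the full list
}

def authority_subset(full_list, level):
    """Return the portion of the (priority-ordered) list for a level."""
    limit = LEVEL_LIMITS[level]
    return full_list if limit is None else full_list[:limit]

names = [f"name_{i}" for i in range(500)]   # stand-in authority list
print(len(authority_subset(names, "elementary")))   # 100
print(len(authority_subset(names, "research")))     # 500
```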
37 Dictionaries are objective in the sense that there is only one definition that corresponds to the definition in the Oxford English Dictionary (OED). The potential definition of a word remains subjective to the extent that there are definitions other than those in the OED. The same principle applies to classification systems and encyclopaedias.
38 Samurin, E. I., (1977), Geschichte der bibliothekarisch-bibliographischen Klassifikation, [History of Library and Bibliographical Classification], Munich: Verlag Dokumentation. Translated from the original Russian: Ocerki po istorii bibliotecno-bibliograficeskoj klassificacii, (1955-1959), Moscow, 2 vol.
39 These basic questions may have unexpected applications. In a very stimulating article, Professor Clare Beghtol suggested that text types might prove an important way of classing materials. See: Beghtol, Clare, (1997), Stories: Applications of Narrative Discourse Analysis to Issues in Information Storage and Retrieval, Knowledge Organization, 24 (2), 64-71.
[Figure 7 is a table, flattened in this transcription, mapping the six basic questions (Who, What, Where, When, How, Why) to the narrative elements proposed by Ruthrof (1981), Brewer (1985), Halasz (1987), Polkinghorne (1988), Lamarque (1990), Rigney (1990) and Clark (1995). Under "Who" the authors' terms are, respectively: Personae; Personae; Existents (Characters); People; Characters; Actors (individual and/or collective); Agents. The remaining columns distribute Events (non human), Acts (human), Actions (human), Structure (plot, including events), Space, Setting (Location), Setting (Time), Existents (setting), Place, Scene, Plot, Time, Voice, Narrator, Resolution and End across the What, Where, When, How and Why columns.]
Figure 7. Adaptation of Beghtol’s chart of narrative elements from non-ISAR fields in keeping with six basic questions.
Following the typology of Werlich, Egon, (1975), Typologie der Texte: Entwurf eines textlinguistischen Modells zur Grundlegung einer Textgrammatik [The Typology of Texts: An Outline of a Text-Linguistic Model for the Establishment of a Grammar of Text], Heidelberg: Quelle und Meyer, Beghtol suggested a basic distinction between narrative texts and non-narrative types: e.g. description, exposition, argumentation and instruction. Combining the research of six authors she proposed a table of narrative elements from non-ISAR fields (p. 66). A slight adaptation of these fields shows how they can be aligned in terms of the six basic questions (figure 7).
This suggests a definition of narrative as a text type which applies to all six questions in ways that the others do not. A more thorough study would need to relate these questions to the typological work of Lanser, Susan S., (1981), The Narrative Act: Point of View in Prose Fiction, Princeton: Princeton University Press; Lintvelt, Jaap, (1981), Essai de Typologie Narrative: Le point de vue. Théorie et Analyse, [Essay of Narrative Typology: Point of View. Theory and Analysis], Paris: Librairie José Corti; Lindemann, Bernhard, (1987), Einige Fragen an eine Theorie der sprachlichen Perspektivierung, [Some Questions concerning a Perspectival Treatment of Language], in Perspektivität in Sprache und Text, [Perspectivity in Language and Text], Hrsg. Peter Canisius, Bochum: Verlag Dr. Norbert Brockmeyer, 1-51. In keeping with the method outlined below, the various questions might each be placed on an independent plane. Different text types would then link differing numbers of planes.
40 As Brian Bell, the President of the OLA, has kindly pointed out, this leads to issues in metadata: questions of authority control as proposed at the University of Virginia, versus the OCLC view of using a URN or URL for each concept.
41 There will, of course, be more complex instances, as when a researcher wishes to determine all instances of a subject in all media, in order to study the relative significance of particular media. Was the theme used more in books and written materials or primarily in paintings? Did the advent of film and television increase the use of the theme or lead to its demise? These are cases where agent technologies will increasingly serve to do most of the preparatory work. In the past researchers had research assistants to find the raw material. Agents will translate this process into an electronic form. The challenge of making sense of the raw data will remain with the researcher.
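The preparatory tally such an agent would hand back to the researcher can be sketched very simply; the media labels and occurrence records below are invented placeholders, not real data.

```python
# Illustrative sketch of the agent's preparatory work described above:
# tally occurrences of a theme per medium and per period so the
# researcher can judge its relative significance. Records are invented.
from collections import Counter

# Each record: (medium, year) for one occurrence of the theme.
occurrences = [
    ("book", 1890), ("painting", 1890), ("book", 1910),
    ("film", 1930), ("film", 1950), ("television", 1960),
    ("book", 1960), ("television", 1970),
]

by_medium = Counter(medium for medium, _ in occurrences)
after_film = sum(1 for _, year in occurrences if year >= 1930)

print(by_medium.most_common())   # relative weight of each medium
print(after_film)                # occurrences since the advent of film
```

Interpreting what the tallies mean would, as the note says, remain the researcher's task.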
42 Tufte, Edward R., (1990), Envisioning Information, Cheshire, Conn.: Graphics Press. For an early discussion of these themes in terms of computer graphics see Benking, Heiner and Steffen, Hinrich, (1985), “Computer Graphics for Management, Processing, Analysis and Output of Spacial Data” (Corporate, Administrative, Facilities and Market), WCGA-CAMP ‘85 (World Computer Graphics Association and Computer Graphics for Management and Productivity) Conference, Section C.7.2, Future Trends, Berlin, 440-458.
43 For instance, these maps can be linked via Global Positioning Systems (GPS) to moving objects such as cars or even moving mail and package containers, such that one can trace their movements. If this were applied to all valuables, stealing and robbery could soon be outmoded bad habits.
44 See: http://www.artcom.de/t_vision/welcome.en.
45 Lukuge, Ishanta, Ishizaki, Suguru, (1995), Geospace. An Interactive Visualization System for Exploring Complex Information Spaces, CHI 95 Proceedings: Conference on Human Factors in Computing Systems: Mosaic of Creativity, May 7-11 1995, Denver, Co. See: http://www.acm.org.sigchi/chi95/Electronic/documents/papers/il_bdy.htm.
46 A number of companies are active in this domain. The Environmental Systems Research Institute (ESRI) is creating maps of the world. Autodesk has already created such maps for the world, North America and the Netherlands (cf. http://www.mapguide.com). For a review of these developments see Potmesil, Michael, (1997), Maps Alive: Viewing Geospatial Information on the WWW, Bell Laboratories, Lucent Technologies TEC 153, Holmdel, New Jersey, 1-14, or electronically on the web at http://www6.nttlabs.com/HyperNews/get/PAPER/30.html.
47 See, for instance, Forte, Maurizio, ed., (1997), Archéologie virtuelle. Le passé retrouvé, Paris: Arthaud, based on the Italian, (1996), Archeologia, percorsi virtuali nelle civiltà scomparse, [Archaeology, Virtual Journeys through Lost Civilisations], Milan: Mondadori.
48 See, for instance, the differences between Professors Willibald Sauerländer and Martin Gosebruch.
49 See: Yates, Frances, Dame, (1966), The Art of Memory, London: Routledge and Kegan Paul.
50 See, for instance, the work of Tony Judge in Visualising World Problems, Organisations, Values at: http://www.uia.org/uiademo/VRML/vrmldemo.htm.
51 There is a rumour that the next release of Word may come with Z39.50 functionality.
Chapter 4. Cultural Heritage
52 Ben Shneiderman, Designing the User Interface: Strategies for Effective Human Computer Interaction, 3rd ed., Reading, Ma.: Addison Wesley, 1997 (first edition 1987).
53 Steven Johnson, Interface Culture: How New Technology Transforms the Way We Create and Communicate, San Francisco: Harper, 1998. A book by Richard Saul Wurman, Information Architects, New York: Graphis Inc., 1997, provides a stimulating survey of some popular techniques but offers little insight into developments at the research level.
54 E.g. IEEE Technical Committee on Computer Graphics (TCCG), which publishes Transactions on Visualization and Computer Graphics (TVCG). See: http://www.cs.sunysb.edu/~tvcg/.
55 Annual or Bi-Annual Conferences:
Association for Computing Machinery (ACM): Computer Human Interface (CHI, INTERCHI)
Visualization 1998, Research Triangle Park, 18-23 October 1998; includes IEEE Information Visualization, 19-20 October 1998
EC ESPRIT Programme, Foundations of Visualisation and Multi-Modal Interfaces:
1) Comprehensive Human Animation Resource Model (CHARM)
2) Foundations of Advanced Three Dimensional Information Visualisation (FADIVA)
3) Framework for Immersive Virtual Environments (FIVE)
4) Reconstruction of Reality for Image Sequences (REALISE)
Visual Information Retrieval Interfaces (VIRI)
Workshop on Advanced Visual Interfaces (AVI): Aquila, 24-27 May 1998; Gubbio, 1996; Bari, 1994; Roma, 1992. See: http://informatik.uni-trier.de/~ley/db/conf/avi/index.html
Foundations of Advanced Three Dimensional Information Visualization Applications (FADIVA): Glasgow, 1996
Graph Drawing: Montreal, 1998; Rome, 1997; Berkeley, 1996; Passau, 1995; Princeton, 1994; Paris, 1993. See: http://gd98.cs.mcgill.ca
Other Significant Past Conferences in the Field:
1993 IEEE Symposium on Visual Languages
1994 InfoVis Symposium on User Interface and Technology
Related to the field of information visualization is the emerging field of diagrammatic reasoning. See: http://www.hcrc.ed.ac.uk/gal/Diagrams/research.html
56 See: http://www.geog.ucl.ac.uk/casa/martin/atlas/atlas.html
57 Durham University, Computer Science Technical Report 12/96. See: http://www.dur.ac.uk/~dcs3py/pages/work/Documents/lit-survey/IVSurvey/index.html
58 See: http://rvprl.cs.uml.edu/shootout/viz/vizsem/3dinfoviz.htm
59 See: http://www-graphics.stanford.edu/courses/cs348c-96fall/infovis1/slides/walk005.html
60 See http://www-graphics.stanford.edu/courses/cs348c-96-fall/scivis/slides/
Mark Levoy provides a summary of two taxonomies based on visual metaphors. The first is by Jacques Bertin, Sémiologie graphique: les diagrammes, les réseaux, les cartes, avec la collaboration de Marc Barbut [et al.], Paris: Mouton, [1973, c1967]; Jacques Bertin, Semiology of Graphics, translated by William J. Berg, Madison, Wis.: University of Wisconsin Press, 1983. This system is based on:
Imposition: Diagrams, Networks, Maps, Symbols
x Retinal Variables: Size, Value, Texture, Colour, Orientation, Shape
x Arrangement: Rectilinear, Circular, Orthogonal, Polar
x Association, Selection, Order, Quantity
The second taxonomy is from Peter R. Keller and Mary M. Keller, Visual Cues: Practical Data Visualization, Los Alamitos, CA: IEEE Computer Society Press; Piscataway, NJ: IEEE Press, c1993. This system is based on:
Actions: Identify, Locate, Distinguish, Categorise, Cluster, Rank, Compare, Associate, Correlate
x Data: Scalar, Nominal, Direction, Shape, Position, Region, Structure
Mark Levoy also distinguishes four taxonomies by data type:
- number of independent variables (domain)
- number of dependent variables (range)
- discrete vs. continuous domain
- binary vs. multivalued vs. continuous range.
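Levoy's four distinctions can be read as four fields jointly describing any dataset to be visualised. A minimal sketch, with field names and example datasets of our own choosing rather than Levoy's:

```python
# Sketch treating the four data-type distinctions above as a record
# describing a dataset. Field names and examples are illustrative.
from dataclasses import dataclass

@dataclass
class DataType:
    domain_vars: int          # number of independent variables (domain)
    range_vars: int           # number of dependent variables (range)
    discrete_domain: bool     # discrete vs. continuous domain
    range_kind: str           # 'binary', 'multivalued' or 'continuous'

# A 2-D temperature field: continuous domain, one continuous range value.
temperature_map = DataType(domain_vars=2, range_vars=1,
                           discrete_domain=False, range_kind="continuous")

# A yearly census table: discrete domain (years), several counted values.
census = DataType(domain_vars=1, range_vars=5,
                  discrete_domain=True, range_kind="multivalued")

print(temperature_map)
print(census)
```

Two datasets with the same record would, on this taxonomy, be candidates for the same visualisation techniques.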
His additional bibliography includes:
1977 John Wilder Tukey, Exploratory data analysis, Reading, Mass. : Addison-Wesley
Pub. Co.
1992 Harry Robin, The scientific image : from cave to computer, historical foreword by
Daniel J. Kevles, New York : H.N. Abrams.
1995 Computer visualization : graphics techniques for scientific and engineering
analysis, edited by Richard S. Gallagher, Boca Raton : CRC Press.
A standard introduction to problems of visualisation is offered by the work of Edward
Tufte, Visual Display of Quantitative Information, 1983; Envisioning Information, 1990,
(cf. note 28 above) and Visual Explanations, 1997, [all] Cheshire, Conn.: Graphics Press.
61 See: http://isx.com/~hci
http://web.cs.bgsu.edu/hcivil
http://www.logikos.com/sef.htm
Protocols:
Interface Definition Language (IDL). See: http://www.cs.umbc.edu/~thurston/corbidl.htm
Dynamic Invocation Interface (DII)
Enhanced Man Machine for Videotex and Multimedia (VEMMI). See: http://www.mctel.fr
62 There is of course an HCI virtual library. See http://usableweb.com/hcivl/hciindex.html
Hans de Graaf (Technical University, Delft) has a valuable index. See: http://is.twi.tudelft.nl/hci
Isabel Cruz has made a useful collection of reports on Human Computer Interaction. See: http://www.cs.brown.edu/people/ifc/hci/finalind.html.
See also: Human Computer Interface (HCI) Virtual Library. See: http://web.cs.bgsu.edu/hcivl/misc.html
http://www.nolan.com/~pnolan/resource/info.html
http://is.twi.tudelft.nl/hci/sources.html
Banxia Decision Explorer. See: http://www.banxia.co.uk/banxia
Document Visualization. See: http://www.psysch.uiuc.edu/docs/psych290/vincow_feb03.html
Information Visualisation Resources. See: http://www.cs.man.ac.uk/~ngg/infovis_people.html
Cf. http://graphics.stanford.edu/courses/cs348c-96fall/resources.html
Input Technologies. See: http://www.dgp.toronto.edu/people/BillBuxton/InputSources.html
Three Dimensional (3-D) User Interface Kit. See: http://www.cs.brown.edu/research/graphics/research/3d_toolkit/3d_toolkit.html
Visual Information Architecture (VIA). See: http://design-paradigms.www.media.mit.edu/projects/designparadigms/improver-paradigms/via.html
Visualisation and Intelligent Interfaces Group. See: http://almond.srv.cs.cmu/edu/afs/cs/project/sage/mosaic/samples/sage/3d.html
Patrick J. Lynch, Annotated Bibliography of Graphical Design for the User Interface. See: http://www.uky.edu/~xlin/VIRIreadings.html
Visual Design for the User Interface. See: http://info.med.yale.edu/caim/publications/papers/guip1.html
63 Network Centric User Interfaces (NUI). Tom R. Halfhill, “Good-Bye, GUI, Hello NUI,” Byte, Lexington, vol. 22, no. 7, July 1997, pp. 60-72. See: [email protected]
Apple: Mac OS 8, Rhapsody. See: http://www.apple.com/
IBM: Network Station, OS2/Warp 4, Bluebird. See: http://www.internet.ibm.com/computers/networkstation/
Lotus: Kona Desktop. See: http://kona.lotus.com
Microsoft: Memphis/Active Desktop. See: http://www.microsoft.com/backoffice/sbc_summary.htm#top
Netscape: Netscape. See: http://www.netscape.com/comprod/tech_preview/idex.html
Oracle/NCI: NC Desktop. See: http://www.nc.com
Santa Cruz Operation: Tarantella Web Top. See: http://tarantella.sco.com/
Sun/Java Soft: Hot Java Views. See: http://www.javasoft.com
TriTeal: SoftNC. See: http://www.softnc.triteal.com/
Ulysses Telemedia: VCOS. See: http://www.ulysses.net/
Cf. http://www.softlab.ece.ntua.gr/~brensham/Hci/hci.htm
64 Massachusetts Institute of Technology, (1995), Media Laboratory. Projects. February 1995, Cambridge, Mass.: MIT, 6.
65 A more interesting application is in the context of Collaborative Integrated Communications for Construction (CICC), available electronically at http://www.hhdc.bicc.com/people/dleevers/papers/cycleof.htm, which envisages a cycle of cognition in which the landscape is but one of six elements, namely: map, landscape, room, table, theatre, home. The author of this system, David Leevers, works with Heiner Benking through ASIS. See: http://www.hhdc.bicc.com/people/dleevers/default.htm
For an excellent summary of some of the major systems presently available see Peter Young, (1997), Three Dimensional Information Visualisation, available electronically. See: http://rvprl.cs.uml.edu/shootout/viz/vizsem/3dinfoviz.htm.
66 See: Rao, Ramana, Pedersen, Jan O., Hearst, Marti A., Mackinlay, Jock D., Card, Stuart K., Masinter, Larry, Halvorsen, Per-Kristian, Robertson, George C., (1995), Rich Interaction in the Digital Library, Communications of the ACM, New York, April, 38 (4), 29-39. Card, Stuart, (1996), Visualizing Retrieved Information, IEEE Computer Graphics and Applications.
67 Kling, Ulrich, (1994), “Neue Werkzeuge zur Erstellung und Präsentation von Lern- und Unterrichtsmaterialien [New Tools for the Production and Presentation of Learning and Instructional Materials],” Learntec 93. Europäischer Kongress für Bildungstechnologie und betriebliche Bildung, ed. Beck, Uwe, Sommer, Winfried, Berlin: Springer Verlag, 335-360. See: http://www-cui.darmstadt.gmd.de/visit/Activities/Lyberworld.
The GMD also organizes research on Foundations of Advanced Three Dimensional Information Visualization Applications (FADIVA) and Visual Information Retrieval Interfaces (VIRI). See: http://www-cui.darmstadt.gmd.de/visit/Activities/Viri/visual.html.
68 Ibid., 336-340. Cf. Streitz, N., Hannemann, J., Lemke, J. et al., (1992), SEPIA: A Cooperative Hypermedia Authoring Environment, Proceedings of the ACM Conference on Hypertext, ECHT ’92, Milan, 11-22.
69 See: http://viu.eng.rpi.edu/IBMS.html
70 See: http://multimedia.pnl.gov:2080/showcase/
71 See: http://multimedia.pnl.gov/2080/showcase/pachelbel.cgi?it_content/spire.node
72 See: http://www.pnl.gov/news/1995/news95-07.htm
73 See: http://www.cs.cmu.edu/Groups/sage/sage.html
74 See: http://www.maya.com/visage
75 See: http://www.cs.cmu.edu/Groups/sage/sage.html
76 See: http://www.cs.sandia.gov/SEL/main.html
77 See: http://www.cs.sandia.gov/VIS/science.html
78 See: http://www.sandia.gov/eve/eve_toc.html
79
See: http://www.dgp.toronto.edu/people/BillBuxton/InputSources.html. A less
comprehensive list is provided by Rita Schlosser and Steve Kelly (see:
http://ils.unc.edu/alternative/alternative.html), who include glove data-input devices
(VPL Data Glove, Exos Dextrous Hand Master, Mattel PowerGlove, other types of
gloves) as well as eye-computer interaction and access issues. It should be noted that
Alias/Wavefront is working on new three-dimensional input devices.
80
He also refers to stylus devices (digitizing tablets, lightpens, boards, desks and
pads), touch screens and force feedback (“haptic”) devices.
81
See: http://207.82.250.251/cgi-binstart
82
See: http://www.dragonsys.com/home.html
cf. http://www.gmsltd.com/voiceov.htm
83
See Geoffrey Rowan, “Computers that recognize your smile”, Globe and Mail,
Toronto, 24 November 1997, p. B3
84
See: http://delite.darmstadt.gmd.de/delite/Projects/Corinna
85
This includes Bolt, Beranek and Newman (BBN), Carnegie Mellon University (CMU),
the Massachusetts Institute of Technology (MIT) and the former Stanford Research
Institute (SRI).
86
See: http://multimedia.pnl.gov:2080/showcase/
87
See:
http://multimedia.pnl.gov:2080/showcase/pachelbel.cgi?it_content/auditory_display.node
88
See: http://www.al.wpafb.af.mil/cfb/biocomm.htm.
89
See: http://www.sarcos.com/Jacobsen.html
90
See: http://www.hitl.washington.edu/scivw/EVE/I.C.ForceTactile.html
91
Grigore Burdea, Virtual Reality and Force Feedback, New York: John Wiley & Sons,
1996. Cf. Grigore Burdea, Philippe Coiffet, Virtual Reality Technology, New York: John
Wiley & Sons, 1994.
92
This includes data input devices.
93
These include IBM, Apple, Netscape, Oracle, Sun, Nokia, Hitachi, Fujitsu, Mitsubishi
and Toshiba. Cf. ICO Global Communications at http://www.ico.com
94
See: http://www.igd.ghg.de/www/zgdv-mmvis/miv-projects_e.html#basic
95
See: http://www.ubiq.com/hypertext/weiser/IbiHome.html
96
- Adaptive and User Modelling; Adaptive and Intelligent Systems Applications
See: http://www.kareltek.fi/opp/projects/index.html
- Adaptive Behavior Journal
- Adaptive Environments
See: http://www.adapt.env.org
- Adaptive Networks Laboratory (ANW)
See: http://www-anw.cs.umass.edu
Cf. Andrew G. Barto, Richard S. Sutton, Reinforcement Learning
- User Modeling Conference, Chia Laguna
See: http://www.crs4.it/UM97/topics.index.html
- Knowledge Systems Laboratory, Stanford: Adaptive Intelligent Systems
See: http://www-ksl.stanford.edu/projects/BBI
97
See: http://www.lk.cs.ucla.edu
98
See: http://www.virtualvision.com
99
See, for example, the work of Gregg Vanderheiden, Trace Center, Madison,
Wisconsin.
See: http://trace.wisc.edu
100
Rita Schlosser and Steve Kelly at http://ils.unc.edu/alternative/alternative.html have
made a list which includes: Gaze Tracking, Human-Computer Interaction and the
Visually Impaired, Modelling and Mark Up Languages in Visual Aid.
101
Internationale Stiftung Neurobionik, Nordstadt Krankenhaus, Hannover. The director
of the project is Professor Dr. Madjid Samii.
102
Fraunhofer Institut für Biomedizinische Technik (IBMT), D-66386, St. Ingbert
103
Universität Tübingen, Reutlingen, Naturwissenschaftlich-Medizinisches Institut (NMI)
104
See: http://www3.osk.3web.ne.jp/~technosj/mctosE.htm
Cf. Michael Kesterton, “All in the mind?”, Globe and Mail, Toronto, A14, 6 January
1998.
105
See: Frank Beacham, “Mental telepathy makes headway in cyberspace,” Now,
Toronto, 13-19 July 1997, pp. 20-21.
106
See: http://www.sciam.com/1096issue/1096lusted.html
107
See: http://www.af.mil/news/airman/0296/look.htm. Other members of his team are
Chris Gowan and David Pole. The section is headed by Dr. Don Monk. There appears to
be related work at the Crew Systems Ergonomics Information Analysis Center
(CSERIAC)
See: http://cseriac.udri.udayton.edu.
108
See: http://www.harpercollins.co.uk/voyager/features/004/fut4.htm
109
See: http://www.premier-research.com/6chris_gallen.htm
110
I.e. Consiglio Nazionale delle Ricerche.
111
See http://zeus.gmd.de/projects/hips.htm
112
See: http://www.mic.atr.co.jp/~rieko/MetaMuseum.html
113
Closely related is a technology called the electronic book, such as
Soft Book (http://www.softbookpress.com/),
Rocket Book (http://www.nuvomedia.com/html/productindex.html) and
Everybook (http://www.everybk.com)
(cf. http://www.siliconvalley.com/columnists/gillmor/docs/dg061298.htm).
See Chisato Numaoka, “Cyberglass: Vision-Based VRML2 Navigator,” Virtual Worlds, ed. Jean-Claude Heudin, Berlin: Springer Verlag, 1998.
114
See: http://www.cs.columbia.edu/~feiner
115
Related projects at Columbia University include Augmented Reality for Construction
(ARC), Columbia Object-Oriented Testbed for Exploring Research in Interactive
Environments (COTERIE), Knowledge Based Virtual Presentation Systems
(IMPROVISE), Knowledge Based Augmented Reality for Maintenance Assistance
(KARMA)
See: http://www.cs.columbia.edu/graphics/projects/karma/karma.html and Windows
on the World (formerly called Worlds within Worlds).
116
See:
http://www.cs.columbia.edu/graphics/projects/archAnatomy/architecturalAnatomy.html
117
See: http://www.cc.columbia.edu/cu/gsapp/BT/RESEARCH/LOW/Models.html
118
See: http://www.igd.fhg.de/www/igd-a4/index.html. The Institute’s division on
visualisation and virtual reality (Abteilung Visualisierung und Virtuelle Realität)
works directly with the European Computer Research Centre (ECRC, Munich). Cf. the
Data Visualization work of Gudrun Klinker
See: http://www.ecrc.de/staff/gudrun.
119
See: http://www.csl.sony.co.jp/person/rekimoto/navi.html
120
See also Katashi Nagao and Jun Rekimoto, “Agent Augmented Reality: A Software
Agent Meets the Real World”
See: http://www.csl.sony.co.jp/person/nagao/icmas96/outline.html;
Trans-Vision Collaboration Augmented Reality Testbed
See: http://www.csl.sony.co.jp/person/rekimoto/transvision.html
Virtual Society Information Booth
See http://www.csl.sony.co.jp/project/VS/index.html and
Homepage of Jun Rekimoto
See: http://www.csl.sony.co.jp/projects/ar/ref.html
121
Named after a type of old-fashioned Japanese drama.
122
For an initial discussion of SUMS see above chapters 2 and 3. For further literature
see: http://www.sumscorp.com.
123
See: http://mercurio.sm.dsi.unimi.it/~gdemich/campiello.html
124
See: http://dynamicdiagrams.com/siteviews.htm
125
See: http://www.almaden.ibm.com/vis/vis.lab.html
126
The Uffizi already has available more complex versions of 30-40 MB per room.
Indeed, the Uffizi is scanning in their entire collection of 1300 paintings at approximately
1.4 gigabytes per square meter. Assuming that the average painting is slightly larger than
a square meter this means that their collection will require 2.6 terabytes. The National
Gallery in Washington is scanning images at a much lower resolution of c.30 MB per
painting, but with a much larger collection of 105,000 images this will still result in some
3.15 terabytes. While it is frequently assumed that only experts will want images at such
high resolutions, today’s desktop PCs are not yet equipped to deal with millions of
paintings on-line.
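The arithmetic behind these totals can be checked with a short sketch. The average canvas area used below (c. 1.43 square metres) is an assumption inferred from the quoted 2.6-terabyte total, not a figure supplied by the Uffizi:

```python
# Back-of-envelope storage estimates for digitized painting collections.
# The average canvas area (~1.43 m^2) is an assumption inferred from the
# 2.6-terabyte total quoted above, not a figure supplied by the Uffizi.

def collection_terabytes(n_paintings, gb_per_painting):
    """Total storage, in decimal terabytes, for a scanned collection."""
    return n_paintings * gb_per_painting / 1000.0

# Uffizi: 1300 paintings scanned at 1.4 GB per square metre.
avg_area_m2 = 1.43  # assumed average canvas area
uffizi_tb = collection_terabytes(1300, 1.4 * avg_area_m2)

# National Gallery, Washington: 105,000 images at c. 30 MB each.
ngw_tb = collection_terabytes(105_000, 30 / 1000.0)

print(round(uffizi_tb, 1))  # ~2.6
print(round(ngw_tb, 2))     # ~3.15
```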
127
See chapter two above.
128
IMAX is exploring the possibility of delivering their images on-line. This will require
approximately 80 GB/second, which seems astronomical at the moment but in light of
recent demonstrations at the terabyte level is rapidly becoming feasible.
129
Using GOTO technology.
130
Once again there are problems of terminology. While virtual museum typically means
an electronic reconstruction of the physical building, virtual library often means a
bibliography on given subjects, while digital library is frequently used for electronic
versions of the contents of books.
131
As Michael Ester (formerly Getty Art History Information Program, now Getty
Information Institute) has shown, books involve reflected light and allow the eye to see
up to about 3,500 lines per inch. Computer screens, which shoot light directly into the
eye, activate a different combination of rods and cones and only allow one to see about
2,000 lines per inch. This helps explain why proof-reading is so much more difficult on
screen than it is on paper.
132
Sir Ernst H. Gombrich, “The Mirror and the Map, Theories of Pictorial
Representation” in Philosophical Transactions of the Royal Society of London, London,
vol. 270, no. 903, 1975, pp. 119-149.
133
Archeologia, percorsi virtuali nelle civiltà scomparse, Milan: Mondadori Editore,
1996. This book has since been translated into French and English.
134
See http://viswiz.gmd.de/VMSD/PAGES.en/index.html. Working in conjunction with
Stanford University, the GMD has also been working on a Responsive Workbench,
which effectively transforms a traditional workbench surface into the equivalent of a
monitor or computer screen showing images in virtual reality which can be viewed with
the aid of stereoscopic glasses and manipulated interactively at a distance. Applications
of such a workbench include TeleTeaching and Virtual Meeting. This bears comparison
with the Virtual Workbench:
See: http://beast.cbmv.jhu.edu:8000/projects/workbench/workbench.shtml
and Brainbench
See: http://beast.cbmv.jhu.edu:8000/projects/brainbench/brainbench.shtml
being developed in the Virtual Environments Program at the Australian National
University (Canberra) and the Virtual Table being produced by the Fraunhofer
Gesellschaft (Darmstadt).
135
This is being developed in the context of the GMD’s VISIT project. Other projects of
the GMD include the Digital Media Lab’s (DML) Fluid Dynamics Visualisation in 3D
(FluVis).
136
See: http://sgwww.epfl.ch/BERGER.
137
See: http://mediamatic.nl/Magazine/8*2Lovinck-Legrady.html.
138
See: http://www.cdromshop.com/cdshop/desc/p.735163027518.html.
139
See: http://shea1.mit.edu.
140
The key figures in this Research and Development Group are Prof. Manfred
Eisenbeis, Annette Huennekens and Eric Kluitenberg. See: [email protected]
141
I am grateful to my colleague Professore Ivan Grossi for this information.
See: http://mosaic.cineca.it
142
See: http://hydra.perseus.tufts.edu
143
See: http://www.gfai.de/projekte/index.htm
144
Since the term virtual is used in so many ways, the French have tended to adopt
Malraux’s phrase and refer to all virtual museums as imaginary museums.
145
Among those active in the realm of metadata are the following:
International Council of Scientific Unions,
- Committee on Data for Science and Technology
See: http://www.cisti.nrc.ca/programs/codata/
- United Nations Environment Programme (UNEP)
“Towards the design for a Meta-Database for the Harmonization of Environmental
Measurement,” Report of the Expert Group Meeting, July 26-27, 1990, Nairobi:
UNEP, 1991 (GEMS Report Series no. 8).
Harmonization of Environmental Measurement Information System (HEMIS)
Cf. Heiner Benking, Ulrich Kampffmeyer, “Access and Assimilation: Pivotal
Environmental Information Challenges,” GeoJournal, Dordrecht, 26.3/1992,
pp. 323-334.
- American Institute of Physics
cf. Heiner Benking and Ulrich Kampffmeyer, “Harmonization of Environmental
Meta-Information with a Thesaurus Based Multi-Lingual and Multi-Medial
Information System,” Earth and Space Science Information Systems, ed. Arthur
Zygielbaum, New York: American Institute of Physics, 1992, pp. 688-695. (AIP
Conference Proceedings 283).
- Environmental Protection Agency (EPA) Scientific Metadata Standards Project
See: http://www.lbl.gov/~olken/epa.html#Related.WWW
Re: Metadata registries
See: http://www.lbl.gov/~olken/EPA/Workshop/recreadings.html
For an attempt at a metadata taxonomy
See: http://www.lbl.gov/~olken/EPA/Workshop/taxonomy.html
146
A detailed survey of this important field will be the subject of a separate paper for the
opening keynote of Euphorie Digital? Aspekte der Wissensvermittlung in Kunst, Kultur
und Technologie, Heinz Nixdorf Museums Forum, Paderborn, September 1998.
147
Hypertext Markup Language (HTML), as an interim solution, marked a departure
from this method in that it conflated content with presentation.
148
Voice activation may be attractive at times but will frequently be impractical.
Imagine the reading room of a library where everyone is speaking, or even a museum
where everyone is speaking to their computers.
149
Brian Bell suggests that this can be accomplished by linking a PURL or URN to LC
authorities and others from specialized societies such as biochemistry terms etc. Local
library OPACS are able to do this now. Ameritech and A.G. would be leaders in the field.
150
The Getty Research Institute’s Union List of Artists Names (ULAN) would be
another example, although with only 100,000 names as opposed to the 328,000 of the
AKL, the term “union” promises more than it delivers.
151
Cf. the University of Virginia’s Samuel Clemens examples.
See: http://library.berkeley.edu/BANC/MTP/.
I am grateful to Brian Bell for this reference.
152
See, for instance, the methods being developed by Lucent in their Live Web
Stationery. See: http://medusa.multimedia.bell-labs.com/LWS/.
153
Libraries are relatively simple structures. In the case of more complex systems such
as the London Underground it is useful to move progressively from a two-dimensional
schematic simplification of the routes to a realistic three-dimensional rendering of the
complete system, station by station. In the context of telecommunications the so-called
physical world becomes one of seven layers in the model of the International Standards
Organisation (ISO). In such cases it is useful not only to treat each of the seven layers
separately but also to introduce visual layers to distinguish the granularity of different
views. In looking at the physical network, for example, we might begin with a global
view showing only the main nodes for ATM switches. (Preliminary models for
visualising the MBone already exist: Munzner, Tamara, Hoffman, Eric, Claffy, K.,
Fenner, Bill, (1996), Visualizing the Global Topology of the MBone, Proceedings of the
1996 IEEE Symposium on Information Visualization, San Francisco, October 28-29,
85-92, available electronically.
See: http://www-graphics.stanford.edu/papers/bone). A next layer might show lesser
switches, and so on, such that we can move up and down a hierarchy of detail, sometimes
zooming in to see the configuration of an individual PC, at other times looking only at the
major station points. This is actually only an extension of the spectrum linking Area
Management/Facilities Management (AM/FM) with Geographical Information Systems
(GIS) mentioned earlier.
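The progressive zoom described here, from main ATM switch nodes down to an individual PC, can be sketched as a simple level-of-detail filter over a node hierarchy. The node names and levels below are invented for illustration, not taken from the MBone visualisation cited above:

```python
# Level-of-detail filtering over a network hierarchy (illustrative
# sketch only; node names and levels are invented for the example).

nodes = [
    {"name": "ATM-switch-1",   "level": 0},  # main backbone node
    {"name": "regional-sw-12", "level": 1},  # lesser switch
    {"name": "office-router",  "level": 2},
    {"name": "desktop-PC-117", "level": 3},  # finest granularity
]

def visible_at(zoom, nodes):
    """Nodes shown at a given zoom depth: the global view (zoom=0)
    shows only main switches; zooming in adds finer layers."""
    return [n["name"] for n in nodes if n["level"] <= zoom]

print(visible_at(0, nodes))  # global view: main ATM switches only
print(visible_at(3, nodes))  # full detail, down to an individual PC
```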
154
These combinations were and remain successful because they were guided by culture
and taste. Combinations per se do not guarantee interesting results. If taste and sensibility
are lacking the results are merely hybrid versions of kitsch. So the technology must not
be seen as an answer in itself. It offers a magnificent tool, which needs to be used in
combination with awareness of the uniqueness and value of local traditions.
155
An exception is a university textbook, Atlas of Western Art History, ed. John Steer,
Anthony White, New York: Parchment Books, 1994, pp. 54-55.
156
This problem is somewhat more complex than it at first appears. Many of the great
temples are in ruins. There are conflicting interpretations about their exact dimensions
and appearances. Hence in this case interpretations of the various ruins are more closely
linked to our “knowledge” thereof than in the case of historical buildings which are still
intact.
157
For a serious discussion of how the advent of printing changed the criteria for
knowledge see: Michael Giesecke, Der Buchdruck in der frühen Neuzeit. Eine historische
Fallstudie über die Durchsetzung neuer Informations- und Kommunikationstechnologien,
(Frankfurt am Main: Suhrkamp, 1991). For the mediaeval period see the masterful study
by Brian Stock, The Implications of Literacy, Written Language and Models of
Interpretations in the Eleventh and Twelfth Centuries, Princeton: Princeton University
Press, 1983.
158
J. Perrault, "Categories and Relators", International Classification, Frankfurt, vol. 21,
no. 4, 1994, pp. 189-198, especially p. 195. The original list by Professor Nancy
Williamson (Faculty of Information Studies, University of Toronto) lists these in a
different order under the heading:
Types of Associative Relationships
1. Whole-part
2. Field of study and object(s) studied
3. Process and agent or instrument of the process
4. Occupation and person in that occupation
5. Action and product of action
6. Action and its patient
7. Concepts and their properties
8. Concepts related to their origins
9. Concepts linked by causal dependence
10. A thing or action and its counter-agent
11. An action and a property associated with it
12. A concept and its opposite.
159
Anthony J. N. Judge, "Envisaging the Art of Navigating Conceptual Complexity,"
International Classification, Frankfurt, vol. 22, n. 1, 1995, pp. 2-9. The same author was
responsible for one of the very early publications on this theme, “Knowledge
Representation in a Computer Supported Environment,” International Classification,
Frankfurt, vol. 4, no. 2, 1977, pp. 76-80. The pioneering work of Anthony Judge in the
context of the Union Internationale des Associations is also available on-line:
See: http://www.uia.org
This includes:
Coherent Organization of a Navigable Problem-Solution-Learning Space
See: http://www.uia.org/uiadocs/ithree2.htm
Metaphors as Transdisciplinary Vehicles for the Future
See: http://www.uia.org/uiadocs/transveh.htm
Sacralization of Hyperlink Geometry
See: http://www.uia.org/uiadocs/hypgeos.htm
Representation, Comprehension and Communication of Sets: The Role of Number
See: http://www.uia.org/knowledg/numb0.htm
The Future of Comprehension
See: http://www.uia.org/uiadocs/compbasc.htm
Dimensions of Comprehension Diversity
See: http://www.uia.org/uiadocs/compapl.htm
Using Virtual Reality for Visualization
See: http://www.uia.org/uiademo/vrml/vrmldemo.htm
The Territory Construed as a Map
See: http://www.uia.org/uiadocs/terrmap.htm
160
Pioneering in this field has been Eugen Wüster, Internationale Sprachnormierung in
der Technik, Bonn: Bouvier, 1966. He distinguishes between generic (logical), partitive
(ontological), complementary (oppositions) and functional (syntactic) relations. For other
studies see Wolfgang Dahlberg, Wissenstrukturen und Ordnungsmuster, Frankfurt:
Indeks Verlag, 1980 and Analogie in der Wissensrepräsentation: Case-Based Reasoning
und räumliche Modelle, ed. Hans Czap, P. Jaenecke and P. Ohly, Frankfurt: Indeks
Verlag, 1996.
161
Any attempt at ontological structuring will inevitably inspire critics to claim that a
slightly different arrangement would have been closer to the true hierarchy. While such
debates have their value, it is important to recognize that even if there is no complete
agreement about a final configuration, the conflicting versions can still contribute to new
insights, by challenging us to look at trends from a more universal level.
162
See: http://www.uia.org/webints/aaintmat.htm.
163
See: Benking, Heiner, (1997), “Understanding and Sharing in a Cognitive Panorama.”
Culture of Peace and Intersymp 97. 9th International Conference on Systems Research,
Informatics and Cybernetics, August 18-23, Baden-Baden, available electronically
See: http://www3.informatik.uni-erlangen.de:1200/Staff/graham/benking/index.html.
http://newciv.org/cob/members/benking/
Other articles by the same author include:
Benking, Heiner, (1992), “Bridges and a Master Plan for Islands of Data in a Labyrinth of
Environmental and Economic Information,” Materials and Environment. Databases and
Definition Problems: Workshop M. and System Presentation. 13th ICSU-CODATA
Conference in collaboration with the ICSU-Panel on World Data Centers, Beijing,
October 1992.
Benking, Heiner, “Design Considerations for Spatial Metaphors- Reflections on the
Evolution of Viewpoint Transportation Systems,” Workshop at the European Conference
on Hypermedia Technology (ECHT 94), Spatial User Interface Metaphors in
Hypermedia Systems, September 1994, Edinburgh, 1994.
See: http://www.lcc.gatech.edu/~dieberger/ECHT94.WS.Benking.html
Benking, Heiner, (1998), “Sharing and Changing Realities with Extra Degrees of
Freedom of Movement,” Computation for Metaphors, Analogies and Agents, Aizu-Wakamatsu City, April 1998, University of Aizu (in press):
See: http://www.ceptualinstitute.com/genre/benking/landscape.htm
Cf. also the forthcoming Benking, Heiner and Rose, J. N., “The House of Horizons and
Perspectives,” ISSS Conference in cooperation with the International Society of
Interdisciplinary Studies, Atlanta, 19-24 July 1998.
Benking identifies six elements as part of his Panorama of Understanding, knowing and
not knowing: bridges (Brücke), forest and ground (Wald und Flur), unknown territory
(terra incognita), maps, filters and brokers; multimedia bridges and integration; viewable
ensemble of the world of the senses (Anschauliches Sinnweltenensemble).
164
If we look, for instance, at classifications of the Middle Ages there were no categories
for science (as we now know it) or psychology. What we call science was typically
(natural) philosophy or was included under the rubric of the quadrivium (arithmetic,
geometry, music and astronomy). Psychology was often in literature such as the Roman
de la Rose.
165
See: http://www.hitl.washington.edu/.
166
See: http://viu.eng.rpi.edu/overview2.html and http://viu.eng.rpi.edu/IBMS.html.
167
Martin, Steve, Clarke, Steve, Lehaney, Brian, (1996), “Problem Situation Resolution,
and Technical, Practical and Emancipatory Aspects of Problem Structuring Methods,”
PAKM ’96, Practical Aspects of Knowledge Management, First International
Conference, Basel, 30-31 October 1996, 179-186. I am grateful to Heiner Benking for
this reference.
168
Cf. Hans Robert Jauss, Ästhetische Erfahrung und literarische Hermeneutik, Munich:
W. Fink, 1977.
169
See Rensselaer W. Lee, Ut pictura poesis. The Humanistic Theory of Painting, New
York: W. W. Norton and Co., 1967.
170
Sir Ernst Gombrich, “The What and the How. Perspective Representation and the
Phenomenal World,” Logic and Art. Essays in Honor of Nelson Goodman, ed. R. Rudner
and I. Scheffler, New York: Bobbs Merrill, 1972, pp. 129-149.
171
Sir Ernst Gombrich, “The Visual Image: Its Place in Communication,” The Image
and The Eye, London: Phaidon, 1982, pp. 137-161.
172
See: http://www.cs.sandia.gov/SEL/Applications/saturn.html
173
Brygg Ullmer discusses the Cellular Universe Multiscale Spatial Architecture
(CUMSA), with its cellular, spatial and entity classes, in Multiscale Spatial
Architectures for Complex Information Spaces.
See: http://ullmer.www.media.mit.edu/people/ullmer/papers/multiscale/node7.html.
174
For a further discussion of these problems see Veltman, Kim H., (1997), “Why
Culture is Important [in a World of New Technologies],” 28th Annual Conference:
International Institute of Communications Conference, October 1997, London:
International Institute of Communications, 1997, 1-10.
175
Anthony J. N. Judge, “Systems of Categories Distinguishing Cultural Biases with
notes on facilitation in a multi-cultural environment”, Brussels: Union of International
Associations, n.d. [c.1992]. See also an important article by the same author on
“Distinguishing Levels of Declarations of Principles,” available on line:
See: http://www.ceptualinstitute.com/genre/judge/level20.htm
176
Magoroh Maruyama, “Mindscapes, Social Patterns and Future Development of
Scientific Types,” Cybernetica, 1980, 23, 1, pp. 5-25.
177
Geert Hofstede, Culture’s Consequences: International Differences in Work-Related
Values, London: Sage, 1984.
178
Kinhide Mushakoji, Scientific Revolution and Interparadigmatic Dialogue, Tokyo:
United Nations University, GPID Project, 1978.
179
Will McWhinney, Paths of Change: Strategic Choices for Organizations and Society,
London: Sage, 1991.
180
S. Pepper, World Hypotheses: A Study in Evidence, Berkeley: University of California
Press, 1942.
181
Mary Douglas, Natural Symbols: Explorations in Cosmology, London: Pelican, 1973.
182
Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences, London:
Heinemann, 1984.
183
W.T. Jones, The Romantic Syndrome: Toward a New Method in Cultural
Anthropology and the History of Ideas, The Hague: Martinus Nijhoff, 1961.
184
Emmanuel Todd, La Troisième Planète: structures familiales et systèmes
idéologiques, Paris, 1983.
185
As in note 118 above.
Chapter 5 New Knowledge
186
This was originally published as: “New Media and Transformations in Knowledge,”
based on an opening keynote: “Metadata und die Transformation des Wissens,”
Euphorie Digital? Aspekte der Wissensvermittlung in Kunst, Kultur und Technologie,
Heinz Nixdorf Museums Forum, Paderborn, September 1998, Paderborn. (in press).
187
It is instructive to note that although the impulses for this research came from various
centres, notably, Cambridge, many of the key ideas developed at the University of
Toronto in the context of classical studies, history, English literature and media studies.
188
Eric A. Havelock, Preface to Plato, Cambridge: Belknap Press, Harvard University
Press, 1963.
189
In the past generation scholars such as Jack Goody (Cambridge) have explored the
implications of this phenomenon in the context of developing cultures, particularly
Africa. See, for instance, Jack Goody, The Domestication of the Savage Mind,
Cambridge: Cambridge University Press, 1977
(cf. http://gopher.sil.org/lingualinks/library/literacy/GFS812/cjJ360/GFS3530.htm).
See also the work edited by him, Cultura escrita en sociedades tradicionales,
Barcelona: Gedisa, 1996 (cf. http://www.ucm.es/info/especulo/numero5/goody.htm).
190
Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man,
Toronto: University of Toronto Press, 1962; Understanding Media: The Extensions of
Man, New York: McGraw-Hill, 1964 (cf. http://www.mcluhanmedia.com/mmclm005.html).
191
Harold Adams Innis, Empire and Communications (1950), ed. David Godfrey,
Victoria, B.C.: Press Porcepic, 1986 and The Bias of Communication, introduction by
Marshall McLuhan, Toronto: University of Toronto Press, 1964
(http://www.mala.bc.ca/~soules/paradox/innis.htm). Cf. Judith Stamps, Dialogue,
Marginality and the Great White North. Unthinking Modernity: Innis, McLuhan and the
Frankfurt School, Montreal/Kingston: McGill-Queen’s UP, 1995.
192
W. Terence Gordon, Marshall McLuhan, Escape into Understanding, Toronto:
Stoddart, 1997.
193
See: http://www.daimi.aau.dk/~dibuck/hyper/bush.html.
194
See: http://www2.bootstrap.org/.
195
See: http://www.sfc.keio.ac.jp/~ted/
196
See, for instance, Derrick de Kerckhove, The Skin of Culture: Investigating the New
Electronic Reality, Toronto: Somerville House Publishing, 1995, and Connected
Intelligence: The Arrival of the Web Society, Toronto: Somerville House Publishing,
1997. (cf. http://www.mcluhan.toronto.edu/derrick.html).
197
Pierre Lévy, L'Intelligence Collective: Pour une Anthropologie du Cyberspace, Paris:
Éditions La Découverte, 1994. Translation: Collective Intelligence: Mankind's Emerging
World in Cyberspace, translated by Robert Bononno, Plenum Press, 1998. See also Ibid.,
The Second Flood, Strasbourg: The Council of Europe, 1996, (cf.
http://www.unesco.or.kr/culturelink/mirror/research/21/cl21_levi.html
and http://www.georgetown.edu/grad/CCT/tbase/levy.html).
198
Michael Giesecke, Der Buchdruck in der frühen Neuzeit. Eine historische Fallstudie
über die Durchsetzung neuer Informations- und Kommunikationstechnologien, Frankfurt
am Main: Suhrkamp, 1991. For the mediaeval period see the masterful study by Brian
Stock, The Implications of Literacy, Written Language and Models of Interpretations in
the Eleventh and Twelfth Centuries, Princeton: Princeton University Press, 1983.
199
Armand Mattelart, Transnationals and the Third World: The Struggle for Culture,
South Hadley, Mass.: Bergin & Garvey, 1985; Ibid., Mapping World Communication:
War, Progress, Culture, University of Minnesota Press, 1994; Armand Mattelart and
Michèle Mattelart, Historia de las teorías de la comunicación, Barcelona: Paidós, 1997
(http://www.geoscopie.com/guide/g717opi.html).
200
“Can Museum Computer Networks Change Our Views of Knowledge?”, Museums
and Information. New Technological Horizons. Proceedings, Ottawa: Canadian Heritage
Information Network, (1992), pp. 101-108.
201
“Frontiers in Conceptual Navigation,” Knowledge Organization, Würzburg, vol. 24,
1998, n. 4, pp. 225-245.
202
“Computers and the Transformation of Knowledge,” The Challenge of Lifelong
Learning in an Era of Global Change, Couchiching Institute on Public Affairs, 1993
Conference Proceedings, Toronto, pp. 23-25. “Why Computers are Transforming the
Meaning of Education,” ED-Media and ED-Telecomm Conference, Calgary, June 1997,
ed. Tomasz Müldner, Thomas C. Reeves, Charlottesville: Association for the
Advancement of Computing in Education, 1997, vol. II, pp. 1058-1076.
203
“Thoughts on the Reorganization of Knowledge,” Automatisierung in der
Klassifikation eV, ed. Ingetraut Dahlberg (Teil I), Königswinter/Rhein, 5.-8. April 1983,
Frankfurt: Indeks Verlag, 1983, pp. 141-150 (Studien zur Klassifikation, Bd. 13, SK
13); “New Media and New Knowledge,” Proceedings of the Third Canadian Conference
on Foundations and Applications of General Science Theory: Universal Knowledge
Tools and their Applications, Ryerson, 5-8 June 1993, Toronto: Ryerson Polytechnic
University, 1993, pp. 347-358.
204
Dr. Theo Classen, “The Logarithmic Law of Usefulness”, Semiconductor
International, July 1998, pp.176-184. I am grateful to Eric Livermore (Nortel) for this
reference.
205
The definition of usefulness could readily detour into a long debate. For the purposes
of this article we shall take it in a very broad sense to mean the uses of computers in
terms of their various applications.
206
Ibid, p.184.
207
The ISO identifies seven basic layers in the telecommunications network: three which
belong to the network layer (physical, data-link, network), one which belongs to the
transport layer (transport) and a further three which belong to the user service layer
(session, presentation and application; cf. Figure 15). These seven layers have been
applied to computers. With respect to the Internet, discussions usually focus on the
bottom three layers. These seven layers focus on pipelining and, while this is of
fundamental value, it does not differentiate sufficiently the many elements on the
application side:

ISO Layer               Function
1. Network              Physical
2. Network              Data-Link
3. Network              Network
4. Transport            Transport
5. Technical Service    Session
6. Technical Service    Presentation
7. Technical Service    Application

Figure 15. The seven layers of the ISO.
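The grouping shown in Figure 15 can be expressed as a small lookup table. This sketch follows the figure's three-way grouping, not official ISO wording:

```python
# The seven ISO layers grouped as in Figure 15 (a sketch following the
# figure's three-way grouping, not official ISO wording).

ISO_LAYERS = {
    1: ("Network", "Physical"),
    2: ("Network", "Data-Link"),
    3: ("Network", "Network"),
    4: ("Transport", "Transport"),
    5: ("Technical Service", "Session"),
    6: ("Technical Service", "Presentation"),
    7: ("Technical Service", "Application"),
}

def group_of(layer):
    """Return the group (first column of Figure 15) for a layer number."""
    return ISO_LAYERS[layer][0]

print(group_of(3))  # Network
print(group_of(6))  # Technical Service
```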
208
Laurie McCarthy, Randy Stiles, “Enabling Team Training in Virtual Environments,”
Collaborative Virtual Environments ’98, Manchester, June 1998, pp. 113-121. See:
http://www.isi.edu/vet.
209
James Pycock, Kevin Palfreyman, Jen Allanson, Graham Button, “Envisaging
Collaboration: Using Virtual Environments to Articulate Requirements,” Collaborative
Virtual Environments ’98, Manchester: University of Manchester, 1998, pp. 67-79.
210
See Steve Mann at http://n1nlf-1.eecg.toronto.edu/index.html.
211
The potential problems with such responsive environments are actually quite
considerable. It is all well and good to have the television turn to channel two when A
enters the room. But what if B, who has programmed the same device to display channel
three, enters the room at the same time? What then is the decision strategy? Does it
favour the older over the younger, the owner over the guest?
212
See: http://gen.net/index.htm
213
See: http://www.npac.sgr.edu/users/gcf/asopmasterB/foilsephtmldir/001HTML.html
http://www.npac.sgr.edu/users/gcf/asopmasterB/fullhtml.html
214
See: http://ce-toolkit.crd.ge.com
215
See: http://www.interex.org/hpuxvsr/jan95/new.html#RTFToC33
216
See: http://www.anxo.com/
217
Thomas Flaig, “Work Task Analysis and Selection of Interaction Devices in Virtual
Environments,” in Virtual Worlds, ed. Jean Claude Heudin, Berlin: Springer Verlag,
1998, pp. 88-96.
218
John Mylopoulos, I. Jurisica and Eric Yu, “Computational Mechanisms for
Knowledge Organization,” in: Structures and Relations in Knowledge Organization, ed.
Widad Mustafa el Hadi, Jacques Maniez and Steven Pollitt, Würzburg: Ergon Verlag,
1998, p. 126. (Advances in Knowledge Organization, Volume 6).
219
Charles S. Peirce, Collected Papers, 5.400.
220
T. Sarvimäki, Knowledge in Interactive Practice Disciplines: An Analysis of
Knowledge in Education and Health Care. Helsinki: University of Helsinki, Department
of Education, 1988.
221
Birger Hjørland, Information Seeking and Subject Representation. An Activity
Theoretical Approach to Information Science, Westport: Greenwood Press, 1997 (New
Directions in Information Management, no. 34). He draws also on the ideas of Michael
K. Buckland, Information and Information Based Systems, New York: Greenwood, 1993.
222
L. Hjelmslev, Prolegomena to a Theory of Language, Madison: University of
Wisconsin Press, Madison, 1961.
223
Jürgen Habermas, Knowledge and Human Interests, London: Heinemann, 1972.
German original, 1965.
224
Hanne Albrechtsen and Elin K. Jacob, “The Role of Classificatory Structures as
Boundary Objects in Information Ecologies,” Structures and Relations in Knowledge
Organization, ed. Widad Mustafa el Hadi, Jacques Maniez and Steven Pollitt, Würzburg:
Ergon Verlag, 1998, pp. 1-3. (Advances in Knowledge Organization, Volume 6). Cf.
Hanne Albrechtsen, Domain Analysis for Classification of Software, M. A. Thesis,
Stockholm: Royal School of Librarianship, 1993.
225
Thomas Davenport, Information Ecology, xx.
226
Susan Leigh Star, “The Structure of ill-structured solutions: boundary objects and
heterogeneous distributed problem solving,” in: Distributed artificial intelligence, ed. By
L. Gasser and M.N. Huhns, London: Pitman, 1989.
227
John Law, Multiple Laws of Order, xx.
228
M. C. Norrie and M. Wunderli, “Coordinating System Modelling,” in: 13th
International Conference on the Entity Relationship Approach, Manchester, 1994.
229
Motschnig-Pitrik, R. and John Mylopoulos, “Classes and Instances,” International
Journal of Intelligent and Cooperative Systems, 1(1), 1992, pp. xx; John Mylopoulos, in:
Information Systems Handbook, xx:xx, 199xx; Yannis Bubenko, Conceptual Modelling,
xx:xx, 1998.
230
See: http://ricis.cl.uh.edu/virt-lib/soft-eng.html. Cf. Kevin Kelly, “One Huge
Computer,” Wired, August 1998, pp. 128-133, 168-171, 182 re: developments in JINI.
231
Peter P.S. Chen, “The Entity-Relationship Model: Towards a Unified View of Data,”
ACM Transactions on Database Systems, 1 (1), 1976, pp. 9-37. According to F. Miksa
(personal communication), this system was further developed while Chen was a professor
of computer science at Louisiana State University in 1980.
232
For some discussion of the philosophical and sometimes subjective assumptions
underlying such methods see: W. Kent, Data and Reality: Basic Assumptions in Data
Processing Reconsidered. Amsterdam: North-Holland, 1978; H.K. Klein and R.A.
Hirschheim, “A Comparative Framework of Data Modelling Paradigms and Approaches,”
The Computer Bulletin, vol. 30, no. 1, 1987, pp. 8-15 and Alan Phelan, “Database and
Knowledge Representation: The Greek Legacy,” in: Structures and Relations in
Knowledge Organization, ed. Widad Mustafa el Hadi, Jacques Maniez and Steven Pollitt,
Würzburg: Ergon Verlag, 1998, pp. 351-359. (Advances in Knowledge Organization,
Volume 6).
One might expect that librarians, whose lives are dedicated to organizing knowledge,
should be very sensitive to these problems. In fact, their professional lives are typically
spent cataloguing and dealing with materials concerning which the reality is not in
question. Each call number applies to a physical book. If there is no physical book in
evidence, then it is “because the book is missing,” which is typically “a user problem.”
Their daily work engages them in simple realism. This helps to explain why librarians
have frequently accepted and in most cases continue to accept naïve systems such as the
entity-relationship model.
233
Mandelbrot, for instance, noted how the measured length of the coast of Britain changes
as one changes scale. See: Benoit Mandelbrot, “How Long is the Coast of Britain?
Statistical Self-Similarity and Fractional Dimension,” Science, vol. 156, 1967, pp. 636-638.
These ideas were dramatically developed in his major book: Benoit B. Mandelbrot, The
Fractal Geometry of Nature, New York, NY: W. H. Freeman and Company, 1982.
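Mandelbrot's observation can be illustrated numerically: with fractal dimension D > 1, the measured length L(s) = F·s^(1-D) grows as the measuring ruler s shrinks. The values of F and D below are invented for illustration:

```python
# Illustrative sketch of the coastline paradox: measured length
# L(s) = F * s**(1 - D) increases as the ruler s gets smaller,
# for a fractal dimension D > 1. F and D are made-up values.
def coast_length(ruler_km: float, f: float = 100.0, d: float = 1.25) -> float:
    return f * ruler_km ** (1 - d)

for s in (100.0, 10.0, 1.0):
    print(s, round(coast_length(s), 1))  # length grows as the ruler shrinks
```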
234
This is being developed in the context of the Joint Photographic Experts Group (JPEG) at
http://www.jpeg.org and http://www.periphere.be/lib/jpeg.htm; and
http://www.gti.ssr.upm.es/~vadis/faq_MPEG/jpeg.html.
This is CCITT/ISO JTC1/SC2/WG10, which has the following standards:
T.80 Common components for image and communication - basic principles
T.81 Digital compression and encoding of continuous-tone still images
T.82 Progressive compression techniques for bi-level images
T.83 Compliance testing
As well as the Still Picture Interchange File Format (SPIFF), the Registration Authority
(REGAUT) and the JPEG Tiled Image Pyramid (JTIP). Their spokesperson, Jean Barda,
has developed a System of Protection for Images by Documentation iDentification and
Registration of digital files (SPIDER), which combines two important elements:
(1) a system for registering ownership over an image;
(2) metadata tags embedded within the image (header and directory) that identify
the image and its owner.
SPIDER is one of the first applications to employ SPIFF, the newly developed ISO
standard designed to supersede the JFIF/JPEG file storage format. AVELEM, the
company that developed SPIDER, has also built a system called Saisie numérique et
Consultation d'images PYRamidales (SCOPYR), i.e. digital image capture
and exploitation of pyramidal images.
See: http://www.sims.berkeley.edu/courses/is290-1/f96/Lectures/Barda/index.html.
235
Bruce Damer, Avatars!, Exploring and Building. Virtual Worlds on the Internet,
Berkeley: Peachpit Press, 1998.
236
See: http://www.cs.man.ac.uk/mig/giu/.
237
See: http://madsci.wustl.edu/~lynn/VH/
238
See: http://www.igd.fhg.de/www/igd-a7/Projects/OP2000/op2000_e.html
239
See: http://www2.igh.cnrs.fr/HUM-Genome-DB.html
240
Carol A. Bean, “The Semantics of Hierarchy: Explicit Parent-Child Relationships in
MeSH tree structures,” Structures and Relations in Knowledge Organization, ed. Widad
Mustafa el Hadi, Jacques Maniez and Steven Pollitt, Würzburg: Ergon Verlag, 1998, pp.
133-138. (Advances in Knowledge Organization, Volume 6).
241
To take a hypothetical example, suppose it was decided that a “normal” male is 6 feet
in height. Hence, the whole range of variation from small pygmies (c.3 feet) to very tall
persons (over 7 feet) would require “modification.”
242
Jennifer Mankoff, “Bringing People and Places Together with Dual Augmentation,”
Collaborative Virtual Environments, Manchester, June 1998, pp. 81-86. See:
http://www.cc.gatech.edu/fce/domisilica.
243
[email protected]
244
The above list entails actual technologies. Meanwhile, thinkers such as Heiner
Benking have been working on blended and morphed realities. Here the focus is on the
transformation of geometric representations and the linking of reality forms as above in a
coherent composite schema. On this theme see his “Cognitive Panorama.” For more
see: http://www.ceptualinstitute.com/genre/benking/metaphor-analogy-agents.htm and
http://www.u-aizu.ac.jp/CMAA/;
http://www.ceptualinstitute.com/genre/benking/m-p/meta-paradigm.htm
http://heri.cicv.fr/council/
245
For an alternative and more subtle classification see: Didier Verna, Alain Grumbach,
“Can we Define Virtual Reality? The MRIC Model”, Virtual Worlds ’98, ed. J.C.
Heudin, Berlin: Springer Verlag, 1998, pp. 29-41.
246
Gianpaolo U. Carraro, John T. Edmark, J. Robert Ensor, “Pop-Out Videos,” in:
Virtual Worlds, Berlin: Springer Verlag, pp. 123-128. The authors distinguish between
two types: integrated video spaces and complementary videos. In an integrated
video space, a CAD space and a video space are merged. In a complementary video, one
might be watching a golf player whose swing interests one; a CAD version of the player
would allow one to view, from all directions in the model, a person who had been merely
two-dimensional in the video image. See also: Gianpaolo U. Carraro, John T. Edmark,
J. Robert Ensor, “Techniques for handling Video in Virtual Environments,” SIGGRAPH
98, Computer Graphics Proceedings, Annual Conference Series, New York, 1998, pp.
353-360.
247
For a more balanced assessment see: William Mitchell, The Reconfigured Eye,
Cambridge, Mass.: MIT Press, 1992.
248
Yaneer Bar-Yam, Dynamics of Complex Systems (Studies in Nonlinearity), Reading,
MA: Perseus Press, 1997. Cf. http://www.necsi.org/mclemens/viscss.html. For a rather
different approach to complex systems see: John L. Casti, Would-Be Worlds. How
Simulation is Changing the Frontiers of Science, New York: John Wiley and Sons, 1997.
249
See: http://www.necsi.org/html/complex.html.
250
This is not to say of course that there cannot be a history of science and technology.
There definitely is and it is essential that we continue to foster awareness of that history.
Without a clear notion of the steps required to reach our present machines for working in
the world and models for understanding that world, it would be all but impossible to
understand many aspects of present day science, and we would be in sore danger of
constantly re-inventing the wheel.
251
For a brief history see the excellent study by William Ivins, Jr., Prints and Visual
Communication, Cambridge, Mass.: Harvard University Press, 1953.
252
See André Chastel, Le grand atelier de l’Italie, Paris: Gallimard, 1965.
253
Benjamin R. Barber, Jihad vs. McWorld, New York: Times Books, 1995.
254
It may be true that the masterpieces of art also represent a selection from the many
particulars, but the masterpieces are not generalizations of the rest: they remain
individuals per se.
255
I owe this distinction to my colleague and friend André Corboz, who notes that
although sculpture and architecture are static in themselves, they require motion on the
part of the observer in order to be seen completely from a number of viewpoints.
256
There are of course histories of these dynamic subjects but their contents are limited
to verbal descriptions and give no idea of the richness of performances. In the case of
dance there have been some attempts to devise printed notations, which can serve as
summaries of the steps involved. In the case of music there are of course recordings.
More recently there are also films and videos to cover performances of dance and theatre.
257
China, Japan and India also had rich traditions of theatre and dance which, for the
reasons being discussed, were typically ignored until quite recently.
258
Victor Mair, Painting and Performance, Honolulu: University of Hawaii Press, 1988.
On this topic I am grateful to Niranjan Rajah, who gave a lecture, “Towards a Universal
Theory of Convergence. Transcending the Technocentric View of the Multimedia
Revolution,” at the Internet Society Summit, Geneva, July 1998 (see
[email protected]).
259
The Metadata Coalition at http://www.metadat.org is a group of 50 software vendors
and users, including Microsoft, with a 7-member council that has voted to support the
Microsoft Repository Metadata Interchange Specification (MDIS) at
http://www.he.net/~metadata/papers/intro97.html.
260
This includes Arbor, Business Objects, Cognos, ETI, Platinum Tech, and Texas
Instruments (TI). See: http://www.cutech.com/newmeta.html.
261
See: http://environment.gov.au/newsletter/n25/metadata.html
262
For basic articles on meta-data see: Francis Bretherton, “A Reference Model for
Metadata” at
http://www.hensa.ac.uk/tools/www.iafatools/references/whitepaper/whitepaper.bretherton.html;
“WWW meta-indexes” at http://www.dlr.de/search-center-meta.html and Larry
Kirschberg, “Meta World: A Quality of Service Based Active Information Repository,
Active Data Knowledge Dictionary” at http://isse.gmu.edu/faculty/kersch/Vitafolder/index.html.
For books see: Computing and Communications in the Extreme.
Research for Crisis Management and Other Applications at
http://www.nap.edu/readingroom/books/extreme/chap2.html.
263
For basic definitions of metadata See: http://204.254.77.2/bulletinsuk/212e_1a6.htm
and the Klamath Metadata Dictionary at
http://badger.state.wi.us/agencies/wlib/sco/metatool/kmdd.htm. A basic taxonomy of
metadata is available at http://www.1bl.gov/~olken/EPA/Workshop/taxonomy.html. See
also the CERA metadata model at
http://www.dkrz.de/forschung/reports/reports/CERA.book.html.
264
These are defined in the Generic Top Level Domain Memorandum of Understanding
at http://www.gtld_mou.org/ and include:
Two letter country codes (ISO 3166)
Generic Top Level Domains (gTLD): com, net, org; new: firm, shop, web, arts, rec, info, nom
International Top Level Domain (iTLD): int
Special US only: gov, mil, edu
Internal Systems: arpa
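The categories above can be sketched as a small classifier; a minimal Python illustration (the category labels follow the list above; hostnames are examples only):

```python
# Classify a hostname's top-level domain into the categories of the
# Generic Top Level Domain Memorandum of Understanding listed above.
GTLD = {"com", "net", "org", "firm", "shop", "web", "arts", "rec", "info", "nom"}
US_ONLY = {"gov", "mil", "edu"}

def classify(hostname: str) -> str:
    tld = hostname.rsplit(".", 1)[-1].lower()
    if tld in GTLD:
        return "generic"
    if tld == "int":
        return "international"
    if tld in US_ONLY:
        return "US only"
    if tld == "arpa":
        return "internal"
    if len(tld) == 2:
        return "country code (ISO 3166)"
    return "unknown"

print(classify("www.uia.org"), classify("lcweb.loc.gov"), classify("www.iso.ch"))
```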
265
The URL began as a simple resource locator for basic internet protocols, such as:
file
gopher
http
news
telnet
Figure 16. Basic categories covered by URLs.
A more universal approach to resource location is foreseen in the evolving
Uniform Resource Identifiers (URI) as listed in note 95.
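The protocol (scheme) component of such locators can be extracted directly with Python's standard library; a small sketch using urllib.parse (the sample URLs are illustrative):

```python
# Pull the protocol (scheme) out of a URL, matching the basic
# categories in Figure 16.
from urllib.parse import urlparse

def scheme_of(url: str) -> str:
    return urlparse(url).scheme

for url in ("http://www.iso.ch/", "gopher://gopher.example/", "telnet://host"):
    print(scheme_of(url))
```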
266
See: http://www.iso.ch/cate/d6898.html
267
See: http://www.iso.ch/cate/d18931.html
268
See: http://www.iso.ch.iso/cate/d18506.html
269
See: http://www.acl.lanl.gov/URN/FPI-URN.html
270
See: http://www.issn.org
271
See: http://www.cisac.org/eng/news/digi/ensymp972.htm
272
In draft in ISO TC46/SC9.
273
This includes specification of ISRC related metadata which is linked with MUSE, an
EC funded initiative of the record industry which is due to announce (c. October 1998) a
secure means for encoding and protecting identifiers within digital audio.
274
See: http://www.tlcdelivers.com/tlc/crs/Bib0670.htm
275
See: http://www.elsevier.nl/inca/homepage/about/pii. On these identifiers from the
publishing
world
see
an
article
by
Norman
Paskin
at
http://www.elsevier.co.jp/inca/homepage/about/infoident.
276
See: http://www.handle.net/doi/announce.html. Cf.
http://pubs.acs.org/journals/pubiden.html and http://www.doi.org. This began in the
book and electronic publishing industry but is attracting wider membership.
277
See: http://purl.oclc.org
278
See: http://www.cs.princeton.edu/~burchard/gfx/bg.marble.gif
279
Concerning HTML 3 see: http://vancouver-webpages.com/Vwbot/mk-metas.html.
Meta Tags include:
Title
Description
Keywords
Subject
Creator
Publisher
Contributor
Coverage: Place Name
Coverage: xyz
Owner
Expires
Robots
Object Type
Rating
Revisit.
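Such tags can be read programmatically; a minimal sketch using only Python's standard library (the sample page is invented):

```python
# Collect <meta name="..." content="..."> tags, such as those listed
# above, from an HTML page header.
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

page = ('<html><head><meta name="Keywords" content="metadata, HTML">'
        '<meta name="Description" content="demo page"></head></html>')
collector = MetaCollector()
collector.feed(page)
print(collector.meta)
```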
280
See: http://hegel.ittc.ukans.edu/topics/internet/internet-drafts/draft-l/draft-leach-uuids-guids-00.txt.
Cf. http://www.ics.uci.edu/~ejw/authoring/rd_tri.gif.
281
See: http://www.hensa.ac.uk/tools/www.iafatools/slides/01.html. This is being
developed by the Internet Anonymous FTP Archives Working Group, whose templates
on Internet Data are based on whois++ and include:
URI
File System
Contents
Author
Site Administrator
Another Metadata Format.
This model is being applied to ROADS.
282
See: http://www.dbr/~greving/harvest_user_manual/node42.html
283
The purposes of MCF are:
1. Describe structure of website or set of channels
2. Threading e-mail
3. Personal Information Management (PIM) functions
4. Distributed annotation and authoring
5. Exchanging commerce related information such as prices, inventories and
delivery dates.
It will use a Directed Linked Graph which contains:
URL (String)
E-mail
Author (Person)
Size (Integer)
It will also use Distribution and Replication Protocol (DRP) developed by Netscape and
Marimba.
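The typed, directed linked graph described above can be sketched as a small data structure; the class and property names here are illustrative only, not actual MCF syntax:

```python
# A node in a typed, directed linked graph: each property name carries
# a declared value type (URL -> String, Author -> Person, Size -> Integer).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    # property name -> (declared type, value)
    properties: dict = field(default_factory=dict)

page = Node("home-page")
page.properties["URL"] = ("String", "http://example.org/")
page.properties["Author"] = ("Person", Node("R.V. Guha"))
page.properties["Size"] = ("Integer", 2048)

print(page.properties["Size"])
```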
The MCF began at Apple Computers. See: R.V. Guha, “Meta Content Framework,”
Apple Technical Report 167, Cupertino, March 1997 at
http://mcf.research.apple.com/mcf.html. Guha then moved to Netscape and developed the
Meta Content Framework with Tim Bray of Textuality. See:
http://www.textuality.com/mcf/NOTE-MCF-XML.html.
284
Web Collections will include:
Web Maps
HTML e-mail Threading
PIM Functions
Scheduling
Content Labelling
Distributed Authoring
It uses XML to provide hierarchical structure for this data. See:
http://www.w3.org/TR?NOTE-XML.submit.html.
285
For IFLA metadata See: http://www.nlc-bnc.ca/ifla/II/metadata.htm.
286
International Standard Bibliographic Description (ISBD) has eight basic features:
1. Title and Statement of Responsibility Area
2. Edition Area
3. Material (or type of publication) specific Area
4. Publication, Distribution, etc. Area
5. Physical Description Area
6. Series Area
7. Note Area
8. Standard Number (or alternative) and terms of availability
It should be noted that ISBD has a series of other divisions, namely:
Antiquarian  ISBD (A)
Monographs  ISBD (M)
Non Book Materials  ISBD (NBM)
Printed Music  ISBD (PM)
Serials  ISBD (S).
287
See: http://omni.nott.ac.uk
288
See: http://lcweb.loc.gov/marc
289
The MARC/UNIMARC format uses ISO Z39.50. It is applied to OCLC’s Netfirst.
There are plans to link this with a URC to create a MARC URC. The MARC record
comes in many variants including:
Machine Readable Record of Bibl. Info.  MARBI
US MARC  USMARC
UK MARC  UKMARC
UNI MARC  UNIMARC
INTER MARC  INTERMARC
Canadian MARC  CANMARC
Danish MARC  DANMARC
Finnish MARC  FINMARC
Libris MARC  LIBRISMARC
Norwegian MARC  NORMARC
South African MARC  SAMARC
Iberian (i.e. Spanish) MARC  IBERMARC
290
See Bernhard Eversberg, Was sind und was sollen Bibliothekarische Datenformate.
Braunschweig: Universitätsbibliothek der TU, 1994. See: http://ubsun01.biblio.etc.tubs.de/acwww25/formate/formate.html.
291
The Association for Library Collections and Technical Services (ALCTS:DA) has a
Committee on Cataloging Description and Access at http://www.lib.virginia.edu/ccda and
is engaged in mapping of SGML to MARC and conversely. See:
http://darkwing.uoregon.edu/mnwation/ccdapage/index.html.
292
See: http://www.loc.gov/rr/ead/eadhome.html. Berkeley is also involved in EAD with
a view to creating UEAD URC. See: http://sunsite.Berkeley.EDU/ead.
293
Anne Gilliland-Swetland, “Defining Metadata,” in: Introduction to Metadata.
Pathways to Digital Information, ed. Murtha Baca, Los Angeles: Getty Information
Institute, 1998, pp. 1-8. The author also lists eight attributes of metadata: source of
metadata, method of metadata creation, nature of metadata, status, structure, semantics
and level. Also in this booklet is a useful attempt to map between some of the major
metadata standards: Categories for the Description of Works of Art (CDWA), Object ID,
the Consortium for the Interchange of Museum Information (CIMI), Foundation for
Documents of Architecture/Architectural Drawings Advisory Group (FDA/ADAG),
Museum Educational Site Licensing (MESL) project, Visual Resources Sharing
Information Online (VISION), Record Export for Art and Cultural Heritage (REACH),
United States Machine Readable Cataloging (US MARC) and the Dublin Core. While
providing an excellent survey of existing efforts towards standards in the United States,
this list does not reflect a comprehensive picture of efforts around the world.
294
See: http://nt.comtec.co.kr/doc/uri/urc-s/html/scenarios_2.html
295
Chris Weider
Bunyip Information Systems
2001 S. Huron Parkway
Ann Arbor Michigan 48104
USA
1-313-971-2223.
296
See: http://whirligig.ecs.soton.ac.uk/~ng94/project/names/urndefl.htm
297
See: http://www.acl.lanl.gov/URC/
298
It is not surprising that major projects plan to link with Uniform
Resource Characteristics (URC), notably:
IAFA Templates  (IAFA URC)
Machine Readable Record  (MARC URC)
Encoded Archival Description  (EAD URC)
Text Encoding Initiative  (TEI URC)
Consortium for Interchange of Museum Information  (CIMI URC).
299
This uses whois++ for URN resolution; relies on DNS; element ID = Naming
Authority; uses whois++ for resource identification. It is similar to IAFA templates, but
these have different contents depending on the type of object.
URC Method: Whois++ URC + dns mapping
Uniform Resource Name (URN)
Uniform Resource Locator (URL)
Location Independent File Name (LIFN)
Author: Author Name
Title: Resource Title
Abstract: Short Description
HTTP headers: Content Length, Content Type, Content Language
See: Weider, Mitra, M. Mealling, “Uniform Resource Names (URN)”, Internet Draft
(work in progress), IETF, November 1994.
See: ftp://ds.internic.net/internet-drafts/draft-ietf-uri-resource-names-03.txt.
300
This uses http for URN resolution, relies on DNS, separates Element ID into:
Authoring ID and Request ID. Browsers are encouraged to support URCs returned as
HTML or plain text. Associated URC format used to return results of a URN lookup in a
form suitable for automatic processing.
URC Method: Trivial URC +x-dns-2 mapping
ftp://………..
Abstract……….
ftp: mirror site
ftp: “
“
ftp: “
“
Language
Character
Uses http.
Cf. Ron Daniel, Los Alamos, IETF Draft 1995. Paul E. Hoffman, Ron Daniel, “Trivial
URC Syntax: urc0”, Internet Draft (work in progress), IETF, May 1995. See:
ftp://ds.internic.net/internet-drafts/draft-ietf-uri-urc-trivial-00.txt
301
Ron Daniel, T. Allen, “An SGML based URC service”. Internet Draft
(work in progress), IETF, June 1995. See: ftp://ds.internic.net/internet-drafts/draft-ietfuri-urc-sgml-00.txt
302
Daniel LaLiberte and Michael Shapiro, IETF draft 1995. See:
http://www.hypernews.org/~liberte/www/path.html.
Other competing solutions to this challenge described by Martin Hamilton entail
directory services such as SDP or SOLO.
303
See: http://www.dstc.edu.au/RDU/Apweb96/index.html
304
See: http://www.dstc.edu.au/RDU/TURNIP
305
See: http://me-www.jrc.it/~dirkx/ewse-urn-turnip.html. This is headed by Dirk-Willem van Gulik:
[email protected].
306
See: http://archie.mcgill.ca/research/papers/
307
See: http://archie.mcgill.ca/research/papers/1995/uradraft.txt
308
See: http://www.hensa.ac.uk/tools/www.iafatools/slides/01.html
309
See: http://ds.internic.net/ietf/iiir/iiir-character.tx
310
See: http://www.imc.org/ietf-calendar/stif.txt
311
RFC 1625  WAIS over Z39.50:1988
RFC 1729  Z39.50 in TCP/IP (Lynch)
RFC 1728  Resource Transponders (Weider)
RFC 1727  Integrated Internet Information Service.
312
See: http://services.bunyip.com:8000/research/papers/1996/cip/cip.html;
cf. http://www.ietf.org/html.charters/find-charter.html
313
See: http://ds.internic.net/rfc/rfc2244.txt
314
See: http://www.ics.uci.edu/~ejw/authoring/
315
See: http://csvu.nl/~eliens/wwww5/papers/Meta.html
316
See: http://www.oclc.org:5046/~weibel/html-meta.html
317
The list of resources called by the URI includes the following:
about
callto
content id (cid)
cisid
data
file
finger
file transfer protocol (ftp)
gopher
CNRI handle system (hndl)
hyper text transfer protocol (http)
hyper text transfer protocol over secure sockets layer (https)
inter language unification (ilu)
internet mail protocol (imap)
Internet Object Request (IOR)
internet relay chat (irc)
java
javascript
java database connectivity (jdbc)
lightweight directory application protocol (ldap)
location independent file name (lifn)
livescript
mailto
mailserver
md5
message id (mid)
mocha
Network File System (NFS)
network news transport protocol (nntp)
path
phone
prospero
rwhois
rx
Short Message Service (SMS)
Session Initiation Protocol (SIP)
Secure hyper text transfer protocol (shttp)
Stable Network Filenames (STANF)
telnet
tv
Enhanced Man Machine Interface for Videotex and Multimedia (VEMMI)
videotex
view-source
wide area information servers (wais)
whois++
whodp
z39.50r
z39.50s
For a full list See: http://www.w3.org/Addressing/schemes.
318
Standard Generalized Markup Language (SGML), sometimes loosely called Structured
Graphic Markup Language. This grew out of IBM’s Generalized Markup Language
(GML) and the GenCode of the Graphic Communications Association (GCA). There
have been projects to map SGML to MARC
as mentioned in note 84 above.
319
HyperText Markup Language (HTML) is now under ISO/IEC JTC1/SC 18/WG8
N1898. See: http://www.oml.gov/sgml/wg8/document/1898.html. Specialized versions
include compact HTML for Small Information Appliances at
http://207.82.250.251/cgi-bin/start. Attempts to expand the scope of HTML have led to
Simple HTML Ontology Extensions (SHOE) at http://www.cs.umd.edu/projects/plus/SHOE
and a Dictionary of HTML Meta Tags.
See: http://vancouver-webpages.com/META/metatags.detail.html.
320
See: http://www.cogsci.ed.ac.uk/~ht/new-xml-link.html
As is so frequently the case, Microsoft has copied the ideas and created its own
proprietary versions:
Microsoft Extensible Markup Language (XML)
XML based data transfer (Microsoft XML)
Extensible Style Language (XSL)
Microsoft Channel Description Format (CDF).
For basic literature concerning XML see: Jon Bosak, “XML, Java and the Future of the
Web” at http://sunsite.unc.edu/pub/sun/info/standards/xml/why/xmlapps.htm; Rohit
Khare, “XML. The Least you need to Know” at
http://www.cs.caltech.edu/~adam/papers/xml/tutorial and Richard Light, Presenting
XML, S. Net, August 1997. For XML Software Tools
see: http://www.cs.caltech.edu/~adam/local/xml.html.
For Java Object Stream to XML Packages
see: http://www.camb.opengroup.org/~laforge/jsxml/. For a Lark Non-Validating XML
Parser in Java see: http://www.textuality.com/Lark/. XML links with the Document
Object Model. There is also an XML API in Java (XAPI-J) and XML Typing.
Cf. Extensible Hyper Language (EHL) at http://www.cogsci.ed.ac.uk/~ht/new-xml-link.html.
321
For a second version of Cascading Style Sheets no. 2 (CSS2) See:
http://207.82.250.251/cgi-bin/start
322
This is being developed into Extensible Style-Document Style Semantics and
Specification Language (XS-DSSSL).
323
See: http://207.82.250.251/cgi-bin/start
324
Schema of key architecture elements in the W3 plan for linking XML with other
elements described at http://www.w3.org/TR/NOT-rdfarch: XML application, RDF
application, PICS 2.0, P3P, RDF-semantics, XML-structure.
325
See: http://www.w3.org/PICS/NG
326
See: http://www.w3.org/Talks/9707/Metadata/slide8.htm
327
See: http://www.w3.org/TR/WD-rdf-syntax
328
This includes:
Document
Element
Attribute
Text
Comment
Processing Instruction (PI).
See: http://www.w3.org/DOM.
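These node kinds can be observed directly with Python's standard-library DOM implementation; a short sketch on an invented XML fragment:

```python
# The Document Object Model node kinds listed above (document, element,
# attribute, text, comment, processing instruction) in one small fragment.
from xml.dom.minidom import parseString

doc = parseString('<?xml version="1.0"?>'
                  '<root id="r1"><!-- a comment -->'
                  '<?target data?>hello</root>')
root = doc.documentElement
print(doc.nodeType == doc.DOCUMENT_NODE)    # the document node
print(root.nodeType == root.ELEMENT_NODE)   # an element node
print(root.getAttributeNode("id").value)    # an attribute node's value
# The element's children are a comment, a processing instruction and text.
kinds = sorted({child.nodeType for child in root.childNodes})
print(kinds)
```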
329
RDMF entails:
RD Retrieval
RD Submission
Server Description
Schema Description
Taxonomy Description
Status Retrieval.
See: http://www.w3.org/TR/NOTE-rdm.html
330
Some observers such as Khare propose a different way of looking at the roles of the
different markup languages:
Syntax  SGML
Style  CSS/XSL
Structure  HTML
Semantics  XML.
331
The following are Basic W3 Metadata Plans as of May 1998:
Metadata Syntax Specification (RDF)
Language for RDF schemata
Language for expressing filters (simple boolean functions of) RDF
Algorithm for canonicalizing an RDF expression for digital signature
Syntax for digitally signing RDF expressions
Vocabulary for expressing PICS labels in RDF and a
Conversion algorithm from PICS 1.1.
cf. http://www.ics.forth.gr/ICS/acti/netgroup/documents/TINAC/
A fifth method of URN to URL Mapping (Resource Discovery) has been
developed by ARPA, called Handle, which uses a Repository Access Protocol (RAP).
ARPA has also been very active in the development of an ARPA Knowledge
Sharing Effort (ARPA KSE) (See: http://www.cs.umbc.edu/agents/kse.shtml), which
entails both a Knowledge Query and Manipulation Language (KQML; see:
http://cs.umbc.edu/kqml) and a Knowledge Interchange Format (KIF; see:
http://logic.stanford.edu/kif/kif.html).
332
See: http://renki.helsinki.fi/z3950/3950pr.html. There is a great deal of information
available on Z39.50. For the Maintenance Agency homepage see:
http://lcweb.loc.gov/z3950/agency/. Hosts available for testing are at
http://lcweb.loc.gov/z3950/agency/objects/iso-pub.html; Implementor Agreements at
http://lcweb.loc.gov/z3950/agency/objects/agree.html; Interoperability Testing at
http://lcweb.loc.gov/z3950/agency/objects/testbed.html; Implementors Register at
http://lcweb.loc.gov/z3950/agency/register.html.
In addition the National Institute of Standards and Technology (NIST) has
Implementation Papers at http://lcweb.loc.gov/z3950/agency/nist.html; Object Identifiers
at http://lcweb.loc.gov/z3950/agency/iso-pub.html; Registered objects and other
definitions at http://lcweb.loc.gov/z3950/agency/objects.html; SQL Extensions at
http://www.dstc.edu.au/DDU/projects/ZINC/proposal3.ps; Version 4 Development at
http://lcweb.loc.gov/z3950/agency/version4.html. There is both a Z39.50 Implementors
Group (ZIG) and a Z39.50 Users Group (ZUG). Profiles are available at
http://lcweb.loc.gov/z3950/agency/profiles/about.html; in an Internet environment at
ftp://ds.internic.net/rfc/frc1729.txt; Access to digital collections at
http://lcweb.loc.gov/z3950/agency/profiles/collections.html; Access to digital library
objects at http://lcweb.loc.gov/z3950/agency/profiles/dl.html; and a CIMI Profile at
http://lcweb.loc.gov/z3950/agency/profiles/cimi2.html.
333
See: http://lcweb.loc.gov/z3950/agency/1992doc.html
334
See: http://lcweb.loc.gov/z3950/agency/1995doc.html
335
See: http://www.cni.org/pub/NISO/docs/Z39.50-1992/www/50.brochure.toc.html
336
This uses 63 attributes including:
Personal Name
Conference Name
Title
Title Uniform
ISBN
ISSN
LC Card Number
Relation
Position
Truncation
Structure
Completeness.
337
See: http://www.oclc.org:5046/oclc/research/conferences/metadata
http://purl.oclc.org/net/eric/DC/syntax/metadata.syntax.html
338
See: the Dublin Core Home Page at http://purl.org/metadata/dublin_core. The Meta2
Archive by thread is at http://www.roads.lut.ac.uk/lists/meta2/. Important contacts are Dr.
Clifford Lynch at http://www.sciam.com/0397/issue/0397lynch.html; Stuart Weibel at
http://www.dlib.org/dlib/July95/07/weibel.html and John Kunze.
339
See: http://www.roads.lut.ac.uk/Metadata/DC-SubElements
340
See: http://www.oclc.org:5046/oclc/research/conferences/metadata2/
http://wwwblib.org/dlib/july96/07weibel.html
341
See: http://www.oclc.org:5046/conferences/imagemeta/index.html.
This problem has also been pursued elsewhere by Clifford Lynch
cf. http://www.sciam.com/0397/issue/0397lynch.html) through the Coalition of
Networked Information (CNI), which held a Metadata Workshop on Networked Images.
See: http://purl.oclc.org/metadata/image. For other projects on image metadata
see: http://www.dlib.org/dlib/january97/oclc/01weibel.html.
342
See: http://www.dstc.edu.au/DC4
343
See: http://www.linnea.helsinki.fi/meta/DC5.html.
cf. http://www.ariadne.ac.uk/issue12/metadata.
344
Jon Knight, “Will Dublin form the Apple Core?” at
http://www.ariadne.ac.uk/issue7/mcf.
345
Carl Lagoze, Clifford Lynch, Ron Daniel, Jr., “The Warwick Framework: A Container
for Aggregating Sets of Metadata.”
See: http://cs-tr.cs.cornell.edu/~lagoze/container.html:
[Diagrams: a Container holding Packages — a Dublin Core package, a MARC Record
package, and a package with an indirect reference to Terms and Conditions; and a
Container relationship linking a Relationship Package, a Content Package (Dublin Core
Record, Access Core List, URN) and another Package (MARC Record).]
Figures 17-18. Two further diagrams showing possible ways of integrating Dublin Core
Elements with those of MARC records and other standard library formats.
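The container-and-package idea in Figures 17-18 can be sketched as a simple aggregation; the field names below are illustrative, not the normative Warwick Framework syntax:

```python
# A Warwick-style container: typed metadata packages (e.g. a Dublin Core
# record alongside a MARC record) aggregated for one resource.
container = [
    {"type": "Dublin Core",
     "record": {"Title": "Frontiers in Conceptual Navigation"}},
    {"type": "MARC",
     "record": {"245": "Frontiers in Conceptual Navigation"}},
    {"type": "Terms and Conditions",
     "record": {"access": "open"}},
]

def packages_of_type(container, kind):
    """Return the records of every package of the requested type."""
    return [p["record"] for p in container if p["type"] == kind]

print(packages_of_type(container, "MARC"))
```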
346
See: http://harvest.transarc.com
347
See: http://www.dbr/~greving/harvest_user_manual/node42.html. This was developed
by Michael Schwartz.
348
See: http://www.ukoln.ac.uk/metadata/DESIRE/overview/rev_02.htm
349
See: http://www.ukoln.ac.uk/metadata/DESIRE/overview
350
Attribute List
Abstract
Author
Description
File Size
Full Text
Gatherer Host
Gatherer Name
Gatherer Port
Gatherer Version
Update Time
Keywords
Last Modification Time
MD5 16 byte checksum of object
Refresh rate
Time to Live
Title
351
Harvest Template Types:
Archive
Audio
FAQ
HTML
Mail
Tcl
Troff
Waissource.
352
Other harvest features include:
Identifier
Value
Value Size
Delimiter
URL References
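A Harvest gatherer serializes such attributes in SOIF records; the sketch below approximates that "Attribute{size}: value" form and is not the normative Harvest syntax (the sample values are invented):

```python
# Serialize Harvest-style attributes (see the attribute list above) in an
# approximation of the SOIF "Attribute{size}: value" record form.
def soif(template_type, url, attributes):
    lines = [f"@{template_type} {{ {url}"]
    for name, value in attributes.items():
        # Each attribute carries the byte length of its value.
        lines.append(f"{name}{{{len(value)}}}:\t{value}")
    lines.append("}")
    return "\n".join(lines)

record = soif("HTML", "http://www.ipl.org",
              {"Title": "Internet Public Library", "Author": "unknown"})
print(record)
```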
353
For insight into the British Library’s efforts see: Towards the Digital Library. The
British Library’s Initiatives for Access Programme, ed. Leona Carpenter, Simon Shaw,
Andrew Prescott, London: The British Library, 1998. Readers are also referred to the
library’s publication: Initiatives for Access. News.
354
See: http://www2.cornell.edu/lagoze/talks/austalk/sld014.htm
355
See: http://www.ipl.org
356
See: http://www.dstc.edu.au/RDU/pres/nat-md/
357
See: http://www.dstc.edu.au/RDU/pres/www5
358
URC Data includes:
Title
Author
Identifier
Relation
Language.
359
See: http://ruby.omg.org/index.htm
360
See: http://www.omg.org/corbserver/relation.pdf
361
This is connected with the Inter-Language Unification (ILU) project of Xerox PARC
at http://parcftp.parc.xerox/pub/ilu/ilu.htm, which is producing an Interface Specification
Language (ISL). It is not to be confused with Interactive Data Language (IDL). See:
http://ftp.avl.umd.edu/pkgs/idl.html.
362
See: http://www.omg.org/docs/telecom/97-01-01.txt
Cf. http://www.igd.fhg.de/www/igd-a2/conferences/cfp_tina97.html
363
See: http://www.he.net/~metadata/papers/intro97.html
364
See: http://mnemosyne.itc.it:1024/ontology.html.
Cf. http://wwwksl.stanford.edu/kst/wahat-is-an-ontology.html.
365
See: http://www.infospheres.caltech.edu
366
See: http://viu.eng.rpi.edu
367
See: http://viu.eng.rpi.edu/viu.html
368
See: http://viu.eng.rpi.edu/IBMS.html
369
See: http://www.parc.xerox.com/spl/projects/mops
370
See: http://dri.cornell.edu/Public/morgenstern/MetaData.html
cf. http://dri.cornell.edu/pub/morgenstern/slides/slides.html
371
See: http://www2.infoseek.com/
372
See: http://www.intellidex.com. Cf. John Foley, “Meta Data Alliance”, Information
Week, Manhasset, NY, January 27 1997, p. 110.
373
See: http://www.carleton.com/metacnt1.html
374
See: http://localweb.nexor.co.uk
375
For a further list of software see: http://www.ukoln.ac.uk/metadata/software-tools.
For a list of tools see: http://badger.state.wi.us/agencies/ulib/sco/metatool/mtools.htm.
376
See: http://www.pastel.be/mundaneum/. Cf. W. Boyd Rayward, The Universe of
Information. The Work of Paul Otlet for the Documentation and International
Organisation, Moscow, 1975.
377
Based on its French name: Fédération Internationale de la Documentation
378
Based on its French name: Union Internationale des Associations. See:
http://www.uia.org.
379
UNISIST. Synopsis of the Feasibility Study on a World Science Information System,
Paris: UNESCO, 1971, p. xiii.
380
http://www.unesco.org/webworld/council/council.htm
381
See: http://www.icpsr.umich.edu/DDI. Cf. the Association for Information and Image
Management International (AIIMI) which organizes the Document Management Alliance
at http://www.aiim.org/dma.
Cf. also the European Computer Manufacturers Association (ECMA), which has
produced the Script Language Specification (ECMA 262) at
http://www.ecma.ch/index.htm and http://www.ecma.ch/standard.htm.
382
See: http://www.sdsc.edu/SDSC/Metacenter/MetaVis.html#3 which provides
electronic addresses for all of the above.
383
See: http://www.ru/gisa/english/cssitr/format/ISO8211.htm
384
See: http://www2.echo.lu/oii/en/gis.html#IHO
385
See: http://www2.echo.lu/oii/en/gis.html#ISO15046
386
See: http://www2.echo.lu/oii/en/gis.html#ISO6709
387
See: http://www.stalk.art.no/isotc211/welcome.html
388
See: http://cesgi1.city.ac.uk/figtree/plan/c3.html.
ISO/TC 211:
WG 1. Framework and reference model
WG 2. Geospatial data models and operators
WG 3. Geospatial data administration
WG 4. Geospatial Services
WG 5. Profiles and Functional Standards.
389
See: http://www2.echo.lu/oii/en/gis.html#ITRF
390
See: http://ilm425.nlh.no/gis/cen.tc287
391
See: http://www2.echo.lu/impact/oii/gis.html#GDF
392
See: http://www2.echo.lu/vii/en/gis.html#GDF
393
See: file://waisvarsa.er.usgs.gov/wais/docs/ASTMmeta83194.txt
394
See: http://sdts.er.usgs.gov/sdts/mcmcweb.er.usgs.gov/sdb
395
See: http://fgdc.er.usgs.gov/metaover.html
396
See: http://geochange.er.usgs.gov/pub/tools/metadata/standard/metadata.html
cf. http://www.geo.ed.ac.uk/~anp/metaindex.htm
397
Michael Potmesil, “Maps Alive: Viewing Geospatial Information on the
WWW,” Sixth WWW Conference, Santa Clara, April 1997, TEC 153, pp.1-14. See:
http://www6.nttlabs.com/HyperNews/get/PAPER/30.html
398
See: http://www.research.ibm.com/research/press
399
See: http://ds.internic.net/z3950/z3950.html which provides a list of available
implementations.
400
See: http://www.grid.unep.no/center.htm
401
See: http://gelos.ceo.org/free/TWG/metainofrmation.html
402
See: http://info.er.usgs.gov/gils. Cf. Eliot Christian, “GILS. Where is it? Where is it
going?” See: http://www.dlib.org/dlib/december96/12christian.html.
403
See: http://www.wcmc.org.uk/
404
See: http://www.erin.gov.au/general/discussion-groups/ozmeta-l/index.html
405
See: http://www.epa.gov/edu
406
On Harmonization of Environmental Measurement see: Keune, H., Murray, A. B.,
Benking, H., in: GeoJournal, vol. 23, no. 3, March 1991, pp. 149-255, available online at:
http://www.ceptualinstitute.com/genre/benking/harmonization/harmonization.htm
On Access and Assimilation: Pivotal Environmental Information Challenges,
See: GeoJournal, vol. 26, no. 3, March 1992, pp. 323-334 at:
http://www.ceptualinstitute.com/genre/benking/aa/acc&assim.htm
407
See: http://www.lbl.gov/~olken/epa.html
408
See: http://www.llnl.gov/liv_comp/metadata/metadata.html
409
See: http://www.psc.edu/Metacenter/MetacenterHome.html
410
See: http://www.khoral.com/plain/home.html
411
See: http://www.nbs.gov/nbii
412
See: http://www.nbii.gov/
413
See: http://www.cs.mu.oz.au/research/icaris/bsr.html
414
See: http://www.cs.mu.oz.au/research/icaris/beacon.html
415
See: http://www.sdsc.edu/SDSC/Metacenter
416
See: http://www.usgs.gov/gils/prof_v2html#core
417
See: http://jetta.ncsl.nist.gov/metadata
418
See: http://www.oberlin.edu/~art/vra/core.html
419
See: http://www.unesco.org/webworld/telematics/uncstd.htm
420
See: http://www2.echo.lu/oii/en/library.html
421
See: http://www.mpib-berlin.mpg.de/dok/metadata/gmr/gmrwkdel.htm
422
See: http://www.ukoln.ac.uk/metadata/
http://www.ukoln.ac.uk/metadata/interoperability
http://www.ukoln.ac.uk/dlib/dlib/july96/07dempsey.html
http://www.ukoln.ac.uk/ariadne/issue5/metadata-masses/
423
See: http://ahds.ac.uk/manage/proposal.html#summary
424
See: http://www.ukoln.ac/metadata/
425
See: http://www.ukoln.ac.uk/metadata/interoperability
426
See: http://www.roads.lut.ac.uk/Reports/arch/
427
See: http://omni.nott.ac.uk
428
Godfrey Rust, “Metadata: The Right Approach. An Integrated Model for Descriptive
and Rights Metadata in E-Commerce,” D-Lib Magazine, July-August 1998, at:
http://www.dlib.org/dlib/july98/rust/07rust.html.
429
See: http://www.dbc.dk/ONE/oneweb/index.html
430
See: http://portico.bl.uk/gabriel/
431
See: http://www.infobyte.it
432
See: http://www.gii.getty.edu/vocabulary/tgn.html
433
Cf. the interesting work by Dr. A. Steven Pollitt (Huddersfield University, CeDAR),
“Faceted Classification as Pre-coordinated Subject Indexing” at:
http://info.rbt.no/nkki/korg98/pollitt.htm. I am very grateful to Dr. Pollitt for making me
aware of his work. Some believe that traditional discipline based classifications are
outmoded in an increasingly interdisciplinary world. For instance, Professor Clare
Beghtol, The Classification of Fiction: The Development of a System Based on
Theoretical Principles, Metuchen, N.J.: Scarecrow Press, 1994, believes that the
distinction between non-fiction and fiction is no longer relevant since both categories
entail narrative. Meanwhile, Nancy Williamson, “An Interdisciplinary World and
Discipline Based Classification,” Structures and Relations in Knowledge Organization,
ed. Widad Mustafa el Hadi, Jacques Maniez and Steven Pollitt, Würzburg: Ergon Verlag,
1998, pp. 116-132. (Advances in Knowledge Organization, Volume 6), although
sceptical about replacing disciplines entirely, has explored a series of alternative
short-term solutions. Since university and other research departments continue to be
discipline based, it may be sensible to maintain what has been the starting point for all
classification systems for the past two millennia, and work on creating new links between
and among these disciplines.
434
See: http://iconclass.let.ruu.nl/home.html
435
See: http://www.gii.getty.edu/vocabulary/aat.html
436
See the author’s “Towards a Global Vision of Meta-data: A Digital Reference
Room,” Proceedings of the 2nd International Conference. Cultural Heritage Networks
Hypermedia, Milan, pp. 1-8 (in press).
437
This Aristotle subdivided into active Operation and passive Process.
438
J. Perrault, “Categories and Relators: a New Schema,” Knowledge Organization,
Frankfurt, vol. 21 (4), 1994, pp. 189-198. Reprinted from: J. Perrault, Towards a Theory
for UDC, London: Bingley, 1969.
439
A fundamental problem in the systematic adoption and interoperability of these
relations is that different communities and even different members within a community
use alternative terms for the same relation. For instance, what some library professionals
call “typonomy” is called “broader-narrower terms” by others, “generic” by philosophers,
and in computing circles is variously called “is a”, “type instantiation” and
“generalization.” Similarly, “hieronomy” is variously called “is part of,” “partitive” by
philosophers and “aggregation” by computer scientists. MMI will encourage research at
the doctoral level to create a system for bridging these variant terms, using as a point of
departure Dahlberg’s classification of generic, partitive, oppositional and functional
relations.
440
Robert E. Kent, “Organizing Conceptual Knowledge Online: Metadata
Interoperability and Faceted Classification,” Structures and Relations in Knowledge
Organization, ed. Widad Mustafa el Hadi, Jacques Maniez and Steven Pollitt, Würzburg:
Ergon Verlag, 1998, pp. 388-395. See: http://wave.eecs.wsu.edu
441
See: http://www.philo9.force9.co.uk/books10.htm
442
This may be closer than we think. Cf. David Brin, The Transparent Society, Reading,
Mass.: Addison-Wesley, 1998, p. 287, who also reports on trends towards proclivities
profiling, p.290.
443
See: http://www.csl.sony.co.jp/person/chisato.html
444
See: http://150.108.63.4/ec/organization/disinter/disinter.htm. For a contrary view
see: Sarkar, Butler, and Steinfield’s paper (JCMC-electronic commerce, Vol.1 No.3).
445
See: http://www.cselt.it/ufv/leonardo/fipa/
cf. http://drogo.cselt.stet.it/fipa/
446
See: http://umuai.informatik.uni-essen.de/field_of_UMUAI.html
447
See: http://www.ina.fr/TV/TV.fr.html
448
Bruce Damer, Avatars!, as in note 34 above.
449
See: http://www.chez.com/jade/deuxmond.html which represents Paris.
450
See Virtual Helsinki at http://www.hel.fi/infocities/eng/index.html.
451
See: http://idt.net/~jusric19/alphalinks.html
452
See: http://socrates.cs.man.ac.uk/~ajw/
453
At the Internet Society Summit (Geneva, July 1998), Vint Cerf, the new Chairman, in
his keynote, described how the international space agency is working on a new address
scheme to be launched with the next voyage to Mars late this year.
454
Cf. John Darius, Beyond Vision, Oxford: Oxford University Press, 1984; The Invisible
World, Sights Too Fast, Too Slow, Too Far, Too Small for the Naked Eye to See, ed. Alex
Pomaranoff, London: Secker and Warburg, 1981.
455
Dr. Theo Classen, as in note 18 above.
456
The definition of usefulness could readily detour into a long debate. For the purposes
of this article we shall take it in a very broad sense to mean the uses of computers in
terms of their various applications.
457
See: http://web.cs.city.ac.uk/homes/gespan/projects/renoir/cover.html. Related to this
are other European projects on a smaller scale such as Co-operative Requirements
Engineering with Scenarios (CREWS, Esprit Project 21903).
458
See: http://www.global-info.org
Epilogue
459
See: http://europa.eu.int/comm/dg10/avpolicy/index_en.html
460
See: http://tina.lancs.ac.uk/computing/research/cseg/projects/ariadne/
461
See: http://www.educause.edu/nlii/
462
See: http://imsproject.org/
463
See: http://www.jisc.ac.uk/pub98/c15_98.html
464
See: http://www.siemens.de/telcom/articles/e0497/497drop.htm
465
See: http://www.siemens.de/ic/networks/products/moswitch/brosch/index.htm
466
See: http://www.telematix.com/library/assoc-org/index.html
467
See: http://www.itu.ch/imt-2000
468
See: http://www.inria.fr/rodeo/personnel/Christophe.Diot/me.html
469
See: http://wwwspa.cs.utwente.nl/~havinga/mobydick.html
470
See: http://www.ispo.cec.be/infosoc/legreg/docs/greenmob.html
471
Philip Manchester, “Mobile Innovations,” Financial Times, 18.11.1998, p.IX
472
See: http://www.qualcomm.com/
473
Wired, October 1998, p. 53
474
See: http://www.software.psion.com/corporate/corporate.html
Cf. http://www.cnnfn.com/digitaljam/9806/25/psion/
475
Financial Times, 10.11.1998
476
Financial Times, 20.10.1998, p.19
477
See: http://dtinfo.dti.gov.uk/digital/main.html
478
See: http://www.cs.ucl.ac.uk/research/live/papers/A.Steed.html
479
See: http://www.havi.org
480
See: http://www.osgi.org
481
See: http://207.82.250.251/cgibin/linkrd?hm___action=http%3a%2f%2fwww%2ewired%2ecom%2fnews%2fnews%2fbusiness%2fstory%2f18472%2ehtml
482
See: http://www.infoworld.com:80/cgi-bin/displayStory.pl?980413.ehbluetooth.htm
483
See: http://www.wapforum.org/
484
See: http://www.gsmworld.com/
485
See p. 220 of ftp://ftp.cordis.lu/pub/esprit/docs/projmms.pdf
486
See: http://cdps.umcs.maine.edu/AUSI/
487
See: http://perca.ls.fi.upm.es/amusement/index.html
488
See: http://mercurio.sm.dsi.unimi.it/~gdemich/campiello.html
489
See: http://arti.vub.ac.be/~comris/
490
See: http://www.acad.bg/esprit/src/25598.htm
491
See: http://www.nada.kth.se/erena/
492
See: http://escape.lancs.ac.uk/
493
See: http://www.ing.unisi.it/lab_tel/hips/Newhips.htm
494
See: http://www.living-memory.org/
495
See: http://www.maypole.org/
496
See: http://www.dfki.de/imedia/mlounge/
497
See: http://www.sics.se/humle/projects/persona/web/index.html
498
See: http://www.3dscanners.com/randd/populate.htm#PART
499
See: http://www.presenceweb.org/
500
See: http://www.i3net.org/i3projects/
501
See: http://www.mitre.org/resources/centers/it
502
See: http://albertslund.mip.ou.dk/niis.dll/i3net/result.stm?institution=&people=&prime=3&projects=&t3a=on&countries=
503
See: http://www.nada.kth.se/inscape/
504
See: http://info.lut.ac.uk/research/husat/respect/index.html#Menu
505
See: http://www.lboro.ac.uk/research/husat/inuse/f_inuse_project.html
506
See: http://www.npl.co.uk/npl/sections/us/products/music.html
Cf. http://www.algonet.se/~nomos/about/music.htm
507
See: http://www.npl.co.uk/npl/sections/us/products/uca.html
508
See: http://info.lut.ac.uk/research/husat/inuse/performanceeval.html
509
See: http://www.npl.co.uk/npl/sections/us/products/sumi.html
510
See:
http://aether.cms.dmu.ac.uk/General/WWW/General/hci/hcibib/HTML/BCSHCI/Macl93a.html
511
See: http://www.archimuse.com/mw98/abstracts/garzotto.html
512
See: http://www.megataq.mcg.gla.ac.uk/
513
See: http://www.nectar.org/reposit/megataq/4.2/4.htm
514
See: http://www.cure.at/
515
See: http://www.newcastle.research.ec.org/deva/index.html
516
See: http://www.cl.cam.ac.uk/Research/SRG/measure.html
517
Composition, D. Tsichritzis (Ed.), Centre Universitaire d'Informatique, University of
Geneva, June 1991, pp. 31-56. Cf. Oscar Nierstrasz, Dennis Tsichritzis, Vicki de
Mey and Marc Stadelmann, “Objects + Scripts = Applications,” Proceedings, Esprit
1991 Conference, Kluwer Academic Publishers, Dordrecht, NL, 1991, pp. 534-552.
518
See:
http://cuiwww.unige.ch/OSG/publications/OOarticles/objects+Scripts=Applications.ps.Z
519
See: http://web.cs.city.ac.uk/homes/gespan/projects/renoir/cover.html. Related to this
are other European projects on a smaller scale such as Co-operative Requirements
Engineering with Scenarios (CREWS, Esprit Project 21903).
520
See: http://www.computer.org/conferen/proceed/ICRE96/ABSTRACT.HTM#165
521
See: http://www.aro.army.mil/mcsc/skbs.htm
522
For one of the more restrained visions see, for instance, the IBM white paper at
http://www.chips.ibm.com/nc/whitepaper.html.
Appendix 1
Appendix 2
523
See: http://www.cs.umd.edu/users/north/infoviz.html
This is being replaced by the On-Line Library of Information Visualization Environments
(OLIVE)
See: http://www.otal.umd.edu/Olive/
which distinguishes between eight kinds of interfaces, namely, temporal, 1D, 2D, 3D,
MultiD, Tree, Network, and Workspace.
A similar list entitled Visual Information Interfaces is available at the VIRI site
maintained by the GMD:
See: http://www-cui.darmstadt.gmd.de/visit/People/hemmje/Viri/visual.html.
Appendix 3
524
See: http://www.cs.chalmers.se/~ahlberg/
Cf. [email protected]
525
See: http://www.mtm.kuleuven.ac.be/~hca/index.index.eng.html
526
See: http://www.ibm.com/ibm/hci/guidelines/design/realthings/ch4cl.html
527
See: http://www-cse.ucsd.edu/~rik
528
See: http://www.crg.cs.nott.ac.uk/people/Steve.Benford
529
See: http://www.parc.xerox.com/istl/members/bier/
530
See: http://www.parc.xerox.com/istl/projects/MagicLenses/
531
See: http://www.biochem.abdn.ac.uk/~john/john.html
cf. http://www.biochem.abdn.uk/~john/vlq/vlq.html
532
See: http://science.nas.nasa.gov/~bryson/home.html
533
See: http://www.cwi.nl/~dcab
534
See: http://www.dgp.toronto.edu/people/BillBuxton/billbuxton.html
535
See: http://www.computer.org:80/pubs/cg%26a/report/g20063.htm
536
See: http://www.dis.uniroma1.it/AVI96/tchome.html
537
See: http://www.ubs.com/webclub/ubilab/staff/e_chalmers.htm
[email protected]
Tel. 41 1236 7504
538
See:
http://www.ubs.com/cgi-bin/framer.pl?/webclub/ubilab/eindex.htm/Projects/hci.html
539
See: http://www.cs.unm.edu/~jon/dotplot/index.html
540
See: http://soglio.colorado.com
541
See: ftp.comp.lanc.ac.uk/pub/reports/1994/CSCW.13.94.ps.2
542
See Crouch, D., & Korfhage, R. R. (1990). “The Use of Visual Representations in
Information Retrieval Applications”. In T. Ichikawa, E. Jungert, & R. R. Korfhage,
(Eds.), Visual Languages and Applications, New York, Plenum Press, 305-326.
Donald B. Crouch, “The visual display of information in an information retrieval
environment,”
in: SIGIR '86. Proceedings of 1986 ACM conference on Research and development in
information retrieval, pp. 58-67.
543
See: http://www.cs.brown.edu/people/ifc
544
See: http://www.lcc.gatech.edu/~dieberger/CSDL4_abstract.html
cf. [email protected]
545
See: http://www.lcc.gatech.edu/~dieberger/Proj_Vortex.html
546
See: http://www.soc.staffs.ac.uk/~cmtajd/online.html
cf. http://www.soc.staffs.ac.uk/~cmtajd/papers/version-PSE97
547
See: http://panda.iss.nus.sg://8000/kids/fair/
548
See: http://www.cc.gatech.edu/gvu/people/Faculty/James.D.Foley.htm
Cf. [email protected]
549
See: http://fox.cs.vt.edu
550
See: http://www.cs.panam.edu/info_vis/info_nav.html
551
See: http://www.cs.brown.edu/people/ag/home.html
552
See: http://www.cs.rpi.edu/~glinert
553
See: http://www.csd.abdn.ac.uk/~pgray
554
See: http://www.dcs.gla.ac.uk/personal/personal/pdg
555
See: http://www.ics.uci.edu/~grudin
556
See: http://www.sims.berkeley.edu/~hearst
557
Re: Visualization of Complex Systems, see:
http://www.cs.bham.ac.uk/~nsd/Research/Papers/HCI/hci95.html
Re: Hyperspace: Web Browsing with Visualisation see:
http://www.cs.bham.ac.uk/~amw/hyperspace/www95
558
See: http://www-cui.darmstadt.gmd.de/visit/People/hemmje
559
See: http://www.cs.unm.edu/~hollan/begin.html
560
See: http://www.crg.cs.nott.ac.uk/~rji
561
See: http://www.cs.wisc.edu/~pubs/faculty_info/ioannidis.html
562
See: http://www.eecs.tufts.edu/~jacob/
563
See: http://www.cs.cmu.edu/People/bej/
564
See: http://www.cs.umd.edu:80/projects/hcil/People/brianj/VisualizationResources/
565
See: [email protected]
566
See: http://www.dbs.informatik.uni-muenchen.de/dbs/projekt/visdb/visdb.html
cf. http://www.dbs.informatik.uni-muenchen.de/dbs/mitarbeiter/keim.html
567
See: http://www.darmstadt.gmd.de/~kling
568
See: http://www.pitt.edu/~korfhage/korfhage.html
569
BIRD = Browsing Interface for the Retrieval of Documents
570
See: http://www.research.microsoft.com/research//ui/djk/default.htm
571
See: http://www.uky.edu/~xlin/publication.html
572
See: http://virtual.inesc.pt/rct.30.html
573
See: http://wwwwksun2.wk.or.at:8000/0x811b0205_0x00d1119;skF50A50ED
574
See: http://www.comp.lancs.ac.uk/computing/users/jam/proj300.d/qpit.html
575
See: http://www.csri.utoronto.ca/~mendel/
576
See: http://www-graphics.stanford.edu/papers.edu/papers.webviz
577
See: http://www.cs.cmu.edu/~bam
578
See: http://[email protected]
579
See: http://www.cs.cmu.edu/~dolsen
580
See: http://www.risoe.dk/sys-mem/cmi-web.htm
581
See Pejtersen, Annelise Mark. The Bookhouse: Modeling User's Needs and Search
Strategies, a Basis for System Design. Roskilde, Denmark: Riso National Laboratory,
1989.
582
See: http://www.csl.sony.co.jp/person/rekimoto/cube.html
583
See: [email protected]
584
See:
http://cstr.cs.cornell.edu/TR/Search/?publisher=CORNELLCS&number=&boolean=and&author=Salton&title=&abstract=information+retrieval
585
See: http://www.eecs.harvard.edu/~shieber
586
See: http://www.cs.umd.edu/users/ben/index.html
587
See: http://www.crg.cs.nott.ac.uk/people/Dave.Snowdon
cf. http://www.crg.cs.nott.ac.uk/crg/Research/pits/pits.html
588
See: http://researchsmp2.cc.vt.edu/DB/db/conf/cikm/cikm93.html
cf. [email protected]
589
See: http://www.cs.gatech.edu/gvu/people/Faculty/john.stasko
590
See: http://www.cise.nsf.gov./iris/ISPPDhome.html
591
See: http://www.informatik.uni-trier.de/~ley/db/indices/atree/v/Veerasamy:Aravindan.html
592
See: http://www.labs.bt.com/innovate/informat/infovis/part1.htm
593
See: [email protected]
594
See: http://www.dq.com/
595
See Williamson, C., & Shneiderman, B. (1992) “The Dynamic HomeFinder:
Evaluating Dynamic Queries in a Real-Estate Information Exploration System,” In:
Proceedings of the 15th Annual International ACM SIGIR Conference on Research and
Development in Information Retrieval, Copenhagen, 338-346.
596
See: http://canyon.psy.ohio-state.edu:8080/zhang/zhang-jiajie.html
Appendix 4
597
See: http://esba-www.jrc.it/dvgdocs/dvghome.html
598
See: http://www.ecrc.de/research/uiandv/gsp/applications.html
599
See: http://www.ecrc.de/research/uiandv
600
See: http://www.nrc.ca/corpserv/m_list_e.html
601
See: http://www.igd.fhg.de/www/igd-a4/index.html
602
See: http://www.igd.ghg.de/www/zgdv-mmvis/miv-projects_e.html#basic
603
See: http://www.ecrc.de/staff/gudrun
604
See: http://delite.darmstadt.gmd.de/delite/Projects/Corinna
605
See: http://www.aist-nara.ac.jp/IS/Chihara-lab/mosaic-l.html
606
See: http://www.csl.sony.co.jp/projects/ar/ref.html
607
See: http://www.csl.sony.co.jp/person/nagao/icmas96/outline.html
608
See: http://www.csl.sony.co.jp/person/rekimoto/transvision.html
609
See: http://www.csl.sony.co.jp/person/rekimoto/navi.html
610
See: http://www.csl.sony.co.jp/project/VS/index.html
611
See: http://www.vogue.is.uec.ac.jp/research.html#1
612
See: http://www.vogue.is.uec.ac.jp/~koike/papers/v193/v193.html
613
See: http://www.vogue.is.uec.ac.jp/~koike/papers/tois95/tois95.html
614
See: http://www.hc.t.u-tokyo.ac.jp/activity-index.e.html
615
See: http://ghidorah.t.u-tokyo.ac.jp
616
See: http://virtual.dcea.fct.unl.pt/gasa/vr/
617
See: http://www.cl.cam.ac.uk/abadge/documentation/abwayin.html
618
See: http://www.cl.cam.ac.uk/Research/Rainbow
619
See: http://www.lut.ac.uk/departments/co/research-groups/lutchi.html
620
See: http://www.mcc.ac.uk/research.htm
621
See: http://www.man.ac.uk/MVC/CGU-intro.html
622
See: http://www.hud.ac.uk/schools/comp-maths/centres/hci/HCIcentre.html
623
See: http://www.xrce.xerox.com/research/cbis/cbis_1.htm
624
See: http://www.cc.gatech.edu/gvu/virtual/Venue
625
See: http://www.cc.gatech.edu/gvu/softwiz/infoviz/information_mural.html
626
See: http://cedude.ce.gatech.edu/Projects/IV/iv.html
cf. http://cedude.ce.gatech.edu/research/index.html
627
See: http://www.gatech.edu/scivis
628
See: http://www.cc.gatech.edu/gvu/people/qiang.a.zhao
629
See: http://www.research.ibm.com/imaging/vizspace.html
630
See: http://www.research.ibm.com/research/lucente.html
631
See: http://www.almaden.ibm.com/dx
632
See: http://www.learningcube.com/webzn.html
633
See: http://www.bell-labs.com/project/visualinsights/
634
See: http://medusa.multiemdia.bell-labs.com/LWS
635
See: http://vlw.www.media.mit.edu/groups/vlw/
636
See: http://www.ted.com/info/cooper.html
637
See: http://dsmall.media.mit.edu/people/dsmall/
638
See: http://science.nas.nasa.gov/Groups/VisTech/visWeblets.html
639
See: http://www.mitre.org
640
See: http://www.well.com/user/jleft/orbit/infospace
641
See: http://multimedia.pnl.gov:2080/showcase/
642
See: http://www.pnl.gov/news/1995/news95-07.htm
643
See: http://vizlab.rutgers.edu
644
See: http://www.sandia.gov/eve/eve_toc.html
645
See: http://www.cs.sandia.gov/SEL/main.html
646
See: http://www.sgi.com/Products/Mineset/products/vtools.html#TreeVisualizer
647
See: http://www.sgi.fr/Support/DevProj/Forum/forum96/proceeds/Visual_and_Analytical_Data_Mining/overview.html.
648
See: http://www.evl.uic.edu/EVL/index.html
cf. http://www.ncsa.uiuc.edu/EVL/docs.html/homepage.html
649
See: http://www.ncsa.uiuc.edu/VR/cavernus/gallery.html
650
See: http://www.ncsa.uiuc.edu/VEG/DVR
651
See: http://www-pablo.cs.uiuc.edu/Projects/VR
652
See: http://www.bvis.uic.edu
653
See: http://www.ncsa.uiuc.edu/SCMS/DigLib/text/overview.html
654
See: http://ncsa.uiuc.edu/VR/cavernus/gallery.html
655
See: http://www.ncsa.uiuc.edu/VEG/DVR
656
See: http://notme.ncsa.uiuc.edu/SCD/Vis
657
See: http://www.ncsa.uiuc.edu/ITech
658
See: http://www.ncsa.uiuc.edu/VEG/index.htm
659
See: http://notme.ncsa.uiuc/edu/Vis/VICE.html
660
See: http://delphi.beckman.uiuc.edu/WWL
661
See: http://vizlab.beckman.uiuc.edu/chickscope
662
See: http://www.lis.pitt.edu/~spring/mlnds/mlnds.html
663
See: http://cs.panam.edu/info_vis/home-info_vis.html
664
See Ramana Rao et al., “Rich Interaction in the Digital Library,” as in note 217,
p. 38.
Appendix 5
665
See: http://www.dlib.org/projects.html
666
See: http://www.nlc.bnc.ca/iso/tc46sc9/index.htm
667
See: http://www.konbib.nl/gabriel
668
See: http://www.mpt.go.jp/g7web/Electronic-Libraries/Electronic-Library.html
http://www.mpt.go.jp/g7web/Electronic-Libraries/images/Electronic-lib-org.gif
669
Info via Christine Maxwell
670
See: http://www.met.fsu.edu/explores/Guide/Noaa_Html/noaa10.html
671
See: http://fid.conicyt.cl:8000/
672
See: http://ford.mk.dmu.ac.uk
673
See: http://www.un.org
674
See: http://www.unesco.org:80/cii/memory/menupage.htm
675
See: http://www.unesco.org/whc/heritage.htm
676
See Algemeene Dagblad, 17/10/97, p.23.
677
See: http://www2.echo.lu/libraries/en/libraries.html
678
See: http://www2.echo.lu/libraries/en/projects/efila97.html
679
See: http://www2.echo.lu/libraries/en/projects/efilap.html
cf. http://www.ewos.be/fora/index.htm#efila
680
See: http://www2.echo.lu/libraries/en/projects/biblink.html
681
See: http://www2.echo.lu/libraries/en/projects/borges.html
682
See: http://www2.echo.lu/libraries/en/projects/canal.html
683
See: http://www2.echo.lu/libraries/en/projects/candle.html
684
See: http://www2.echo.lu/libraries/en/projects/casa.html
685
See: http://www2.echo.lu/libraries/en/projects/cobrap.html
686
See: http://www2.echo.lu/libraries/en/projects/copinet.html
687
See: http://cdservera.bples.lse.ac.uk/decomate
688
See: http://www.kaapeli.fi/eblida/ecup
689
See: http://www2.echo.lu/libraries/en/projects/edilibe2.html
690
See: http://www2.echo.lu/libraries/en/projects/elise2.html
691
See: http://www2.echo.lu/libraries/en/projects/euler.html
692
See: http://www2.echo.lu/libraries/en/projects/europaga.html
693
See: http://www2.echo.lu/libraries/en/projects/hercule.html
694
See: http://www2.echo.lu/libraries/en/projects/hyperlib.html
695
See: http://www2.echo.lu/libraries/en/projects/iliers.htm
696
See: http://www2.echo.lu/libraries/en/projects/master.html
697
See: http://www.dbc.dk/ONE/oneweb/index.html
698
See: http://www2.echo.lu/libraries/en/projects/one2.html
699
See: http://www2.echo.lu/libraries/en/projects/translib.html
700
See: http://www2.echo.lu/libraries/en/projects/universe.html
701
See: http://www2.echo.lu/libraries/en/projects/vaneyck.html
702
See: http://www2.echo.lu/libraries/en/projects/vilib.html
703
See: http://www-ercim.inria.fr/activity/delos.html
704
See: p. 69 of ftp://ftp.cordis.lu/pub/esprit/docs/projmms.pdf
705
See: http://www.uia.org/projects/i2000rep.htm
706
See: http://www-ercim.inria.fr/
707
See: http://www.ewos.be/goss/top.htm
708
See: http://www.ilrt.bris.ac.uk/discovery/imesh
709
See: http://www.dlib.org/dlib/november96/11miller.html
710
See: http://www.nla.gov.au/ferg/fergproj.html
711
See: http://www.nlc-bnc.ca/cidl
712
See: http://www.nlc-bnc.ca/digiproj/edigiact.htm
713
See: http://www.nlc-bnc.ca/cihm/ecol
714
See: http://www.nlc-bnc.ca/resource/vcuc/index.html
715
See: http://novanet.ns.c/vCucdm.html
716
See: http://www.nlc-bnc.ca/resource/vcuc/z3950.htm
717
See: [email protected]
718
See: http://142.78.40.7/vtour/fvtour.htm
719
See: http://www.elibrary.com/canada
tel. 888-298-0114
416-340-2351
720
See: http://gallica.bnf.fr
721
See: http://www.fachinformation.bertelsmann.de/
722
See Frederick Studemann, “Thomas Middelhoff: Publisher with his eye on
cyberspace,” Financial Times, 7.12.1998, p. 11.
723
See: http://www.darmstadt.gmd.de/IPSI
724
See: http://www.global-info.org
725
See: http://delite.darmstadt.gmd.de/delite/Projects/
726
See: http://picus.rz.hu-berlin.de:88
http://salome.itaw.hu-berlin.de
727
See: http://www.biblio.tu-bs.de/acwww25/formate/formate.html
728
See: http://www.ilc.pi.cnr.it/dbt/index.htm
729
See: http://www.dl.ulis.ac.jp/DLW_E/
730
See: http://www.elsevier.nl/homepage/about/resproj/tulip.shtml
731
See: http://www.bids.ac.uk
732
See: http://portico.bl.uk
733
See: http://www.ukoln.ac.uk/services/bl
734
See: http://ukoln.bath.ac.uk/services/elib/projects
735
See: http://www.scran.ac.uk
736
See: http://nii.nist.gov/g7/04_elec.lib.html
737
See: http://www.nsf.gov/pubs/1999/nsf996/nsf996.htm
738
See: http://www.pads.ahds.ac.uk
739
See: http://nii.nist.gov
740
See: http://sunsite.berkeley.edu/amher/
741
See: http://cpmcnet.columbia.edu/www/asis.
This has a special interest group for Classification Research at another branch of the same
site; see: http://cpmcnet.columbia.edu/www/asis/interest.html
742
See: http://www.alexandria.ucsb.edu
743
See: http://www.cnidr.org
744
See: http://ntx2.cso.uiuc.edu/cic/cli.html
745
See: http://moa.cit.cornell.edu/MOA/moa-main-page.html
746
See: http://www2.cs.cornell.edu/payette/papers/ECDL98/FEDORA-IDL.html
747
See: http://www-diglib.stanford.edu/diglib/cousins/dlite
748
See: http://www.nlc-bnc.ca/ifla/documents/libraries/net/dpc.txt
749
See: http://everglades.fiu.edu/library/index.html
750
See: http://ksgwww.harvard.edu/iip
751
See: http://muse.jhu.edu/muse.html
752
See: http://lcweb.loc.gov/homepage/lchp.html
753
See: http://ds.internic.net/z3950/z3950.html
754
See: http://lc2web.loc.gov/ammem
755
See: Mike Snider, “Research Archives in Cyberspace,” USA Today, 10 April 1997, p. 60.
756
See: http://palimpsest.stanford.edu/cpa/newsletter/cpaal80.html
757
See: M.O.W.B., “The Stanford Digital Libraries Project,” Web Techniques, San
Francisco, vol. 2, issue 5, May 1997, p.44.
758
See: http://www-diglib.stanford.edu/
759
See: http://dli.grainger.uiuc.edu/national.htm
760
See: http://www.infomedia.cs.cmu.edu
761
See: http://walrus.stanford.edu/diglib
762
See: http://elib.cs.berkeley.edu
763
See: http://alexandria.sdc.ucsb.edu/
764
See: http://www.si.umich.edu/UMDL/
765
See: http://www.si.umich.edu/UMDL/aui
766
See: http://www-personal.engin.umich.edu/~cerebus/glossary/glossary.html
767
See: http://dli.grainger.uiuc.edu/testbed.htm
768
See: http://dli.grainger.uiuc.edu/dlisoc/socsci_site/index.html
769
See: http://ai.bpa.arizona.edu
770
See: http://csl.ncsa.uiuc/interspace.html
771
See: http://imagelab.ncsa.uiuc.edu/imagelib
772
See: http://images.grainger.uiuc.edu/mesl/mesl.htm
773
See: http://www.atmos.uiuc.edu/horizon
774
See: http://www.atmos.uiuc.edu
775
See: http://www.ncstrl.org
776
See: http://pharos.alexandria.ucsb.edu
777
See: http://www.csdl.tamu.edu
778
See: http://sunsite.berkeley.edu/R+D/
779
See: http://sunsite.berkeley.edu/APIS
780
See: http://sunsite.berkeley.edu/CalHeritage
781
See: http://128.32.224.173/cheshire/form.html
782
See: http://sunsite.berkeley.edu/Ebind
783
See: http://sunsite.berkeley.edu/ead/
784
See: http://sunsite.berkeley.edu/FindingAids/
785
See: http://sunsite.berkeley.edu/~emorgan/morganagus
786
See: http://www-ucpress.berkeley.edu/scan
787
See: http://sunsite.berkeley.edu/~emorgan/see-a-librarian/
788
See: http://www.bmrc.berkeley.edu
790
See: http://www.lib.berkeley.edu:8000/
791
See: http://www.ipl.org
792
See: http://www.umdl.umich.edu/moa/
793
See: http://www.glue.umd.edu/~march/NALtalk/tsld010.htm
793
See: http://guagua.echo.lu/langeng/en/mlap94/memo.html
794
See: http://community.bellcore.com/lesk/diglib.html
795
See: http://www.software.ibm.com/is/dig-lib/dlis.htm. For an article see Henry M.
Gladney, “Towards On-line Worldwide”, IBM Systems Journal, vol. 32, n.3, 1993.
Cf. http://www.ibm.com/features/library/manuscript.html
796
See: http://www.newdeal.org
797
See: http://www_ercim.inria.fr/publication/Ercim_News/enw27/wang.html
798
See: http://www.loc.gov
799
See: http://www.software.ibm.com/is/dig-lib/imagemap/../demo.htm
800
See Ramana Rao, Jan O. Pedersen, Marti A. Hearst, Jock D. Mackinlay, Stuart K.
Card, Larry Masinter, Per-Kristian Halvorsen and George C. Robertson, “Rich
Interaction in the Digital Library,” Communications of the ACM, New York, April 1995,
vol. 38, no. 4, pp. 29-39.
801
See: http://www.dlib.org/dlib/june96/hearst/06hearst.html
cf. http://www.parc.xerox.com/istl/projects/dlib
802
See: http://www.xerox.fr/ats/digilib/navibel/report/rap_1.html
803
See: http://callimaque.grenet.fr
804
See: http://golgi.harvard.edu/biopages.html
805
See: http://www.chem.ucla.edu/chempointers.htm
806
See: http://www.cds.caltech.edu/exras/Virtual_Library/Control_VL.html
807
See: http://www.dlib.org
808
See: http://cimic.rutgers.edu/~ieeedln/
809
See: http://www.lib.umich.edu/libhome/IDINews
810
See: http://cimic3.rutgers.edu/jodl
811
See: http://www.press.umich.edu/jep/
812
See: http://lcweb.loc.gov/ndl/per.html
813
See: http://www.rlg.org/toc.html
814
See: http://highwire.stanford.edu
815
See: http://www.refdesk.com
816
See: http://www.tagish.co.uk/ethosub/lit5/9a42.htm
817
See: http://www.ispo.cec.be/g7/projects/g7pr5.html
818
See: http://www.iccd.beniculturali.it
819
See: http://www.iocm.org
http://palimpsest.stanford.edu/icom
820
See: ck@nrm
821
See: http://www.cs.rdg.ac.uk/icom/officers.html
822
See: http://www.icom.org/icom-cc
823
See: http://www.icom.org/cimcim
824
See: http://www.icom.org/ceca
825
See: http://www.icom.org/icmah
826
See: http://www.icom.org/icofom
827
See: http://www.icomos.org/
828
See: http://www.konbib.nl/rkd/engpubl/mmwg/home.htm
829
See: http://www.unesco.org/whin
830
See: http://www.unesco.org/whcform.htm
cf. http://www.unesco.org/whc/nwhc/pages/home/pages/homepage.htm
831
See: http://www.aec2000.it/aec2000/projects/herinet/herinet.htm
832
See: [email protected]
833
See: http://firewall.unesco.org/webworld/en/
cf. http://firewall.unesco.org/webworld/en/accueil.html
834
See: http://europa.eu.int/en/comm/dg10/culture/program-2000_en.htm
835
See: http://europa.eu.int/en/comm/dg10/culture/en/action/kaleidos-gen.html
836
See: http://europa.eu.int/en/comm/dg10/culture/en/action/ariane-gen.html
837
See: http://europa.eu.int/comm/dg10/culture/raphael/index.html
838
See: http://europa.eu.int/en/comm/dg10/culture/en/action/vec_en.html
839
See: http://europa.eu.int/comm/dg10/avpolicy/media/en/home-m2.html
840
See: http://europa.eu.int/en/comm/dg10/culture/emploi-culture-intro_en.html
841
See: http://europa.eu.int/en/comm/dg10/culture/cult-asp/en/index.html
842
See: http://europa.eu.int/en/comm/dg10/oreja/0603en.html
843
See: http://www.unesco-sweden.org/Conference/Papers/PAper9.htm
844
See: http://www.medici.polimi.it
http://www.medici.org
845
See: http://www2.echo.lu/info2000/midas/activities.html
846
See: http://www.mcube.fr
847
[email protected]
848
See: http://mosaic.infobyte.it
849
See: http://www2.echo.lu/info2000/en/mm-projects
850
See: http://inf2.pira.co.uk/pub/ecwebsite97.html
851
See: http://europa.eu.int/en/comm/dg10/avpolicy/avpolicy.html
852
See: http://neon.coe.fr
853
See: http://www.cimi.org
854
See: http://www.cimi.org/CHIO.html
855
See: http://www.cimi.org/Project_Chio_DTD.html
856
See: http://www.cimi.org/SGML_for_CHI.html
857
See: ftp://ftp.cimi.org/pub/cimi/CIMI_SGML/cimi4.dtd.rtf
858
See: http://www.museums-online.com/site/
Cf. http://www.rmn.fr/vpc/fvpc.html
859
See: http://www.dc.co.at
860
See: http://www.can.net.au
861
See: http://www.chin.gc.ca
862
See: http://www.culture.fr/culture/inventai/presenta/invent.htm
863
See: http://www.louvre.edu
864
See: http://fotomr.uni-marburg.de/for.htm
865
See: http://www.saur.de
866
See: http://www.aifb.uni-karlsruhe.de/WBS/broker
867
See: http://www.gti.ssr.upm.es
868
See: http://www.ahds.ac.uk
869
See: http://www.open.gov.uk/mdoassn
870
See: http://www.comlab.ox.ac.uk/archive/other/museums/mda
871
See: http://www.rchme.gov.uk
872
See: http://www.scran.ac.uk/
873
See: http://www.museumlicensing.org
874
See: http://www.amn.org/AMICO/background.html
875
See: http://www.cni.org
876
See: http://www.ahip.getty.edu/
877
See: http://world.std.com/~mcn
cf. http://world.std.com/%7Emcn/index.html
878
See: http://www.mip.berkeley.edu
879
See: http://www.arts.endow.gov/sitemap/index.html
880
See: http://www.ninch.org
881
See: http://www.archivists.org
882
See: http://sunsite.berkeley.edu/FindingAids/EAD/eadwg.html
883
See: http://www.cmg.hitachi.com/fine_art/art_main.html
884
See: http://www.intel.com/english/art/
885
See: http://www.csl.sony.co.jp/person/chisato.html
886
See: http://www.lis.pitt.edu/~spring/papers/vl.ps.Z
887
See: http://www.lis.pitt.edu/~spring/papers/dpr.ps
888
See: http://www.ericsson.com/Connexion/connexion4-95/virtual.html
889
See: http://www.artworlds.com
890
See: http://www.ntu.edu.au/education/atet/EVAx17/module2.html
891
See: http://www.ntu.edu.au/educatio/atet/EVAx17/Index.html
892
See: http://umuai.informatik.uni-essen.de/field_of_UMUAI.html
893
See: http://www.cs.cmu.edu/plb
cf. http://sunsite.ust.hk/dblp/db/a-tree/b/Brusilovsky.Peter.html
894
See: http://info.itu.ch/VTC/
895
See: http://www.unesco.org/webworld/tunis/tunis97/com_64/com_64.html
896
See: http://estrella.acs.calpoly.edu/~delta/
897
See: http://ike.engr.washington.edu:81/igc/
898
See: http://www.eun.org/launch/programme.htm
899
See: http://www.iste.org/
900
See: http://www.cepis.org/
901
See: http://www.ocg.or.at/ecdleu.html
902
See: http://www.iearn.org/
903
See: http://www.tft.co.uk/rel_sites.html
904
See: http://www.tact.org.uk/default.htm
905
See: http://www.europa.eu.int/en/record/white/edu9511/index.htm
906
See: http://www2.echo.lu/emtf/
907
See: http://www2.echo.lu/emtf/currentnewsemtf.html
908
See: http://www.educause.edu
909
See: http://www.educause.edu/nlii/
910
See: http://imsproject.org/
911
See: http://tina.lancs.ac.uk/computing/research/cseg/projects/ariadne/
912
See: http://www.manta.ieee.org/p1484/
913
See: http://www.ott.navy.mil/1_4/adl/index.htm
914
See: http://www.aicc.org
915
See: http://ortelius.unifi.it
916
See: http://www.eun.org/
917
See: http://wfs.eun.org
918
See: http://www.eep-edu.org/
919
See: http://emoo.imaginary.com/edu-comp/lisbon96/
920
See: http://www19.area013.be/pericles/
921
See: http://www.schoolnet.ca/info/newsletter
922
See: http://skyler.arc.ab.ca/amlt/AMLT-group.html
923
See: http://www.telelearn.ca
924
See: http://Ontaris.oise.utoronto.ca/ceris2
925
See: http://telecampus.edu/
926
See: http://www.kmdi.org
927
See: http://www.cned.fr
928
See: http://www-irm.mathematik.hu-berlin/~ilf/mathlib.html
929
See: http://www.zkm.de/
930
See: http://www.kah-bonn.de/1/16/san1.htm
931
See: http://www.toshiba.co.jp/about/reports/ar95/rev_ope/review4.htm
932
See: http://www.kidlink.org
933
See: http://www.psychology.nottingham.ac.uk/research/credit/themes/learningenvironments/
934
See: http://www.cogs.susx.ac.uk/grad/kbs/
935
See: http://hagar.up.ac.za/catts/learner/m1g1/whoindex.html
936
See: http://star.ucc.nau.edu/~mauri/mauri.html
937
See: http://mailer.fsu.edu/~wwager/driscoll-bio.html
938
See: http://education.indiana.edu/ist/faculty/duffy.html
939
See: http://www.ed.psu.edu/~insys/who/jonassen/default.htm
940
See: http://www.uwex.edu/disted/elcome.html
cf. http://talon.extramural.uiuc.edu/ramage/disted.html transmission of materials
http://miavx1.muohio.edu/~cedcwis/Distance_Ed_Index.html topic areas and providers
http://iat.unc.edu/cybrary/title_index.html on line courses, consortia
http://www.scs.ryerson.ca/dmason/common/euit.html ed. uses of info. technology
http://www.SLOAN.ORG/EDUCATION/ALN.NEW.HTML computer based learning projects
941
See: http://www.solvit.com.kr/Malsm
942
See: http://www.edventure.com
943
See: http://www.gsn.org
944
See: http://www.jhu.edu/virtlab/virtlab.html
945
See: http://www.mrcsb.com/multimedia/Apollo2000
cf. [email protected]
946
See: http://www.christdesert.org/pax.html
947
See: http://quest.arc.nasa.gov/courses/telerobotics
948
See: http://www.coe.uh.edu
949
See: http://207.68.137.59/education/hed/news/Septembr/wgu.htm
950
See: http://evl.uic.edu/costigan/ORAL/
951
See: http://cyber.ccsr.uiuc.edu/cyberprof
952
See: http://vet.parl.com/~vet/vet97images.html
953
See: http://www.gen.net
954
See: http://www.coe.usu.edu/coe/id2/
955
See: Information Week, 11 August 1997, pp.
956
See: http://www.cbtsys.com
957
See: http://www.digitalthink.com
958
See: http://www.ibidpub.com
959
See: http://www.klscorp.com
960
See: http://www.learnit.com
961
See: http://www.masteringcomputers.com
962
See: http://www.mindwork.com
963
See: http://www.propoint.com
964
See: http://www.quickstart.com
965
See: http://www.scholars.com
966
See: http://www.seclinic.com
967
See: http://www.teletutor.com
968
See: http://www.vanstar.com
969
See: http://www.amdahl.com
970
See: http://www.ac.com
971
See: http://ans.com
972
See: http://www.att.com/solutions
973
See: 617-572-2000
974
See: http://www.bbn.com
975
See: http://www.bani.com
976
See: http://www.bah.com
977
See: http://www.integris.com
978
See: http://www.caapgemini.com
979
See: http://www.cbtsys.com
980
See: http://www.compuserve.net
981
See: http://www.colybrand.com
982
See: http://www.csc.com
983
See: http://www.dtcg.com
984
See: http://www.digital.com
985
See: http://www.eds.com
986
See: http://www.entex.com
987
See: http://www.ey.com
988
See: http://www.globalknowledge.com
989
See: http://www.hp.com
990
See: http://www.ibm.com
991
See: http://www.inventa.com
992
See: http://www.us.kpmg.com
993
See: http://www.lockheed.com/it
994
See: http://www.systemhouse.mci.com
995
See: http://www.mckinsey.com
996
See: http://www.pw.com
997
See: http://www.sapient.com
998
See: http://www.sun.com/sunservice/sunps
999
See: http://www.unisys.com
1000
See: http://www.vanstar.com
1001
See: http://ignwcc01.tampa.advantis.com/explorer
1002
See: http://www.ibmuser.com/
1003
See: http://www.brooks.af.mil/AL/HR/ICAI/icai.htm
1004
See: http://www.galileo.it/ebla/index.html
1005
See: http://www.rpi.edu/dept/llc/writecenter/web/net-writing.html
1006
See: http://watserv1.uwaterloo.ca/~tcarey/main.html
1007
See: http://watserv1.uwaterloo.ca/~tcarey/links.html
1008
See: http://cseriac.udri.udayton.edu/products/take.htm
1009
See: http://www.stemnet.nfca/~elmurphy/hyper.html
1010
See: http://www.erlbaum.com/254.htm
1011
See: http://gopher.sil.org/lingualinks/library/literacy/cj1441/tks1898.htm
1012
See: http://gopher.sil.org/lingualinks/library/literacy/fre371/vao443/TKS2569/tks347/tks1937/tks2065.htm
Appendix 9
1013
See: Information Week, 18 May 1998, p. 212.
1014
See: M. Smith, “X.500 Attribute Type and Object Class to hold Uniform Resource Identifiers,” at ftp://dns.internic.net/internet-drafts/draft-ietf-asidx500-url-01.txt.
1015
See: Bunyip Information Systems, 310 Ste-Catherine Street West, Suite 300, Montreal, Quebec H2X 2A1. Tel. 514-875-8611. Fax. 514-875-8134. (Archie; WHOIS++ Protocol for Directory Service, Chris Weider; compatible with X.500.)
1016
See: http://www.middlebury.edu/~its/Software/WebPh/README.html
1017
See: http://www.umich.edu/~dirsvcs/ldap/doc/guides/slapd/1.html#RTFToC1
1018
See: ftp://zenon.inria.fr/rodeo/solo/draft-huitema-solo-01.txt
Appendix 10
1019
See: http://www.wcmc.org.uk/
1020
See: http://www.ru/gisa/english/cssitr/format/s-57.htm
1021
See: http://www.ru/gisa/english/cssitr/format/sql_mm.htm
1022
See: http://www2.echo.lu/oii/en/gis.html#GeoTIFF
1023
See: http://www2.echo.lu/oii/en/gis.html#GIS
1024
See: http://www.env.gov.bc.ca/gdbc/saif/toc.htm
1025
See: http://www.env.gov.bc.ca/~smb/saif.html
1026
See: http://www.ifp.uni-stuttgart.de/ddgi/ddgi-main.html
1027
See: http://www.ru/gisa/english/cssitr/format/bycountr.htm
1028
See: http://www.ihs.on.ca/astm.htm
1029
See: file://waisvarsa.er.usgs.gov/wais/docs/ASTMmeta83194.txt
1030
See: http://sdts.er.usgs.gov/sdts/mcmcweb.er.usgs.gov/sdb
1031
See: http://www.gfdc.gov/fgdc2.html
1032
See: http://fgdc.er.usgs.gov/metaover.html
1033
See: http://geochange.er.usgs.gov/pub/tools/metadata/standard/metadata.html
cf. http://www.geo.ed.ac.uk/~anp/metaindex.htm
1034
1035
See: http://www.fgdc.govt/Metadata/metahome.html
See: http://www.att.com/attlabs/people/fellows/abate.html.