Decennial Aarhus conferences - Department of Computer Science


Critical Alternatives
Peer-reviewed papers
Proceedings of
The Fifth Decennial Aarhus Conference
17–21 August 2015, Aarhus, Denmark
Edited by
Shaowen Bardzell, Susanne Bødker, Ole Sejer Iversen,
Clemens N. Klokmose and Henrik Korsgaard
In cooperation with ACM SIGCHI
Title: Critical Alternatives: Proceedings of The Fifth Decennial Aarhus Conference
Editors: Shaowen Bardzell, Susanne Bødker, Ole Sejer Iversen, Clemens N.
Klokmose and Henrik Korsgaard
Publisher: Computer Science, Aarhus University, Denmark
Printed by SUN-TRYK, Aarhus, Denmark
Copyright © remains with the contributing authors.
Publication rights licensed to Aarhus University and ACM.
Following the conference, the proceedings will be made available through the
Aarhus Series on Human Centered Computing and the ACM Digital Library.
1975, 1985, 1995, 2005: the decennial Aarhus conferences have traditionally
been instrumental in setting new agendas for critically engaged thinking about
information technology. The conference series is fundamentally interdisciplinary
and emphasizes thinking that is firmly anchored in action, intervention, and
scholarly critical practice. With the title Critical Computing – between sense
and sensibility, the 2005 edition of the conference marked how computing was
rapidly seeping into everyday life.
In 2015, we see critical alternatives in alignment with utopian principles—that
is, the aspiration that life might not only be different but also radically better. At
the same time, radically better alternatives do not emerge out of nowhere: they
emerge from contested analyses of the mundane present and demand both
commitment and labor to work towards them. Critical alternatives matter and
make people reflect.
The fifth decennial Aarhus conference, Critical Alternatives, in 2015 aims to set
new agendas for theory and practice in computing for quality of human life. While
the early Aarhus conferences, from 1975 and onwards, focused on computing in
working life, computing today is influencing most parts of human life (civic life, the
welfare state, health, learning, leisure, culture, intimacy, ...), thereby calling for
critical alternatives from a general quality of life perspective.
The papers selected for the conference have undergone a meticulous reviewing
process, looking at methodological soundness as well as the potential for creating
alternatives and provoking debate. Of 71 full and short paper submissions,
21 were accepted. The accepted papers span a broad range of positions and
concerns ranging from play to politics.
We would like to express our great thanks to the numerous people who have contributed
to making the conference possible. In particular, we want to thank Marianne Dammand
and Ann Mølhave for secretarial help, including registration and hotel arrangements.
We also want to thank the Center for Participatory IT (PIT) and the Department of
Computer Science, Aarhus University, for providing
resources for the planning and operation of the conference.
We hope that the conference will inspire critical and alternative thinking and action
for the next decennium.
Olav W. Bertelsen and Kim Halskov
Conference Chairs
Shaowen Bardzell, Susanne Bødker, Ole Sejer Iversen and
Clemens N. Klokmose
Chairs of the Program
Henrik Korsgaard
Proceedings Chair
Conference committee
Conference co-chairs:
Olav W. Bertelsen & Kim Halskov
Communications co-chairs:
Joanna Saad-Sulonen & Eva Eriksson
Program co-chairs:
Shaowen Bardzell & Ole Iversen
Student volunteer co-chairs:
Tobias Sonne & Nervo Verdezoto
Paper co-chairs:
Ole Sejer Iversen, Susanne Bødker & Shaowen Bardzell
Critical practices track co-chairs:
Lone Koefoed Hansen & Jeff Bardzell
Demonstrations co-chairs:
Niels Olof Bouvin & Erik Grönvall
Short paper chair:
Clemens Nylandsted Klokmose
Social chair:
Mads Møller Jensen
Workshop co-chairs:
Peter Dalsgaard & Anna Vallgårda
AV chairs:
Philip Tchernavskij & Nick Nielsen
Proceedings chair:
Henrik Korsgaard
Web chair:
Nicolai Brodersen Hansen
Program committee
Alan Blackwell, Cambridge University, UK
Anna Vallgårda, ITU Copenhagen, Denmark
Anne Marie Kanstrup, University of Aalborg, Denmark
Carla Simone, Milano University, Italy
Carman Neustaedter, Simon Fraser University, Canada
Christopher Frauenberger, Vienna University of Technology, Austria
Christopher le Dantec, Georgia Tech, USA
Claus Bossen, Aarhus University, Denmark
Dag Svanæs, NTNU, Norway
Deborah Tatar, Virginia Tech, USA
Eli Blevis, Indiana University, USA
Elisa Giaccardi, TU Delft, The Netherlands
Erik Grönvall, ITU Copenhagen, Denmark
Gilbert Cockton, Northumbria University, UK
Gopinaath Kannabiran, Indiana University, USA
Helena Karasti, University of Oulu, Finland
Irina Shklovski, ITU Copenhagen, Denmark
John Vines, Newcastle University, UK
Jonas Fritsch, Aarhus University, Denmark
Kia Höök, Mobile Life, Sweden
Liam Bannon, University of Limerick, Ireland
Liesbeth Huybrechts, University of Hasselt, Belgium
Marcus Foth, Queensland University of Technology, Australia
Matthias Korn, IUPUI, USA
Rachel Smith, Aarhus University, Denmark
Ron Wakkary, Simon Fraser University, Canada
Sampsa Hyysalo, Aalto University, Finland
Shad Gross, Indiana University, USA
Silvia Lindtner, University of Michigan, USA
Steve Harrison, Virginia Tech, USA
Stuart Reeves, University of Nottingham, UK
Tuck Leong, University of Sydney, Australia
Volkmar Pipek, University of Siegen, Germany
Wendy Ju, Stanford, USA
Yanki Lee, HKDI DESIS Lab for Social Design Research, Hong Kong
Participatory Trajectories
Revisiting the UsersAward Programme from a Value Sensitive Design Perspective
Åke Walldius, Jan Gulliksen & Yngve Sundblad
On Creating and Sustaining Alternatives: The case of Danish Telehealth
Morten Kyng
Computing and the Common. Hints of a new utopia in Participatory Design
Maurizio Teli
Concordance: A Critical Participatory Alternative in Healthcare IT
Erik Grönvall, Nervo Verdezoto, Naveen Bagalkot & Tomas Sokoler
Keeping Secrets
Networked Privacy Beyond the Individual: Four Perspectives to ‘Sharing’
Airi Lampinen
Personal Data: Thinking Inside the Box
Amir Chaudhry, Jon Crowcroft, Hamed Haddadi, Heidi Howard, Anil Madhavapeddy & Richard Mortier
Deconstructing Norms
Deconstructivist Interaction Design: Interrogating Expression and Form
Martin Murer, Verena Fuchsberger & Manfred Tscheligi
In Search of Fairness: Critical Design Alternatives for Sustainability
Somya Joshi & Teresa Cerratto Pargman
Why Play? Examining the Roles of Play in ICTD
Pedro Ferreira
Double Binds and Double Blinds: Evaluation Tactics in Critically Oriented HCI
Vera Khovanskaya, Eric P. S. Baumer & Phoebe Sengers
The Alternative Rhetorics of HCI
Note to Self: Stop Calling Interfaces “Natural”
Lone Koefoed Hansen & Peter Dalsgaard
Gaza Everywhere: exploring the applicability of a rhetorical lens in HCI
Omar Sosa-Tzec, Erik Stolterman & Martin A. Siegel
Human-computer interaction as science
Stuart Reeves
Charismatic Materiality
Designed in Shenzhen: Shanzhai Manufacturing and Maker Entrepreneurs
Silvia Lindtner, Anna Greenspan & David Li
Material Speculation: Actual Artifacts for Critical Inquiry
Ron Wakkary, William Odom, Sabrina Hauser, Garnet Hertz & Henry Lin
Charismatic Technology
Morgan G. Ames
Critical subjects and subjectivities
An Anxious Alliance
Kaiton Williams
The User Reconfigured: On Subjectivities of Information
Jeffrey Bardzell & Shaowen Bardzell
Creating Friction: Infrastructuring Civic Engagement in Everyday Life
Matthias Korn & Amy Voida
Past and Future
Not The Internet, but This Internet: How Othernets Illuminate Our Feudal Internet
Paul Dourish
Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction
Alan F. Blackwell
Revisiting the UsersAward Programme from a Value
Sensitive Design Perspective
Åke Walldius
KTH Royal Inst. of Technology
SE-100 44 Stockholm, Sweden
[email protected]
Jan Gulliksen
KTH Royal Inst. of Technology
SE-100 44 Stockholm, Sweden
[email protected]
Yngve Sundblad
KTH Royal Inst. of Technology
SE-100 44 Stockholm, Sweden
[email protected]
The UsersAward (UA) programme was launched in 1998,
initiated by the LO (Swedish Trade Union Confederation)
in cooperation with the TCO (Swedish Confederation for
Professional Employees) and a group of researchers from
KTH (as research coordinator), Uppsala University, Gävle
University, and Luleå Technical University.
The goal of the UsersAward (UA) programme is to develop
and maintain a strategy for enhancing the quality of
workplace software through on-going user-driven quality
assessment. Key activities are the development of sets of
quality criteria, such as the USER CERTIFIED 2002 and 2006
instruments, and large domain-specific user
satisfaction surveys building on these quality criteria. In
2005 we performed a first analysis of the values that inform
the criteria and procedure making up the 2002 instrument,
using the Value Sensitive Design methodology. This paper
is a follow-up of that study. We report on new types of
stakeholders having engaged with the UA programme and
reflect on how the conceptual considerations and explicit
values of the programme have shifted as a consequence.
The UsersAward programme follows the “Scandinavian
tradition” of involving users in IT development for use at
workplaces. In the seminal Utopia project in the 1980s the
focus was on user involvement in the design and
development of the IT support [7]. The investigations and
opinion making activities the UA programme has
performed since its inception (domain specific user
surveys, software certifications, prize competitions)
indicate that the users also have to participate in the
procurement, deployment, periodic screenings and further
development of the software [9]. The motivation for the
first analysis of the UA programme from the Value
Sensitive Design (VSD) perspective was ”to understand
how the principled and systematic approach to designing
for human values that Value Sensitive Design offers can
further the understanding of when and how different
stakeholders can contribute to IT design, development, and
deployment in a sustainable and mutually beneficial way”
[10]. The aim of this follow-up paper is to report on new
participants in, and activities and results of, the UA
programme since 2005 and to reflect on how some of the
conceptual considerations and explicit values have shifted
as a consequence of the programme’s development.
UsersAward programme, user satisfaction surveys, user-driven certification of software, workplace IT, Participatory
Design, Value Sensitive Design
ACM Classification Keywords
H.5.3 Group and Organization Interfaces
The Human-Computer Interaction (HCI) and Information
Systems research communities are moving from studying
design and deployment of IT into the workplace to the
digitalization of society as a whole. IT is no longer
considered a separate component in working life, but one of
the driving forces for social and technical innovation. The
digitalization process means that a number of work tasks
are disappearing [3]. Often it is the physically heavy
routine tasks and jobs requiring no or little education that
are affected. In the future the demands for highly educated
workers will increase and consequently new requirements
are put on the working staff, with increased pressure and
stress as a consequence. This makes it important to
consider a wider spectrum of values affected by the
introduction of IT in work and societal contexts.
We briefly summarize the UA programme, VSD as a
critical screening perspective and the main findings
regarding how a better understanding of direct and indirect
stakeholder values involved may benefit the future
development of the UA programme. After that summary
we will report on how we have tried to apply the findings
from the initial paper.
Since 2005, there has been a growing public awareness of
how low quality IT systems negatively affect the work
environment in several sectors of society. We argue that
this increased awareness provides future challenges and
possibilities for HCI research in general and for the UA
programme in particular. We conclude the paper by briefly
describing how the participation of new stakeholders has
resulted in new methods for investigating how software can
benefit both user self-direction and the economic health of
organisations using the software.
Copyright © 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM.
5th Decennial Aarhus Conference on Critical Alternatives,
August 17–21, 2015, Aarhus, Denmark.
investigated workplaces meet the required level of
agreement with the 29 statements. If successful, a detailed
protocol is published, with quotes from the users’ free-form
comments on all pertinent issues. The 2006 revision of the
instrument was prompted by an interest from German
unions and researchers, who, after having replicated
surveys based on the UC 2002 instrument [6], argued for an
adaptation of some of the quality criteria and more realistic
levels of acceptance in respect to the difference between
German and Scandinavian work organisation and team
autonomy. In 2005, three software packages had received
the USER CERTIFIED 2002 label. Between 2005 and 2009
three more packages passed the user satisfaction
screenings. Furthermore, two of the packages applied for,
and passed, new USER CERTIFIED screenings in 2007 and
2009 respectively.
Between 1998 and 2011 the UA programme developed and
performed a unique combination of industry wide user
surveys, user conferences, targeted design projects, yearly
IT Prize contests, and, as the main research challenge, a
Users’ certification instrument and procedure for workplace
software packages [5]. The 2005 VSD screening of the
programme focused on the certification instrument [10].
We briefly outline the rationale and procedure for the USER
CERTIFIED 2002 and 2006 instruments.
The rationale behind the User Certified instrument
The inspiration for the UA programme was the successful
certification programme for display units, TCO’92,
launched by TCO in 1992 through broad cooperation among
organisations. This certification programme had been
regularly upgraded (TCO’95, TCO’99, TCO CERTIFIED)
and had by 2002 put its label on more than 200 million
display units worldwide [2].
Value Sensitive Design (VSD) is a theoretically grounded
approach to design of technology that seeks to account for
human values in a principled and comprehensive manner
throughout the design process [4]. It is today an
acknowledged approach in HCI. Here is a brief summary.
LO wanted the UA initiative to develop a similar model for
the workplace software market. The goal was to develop a
method that was user-driven, both in the sense that it was
initiated by, and developed in cooperation with, Sweden's
two largest employee organizations (2+1.3 million
members), and in the sense that the certificate each
software package received was based on end-users from at
least two different workplaces who, after having operated
the software for more than nine months, had given it their
seal of approval.
In VSD values are viewed neither as inscribed into
technology nor as simply transmitted by social forces.
People and social systems affect technological
development, and technologies shape individual behaviour
and social systems. It is a tripartite methodology, consisting
of conceptual, empirical, and technical investigations,
which are applied iteratively and integratively. Conceptual
investigations comprise philosophically informed analyses
of the central constructs and issues under investigation.
Empirical investigations focus on the human response to
the technical artefact, and on the larger social context in
which the technology is situated, using methods from the
social sciences. Technical investigations focus on the
design and performance of the technology itself.
Applying the USER CERTIFIED 2002 and 2006 instruments
We briefly summarize the USER CERTIFIED instrument, its
design and our experience from applying it in eight
certification processes. For more comprehensive
descriptions see [5, 9].
A key aspect of Value Sensitive Design is its focus on both
direct and indirect stakeholders. The direct stakeholders
include the users of the system in question, the system
developers, and the managers of both users and developers.
The indirect stakeholders are not direct users of the system,
but nevertheless affected by the system, either positively or
negatively. For example, the direct stakeholders for a
hospital scheduling system might be doctors, nurses, and
other hospital personnel. Two important classes of indirect
stakeholders would be the patients and their families – even
though they don’t use the system directly, they are strongly
affected by it.
The assessment procedure starts by asking a software
provider applying for certification to fill out a self-declaration regarding the software and its intended use and
to suggest three workplaces where the user satisfaction of
the package can be assessed. The main activity is the set of
interviews and questionnaire surveys that the evaluation
team carries out at the three workplaces. 29 quality criteria
in the certification questionnaire are presented as
statements to be confirmed on a value scale from 1 (total
dismissal) to 6 (total agreement). At each of the three
workplaces, three end-users and three representatives from
management are interviewed separately, based on the
questionnaire. Also 10% (or at least 10) of the software’s
end-users answer a user version of the questionnaire.
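As a rough illustration, the sampling rule (10%, or at least 10, of the end-users) and the certification decision (a certificate is issued when at least two of the three investigated workplaces meet the required level of agreement with the 29 statements) can be sketched as follows. The function names and the example threshold of 4.5 are our own illustrative assumptions; the instrument's actual required agreement levels are not reproduced in this paper.

```python
import math

def sample_size(n_end_users: int) -> int:
    """End-user questionnaire sample: 10% of the end-users, but at least 10."""
    return max(10, math.ceil(0.1 * n_end_users))

def workplace_agrees(ratings: list[int], required_mean: float) -> bool:
    """Ratings are answers to the 29 statements on the 1 (total dismissal)
    to 6 (total agreement) scale. 'required_mean' stands in for the
    instrument's required agreement level (an assumed placeholder)."""
    assert all(1 <= r <= 6 for r in ratings)
    return sum(ratings) / len(ratings) >= required_mean

def certificate_issued(workplace_ratings: list[list[int]],
                       required_mean: float = 4.5) -> bool:
    """Certificate outcome: at least two of the three investigated
    workplaces must meet the required level of agreement."""
    passed = sum(workplace_agrees(r, required_mean) for r in workplace_ratings)
    return passed >= 2
```

For example, a workplace of 200 end-users would yield a sample of 20 respondents, while one of 45 end-users would still require the minimum of 10.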
Furthermore, VSD makes a distinction between explicitly
supported values (i.e., ones that we explicitly adopt and
support throughout the design of the system), and
stakeholder values (i.e., ones that are important to some but
not necessarily all of the diverse stakeholders for that
system). In the 2005 screening we found that “the principal
explicitly supported values of the UsersAward programme
The questionnaire covers six areas (29 statements in total):
Overall benefits, Deployment method, Technical features,
Worktask support, Communicative support, Local
assessment. The users are considered satisfied as a whole,
and a certificate is issued, when at least two of the
itself are transparency and fairness: transparency, because
we want the process by which software packages are
certified to be open and understandable; and fairness,
because we want the certification assessment to be made in
an unbiased manner.” [10]
Unionen, has made surveys in 2008, 2010, 2012, 2013 and
2015 [8]. And in 2013 the union for civil and municipal
servants, Vision, made their first survey. These surveys are
based on UA surveys of industry, health care and banks
performed in 2002, 2004, and 2005 respectively, surveys
which in turn were based on the UA certification values
and criteria [5].
For the evaluated systems, the screening found that “the
values the programme attempts to foster are all related to
human welfare and human flourishing. They include:
competency development for the individual, the team, and
the organization as a whole (in particular opportunities for
exploration and learning); enhanced degree of self-direction
for individual workers and teams; supporting flexible, self-directed communication within and between work teams;
and economic health of the organization using the system.”
VSD calls on the investigators to consider indirect as well
as direct stakeholders, and harms in addition to benefits.
The 2005 screening of the UA programme resulted in a
recommendation “that the certification instrument
articulates in a more systematic way questions about who
are the indirect as well as the direct stakeholders, and about
the harms as well as benefits of the system for the different
stakeholder groups.” [10] The empirical investigations
proposed in the 2005 screening aim at clarifying “how well
the UsersAward programme supports the values of
transparency and fairness [and] how well it fosters the
values listed above in the systems being evaluated – and
whether there are other values that should be added to the
list, or values that should be clarified or subsumed.” [10]
Related to increased activism of individual unions was a
shift of ownership of the UA programme. As of this
writing, TCO Development has just acquired the
programme, methods and databases. With its 20 years of
experience of environmental labelling of IT hardware,
a new challenge for the UA
research and development now is to complement the TCO
Certified label designed to support the values of hardware
ergonomics and the social responsibility of IT
manufacturers with the UsersAward label for IT workplace
software usability. This, also, calls for a reorientation of the
UA research.
Are UA’s declared explicit values of 2005 still relevant?
For the UA programme as a whole, we interpret an
increased union activism and a more resourceful owner as
benefitting an increased awareness of concrete usability
problems at Swedish workplaces. Other UA initiatives we
regard as supporting the transparency and fairness of the
programme are the pilot study mentioned above [1], and
two international research workshops arranged around the
concept of User-driven IT Design and Quality Assurance
(UITQ), which led to, among other things, two replications
of UA certification studies by a German research group [5].
The 2005 VSD screening exercise was very important for
our own understanding of the UA activities. We were
fortunate to have Alan Borning, one of the most
knowledgeable proponents of the approach, as co-author.
However, the closest we have been to a renewed meta level
screening is a small (109 respondents) survey study carried
out in 2009 in order to gauge the needs of different
stakeholders interested in the programme and responding
to the survey – buying companies, user organisations,
public agencies, municipalities, IT providers, IT
consultants and universities [1]. Although the study was not
done along the VSD lines of thought, it resulted in some
reconsiderations of the UA values which we will briefly
mention below. In the absence of a systematic VSD
analysis the following points of self-assessment can be seen
as preparation for such a study when we (or, preferably,
external reviewers) get the means of conducting one. First,
we will account for two major changes in the environment
of the UA programme since 2005. Then we will try to
pinpoint what this has meant for the adherence to UA’s
explicit values in the UA practice and to our
reconsiderations of these values.
The growing series of union reports, which typically cover
a randomised sample of about 2 000 users responding to
about 30 statements, clearly show that users experience no
or only a slow increase in the explicit values that UA
promotes in the systems – competency development, self-direction, and
economic health of the organisation [8]. The most alarming
results concern the lack of self-direction – users are not
allowed to participate in the deployment of new systems in
the areas problematized in the surveys: how work
tasks should be transformed, how functionality should be
prioritised, and how education and training should be
delivered in order to have the systems support the new
ways of working [9]. Under the heading “Four out of ten
think that the IT-systems are difficult to use”, the Unionen
2015 report concludes: “Here is a potential for cost
reductions for the employers and reduced stress and
frustration for the users.” [8]
A further indication of the relevance of the values declared
in 2005 is the fact that the Swedish health promoting
agency Prevent, jointly owned and managed by the
employer and employee organisations, has initiated a
programme against what they term “TechnoPanic”. This we
interpret as a sign that the direct interest of organised users
with respect to self-direction and competence is shared by
the employer organisations with their more direct interest in the
economic health of the member companies they serve
through the Prevent health programme.
Changes in the UA programme environment
The first major change regarding the context for the UA
programme is that the individual trade unions have started
to perform IT user satisfaction surveys of their own
memberships. The biggest union for white collar workers,
Development’s focus on challenging hardware and
software providers to live up to high quality standards has
underlined the importance of investigating how investments
in usability may result in better work environment as well
as a sustained or increased level of economic health for the
organisations that use good quality software.
The survey study in 2009 shed some light on one of the
questions raised in 2005 – which other stakeholders should
be involved in user-driven IT quality assurance? The results
indicated that both user organisations and universities
ranked higher as trustworthy institutions than public
agencies and private companies [1]. The non-profit
company TCO Development (TCO-D, new owner of UA)
represents a special case, as it is in turn owned by a trade
union confederation. But more than that, the clients of
TCO-D are not only the federated unions and their
membership. TCO-D also addresses management
representatives of hardware and software providers directly
with certification services. This in turn moves the economic
health of companies, specifically through more self-direction
for the users, to the forefront as an explicit value
for a renewed UA programme.
This research was supported in part by the Swedish Agency
for Innovation Systems (VINNOVA).
1. Bengtsson, L., Ivarsen, O., Lind, T., Olve, N., Sandblad,
B., Sundblad, Y., Walldius, Å. (2009). Nya initiativ för
en pejling av behovet hos UsersAwards olika
intressentgrupper. TR TRITA-CSC-MDI 2009.
2. Boivie, P.E. (2007). Global standard – how computer
displays worldwide got the TCO logo, Premiss (2007).
3. Frey, C. B. & M. A. Osborne, “The Future of
Employment: How Susceptible Are Jobs to
Computerisation?” Oxford University manuscript, 2013.
Anticipating the TCO-D future involvement in UA, the UA
research 2009-2011 focused on understanding economic
impacts of managing usability. A new format for analysing
and presenting results from certification was developed [5].
The instrument applies strategy maps, a tool developed by
Kaplan & Norton for facilitating iterative, collaborative
deliberation among stakeholders on strategic issues. As of
this writing, we have only used usability strategy maps in
certification protocols and in minor field studies. The most
visible foregrounding of usability efforts contributing to
economic health is the new survey question “How much
time would you save per day if the IT-systems worked the
way you wanted?” This question has yielded stable member
estimations of around 25 minutes per day, resulting in
assessments of overall potential savings of more than
10000 MSEK a year for the Unionen membership alone
[8]. Through collaboration within the Nordic eHealth
Network (NeRN), mandated by the Nordic Council of
Ministers, this question has in turn been taken up in similar
user satisfaction surveys in some of the other Nordic
countries [11].
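The order of magnitude of that savings estimate can be reproduced with a back-of-envelope calculation. Only the 25-minute figure comes from the surveys; the membership size, working days, and hourly labour cost below are our own illustrative assumptions:

```python
# Only the 25-minute figure is from the surveys [8]; the rest are
# illustrative assumptions for an order-of-magnitude check.
minutes_saved_per_day = 25          # stable survey estimate [8]
members = 500_000                   # assumed Unionen membership
working_days_per_year = 220         # assumed
labour_cost_sek_per_hour = 300      # assumed loaded labour cost

saving_per_member_year = (minutes_saved_per_day / 60
                          * labour_cost_sek_per_hour
                          * working_days_per_year)   # SEK per member per year
total_msek = members * saving_per_member_year / 1_000_000

print(round(total_msek))  # on these assumptions, well above 10 000 MSEK
```

On these assumptions the potential saving comfortably exceeds the 10,000 MSEK per year reported for the Unionen membership.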
4. Friedman, B. (ed.) Human Values and the Design of
Computer Technology, Cambridge University Press and
CSLI, New York, NY and Stanford, CA, 1995.
5. Ivarsen O., Lind T., Olve N.-G., Sandblad B., Sundblad
Y. & Walldius Å. (2011). Slutrapport UsersAward2 –
utvecklad kvalitetssäkring av IT-användning, MID,
6. Prümper, J., Vöhringer-Kuhn, T. & Hurtienne, J.:
UsersAward – First results of a Pilot Study in Germany,
In Proceedings of the UITQ 2005 workshop.
7. Sundblad, Y.: UTOPIA - Participatory Design from
Scandinavia to the World, in Proceedings History of
Nordic Computing 3, pp. 176-186. Springer Verlag,
Heidelberg (2010).
8. Unionen (2015). Tjänstemännens IT-miljö 2014 –
“Lyssna på oss som ska använda det”. Unionen.
9. Walldius, Å., Sundblad, Y., Sandblad, B., Bengtsson,
L., Gulliksen, J. (2009). User certification of Workplace
Software – Assessing both Artefact and Usage.
Behaviour & Information Technology, vol. 28, no. 2,
pp. 101–120.
VSD is usually employed in guiding the design of
individual information systems. In the preliminary
screening of the UA programme in 2005, we used VSD to
inform the design of a programme intended to impact the
design of computer systems – in other words, we were
working one level removed from the design of the
individual IT system. In this revisit of the UA programme,
we conclude that the programme’s explicit values are the
same as in 2005, although there has been a shift towards
understanding the mutual interplay between user self-direction and economic values of IT usage. TCO
10. Walldius, Å., Sundblad, Y., Borning, A. (2005). A First
Analysis of the UsersAward Programme from a Value
Sensitive Design Perspective. In Bertelsen, O.W.,
Bouvin, N.O., Krogh, P.G. and Kyng, M. (eds.),
Proceedings of the Fourth Decennial Aarhus Conference,
August 20–24, 2005, pp. 199–202.
On Creating and Sustaining Alternatives:
The case of Danish Telehealth
Morten Kyng
Aarhus University & The Alexandra Institute
Aabogade 34; 8200 Aarhus N; Denmark
[email protected]
platform, called OpenTele. OpenTele is already the most
used telehealth platform in Denmark and in January 2015
the five Danish regions, which own the public hospitals,
decided to increase its use over the next few years.
This paper presents and discusses an initiative aimed at
creating direct and long lasting influence on the use and
development of telemedicine and telehealth by healthcare
professionals, patients and citizens. The initiative draws on
ideas, insights, and lessons learned from Participatory
Design (PD) as well as from innovation theory and software
ecosystems. Last, but not least, the ongoing debate on
public finances/economy versus tax evasion by major
private companies has been an important element in
shaping the vision and creating support for the initiative.
This vision is about democratic control, about structures for
sustaining such control beyond initial design and implementation and about continued development through
Participatory Design projects. We see the “middle
element”, the structures for sustaining democratic control
beyond initial design and implementation, as the most
important and novel contribution of the paper.
4S began as an effort, initiated by researchers, to get the
most out of a set of IT infrastructure tools, primarily in
terms of benefits for public health service providers and
private vendors. 4S then developed into what it is today: an
organization facilitating direct and long lasting influence by
healthcare professionals, patients and citizens.
The key people involved in the development of 4S are
researchers in Participatory Design (PD), IT managers from
the public hospital sector, and – for the last year – doctors
and other healthcare personnel working at public hospitals.
In recent months people from patient organizations have
joined the initiative, and they expect to play a key role in
the future. For the PD researchers 4S was originally an
initiative quite separate from their work in PD. That work
was about democracy, mainly in terms of doing design in
ways that allowed participating users to contribute in ways
that made the outcome more attuned to their interests. 4S
was about national infrastructure, open source software
tools and software ecosystems. However, as it turned out,
4S could be developing into a very successful mechanism
for sustaining democratic user influence.
Author Keywords
Contexts for design; sustaining results; control; Participatory Design; software ecosystems; innovation; Open
Source; telehealth; telemedicine; healthcare technology.
ACM Classification Keywords
D.2.10 Design, Methodologies, D.2.m Miscellaneous
H.5.m. Information interfaces and presentation (e.g., HCI):
One might say that Scandinavian Participatory Design (PD)
began as an effort towards extending concerns for
workplace democracy to cover IT [2], and gradually, over
more than four decades, focused more and more on
techniques for PD, see e.g. [3-5]. In contrast, 4S began as
an effort to get the most out of a set of IT infrastructure
tools, primarily in terms of benefits for private vendors and
public health service providers [6]. From there, 4S
developed into an organization facilitating and sustaining
direct influence on the use and development of telehealth
by healthcare professionals, patients and citizens. In doing
so 4S has added a new type of context for PD in Denmark.
This paper is about how users of technology may achieve direct and long-lasting influence on the IT systems they use. The case presented and discussed is telemedicine and telehealth in Denmark. The main mechanism for achieving this influence is a foundation controlled by healthcare professionals, patients, citizens, public healthcare organizations and researchers. The foundation, called 4S [1], controls two sets of open source software: an IT-infrastructure toolbox and a telemedicine/telehealth¹ platform.
In our view, the case of 4S, especially how 4S changed over
time, presents important lessons for those who want to
develop and sustain critical, democratic alternatives, and for those who want to contribute to the continued development of Participatory Design (PD) – especially those who are interested in the relations between PD and its larger setting
Copyright © 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM.
5th Decennial Aarhus Conference on Critical Alternatives
17 – 21 August 2015, Aarhus, Denmark
¹ From now on we just use the term telehealth, and take it to include telemedicine.
in society. Therefore we describe the development of 4S in quite some detail. This description involves quite a few elements: organizations, projects and software. To assist the reader these are listed below:

Regions: Denmark has five regions whose main task is to administer the public hospitals.

Municipalities: There are 98 municipalities. Their main tasks include administration of prevention, elder care, e.g. nursing homes, and public schools.

Net4Care: A research project that developed a proposal for a national infrastructure for telehealth. Net4Care is also the name of an infrastructure toolbox developed by the project.

OpenTele: An open source telemedicine/-health platform developed by three of the five Danish regions and used in two large national projects.

4S: A foundation with members from the three regions that developed OpenTele, a municipality, the state level and from OpenTele users and research. The researchers also participated in the Net4Care project. 4S governs two sets of open source software: the Net4Care infrastructure toolbox and the OpenTele telehealth platform. 4S has a board, a coordinator, a software group and healthcare forums.

The rest of the paper is organized as follows: First we present 4S: how it began, the steps toward what it is today, and what 4S is currently working on. Secondly, we look at why 4S might work, i.e. what are the characteristics of telehealth and the societal context that make it possible for an organization like 4S to succeed. Then, in the Discussion section, we look at 4S in relation to ongoing debates in PD on democracy and sustaining results – and we briefly reflect on why it took so long to develop the 4S initiative. We conclude the paper with a short discussion of what we have learned from 4S.

The First Idea: Shared, Public Infrastructure
Developing the blueprint for the 4S organization began in 2011. The idea was to create an organization to promote and further develop an open source IT-infrastructure toolbox. The toolbox was developed as part of a research project on national infrastructures for telehealth and named Net4Care after the project [6]. The aim of the toolbox was to make it easier for an IT company to effectively and safely support sharing of health data across health service providers, e.g. hospitals and general practitioners, and across systems from different vendors – and, through such sharing, make it simpler for the health service providers to create better services. The aim of 4S was to support expanding use of the toolbox after the end of the Net4Care project, i.e. to sustain the results of the project. Looking at other similar projects, both research projects and public, open source projects, we believed this to be a major challenge: The results of research projects, like Palcom [7], are not being used nearly as much as intended and thus are not developed further after the end of the project. Or public open source projects create animosity due to lack of good documentation, lack of tutorials, and/or a flawed governing structure [6].

The Second Idea: Open Source Telemedicine Platform
The Net4Care infrastructure toolbox, and the architecture on which it was based, had many merits [6], but it was difficult to create sufficient interest among health service providers and vendors. To them Net4Care was just another research project without backing from strong public or private players. Thus we were heading down the road towards "not being used nearly as much as intended". Two things changed this. First we suggested that 4S should also have governance of an open source telehealth platform, called OpenTele. OpenTele was being developed by three of the five Danish regions as part of two large telehealth projects. And the regions saw 4S as a way to increase the likelihood of sustaining OpenTele after the end of these two projects. The inclusion of OpenTele, as was our intention, dramatically increased the interest in 4S among both these three regions and the healthcare professionals using the OpenTele system. Secondly, the public authority responsible for health IT, The National eHealth Authority, published a so-called reference architecture for telehealth [8]. The Net4Care infrastructure toolbox was fully aligned with this reference architecture. Obviously this dramatically increased the interest in 4S among health IT companies that wanted to understand and use the reference architecture.

Then, in June 2013, the 4S foundation outlined above was formally established, and became copyright holder of the two sets of open source software: the OpenTele platform for telehealth and the Net4Care infrastructure toolbox.

Implementing 4S
In the following months we focused on four areas: interactions with stakeholders, developing the role of the board, involvement of healthcare professionals and patients, and handling governance of the open source software.

Interactions with main stakeholders:
Private IT-companies and users of OpenTele
The first task that 4S focused on was creating links with
private IT-companies that were potential users of the
infrastructure toolbox. To this end we set up an experiment
to investigate and illustrate the effectiveness of the toolbox.
This resulted in a public demonstration of sharing data
across systems of different types, e.g. homecare telehealth
and electronic patient records, and from different vendors at
a conference in 2013 [9]. One of the results was quite
strong support from The Confederation of Danish Industry.
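The principle behind this demonstration – that systems only need to agree on a common document format in order to share data, rather than integrating pairwise with each other's internal data models – can be sketched in a few lines. The following Python fragment is purely illustrative: the function and field names are our own invention and are not part of the Net4Care toolbox itself.

```python
import json

# Illustrative sketch of cross-vendor data sharing via a common format.
# All names and fields below are hypothetical, invented for this sketch.

def to_shared_document(patient_id, observation, value, unit, timestamp):
    """'Vendor A' maps an internal record to the common exchange format."""
    return json.dumps({
        "patient": patient_id,
        "observation": observation,
        "value": value,
        "unit": unit,
        "time": timestamp,
    })

def from_shared_document(doc):
    """'Vendor B' parses the same document without knowing anything
    about Vendor A's internal data model."""
    return json.loads(doc)

# A homecare telehealth system publishes a reading...
doc = to_shared_document("patient-001", "blood-pressure-systolic",
                         142, "mmHg", "2013-06-01T09:30:00")

# ...and an electronic patient record from another vendor consumes it.
reading = from_shared_document(doc)
print(reading["observation"], reading["value"], reading["unit"])
```

Seen this way, the decisive property demonstrated in 2013 was decoupling: each vendor only maps to and from the shared format, so any pair of systems can exchange data without bespoke integration work.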
Our second task was creating links with the users of
OpenTele. This proved to be more difficult than expected,
primarily because the two projects using OpenTele were
behind schedule, and for that reason were closely followed
by their funding agencies. The top-level project managers
did not see dialogue with 4S as justifiable amidst
difficulties in catching up, and thus did not want to
facilitate cooperation between project participants and 4S.
However, one of the sub-projects, the one on complicated
pregnancies, decided that cooperation was worthwhile and
over the next two years this cooperation became the catalyst
for the current high involvement of healthcare professionals
and patients in 4S. Figures 1 and 2 illustrate being monitored at the hospital and at home for a woman with a complicated pregnancy. Being monitored at the hospital typically means hospitalization for eight to ten weeks. This is one of the reasons why many prefer tele-monitoring.

Figure 1. Monitoring at the hospital.

Figure 2. Monitoring at home.

Developing the role of the board:
From project focus to national goals
The board of 4S consisted of IT-managers from the three regions developing OpenTele, a representative from the organizations that developed the Net4Care infrastructure toolbox, a medical representative from the organizations that used OpenTele, and two representatives from the national level. Shortly after the formalization of 4S an IT-manager from a large municipality joined the board. A PD researcher from the organizations that developed the infrastructure toolbox had the role of CEO on a part time basis.

A major task that the board wanted to work with was the creation of synergy in the continued development of OpenTele. This development was distributed across the two large projects, and one of these projects consisted of five almost independent sub-projects. Thus on the one hand coordination and creating synergy might be difficult. On the other hand, the composition of the board made the members believe that they could orchestrate this. As it turned out, the "project logic" overruled most of the efforts aimed at creating synergy. This project logic emerged from time schedules so tight that coordination often was impossible and analysis of alternatives never happened. On the positive side, the problems in creating synergy have increased the willingness of the regions to give more power to 4S in the development of plans for the future of the OpenTele platform.

Involvement of healthcare professionals and patients
Democratic control by healthcare professionals and patients/citizens was not on the original agenda of 4S, but it was the intention to use 4S to supplement the current influence by managers and IT developers with input from users. 4S chose to make this visible through the organizational structure, which consists of the above-mentioned board, a software group and healthcare forums. Especially the forum on complicated pregnancies was an active participant in fieldwork-based evaluations and in organizing workshops with pregnant women and healthcare professionals to generate suggestions for improvements of OpenTele, cf. Figure 3 below. This forum was also responsible for organizing the first national workshop for healthcare professionals on the use of telehealth for complicated pregnancies.
Figure 3. Workshop with pregnant women, and women who had recently given birth, to generate suggestions for improvements of OpenTele.

Governance and handling of the open source software
The last area that we worked with concerned all the traditional tasks related to governance of open source software, such as setting up a code repository and developing processes for quality control when modifying or adding software, including bug-tracking. It also included documentation, tutorials and discussion forums. In addition we wanted to develop standard contracts to be used by healthcare providers when contracting work based on the 4S software.

When we look at the ability to realize these goals there is a marked difference between the infrastructure toolbox and the OpenTele platform. With respect to the toolbox all the relevant goals have been met. This is primarily due to the fact that the group behind the formation of 4S also developed the toolbox and that this group actively promotes 4S and involves others in the continued development of the toolbox. This includes involving both private companies and national organizations working with telehealth. In addition no tight timetables are currently involved.

When it comes to OpenTele, the situation is – for several reasons – very different. First of all, the OpenTele software uploaded to the 4S code repository has so far been several steps behind the current versions used by the two telehealth projects. The three regions paying for OpenTele want 4S to have the most recent code. However, transfer of real governance – as opposed to formal governance, described in documents – and transfer of code from the company originally developing OpenTele to 4S was never part of the "project logic" mentioned above. Thus the regions have so far failed to allocate the necessary resources and "decision power" to speed up this transfer. And the private company responsible for the versions used by the telehealth projects currently has no real incentive to change this. The situation is, however, not satisfactory, and the 4S board has recently decided to set up a timetable for finishing the transfer.

Realizing the Democratic Potential
It was the intention of all 4S board members that a number of different companies should begin to develop new or revised modules for the OpenTele platform, e.g. a module supporting the use of smart watches or a better visualization of different health data. However, nothing happened, and in the spring of 2014 only the company that developed the original OpenTele solution was developing on OpenTele. Therefore 4S decided that the research group behind the infrastructure toolbox should step in and develop a small number of new modules as well as some modifications to existing modules in close cooperation with healthcare professionals and patients, cf. Figure 4 below.

Figure 4. Workshop with health personnel and a pregnant woman to generate suggestions for improvements in the use of OpenTele at the hospital and improvements to OpenTele itself.

Through this we learned a lot about what was needed to facilitate a multi-vendor strategy and – even more important – we realized that 4S had the potential to become an organizational frame for democratic influence in the area of telehealth through the control of the open source platform OpenTele and the possibilities for continued development of the platform based on Participatory Design (PD). Briefly stated, we advocate PD as a way to develop great contributions with respect to both IT and organization. We advocate 4S as a way of keeping control over the software and its development. And we advocate open source as a way of creating competition among different solutions and providers – ranging from research groups to private companies – in a way that will enhance quality and keep costs down.

Supporting a Societal Trend:
The Next Tasks, Opportunities and Challenges
In January 2015 the five Danish regions decided to implement infrastructure for telehealth within two years, and to do so based on the national reference architecture, thus increasing the importance of the 4S infrastructure toolbox. Secondly, they decided that the main elements of the OpenTele platform should form the basis for telehealth solutions. These decisions open new possibilities of influence for 4S. At the same time the decisions imply that several different actors will want to exercise influence on 4S. Thus those interested in the democratic aspects of 4S should step up their efforts in order not to be marginalized.

First of all, as part of this increased focus on promoting "democracy and PD", the key people behind the democratic aspects of 4S, i.e. PD researchers and healthcare professionals, realized that "creating great solutions via PD" and "keeping control over software via 4S" weren't enough to create the interest and engagement that was needed. Based on our discussions with different healthcare professionals working at public hospitals that experienced continual budget cuts, we realized that many saw 4S as a way of "fighting back" – if not literally, then in terms of providing solutions and opportunities for development that were not dependent on private companies and their profits. So this became the new story that made 4S attractive to many healthcare workers. And – as discussed in the next section – a story that was aligned with a growing societal trend of "fighting back" when tax cuts and public budget cuts were presented by governments as the only way forward.
As part of developing this aspect of 4S we have begun to orchestrate critical debate and sharing of experiences and ideas among healthcare professionals, patients, citizens, public healthcare organizations and researchers. In addition 4S facilitates cooperation among them – also when they do not agree. As it turned out, the decision by the five regions to implement infrastructure for telehealth caused major disputes to develop with the municipalities that are responsible for public home care. In this situation, 4S succeeded in organizing a series of workshops and meetings to develop a compromise. However, looking at the current options for the above-mentioned groups, we find that especially patients and citizens as well as healthcare professionals need new venues and opportunities. 4S has begun doing this through our health forums, cf. [1], and to our knowledge 4S is the only Danish organization facilitating this kind of activity. In addition to the activities organized by the health forums we have just begun the process of having a representative from a patient organization join the board.

Secondly, 4S is developing its role as an active partner in the creation of PD projects within telehealth aimed at adding to and improving the 4S software. However, the explicit focus on PD is new and we only have a few results so far.

Thirdly, 4S continues to improve its open source code repositories and processes for accessing and delivering code. These processes facilitate the transfer of results of successful, relevant PD projects to the open source software controlled by 4S. This includes documentation, quality control and CE marking according to e.g. the Medical Device Directive of the European Union.

Finally, 4S continues to provide and improve resources that facilitate use of the 4S software. The plan for the next 18 months includes texts on telehealth for patients and healthcare professionals, material on open source business models for customers and suppliers, as well as blueprints for contracts covering development, maintenance and operation.

Why 4S Might Work
In this section we first look at telehealth using different disciplinary perspectives that we find to be important in understanding the possibilities open to 4S. These perspectives include innovation, software ecosystems, open source, public finances, and Participatory Design. Based on this understanding we then look at the different stakeholders in telehealth in Denmark and in 4S. We discuss their interests and how different aspects of 4S may make them positive, neutral or negative.

Innovation
Telehealth, including telemedicine, has been around for many years. However, there are no widely used set-ups, no widely recognized good business cases and no big players with a strong, successful product, cf. e.g. [10-12]. Thus we may characterize telehealth as an emerging area in the sense that there are no so-called dominant designs [13]. We may also say that telehealth is in the fluid stage. In this stage different actors experiment with new functionalities, user involvement is common and solutions are often flexible but ineffective [14]. To make an analogy from the area of products, we may look at the tablet computer. In 1989 GRiD Systems released the first commercially available tablet-type portable computer [15]. But the fluid stage continued for more than two decades and only ended when Apple introduced the iPad, which quickly established itself as the leader among a few dominant designs. During the fluid stage the basic concepts or designs change quite often, new players have a reasonable chance of success, and cooperation with users increases the chance of success [13].

Drawing on C. Christensen's notion of disruptive technology [16] we also observe that new players have especially good chances of success in the cases where they develop new types of solutions for a new market and these solutions only later become interesting in one or more big, established markets. Thus in the area of telehealth, a small new organization like 4S may have a much better chance than a similar effort directed towards, say, Electronic Patient Records, an established area dominated by multi-billion dollar companies like Epic Systems Corporation [17]. And if solutions in the telehealth area develop in ways that over time make them attractive also in relation to other kinds of hospital-based healthcare services, then theories like those of Christensen tell us that small players like 4S have a reasonable chance of entering the bigger market of, say, hospital IT systems.

Software ecosystems
Our first ideas on creating 4S to promote and further develop an open source IT-infrastructure toolbox were subsequently rephrased, analyzed, and further developed using concepts from the area of software ecosystems. The research field of software ecosystems has emerged as the study of complex interactions between software frameworks and software architectures on one hand, and organizations, users, customers, developers, and businesses on the other. Based on a recent literature review [18] we define a 'software ecosystem' as: "the interaction of a set of actors on top of a common technological platform that results in a number of software solutions or services". Further, "Each actor is motivated by a set of interests or business models and connected to the rest of the actors and the ecosystem as a whole with symbiotic relationships, while the technological platform is structured in a way that allows the involvement and contribution of the different actors." Well-known examples of software ecosystems are the Apple iOS and Google Android ecosystems. The telehealth ecosystem in Denmark is a very different story, but all the same we were able to use the software ecosystem concepts to analyze the status quo, identify the lack of connectivity as a key issue, and to argue for the possibility to improve the ecosystem through the creation of 4S and an active guardianship of the Net4Care infrastructure toolbox. We also learned that 4S should pay specific attention to the "interests or business models" of all major actors. Regarding the infrastructure toolbox itself, this worked quite well. The toolbox was seen as useful, and it did not compete with any existing products. However, our inclusion of the telehealth platform OpenTele challenged our relationship with many private vendors, which viewed OpenTele as a direct competitor to their closed source solutions. Subsequently we have worked directly on developing business models that illustrate how different kinds of vendors may benefit from both the infrastructure toolbox and the telehealth platform. As of mid 2015 several of the main, private actors of different kinds are positive or neutral towards 4S including the OpenTele platform, as illustrated by a recent press release from the Danish Regions co-signed by the Confederation of Danish Industry [19]. For more on 4S in a software ecosystem perspective see [6].

Open Source
Open source was chosen for both the infrastructure toolbox and the OpenTele platform early on, i.e. prior to the formation of 4S. With respect to the toolbox it was a natural choice for modules that we intended to be used in as many telehealth – and other health-IT – systems as possible. It was also an approach we had used earlier in EU research projects developing infrastructure tools, cf. [7]. Concerning the OpenTele platform, the decision to use open source was made by the three regions financing the development. One reason for this decision was their experiences with license-based payment models for some commercial telemedicine systems. While these models were acceptable in small to medium sized pilots, they presented – in the view of the regions – an unacceptable price tag when scaling up. Open source was seen as a way of getting rid of the license fees. In addition, open source was seen as a way to realize a multi-vendor strategy that would support both better quality and competitive prices. For the company developing the first versions of OpenTele, open source was probably important because this made it possible for the company to develop new versions for new markets without having to negotiate with any owners of the software. Looking back, the decision to use open source has proven to be very important in the development of support for the 4S vision of democratic control and PD. This is discussed below.

Public finances and tax evasion
The healthcare systems around the world are under increased pressure: populations are growing, the percentage of elderly and old people is increasing, and the number of possible treatments is growing, as are the costs of medicine. In addition the expected level of public service is growing in countries like Denmark. As a result, current models for financing public health are in general considered not to be sustainable, see e.g. [20]. In this situation many governments look to telehealth as a way to keep expenditure under control. However, so far such expectations have not been validated, see e.g. [10, 11]. The most common conclusion seems to be that telehealth is not one specific thing, and that the way different elements are put together and integrated in context is what results in bad, neutral or good outcomes for the different stakeholders [21]. Following this line of thought we find that 4S offers two important possibilities: First of all, the infrastructure toolbox and the OpenTele platform make it possible to tailor many different kinds of telehealth solutions, and to integrate these with other types of health IT systems in many different ways. Secondly, since both the toolbox and OpenTele are open source, it is possible to use and modify them in experiments, and then – based on the 4S processes for quality control – transfer results into the 4S code base and use them in large-scale operation.

In addition, the debate on financing the modern welfare state has changed in recent years. For many people it is no longer only a question of how to control public spending, but also a question of how to counteract unfair rules and regulations as well as systematic tax evasion by big international corporations – tax evasion on a scale that threatens national coherence through extreme inequality, cf. [22] and, for a discussion of these issues related to innovation, [23]. As it turned out, this changed public climate makes it possible to tell the story about 4S as a way of providing a democratic alternative to big corporations with flawed tax morale. In many cases where we have discussed the possibility to use OpenTele with hospital personnel, this aspect of 4S has played an important role. Especially in situations where hospital budgets are being reduced, the idea of having a non-profit, public organization like 4S governing open source software is very attractive to many hospital employees.
Participatory Design
Participatory Design has, since is emergence in the early
1970s, been seen as a vehicle for democratization of tech-
nology, although views has varied quite a bit on how successful PD has been and are in this respect, see. e.g. [24-26]
teaching material for the unions was an important element.
In addition it turned out that existing system development
methods were inadequate for this endeavor, and thus – by
necessity – the researchers in the Utopia project began to
develop tools and techniques for PD [31].
In relation to our discussion of 4S and why it might succeed
we’ll first look at the issue of visions and then consider
partners and results.
Looking at this list of results, it is not surprising that the
trade unions was a very strong partner in the first two
generations of project. They co-created the venues for
debate and dissemination, and they were partners in a
comprehensive educational effort based on the trade union
educational programs for shop stewards. An example on
this is the final report from the Utopia projects. It was
written for graphical workers, printed in 70.000 copies and
distributed by the Nordic unions to all their members.
Participatory Design, especially in the Scandinavian
varieties, had from its formation in the early 1970s and the
following two decades a strong and appealing vision based
on work place democracy and more broadly democratization of technology. If we look at the activities and
publications from this period it is obvious that this vision
played a very important role when people chose to work
with PD, see e.g. the volumes of the Scandinavian Journal
of Information Systems cited above [24-26].
Since the late 1990s focus in many, probably in most, PD
projects has been on how best to involve people
(users/potential users) in the development of ICT in ways
that allow them to promote their own interests in relation to
the ICT being developed. Focus is often on the project
level, on PD tools and processes, and it seems that focus on
structures outside the individual projects has decreased, see
e.g. [5, 24-26, 32].
From the 1990s and on “work place democracy” faded and
only the more abstract “democratizat¬ion of technology”
remained. This change in vision meant that PD changed
from being part of something that was important for trade
unions and their members to something important for those
who were involved in a PD project, and to PD researchers.
One might characterize this as a change from political to
academic. This development is also reflected in [27], where
Shapiro argues that PD theoretically has the potential to
produce better systems. This is a very important point for
PD researches, but we have found it difficult to use this
kind of argument to create engagement among patients and
healthcare personnel. For example we found it difficult to
convince users that PD is much better than agile
development methods like say SCRUM [28].
At least two kinds of arguments have been used as
explanations: the changing role of trade unions and
“normalization” of use of technology [4]. The first
argument assumes that the weakening role of trade unions
over the last decades makes it difficult for PD researchers
and/or trade unions to continue the kind of cooperation that
the first and second generation of projects represented. In
our view this is not substantiated: trade unions still have
comprehensive course programs, they engage in struggles
for organizing labor of different nationalities, including
tracing opaque ownership relations, debates on the impact
of globalization etc. We find it more likely that
“normalization” plays an important role. Normalization
refers to the changes in the use of technology at the
workplace that have occurred since the early 1970s. Back
then IT was new to the shop floor and the first generation of
projects played an important role in helping the unions to
understand potential implications of IT and how to deal
with them.
The vision of 4S – democratic control, structures for
sustaining such control, and continued development
through Participatory Design projects – has created strong
engagement from healthcare professionals and patients,
much stronger than the abstract “democratization of
Partners and results
In Scandinavia the first generation of projects (e.g. The
Norwegian Iron and Metal workers project [29] focused on
how local trade unions might influence managementcontrolled development and deployment of IT systems at
the individual workplace in order to promote worker
interests [2, 4]. An important part of this work consisted of
developing courses for local union people, including the
teaching material used. Another element was national dataagreements that regulated the way management introduced
ICT systems [3, 4].
In the second generation of projects (e.g. the UTOPIA
project [30]) the focus was on worker-controlled design of IT
systems from a trade union perspective, both to generate
demands when negotiating with management and to form
the basis for development of specific systems by
commercial companies. And once more, developing
When we consider the Utopia project, it took place at a time
when information technology was about to dramatically
change the newspaper industry – and once more this was a
project that played an important role in helping the trade
unions understand the potential implications and to develop
a design of a system that represented an alternative to the
major commercial products being developed.
During the 1990s and in the following decade the use of IT
has become business as usual in many industries, and in our
view, this is a primary reason why the projects of the first
generations are difficult to "re-invent"2.
line of reasoning Concrete Consensus, and describes it in the
following way:
• The system and organization of work/activities will have
positive effects for both users/employees and for
buyers/employers (representing the owners of the
organization using the system).
• It is possible to find or establish one or more companies
that will implement and market the system.
• A market exists for the system, i.e. there are
companies/individuals who will buy it.
The Utopia project designed a system that probably fulfilled
the first bullet, failed with the second, and thus did not
provide real information on the third [30].
In our analysis of the possibilities of 4S to succeed in
providing substantial improvements for patients/citizens
and healthcare personnel, we use the above notion of
concrete consensus as a way of understanding how the
interests of patients/citizens and healthcare personnel may
co-exist with the interests of other stakeholders.
When we look at 4S in this light we see that a strong
partner like the unions is missing. However, 4S is
addressing an area where the use of technology is emerging
and about to dramatically change treatment, rehabilitation
and prevention. 4S is thus producing a number of results,
which makes 4S very relevant to large numbers of
healthcare professionals, citizens and citizen organizations
as well as to patients and patient organizations. We therefore
find that 4S has the potential to expand its role as a part of
the discourse on telehealth among these groups and to
facilitate development of e.g. courses on telehealth for
home care nurses and different patient groups. In addition,
4S is currently facilitating a small but growing number of
PD projects aimed at improving telehealth, for
patients/citizens and healthcare professionals, as well as for
the public healthcare providers. The results of these projects
are sustained by transferring the software to the 4S
OpenTele code base and by making the relevant
documentation, organizational models, course material etc.
available through 4S.
The users
The primary users are patients/citizens and healthcare
personnel. Both these groups are experiencing major
changes affecting their daily life or work when telehealth is
introduced. However, the consequences vary depending on
the solution in question. Currently no obviously good
solutions covering both IT and organization exist, but both
groups, and especially the patients, rate the current
OpenTele solutions highly. The groups may
influence the development of OpenTele through 4S, both
via the board and via PD projects. Concerning the 4S PD
projects we see the same possibilities and pitfalls as in other
PD projects in the healthcare area. These include
asymmetric power relations and often the lack of a strong
network and other resources on the patient side, see e.g. [33-36].
When it comes to the board we expect patient/citizen
organizations to play a major role, since the individual
patient or citizen does not have the resources, and since these
organizations already are actively involved in promoting
patient/citizen interests in relation to health and welfare
technology, including telehealth. Thus it will be the
responsibility of these organizations to safeguard against
patients/citizens becoming hostages for other interests.
With respect to healthcare personnel we are not sure how
they will utilize the board. Currently the healthcare
personnel participate based on their involvement in
telehealth projects and as telehealth experts recognized by
their peers. At the moment the professional organizations,
e.g. the Danish Medical Association, are not actively
involved in the area of telehealth. Thus the healthcare
personnel currently involved in 4S also see 4S as a way of
developing policies on behalf of their respective
professional organizations.
The Stakeholders in Telemedicine
As the next part of analyzing why 4S might be a success we
consider the different stakeholders and their interests in 4S.
In doing so we also summarize the interests of each group
across the disciplinary discussion in the previous
subsections. However, to set the scene we first look at some
of the key ideas and assumptions from the first generations
of Scandinavian PD projects.
First of all we note that the first Scandinavian PD projects,
like the Iron and Metal Workers project [29] and the Utopia
project [30] aimed at improving work by changing the way
computers were used at the workplace. To do so they
focused on cooperation between researchers and trade
unions. However, to achieve the improvements aimed at,
the idea was that the trade unions would subsequently use
the knowledge and other results from the projects to
negotiate such improvements with management and
owners, both locally and centrally. In [5], Kyng labels this
Early suggestions for where to look for new
possibilities for PD include working in developing
countries and with non-profit organizations. Both these
lines of work have resulted in new projects, but not on a
scale like the first generations of Scandinavian projects.
Recently, several new forms of extremely individualized
labor have been emerging in e.g. programming, translation
and transportation. In discussions among PD researchers, it
has been suggested that a new kind of PD project should be
set up to help understand what is going
on and how to counteract negative consequences. To our
knowledge such a project hasn't been initiated yet.
The providers of health services
The hospitals and other providers of health services in
Denmark are expected to introduce telehealth in the next
few years and to reduce costs and increase quality by doing
so. However, as previously mentioned, there are no hard
data supporting that they can achieve this. Thus many of the
providers of health services see a need for building more
knowledge and for developing and modifying telehealth
solutions based on experiences. For these reasons they
engage in 4S. At the same time they acknowledge that the
users of telehealth should play a key role in this, and
therefore welcome the emphasis that 4S places on the
influence of these groups.
In summary, 4S promotes OpenTele solutions, but aims for
co-existence with other solutions, not for confrontation.
In fact, 4S actively facilitates co-existence based on data
sharing and integration through the infrastructure toolbox.
In doing so, 4S seems to be creating support from
most of the stakeholders, and only a rather small
minority is now negative towards 4S.
In the future we will see differences between the different
user groups, and between these groups and the managers
representing the providers of health services, when it comes
to specific solutions. So far we believe that the possibilities
offered to the users by 4S will make it easier for them to
promote their own interests – also, and maybe especially, in
the face of disagreements with management.
This being said, quite a few people, both in the public and
the private sector, are skeptical towards open source, and
we expect this skepticism to continue to influence how
they view 4S.
Finally, it should be noted that the vision of 4S means
different things to different people. To many, and probably
to the most enthusiastic supporters, it is important that 4S
provides a democratic alternative to big corporations with
flawed morals concerning tax. To others, 4S is important as
a way of sustaining results of PD projects. And to managers
at hospitals and regional IT departments, 4S is a way of
combining development of cost-effective solutions with
user involvement and acceptance.
The public authorities financing health services
In Denmark, the Ministry of Health and the municipalities
finance health services. As mentioned, they expect
telehealth to deliver substantial savings, but they too know
that there are no hard data supporting this. At the same
time, they consider the economic models for the primary,
commercial telehealth solutions to be too expensive when
scaling up to the national level. Thus, for the same reasons
as the providers of health services, they are positive towards
4S. They also see 4S as a way to develop business models
and cost structures that are beneficial for the different
public stakeholders. At the same time they expect the 4S
infrastructure toolbox to speed up data sharing and the
integration of telehealth solutions with other types of health
IT and thus pave the way for better economy in telehealth.
Suppliers of health IT
The last group of stakeholders we consider are companies
providing health IT. Several of these companies have made
substantial investments in the area of telehealth and many
of them find that the public customers should just buy their
solutions and get on with the job of providing telehealth
services3. However, this is not really happening. The
markets for telehealth are small and fragmented, and most
of the customers postpone purchases. In this situation the
4S infrastructure toolbox is seen as a way to integrate
proprietary solutions from different vendors and thus
support growing market integration. In addition, the
contributions from 4S to building knowledge among public
customers are seen as a way to speed up their decisions
on buying telehealth solutions. When it comes to the
OpenTele platform, this is being accepted as a fact of life,
i.e. as a decision made by the regions, not by 4S. In this
situation many private companies see 4S as making positive
contributions, e.g. in terms of suggesting business models
for different types of companies and developing processes
for accessing and delivering code that make use of the
OpenTele platform attractive for a number of different
types of companies.
4S today is emerging as an organization that sustains
democratic control by healthcare professionals and
patients/citizens. However, that 4S ended up this way is
at least as surprising to the researchers who took the
initiative to create 4S as it was to the researchers behind
the Utopia project that the main heritage of that project
is probably its contributions to PD tools and techniques. In the
following we discuss these two issues: first we look at
sustaining democratic control and related work in PD. Then
we briefly discuss the issue of surprise, and we do so under
the heading "Why this took a long time".
On Participatory Design and sustaining democracy
Following the first generations of Scandinavian PD
projects, PD research has focused primarily on PD tools and
techniques, see e.g. [3, 5, 32]. The PDC 2012 proceedings
[32] do contain a session on values and politics (ibid.,
pp. 29-40). However, of the three papers in the session, two are
about methods and techniques and the third is about how
children with profound disabilities can participate in
formative design processes. All three papers represent
interesting and valid contributions, but they are not related
to politics in the sense of the first Scandinavian PD
projects. From time to time this development results in
discussions of how to reinforce or even reinstate ideals on
democracy and political action. This was discussed e.g. by
Kyng in [4], where he pointed to "normalization" of the use
of IT at workplaces as one of the reasons for the declining
cooperation with trade unions in PD.
3 Personal communication with managers from several big,
international IT companies from 2008 to the present.
In [3] Bødker et al.
list a number of changes in technology, management,
unions and research which they all see as contributing to the
shift from political to pragmatic. They also explain the huge
success of PD as largely depending on participation being a
basic epistemological (knowledge theoretical) principle. At
the same time, they are not willing to let go of the political
aspects of the past. They discuss the “consumer movement”
as one promising possibility, and mention the computer
equipment certification by TCO, the Swedish Confederation of professional employees, see also [37]. The most
vivid debate of the development of PD research is probably
the 2002 and 2003 issues of the Scandinavian Journal of
Information Systems, where the titles of the papers include
“P for Political”, “D for Democracy”, and “A for
Alternatives”. In “P for Political” [38], it is argued that “PD
must encompass work motivated in political conscience,
…not only participatory design." A contrasting
view is presented by Kyng in [5], who argues that the
politics in PD rely on external factors, e.g. trade unions.
When we then look at where 4S may be positioned in this
debate, we find that the 4S initiative has developed into
being political, in the sense that it facilitates democratic
control over IT solutions by users and their organizations.
However, the mechanism for this control, the 4S organization,
is not PD or a result of PD, but an "external factor"
that uses PD to develop solutions in the interests of users.
And 4S, in turn, is itself dependent on the growing societal
trend of "fighting back" when tax cuts and public budget
cuts are presented by governments as the only way forward.
What about sustaining results?
A different line of reflection in PD is represented by the
investigations into sustaining results of PD projects.
The paper by Iversen and Dindler [39] reviews a number of
papers on sustaining PD initiatives and identifies four
different forms of sustaining: maintaining, scaling,
replicating, and evolving. They go on to discuss how to
increase sustainability based on the more nuanced
understanding that the different forms represent. The paper
concludes, "in general, the sustainability perspective does
not fundamentally change the PD toolbox, but may
influence the choice of method, how they are modified and
when they are used." p. 166. We agree with the main points
presented in the paper. However, we note that the paper is
about sustaining the results of PD projects. And we find
that the options discussed for improving sustainability of
results are framed by the notion of a project. In contrast to
this approach, the 4S initiative sets up a new, potentially
permanent, organization to sustain results – and to frame
PD projects. The 4S approach also solves another issue
related to sustaining results: that of keeping control of
results over time. The issue is illustrated by a PD project
that developed a system for hospital operating room
logistics. The system received very positive evaluations by
healthcare personnel [40]. Among other things they
pointed to an improved work environment. Later a privately
owned company was formed to turn the system into a
product, and that company was recently sold for almost 20
million dollars. The current website for the company
emphasizes improved productivity, and does not mention
work environment [41].
Why this took a long time
Looking back at the history of 4S, we note that it took us a
long time to move from the initial ideas on 4S to the current
vision and organization. And when we began to think about
this, we found it especially surprising that it took us so long
to make the connection between 4S and PD. However,
today we believe that this delay is understandable.
First of all we note that it is common knowledge that people
tend to stick to proven ways of doing things for (too) long, see
also [13]. One might say that the success and tradition of
PD research made us blind to other ways of doing things than
continuing current PD research with its focus on PD
methods, techniques and tools – and on the concept of a
project as the frame for doing things.
Secondly, we note that we were equally slow in
understanding and developing the democratic potential of
4S. In retrospect one might say that we were not looking for
democracy, and thus we did not see it for a very long time.
In addition, the researchers involved in the creation of 4S
adhere to the notion that politics in PD relies mainly on
external factors. Thus we did not want to push for the
"fighting back" part of the 4S vision, but to develop it
together with key players among the users.
Whether this understanding will help others to move faster and
more effectively when creating alternatives and
mechanisms to sustain them, we are not quite sure.
However, a simple lesson for PD researchers could be that
there is more to democracy and politics than PD and
projects, a subject that we expand a bit upon in the section
The story of 4S is quite specific: it is about telehealth
in Denmark and how an organization set up to govern open
source software developed into a frame for sustaining
democratic control and PD projects. On the other hand we
find that the story illustrates some key points in the
discussions on PD and democracy.
First of all, 4S illustrates how PD may become part of a
societal trend, and that the specific aspects of that trend are
important for the outcome. In our case, this was fighting back
when tax cuts and public budget cuts are presented by
governments as the only way forward.
Secondly, 4S illustrates how proper support structures for
the participants/users and their "group interests" are
important, and at the same time that developing these
structures is not, or at least need not be, part of the results
of PD, at least not of explicit, consciously executed PD.
Thirdly, we find that 4S illustrates that "Democracy" or
"Democratic control" are often too general as categories to
create enthusiasm and engagement; there has to be a
manifestation of a more concrete vision, 'problem' and
When we turn to the possibilities for doing something
analogous in other countries, we are mildly positive: in
countries with sufficiently similar political systems and
health systems, it seems worthwhile to investigate if and
how cooperation between 4S and matching organizations
might be feasible. On a very practical level, we are talking
with the Greek organization "SciFY – Open Source
software/hardware for social benefit" and the Faculty of
Medicine, University of Thessaly, about using OpenTele to
deliver telemedicine in small villages in rural areas where
access to medical assistance is difficult.
When it comes to the possibilities for doing something
analogous in another specific area, a first step could be to
carry out an analysis of that area, similar to the one
presented in the section Why 4S might succeed. And then,
based on this initial understanding, develop a more targeted
analysis, tailored to the specifics of the area in question.
Up until now the 4S initiative has been about creating and
developing the 4S organization and using state-of-the-art
PD within this framework. However, we expect that the
future activities of 4S will provide numerous opportunities
for PD research. Issues we would like to address include
PD processes for going from a successful research prototype
via clinical trials to tested and certified software, and from
small-scale pilots to large-scale daily operation. Furthermore,
we see a strong need for PD to cooperate with other
disciplines to encompass work on business models, e.g. for
start-ups working with the 4S software, and on the so-called
business cases used by e.g. public providers of health
services to evaluate new service proposals. Up until now
these business cases have mainly been using quite standard
economic concepts and models. Hopefully PD can
contribute to the development of supplementary concepts
and models.
We thank all the people involved in the 4S initiative, and
especially the healthcare personnel and the pregnant women
in the sub-project on complicated pregnancies at Aarhus
University Hospital. In addition we thank the reviewers for
their comments.
1. 4S webpage. Available from:
2. Ehn, P. and M. Kyng, The Collective Resource
Approach to Systems Design, in Computers and
Democracy. 1987, Avebury: Aldershot, UK. p. 17-57.
3. Bødker, S., et al., Co-operative Design—perspectives
on 20 years with 'the Scandinavian IT Design Model',
in Proceedings of NordiCHI 2000. 2000: Stockholm,
Sweden. p. 22-24.
4. Kyng, M., Users and computers: A contextual
approach to design of computer artifacts.
Scandinavian Journal of Information Systems, 1998.
10(1&2): p. 7-44.
5. Kyng, M., Bridging the Gap Between Politics and
Techniques: On the next practices of participatory
design. Scandinavian Journal of Information Systems,
2010. 22(1): p. 49-68.
6. Christensen, H.B., et al., Analysis and design of
software ecosystem architectures – Towards the 4S
telemedicine ecosystem. Special issue on Software
Ecosystems, 2014. 56(11): p. 1476-1492.
7. PalCom project webpage. Available from:
8. Statens Serum Institut and National Sundheds-IT,
Referencearkitektur for opsamling af helbredsdata hos
borgeren [Reference architecture for collecting health
data from the citizen]. 2013, National Sundheds-IT:
København.
9. The UNIK partnership conference. 2013; Available
ret invitation.pdf.
10. Steventon, A., et al., Effect of telehealth on use of
secondary care and mortality: findings from the
Whole System Demonstrator cluster randomised trial.
BMJ (Clinical research ed.), 2012. 344: p. e3874.
11. EU project Renewing Health. Available from:
12. The Telecare Nord project on COPD. Available from:
13. Tidd, J. and J. Bessant, Managing innovation:
integrating technological, market and organizational
change. 4th ed. 2009, New York: Wiley. 638.
14. Abernathy, W. and J. Utterback, Patterns of industrial
innovation. MIT Technology Review, 1978(6): p. 40-47.
15. The BYTE awards: GRID System's GRiDPad. BYTE
Magazine, 1990. 15(1): p. 285-286.
16. Christensen, C.M., The Innovator's Dilemma: When
New Technologies Cause Great Firms to Fail. 1997,
Boston: Harvard Business School Press. 225.
17. Epic Systems Corporation. Available from:
18. Manikas, K. and K.M. Hansen, Software ecosystems –
A systematic literature review. Journal of Systems and
Software, 2013. 86(5): p. 1294-1306.
19. Fælles vej frem for telemedicin til glæde for både
patienter og erhvervsliv [A common way forward for
telemedicine, to the benefit of both patients and
industry]. 02.02.2015; Available from:
20. Porter, M. June 6, 2006, Mandag Morgen:
Copenhagen, Denmark.
21. Goodwin, N., Jumping the gun in the telehealth
steeplechase. 2012.
22. Piketty, T., Capital in the Twenty-First Century. 2014:
Harvard University Press.
23. Mazzucato, M., The entrepreneurial state: debunking
public vs. private sector myths. Anthem frontiers of
global political economy. 2013, London: Anthem
Press. 237.
24. Scandinavian Journal of Information Systems, 1998. 10.
25. Scandinavian Journal of Information Systems, 2003. 15.
26. Scandinavian Journal of Information Systems, 2010. 22.
27. Shapiro, D., Participatory design: the will to succeed,
in Proceedings of the 4th decennial conference on
Critical computing: between sense and sensibility.
2005, ACM: Aarhus, Denmark. p. 29-38.
28. Schwaber, K. Controlled Chaos: Living on the Edge
29. Nygaard, K., The 'Iron and metal project': trade
union participation, in Computers Dividing Man and
Work – Recent Scandinavian Research on Planning
and Computers from a Trade Union Perspective, Å.
Sandberg, Editor. 1979, Swedish Center for Working
Life, Demos Project report no. 13,
Utbildningsproduktion: Malmö, Sweden. p. 94-107.
30. Bødker, S., et al., A Utopian Experience. Proceedings
of the 1986 Conference on Computers and
Democracy, 1987: p. 251-278.
31. Ehn, P. and M. Kyng, Cardboard Computers:
Mocking-it-up or Hands-on the Future, in Design at
work: Cooperative design of computer systems, J.
Greenbaum and M. Kyng, Editors. 1991, Lawrence
Erlbaum Associates, Inc.: Hillsdale, New Jersey,
USA. p. 169-195.
32. Proceedings of PDC 2012, The Participatory Design
Conference. 2012: Roskilde, Denmark.
33. Grönvall, E. and M. Kyng, Beyond Utopia: reflections
on participatory design in home-based healthcare
with weak users, in Proceedings of the 29th Annual
European Conference on Cognitive Ergonomics.
2011, ACM: Rostock, Germany. p. 189-196.
34. Christensen, L.R. and E. Grönvall, Challenges and
Opportunities for Collaborative Technologies for
Home Care Work, in ECSCW 2011: Proceedings of
the 12th European Conference on Computer
Supported Cooperative Work, 24-28 September 2011,
Aarhus Denmark, S. Bødker, et al., Editors. 2011,
Springer London. p. 61-80.
35. Fitzpatrick, G. and G. Ellingsen, A Review of 25 Years
of CSCW Research in Healthcare: Contributions,
Challenges and Future Agendas. Computer Supported
Cooperative Work (CSCW), 2012: p. 1-57.
36. Bertelsen, O.W., et al., Therapeutic Strategies - a
Challenge for User Involvement in Design, in
Workshop at NordiCHI 2010, The 6th Nordic
Conference on Human-Computer Interaction. 2010:
Reykjavik, Iceland.
37. Gulliksen, J., et al., Key principles for user-centred
systems design. Behaviour & Information Technology,
2003. 22(6): p. 397-409.
38. Beck, E.E., P for Political: Participation is Not
Enough. Scandinavian Journal of Information
Systems, 2002. 14(1): p. 77-92.
39. Iversen, O.S. and C. Dindler, Sustaining participatory
design initiatives. CoDesign: International Journal of
CoCreation in Design and the Arts, 2014. 10(3-4): p.
40. iHospital. Det Interaktive Hospital. 2005; Available
41. Cetrea. Clinical Logistics. 2015; Available from:
Computing and the Common. Hints of a new utopia in
Participatory Design
Maurizio Teli
Department of Information Engineering and Computer Science, University of Trento
Trento, Italy
[email protected]
In this statement, I take up the need for Participatory
Design to engage with new utopias. I point to contemporary
critical theories and to concurrent social conditions that
make it possible to identify the construction of the common
as a possible utopia. In conclusion, I suggest that forms of
community-based participatory design could be actual
practices supporting such a utopia.
Author Keywords
Participatory design; commons; common; utopia.
ACM Classification Keywords
K.4.0 Computers and Society General.
In order to explain how a new utopia for PD can take place,
this statement is organized as follows. Firstly, I will begin
with a preliminary discussion of critical theories of
contemporary societies, including an understanding of the
role of digital technologies and the common. Secondly, I
will present the recent design contributions on computing
and the commons. Both will pave the way for a theoretical
clarification of the difference between the commons and the
common. Thirdly, I will discuss the social and institutional
conditions enabling a common-oriented utopia in PD. In
conclusion, I will connect critical theory, design
contributions, and the social and institutional conditions to
sketch out future directions based on my proposal.
During the Aarhus conference of 2005, Dan Shapiro [21]
elaborated on Participatory Design [PD] as a political
movement in a phase of economic stability and growth. It is
common knowledge that since then things have
dramatically changed, due to the effects of "The Great
Recession", the economic crisis that has been haunting
Western countries for the past eight years.
According to Ehn [8], PD originated in a specific social
context characterized by strong trade unions as the main
social ally of PD. In the light of the current societal
situation, I propose an update of the politics of PD along
similar lines. Such an update is needed because the economic
crisis has pointed out that "business as usual" is not a viable
practice from the point of view of the aspirations toward a
just and sustainable society. In fact, the crisis has shown the
societal limitations of the steady accumulation of capital
and it has made evident the need for forms of renewal of the
bases on which current societies are tied together [12].
In this changing context, PDers could contribute to the
shaping of contemporary societies by collectively engaging
in the definition of new utopias that could inform their
actions. In this respect, both keynote speakers at the XIII
Participatory Design Conference, Pelle Ehn and Shaowen
Bardzell, articulated reflections on the roles of past and
future utopias for research and practice in PD. Their
reflections opened up a space for new directions in utopian
thinking in computing.
While the pioneers of PD were siding with the workers in
the factory system, nowadays PD practitioners are called to
an updated understanding of the context and to the
identification of new social allies (something Dearden et al.
tried to do in Aarhus 2005, focusing on agencies promoting
emancipation [6]). Critical analyses of the current situation
provide a useful lens to reposition the politics of PD, and
here I present three influential analyses that look at
the computing-society relationship.
This statement takes on the challenging task of engaging
with the construction of a possible utopia oriented to the
common. The construction of such a utopia is possible
nowadays by leveraging two concurrent social
conditions: the existence of a group of highly-skilled
precarious workers; and the existence of institutional
opportunities for scholars in the design discipline to connect
that group with the common through research funding.
Sociologist Christian Fuchs focuses on the notion of “mode
of production” as developed by Karl Marx. Specifically, he
summarizes the complex of working activities leading to
the production of digital technologies through the concept
of digital labor [9]. The interesting part of Fuchs's update of
Marxist theory is the stress on how, in the different places
of digital production (from the African mines to Facebook),
relations of production involve ownership, coercion,
allocation/distribution, and the division of labor. Moreover,
he distinguishes between work and labor, the former being
the activity of transformation of the world, while the latter
being the commodification of work through the job relation.
This distinction is something particularly significant in the
age of social media and other digital technologies, in which
work is algorithmically commodified.
My proposal for a renewed utopia for PD is to orient PD
practices toward the common, being aware of the current
mode of production and of the role of netarchical
capitalism. That implies the quest for new social allies.
Before identifying them, I discuss how PD is already
engaging with the commons.
Michel Bauwens, a scholar and an activist, helps clarify the
way through which contemporary capitalism is articulated.
Bauwens contribution is particularly interesting in its
definition of contemporary capitalism as “netarchical” [2],
defined as the “brand of capital that embraces the peer to
peer revolution […] It is the force behind the immanence of
peer to peer.” [p. 7]. As the factory was the locus of
construction of workers solidarity in industrial capitalism,
the netarchically enabled peer-to-peer collaboration in
commons-based project can be the place where the
prevalence of the commons on the market can be elaborated
and politically constructed. For both Bauwens and Fuchs,
the possibility of collaborative work is strengthened by
contemporary digital technologies but it is also in the digital
domain that such collaborative work can become a form of
labor and contribute to the accumulation of capital.
Although less oriented toward the labor-capital conflict than before [10], PD has paid growing attention to the commons, a peculiar institutional arrangement discussed extensively by Elinor Ostrom [17]. The commons are institutional arrangements for managing shared resources that are based neither on private nor on state property. Specifically, in the digital domain, a commons can be defined through a legal protection that favors availability to third parties, a form of collective ownership, and distributed governance in the management of the interactive artifact. Digital commons (like Wikipedia or Free Software) are characterized by specific organizational traits, such as voluntary participation and contribution. In those institutional settings, many participants contribute not because they must (as in a job relation) but out of their commitment to the project [3].
A similar concern is shared by the philosophers Michael Hardt and Toni Negri [11], two of the key thinkers of the stream of Marxism known as Autonomous Marxism (AM). AM relies upon Marx's "Fragment on Machines", which stresses how productive forces evolve through the expansion of the "general intellect", a form of collective and distributed knowledge. As knowledge becomes a growing part of the production process, the life of the knowledge producers is itself turned into a source of value, independently of the classical labor relation, in what AM calls the "life theory of value" [16]. In this perspective, AM shares both Fuchs's preoccupation with the boundless exploitation of social media users, for example, and the opportunities that Bauwens sees in forms of peer-to-peer collaboration. Another reason that makes AM interesting is its inclusion of authors like Spinoza, Foucault, and Deleuze as inspirational sources. Drawing upon them, Hardt and Negri develop a distinctive anthropology based on three fundamental elements: the anthropological priority of freedom over power, the latter seen as a containing force; the social priority of the multitude of the poor, to whose actions institutionalized powers react; and the centrality of affect in the development of social life, with the possibility that forms of love and hate take the stage in historical development.
The domain on which freedom, the multitude, and affect operate is that of the common. The common, without an "s", is intended as the ensemble of the material and symbolic elements that tie human beings together. The perspective of Negri and Hardt has the merit of enriching the analysis of political economy, such as that offered by Fuchs and Bauwens, with a perspective on the subjects of social life, characterized by affects, a desire for freedom, and the capability to act collectively.
From a design perspective, pointing to the commons implies not only understanding how Intellectual Property Rights affect design [14], but also acknowledging that designing a commons entails the social processes of maintaining and governing it, that is, a process of commoning [13]. In the PD tradition, attention to the commons as a specific form of production has been gaining momentum, framing the roles of users and designers [15], understanding collaborative production in specific places [19], or trying to include the political implications of Free Software in the design process [5].
All this work has proven effective in discussing some of the implications of a commons perspective for design practices. However, it still lacks a perspective able to scale politically to the societal dimensions described by Fuchs, Bauwens, or the Autonomous Marxists. My suggestion is to supplement the framework proposed by Elinor Ostrom, who focused on the actual institutional arrangements for managing a specific resource, with Hardt and Negri's accent on the common, understood as the ensemble of the material and symbolic elements that tie human beings together, so as to locate each specific commons in that wider perspective. In Hardt and Negri's reading, the common can be either nourished or dispossessed, and actual forms of capitalism draw precisely on forms of accumulation by dispossession, in which value is extracted from the collaborative capabilities of people.
In this context, PD can position itself as a progressive force by strengthening the social practices and social groups that nourish the common, and by identifying both relevant social allies and practical means. Given the centrality of knowledge in contemporary society, I argue that a relevant ally can be identified in the "Fifth Estate", a group composed of highly-skilled precarious workers. Moreover, design projects with this group are made possible by specific narratives used by funding agencies, in particular by the European Union.
In the early days of PD, the presence of strong trade unions gave designers an opportunity to rethink their practices in terms of political positioning. Today, an alliance with the Fifth Estate could be similarly challenging. Moreover, it can be fruitfully pursued by leveraging current narratives in the European Union's funding strategies.
Recently, Allegri and Ciccarelli [1] have proposed an
analytical category, named the “Fifth Estate”, to include
highly-skilled precarious and freelance workers who are
fully or partially excluded from accessing welfare security
and benefits in Southern European welfare states.
In the last decade, this social group has grown in Southern Europe and has come to include an increasing number of university graduates, as a result of worsening labour market conditions [Eurostat data until 2012]. Looking at the 30-34 age group, at the European level about 35% of people have completed tertiary education, with a growth of approximately 15% of graduates over the past 15 years. Enrollment in tertiary education is also growing, with an overall difference of about 2.5 million students between 2003 and 2012. At the same time, in Southern European countries the level of employment three years after graduation has dramatically decreased (-14% in Italy and -23% in Spain). In Greece, the country worst off, in 2012 only 40% of university graduates had a job three years after graduation. Moreover, the growth of part-time jobs during the crisis, together with a decrease in full-time jobs, suggests that the quality of available jobs is decreasing too.
Allegri and Ciccarelli [1] argue that the Fifth Estate could be, in contemporary capitalism, what the Fourth Estate was in industrial capitalism: one of the leading forces capable of articulating new perspectives on wealth distribution. The emergence of the Fifth Estate is characterized by new forms of commons-based practices such as bottom-up cooperation, solidarity, and civic cooperation. Indeed, the growth of phenomena like co-working spaces (more than 100 in Italy alone [18]) and of new funding strategies like crowdfunding (in May 2014 there were 41 active platforms in Italy alone [4]) suggests that the Fifth Estate is actually experimenting with new forms of collaboration.
The main research policy instrument in the EU is Horizon 2020, whose basic narrative regards innovation addressing social and environmental issues as the key component of European development in a globalized world. In this framework, two specific lines of funding are relevant to understanding how the research policies of the European Union frame the theme of interest here, that is, the commons in the digital world: the "Onlife Initiative" and CAPS (Collective Awareness Platforms for Sustainability and Social Innovation).
The more relevant of the two is CAPS, a very small line of funding, counting approximately 35 million Euros per year. Nevertheless, this line of funding is particularly interesting as it revolves around the collective distribution of social power through the deployment of technologies. In the IEEE Technology and Society Magazine, Fabrizio Sestini [20], the reference person in the European Commission for this line of funding, states that "The ultimate goal is to foster a more sustainable future based on a low-carbon, beyond GDP economy, and a resilient, cooperative democratic community." Projects already funded through this line include D-Cent and P2PValue. D-Cent works with social movements (like the Spanish Indignados) to build technologies for direct democracy and economic empowerment, such as digital social currencies. P2PValue focuses on value in peer production, in connection with new forms of cooperative organizations and significant forms of activism.
Moreover, this EU narrative frames innovation as "digital social innovation", a collaborative form of innovation based on the co-creation of knowledge, technologies, and services. The stress on democracy by Sestini and the focus on collaboration in relation to digital social innovation clearly refer to organizational forms that differ from traditional bureaucratic organizations and from the networked enterprises typical of the period between the end of the 1990s and the beginning of the 2000s. To summarize, the described EU narrative can constitute a window of opportunity for common-oriented PD projects to be funded and conducted.
To put it briefly, the Fifth Estate is engaging in commons-based forms of association that can nourish the common. Therefore, we can argue that one of the potential allies for contemporary PD is the Fifth Estate, as one of the social groups able to characterize future progressive change (it is not by chance that some commentators point to this social group as the backbone of Syriza in Greece and Podemos in Spain).
Through an understanding of contemporary capitalism as riven by tensions between forms of social collaboration and forms of accumulation by dispossession, I articulate a potential update of PD's positioning, in relation both to the social allies PD could talk to and to the practical means for conducting projects. I identify the significant social subject in the "Fifth Estate", a social group of highly-skilled precarious workers, and the practical means to fund and conduct projects in the European Union's drive toward digital social innovation. The ensemble of the discussed theories, design perspectives, and social conditions constitutes a design space oriented to the common.
From the point of view of PD practices, community-based PD [7] looks like the most interesting methodological starting point toward common-oriented PD, as it combines the capability to intercept the diverse and distributed character of the Fifth Estate with the possibility of easy alignment with the definitions of the funding agencies.
Summarizing, I have provided hints of a new utopia in PD based on the idea of nourishing the common, the ensemble of the material and symbolic elements that tie human beings together. Such a utopia can potentially make the PD community a strong actor in the construction of a more just and sustainable society.
REFERENCES
1. Allegri, G. and Ciccarelli, R. Il Quinto Stato. Perché il lavoro indipendente è il nostro futuro. Precari, autonomi, free lance per una nuova società. Ponte alle Grazie, Milan, 2013.
2. Bauwens, M. The political economy of peer production. post-autistic economics review, 37 (2006).
3. Benkler, Y. Practical Anarchism: Peer Mutualism, Market Power, and the Fallible State. Politics & Society 41, 2 (2013), 213–251.
4. Castrataro, D. and Pais, I. Analisi delle Piattaforme Italiane di Crowdfunding. Italian Crowdfunding Network, 2014.
5. Clement, A., McPhail, B., Smith, K.L., and Ferenbok, J. Probing, Mocking and Prototyping: Participatory Approaches to Identity Infrastructuring. Proceedings of the 12th Participatory Design Conference: Research Papers - Volume 1, ACM (2012), 21–30.
6. Dearden, A., Walker, S., and Watts, L. Choosing Friends Carefully: Allies for Critical Computing. Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility, ACM (2005), 133–136.
7. DiSalvo, C., Clement, A., and Pipek, V. Participatory design for, with, and by communities. In J. Simonsen and T. Robertson, eds., International Handbook of Participatory Design. Routledge, Oxford, 2012, 182–.
8. Ehn, P. Scandinavian design: On participation and skill. In P.S. Adler and T.A. Winograd, eds., Usability. Oxford University Press, Inc., 1992, 96–132.
9. Fuchs, C. Digital Labour and Karl Marx. Routledge.
10. Halskov, K. and Brodersen Hansen, N. The diversity of participatory design research practice at PDC 2002–2012. International Journal of Human-Computer Studies 74 (2015), 81–92.
11. Hardt, M. and Negri, A. Commonwealth. Belknap Press, Cambridge, Mass, 2009.
12. Harvey, D. Seventeen Contradictions and the End of Capitalism. Oxford University Press, Oxford; New York, 2014.
13. Marttila, S., Botero, A., and Saad-Sulonen, J. Towards commons design in participatory design. Proceedings of the 13th Participatory Design Conference, ACM (2014), 9–12.
14. Marttila, S. and Hyyppä, K. Rights to remember?: how copyrights complicate media design. Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, ACM (2014).
15. Marttila, S., Nilsson, E.M., and Seravalli, A. Opening Production: Design and Commons. In P. Ehn, E.M. Nilsson and R. Topgaard, eds., Making Futures: Marginal Notes on Innovation, Design, and Democracy. The MIT Press, Cambridge, MA, 2014.
16. Morini, C. and Fumagalli, A. Life put to work: Towards a life theory of value. Ephemera: theory & politics in organization 10, 3/4 (2010), 234–252.
17. Ostrom, E. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
18. Rete Cowo. Tutti gli spazi Cowo, città per città. 2015.
19. Seravalli, A. Making Commons: attempts at composing prospects in the opening of production. 2014.
20. Sestini, F. Collective awareness platforms: Engines for sustainability and ethics. IEEE Technology and Society Magazine 31, 4 (2012), 54–62.
21. Shapiro, D. Participatory Design: The Will to Succeed. Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility, ACM (2005), 29–38.
Concordance: A Critical Participatory Alternative in
Healthcare IT
Erik Grönvall, IT Uni. of Copenhagen, Rued Langgaards Vej 7, Copenhagen, Denmark, [email protected]
Nervo Verdezoto, Aarhus University, Aabogade 34 D, 8200 Aarhus, Denmark, [email protected]
Naveen Bagalkot, Srishti School of Art, Design and Tech., Bangalore, India, [email protected]
Tomas Sokoler, IT Uni. of Copenhagen, Rued Langgaards Vej 7, Copenhagen, Denmark, [email protected]
ABSTRACT
The healthcare sector is undergoing large changes in which technology is given a more active role in both in-clinic and out-of-clinic care. Authoritative healthcare models such as compliance and adherence, which rely on asymmetric patient-doctor relationships, are being challenged as society, patient roles, and care contexts transform, for example when care activities move into non-clinical contexts. Concordance is an alternative model proposed by the medical field that favours an equal and collaborative patient-doctor relationship in the negotiation of care. Similarly, HCI researchers have applied diverse models of engagement in IT design, ranging from authoritative models (e.g. perceiving people as human factors to design for) to more democratic design processes (e.g. Participatory Design). IT design has also been crafted as ongoing processes that are integrated parts of everyday use. Based on the best practice of participation from the medical and the HCI fields, we identify critical alternatives for healthcare design. These alternatives highlight opportunities with ongoing design processes in which the design of care regimens and care IT are perceived as one process.
Author Keywords
Concordance; Critical Alternative; Participatory Design; Healthcare; Ongoing design; Infrastructuring
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous
INTRODUCTION
During the last decade the doctor-patient relationship has been redefined. It has undergone a transformation from being defined by authoritative care models towards models with a more balanced relationship between care providers and receivers. Information Technology (IT) systems for healthcare and their design processes may either reinforce the authoritative care models or facilitate a more equal collaboration between patients and healthcare professionals in the negotiation of care activities. The HCI field has also gone through a transformation in which the perception and role of the 'user' and other stakeholders in design have changed. People have gone from being perceived as human factors, to users, to design collaborators, and lately to active participants in design processes that design-for-future-use (rather than design-for-use) [3-5]. Ongoing design processes include a shift away from design-for-use towards designing for design after design. Here, infrastructuring is used within the design literature (see e.g. Ehn [5] and Seravalli [16]) to explore how to design meta-designs, or designing for design after design, and how diverse stakeholders, through constantly ongoing design processes, design-in-use rather than designing for use before use.
Reflecting on the above transformations, and on our own experiences working with healthcare IT and design, we describe perspectives on participation originating from both the medical and the HCI field. From the medical field, we take the transformation from compliance and adherence to concordance to illuminate the perspectives on participation. In contrast to compliance and adherence, which represent authoritative models of care where the patient is perceived as a passive receiver of care instructions [13], concordance-based care emphasizes an active patient, co-responsible for defining and ensuring their own care plans. From the HCI field, we account for 'ongoing design processes' and 'participatory design' as design ideals. Based on these ideals from the medical (i.e. concordance) and design (i.e. PD and infrastructuring) fields, and in particular their roles in contemporary healthcare IT projects, we show how bridging disciplinary boundaries could offer critical alternatives for designing future care and care IT.
Considering the shifting notions of participation within HCI and healthcare, it seems important to examine the roles that various actors may assume in healthcare IT design and use, and how participation is shaping current and future care practices. We will now describe participation and how it has been perceived and changed in healthcare as well as in healthcare IT design processes.
Participation in
Compliance has traditionally been the prevailing relationship between a doctor and her patients. Compliance measures to what degree a patient follows a prescribed treatment, without considering the patient as a partner in the delivery of care [13]. However, as reported in the medical literature (e.g. [13]), a prescription that a patient disagrees with already at the medical consultation will have little or no value. Therefore a shift from compliance to adherence was initiated within medical practice. Adherence extends compliance by also considering to what degree a patient agrees with the proposed treatment plan. However, both compliance and adherence have been criticized within the medical field as being too authoritative [9]. The medical field has proposed concordance as a more democratic model to shape the doctor-patient relationship. Concordance emphasizes equal participation and active doctor-patient collaboration at the clinic in both the definition and fulfilment of a care regimen [9]. In doing so, concordance promotes an ongoing negotiation between the patient and healthcare professional through a democratic and open-ended patient-professional collaboration process [9]. As such, concordance involves a political stance where the patient and healthcare professional are considered equal peers in the treatment.
Meanwhile, concordance is by no means a settled debate in the medical discourses. Its focus on changing the responsibilities of doctors and patients is a critical point, and it is argued that the consequences of increased patient responsibility for their own care need further exploration from a societal, moral, and ethical perspective. In particular, Segal states that healthcare professionals should intervene if patients with life-threatening or very contagious diseases refuse antibiotics or other treatments [15]. As concordance is moved out of the clinical consultations and into people's everyday settings and routines, the discussion about patients' and doctors' roles and responsibilities becomes even more critical.
From a medical perspective, concordance is an activity situated at the clinic, as part of the medical consultation. However, in earlier work the authors have argued that the implementation of care activities in everyday life extends beyond the clinic: the negotiation of care is situated in everyday life contexts and should be considered an ongoing negotiation [2]. Our previous work showed how care receivers negotiate the integration of prescribed assistive tools into their home environment and the impact of the care regimen on their own and their families' everyday lives. For example, a woman rejected a prescribed stepping board, as she preferred to use her own stairs for her physiotherapy. In another example, a spouse embedded objects from the home environment into the patient's rehabilitation activities to support the prescribed exercises. These examples illustrate patients' ongoing negotiations with others, objects, and environments to manage their out-of-clinic care. They also extend the role of concordance to include more than in-clinic consultation activities, embracing also the ongoing healthcare practices that take place in everyday life.
Within HCI, there has been a shift during the last 30 years from perceiving people as human factors to considering them as actors within the design process [3, 7, 10]. Today many HCI practitioners acknowledge the benefits of involving different stakeholders in design processes. Participatory Design (PD) is one design approach based on the active and equal involvement of different stakeholders throughout the design process. Initially, much of the PD work in healthcare focused on initiatives coming from professional settings to support the development of technology addressing both patients' and health professionals' needs to improve clinical practice, for example by developing Electronic Patient Record systems to support clinical consultations and medical practices [18]. In recent years, the increasing move of care from the hospital to the home has required the application of diverse PD methods in non-professional settings to actively involve patients in the design process, uncovering their specific needs in relation to their treatments and their everyday life [7, 12, 24]. In non-professional settings, uncovered challenges for patients performing care activities include a lack of motivation to perform the prescribed treatment, individual mental and physical capabilities, the individual's perception of technology, the dynamic use context, the aesthetics of the home and of care technology, and the distribution of control over the setting [7].
An ever-growing number of healthcare IT design and research projects have applied participatory methods to engage patients and doctors in designing healthcare IT systems for later deployment and use. Contemporary PD research does not only consider how to design ready-to-use systems but also investigates meta-designs and designing for design after design (i.e. infrastructuring) to allow people to participate in ongoing design processes (design-in-use) [5]. As such, infrastructuring calls for attention to the active role of people as designers, through use, in developing support for practices that could not be envisioned before use [16]. While participatory healthcare IT researchers have eagerly adopted the ideals of active participation of patients and doctors in the design process (e.g., [7, 22]), they have in general neither considered the role of patients' active participation in shaping their own care regimen after the initial design process has ended, nor designed for such participation. In other words, many healthcare IT designs have adopted the medical ideals of compliance and adherence, designing healthcare IT solutions that ensure or support a patient in following a prescribed regimen (such as tools for remote monitoring [8, 12, 17] and medication reminders [14, 23]). Herein lies a paradox. It seems that healthcare IT projects, while driven by methodological approaches that in their very essence build on models valuing the active participation of different stakeholders (i.e., patients), are stuck with authoritative, de-contextualized models concerning the relationship between doctor, patient, and treatment. Participatory-design-driven healthcare IT projects do exist that have investigated alternative strategies for care, including a renegotiated patient-physician relation and collaboration [1]. Nevertheless, with only a few recent exceptions (e.g. [2]), concordance has not been a design goal in healthcare HCI projects [6]. Indeed, the development in the medical field towards concordance as an alternative to compliance and adherence has largely been overlooked by the healthcare IT design community. Instead, the HCI community has contributed, for example, ideas of gamification and persuasive design to support the existing compliance and adherence models [14].
To critically rethink citizen participation in care and healthcare IT design, and the relationship between designing individualized care regimens and designing the healthcare IT that supports these regimens, may open up new strategies to improve the quality of care. It may also inform both the healthcare and design research fields.
As presented above, many PD healthcare IT projects have been informed by a medical perspective but have not considered the unexpected ways in which people appropriate technology in relation to everyday practices [2]. However, there are critical alternatives to consider when designing future healthcare IT solutions within both the contemporary Participatory Design and medical research fields. We refer in particular, firstly, to how PD research has investigated ongoing design processes, supported by infrastructuring [5], and secondly, to how medical science has turned towards concordance as an alternative to patient compliance and adherence [9]. People negotiate and appropriate care activities and healthcare IT to make them part of their everyday lives [2, 20]. Rather than designing for a specific use, it may therefore be advantageous to support practices of appropriation and negotiation.
There are at least two possible developments in which PD research and the notion of concordance may join forces to provide a critical alternative for future healthcare designs. The first strand (A) is based upon two sequential phases. In phase one, PD is used to co-design healthcare IT that aims for concordance rather than compliance and adherence (as we have previously argued in [2]). In the second phase (and as an extension to [2]), the design outcome of phase one is used by patients to negotiate and integrate concordance-based care practices into their everyday lives. The second strand (B) is an infrastructuring approach in which the design of healthcare IT and the design of a patient's care regimen are conceptually considered equal, and in which the design of both IT and care is ongoing and perceived as one. These two development strands represent critical alternatives to the dominant PD practice in healthcare, where different stakeholders co-design technology for later use under the dominance of compliance and adherence.
Both PD and concordance involve a political stance where diverse stakeholders collaborate on democratic and equal terms. In particular, both approaches empower the citizen, whether a (future) user of technology or a receiver of care, to make an impact on his or her situation through active participation. Indeed, the philosophies behind PD and concordance bear much resemblance, and there may not be such a large conceptual difference between co-designing one's own care (i.e. concordance) and co-designing the technology that will support that very same care. Stated differently: should the design of healthcare IT and the design of one's care regimen be kept as two separate activities, or should they be perceived as one? As the introduction of new technology into a given practice will also change that practice [11], an ongoing design of both IT and care seems favorable. Considering the two strands presented above, strand A may be more straightforward to implement and control (as compared to strand B), as there are two separate phases: phase one focuses on healthcare technology design, and in phase two individualized care activities are designed based on the technology from phase one. These two phases are tightly connected but clearly separated. In strand B, the technology and care design are perceived as one ongoing design process based on the idea of infrastructuring. Strand B offers new possibilities (as compared to strand A) to challenge and inform both the PD and the medical fields. Applying an ongoing design approach (design-in-use) may be a way to discuss participation, facilitating care practices where both care and care IT are co-designed over time. In sum, concordance and PD in concert can provide an alternative approach to designing healthcare IT, informing and strengthening each other.
In this paper, we have presented critical alternatives for future healthcare designs. The proposed alternatives move away from the authoritarian ideals of compliance and adherence towards a more participatory and collaborative doctor-patient relationship. This move implies that the separation between the design and the use of healthcare IT will dissolve. Through ongoing design processes, supported by concordance and infrastructuring, the acts of designing individualized care regimens and designing the healthcare IT to support these regimens may become one. The HCI research community should better understand the effects on the involved stakeholders (e.g. patients) of not only working with concordance as a design ideal, but also working with ongoing design processes where both care regimens and healthcare technology are designed in concert. However, participation as discussed in this paper will require rather resourceful participants, something that should be taken into consideration in future work. These critical alternatives do not only have the potential to point to new directions for healthcare IT design targeting an individual patient. For example, in Europe there is an increased interest in service co-creation within healthcare and in patient involvement in designing healthcare services, as seen in national health policies and NGO initiatives [19, 21]. It could prove useful to investigate ongoing design processes, as described in this paper, as a strategy for future healthcare services. The proposed critical alternatives can inspire designers and researchers to reconsider their design processes when designing out-of-clinic healthcare and technology, to better support concordance-based care practices in people's everyday life.
10. Iversen, O. S., Kanstrup, A. M. and Petersen, M. G., A
visit to the 'new Utopia': revitalizing democracy,
emancipation and quality in co-operative design, In
Proc. NordiCHI 2004, ACM Press(2004), 171-179.
11. Kaptelinin, V. and Bannon, L. J. Interaction Design
Beyond the Product: Creating Technology-Enhanced
Activity Spaces. Human–Computer Interaction, 2011,
12. Kusk, K., Nielsen, D. B., Thylstrup, T., et al., Feasibility of using a lightweight context-aware system for facilitating reliable home blood pressure self-measurements, In Proc. PervasiveHealth 2013, IEEE (2013), 236-239.
13. Moore, K. N. Compliance or Collaboration? the
Meaning for the Patient. Nursing Ethics, 1995, 2(1):7177.
14. Oliveira, R. d., Cherubini, M. and Oliver, N., MoviPill:
improving medication compliance for elders using a
mobile persuasive social game, In Proc. UBICOMP
2010, ACM Press(2010), 251-260.
15. Segal, J. “Compliance” to “Concordance”: A Critical
View. Journal of Medical Humanities, 2007, 28(2):8196.
16. Seravalli, A., Infrastructuring for opening production,
from participatory design to participatory making?, In
Proc. PDC 2012, ACM Press(2012), 53-56.
17. Siek, K., Khan, D., Ross, S., et al. Designing a Personal
Health Application for Older Adults to Manage
Medications: A Comprehensive Case Study. Journal of
Medical Systems, 2011, 35(5):1099-1121.
18. Simonsen, J. and Hertzum, M., A Regional PD Strategy
for EPR Systems: Evidence-Based IT Development, In
Proc. PDC 2006-Vol II, 2006, 125-228.
19. Spencer, M., Dineen, R. and Phillips, A., Co-producing
services – Co-creating health, NHS WALES, 2013.
20. Storni, C. Multiple Forms of Appropriation in SelfMonitoring Technology: Reflections on the Role of
Evaluation in Future Self-Care. International Journal of
Human-Computer Interaction, 2010, 26(5):537-561.
21. The health fundation, Co-creating Health, from:, 2008.
22. Uzor, S., Baillie, L. and Skelton, D., Senior designers:
empowering seniors to design enjoyable falls
rehabilitation tools, In Proc. CHI 2012, ACM
Press(2012), 1179-1188.
23. Vitality - GlowCaps, from:, 2012.
24. Aarhus, R., Ballegaard, S. and Hansen, T., The eDiary:
Bridging home and hospital through healthcare
technology, In Proc ECSCW 2009, Springer(2009), 6383.
We like to thank colleagues from both the medical and HCI
fields in formulating the thoughts expressed in this paper.
1. Andersen, T., Bjørn, P., Kensing, F., et al. Designing for
collaborative interpretation in telemonitoring: Reintroducing patients as diagnostic agents. International
Journal of Medical Informatics, 2011, 80(8):e112-e126.
2. Bagalkot, N. L., Grönvall, E. and Sokoler, T.,
Concordance: design ideal for facilitating situated
negotiations in out-of-clinic healthcare, In Proc. EA
CHI '14, ACM Press(2014), 855-864.
3. Bannon, L., From human factors to human actors: the
role of psychology and human-computer interaction
studies in system design, In Design at work: cooperative
design of computer systems, Lawrence Erlbaum
Associates, Inc., Hillsdale, New Jersey, USA, 1991.
4. Dantec, C. L. and DiSalvo, C. Infrastructuring and the
Formation of Publics in Participatory Design. Social
Studies of Science, 2013, 43(2): 241-264.
5. Ehn, P., Participation in design things, In Proc. PDC
2008, ACM Press(2008), 92-101.
6. Fitzpatrick, G. and Ellingsen, G. A Review of 25 Years
of CSCW Research in Healthcare: Contributions,
Challenges and Future Agendas. Computer Supported
Cooperative Work (CSCW), 2012, 22(4-6):609-665.
7. Grönvall, E. and Kyng, M. On participatory design of
home-based healthcare. Cognition, Technology & Work,
2013, 15(4):389-401.
8. Grönvall, E. and Verdezoto, N., Beyond SelfMonitoring: Understanding Non-functional Aspects of
Home-based Healthcare Technology, In Proc. UbiComp
2013, ACM Press(2013), 587-596.
9. Horne, R., Weinman, J., Barber, N., et al. Concordance,
adherence and compliance in medicine taking. London:
NCCSDO, 2005.
Networked Privacy Beyond the Individual:
Four Perspectives to ‘Sharing’
Airi Lampinen
Mobile Life Centre,
Stockholm University
Stockholm, Sweden
[email protected]
Synthesizing prior work, this paper provides conceptual
grounding for understanding the dialectic of challenges and
opportunities that social network sites present to social life.
With the help of the framework of interpersonal boundary
regulation, this paper casts privacy as something people do,
together, instead of depicting it as a characteristic or a
possession. I illustrate interpersonal aspects of networked
privacy by outlining four perspectives to ‘sharing’. These
perspectives call for a rethink of networked privacy beyond
an individual’s online endeavors.

Author Keywords
Privacy; interpersonal boundary regulation; social network
site; Facebook; Couchsurfing

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI):

Social network sites (SNSs), such as Facebook, MySpace,
LinkedIn, and Instagram, present people with novel
opportunities to maintain social ties; craft an online
presence; and, as a result, gain access to social validation
and meaningful feedback [14]. As these platforms have
become part of everyday life for millions of people, severe
concerns have been raised regarding their implications for
social interaction and societies at large. This paper provides
grounding for understanding the dialectic of challenges and
opportunities that social network sites present to daily social
life by building on an understanding of networked privacy
that has been developed in and around Human–Computer
Interaction over the past decade (for early work, see e.g. [10]).

It is worth noting that the nature of privacy is a complex
and long-standing question that attracted the attention of
scholars in several disciplines well before the rise of
mainstream social media (for a review, see [8]). Rather than
attempting a general discussion of privacy, this paper
narrows in on a more limited set of issues by addressing
interpersonal aspects of networked privacy.

In the following, I deploy the framework of interpersonal
boundary regulation [1] in order to approach privacy as
something people do together, i.e. a collaborative activity,
instead of depicting it as a characteristic of a piece of
content or a possession of an individual. I provide examples
of how the negotiation of accessibility and inaccessibility
that characterizes social relationships plays out in the
context of social network sites by outlining four
perspectives to ‘sharing’ that have emerged from prior
empirical studies. These perspectives call for a rethink of
privacy in our networked age beyond the individual and
his/her online endeavors.

Ellison and boyd [4, p. 158] characterize a social network
site as ‘a networked communication platform in which
participants (1) have uniquely identifiable profiles that
consist of user-supplied content, content provided by other
users, and/or system-provided data; (2) can publicly
articulate connections that can be viewed and traversed by
others; and (3) can consume, produce, and/or interact with
streams of user-generated content provided by their
connections on the site.’ Nowadays, social network sites are
commonly accessed through a variety of devices and
platforms, such as applications on mobile devices. People
interact with social network sites not only when they access
them directly; rather, they may come across integrated
features almost anywhere. For instance, Facebook’s ‘Like’
feature, which allows people to indicate with a click their
approval and enjoyment of pieces of content, is now nearly
ubiquitous on the Web.

The widespread adoption of social network sites into
everyday interaction affects sociality more broadly than just
in terms of the activities that take place on these platforms.
Ubiquitous access to social network sites and participation
in social interaction via them is of course not universal.
However, social network sites have become a pervasive
part of social life for many and, at least in large parts of the
Western world, even those who do not use them in their
day-to-day life are embedded in social settings that are
shaped by their existence. Typically even those who opt out
of using SNSs have, and indeed must have, an opinion on
them. Finally, refusing to join a social network site does not
mean that one would not be referred to or featured in the
service (for instance, in photos or textual anecdotes that
others share).
The widespread adoption of SNSs challenges customary
mechanisms for regulating interpersonal boundaries. For
instance, when it comes to social interaction via these
services, it is not reasonable to rely solely on the supportive
structures that time and space can provide. In 2003, Palen
and Dourish [10] paved the way for ongoing research into
interpersonal networked privacy by raising the issue of
boundary regulation in an increasingly networked world.
While the boundary regulation framework originates in
Altman’s considerations of social life in physical settings
[1], offline activities have often been bypassed when
examining boundary regulation in the context of SNSs. Yet,
the social interaction that takes place in SNSs is embedded
in the wider fabric of social relationships that surround
individuals. Boundary regulation efforts must therefore be
understood holistically, since the broad adoption of these
platforms weaves them tightly into everyday life, with
consequences also for those who cannot or will not use them.
The framework of boundary regulation [1] was first
introduced by social psychologist Irwin Altman in the
1970s. Over the past decades, it has been developed further,
most prominently as Communication Privacy Management
theory [11,12]. Interpersonal boundary regulation refers to
the negotiation of accessibility and inaccessibility that
characterizes social relationships [2]. Interpersonal
boundary regulation is a core process of social life:
Interpersonal boundaries are constantly regulated through
negotiations that draw lines of division between self and
others, and ‘us’ versus ‘them’. They are used to structure
how and with whom people interact. When successful,
interpersonal boundary regulation allows people to come to
terms with who they are and how they relate with one
another. In contrast, less successful or failing efforts to
regulate boundaries are experienced as conflict, confusion,
and clashes in expectations, that is, as boundary turbulence
[11] that necessitates re-negotiating rules of regulation.
Finally, the idea that not just individuals, but groups, too,
need to regulate their boundaries was a part of Altman’s [1]
original characterization of privacy as boundary regulation.
Yet, this notion of group privacy has received scarce
scholarly attention within HCI, even though Petronio and
colleagues [11,12] have elaborated explicitly on how
individuals and groups make decisions over revealing or
concealing private information, proposing that individuals
develop and use rules collaboratively to control information
flows. In contrast to Altman, whose work focuses mainly on
personal boundaries, Petronio pays particular attention to
how people manage collectively co-owned information and
collective boundaries, such as those within families and
groups of friends. Both Altman and Petronio suggest that
individuals need to regulate interpersonal boundaries
proactively in order to achieve a desired degree of privacy.
As individuals, groups, and societies, we regulate access to
social interaction both by how physical spaces are built and
through the behaviors that take place in them. Boundary
regulation practices abound from making or avoiding eye
contact to closing or opening a door. Boundaries of
professional and leisure life are negotiated in intricate ways
that range from what people wear and eat to with whom,
about what, and in what kind of language they converse [9].
These practices are applied to achieve contextually
desirable degrees of social interaction as well as to build
and sustain relations with others and with the self.
While interpersonal boundaries are not just an analogue of
physical demarcations, they are, in part, determined by
physical structures. For instance, walls, doors, windows,
and furniture shape access to various spaces and to the
social interaction that can take place in them. Access to
visual and auditory information is limited by what is
physically possible [3]: Only people within a restricted
geographical radius at any given time can see and hear what
is being said and done by co-present others. Those
witnessing an incident in an unmediated situation can tell
others about it, carrying it further, but there are only so
many people who can observe an event first-hand. Sharing
one’s experiences and observations with people who are not
present requires effort. Moreover, in face-to-face situations,
those doing and saying things typically have a fairly good
sense of who can see and hear their actions, as they can
observe their audience in a shared temporal and spatial
setting. Someone could, of course, be eavesdropping out of
sight, but having such an unintended audience would be
exceptional and violate commonly held rules of decorum.

This section illustrates different interpersonal aspects of
networked privacy by outlining four perspectives to
‘sharing’ that takes place in the context of SNSs. The
notion of ‘sharing’ is grounded in prior empirical work on
networked privacy [5,6,7,13] where it has emerged as a
central way in which people talk about their engagement
with SNSs and their efforts to manage what gets published,
to whom, and with what kinds of implications. I draw on
prior qualitative, interpretative analyses of ongoing
sense-making surrounding social life in the context of SNSs
and of the emerging practices for regulating interpersonal
boundaries in settings wherein (1) people may share content
with multiple groups at once, (2) others may share content
on their behalf, (3) sharing can be done via automated
mechanisms, and (4) people may share as a group. Instead
of treating ‘sharing’ as a monolithic, well-defined activity, I
consider and contrast the sharing of manually selected
digital content; the streaming of automatically tracked
behavioral information; and, acts of sharing that directly
challenge the hypothetical online–offline divide.

Sharing with Multiple Groups
Social network sites may bring about group co-presence, a
situation in which many groups important to an individual
are simultaneously present in one context and their presence
is salient for the individual [6]. Group co-presence makes it
difficult to keep traditionally separate social spheres apart
from one another. Maintaining a broad social network in an
SNS may lead to a sense of having to present oneself and
one’s social connections consistently with everyone. In an
early study [6], participants reported few social tensions
related to group co-presence on Facebook per se, but the
authors argue that group co-presence was unproblematic
only insofar as it was made unproblematic with the aid of
behavioral and mental practices. The challenges related to
this tendency of SNSs to flatten diverse audiences into a
single group have since been well documented, often under
the rubric of context collapse (for an overview, see [16]). A
central challenge in sharing with multiple groups at once
lies in regulating interpersonal boundaries in a situation
where it is hard to keep track of to whom access is
provided, or to figure out whether or when these audiences
interact with content that is made available to them.

Sharing on Behalf of Others
While individuals can regulate interpersonal boundaries on
their own, ultimately their success always relies on others’
support of their efforts to draw boundaries in a certain way.
Interpersonal boundary regulation is best understood as
cooperative on two levels: First, cooperation is manifested
in continuous, subtle acts of contesting or supporting
others’ efforts: maintaining a boundary or making a
successful claim to identity requires that others affirm one’s
actions and support the definition of oneself that is put
forward. Lampinen et al. [7] report on Facebook users
reckoning with how and by whom the content that they
shared would be viewed and interpreted. Some pondered
whether the content they were sharing might inadvertently
create challenges for a friend. Participants’ efforts to
cooperate included considerate acts of sharing, discretion in
self-censorship, and benevolent interpretation. Second, in
response to the need to deal with content disclosed by
others and with one’s power to share on behalf of others,
individuals occasionally cooperated more explicitly, too.
This involved coming to agreement on shared codes of
conduct through explicit negotiation of what to share, with
whom, and under what conditions [7]. While interpersonal
boundary regulation was not readily discussed as explicit
cooperation, participants placed a strong emphasis on being
trustworthy and considerate [7]. Indicating a willingness to
take others into consideration and an awareness of reliance
on others’ consideration, these participants saw this not
only as something desired of others but also as a standard
they strove to live up to.

Sharing via Automated Mechanisms
Providing a contrast to the aforementioned examples of
manual sharing on Facebook, a music-centered service,
where the main content that is being shared is information
about music listening, is an instance of a class of services
that allow users to stream behavioral data via automated
sharing mechanisms. Automated sharing mechanisms are
promoted as a means to make sharing increasingly
effortless and authentic in its presumed completeness, but
research indicates that much work can go into regulating
what is being shared via them [13]. Interpersonal boundary
regulation efforts in the presence of automated sharing
mechanisms entail decisions that extend beyond choices
over sharing per se and affect, instead, how participants
behave in the first place: Uski & Lampinen [15] note that
while norms concerning manual sharing on Facebook focus
mainly on what and how one should share, the norms
sanctioning automated sharing target primarily music
listening, that is, the ‘actual’ behavior. In other words, when
it comes to interpersonal boundary regulation in the
presence of automated sharing, there are, first, interpersonal
boundary regulation practices that regulate what is
publicized, and second, efforts to regulate what to do and,
consequently, what kind of data will be available for
sharing later on. These efforts show how users’ agency to
regulate interpersonal boundaries is restricted neither to
adjusting the privacy settings provided within an SNS nor
to selecting from among the choices services propose to
their users in the user interface. Finally, it is worth noting
that users engage in efforts to regulate sharing not only
when the sharing mechanism risks publicizing content that
they do not want to have in their profiles but also when the
sharing mechanism fails to publicize content that it should
have and that users want to appear in their profiles [13, 15].
This illustrates well how boundary regulation is not solely a
matter of restricting access but also one of providing it.

Sharing as a Group
Shifting the focus further beyond mainstream social
network sites, consider Couchsurfing, a service where
members can engage in hospitality exchange by hosting
visitors or by staying with others as guests. In a recent
study, Lampinen [5] examined how multi-person households
share accounts and regulate access to their domestic sphere
as they welcome ‘couchsurfers’. Profiles on Couchsurfing
serve to present two interwoven but distinguishable aspects
of households: the domestic space and the people who live
in it. The study touches upon group privacy by illustrating
how, beyond sharing credentials, account sharing involves
complex negotiations over how to present multiple people
in a single profile, how to coordinate the household’s
outfacing communication and decisions over whether to
grant potential visitors access to the home, as well as how
to share the benefits of a good reputation in a fair way.
Here, sharing as a group entails multi-faceted boundary
regulation that takes place not only in online interaction but
also in the course of interacting with guests face-to-face.
Thus, the study challenges the hypothetical online–offline
divide with its focus on the practice of hosting, where
’sharing’ online and offline are tightly interwoven.
Although social network sites have characteristics that
disrupt conventional premises of interpersonal boundary
regulation, boundaries are regulated through co-operative
processes also in their presence. In fact, the use of social
network sites may even make the performative nature of
social life more visible than is usual or desirable. Increased
awareness of the work that goes into achieving appropriate
social interaction and sustaining meaningful relationships
may feel uncomfortable as it challenges the illusion of the
effortless authenticity of everyday socializing.

This paper invites us to consider the limitations inherent in
conceiving of privacy management as an individual-level
endeavor to do with control over information. It does so by
synthesizing research that illustrates how people regulate
both individual and collective boundaries in cooperation
with one another, with consideration for others, and in line
with the affordances of different technologies and relevant
social norms. When the tools and features of a service
create constraints in day-to-day life, people may opt to
lessen their engagement with the service [6,7] or to change
the way they behave in order to manage what is being
shared [7,13].

To design technologies and policies that are supportive of
people’s boundary regulation aims and practices in an
increasingly networked world, we first need to understand
those pursuits and the reasoning behind them. By
highlighting the cooperative nature of these everyday
efforts to ‘make the world work’, this paper calls for a
rethink of networked privacy beyond the individual and
his/her online endeavors.

This work has been supported by the Mobile Life VINN
Excellence Centre. I thank my colleagues at Mobile Life,
University of Helsinki, Helsinki Institute for Information
Technology HIIT, University of California, Berkeley, and
Microsoft Research New England.

1. Altman, I. (1975). The environment and social behavior:
Privacy, personal space, territory, crowding. Monterey, CA:
Brooks/Cole Pub. Co.
2. Altman, I., & Gauvain, M. (1981). A cross-cultural and
dialectic analysis of homes. In Spatial representation and
behavior across the life span: Theory and application. New
York: Academic Press.
3. boyd, d. (2008). Why youth ♥ social network sites: The
role of networked publics in teenage social life. In D.
Buckingham (ed.), Youth, identity, and digital media (pp.
119–142). Cambridge, MA: MIT Press.
4. Ellison, N. B., & boyd, D. M. (2013). Sociality through
social network sites. In W. H. Dutton (ed.), The Oxford
handbook of Internet studies (pp. 151–172). Oxford
University Press.
5. Lampinen, A. (2014). Account Sharing in the Context of
Networked Hospitality Exchange. In Proceedings of the 2014
Conference on Computer-Supported Cooperative Work (pp.
499–501). ACM.
6. Lampinen, A., Tamminen, S., & Oulasvirta, A. (2009). All
my people right here, right now: Management of group
co-presence on a social networking site. In Proceedings of
the 2009 International Conference on Supporting Group
Work (pp. 281–290). ACM.
7. Lampinen, A., Lehtinen, V., Lehmuskallio, A., &
Tamminen, S. (2011). We're in it together: Interpersonal
management of disclosure in social network services. In
Proceedings of the 2011 Annual Conference on Human
Factors in Computing Systems (pp. 3217–3226). ACM.
8. Newell, P. B. (1995). Perspectives on privacy. Journal of
Environmental Psychology, 15(2), 87–104.
9. Nippert-Eng, C. E. (1996). Home and work: Negotiating
boundaries through everyday life. Chicago: University of
Chicago Press.
10. Palen, L., & Dourish, P. (2003). Unpacking privacy for a
networked world. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (pp. 129–136).
ACM.
11. Petronio, S. S. (2002). Boundaries of privacy: Dialectics
of disclosure. Albany, NY: State University of New York
Press.
12. Petronio, S. (2013). Brief status report on communication
privacy management theory. Journal of Family
Communication, 13(1), 6–14.
13. Silfverberg, S., Liikkanen, L. A., & Lampinen, A. (2011).
‘I'll press play, but I won’t listen’: Profile work in a
music-focused social network service. In Proceedings of the
2011 Conference on Computer-Supported Cooperative Work
(pp. 207–216). ACM.
14. Stern, S. (2008). Producing sites, exploring identities:
Youth online authorship. In D. Buckingham (ed.), Youth,
identity, and digital media (pp. 95–117). Cambridge, MA:
MIT Press.
15. Uski, S., & Lampinen, A. (2014). Social norms and
self-presentation on social network sites: Profile work in
action. New Media & Society.
16. Vitak, J. (2012). The impact of context collapse and
privacy on social network site disclosures. Journal of
Broadcasting & Electronic Media, 56(4), 451–470.
Personal Data: Thinking Inside the Box

Amir Chaudhry, Jon Crowcroft, Heidi Howard,
Anil Madhavapeddy, Richard Mortier
University of Cambridge
[email protected]

Hamed Haddadi
Queen Mary University of London
[email protected]

Derek McAuley
University of Nottingham
[email protected]

We are in a ‘personal data gold rush’ driven by advertising
being the primary revenue source for most online companies.
These companies accumulate extensive personal data about
individuals with minimal concern for us, the subjects of this
process. This can cause many harms: privacy infringement,
personal and professional embarrassment, restricted access to
labour markets, restricted access to best value pricing, and
many others. There is a critical need to provide technologies
that enable alternative practices, so that individuals can
participate in the collection, management and consumption of
their personal data. In this paper we discuss the Databox, a
personal networked device (and associated services) that
collates and mediates access to personal data, allowing us to
recover control of our online lives. We hope the Databox is a
first step to re-balancing power between us, the data subjects,
and the corporations that collect and use our data.

Author Keywords
Personal Data; Computer Systems; Privacy
ACM Classification Keywords
Information systems applications; Computing platforms

Many Internet businesses rely on extensive, rich data collected
about their users, whether to target advertising effectively or
as a product for sale to other parties. The powerful network
externalities that exist in a rich dataset collected about a large
set of users make it difficult for truly competitive markets to
form. A concrete example can be seen in the increasing range
and reach of the information collected about us by third-party
websites, a space dominated by just two or three players [7].
This dominance has a detrimental effect on the wider
ecosystem: online service vendors find themselves at the
whim of large platform and API providers, hampering
innovation and distorting markets.

Context sensitivity, opacity of data collection and drawn
inferences, trade of personal data between third parties and
data aggregators, and recent data leaks and privacy
infringements all motivate means to engage with and control
our personal data portfolios. However, technical solutions that
ignore the interests of advertisers and analytics providers, and
so remove or diminish the revenues supporting our “free”
services and applications, will fail [12, 25].

Personal data management is considered an intensely personal
matter, however. Dourish argues that individual attitudes
towards personal data and privacy are very complex and
context dependent [5]. A recent three-year study showed that
the more people disclosed on social media, the more privacy
they said they desired (We Want Privacy, but Can’t Stop
Sharing, Kate Murphy, New York Times, 2014-10-05). This
paradox implies dissatisfaction about what participants
received in return for exposing so much about themselves
online and yet, “they continued to participate because they
were afraid of being left out or judged by others as unplugged
and unengaged losers”. This example also indicates the
inherently social nature of much “personal” data: it is
impractical to withdraw from all online activity just to protect
one’s privacy [3].

A host of other motivations and uses for such a Databox have
been presented elsewhere [20, 17, 9]. These include
privacy-conscious advertising, market research, personal
archives, and analytical approaches to mental and physical
health by combining data from different sources such as
wearable devices, Electronic Health Records, and Online
Social Networks. All these examples point to a need for
individuals to have tools that allow them to take more explicit
control over the collection and usage of their data and the
information inferred from their online activities.

To address this need we propose the Databox, enabling
individuals to coordinate the collection of their personal data,
and to selectively and transiently make those data available
for specific purposes. Following the Human-Data Interaction
model [18], a Databox assists in provision of:
• Legibility: means to inspect and reflect on “our” data, to
understand what is being collected and how it is processed.
• Agency: means to manage “our” data and access to it,
enabling us to act effectively in these systems as we see fit.
• Negotiability: means to navigate data’s social aspects, by
interacting with other data subjects and their policies.
We do not envisage Databoxes entirely replacing dedicated,
application-specific services such as Facebook and Gmail.
Such sites that provide value will continue receiving personal data to process, in exchange for the services they offer.
Databox simply provides its user with means to understand,
control and negotiate access by others to their data across a
range of sites, including such currently dominant players. As
a physical object it offers a range of affordances that purely
virtual approaches cannot, such as located, physical interactions based on its position and the user’s proximity.
forms of cooperation, by third-party services and data aggregators. The Databox can be used as a central point for negotiating such data access and release rights.
Controlled Access. Users must have fine-grained control
over the data made available to third parties. At the very least,
the Databox must be selectively queryable, though more complex possibilities include supporting privacy-preserving data
analytics techniques, such as differential privacy [6] and homomorphic encryption [21]. A key feature of the Databox
is its support for revocation of previously granted access. In
systems where grant of access means that data can be copied
elsewhere, it is effectively impossible to revoke access to the
data accessed. In contrast, a Databox can grant access to process data locally without allowing copies to be taken of raw
data unless that is explicitly part of the request. Subsequent
access can thus easily be revoked [16]. A challenge is then to
enable users to make informed decisions concerning the impact of releasing a given datum as this requires an understanding of the possible future information-states of all third parties
that might access the newly released datum. One way to simplify this is to release data only after careful and irreversible
aggregation of results to a degree that de-anonymisation becomes impossible. More complex decisions will require an
on-going dialogue between the user and their Databox, to assist in understanding the impact of their decisions.
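As a toy illustration of this access model (a sketch under our own assumptions, not the Databox implementation; all class and method names here are hypothetical), raw records can stay local while third parties hold revocable tokens that permit only aggregate queries; revoking a token makes later queries fail:

```python
import secrets

class ToyDatabox:
    """Hypothetical sketch: raw data stays local; third parties
    receive revocable tokens that permit aggregate queries only."""

    def __init__(self, records):
        self._records = records          # raw data, never handed out
        self._grants = {}                # token -> permitted aggregate

    def grant(self, aggregate):
        """Issue a token allowing one aggregate, e.g. 'count' or 'mean'."""
        token = secrets.token_hex(8)
        self._grants[token] = aggregate
        return token

    def revoke(self, token):
        """Invalidate a token; subsequent queries raise PermissionError."""
        self._grants.pop(token, None)

    def query(self, token, field):
        aggregate = self._grants.get(token)
        if aggregate is None:
            raise PermissionError("access revoked or never granted")
        values = [r[field] for r in self._records]
        if aggregate == "count":
            return len(values)
        if aggregate == "mean":
            return sum(values) / len(values)
        raise ValueError("unsupported aggregate")

box = ToyDatabox([{"hr": 62}, {"hr": 71}, {"hr": 68}])
t = box.grant("mean")
print(box.query(t, "hr"))   # the aggregate is released, raw records are not
box.revoke(t)               # later queries with t raise PermissionError
```

Because processing happens locally and only aggregates leave the box, revocation is effective: there is no raw copy elsewhere to recall.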
Nor is the Databox oriented solely to privacy and prevention of activities involving personal data. It enables new applications that combine data from many silos to draw inferences presently unavailable. By redressing the extreme asymmetries in power relationships in the current personal data
ecosystem, the Databox opens up a range of market and social
approaches to how we conceive of, manage, cross-correlate
and exploit “our” data to improve “our” lives.
What features must a Databox provide to achieve these aims?
We answer in four parts: it must be a trusted platform providing facilities for management of data at rest for the data
subjects as well as controlled access by other parties wishing
to use their data, and supporting incentives for all parties.
Trusted Platform. Your Databox coordinates, indexes, secures and manages data about you and generated by you.
These data can remain in many locations, but it is the Databox
that holds the index and delegates the means to access that
data. It must thus be highly trusted: the range of data at its
disposal is potentially far more intrusive – as well as more
useful – when compared to data available to traditional data
silos. Thus, although privacy is not the primary goal of the
Databox, there are clear requirements on the implementation
of the Databox to protect privacy [11]. Trust in the platform
requires strong security, reliable behaviour and consistent
availability. All of the Databox’s actions and behaviours must
be supported by pervasive logging, with associated tools, so
that users and (potentially) third-party auditors can build trust
that the system is operating as expected and, should something unforeseen happen, the results can at least be tracked.
We envisage such a platform as having a physical component,
perhaps in the form-factor of an augmented home broadband
router, under the direct physical control of the individual.
Thus, while making use of and collating data from remote
cloud services, it would also manage data that the individual
would not consider releasing to any remote cloud platform.
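The pervasive logging this trust argument relies on could, for instance, take the form of a hash-chained, append-only audit log: each entry commits to the digest of its predecessor, so later tampering with the record of the Databox's actions is detectable by the user or a third-party auditor. The class below is an illustrative sketch under that assumption, not a description of an existing implementation.

```python
import hashlib
import json

class AuditLog:
    """Append-only log in which every entry commits to the digest of
    its predecessor, so modifying or deleting an earlier entry breaks
    every later link and is detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, action, detail):
        prev = self.entries[-1]["digest"] if self.entries else self.GENESIS
        body = {"action": action, "detail": detail, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "digest": digest})

    def verify(self):
        """Recompute the chain; any mismatch reveals tampering."""
        prev = self.GENESIS
        for e in self.entries:
            body = {"action": e["action"], "detail": e["detail"], "prev": prev}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["digest"] != expected:
                return False
            prev = e["digest"]
        return True
```

A user or auditor would periodically run `verify()` (and compare the latest digest against an externally held copy) to build confidence that the logged history has not been rewritten.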
Supporting Incentives. A consequence of the controlled access envisioned above is that users may deny third-party services access to data. The Databox must thus offer services
alternate means to charge the user: those who wish to pay
through access to their data may do so, while those who do
not may pay through more traditional financial means. One
possible expression of this would be to enable the Databox
to make payments, tracing them alongside data flows to and
from different third-party services made available via some
form of app store. Commercial incentives include having the
Databox act as a gateway to personal data currently in other
silos, and as an exposure reduction mechanism for commercial organisations. This removes their need to be directly responsible for personal data, with all the legal costs and constraints that entails, instead giving control over to the data subject. This is particularly relevant for international organisations that must be aware of many legal frameworks. A simple
analogy is online stores’ use of payment services (e.g., PayPal, Google Wallet) to avoid the overhead of Payment Card Industry (PCI) compliance.
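Purely as an illustration of the pay-with-data-or-money choice described above, a Databox-side tariff decision might look like the following sketch; the manifest and policy structures are invented for this example and do not correspond to a real app-store format.

```python
def choose_tariff(service_manifest, sharing_policy):
    """Decide how a user pays a third-party service: by granting the
    metered data access it requests (if the user's policy permits every
    requested source) or by a conventional monetary fee.

    Both arguments are hypothetical structures invented for this sketch."""
    requested = set(service_manifest["data_sources"])
    permitted = set(sharing_policy["shareable_sources"])
    if requested <= permitted:
        # Pay with data: grant exactly what was asked for, nothing more.
        return {"mode": "data", "grant": sorted(requested)}
    # Otherwise fall back to a traditional financial payment.
    return {"mode": "payment", "amount": service_manifest["fee"]}

# A fitness service asking for step counts, which this user shares:
manifest = {"data_sources": ["steps"], "fee": 1.99}
policy = {"shareable_sources": ["steps", "location"]}
decision = choose_tariff(manifest, policy)
```

The same mechanism gives the Databox a natural point at which to record payments alongside data flows, as suggested above.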
Data Management. A Databox must provide means for
users to reflect upon the data it contains, enabling informed
decision-making about their behaviours, and particularly
whether to delegate access to others. As part of these interactions, and to support trust in the platform, users must be
able to edit and delete data via their Databox as a way to handle the inevitable cases where bad data is discovered to have
been inferred and distributed. Similarly, it may be appropriate for some data to not exhibit the usual digital tendency of
a perfect record. Means to enable the Databox automatically to
forget data that are no longer relevant or have become untrue
may increase trust in the platform by users [15]. Even if data
has previously been used, it may still need to be “put beyond
use” [2]. Concepts such as the European Union’s Right to
be Forgotten require adherence to agreed protocols and other supporting mechanisms.
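The automatic forgetting discussed above could be prototyped as a store in which every datum carries an expiry and is deleted (not merely hidden) once it lapses, with an explicit operation to put a datum beyond use at any time. The interface below is a hypothetical sketch; the clock is injected so that expiry is testable.

```python
class ForgetfulStore:
    """Key-value store in which data expires after a per-item
    time-to-live, and can be explicitly forgotten at any time.
    Expired items are deleted outright rather than hidden."""

    def __init__(self, clock):
        self._items = {}     # key -> (value, expires_at)
        self._clock = clock  # callable returning the current time in seconds

    def put(self, key, value, ttl):
        self._items[key] = (value, self._clock() + ttl)

    def get(self, key):
        self._sweep()
        entry = self._items.get(key)
        return entry[0] if entry else None

    def forget(self, key):
        """Put a datum beyond use regardless of its expiry [2]."""
        self._items.pop(key, None)

    def _sweep(self):
        """Delete every item whose expiry has passed."""
        now = self._clock()
        expired = [k for k, (_, exp) in self._items.items() if exp <= now]
        for k in expired:
            del self._items[k]
```

A real Databox would additionally have to propagate such deletions to any parties previously granted access, which is exactly where the revocation mechanism above comes into play.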
Many past systems provide some or all of these features, but
none have really been successful due, we claim, to fundamental barriers that have yet to be coherently addressed.
Trust. The growth of third party cloud services means users
must trust, not only the service they are directly using, but
also any infrastructure providers involved, as well as other
parties such as local law enforcement. If the Databox is to
take such a central place in our online lives, it must be trusted.
Two key aspects stand out here for the Databox: (i) the need
to trust that it will protect the user against breach of data
due to attacks such as inference across different datasets; and
(ii) the need to trust that the software running on it is trustworthy and not acting maliciously. Two features of the design
support this. First, all keys remain with the user themselves
such that not even a Databox provider can gain access without permission. Second, by making a physical artefact a key
component in a user’s Databox, e.g., a low-energy computing
device hosted in their home, physical access controls – including turning it off or completely disconnecting it from all
networks – can be applied to ensure data cannot leak. While
this minimises the need to trust third parties, it increases trust
placed in the software: we mitigate this by using open-source
software built using modern languages (OCaml) on platforms
(Xen) that limit the Trusted Computing Base, mitigating several classes of attack to which existing software is vulnerable.
In response to growing awareness about how our data is processed, many startups have formed in recent years aiming to put users explicitly in control of their personal data or metadata. They typically provide platforms through which users can permit advertisers and content providers to enjoy metered access to valuable personal data, e.g., OpenPDS [4]. In exchange, users may benefit by receiving a portion of the monetary value generated from their data as it is traded in an increasingly complex ecosystem [7]. Considering the churn experienced in the personal data startup space, with so many
new but typically short-lived entrants, it seems that few truly
viable models have yet been discovered. Our belief is that the
power of personal data can only be realised when proper consideration is given to its social character, and it can be legibly
combined with data from external sources. In this case, we
might anticipate many potential business models [22].
Unfortunately, these approaches typically entail lodging all
personal data into cloud-hosted services giving rise to concerns about privacy and future exploitation or mistaken release of data. In contrast, your Databox retains data – particularly control over that data – locally under your sole control. From a technology point of view, the general approach of
Databox is that of “privacy by design”, though it remains to be
seen if it can be successful in a space such as this, where policy and technology need to co-evolve. In order to sell personal
data, there needs to be a method for determining the marginal
rate of substitution (the rate at which the consumer is willing to substitute one good for another) for personal data. The
sale of personal data and insights derived from it is the key
utility in this ecosystem, and individuals’ preferences are the
fundamental descriptors and success indicators.
Usability. Personal data is so complex and rich that treating
it homogeneously is almost always a mistake and, as noted
above, user preferences in this space are complex: socially
derived and context dependent. A very broad range of intents
and requirements must be captured and expressed in machine-actionable form. We will build on techniques developed in
the Homework platform [19] which prototyped and deployed
a range of novel task-specific interfaces that assisted users in
the complex business of managing their home networks. Deciding which devices should be able to share in and access
the digital footprint, even before considering sharing with
other people, makes it even harder. Issues such as mixed,
sometimes proprietary data formats, high variability in datum
sizes, the multiplicity of standards for authentication to different systems to access data, lack of standard data processing
pipelines and tools, and myriad other reasons make this job
complex and time consuming. In addition, most data is inherently shared in that it implicates more than one individual
and thus ownership and the concomitant right to grant access
is not always clear. Consider, e.g., cloud email services like Gmail: even if a user opts out by not using Gmail, there is a high chance that a recipient of their email is using it.
Governments and regulatory bodies have attempted to impose
regulatory frameworks that force the market to recognise certain individual rights. Unfortunately, legal systems are not
sufficiently agile to keep up with the rapid pace of change in
this area. Attempts at self-regulation such as the Do Not Track
headers1 are ineffective, with only an insignificant fraction of
services in compliance [7]. It is even possible that there may
be a shift towards consumer protection legislation, as opposed
to the current prevalence of informed consent [13].
We are in an era where aggregation and analysis of personal
data fuels large, centralised, online services. The many capabilities offered by these services have come at significant
cost: loss of privacy and numerous other detrimental effects on the well-being of individuals and society. Many have commented that people simply do not see the need for technologies like this until they suffer some kind of harm from the
exploitation of their data. On the other hand, it has been argued that privacy is negotiated through collective dynamics,
and hence society reacts to the systems that are developed
and released [10]. We speculate that data management may
become a mundane activity in which we all engage, directly
or through some representative, to a greater or lesser extent.
Cost. There are a range of incentives that must align for the
success of such a platform. The day-to-day costs of running
a Databox have to be acceptable to users. Similarly, costs
that third-parties incur when accessing the system will have
to be recouped, including perhaps recompensing for access to
data that previously they would have simply gathered themselves. It remains to be seen how this can be done in practice:
Are users willing and able to pay in practice? What will be
the response of users when offered pay-for versions of previously free-to-use services? There is some evidence that some
users will be willing to make this trade-off [1], but studies
also show that the situation is complex [24].
We have proposed the Databox as a technical platform that
would provide means to redress this imbalance, placing the
user in a position where they can understand, act and negotiate in this socio-technical system. By acting as a coordination point for your data, your Databox will provide
means for you to reflect on your online presence, restore to
you agency over your data, and enable a process of negotiation with other parties concerning your data. Even if the
Databox as currently conceived is not a perfect solution, only
by taking initial, practical steps can we elicit the necessary
knowledge to improve the state-of-the-art. We do not believe
further progress can be made without focused effort on the
practical development and deployment of the technologies involved. E.g., Before addressing the complex problems of comanaging data [3], a Databox that enables personal data to be
collated and reflected upon will allow individuals to explore
workflows managing both their own data and, through ad hoc
social interaction, data involving other stakeholders.
Thus we have begun development of underlying technologies for Databox: Nymote and its constituent components of MirageOS [14], Irmin [8] and Signpost [23]. In addition, the community is developing methodologies for indexing and tracking the personal data held about us by third parties. However, the successful widespread deployment of such a platform will require that we tackle many significant issues of trust, usability, complexity and cost in ways that are transparent and scalable. Resolving questions such as those above requires that we develop and study Databoxes in-the-wild, in partnership with individuals, consumer rights groups, privacy advocates, the advertising industry, and regulators.

Acknowledgements. Work supported in part by the EU FP7 UCN project, grant agreement no 611001. Dr Haddadi visited Qatar Computing Research Institute during this work.

1. Acquisti, A., John, L. K., and Loewenstein, G. What is privacy worth? Journal of Legal Studies 42, 2 (2013).
2. Brown, I., and Laurie, B. Security against compelled disclosure. In Proc. IEEE ACSAC (Dec. 2000), 2–10.
3. Crabtree, A., and Mortier, R. Human data interaction: Historical lessons from social studies and CSCW. In Proc. ECSCW (Oslo, Norway, Sept. 19–23 2015).
4. de Montjoye, Y.-A., Shmueli, E., Wang, S. S., and Pentland, A. S. openPDS: Protecting the privacy of metadata through SafeAnswers. PLoS ONE 9, 7 (July 2014), e98790.
5. Dourish, P. What we talk about when we talk about context. PUC 8, 1 (Feb. 2004), 19–30.
6. Dwork, C. Differential privacy. In Automata, Languages and Programming, M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, Eds., vol. 4052 of LNCS. Springer, 2006, 1–12.
7. Falahrastegar, M., Haddadi, H., Uhlig, S., and Mortier, R. Anatomy of the third-party web tracking ecosystem. CoRR abs/1409.1066 (2014).
8. Gazagnaire, T., Chaudhry, A., Crowcroft, J., Madhavapeddy, A., Mortier, R., Scott, D., Sheets, D., and Tsipenyuk, G. Irmin: a branch-consistent distributed library database. In Proc. ICFP OCaml User and Developer Workshop (Sept. 2014).
9. Guha, S., Reznichenko, A., Tang, K., Haddadi, H., and Francis, P. Serving ads from localhost for performance, privacy, and profit. In Proc. ACM Workshop on Hot Topics in Networks (2009).
10. Gürses, S. Can you engineer privacy? Commun. ACM 57, 8 (Aug. 2014), 20–23.
11. Haddadi, H., Hui, P., and Brown, I. MobiAd: Private and scalable mobile advertising. In Proc. ACM MobiArch.
12. Leontiadis, I., Efstratiou, C., Picone, M., and Mascolo, C. Don’t kill my ads! Balancing privacy in an ad-supported mobile application market. In Proc. ACM HotMobile (2012).
13. Luger, E., and Rodden, T. An informed view on consent for UbiComp. In Proc. ACM UbiComp (2013).
14. Madhavapeddy, A., Mortier, R., Rotsos, C., Scott, D., Singh, B., Gazagnaire, T., Smith, S., Hand, S., and Crowcroft, J. Unikernels: Library operating systems for the cloud. In Proc. ACM ASPLOS (Mar. 16–20 2013).
15. Mayer-Schönberger, V. Delete: The Virtue of Forgetting in the Digital Age. Princeton University Press, 2009.
16. McAuley, D., Mortier, R., and Goulding, J. The Dataware Manifesto. In Proc. IEEE International Conf. on Communication Systems and Networks (COMSNETS) (Jan. 2011). Invited paper.
17. Mortier, R., Greenhalgh, C., McAuley, D., Spence, A., Madhavapeddy, A., Crowcroft, J., and Hand, S. The personal container, or your life in bits. In Proc. Digital Futures (2010).
18. Mortier, R., Haddadi, H., Henderson, T., McAuley, D., and Crowcroft, J. Human-data interaction: The human face of the data-driven society. SSRN (Oct. 2014).
19. Mortier, R., Rodden, T., Tolmie, P., Lodge, T., Spencer, R., Crabtree, A., Sventek, J., and Koliousis, A. Homework: Putting interaction into the infrastructure. In Proc. ACM UIST (2012), 197–206.
20. Mun, M., Hao, S., Mishra, N., Shilton, K., Burke, J., Estrin, D., Hansen, M., and Govindan, R. Personal data vaults: A locus of control for personal data streams. In Proc. ACM CoNEXT (2010), 1–12.
21. Naehrig, M., Lauter, K., and Vaikuntanathan, V. Can homomorphic encryption be practical? In Proc. ACM Cloud Computing Security Workshop (2011), 113–124.
22. Ng, I. C. Engineering a Market for Personal Data: The Hub-of-all-Things (HAT), A Briefing Paper. WMG Service Systems Working Paper Series (2014).
23. Rotsos, C., Howard, H., Sheets, D., Mortier, R., Madhavapeddy, A., Chaudhry, A., and Crowcroft, J. Lost in the edge: Finding your way with Signposts. In Proc. USENIX FOCI (Aug. 2013).
24. Skatova, A., Johal, J., Houghton, R., Mortier, R., Bhandari, N., Lodge, T., Wagner, C., Goulding, J., Crowcroft, J., and Madhavapeddy, A. Perceived risks of personal data sharing. In Proc. Digital Economy: Open Digital (Nov. 2013).
25. Vallina-Rodriguez, N., Shah, J., Finamore, A., Grunenberger, Y., Papagiannaki, K., Haddadi, H., and Crowcroft, J. Commercial break: Characterizing mobile advertising. In Proc. ACM IMC (2012).
Deconstructivist Interaction Design:
Interrogating Expression and Form
Martin Murer
Center for HCI
University of Salzburg
[email protected]
Verena Fuchsberger
Center for HCI
University of Salzburg
[email protected]

Manfred Tscheligi
Center for HCI
University of Salzburg
[email protected]

In this paper, we propose deconstructivist interaction design in order to facilitate the differentiation of an expressional vocabulary in interaction design. Based on examples that illustrate how interaction design critically explores (i.e., deconstructs) its own expressional repertoire, we argue that there are commonalities with deconstructivist phases in related design disciplines to learn from. Therefore, we draw on the role and characteristics of deconstructivism in the history of architecture, graphic design, and fashion. Afterwards, we reflect on how interaction design is already a means of deconstruction (e.g., in critical design). Finally, we discuss the potential of deconstructivism for form-giving practices, resulting in a proposal to extend interaction design’s expressional vocabulary of giving form to computational material by substantiating a deconstructivist perspective.

Author Keywords
Deconstructivism; Interaction Design; Styles; Aesthetics

ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous

A motion-controlled fur monkey makes a spectator realize that she has been acting like a chimpanzee for the last couple of minutes [20]. Strangers sitting on a public bench unconsciously slip closer as the bench slowly changes its shape [13]. A whole room responds to the movements of a person by moving, collapsing, or expanding [22]. A lightbulb throws a shadow on a wall and all of a sudden the shadow grows wings and starts to fly [15]. These four encounters with interactive technology, all presented at recent premier HCI venues, share certain characteristics indicative of a current strand in interaction design that shifts attention from function to expression. At first glance, the salient commonality is that the four artefacts bear a surprising momentum; they are exceptional, rather a piece than a system. Less salient is that they all critically explore form-giving in interaction design by focusing their exploration on computation as a design material: they deconstruct our assumptions about how to interact with a particular element in our reality.

According to Vallgårda [23], giving form to computational materials in interaction design is a practice that comprises three elements: the physical form, the temporal form, and the interaction gestalt. As the computer is no longer at the center of attention, and as interaction design is no longer just a matter of interface design, Vallgårda proposed to establish a form-giving practice of interaction design. In order to develop such a practice, we need to explore and detail the corresponding expressional vocabulary. Interaction design, though, does not start to develop its form-giving practice and expressional repertoire from scratch. As Hallnäs and Redström [10] argue, human-computer interaction has to some extent adopted rudimentary aesthetics from other areas, such as graphic design. Those design disciplines are highly relevant in HCI, for instance, in screen-dominated interaction design. This is considered insufficient, as the aesthetics

[. . . ] of disciplines dominated by design-by-drawing tells us very little about the computational aspects of this new material we are working with. [10, p. 106]

We propose to look at historically evolved practices that enriched the aesthetic vocabulary of related design disciplines, such as architecture, graphic design, and fashion. In particular, we draw attention to deconstructive phases, as those share characteristics with the contemporary shift in interaction design from a pure focus on function to an increased attention to expressional possibilities (e.g., in form-driven IxD research [12]). We will depict the relation between deconstructivism and established design disciplines and discuss how interaction design is already in the service of deconstruction (e.g., in critical design). Finally, we will conclude with a proposal for substantiating a deconstructive perspective in interaction design that might enrich its expressional repertoire by interrogating the form of computational material.
At a 2013 event, the New York Museum of Modern Art (MoMA) celebrated the 25th anniversary of a milestone exhibition in architecture: Philip Johnson and Mark Wigley had curated “Deconstructivist Architecture”, showcasing the work of seven then-contemporary architects (see e.g., [11]). The curators did not attempt to define a style; rather, they aimed to present contemporary architecture of similar approaches. Still, it was that very exhibition that coined the term deconstructivism as a framing for a highly influential development in architecture and, in succession, many other disciplines. The visual appearance of deconstructivist styles can be characterized by controlled chaos, unpredictability and distortion. Underneath its skin, deconstruction is not about a style or a movement. Rather, its proponents understand their work as an opposition to the ordered rationality of postmodernism.
The background of our proposal is constituted, on one hand, by the origins of deconstructivist movements, such as philosophy, architecture and graphic design. On the other hand, there is also related work in interaction design and HCI, which to varying extents includes deconstructivist elements as part of a method or thinking. With HCI becoming more implicated in culture [4], recent practices and concepts can be identified that incorporate deconstruction as a means to raise awareness for societal changes and issues, such as critical theory, critical design, or feminist design.

Origins of Deconstructivist Movements
Based on Jacques Derrida’s philosophical concept of deconstruction, deconstructivism in architecture asks questions about modernism by “reexamining its own language, materials, and processes,” [16] as summarised by Wigley in his 1988 catalogue essay:

A deconstructive architect is [. . . ] not one who dismantles buildings, but one who locates the inherent dilemmas within buildings. The deconstructive architect puts the pure forms of the architectural tradition on the couch and identifies the symptoms of a repressed impurity. The impurity is drawn to the surface by a combination of gentle coaxing and violent torture: the form is interrogated. [16]

It was that MoMA exhibition in 1988 that got deconstructivism “catapulted into design press” [16], with a lasting impact not only on architecture. Other disciplines such as graphic design (e.g., [18]) or fashion (e.g., [14]) soon augmented existing streams with those formal concepts and mechanisms. Deconstructivism became a kind of cliché of a certain formal style, characterized by aggressive arrangements, sharp edges, fragmentation, etc. Deconstruction came to be phrased more generally as a means “[. . . ] to deform a rationally structured space so that the elements within that space are forced into new relationships” [18, p. 122]. In graphic design and fashion the notion of deconstruction has since changed from a style to an activity of critical form-giving, becoming a central element in any design practice [16, 14]. It is no longer en vogue but rather considered a zeitgeist (e.g., [6]). The contemporary relevance of deconstructing is, though, still present. As Derrida phrased it in a 1994 interview in The New York Times Magazine, there is some element in deconstruction “that belongs to the structure of history or events. It started before the academic phenomenon of deconstruction, and it will continue with other names.” [21]

Interaction design in the service of deconstruction
The study object of critical theory is, broadly speaking, any aspect of culture, exploring the constructedness of knowledge [1]. For HCI, Bardzell [1] suggests ways for criticism to contribute to its practice. Critical design [8] is an approach to provocation, which offers sources and strategies to inspire designs, such as techniques to radically rethink fundamental concepts. Those strategies aim at reconfiguring sociocultural norms in more aesthetic or social ways and may stimulate demand for such designs [3]. From this point of view, the deconstructive moment is de- and reconstructing knowledge.

Feminism can also be considered from a deconstructivist point of view. Following the political agenda of critiquing power (e.g., [5]), its aim is to deconstruct social structures. In HCI, for instance, feminist thinking supports the awareness and accountability of its social and cultural consequences [2]. Buchmüller argues that one common goal of feminism is to “[C]hange the situation/position of the researched by offering them critical ways of thinking, new ways of expression as well as new opportunities of action.” [5, p. 175]. This means that feminism seeks to initiate social change by changing social structures. Deconstruction in a feminist tradition is, thus, applying deconstructive procedures to dominant gender structures [25]. The deconstructive element is related to the artifact [1] which initiates questioning of a social setting.

Besides critical design and feminism, a further notion of deconstructing the social by means of technology can be found in critical making. It emphasizes critique and expression rather than technical sophistication and function [17]. Making is understood as an activity that serves as a vehicle for critically engaging with the world [9]. The constructive processes are considered the site for analysis; the shared acts of making are more important than the evocative artifacts. Thus, the prototypes are a means to an end, which achieve value through creating, discussing and reflecting on them [17].

All these deconstructive movements have passed their peak of attention, but they left their respective disciplines with an enriched formal vocabulary. Taking this deconstructivist perspective, it may be worthwhile to consider how it relates to the goal of establishing a richer expressional vocabulary for form-giving practices in interaction design. A deconstructivist lens can help us understand what kind of deconstruction we already perform in interaction design, how we invert existing approaches and thinking, and how this framing might support both practice and theory in order to understand and link their approaches to experiential knowledge creation.

Similar to architecture, graphic design, and fashion, interaction design is in the first place concerned with construction. However, notions of deconstructivism, such as provocation, critical theory and critical design, are pervading interaction design, being a mode of questioning the current state of the art. Both creating an enhanced understanding as well as imagining and developing future interactive systems are reasonable and desirable outcomes of such approaches.
With deconstructivist interaction design we seek to frame
critical practices that emphasize the expressional possibilities
interaction design provides. This is not mutually exclusive with other critical practices in human-computer interaction
and design, i.e., a particular design or artefact may well satisfy multiple attributions. While, e.g., critical design aims to
“introduce both designers and users to new ways of looking at
the world and the role that designed objects can play for them
in it” [19, p. 51], we want to draw attention to the form-giving
and material aspects. As Hallnäs and Redström [10] argue,
in HCI-related research and design practice aesthetical decisions tend to be hidden by concerns for practical functionality,
usability, user requirements, etc. In line with Dunne [7], they argue that those decisions are hidden, “not because they are
not made, but because the expressions of the things we design
become mere consequences of other concerns.” [10, p. 105]
They propose a program for experimental design of computational things that turns the classical leitmotif “form follows
function” upside down. While not explicated by Hallnäs and
Redström, we argue that their program bears a deconstructive
momentum: As a critical activity the classical design agenda
is questioned by proposing an alternative leitmotif, i.e., function resides in the expression of things.
Following Vallgårda’s trinity of forms (physical form, temporal form, interaction gestalt) in interaction design practice
[23], the core, the interplay between the three, is what Leigh
et al. [15] challenged in their design. Deconstructivist interaction design, however, may also deconstruct single elements. In order to question the physical form, the related
design disciplines provide a whole spectrum of practices and
vocabulary to draw on, as they already intensively explored
the expressional repertoire of physical forms, for instance, in
architecture, pottery, or statuary. Challenging, articulating,
and expressing the temporal form or the interaction gestalt,
however, is an exercise interaction design is necessarily engaged with in interrogating the interplay of the three formgiving elements. Looking through a deconstructivist lens at
interaction design practices and the related artefacts, whether
they are meant to be deconstructive or not (or, for instance,
following a critical, feminist or any other approach), may support this exercise and contribute to the articulation of formal
expressional vocabulary.
As “form follows function” used to be the prevailing (structuralist) leitmotif of interface design, “function resides in the
expression of things” may become a (deconstructive) leitmotif of interaction design. It would guide explorations that
question the expressional qualities of materials and artefacts.
It implies engagement with the expressional repertoire, its articulation and vocabulary. Furthermore, deconstructive design can help to extend the existing repertoire by critically
challenging and examining what is already known.
Starting from architectural deconstructivism, we argue that
framing interaction design examples and strands from a deconstructivist perspective supports reflection on the act of
taking apart form, function, meaning, concepts, or artifacts
to enhance our knowledge and expressional vocabulary for a
form-giving practice in interaction design.
Exemplary Deconstructivist Interaction Design
In the following, we describe one of the artefacts highlighted at the very beginning of this paper in order to elaborate on
how it bears a deconstructive stance. Remnance of Form by
Leigh and colleagues [15] is an interactive installation that
aims to explore the “dynamic tension between an object and
its shadow.” [15, p. 411] Composed of a light, projection
and tracking technology, a shadow can detach itself from its
former role. The relationship between a light bulb, an object
and its shadow is constantly modulated and distorted. A person interacting with Remnance of Form can move the light
bulb and the object around in physical (and temporal) space.
Depending on one’s movements in respect to the installation,
the shadow, as a digital alteration of reality, is programmatically shaped into different forms and behaviours (i.e., changes
shape, grows wings and flies away, shows fear).
Remnance of Form deconstructs our perception of and assumptions about interacting with reality. It is critical in the sense that it deforms the elements of what used to be a rationally structured space (i.e., our previous assumptions about the interaction between a light bulb, an object, and its shadow) and forces them into new relationships. Arguably, though, its critical momentum does not seek to “disrupt or transgress social and cultural norms” [3] as framed by critical design. Rather, it is the interdependence of the form elements of interaction design [23] that is interrogated and explored at once. This adds a particular expression to our interaction design vocabulary, one that can easily be interpreted as magical through its dissolving of the formerly stable associations between lights, objects, and their shadows. The interaction gestalt of Remnance of Form as it unfolds through interaction, its disruptive and disturbing nature, was brought on by a balanced in-

In that realm, we do not consider deconstructive interaction design to be tied to a political agenda, as in critical or feminist theory; rather, we aim to emphasize the act of questioning (material) form and functions. We do not oppose deconstruction to construction; instead, we see deconstruction as a means to gain an additional kind of knowledge, one that allows deriving multiple functions out of form, both conceptually and practically. The framing of deconstructivist interaction design is no opposition to the epistemological value of prevailing critical practices in human-computer interaction, nor to constructive design research. Rather, we aim to draw attention to the potential of critical form-giving as a means to enrich and further detail the expressional vocabulary of interaction design practice. Drawing on a notion Johnson used to describe the then contemporary architecture, deconstructivist interaction design can be described as a “contemporary artistic phenomenon that derives its forms from constructivism and yet deviates from it.” [11]

The reflection we presented is meant to be a starting point for a discussion on deconstructive interaction design by considering current design practices through a deconstructivist lens. In order to strengthen this framing, further material and design studies and examinations from this perspective will be needed to support the sophistication of (deconstructive) form-giving practices. While there are obvious parallels to deconstructive phases in other disciplines, deconstructive interaction design is subject to multiple deviations that are constituted in the constant advancement of computational technology. This prompts interaction design to continuously challenge and deconstruct established computational forms (e.g., as the predominant aesthetics of GUIs brought about a deconstruction of form, questioning whether “a machine needed to be visible at all” [24, p. 67]).
Acknowledgments
The financial support by the Austrian Federal Ministry of Science, Research and Economy and the National Foundation for Research, Technology and Development is gratefully acknowledged (Christian Doppler Laboratory for “Contextual Interfaces”).

References
1. Jeffrey Bardzell. 2009. Interaction criticism and aesthetics. In Proc. CHI ’09. ACM, New York, NY, USA, 10.
2. Shaowen Bardzell and Jeffrey Bardzell. 2011. Towards a feminist HCI methodology: social science, feminism, and HCI. In Proc. CHI 2011. ACM, 10.
3. Shaowen Bardzell, Jeffrey Bardzell, Jodi Forlizzi, John Zimmerman, and John Antanitis. 2012. Critical design and critical theory: the challenge of designing for provocation. In Proc. DIS ’12. ACM, 10.
4. Mark Blythe, Jeffrey Bardzell, Shaowen Bardzell, and Alan Blackwell. 2008. Critical issues in interaction design. In Proc. BCS-HCI ’08. BCS, UK, 2.
5. S. Buchmueller. 2012. How can Feminism Contribute to Design? Reflexions about a feminist framework for design research and practice. In Design Research Society (DRS) International Conference.
6. Chuck Byrne and Martha Witte. 2001. A brave new world: understanding deconstruction. Graphic Design History (2001), 245.
7. Anthony Dunne. 1999. Hertzian Tales. MIT Press.
8. Anthony Dunne and Fiona Raby. 2001. Design Noir: The Secret Life of Electronic Objects. Springer.
9. Shannon Grimme, Jeffrey Bardzell, and Shaowen Bardzell. 2014. “We’ve Conquered Dark”: Shedding Light on Empowerment in Critical Making. In Proc. NordiCHI’14. ACM, New York, NY, USA, 10.
10. Lars Hallnäs and Johan Redström. 2002. Abstract information appliances: methodological exercises in conceptual design of computational things. In Proc. DIS’02. ACM, New York, NY, USA, 12.
11. Philip Johnson and Mark Wigley. 1988. Deconstructivist Architecture: The Museum of Modern Art, New York. Little, Brown.
12. Heekyoung Jung and Erik Stolterman. 2012. Digital Form and Materiality: Propositions for a New Approach to Interaction Design Research. In Proc. NordiCHI’12. ACM, 645–654.
13. Sofie Kinch, Erik Grönvall, Marianne Graves Petersen, and Majken Kirkegaard Rasmussen. 2014. Encounters on a Shape-changing Bench: Exploring Atmospheres and Social Behaviour in Situ. In Proc. TEI’14. ACM, New York, NY, USA, 233–240.
14. Gizem Kiziltunali. 2012. A deconstructive system: fashion. Conference presentation, 4th Global Conference, Inter-Disciplinary.Net.
15. Sang-won Leigh, Asta Roseway, and Ann Paradiso. 2015. Remnance of Form: Altered Reflection of Physical Reality. In Proc. TEI’15. ACM, New York, NY, USA, 2.
16. Ellen Lupton and J. Abbott Miller. 1994. Deconstruction and graphic design: history meets theory. Visible Language 28 (1994), 346–346.
17. Matt Ratto. 2011. Critical Making: Conceptual and Material Studies in Technology and Social Life. The Information Society 27, 4 (2011), 252–260.
18. Timothy Samara. 2005. Making and Breaking the Grid: A Graphic Design Layout Workshop. Rockport Publishers.
19. Phoebe Sengers, Kirsten Boehner, Shay David, and Joseph ’Jofish’ Kaye. 2005. Reflective Design. In Proc. CC ’05. ACM, New York, NY, USA, 10.
20. Jan M. Sieber and Ralph Kistler. 2013. Monkey Business. In Proc. TEI’14. ACM, New York, NY, USA, 2.
21. Mitchell Stephens. 1994. Jacques Derrida and Deconstruction. The New York Times Magazine, January 23, 1994.
22. Anna Vallgårda. 2014a. The Dress Room: Responsive Spaces and Embodied Interaction. In Proc. NordiCHI’14. ACM, New York, NY, USA, 10.
23. Anna Vallgårda. 2014b. Giving Form to Computational Things: Developing a Practice of Interaction Design. Personal Ubiquitous Comput. 18, 3 (March 2014), 577–592.
24. Mikael Wiberg and Erica Robles. 2010. Computational compositions: Aesthetics, materials, and interaction design. International Journal of Design 4, 2 (2010).
25. Peter V. Zima. 2002. Deconstruction and Critical Theory. Continuum International Publishing Group.
In Search of Fairness: Critical Design Alternatives
for Sustainability
Somya Joshi
Stockholm University
Borgarfjordsgatan 12
[email protected]
Teresa Cerratto Pargman
Stockholm University
Borgarfjordsgatan 12
[email protected]
Abstract
Does fairness as an ideal fit within the broader quest for sustainability? In this paper we consider alternative ways of framing the wicked problem of sustainability, one that moves away from the established preference within HCI for technological quick-fixes. We adopt a critical lens to challenge the belief that by merely changing practices at an individual level one can do away with unsustainability. This thinking, we argue, is flawed for many reasons, but mostly because of the wickedness of the sustainability problem. By analyzing the case of Fairphone, we illustrate how it is possible to imagine and design change at a broader level of community engagement when it comes to concerns of fairness and sustainability. We contribute to a deeper understanding of how social value-laden enterprises along with open technological design can shape sustainable relationships between our environment and us.

Author Keywords
Sustainability; Open Technologies; Critical Alternatives; Transactions; Social-Ecology.

ACM Classification Keywords
H.5.m. Sustainable interaction design, Miscellaneous.

Copyright© 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark

Introduction
How do we build systems that are in line with current understandings of Earth’s finite natural resources? How do we design them to catalyze and sustain social change? These and other questions have been introduced and developed within the Sustainable HCI (SHCI) literature [3,17,8,18,14,10,20,24]. This pioneering body of research has drawn on theoretical and applied developments that can be related to third-wave HCI [5]. On the one hand, these works have touched upon key issues related to our everyday practices and culture; on the other hand, they have overflowed the boundaries of third-wave HCI, as they are not only focused on the cultural, emotional, pragmatic or historical levels of human experience [5], but are also deeply concerned with environmental [1], socio-political [13], social sustainability [4] and ecological [20] concerns. The latest developments in SHCI clearly indicate an interest in the development and design (or undesign) of technologies relying less on instrumental purposes of efficiency connected with corporate profit [17] and more on the volitional and value-laden aspects underlying people’s use of technologies [6,2]. Our paper follows from the above, focusing primarily on the social value-laden enterprise and its processes of activism and co-creation that inform critical alternatives within the SHCI discourse. We do so via the illustrative lens of a case study (Fairphone [10]) that enables us to conceptualise technological design and consumption from a holistic perspective of social ecology. The paper contributes with four critical design alternatives towards sustainability.

Our interest in the social ecology perspective [16,21] is due to its holistic understanding of the interplay between natural-ecological and socio-semiotic dimensions. In particular, we turn our attention to how this interplay of dimensions can be reflected in the design process. This relationship, we believe, is key to further elaborating on issues pertaining to the design and building of computer systems [7]. In particular, this specific relationship between material (natural) and human (constructed) facets of human-environment systems is referred to as transactions [21]. By transactional relationships, Stokols et al. [21] refer to continuous, bidirectional and mutually influencing relationships occurring between both natural-ecological (i.e. material) and social-semiotic dimensions (i.e. meanings, values, moral judgments). Transactions entail exchanges among diverse actors, assets and resources that play a major role in the sustainability and resilience of our environment. A particularity of these transactions is that they are not fungible, as changes in one dimension are related to changes in another dimension. As such, the concept of transaction has implications for SHCI in terms of how we think about the delineation and understanding of the design space. We see it as an arena wherein diverse compromises are made as a result of the multiple and multifarious dilemmas [15] designers are confronted with when dealing with sustainability issues.

Based on the theoretical discussion above, we set out to examine what such an alternative might look like in practice. Fairphone started off as an awareness-raising campaign in 2010, mainly focused on conflict minerals within the context of the smartphone industry. In the absence of a real alternative to point to, Fairphone emerged as a social enterprise in 2013, as the outcome of a crowdfunded campaign designed to produce a truly ‘fair’ phone. While the notion of fairness here was co-constructed by the founding members and the community of users, the four main action points that the movement was built around were: mining, manufacturing, design and life-cycle. We chose this case as it provides us with a window into two worlds simultaneously: that of a social enterprise setting out to engineer and sustain a movement (based on changing relationships and practices within the domain of technology design and consumption), and that of a technical artifact designed to embody the life-cycle approach, built on the “fairware” principles (open hardware and software, conflict free and fair in terms of workers’ rights, circular economy – a “cradle to cradle” approach).

Empirical Design
This paper draws on data collected at two levels. Primary: semi-structured interviews conducted iteratively with impact development and product strategy staff at Fairphone. Secondary: data collected via the website, blogs and online documentation, as well as the critical voices emerging from the wider community of users and supporters. The latter consists of early adopters, experts, users, designers and partner organisations. We also draw on social media data from Twitter (#wearefairphone) and Facebook. The analysis of the data was conducted using a procedure known as explication de texte, or close reading, an analytical method that originated in the humanities [19]. We took texts (i.e. interview transcripts, expert reports, blog entries, social media excerpts and forum debates) as our unit of analysis, from which we arrived at conceptual threads. These constitute the findings of our study, which we unpack in the following sections. From the data analyzed, the following four themes emerged.

Performing Fairness
Defining the landscape of fairness is a tricky task, given that there is neither a single accepted definition, nor an absolute state of fairness per se. As a concept it is construed both in terms of procedural and distributive fairness [11]. It is the latter, equity-based logic for action, which concerns us the most within this inquiry, with respect to responsibilities and resource distribution. The global resource challenge shifts from being a matter of “living within limits” to one of “living in balance” between social and environmental boundaries [12]. Fairness is situated within an ecosystem of ideals aspired to by society, such as sustainability, freedom, justice, peace, truth. None of these concepts exist in an absolute state; rather, the incremental enactment of re-balancing the inherent inequity is what we focus on. To address the above, we consider alternative ways of framing the wicked problem of sustainability, one that moves away from the established belief within HCI in technological quick-fixes. We adopt a critical lens to challenge the belief that by only changing behavior at an individual level one can do away with unsustainability. This thinking, we argue, is flawed for many reasons, but mostly because of the wickedness of the sustainability problem. By analyzing the case of Fairphone, we illustrate how it is possible to imagine and design change at a broader level of community engagement when it comes to concerns of fairness and sustainability.

From a logic of Volume to a logic of Fairness
Design alternatives in some cases are based on the principle that change arises when technologies provide opportunities for individuals to live differently [22]. Sustainability in this context is a byproduct of the artifact itself. Other design choices frame change-making as arising from technology providing opportunities for community debate [9], in which the direction for change is set by communities themselves. Within the context of Fairphone, we observed both these drivers for change. Framing itself as a social enterprise, Fairphone has at the outset opted for a mandate that is more rooted in the ethics of sustainability than in any desire to be an industry leader in the smartphone domain. This has translated into a set of compromises or trade-offs. As a young start-up, it has found itself in the position where it is unable to diversify its product range or maximize much-needed profits in order to grow; instead, it keeps a steady focus on the goal at hand, which is to promote fairness and quality control of the existing artifact. In the words of the CEO of the organization, “Our mission is to create momentum to design this future. We started by making a phone to uncover production systems, solve problems and use transparency to invite debate about what’s truly fair. We believe that these actions will motivate the entire industry to act more responsibly.” Two years down the line, “This is where we still are. Our business model hasn’t changed, nor has our product focus – we will still concentrate on phones, and won’t branch out into other consumer electronics like laptops or tablets.” While this model has so far worked well for Fairphone, we are skeptical about the transferability and sustainability of this approach for other young start-ups operating within a highly competitive market (without the same safety-nets). Another reservation we have pertains to the measurable impacts emerging from this choice to prioritize fairness over profits or growth. In particular, we wonder how this translates into tangible changes within the established industry of consumer electronics, where it remains to be seen if the model of Fairphone serves as an inspiration for change or a niche alternative which leaves no dent in their operations.

From Locked-in Design to Collaborative Co-creation
With regard to the actual technical artifact at hand, Fairphone has positioned itself along a roadmap towards ‘Fairware’, which is a concept expressed in both long- and short-term ideas. Thinking short term, Fairphone aims at being open source to allow lead users to optimize it, as well as modular, so others can repair or replace parts to use the phone longer. In the long term, “Cradle2Cradle” design would enable reuse scenarios, where old modules could be used to upgrade other devices, allowing them to circulate across product cycles. This would considerably cut down on waste and forced obsolescence in the long term. This links to our understanding of critical alternatives, in that it provides a mechanism to challenge the status quo with regard to consumption choices and locked-in design (i.e. proprietary systems that engender waste and unsustainability in their wake). During our interviews with the team, we were provided with the following view on both the evolution of the artifact and the movement: “Today Fairphone is not for everyone. The early adopters or aware consumers are the ones who are part of the community, because either they are interested in conflict minerals, or workers rights, or environmental impact. However by joining the movement, they gain a context to this awareness and see the bigger-linked picture of the life cycle approach we adopt. We are essentially providing them with the tools to be curious with.” The co-creation then takes place, we argue, not just at the level of the product design, but also in terms of education and capacity building on the part of the wider Fairphone community. The trade-offs of opening up both the technical design of the phone and its future evolution to the wider user community are manifold. Sustainability in hardware choices translated into decisions such as the modularity of the phone, its openness to friendly hacks and the conflict-free nature of the materials used; all of these make costly demands on resources that would otherwise be spent on user experience (e.g. larger screens, lighter phones, cheaper and faster processors, better cameras, etc.). By adopting a life-cycle view, and interlinking design as well as consumption decisions made closer to home with impacts in faraway and often disconnected places, the very discourse of innovation and sustainability is being reexamined.

Just as DNA comes in pairs, we were informed in our discussion with the Head of Impact at Fairphone that the DNA of this social enterprise consists of two strands: on the one hand, it aims to change the relationship between users and how they consume technology; on the other hand, it aims to use technology as an innovative tool to alleviate societal problems that emerge from its supply chain. From our critical standpoint, we see two key challenges standing in the way of realizing the goal of changing relationships between users and their technology fixes: scalability, and the slow pace of change within the sustainability context. With regard to the former, a small outfit such as Fairphone (i.e. 30 full-time staff in Amsterdam) simply (to put it in their own words) “can not afford to move the entire supply chain of production from China to Europe.” This alludes to a set of compromises that shift their roadmap and milestones towards impacts that are more graspable, small-scale and localized in the early stages of development. With regard to the latter challenge, in an industry and market such as that of smartphones and, more broadly, consumer electronics, the rate of change in product development feeds into an expectation of heightened novelty-seeking. One member of the team, working on product design, commented that it was not just with the consumer that the relationship was transforming, but also with industrial traders (such as, for example, telecom providers and plastic suppliers), who were coming to view Fairphone as an experiment they increasingly sympathized with. It provided them with proof that an alternative could exist. Within this environment, we ask, what does it entail to engage in a slow, deliberate march towards the attainment of sustainability goals, most of which are not immediately apparent (i.e. changing worker relationships, acceptance of standards and regulation with regard to conflict-free mining, recycle-reuse impacts)?

Opening up the ‘black box’ of design
One startling difference we found between the model adopted by Fairphone and mainstream consumer electronics manufacturers concerns the openness in design and the holistic life-cycle view. Be it manifest in the urban mining workshops organized routinely or the e-waste reduction efforts in Ghana, the attempt here is one of creating conditions for debates around sustainability. Urban mining emerges from this context as a way to change the existing imbalance, by the extraction of minerals from existing products. The aim is to dismantle gadgets that have reached their end-of-life to uncover what’s within. The idea is that once users have unscrewed the back cover of their technological artifact (in this case a smartphone) and identified the components, the urban mining workshop would aim to unravel some of the phone’s hidden stories. From pollution and extremely dangerous working conditions to child labor, one can learn that a number of mining-related practices desperately require improvement. The approach adopted by mainstream players within this context is to hide this inconvenient body of knowledge behind sealed, glued and proprietary locked-in devices, where the design serves as an impenetrable casing which keeps all the unpalatable, guilt-inspiring footprints of our consumption behaviours neatly out of sight (and hence out of mind). The critical alternative being offered by Fairphone here is one of opening up the design processes.

Far from being a flawless and self-contained movement, Fairphone is in its early stages of evolution, experiencing the teething pains expected of any initiative aiming to challenge the status quo. We see it as a step towards opening the discourse of critical alternatives within sustainable HCI. In this paper we have presented critical alternatives, both at the theoretical level (with the socio-ecological approach as a lens) and at an applied level (via the illustrative lens of the Fairphone case). In doing so we have contributed to a more holistic understanding of how social value-laden enterprises along with open technological design can shape sustainable relationships between our environment and us.

References
1. Aoki, P.M., Honicky, R.J., Mainwaring, A., Myers, C., Paulos, E., Subramanian, S. and Woodruff, A. A Vehicle for Research: Using Street Sweepers to Explore the Landscape of Environmental Community Action. In Proc. CHI’09, ACM (2009), 375-384.
2. Bellotti, V., Carroll, J.M. and Kyungsik, H. Random acts of kindness: The intelligent and context-aware future of reciprocal altruism and community collaboration. In Proc. of CTS, IEEE (2013), 1-12.
3. Blevis, E., Makice, K., Odom, W., Roedl, D., Beck, C., Blevis, S., and Ashok, A. Luxury and new luxury, quality and equality. In Proc. DPPI ’07, ACM (2007).
4. Busse, D., Blevis, E., Beckwith, R., Bardzell, S., Sengers, P., Tomlinson, B. and Nathan, B. Social Sustainability: An HCI Agenda. In Proc. CHI EA’12, ACM (2010), 1151-1154.
5. Bødker, S. When second wave HCI meets third wave challenges. In Proc. NordiCHI ’06, ACM Press (2006).
6. Cerratto-Pargman, T. A European strand of Sustainable HCI? Workshop paper, NordiCHI ’14 (2014).
7. Cerratto Pargman, T. and Joshi, S. (2015). Understanding limits from a social ecology perspective. LIMITS 2015, Irvine, CA, USA.
8. DiSalvo, C., Sengers, P., and Brynjarsdóttir, H. Mapping the landscape of sustainable HCI. In Proc. CHI’10, ACM (2010), 1975-1984.
9. DiSalvo, C. Adversarial Design. MIT Press, 2012.
10. Fairphone.
11. Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Random House.
12. Hajer, M., Nilsson, M., Raworth, K., Bakker, P., Berkhout, F., de Boer, Y., Rockström, J., Ludwig, K. and Kok, M. Beyond Cockpit-ism: Four Insights to Enhance the Transformative Potential of the Sustainable Development Goals. Sustainability 7 (2015).
13. Dourish, P. HCI and Environmental Sustainability: The Politics of Design and the Design of Politics. In Proc. DIS’10, ACM Press (2010), 1-10.
14. Håkansson, M. and Sengers, P. Beyond Being Green: Simply Living Families and ICT. In Proc. CHI ’13, ACM (2013), 2725-2734.
15. Håkansson, M. and Sengers, P. (2014). No easy compromise: Sustainability and the dilemmas and dynamics of change. In Proc. DIS’14, ACM.
16. Lejano, R. and Stokols, D. Social Ecology, Sustainability and Economics. Ecological Economics 89 (2013), 1-6.
17. Nardi, B. The Role of Human Computation in Sustainability, or, Social Progress is Made of Fossil Fuels. In Handbook of Human Computation. Springer, New York, 2013.
18. Pargman, D. and Raghavan, B. Rethinking sustainability in computing: from buzzword to non-negotiable limits. In Proc. NordiCHI ’14, ACM (2014).
19. Richards, I.A. (1930). Practical Criticism: A Study of Literary Judgment. Kegan Paul & Co. Ltd., London.
20. Silberman, S. and Tomlinson, B. Toward an Ecological Sensibility. In Proc. CHI ’10 EA, ACM (2010), 3469-3474.
21. Stokols, D., Lejano, R. and Hipp, J. Enhancing the Resilience of Human–Environment Systems: a Social Ecological Perspective. Ecology and Society 18, 1 (2013), 1-12.
22. Woodruff, A., Hasbrouck, J., and Augustin, S. A bright green perspective on sustainable choices. In Proc. CHI ’08, ACM (2008), 313-322.
Why Play?
Examining the Roles of Play in ICTD
Pedro Ferreira
Mobile Life @ KTH
Stockholm, Sweden
[email protected]
munication Technologies for Development (ICTD or
ICT4D). The focus on the urgency to satisfy a pre-defined
set of socio-economic needs relegates play as something
poor people cannot afford to engage with. Generally speaking, there has not been a lack of excitement around the advent of the information age. ICTs were, early on, framed
around their potential to improve and optimize business
processes and efficiency [9, 27, 48]. In the late 80’s, Nobel
Laureate Robert Solow challenged this enthusiasm with a
famous quip: “You can see the computer age everywhere
but in the productivity statistics” [78], echoing the concern
that such explanations were insufficient to account for the
overwhelming success and excitement around ICTs. While
history has certainly justified the role of ICTs in promoting
the advancement of a variety of socio-economic goals, this
remains a narrow and insufficient way to account for and
understand the mass personal adoption of these technologies. Within ICTD, this type of focus, is especially predominant and fails to account for some of the main aspects
of technology use: i.e. entertainment, leisure or games.
The role of technology in socio-economic development is at
the heart of ICTD (ICTs for development). Yet, as with
much Human Centered technology research, playful interactions with technology are predominantly framed around
their instrumental roles, such as education, rather than their
intrinsic value. This obscures playful activities and undermines play as a basic freedom. Within ICTD an apparent
conflict is reinforced, opposing socio-economic goals with
play, often dismissed as trivial or unaffordable. Recently a
slow emergence of studies around play has led us to propose a framing of it as a capability, according to Amartya
Sen, recognizing and examining its instrumental, constructive, and constitutive roles. We discuss how play unleashes
a more honest and fair approach within ICTD, but most
importantly, we argue how it is essentially a basic human
need, not antithetical to others. We propose ways for the
recognition and legitimization of the play activity in ICTD.
Author Keywords
Play; ICTD; ICT4D; capabilities; freedom; games; entertainment
Today, digital interactions, particularly in the form of
smartphones, pervade Western and developing worlds alike.
Mobile subscriptions have nearly matched the earth’s population, and 3 million people now have access to internet
[80]. And while leisure and ludic interactions with technology are prominent, they lack the acceptance within academic work around technology use. It is now roughly 10
years since Bødker described third-wave HCI (Human
Computer Interaction) in which understandings of technology must move beyond the office and into everyday life [8].
It is also close to 10 years since Gaver first published his
work on ludic engagement [28], making play a legitimate
goal in and of itself, pointing to the richness that play
around technology brings into people’s lives. ICTD, in its
more consumer focused facet, understanding and harnessing the potential of personal ICTs such as mobile phones, in
what Heeks named ICT4D 2.0 [33], is also bordering on 10
years. This is the right time to review and understand how
play has been treated within the community, and point to
future directions for a more honest understanding of ICTD
work, as well as the importance of recognizing individual
and local agency through play. We need an understanding
of digital technologies where play, gaming and entertainment are recognized as legitimate activities, resisting the temptation of marginalizing these forms of interaction, or of legitimizing them only insofar as they permit the achievement of other, external, goals. This is key for understanding the importance and desirability of digital technologies in people’s lives.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

Whenever we talk about ICT use, it is hard not to think about its gaming and entertainment potential. It is certainly self-evident that these aspects have been a major driver behind the success and mass adoption of ICTs worldwide (see for instance Sey and Ortoleva for a short review [75]). While apparently evident, we fail to see the acknowledgment of those activities as ends in themselves, intrinsically valuable and even as basic freedoms we enjoy as humans. Recognizing playfulness as it unfolds in everyday life becomes all the more challenging in Information and Communication Technologies for Development (ICTD).
We begin with a broad understanding of play as an academic concern, so as to derive a common, everyday understanding of the term to be used in analyzing its role in ICTD work. We then build on Nussbaum’s notion of play as a basic capability [59], framing it under Sen’s notions of instrumental, constructive and constitutive roles [71] and drawing implications for understanding previous work in ICTD. Our aim is to acknowledge play as an activity people have reason to value in itself, not contradictory with other ICTD goals. We discuss ways to reframe our thinking about play in ICTD, away from more paternalistic discourses, to repurpose the play concept as a valuable resource.

The origins and importance of play
Dutch cultural historian Johan Huizinga, in his seminal work Homo Ludens [40], provides us with the first academic text focused exclusively on play. For Huizinga, play is not an element within culture, but rather a foundational aspect that precedes culture itself. In his words:

"Summing up the formal characteristic of play, we might call it a free activity standing quite consciously outside 'ordinary' life as being 'not serious' but at the same time absorbing the player intensely and utterly. It is an activity connected with no material interest, and no profit can be gained by it. It proceeds within its own proper boundaries of time and space according to fixed rules and in an orderly manner." [40]

Huizinga lays down the foundations for what most have picked up on to discuss play, namely: (1) that it is free, bounded by no material interest other than the play itself; (2) that it is explicitly non-serious, standing outside ordinary life; (3) that it absorbs the player; and (4) that it has boundaries in time and space, as well as rules. For Huizinga, and for most authors who have discussed the concept, as we will see, these are important concepts that demarcate play:

“Inside the play-ground an absolute and peculiar order reigns. Here we come across another, very positive feature of play: it creates order, is order. Into an imperfect world and into the confusion of life it brings a temporary, a limited perfection.” [40] (p.10)

For Huizinga, play is one of the most important and foundational activities humans (and animals) engage in. When Huizinga writes “not serious” it is to place play in contrast with the many requirements of everyday life. Play is of primary importance, particularly through its “limited perfection” and “order”, which stand in stark contrast with other everyday notions of play that emphasize its triviality. More recent scholars, like Brian Sutton-Smith, have emphasized the importance of play as an indispensable mechanism for survival [82]. And in their in-depth analysis of the different types of play existing in humans and animals alike, Brown & Vaughan focus on the importance of play for the normal development of the brain, and on play as a transformative force, capable of improving people’s lives dramatically and vital for survival: “the opposite of play is not work, it is depression” [11].

Play is an elusive concept; it is discussed from many different perspectives and, as pointed out by sociologist Thomas Henricks, it suffers from the cantoning of disciplines around the concept [35]. Once most prominent within the domain of psychology and the highly influential child development studies by Jean Piaget [61], it has captivated interest from a variety of disciplines, such as anthropology [30, 49] or sociology [35]. Henricks argues for revisiting the classical authors, less constrained by the trenches of academia, in order to get a more unified understanding of play. We proceed with a brief overview of play, to help us build an understanding useful for thinking of play within ICTD.

One of the concepts most closely tied to play is that of games. Huizinga in fact refers to play mainly (but not only) as an action one does with games. If play is a challenging concept to unpack, game is certainly not easier. Roger Caillois categorizes play within four different types [13], building on the Greek terms: (1) Agon, referring to games of a competitive nature; (2) Alea, pertaining to games of chance, such as casino games; (3) Mimesis, referring to role playing; and (4) Ilinx, encompassing activities that provide adrenaline rushes, from extreme sports to riding rollercoasters or even taking hallucinogenic substances. Games are one of the most extensively studied forms of play, both in academia in general [13, 40, 81] and around technology more specifically [42, 52, 69, 83]. We discuss games throughout in the common, everyday understanding of the term, encompassing all these different forms. But it is worth emphasizing that not all play comes in the form of games.

Free play
For artist Allan Kaprow, the distinction between playing and gaming is actually an important one:

“This critical difference between gaming and playing cannot be ignored. Both involve free fantasy and apparent spontaneity, both may have clear structures, both may (but needn’t) require special skills that enhance the playing. Play, however, offers satisfaction, not in some stated practical outcome, some immediate accomplishment, but rather in continuous participation as its own end. Taking sides, victory, and defeat, all irrelevant in play, are the chief requisites of game. In play one is carefree; in a game one is anxious about winning” [44]

For Kaprow, even goals within the game itself, such as winning (what Suits calls prelusory goals [81]), stand in the way of play, even though such goals were, for Huizinga, already separated from everyday life, defined only through the willingness of players to accept them for the sake of the game playing itself. Kaprow is interested in liberating the player further, and is only willing to consider play in the form of free play: an exploration free from worries and goals, whether internal or external to itself. It is not just an escape from ordinary life, but an escape from other forms of stress or anxiety.

Most work regarding the importance of free play comes from psychology, and more specifically child psychology. This interest originates most prominently in Jean Piaget’s early work on children’s stages of development and the role of play as the motor through which children explore and derive their understanding of the world, as well as an important tool in education [61]. While Piaget’s focus was centered on child play, and not adult play for instance, his concept of play remains close to Kaprow’s ideas of liberation through freer exploration and engagement with the world, and of play as a fundamental mechanism for development and learning [31].

Absorption, flow, autotelic and personal engagements
An important aspect for understanding the play experience, and the desire to partake in it, is the often-mentioned absorption of the player, going all the way back to Huizinga’s characterization [40]. This aspect has again been most extensively studied within psychology, namely in reversal theory. Reversal theory tries to understand motivation, drawing a distinction between telic states, in which activities are pursued as means to an end, and paratelic states, in which they are engaged in for the enjoyment of the process itself [2]. Autotelic engagements denote moments in which people are absorbed in activities, escaping worldly, everyday realities. This has been discussed by Csikszentmihalyi through his influential work on flow [17], denoting that state in which one is absorbed in an intrinsically rewarding activity, pursuing it intensely for its own sake. While Csikszentmihalyi describes it as a personal feature, it is often taken, and easily translatable, as a feature of the activities themselves:

“An autotelic person needs few material possessions and little entertainment, comfort, power, or fame because so much of what he or she does is already rewarding. Because such persons experience flow in work, in family life, when interacting with people, when eating, even when alone with nothing to do, they are less dependent on the external rewards that keep others motivated to go on with a life composed of routines. They are more autonomous and independent because they cannot be as easily manipulated with threats or rewards from the outside. At the same time, they are more involved with everything around them because they are fully immersed in the current of life.” [17]

How and why we use play
While there is value in these categorizations and discussions, we will use play as encompassing most of the categories above. What is important here is an understanding of activities which share some, or all, of the following aspects:

- They are engaged in freely, meaning they are something people value and engage in willingly, outside of external pressures to do so.
- They stand outside of ordinary life, in that they provide escape from everyday routines and hardships. This is related to them being bounded in space and time, implying a demarcation from other activities, even when these are not always evident.
- They are pursued for their own sake, meaning that they possess an autotelic or non-instrumental value, rather than serving an external purpose. This is often coupled with a state of flow, or intense absorption.

These do not represent a strict set of necessary conditions for classifying something as play, but rather guidelines to understand activities regularly placed under play, such as games, entertainment, fun or leisure. We will use these rather loosely, and at times include activities which may seem contrary to some of these principles, but which should still reasonably be understood as essentially playful. Other ways of classifying the same phenomena could be under unserious or non-instrumental, but using either of these, or other terminologies, ultimately undermines the focus of this work. For instance, talking about these activities as non-instrumental implies a degree of uselessness, contrary to our main argument that they are the basis for leading a good life. Equally, to call them unserious would be to deny the seriousness of play itself, and its constitutive role as an essential freedom, as philosopher Kurt Riezler expressed:

“Man’s playing is his greatest victory over his dependence and finiteness [...]. He can also play, i.e., detach himself from things and their demands and replace the world of given conditions with a playworld of his own mood, set himself rules and goals and thus defy the world of blind necessities, meaningless things, and stupid demands. Religious rites, festivities, social codes, language, music, art – even science – all contain at least an element of play. You can almost say that culture is play. In a self-made order of rules man enjoys his activities as the maker and master of his own means and ends. Take art. Man, liberated from what you call the real world or ordinary life, plays with a world of rhythms, sounds, words, colors, lines, enjoying the triumph of his freedom. This is his seriousness. There is no ‘merely’.” [67]

It is also building on these notions of intrinsically valued engagements that we build our understanding of play. These may manifest themselves in ways very distinct from gaming, such as interpersonal communication and social networking. Studies such as Wyche’s around Facebook use in Kenya [86], amongst others [19, 22, 39], show the importance people ascribe to these uses, which should not necessarily be striking, as they are otherwise prevalent amongst users in the Western world as well.
ment started with the carving of a literal Hole in The Wall
between Mitra’s workplace, NIIT and an adjoining slum in
New Delhi, known as Kalkaji [53], where a computer was
put to use freely. This experiment proved to be a widely
mediatized success story of free learning and playing by the
children in the slum, who with only “minimal guidance”
derived multiple useful interactions from the computer,
achieving a number of measurable educational outcomes
[53, 54]. This has then been scaled to hundreds locations
and has reached thousands of children in India and Africa
[37]. Part of the project’s success is the emphasis on how
entertaining and motivating content contributed to the successful educational experiment.
Play as capability
Capability theory is a welfare economics concept introduced by Nobel Laureate Amartya Sen, and it stands at the
root of arguably the most widely used measure of achievement of human needs, the Human Development Index
(HDI) [1]. The HDI is closely tied to the Millenium Development Goals (MDG) [62], which constitute the basis established by the United Nations (UN) to guide development
work in general, and as a consequence, ICTD efforts. The
development of capabilities is, according to Sen, based on
the idea that development should focus on enhancing people’s abilities to choose the lives they have reasons to value.
One of the most notable efforts in this area since then has
been led by Matthew Kam in designing for, and studying,
the impact of digital gaming, particularly on the mobile, in
English language education [42]. Long term trials have reported significant successes [43]. Studies in ICTD have
praised the potential for games and leisurely activities in
supporting educational goals, from literacy goals, to mathematics [5], with games being early on lauded as a good
educational tool through its engaging function [29]. Play
and games for education are arguably the most represented
within ICTD understandings of play and games, and fall
under the rubric of serious games [52] which is also an important field within technology studies at large.
Martha Nussbaum, who worked with Sen in the development of capability theory, and is arguably the main authority on the subject, has written briefly about play as a capability. Nussbaum briefly describes play as “Being able to
laugh, to play, to enjoy recreational activities.” [58] Nussbaum however, has not published as extensively on the
topic of play as capability, with some notable exceptions
discussing how some, in particular women, may lack opportunities for play and some of the consequences:
“Often burdened with the “double day” of taxing employment and full responsibility for housework and child care,
they [women] lack opportunities for play and the cultivation of their imaginative and cognitive faculties. All these
factors take their toll on emotional well-being: women have
fewer opportunities than men to live free from fear and to
enjoy rewarding types of love—especially when, as is often
the case, they are married without choice in childhood and
have no recourse from a bad marriage.” [57]
Job training
Education is not the only field in which games and entertainment can have a positive, instrumental, role. Polly is a
telephone based voice manipulation and forwarding system
deployed in Pakistan by Raza et. al. [66]. It has had measurable impact during its large scale and long-term deployment with nearly half a million interactions and 85.000 registered users at the time, and showing considerable tendency in growth. The authors used entertainment to draw
people to the service where other development services
could be provided. They wanted to ask the questions:
It is under this frame that we understand the importance of
play, not as a superfluous activity, but as an essential capability towards living a good life; engaging in and exerting
the freedoms that one has reasons to value. Building on
Sen’s terminology, we will look at the role of play as instrumental, i.e. as a means-to-an-end, its constructive role,
i.e. in building a consistent and honest interaction between
researchers, designers and users. Finally, we arrive at our
main goal here: how we can understand its constitutive role,
i.e. as an unapologetic end-in-itself.
“(1) Is it possible to virally spread awareness of, and train
people in, speech-based services, in a largely low-literate
population, using entertainment as a motivation? (2) Is it
possible to leverage the power of entertainment to reach a
large number of people with other speech-based services?”
Their evaluation turned out largely positive with the high
rates of adoption and job creation, as well as plentiful evidence that users were extracting entertainment and leisurely
value from the service, beyond the development offer [65].
Play, and games in particular, have been discussed in ITCD
mainly given their instrumental roles. We will now look at
the work done in this area, which represents the main contributions to the understanding of play within ICTD, drawing on examples from education, job searching and health.
Non-prescribed uses
Within health applications in ICTD, Schwartz and colleagues worked with the deployment of mobile devices to
health workers in India. These devices were intended for
purposes of collecting health data in eight different projects
[70]. They note how workers also used these devices for
Within education
Perhaps the most famous and widely covered experiment
with technology and education in development is Sugata
Mitra’s Hole in the Wall (HiW) experiment. This experi-
more personal goals, beyond the intentions of deployment,
calling these ‘non-prescribed uses’ [70]. The authors reflect
on how explicitly allowing for these non-prescribed uses,
helps to have a more honest engagement with the workers
themselves. These did not feel the need to hide such behaviors, and it even worked as an extra motivation to participate in the project itself.
The argument that Sen makes is for how democracy, as a
capability, contributes to an ongoing dialogue, allowing a
constant reorganization of priorities involving all levels of
governance, rather than strictly thought out and imposed
top-down. Development and aid theorist William Easterly,
in his influential book “The White Man’s Burden” [20],
explains how the poorest, which are often the ones targeted
by aid and development interventions, are not engaged in
this valuation process. Easterly argues that top-down aid
and development goals are essentially anti-democratic,
since the ones at the bottom do not dispose of any feedback
or accountability mechanisms to change priorities at the
top, such as voting or effective lobbying.
In this example, we extend the notion of play to include
much of the personal use of technologies more broadly,
particularly in a work context. It is about respecting people’s desire to use these technological possibilities for their
own purposes, the freedom to use them for stepping out of
everyday work tasks, creating a playground beyond the
instrumental value of the tasks they were given the devices
to accomplish. Rather than fighting these uses, the project
leaders figured that simply allowing for them created a better engagement with the health workers, leading to increased efficiency for the overall project.
We do not have a solution for tackling the complexity and
breadth of aid, development, or more specifically, ICTD
initiatives with regards to their democratic framing and accountability. We argue that, within ICTD, the downplaying
of playful engagements, which are, as we saw, prominent,
and have the characteristics of freedom and voluntariness,
harms communication and acknowledgment of people’s
desires and value judgments, and, as a consequence, increase communication problems within the different layers
of ICTD work. We will now look at how these problems
have emerged, and have started being discussed within
ICTD, as well as implications for the evaluation of ICTD
projects and dynamics more broadly.
We have seen how games and entertainment can bring
about positive benefits as educational strategies. We have
seen how they can help in engaging people with technological interventions, as well as providing motivation for
users participating in other projects, by not restricting some
of the non-prescribed uses. While we find this type of instrumentalization of play a rather narrow view on the phenomena, it is nonetheless one that should be respected,
since it helps steer away from a vision where play may
harm or detract from development goals. Echoing Sen’s
discussion on democracy and development not being contradictory goals (as is sometimes assumed) and the idea of
the “presumed existence of a deep conflict […] [when]
studies show how both are important and have no conflict”
[71] (p.149-150). Like Sen, we discuss this to help alleviate
some of the concerns which emerge around play, and which
we will see in more detail. But this should not detract from
the importance of play in other ways, beyond and regardless
of whether it achieves other goals or not.
ICTD and play as failure
In what is one of the first pieces in ICTD to reflect on these
aspects, Ratan and Bailur provide us with examples of
ICTD projects, which were classified as failures, despite, on
a closer look, having brought significant value to people’s
lives. One such project is a community radio and telecentre
known as OurVoices, an UNESCO funded initiative which
installed a significant amount of ICTs in a village called
Bhairavi for the purpose of disseminating what was considered “relevant information”, and to examine ICT’s impact
on reducing poverty and enhancing development. While
initially branded a success in improving different metrics
from women’s health to job creation, it quickly fell under
One aspect, which is not often mentioned when discussing,
play, but with important consequences, is what Sen describes as (specifically referring to democratic rights) the
“constructive role in the genesis of values and priorities”
[71]. That is the right for people and communities to openly
discuss and reorganize what values are important, and what
goals to prioritize. This needs to be a constant, deliberative
process that involving the affected, or targeted, communities. As Sen puts it:
“We were told by the villagers that the radio set medium
had been phased out soon after implementation (one of the
reasons given was that the villagers started taking the radios out to their fields and listened to FM radio instead of
OurVoices). During research listeners dismantled one of
the radio loudspeakers in protest and used it to accompany
the procession of the statue for a religious festival. The
NGO’s reaction to this was that the people were ignorant
and uninterested in their own “development”. They removed all the cabling, and set up the loudspeaker in another village” [64]
“The need to discuss the valuation of diverse capabilities in
terms of public priorities is, I have argued, an asset, forcing
us to make clear what the value judgments are in a field
where value judgments cannot be – and should not be –
avoided [...] The work of public valuation cannot be replaced by some cunningly clever assumption.” [71]
The authors proceed to detail a number of consequences
that ensued from this situation and causing significant distress to the local population. Ratan and Bailur discus yet
another project, Hole in the Office (HiO), which dealt with
the introduction of PCs in a community, to aid with job
searching [60]. Discontinuation of HiO occurred after it
was determined that it was being used 25 times more for
entertainment and games than for its intended function, thus
satisfying no development goal:
This is an issue, which affects ICTD’s understanding of
ICT adoption dynamics and motivations. To illustrate this
mismatch, Heeks discusses a survey conducted in Tanzania
reporting that fewer than 15% of mobile owners believed
that the benefits of owning mobiles justified the costs [56].
To this, Heeks asks: “Um . . . so if you believe that guys,
why on earth do you own a mobile?” [34] Heeks is suggesting that a richer story is waiting to be told. A story that was
not reported, either because there was no way to get at the
rationale for the actual reasons leading people to make that
investment, or because it is the kind of information that
ICTD, with its narrower focus on pre-defined development
goals, is in a way not always prepared to assimilate.
“Yet, across the board the perception was that the PC was
a “good” thing and that free access should not be discontinued” [64].
What these examples tell us is what Sey and Ortoleva have
called “negative perceptions” of play within ICTD [75].
These perceptions lead to situations in which behaviors,
that are not intended to occur around the specific ICTD
intervention, are dismissed or censored even if they are
highly valued by the communities. This has implications for
people’s agency, defined by Sen:
Ferreira and Höök have discussed these tensions in their
work around mobile phone adoption in Rah Island, Vanuatu
[24]. They focus on tensions not just between ICTD and the
communities, but within the communities themselves, leading to obstacles in reporting and getting a deeper understanding as to people’s motivations, and indeed rightful
desires, to acquire and engage with modern ICTs. These
tensions should not be fed within ICTD dynamics, if we
want an honest engagement with the recipients of these
development projects, and an understanding as to what motivates the spending of significant amounts of resources in
acquiring these, as Heeks explains:
“I am using the term ‘agent’ [...] as someone who acts and
brings about change, and whose achievements can be
judged in terms of her own values and objectives, whether
or not we assess them in terms of some external criteria as
well” [71].
This highlights tensions between communities and individual’s values and goals, and external criteria for assessment,
which, as we will now see, bears some damaging consequences for ICTD dynamics.
“The significant amounts being spent by the poor on mobiles indicate that phones have a significant value to the
poor […] we [the ICTD community] have long known […]
that ‘poverty’ is not just about money and, hence, that poverty interventions and tools can usefully target more than
just financial benefits” [34]
Implications for evaluation
The issue of downplaying the role of play within ICTD is
not just one of overlooking community and individual
agencies, as problematic as that is, but has implications for
the evaluation of projects as well. Ramesh, the project manager of OurVoices explains why it is so hard to get an understanding of what is actually happening in ICTD projects
such as OurVoices:
This tension between perceived value and the financial efforts made by the poor to acquire these technologies is present in other settings, as pointed out by Song, where according to him people can spend more than 50% of their income
on personal access to ICTs despite this representing an effort some would consider excessive [79]. The question is
whether it is reasonable to simply assume irrationality in
such behaviors, and subsequently risking censoring them,
or whether there is something to learn from these uses.
“It is hard to know if people are really listening. In a survey, if we …ask whether they watch TV or listen to us, they
say yes. […] The minute they see us, they tell us what we
want to hear. They say yes, yes we listened. They feel guilty
for choosing entertainment over development, like something which is good for them.” [64]
If we believe the latter to be the case, then we need to address the dynamics that exist within ICTD, and development work at large, between funding agencies, practitioners
and recipients, that are partly responsible for undermining
the appreciation of these more complex and irreducible, yet
entirely legitimate forms of engaging with technology.
The tension between what “insiders” may legitimately desire and what they perceive “outsiders” want to hear has
been documented in other places [16, 36, 55] and may at
times prevent even a basic understanding of the situation.
This has been documented in developing work [16, 55], and
has been more generally known to be an already existing
characteristic of trials [10], which should not be encouraged, if we want to elicit a rich understanding of the communities as well as the ICTD projects themselves. These
concerns are echoed by Kuriyan and Toyama, remarking
that “what rural villagers want and what we think they need
are frequently different” [46].
Capabilities, according to Sen, should be:
“Deliberative in order to accommodated ongoing discussions regarding priorities and new information, rather than
taking any predefined metric (seducing as it may be) as an
ultimate indicator of well-being, poverty or freedom of different kinds” [71]
If we are to return to Schwartz and colleague’s recognition
and acceptance of non-prescribed uses [70], we see how it
not only helped achieve the project’s goals, but also generated a more open and honest interaction between the different stakeholders. This in turn also helps avoid entirely undesirable feelings of guilt, as brought up by Ramesh from
OurVoices, stemming from people using technology in other ways than their instructed use. These, in turn, may hide,
what would otherwise be important aspects of technology
use, which should be taken into consideration when trying
to understand ICT usage and even, to some extent, explicitly catered to. The intentions and desires behind technology
adoption, which often involves heavy financial sacrifices
such as those Heeks [34] or Smyth and colleagues [77] have
documented, remain underappreciated, given the failure to
account for and respect people’s legitimate desires. And
this is the most important aspect to keep in mind: not only
this tension affects ICTD goals by undermining the relationships between the different parties, but, on a more fundamental level, they are highly prescriptive of images that
some ICTD projects entertain about their participants, as
Ramesh explains:
field where this has been most emphasized is within psychology where from Piaget’s early work [61], the importance to stimulate and encourage play amongst children is
an uncontroversial topic within the field. More recently
psychologist Peter Gray has published extensive work on
the topic, summing up the views of most cognitive psychology on the issue, arguing that, play is not only fundamental,
and an important tool for learning, but also something to be
unleashed [31]. The Convention on the Rights of the Child
has long stated in its article 31: “States Parties recognize
the right of the child to rest and leisure, to engage in play
and recreational activities appropriate to the age of the
child and to participate freely in cultural life and the arts.”
[4]. While acknowledging the right of play for children is
undoubtedly an important step, even if it is far from being a
reality throughout the globe, it should not detract from the
fact that play is important regardless of age.
The desire to play
We have seen an emergence of a, yet sparse, body of work
which has began documenting playful behaviors around
technology in ICTD. We have seen how South African mobile users have appropriated MXit, an instant messaging
system, for the purposes of gaming [83], and other examples abound of such appropriations within the developing
world [15, 75, 77]. Social networks and interpersonal communication have also been source of enthusiasm [70, 83,
86]. Larger surveys show users in the developing world
engaging with gaming in internet cafes and other points of
access [45, 74], and more generally many have discussed
the significant amounts of resources that users, even within
resource constrained situations, are willing to place in entertainment and gaming [26, 34, 63].
As Ramesh, a project manager on the OurVoices project, puts it:
"We might be giving a programme about an agricultural scheme, which the government might have for him, which might significantly increase his yield, but he's not interested in listening to it, because it's boring. You know, he wants to watch the movie. That's the competition we've got, the challenge we have to overcome." [64]
As a final note, the constructive role of play is, in a way, instrumental, since we frame it as a means towards achieving other goals (such as better evaluation and feedback loops). But because play can be a broad means of negotiating the goals and values of development themselves, rather than of achieving them directly, it is important, as Sen does, to view this role in a different light, tied to the idea of freedom itself rather than to a narrower conception of social and economic progress.
This desire for play, entertainment and personal uses in the developing world should not be surprising, as these account for a significant portion of technological use outside the developing world as well. It is up to ICTD to acknowledge and study what people do, and want to do, with technology, rather than ascribing a priori intentions. Here HCI can provide some guidance, as studies have emerged that examine technology use beyond instrumental framings and appreciate it as legitimate in its own right: for instance, Juhlin and Weilenmann's work on hunting [41] or Jeffrey and Shaowen Bardzell's work on sex toys [6]. Both of these focus on the activities themselves, helping support the growing interest in those domains. However, we also find some obscurity around these issues in HCI, as Genevieve Bell pointed out in her keynote at CHI 2010 [7]. Similar resistances exist to appreciating activities for their own sake, with activities such as sports, sex or religion remaining largely underrepresented in research in comparison to their prominence within actual ICT use.
We have devoted some time here to explaining the importance of play from instrumental and constructive viewpoints,
partly in order to alleviate some concerns that may stem
from an often – and wrongly – assumed contradiction between the triviality of play and the seriousness of other
goals, such as development ones. Our main point in this
work however, and what we will argue now in more detail,
is that play is intrinsically important: its constitutive role as
an integral part of the human experience. This is regardless
of whether or not it helps achieve other goals, such as the
ones discussed above.
We have seen how most academic thought overwhelmingly speaks of play as a fundamental and innate aspect of life, from Huizinga's framing of play as the basis from which culture emerges [40] to Nussbaum's classification of play as a basic human capability [59].
We have seen how play occurs even around the simplest
digital interactions [24], and how people engage in extensive work to creatively appropriate technologies for their
own desired uses, regardless of how limited the opportunities for play seem to be. But we argue that people should
not have to work so hard to derive pleasure and other value
from technologies which many of us take for granted, and
that we should complement and assist, rather than resist, the
motivations and desires that many communities are already
very visibly displaying around ICT adoption.
Appreciating the ways in which people choose to adopt ICTs, regardless of their socio-economic situation, serves us better than perpetuating play's relative obscurity within ICTD work. A failure to understand the motivations behind ICT use is likely to result in a failure to provide real benefits to those we target through our studies and interventions.
We have discussed the importance of looking at play as a
capability within ICTD work, and analyzed its instrumental,
constructive and constitutive roles, positioning relevant
work according to this framework. In order to help alleviate
some of the issues around the acknowledgment of play
within ICTD, we propose some broader topics for discussion: (1) the dynamics of paternalism and a priori prescription of goals in ICTD; (2) how play has a narrative and rhetorical role within ICTD, and development more generally, which influences the way it is discussed, prioritized or accepted. We end by (3) discussing how this is not just an issue of designing, or deploying, playful technologies in ICTD, but rather a broader discussion that needs to take place in order to reframe play as, essentially, a quality-of-life and freedom issue.
Taking play seriously
There are two further examples we would like to bring up
as part of emphasizing the non-triviality of play (although
the play activities may themselves be trivial). In severely
resource-constrained situations, play does not become a
luxury one cannot afford. In many, rather severe, cases play
becomes all the more important as a means to escape current hardships, particularly through the aspects of escape
and order that play brings to life. It is in this way we would
like to discuss these two important examples. The first comes from the Great Depression era in the United States, where the resources allocated to leisure activities grew, rather than diminished, between 1890 and 1940 [25], across socio-economic backgrounds.
The second, and one of the most striking examples of this, is given to us by George Eisen in "Children and Play in the Holocaust", where Eisen tries to reconcile, and come to
terms with, the horror of the situation and the importance of
play among children [21]. In his work it is clear how easily
one may completely ignore the importance and role of play
in those situations. Eisen describes play as it occurred, dissecting its role within that context, without underappreciating or downgrading the urgency and deep precariousness of
the situations those children found themselves in. It was, according to Eisen, precisely play that allowed children to preserve a level of humanity throughout their plight.
Paternalism and prescription
Many have denounced the paternalistic dynamics that occur
within development work [15, 32, 64]. Anthropologists like
Arturo Escobar have discussed the making of concepts like "development" and the "Third World" as narrow conceptions that cast the people described as passive receivers of external knowledge which is good for them [23]. Manzo discusses these dynamics: "What political economy wanted, in short, was to take the poor, inefficient Third World decision maker by the hand, lead him to the development candy store, and show him how to get the best buy for his meager pennies." [50]. Ratan and Bailur also discuss this through their recounting of Helena Norberg-Hodge's ethnographic fieldwork [36] and her experience of returning to Ladakh, India, ten years after a local had told her that there was no poverty there, only to find that same person begging tourists for help and emphasizing her own poverty.
While tensions around the appreciation of play may appear
particularly acute in ICTD settings, they have resonances in
different moral tales present in the Western world as well.
From Aesop’s fable of the grasshopper and the ant [14], in
which the grasshopper finds its demise after spending all
summer singing and not preparing adequately for winter,
while the ant thrives in its dutifulness (Suits dedicates a
whole book defending the grasshopper’s lifestyle as the
only way to lead a good life [81]), to for instance ideas
about work ethics as a trait of cultures or religions (such as
Max Weber’s treatise on protestant work ethics [84]).
Within popular and academic thought, multiple traditions exist that reinforce these strict prioritizations between activities, most famously embodied by Maslow's pyramid of needs.
What is at stake here is not just the dynamics that are created between the recipients of aid, donors and development
practitioners, but more broadly the construction of these
settings, as Escobar argues [23]. The interactions that development work generates can yield detrimental dynamics between recipients of development work, funding
organizations and development practitioners, and as Smyth
and colleagues write, we must question:
“Are needs really more urgent than desires? Who defines
which is which? Do researchers exaggerate the urgency of
‘needs’ due to their own biases and preconceptions?” [77]
These, and other, moral tales shape the way we conceive of play, work, dutifulness and so on, and are likely to permeate ICTD and academic work. However, we need to question these assumptions by showing how notions of play and work, for instance, are not contradictory. We are often better served by appreciating the ways in which people choose to engage with technology.
Perhaps here we can learn from a deeper understanding of
local needs and aspirations to inform the nature of ICTD
interventions. To gain this deeper understanding, we may
borrow even more from the traditions of anthropology and sociology, as suggested by Burrell & Toyama [12]. This may allow us to step away from some of the more top-down, pre-defined metrics (as suggested by Burrell & Toyama [12], Heeks [33] and Van Dijk [18], to name a few). But
this is not only a discussion between top down vs. bottom
up approaches for development, as embodied most notably
by Jeffrey Sachs [68] and William Easterly [20]. Both of
these approaches have their own benefits and limitations.
The important aspect to keep in mind is that, to be effective,
both will require constant loops of feedback and accountability to avoid reinforcing some of these representational
and paternalistic dynamics.
Researchers and practitioners are, moreover, only a part of the structures that determine and conduct development and ICTD work. The same pressures to justify technology use that Ramesh, from OurVoices, brought up may not be so dissimilar to the ways in
which ICTD practitioners and researchers must justify their
own work, as Adam Smith famously remarked:
“The interest of the dealers, however, in any particular
branch of trade or manufactures, is always in some respects
different from, and even opposite to, that of the public.”
This is not intended to ascribe malice or ill intentions onto
ICTD institutions, researchers, practitioners and other parties involved, from whom we assume an a priori deep
commitment to their work. This is rather a reminder that
dynamics which emerge from these institutional arrangements may at times ignore those with less voice to lobby for
their interests, as Easterly discusses [20]. By not acknowledging playful behaviors around technology we may be
engaging in a poor representation of people, and as a consequence, assuming priorities without deliberation. This
may appear particularly prominent in development work,
where the main focus is on a stricter view of socioeconomic needs, as already established in the MDG.
Feelings of shame or guilt, such as those brought up by
Ramesh from OurVoices, further marginalize the understanding of these practices and undermine the informational
base on which ICTD work is conducted, i.e. the capacity to
understand what the values and needs are for the users. Insights on steering clear from paternalistic modes of development thinking are not necessarily a novelty introduced in
this work. But we argue that the acknowledgment of play,
as a legitimate activity valued regardless of socio-economic
situation, challenging us to observe and respect these behaviors, can be a key element to help unlock some of the
vices that development work can generate.
These are part of larger structures and dynamics, which can be difficult to escape: they are ingrained, as we saw, in academic thought, religious morals, theories of work ethics and even in children's fables. Play can mean different
things, few of which deserve the ill treatment and depreciation that it often gets. We suggest a repurposing of the concept of play at all levels, to acknowledge these fundamental
human forms of engagement, enriching academic thought
more generally, and ICTD interventions more specifically.
Play as a narrative/rhetorical device
We have defined play according to a series of criteria, not all of which are necessary to classify something as play. Play is a matter of family resemblances [85] between different activities that we reasonably qualify as play, rather than a perfectly defined concept. The ways in which
we choose to frame and acknowledge certain activities as
play, or not, has important repercussions. One repercussion might be that donors are unwilling to fund technologies framed as entertainment rather than as means to socio-economic outcomes. As project manager Ramesh (from the OurVoices project) mentions, when queried about the discrepancies between the intended goals of the project and people's less-than-serious activities, and the difficulty of getting at these:
"We can either approach community radio as what the community wants. If you make it that way, it will be music only. But at [the donor agency] we can't justify all this equipment to play music all day. There has to be a development angle." [64]
Not (just) a design issue but a larger debate
There are certainly better and worse ways to design play experiences with and around digital technologies, and such lessons appear most prominently within game studies [69]. One important thread outside the gaming world, which has inspired play thinking in HCI, is Sengers and Gaver's proposition for open interpretation [73], together with Höök's proposition for designing open surfaces [38], as ways to allow for a freer engagement with technology.
Within an ICTD context, some such as ourselves have suggested looking at HCI, as a more mature, design oriented,
discipline aimed at understanding people and technology,
for design recommendations for play in ICTD, such as
openness [15, 51]. While this certainly has some value, it is
also important to remember that many play experiences,
like games, are often extensively and carefully crafted to
provide enhanced experiences of their own. In that sense
openness is not a panacea to play, and both kinds of design
strategies have a place when providing people with interesting and engaging playful experiences.
Framing something as important or urgent can result in different levels of attention paid, and funding awarded, to different projects and communities. Smyth and colleagues ask:
"Do researchers exaggerate the urgency of 'needs' due to their own biases and preconceptions?" [77]
While biases and preconceptions from researchers certainly play a part in how projects are conducted and evaluated, researchers are only one part of the structures within which such priorities are set.
Regardless of strategy, design alone can hardly challenge the existing institutional, moral and ethical structures, discussed earlier, that obfuscate play activities. These contribute to a perception, as Sutton-Smith describes, of other activities as being "wiser" [82]. The work that must be done is not just the technical work of designing these experiences, as important as that is, but a questioning of priorities and a constant renegotiation of those priorities with the people who are supposed to benefit from these technologies. It involves an ongoing dialogue, while steering away from a priori moral imperatives, which do not even seem to match the ways in which people value digital technologies, both within ICTD contexts and at large.
Openness in designing systems can certainly be one of the
mechanisms, amongst other ways of crafting and designing
for play, with which to provide positive and desirable experiences with digital technologies. But we argue that openness as a value may be more important in crafting the ICTD
projects and interventions themselves rather than the specific technologies, so that these can accommodate new insights, encouraging mutual deliberation and learning [47].
By resisting a priori goals, shifting the value of the term
play and focusing more broadly on the interventions, rather
than mainly on the systems’ designs, we hope to help alleviate some of the tensions around this topic, thinking more
broadly about ICTD work and involving those who the
work will affect, in more open deliberations over goals and
priorities. Play is an important capability to enhance; it should not be a source of guilt, but rather be recognized as representing a large portion of what we value in technology and life.
We have seen how play, as a capability, helps us appreciate
its different roles (instrumental, constructive and constitutive), showing how there is no inherent conflict between
play and development, and how the acknowledgment of
play is fundamental for a more honest and rigorous engagement with ICTD work. A dialogue around play will
always exist within societies and academia, setting rules
and deliberating over its appropriateness, this deliberation
process should be encouraged, not restricted. But most importantly, this framing allows us to place the focus on its
constitutive aspect, valuing play regardless of other development goals – in the same way we should ensure gender
equal opportunities and democratic rights despite any other
social and economic rationale, as essential freedoms, allowing people to deliberate on, and live, their lives in the ways
they have reason to value and see fit – unapologetically
focusing on a better quality of life.
[1] Anand, S. and Sen, A. 1994. Human Development Index: methodology and measurement. Human Development Report Office (HDRO), United Nations Development Programme (UNDP).
[2] Apter, M.J. 1989. Reversal theory: motivation, emotion, and personality. Routledge.
[3] Arora, P. 2012. The leisure divide: can the "Third World" come out to play? Information Development. 28, 2 (2012), 93–101.
[4] Assembly, U.G. 1989. Convention on the Rights of the Child. United Nations, Treaty Series. 1577, 3.
[5] Banerjee, A. et al. 2005. Remedying education: Evidence from two randomized experiments in India. National Bureau of Economic Research.
[6] Bardzell, J. and Bardzell, S. 2011. Pleasure is your birthright: digitally enabled designer sex toys as a case of third-wave HCI. Proceedings of the 2011 annual conference on Human factors in computing systems (New York, NY, USA, 2011), 257–266.
[7] Bell, G. 2010. Messy Futures: culture, technology and research. Keynote, CHI 2010.
[8] Bødker, S. 2006. When second wave HCI meets third wave challenges. NordiCHI '06: Proceedings of the 4th Nordic conference on Human-computer interaction (New York, NY, USA, 2006), 1–8.
[9] Boyd, D.F. and Krasnow, H.S. 1963. Economic Evaluation of Management Information Systems. IBM Syst. J. 2, 1 (Mar. 1963), 2–23.
[10] Brown, B. et al. 2011. Into the wild: challenges and opportunities for field trial methods. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2011), 1657–1666.
[11] Brown, S. and Vaughan, C. 2010. Play: How It Shapes the Brain, Opens the Imagination, and Invigorates the Soul. Scribe Publications.
[12] Burrell, J. and Toyama, K. 2009. What constitutes good ICTD research? Information Technologies & International Development. 5, 3 (2009), 82.
[13] Caillois, R. 1961. Man, play, and games. University of Illinois Press.
[14] Calder, A. 2001. The Fables of La Fontaine: Wisdom brought down to earth. Librairie Droz.
[15] Chirumamilla, P. and Pal, J. 2013. Play and power: a ludic design proposal for ICTD. Proceedings of the Sixth International Conference on Information and Communication Technologies and Development: Full Papers - Volume 1 (2013), 25–33.
[16] Crewe, E. and Harrison, E. 1998. Whose Development? An Ethnography of Aid. (1998), 23–65.
[17] Csíkszentmihályi, M. 2008. Flow: The Psychology of Optimal Experience. HarperCollins.
[18] Van Dijk, J.A. 2006. Digital divide research, achievements and shortcomings. Poetics. 34, 4 (2006), 221–235.
[19] Donner, J. 2009. Blurring livelihoods and lives: The social uses of mobile phones and socioeconomic development. Innovations. 4, 1 (2009), 91–101.
[20] Easterly, W.R. 2006. The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good. Penguin Press.
[21] Eisen, G. 1990. Children and play in the Holocaust: Games among the shadows. Univ of Massachusetts Press.
[22] Ellwood-Clayton, B. 2006. All we need is love—and
a mobile phone: Texting in the Philippines. Cultural
Space and Public Sphere in Asia, Seoul, Korea.
[23] Escobar, A. 2011. Encountering development: The
making and unmaking of the Third World. Princeton
University Press.
[24] Ferreira, P. and Höök, K. 2012. Appreciating plei-plei around mobiles: Playfulness in Rah Island. Proceedings of the 2012 annual conference on Human factors in computing systems (New York, NY, USA, 2012).
[25] Fischer, C.S. 1994. Changes in leisure activities,
1890-1940. Journal of Social History. (1994), 453–
[26] Galarneau, L. 2014. Global Gaming Stats: Who’s
Playing What, and Why. Big Fish Games. (2014), 1–
[27] Gallagher, C.A. 1974. Perceptions of the value of a
management information system. Academy of Management Journal. 17, 1 (1974), 46–55.
[28] Gaver, W.W. et al. 2004. The drift table: designing
for ludic engagement. CHI ’04 extended abstracts on
Human factors in computing systems (New York,
NY, USA, 2004), 885–900.
[29] Gee, J.P. 2003. What video games have to teach us
about learning and literacy. Computers in Entertainment (CIE). 1, 1 (2003), 20–20.
[30] Geertz, C. 1991. Deep Play. Rethinking Popular Culture: Contemporary Perspectives in Cultural Studies.
(1991), 239.
[31] Gray, P. 2013. Free to learn: Why unleashing the
instinct to play will make our children happier, more
self-reliant, and better students for life. Basic Books.
[32] Gumucio-Dagron, A. 2001. Making waves: Stories of
participatory communication for social change.
Rockefeller Foundation New York.
[33] Heeks, R. 2008. ICT4D 2.0: The Next Phase of Applying ICT for International Development. Computer.
41, 6 (2008), 26–33.
[34] Heeks, R. 2008. Mobiles for Impoverishment? ICTs
For Development.
[35] Henricks, T.S. 2006. Play reconsidered: Sociological
perspectives on human expression. University of Illinois Press.
[36] Hodge, H.N. 2013. Ancient futures: learning from
Ladakh. Random House.
[37] Hole-in-the-Wall: 2011. Accessed: 2015-01-24.
[38] Höök, K. 2006. Designing familiar open surfaces.
Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles (New York,
NY, USA, 2006), 242–251.
[39] Horst, H.A. and Miller, D. 2006. The cell phone: an
anthropology of communication. Berg.
[40] Huizinga, J. 1939. Homo Ludens: A Study of the
Play-Element in Culture. Routledge.
[41] Juhlin, O. and Weilenmann, A. 2008. Hunting for
fun: solitude and attentiveness in collaboration. Proceedings of the 2008 ACM conference on Computer
supported cooperative work (New York, NY, USA,
2008), 57–66.
[42] Kam, M. et al. 2008. Designing e-learning games for
rural children in India: a format for balancing learning with fun. Proceedings of the 7th ACM conference
on Designing interactive systems (2008), 58–67.
[43] Kam, M. et al. 2009. Improving literacy in rural India: Cellphone games in an after-school program. Information and Communication Technologies and Development (ICTD), 2009 International Conference on
(2009), 139–149.
[44] Kaprow, A. 2003. Essays on the Blurring of Art and
Life. Univ of California Press.
[45] Kolko, B. et al. 2014. The value of non-instrumental
computer use: Skills acquisition, self-confidence, and
community-based technology training. TASCHA.
[46] Kuriyan, R. et al. 2008. Information and communication technologies for development: The bottom of the
pyramid model in practice. The Information Society.
24, 2 (2008), 93–104.
[47] Loudon, M. and Rivett, U. 2014. Enacting Openness
in ICT4D Research. Open Development: Networked
Innovations in International Development. (2014),
[48] Lucas, H.C. 1973. User reactions and the management of information services. Management Informatics. 2, 4 (1973), 165–172.
[49] Malaby, T.M. 2007. Beyond play a new approach to
games. Games and culture. 2, 2 (2007), 95–113.
[50] Manzo, K. 1991. Modernist discourse and the crisis
of development theory. Studies in comparative international development. 26, 2 (1991), 3–36.
[51] Marsden, G. 2009. UNDER DEVELOPMENT Electronic tablecloths and the developing world. Looking
ahead. 16, 2 (2009).
[52] Michael, D.R. and Chen, S.L. 2005. Serious games:
Games that educate, train, and inform. Muska & Lipman.
[53] Mitra, S. et al. 2005. Acquisition of computing literacy on shared public computers: Children and the "hole in the wall". Australasian Journal of Educational Technology. 21, 3 (2005), 407.
[54] Mitra, S. 2003. Minimally invasive education: a progress report on the “hole-in-the-wall” experiments.
British Journal of Educational Technology. 34, 3
(2003), 367–371.
[55] Mosse, D. 2001. People’s knowledge’, participation
and patronage: Operations and representations in rural development. Participation: The new tyranny.
(2001), 16–35.
[56] Mpogole, H. et al. 2008. Mobile Phones and Poverty
Alleviation: A Survey Study in Rural Tanzania. Proceedings of 1st International Conference on M4D
Mobile Communication Technology for Development
(Karlstad University, Sweden, 2008), 69–79.
[57] Nussbaum, M. 2000. Women’s capabilities and social
justice. Journal of Human Development. 1, 2 (2000),
[58] Nussbaum, M.C. 2011. Creating capabilities. Harvard University Press.
[59] Nussbaum, M.C. et al. 1993. The quality of life. Clarendon Press Oxford.
[60] Pawar, U.S. et al. 2008. An "Office Hole-in-the-Wall" Exploration. Microsoft Research Technical Report.
[61] Piaget, J. and Cook, M.T. 1952. The origins of intelligence in children. (1952).
[62] Poverty, E. 2015. Millennium development goals. United Nations. Available online: http://www.un.org/millenniumgoals/ (accessed on 23 August 2011).
[63] Radha, G. 2013. Mobile Gaming in Emerging Markets - Some Insights. TreSensa.
[64] Ratan, A.L. and Bailur, S. 2007. Welfare, agency and "ICT for Development." Proceedings of the International Conference on Information and Communication Technologies and Development (Bangalore, India, 2007).
[65] Raza, A.A. et al. 2013. Job opportunities through
entertainment: Virally spread speech-based services
for low-literate users. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(2013), 2803–2812.
[66] Raza, A.A. et al. 2012. Viral entertainment as a vehicle for disseminating speech-based services to low-literate users. Proceedings of the Fifth International
Conference on Information and Communication
Technologies and Development (2012), 350–359.
[67] Riezler, K. 1941. Play and Seriousness. The Journal
of Philosophy. 38, 19 (1941), 505–517.
[68] Sachs, J. 2005. The End of Poverty: Economic Possibilities for Our Time. Penguin Press.
[69] Salen, K. and Zimmerman, E. 2004. Rules of play:
game design fundamentals. MIT Press.
[70] Schwartz, A. et al. 2013. Balancing burden and benefit: non-prescribed use of employer-issued mobile
devices. Proceedings of the Sixth International Conference on Information and Communications Tech-
nologies and Development: Notes-Volume 2 (2013),
[71] Sen, A. 1999. Development as freedom. Oxford University Press.
[72] Sen, A. and others 1993. Capability and well-being. The Quality of Life. Clarendon Press.
[73] Sengers, P. and Gaver, B. 2006. Staying open to interpretation: engaging multiple meanings in design and evaluation. Proceedings of the 6th conference on Designing Interactive systems (2006), 99–108.
[74] Sey, A. et al. 2013. Connecting people for development: Why public access ICTs matter (eBook).
[75] Sey, A. and Ortoleva, P. 2014. All Work and No Play? Judging the Uses of Mobile Phones in Developing Countries. Information Technologies & International Development. 10, 3 (2014), 1.
[76] Smith, A. and Nicholson, J.S. 1887. An Inquiry Into the Nature and Causes of the Wealth of Nations. T. Nelson and Sons.
[77] Smyth, T.N. et al. 2010. Where there's a will there's a way: mobile media sharing in urban India. Proceedings of the 28th international conference on Human factors in computing systems (New York, NY, USA, 2010), 753–762.
[78] Solow, R.M. 1987. We'd better watch out. New York Times Book Review. 36, (1987).
[79] Song, S. 2009. Nathan and the Mobile Operators. Many Possibilities.
[80] Statistics, I.T.U. 2014. ITU ICT Statistics. International Telecommunication Union.
[81] Suits, B. 1978. The grasshopper: Games, life and utopia. Broadview Press.
[82] Sutton-Smith, B. 2009. The ambiguity of play. Harvard University Press.
[83] Walton, M. and Pallitt, N. 2012. "Grand Theft South Africa": games, literacy and inequality in consumer childhoods. Language and Education. 26, 4 (2012).
[84] Weber, M. 1905. The Protestant Ethic and the Spirit of Capitalism: and other writings. Penguin.
[85] Wittgenstein, L. 1953. Philosophical investigations. John Wiley & Sons.
[86] Wyche, S.P. et al. 2013. Facebook is a luxury: An exploratory study of social media use in rural Kenya. Proceedings of the 2013 conference on Computer supported cooperative work (2013), 33–44.
Double Binds and Double Blinds:
Evaluation Tactics in Critically Oriented HCI
Vera Khovanskaya
Information Science
Cornell University
[email protected]
Eric P. S. Baumer
Communication and
Information Science
Cornell University
[email protected]
Phoebe Sengers
Information Science and
Science & Technology Studies
Cornell University
[email protected]
Nine years later, it was named the most important paper in the conference that year. It was not,
however, influential in the way that at least one of the
authors intended. For Philip Agre, the paper represented not
only a breakthrough in the technical development of AI, but
also a critical reframing of AI as a research program. As
later worked out in Computation and Human Experience,
for Agre the primary goal of this work had been to critically rethink the conceptualizations undergirding technical work in AI and to
offer a new conceptual alternative, embodied in technical
language that engineers could understand. This approach
Agre termed critical technical practice. Yet, to his
frustration, while AI enthusiastically picked up on the technical work, the fundamental critical reconceptualization of human activity embodied in the technology was not taken up; eventually the work was reabsorbed into what
Agre saw as business as usual in AI [44].
Critically oriented researchers within Human-Computer
Interaction (HCI) have fruitfully intersected design and
critical analysis to engage users and designers in reflection
on underlying values, assumptions and dominant practices
in technology. To successfully integrate this work within
the HCI community, critically oriented researchers have
tactically engaged with dominant practices within HCI in
the design and evaluation of their work. This paper draws
attention to the ways in which tactical engagement with aspects of HCI evaluation methodology shapes, and bears consequences for, critically oriented research. We reflect on
three of our own experiences evaluating critically oriented
designs and trace challenges that we faced to the ways that
sensibilities about generalizable knowledge are manifested
in HCI evaluation methodology. Drawing from our own
experiences, as well as other influential critically oriented
design projects in HCI, we articulate some of the trade-offs
involved in consciously adopting or not adopting certain
normative aspects of HCI evaluation. We argue that some forms of this engagement can hamstring researchers in pursuing their intended research goals and have
consequences beyond specific research projects to affect the
normative discourse in the field as a whole.
While critical technical practice as a concept did not have
the lasting influence on AI that Agre had hoped, in recent
years the idea of deeply coupling technical design work
with critical reflection on its conceptualization has been
picked up in a neighboring field, Human-Computer
Interaction (HCI). For example, within HCI, Dourish cites
Agre's notion of critical technical practice as a major
inspiration for his landmark work Where the Action Is [16,
p 4]. Sengers et al.'s articulation of reflective design [41]
expands critical technical practice to be used not only to
work toward the resolution of specific technical impasses,
but as a core method throughout all phases of technology’s
design and use. The engagement of critical practice within
technology design specifically and HCI more broadly has
also been spurred through research drawing on other critical
traditions. For example, Bardzell, Bardzell, and Blythe
integrate criticism and critical theory into design [5,6,7,10];
DiSalvo and Hirsch draw on critical and political traditions
in the arts to frame new opportunities and roles for design
[15,23]; and Pierce and Löwgren use philosophy of
technology and design theory to articulate and execute
critical approaches to interaction design [35,36,30]. In
short, there is a foment of critical approaches in HCI which
aim to push forward HCI research while simultaneously
troubling some of its foundational assumptions. Thus, in
contrast to Agre's impact within AI, in HCI critically
oriented research has clearly achieved lift-off as a vibrant
research program.
Author Keywords
Critical technical practice; critically oriented HCI,
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI):
In 1987, two graduate students at MIT published a paper in
the major annual Artificial Intelligence (AI) conference
describing a new way to construct intelligent programs [2].
This paper became widely influential, spawning a new area
Copyright © 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM.
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus, Denmark
What does tactical engagement with HCI look like?
We will ground our ongoing discussion in this section in
three examples of 'working systems' which have been
influential in the field and inspirational for our own
practice. We chose these particular exemplars as instances
where authors begin to “lift the curtain” to reveal some of
the tensions in evaluating critical HCI projects.
In this paper, we take the emergence of a wider body of
research integrating critical reflection with technology
design as an opportunity to better understand the dynamics
and problematics of tactically engaging with HCI in order
to expand, reconceive, or otherwise critically reform the
field. We argue that the challenges that Agre faced in
marrying critical reflection with an ongoing technical
discourse that otherwise tends to bracket such questions
have not simply disappeared but remain relevant to
understanding how researchers can successfully tactically
engage HCI. Moreover, HCI adds a significant new
complexity to this challenge because of the split audience
of HCI work. HCI researchers must engage not only other
scholars, but also end users or other participants, who are an
obligatory passage point for commonly accepted definitions
of what makes a system or approach 'work' in HCI.
The first system, Affector, is an interactive video
installation developed by Phoebe Sengers, Simeon Warner,
and Kirsten Boehner [12,40,42]. The system is comprised
of a video window between the offices of two friends that
enables them to communicate their emotions by
systematically filtering the video feed according to sensor
readings. Affector is presented as a challenge to dominant
threads in computer-mediated communication toward
greater realism and accuracy in representation of affect by
not directly representing emotion in the system and instead
emphasizing openness to interpretation. Methodologically,
it challenges HCI notions of objectivity and scientific
observation by using an 'autobiographical design' method in
which the same people are designers, users, and evaluators,
working together to iteratively design and develop the
system in reaction to their personal experiences.
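As a rough illustration of the design move described above, the sketch below (our own construction, not the Affector code; the frame representation and the pixelation filter are assumptions) couples an ambient sensor reading to the degree of distortion applied to a video frame, so that the system conveys *that* something is happening without asserting *what* emotion it has inferred:

```python
# Illustrative sketch: a grayscale frame (2D list of 0-255 values) is
# pixelated more heavily as an ambient sensor reading rises. No emotion
# label is ever computed; the mapping stays open to interpretation.

def filter_frame(frame, reading):
    """Pixelate `frame` in proportion to `reading` (a float in [0, 1])."""
    h, w = len(frame), len(frame[0])
    block = max(1, int(round(reading * min(h, w))))  # block size grows with reading
    out = [row[:] for row in frame]
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            cells = [(y, x)
                     for y in range(y0, min(y0 + block, h))
                     for x in range(x0, min(x0 + block, w))]
            avg = sum(frame[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = avg  # replace each block with its average
    return out

frame = [[0, 255], [255, 0]]
print(filter_frame(frame, 0.0))  # low reading: frame passes through unchanged
print(filter_frame(frame, 1.0))  # high reading: whole frame averaged away
```

The point of the sketch is the absence of any classification step: the sensor modulates presentation, and interpretation is left to the two friends.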
Within this set of considerations, we aim to make three
main contributions. First, we articulate how the need for
tactically engaging with the lingua franca in HCI shapes the
nature of our interactions with the participant in ways that
can hamstring at least some of researchers' critical
intentions. Second, through examining unpublished
backstories of three of our own projects, we demonstrate
how this happens concretely on the ground, leading to some
undesirable practical consequences in relationships with
participants despite our largely being able to position the
projects as successful from an HCI research point of view.
Finally, based on our own work and related published
projects, we identify three specific ways in which what
counts as 'working' in mainstream HCI discourse manifests
in critically oriented HCI discourse in ways that can
inadvertently lead to particular forms of breakdown in the
relationship between designers and publics.
The second system, Local Barometer, is one of three
“threshold devices”, poetic intermediaries between home
and the environment, developed by Gaver’s research studio
at Goldsmiths [21, 33]. The Local Barometer is intended to
provide people with a new sense of the “sociocultural
texture” around their house. The barometer itself is
composed of six small screens which display scrolling
images and texts from classified advertisements sourced
from the internet. The local wind conditions, as measured
by the device, determine which advertisements appear on
the screens: the harder the wind blows, the greater the
geographic distance from which the ads “travel.” Gaver et
al. position this work to resist a dominant ethos in
technology design towards accuracy and efficiency. Instead,
they stress the value of multiple, unresolved narratives in
understanding the meanings of technology, including play,
ambiguity and interpretation in design [42]. Thus, the
evaluation of this system involved collecting and
juxtaposing multiple narratives of use, rather than
emphasizing a single correct meaning or use of the system.
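The wind-to-distance coupling at the heart of the Local Barometer can be sketched as follows (a minimal illustration under our own assumptions; the ad records, field names, and the linear wind-to-radius scale are invented for the example, not taken from Gaver et al.):

```python
# Illustrative sketch: local wind speed sets the geographic radius from
# which classified ads are drawn -- the harder the wind blows, the
# farther afield the ads "travel".

def select_ads(ads, wind_speed_ms, km_per_ms=5.0):
    """Return ad texts whose distance (km) falls within the wind-scaled radius."""
    radius_km = wind_speed_ms * km_per_ms
    return [ad["text"] for ad in ads if ad["distance_km"] <= radius_km]

ads = [
    {"text": "bicycle, good condition", "distance_km": 1.0},
    {"text": "sofa, free to collector", "distance_km": 12.0},
    {"text": "boat trailer",            "distance_km": 60.0},
]
print(select_ads(ads, 0.5))  # light breeze: only the nearest ad appears
print(select_ads(ads, 4.0))  # stronger wind: ads arrive from farther away
```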
The problematic we explore in this paper is associated with
work in HCI that proposes significant alterations to core
foundational assumptions of HCI and embodies these
alternatives in working systems or methods. Some of this
work is explicitly inspired by critical technical practice,
some of it comes from other critical traditions, and some is
not explicitly framed as 'critical' but still aims to open
significant new spaces for HCI by inverting dominant
assumptions of what makes for good design or research. In
this section, we will describe how tactical engagement with
HCI works in several well-known projects and articulate
three key attributes of these projects that support effective
tactical engagements with HCI. We will argue that these
three key attributes lead to a core problematic for critically
oriented HCI work: that the need to articulate systems as
'working' within HCI’s discursive norms significantly
troubles our relationships with participants.
The third system is an "ultra low cost sensing system"
developed by Kuznetsov, Hudson, and Paulos to support air
quality activists in detecting particulate pollution [25]. The
“lo-tech” sensors can be assembled inexpensively from
common paper materials. The authors describe the system
and its deployment in a local environmental community,
highlighting a series of tradeoffs between expensive and
inexpensive sensing systems, usability and generalizability,
and the role of inexpensive sensors for encouraging
reflection and community action. Through this system, the
authors aim to alter the political economy of environmental
sensing by making it possible for non-experts to collect,
measure, and reflect on local air quality, rather than
requiring them to partner with scientists who would be in
charge of the data gathering and interpretation. Kuznetsov
et al. test the system in collaboration with a local activist
group. This project is part of a longer-term design research
program on participatory approaches to environmental data
gathering, which has highlighted political issues in the
design of environmental sensing systems [4,25,26,27,28].
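To give a flavor of the kind of computation such a shift implies, the sketch below shows one way non-experts might quantify a photographed paper sensor without a scientist in the loop. This is our own hedged illustration, not the published pipeline: the dark-pixel heuristic and the threshold are assumptions.

```python
# Hedged illustration: estimate how exposed a sticky-paper sensor is by
# the fraction of photographed pixels darkened by trapped particulates.

def particulate_score(pixels, threshold=128):
    """Fraction of grayscale pixels (0-255) darker than `threshold`."""
    flat = [p for row in pixels for p in row]
    dark = sum(1 for p in flat if p < threshold)
    return dark / len(flat)

clean   = [[250, 245], [248, 252]]  # mostly white paper
exposed = [[250, 40], [35, 252]]    # half the pixels darkened
print(particulate_score(clean))     # near-zero score for an unexposed sensor
print(particulate_score(exposed))   # higher score after exposure
```

A measure this simple is obviously crude, which is consistent with the tradeoff the authors highlight between expensive, precise instruments and cheap sensors that communities can own and interpret themselves.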
they assigned to their third collaborator the role of an
“external evaluator” to provoke some of the reflection and
moderate discussions about the system. In other work,
Sengers legitimated these practices by arguing that
approaches such as Affector's which address multiple
interpretations are still “definable and testable” by HCI
standards [42, p.107].
For Gaver’s team, there is a similarly strong commitment to
presenting an alternative to dominant technology design
methods. This work is not framed as 'critical'; Gaver et al.
see this work rather as a positively and practically oriented
design practice. Nevertheless, it directly challenges some
HCI orthodoxies, particularly traditional goals of
“usefulness and usability,” and aims to open HCI eyes to
the possibilities of designing for open-ended playfulness (vs
tight functionality) and leveraging ambiguity (vs.
establishing clear answers) [18].
Based on these examples, we now describe three key
attributes of these systems and others like them that support
effective, critical engagement with more mainstream HCI.
Later, we examine how these strategies interact to shape
what it is possible and not possible to say through
constructing working systems in HCI.
Engaging the Lingua Franca
One attribute emerging from these examples is that they
make their critical interventions by challenging some
aspects of the normative discourse while simultaneously
engaging, and to a degree upholding, its other elements.
Even while each approach highlights the ways it upends
one or more significant aspects of standard HCI practice,
each also supports the viability of its critical move by
making sometimes implicit and sometimes explicit appeals
to HCI's established “lingua franca”—the common forms
of communication of an intellectual field. In this section,
we explore how these three examples do so by looking at
what aspects of HCI they queer and what they hold steady.
This challenge is embodied not only in the goals designed
for in these systems, but also in their evaluation. In
contradistinction to approaches that aim to establish
definitively how a system is taken up in use, the evaluation
of the Local Barometer, as well as other domestic
technologies such as the Drift Table and the Key Table,
focuses on capturing the intervention’s multiply
interpretable assessments [42]. Gaver and his team study
families who are tasked to “live with” the technology by
using ethnographic techniques. They also solicit the views
of semi-independent commentators, such as journalists and
documentary filmmakers, who, like the participants, are not
informed of any designerly intentions for the deployment.
These methodologies can read as strange to an HCI
audience more attuned to laboratory-based 'user tests'
conducted with behavioral social-science methodologies,
and are probably explicitly intended to be provocatively
different. These moves are not naive; as with Affector,
Gaver's explicit intention is to co-opt the documentary
method while recouping it for a different epistemological
orientation. For his team, the double-blind techniques are a
way of capturing the multiply interpretable, to avoid
narrowing the scope of the interpretation and to “open up”
the design space as much as possible.
With Affector, the authors were not only arguing for a new,
non-representational way of looking at affect. They also
argued for shifting the evaluation of design practice in HCI
from ascertaining whether a design is a success or failure by
predetermined metrics to narrating how the system “came
to be known or lived as a success or failure” [12, p. 12:24].
They do so by recognizing “multiple, perhaps conflicting
interpretations” [42 p. 101] of the design through iterative
and thoughtful analysis not only of the system design, but
also of the evaluation criteria. As the authors point out, one
of the “problematics” [12 p.12:19] their work revealed was
that knowledge gained through these approaches “is not
always compatible with what is recognized as knowledge
production in HCI” [12 p.12:19].
In the low-tech sensors case, the challenges to HCI
orthodoxy have to do with questions about the framing of
the relationship between researchers and participants.
Particularly in Aoki et al.'s ethnographic work into the
challenges of environmental sensing [4], on which the low-tech sensors work explicitly draws, a key issue is seen in
the way activist groups defy usual HCI understandings of
fixed, well-defined user populations who are positively
oriented to technology and to research. That study
identified activist groups’ skepticism, directed towards
scientists seeking to understand air quality, around who has
access to the data collected and who is in charge of its
interpretation. This finding led directly to Kuznetsov et
al.’s choice to design air quality sensors that can be easily
The evaluation of Affector was explicitly motivated in part
by the need to counter perception by the broader HCI
community that allowing for multiple possible
interpretations rather than a single, correct answer would
lead to an “anything goes” mentality. One stated goal was
to show that multiply-interpretable systems could still be
rigorously evaluated [42]. In an effort to make room for
their new approaches and to habilitate them to HCI
sensibilities, the authors appropriated and reframed existing
HCI methods. In their writing, the authors explicitly aimed
to use these methods coupled to a different epistemological
stance. For example, they used a diary for the two users to
track their emotions and interactions with the system, and
interpreted by activists themselves, thus removing scientists
from the equation. The design is evaluated through a
community workshop in which activists try out the sensors
and the researchers interpret the results. The narration of the
results blends issues and factors from a standard, apolitical
HCI with those that come from the political stance of the
researchers, ranging from discussions of ease of use to
reflections on the degree to which the sensors allow for end
users to feel a sense of ownership and understanding of the
data. At the same time, the evaluation is couched in terms
drawn from standard HCI methodologies, particularly the
form of the focus group. For example, the interpretations
provided are the researchers', the participants appear as
interchangeable labels, and the argument is substantiated
through empirical evidence in the form of direct quotations
from participants.
more recognizably scientific forms of interpretation [11].
Still, the risks of being re-assimilated seem attenuated in
HCI compared to AI because of the presence of an
established community of researchers schooled in critically
oriented sensibilities, which likely supports more radical
methodological transformations than would be possible if
papers were only ever reviewed by researchers committed
to mainstream methodologies. But while the risks described
by Agre in AI may be less dire in contemporary HCI, the
adoption of critical engagement with technology design as
part of HCI practice also introduces a significant new
element, in the form of users.
Constructing what ‘works’
A second attribute emerging from these examples is that
they aim to make critical interventions in HCI through
embodying them in ‘working systems’, or, to put it more
precisely, systems that can be narrated to the HCI
community as 'working.' The critical arguments gain
rhetorical force in the community by demonstrating that one
can challenge significant norms of HCI practice and still
end up with a system that is practical, usable, functional, or
otherwise clearly 'good' by standards that are at least
somewhat recognizable from mainstream HCI.
In all three examples, then, we see critical interventions
couched as challenging HCI norms, coupled to aspects that
buttress the validity of the challenge with appeal to terms
and sensibilities that resonate, or are thought to resonate,
with a more mainstream HCI community. Such strategies
are sometimes framed as concessions or compromises
required to get a work published [29]. That framing,
however, does not do justice to the creative challenge
constituted by the commitment to engage the mainstream
discourse for change. Each of these examples remixes the
mainstream and the subversive in a different way in order to
establish new grounds. By identifying this dynamic, we
wish, in part, to call it out as an explicit and challenging
research design problem worthy of sustained attention
within the critically oriented HCI community.
This 'work ethic' [3] is also a central feature of Agre's
critical technical practice in AI; his systems were intended
to give rhetorical force to his philosophical critique by
demonstrating that alternatives generated from that critique
can be embodied in a working system. In [3], Agre argues
that the notions of what it means for a system to 'work'
shape the claims it is possible (or not) to make through
them. Within HCI, particularly in the last 10-15 years,
claims to a 'working system' have been rooted nearly
universally, as the previous section might suggest, in
processes of 'evaluation', where evaluation almost
invariably involves tests with human participants. In order to
make the claim that a critical intervention 'works' as a
design principle, researchers frequently choose to engage in
'evaluation,' i.e., muster arguments that involve claims
based on empirical evidence of how participants understand
or interact with a system. Sometimes, as with the low-tech
sensors, critical interventions in HCI are embodied in
challenging assumptions or norms about use in the design
of a system, while those systems are evaluated using fairly
standard strategies such as usability tests or focus groups.
Other times, as with Affector and Local Barometer,
researchers alter the evaluation methodologies themselves
in ways they consider to better align with the critical goals
of the project.
At the same time, we need to be aware of the consequences
and costs of the specific ways in which the dance between
the mainstream and subversion takes place. Agre's
experience in AI, described in the introduction, serves as a
case in point. In [3 pp. 152-153] Agre describes why it is
difficult for critical alternatives to truly redirect a technical
field. One of these reasons is that "it is difficult to apply [a]
method [embodying a critical alternative] to any real cases
without inventing a lot of additional methods as well, since
any worthwhile system will require the application of
several interlocking methods, and use of the existing
methods may distort the novel method back toward the
traditional mechanisms and the traditional ways of talking
about them." There is thus a risk that the necessary business
of leveraging traditional HCI methodologies and
sensibilities may make it possible for critical alternatives to
be picked up, but at the same time invite re-appropriation
into the status quo. Boehner et al., for example, argue that
Gaver's methodology of 'cultural probes,' intended as a
radical reframing of HCI techniques for assessing use
contexts that undermined claims to objectivity, became
quickly reframed and "improved upon" by assimilating it to
In considering these choices in a critically oriented design
project, we note that, since the validity of the design
argument hinges on a convincing evaluation, there is likely
more room to maneuver tactically in the design of a system
than in its evaluation. Thus, the question of evaluation is a
lynchpin for understanding how critically engaged projects
gain a rhetorical foothold in HCI. At the same time,
'evaluation' must not be framed too narrowly. As we saw
with the Affector, Local Barometer, and Lo-tech Sensing
examples unpacked previously, the negotiation of "what
works" is present not only during the literal “doing” of the
evaluation, but throughout the construction and narration of
design and evaluation.
The Problematic
So far in this argument, we have identified three key
attributes of projects that embody critical challenges to
mainstream HCI in working systems. First, researchers
must necessarily engage with the lingua franca to make
their critical challenges legible and defensible in the field;
they do so by holding some aspects of mainstream
methodology steady while violating others. Second, in
order to argue for the value of their critical challenge, they
frequently engage in some form of 'evaluation' which is
based on narrating the results of empirical interactions with
human participants. Third, part of the critical project in HCI
involves queering the relationships among designers,
technology, and users as commonly imagined in the field.
Reframing relationships with users
Given that participants are a central resource for knowledge
construction in HCI, a third attribute that emerges from
these examples is that in critically oriented HCI not only the
technology itself but also relationships between designers,
technologies, and their users become material to be
critically reframed. All three systems share an interest in
challenging traditional relationships between technology
interventions and human participants by presenting
alternative relationships in the form of working systems.
Affector, for example, is grounded in a critique of how
standard affective computing frames the relationship
between human emotional experience and its mechanical
representation; it is also used to argue that richer notions of
emotional experience can be brought into design by
violating common notions of objectivity in HCI through
folding designers' experiences directly into design. Gaver et
al. use the Local Barometer to argue that forms of human
experience normally marginalized in design are legitimate
design material. Further, the system is based on reframing
the relationship between designers and users from designers
attempting to control user experience to enabling more
open-ended forms of engagement. Low-tech sensors are
designed for a politically engaged audience that implicitly
challenges traditional HCI conceptions of the discipline as
scientific and therefore politically neutral. Explicitly, the
sensor project aims to reframe relationships between
scientists and activists by putting its users in control of
sensing and interpretation and cutting the scientists out of
the loop.
A major resulting problematic, and one we have
experienced in our own work, is that the act of making
critically oriented design interventions legible to the HCI
community—i.e. tactically engaging with the “lingua
franca”—shapes the nature of interactions with
participants in ways that can undermine the critical goals
of the project. Keeping in mind that legitimate evaluation is
part of the HCI definition of “working,” it follows that
discursive norms in the HCI community will have an
impact on how critically researchers engage with the human
participants in their work (i.e. their “users”) in order to be
able to speak convincingly to the broader HCI community.
These simultaneous conversations with the twin audiences
of scholars and participants can result in complicated
double binds. These double binds arise because critically
engaged projects necessarily find themselves grappling with
a legacy of evaluation in HCI in order to negotiate what it
means for their own systems to “work” in accordance with
its discursive norms. And what makes an evaluation
'convincing' in HCI is strongly influenced by its inheritance
from the intellectual traditions of psychology, cognitive
science, and human factors. This is the complex and
somewhat contradictory discursive world in which critically
oriented researchers – often coming from humanities, arts,
or design traditions to some degree at odds with behavioral
social-science epistemologies – must maneuver to make our
arguments hold.
More generally, and in contradistinction to the case in AI
(at least at the time Agre was working), critical engagement
by HCI researchers is oriented not only to critical reflection
on the conceptual commitments of the discipline, but also to
altering how we imagine and perform relationships between
researchers and users. Sometimes this is embodied in a
commitment that not only researchers, but also end users,
participants, or publics should critically reflect on
technology and its relationship to human life [e.g. 41]. This
interest in considering and responding to the problematic
politics of designers and users deeply resonates with and is
partially inspired by the long legacy of participatory design.
But, unlike participatory design, in which these reframed
relationships are widely considered an inviolable aspect of
what it takes to create knowledge, in HCI the commitment
to developing alternative relationships to users comes under
substantial stress. The next section explains why.
In the rest of this paper, we aim to lay out concretely how
the way we construct legitimate claims in evaluation shapes
the relationships it is possible for us to have with
participants. Our goal in this analysis is to look at specific
ways that tactical engagement takes place, and the
consequences of trade-offs made in that engagement. We
will argue that our relationships with users become
contorted in particular ways through specific strategies for
legitimation inherited from mainstream HCI.
These trade-offs and their consequences are often hard to
see, particularly in published papers. This is because those
papers are necessarily framing interactions with users in
ways that will support claims to legitimacy. The contortions
and problematics are outside that frame and perhaps even to
some degree outside researchers' conscious understanding.
Next, we will explore the nature of and substantiate this
problematic by looking at problems that emerged in three
examples from our own work. We will lift the veil from
our previously published papers to discuss issues in our
relationships with participants that troubled us at the time,
but seemed to fall outside the frames of what we could
easily discuss in the published work. Later in the paper, we
will integrate these experiences with published works to
identify discursive tropes which shape engagement with
audiences and account for design and evaluation outcomes.
assess the biases of different sources, sort out opinion from
factual news reporting, and identify reporterly bias. While
instances of frame reflection in line with Schön and Rein's
[39] description did occur, they were mentioned more
rarely.
The first set of tensions arises from participants' insistence
on the value of neutral, factual news reporting. Indeed,
during controlled experiments and focus group studies that
involved explicitly mentioning the concept of framing [37],
participants often interpreted “framing” as “spin” or “bias.”
The system was seen as a tool for identifying and avoiding
such manipulations of objective news reportage. This
usage, however, runs contrary to literature on framing,
which suggests that there is no such thing as an un-framed
fact [17,13]. This distinction between biased and “straight”
news, between opinion and fact, led participants to use the
system not in a reflective mode but in an evaluative one.
Thus, the accounts that participants often gave were in
some ways at odds with frame reflection and with the
conceptual goals of the paper.
The following three research projects have been published
in some capacity in HCI venues and have involved at least
one of the authors. Some of these examples explicitly draw
from critical technical practice and critical design
methodologies, while others are not explicitly “critical” and
instead engage in subversion through other forms of
unusual HCI practices. What is held in common in these
three accounts is the presence of the three attributes
described above: engaging with HCI’s lingua franca and
with human participants, while simultaneously challenging
status quo relationships between researchers, users, and
technological systems through “working” alternatives. By
examining our own work through these internal accounts
we can investigate the ramifications of the interactions
between attributes more fully than by reading only the
published portions of this critically oriented work.
Second, we had hoped the system would help participants
identify frames at work, consider the assumptions on which
those frames are based, and look at the ramifications of
those frames for perception. However, as a prerequisite to
engaging in frame reflection, participants would have
needed to accept that framing operated in the way described
by these theories. That is, in order to engage in frame
reflection, our participants would have needed to adopt a
meta-perspective that (1) multiple perspectives on (i.e.,
framings of) reality exist and (2) those different
perspectives (framings), generally speaking, are each
equally valid. Once this meta-perspective is adopted, a tool
such as the one we designed could potentially become quite
useful. However, without such a meta-perspective, the
visualizations provided by the tool become open to
(mis)interpretations that align with a different fundamental
take on the world and the nature of (objective) reality.
Designing for Frame Reflection
One of the authors was involved in designing and
implementing an interactive visualization tool that
leverages computational linguistic analysis to present
patterns of language in political blogs and news sources. In
contrast to traditional data visualization, the design sought
not to provide an overview of what is being said but rather
how things are being said, i.e., to encourage attention to and
reflection on how issues are “framed” [17,13]. In addition
to lab studies and focus groups [37], this system was
deployed in a field study, during which the tool was used
for at least 8 weeks by regular readers of political news
coverage during the 2012 U.S. election campaign [8]. The
evaluation sought to assess the tool's capacity for
supporting frame reflection [39], the ways that users
integrated tool use with their existing reading practices, and
broader issues in how participants interpreted the
computational analysis and visualization. So as not to bias
their responses, participants were not initially told that the
primary intended purpose of the system was to foster frame reflection.
In retrospect, we experienced a researchers’ double bind: if
we told participants about framing, frame reflection, and
multiple perspectives, then any evidence of frame reflection
we saw in their accounts would be more likely attributable
to those statements than to their experiences with the tool
per se. However, when we kept such information from
them, as was the case here, then participants were left with
little scaffolding for the kind of thinking the tool was meant
to engender. This double bind did not result in the research
becoming entirely incapacitated, as some participants were
able to come to an understanding of framing, but the
process was hamstrung by our desire to evaluate the
validity of the system for supporting frame reflection.
The field study identified a variety of ways that participants
used the frame reflection tool, but particularly relevant here
is that participants often attempted to use the system as a
‘bias detector.’ Participants would examine specific
patterns of language visualized using the tool in an effort to
Challenging Self-Optimization
In this project, two of the authors were involved in the development and deployment of a reflective, critically oriented personal informatics tool [24]. This tool was designed to inspire reflection not on people's own behaviors, as personal informatics systems usually do, but instead on the infrastructures underlying the gathering and presentation of personal data and the narrow modes of engagement that traditional personal informatics systems promote. We encouraged this critical reflection by gathering users' web browsing data and displaying it using different design strategies: for example, we hoped to encourage reverse engineering of data gathering algorithms from our participants by using purposeful malfunction in the visualization. Because we were arguing for the role of critical design in challenging the status quo of self-optimization narratives in personal informatics, and because we wanted to show that our design strategies could potentially be employed toward similar ends in other personal-data contexts, our evaluation needed to convincingly illustrate the ways that the designs themselves provoked the specific sorts of reflection we intended.
In order to evaluate our system, we recruited participants for a study about reflecting on web browsing data and about unexpected approaches to visualizing personal information. While this was not outright deceptive, it did not directly call out the critical agenda in our work. During the interviews with our participants, we found ourselves in a complicated tension. Participants picked up on the strangeness of our system's design but were also trying to use the system as one would use a traditional personal informatics tool. Often, this interpretation of the tool would happen in the context of "critical" conversations where participants speculated on the norms, limitations, and assumptions in personal informatics systems. While some participants expressed skepticism toward personal informatics systems and the self-optimizing values they espouse, it is undeniable, too, that some of those same participants were optimistic about tracking and imagined using the tool to "improve" their web browsing habits.
We discovered a complex negotiation between the participants and the interviewers that evolved in response to users' conflicted interpretations of the system. We wanted to get responses from participants during the interview without revealing our own motivations, but simultaneously felt tempted to explain our critical stance in order to have an informed conversation about the limitations of personal informatics. The way this played out in practice was that the former behavior manifested during the "formal" interview, while the latter discussion occurred after the interviewer began "debriefing" the participant about the goals of the study. This ad-hoc negotiation with traditional HCI protocols allowed researchers to pick up on threads previously articulated by participants earlier in the interview while maintaining the "hands-off" ethos of not biasing our participants (at least not initially). The negotiation also demonstrates the ways that our research goals shaped the ways we engaged with the participants in our study. We saw similar tensions stemming from tactical non-disclosure manifest more dramatically in the following project.
Probing Community Values
This research was part of a larger effort to explore issues broadly related to sustainability [43] at a local farmers' market using cultural probes [9]. As part of this work, two of the authors sought to develop a methodology for doing "community probes," applying a similar approach and sensibility as cultural probes, but as a means of fostering conversations among community members rather than between community members and designers. Responses to a cultural-probe-style diary were used to inspire a series of such community probes in the form of speculative design proposals. For example, inspired by themes of stress and chaos in the diary responses, one design suggested a series of ropes and poles that automatically reconfigured themselves to allow for optimal foot traffic among the crowded market stalls. These proposals were intentionally provocative, as we sought to incite reactions from participants pertaining to what they liked and disliked about the market.
These designs were displayed on large posters at the market along with markers and post-it notes for visitors to leave comments. Adhering to the "aesthetic control" of cultural probes [18], the posters were not branded with the research lab or university where the authors worked. One of the central challenges that presented itself in the deployment of these probes was that despite the satirical nature, or in some cases technical infeasibility, of the proposals, some market visitors thought not only that they were serious proposals for changes, but that these changes were being proposed by the market administration. When several market visitors complained about the proposals to the administration, we were asked to amend the posters with a disclaimer that the designs were in no way affiliated with or endorsed by the official market administration. We later followed up with the management, asking about conducting a follow-up where they could be more involved in the design process, but we were told that the market administration was not interested in running any more "surveys" at this time. In retrospect, these conversations illustrated not only a substantial misunderstanding of our research intervention, but also an antagonistic relationship between us and the public we wanted to engage.
Under some interpretations of cultural probes, we could read the overwhelmingly negative response to the community probe, both from the market attendees and the market administration, as a legitimate result of the probe: the refusal of even infeasible technologies points to community values around sustainability that oppose optimization narratives in favor of community gathering, revealing a community-oriented sustainability practice. We are very aware, however, of the interpersonal tensions that manifested when we came to the farmers' market and attempted to provoke critical responses from the public while temporarily masking our affiliations with the local university. While we were able to gather experiences and community responses to the probes we set up, the severed relationship with the market administration points to the ways our relationships suffered as a consequence of the levels and types of information withheld.
We initially found it difficult to articulate our experiences beyond a simple run-down of cases where we had "gone wrong" in doing critically oriented work, or a general indictment of empirical values that have unconsciously permeated critically oriented HCI. Eventually, we realized that the evaluation practices researchers adopt to legitimate their research as working systems in HCI are deliberate, tactical engagements with the lingua franca, practices which in turn take a role in negotiating discursive norms in the community. For this reason, it is important not to treat our own tensions as isolated cases of research malpractice, but as actions situated in the context of the critically oriented community.
Using our own experience as a guide, we looked for traces of the same problematic in other critically oriented work. We felt connections between the tensions in our work and those documented by Gaver et al. in "Anatomy of a Failure," where the research team reflects on challenges in the evaluation of a sensor-based system, the Home Health Monitor, that was designed to be deliberately ambiguous in order to encourage interpretation and appropriation in domestic settings [20]. Similarly to our own methods, and in keeping with the studio's practices, the researchers did not tell the participants in their study about their own intentions, to avoid biasing the participants' interpretations of the system. As the researchers describe, this initial lack of transparency contributed to a lingering attitude of uncertainty and suspicion that persisted even as the researchers attempted to clarify their intentions through modifications in the system's design. In the paper, the researchers discuss the ways in which withholding information and leaving the design open to interpretation made the system almost completely inscrutable and, simultaneously, almost completely uninteresting. We found points of resonance in their analysis in the ways that non-disclosure shaped the relationship between researchers and study participants.
Engagement between design researchers and their participants, and the ways that the relationship is made ever more complex by explicitly critical agendas, is also addressed in the work of Bardzell et al. [5]. In this work, the researchers deploy critical designs meant to provoke critical reflection on gender and divisions of domestic labor. This paper recognizes their participants' struggle to participate (and be "good participants") in the research. As with the Home Health Monitor, Bardzell et al. address aspects of participant suspicion through a desire to know what "the study is really about." Part of this struggle in subject participation has to do with the idea that relationships between researchers and their participants do not start from a "blank slate." One couple interviewed by Bardzell et al. was particularly skeptical about the hidden motivations of the researchers, stemming from their previous experiences in graduate school as participants in psychological experiments. Bardzell et al. connect this struggle to the complexities of critical design, where the goal is to deal with uncomfortable topics (in their case, sex and gender) in deliberately provocative ways.
In our cases, we see traces of participant unease as to the nature of their relationship with researchers even in work that did not explicitly involve "critical design," such as the speculative sketches we installed at the farmers' market being (mis)interpreted as official proposals. This relationship becomes even more complex when the orientation of the participant toward the researcher is explicitly suspicious or even adversarial, as was the case with the activist communities in the Aoki et al. Street Sweepers fieldwork [4]. There, activists took on a generally oppositional stance toward academic researchers, whom the activists saw as people who enter into activist communities for the benefit of their own research and disappear, along with the data, when this research is complete. Again, in navigating this complex relationship that comes as a unique consequence of employing subversive tactics in HCI, whether in embodying alternatives in working systems or in methods, we see noticeable frictions that arise when researchers negotiate different aspects of HCI's lingua franca at different times in response to the specifics of their situation.
We have seen that in all the cases above, both our own and others from the literature, researchers are holding constant or appealing to some elements of more orthodox evaluation practices while challenging, bending, or exploring alternatives to others. How do we actually articulate what these practices are, to better understand what is being held constant and what is being changed? Here, we draw on values sourced from controlled scientific experimentation to highlight how HCI's historical undercurrents of cognitive psychology and computer science shape elements of the current lingua franca in evaluation. For each of these three values, or discursive tropes, we describe how they can be employed by critically oriented researchers to creatively maneuver subversive research toward conventional notions of legibility. These three values are not clearly separable and are in fact bound up with one another in intricate ways; for the purposes of analysis we tease them apart, but it should be noted that in practice they rarely separate cleanly. Our intention is to provide conceptual language with which to better understand both the nature and the ramifications of tactical engagement with discursive norms of evaluation in HCI.
Demand Characteristics
Psychologists and other social scientists have long acknowledged that the substrate they study—human beings—differs in important ways from that studied in the physical sciences. Namely, experimental subjects often reason, consciously or unconsciously, about the purpose of the experiment in which they are participating, not for nefarious ends but often out of a desire to comply. Many experimental volunteers desire to be "good subjects" and see their participation as a contribution to the furthering of science [34]. The subject, then, also has a stake in the experiment turning out well. Thus, "as far as the subject is able, [s/]he will behave in an experimental context in a manner designed [...] to validate the experimental hypothesis" [34 p. 778, emphasis original]. An experimental result may therefore not actually be due to the experimental manipulation itself but to the willing compliance of the subjects. Indeed, the subject may leverage a variety of cues—including the study description, the informed consent forms, the demeanor of the experimenter, and the study procedures themselves—to reason (again, either consciously or unconsciously) about the purpose of the experiment. This "totality of cues which convey an experimental hypothesis to the subject [are called] demand characteristics" [34 p. 779].
Each of the above case studies demonstrates different attempts to reduce demand characteristics. For example, the work applying critical design to personal informatics was described to participants as dealing with personal data. We did not tell subjects that the design was "critical" in nature or that it was intended to prompt reflection on the value commitments that underlie personal informatics. In the frame reflection study, participants were told that the tool was about understanding political language, but framing was not explicitly mentioned. In the farmers' market work, we did not want to tell participants that we were interested in sustainability so that they would not focus on a single aspect of their experiences at the market. This tactic followed from the advice of Gaver et al. [18] about ensuring that the speculative proposals had no trappings or accouterments of a university research lab, and from other examples of researchers being deliberately ambiguous in the presentation of their designs.
In each case, though, these decisions, intended rhetorically to minimize demand characteristics, ended up affecting, and in some cases limiting, the potential of our designs to engage participants in the very types of critical thinking and reflection we sought to engender. With the critical personal informatics study, participants expressed critical ideas about normative values in personal informatics in conjunction with literal interpretations of how the tool could be used to optimize behavior, because they suspected that this was our research goal. With the frame reflection system, participants were left a bit baffled as to how they might use the tool. Since frame reflection is admittedly a bit outside the normal approach to political news coverage, participants fell back on familiar forms of identifying and comparing partisan bias. At the farmers' market, participants reacted most dramatically, thinking that these were official proposals rather than reflecting on the broader themes about the market to which the speculative designs sought to draw attention.
We see maneuvers toward the reduction of demand characteristics across other critically oriented work in HCI, notably the techniques described earlier by Gaver et al., where researchers gather multiple perspectives on design work by informing neither the participants nor the independently hired film crew of their design intentions. Again, it is important to stress that while these negotiations appear 'scientific'—and Gaver even compares the practices of evaluating the Local Barometer to experimental evaluations done during his graduate work in cognitive science [33 p. 140]—the efforts to reduce demand characteristics are employed deliberately, under different (non-empirical) epistemic methodologies, to serve alternative ends. While this strategy can be rhetorically leveraged to tactically engage with HCI audiences, it can also create tension between researchers and study participants and impede researchers' ability to pursue study goals.
Representative Sampling
Traditional evaluation techniques in HCI, in their significant conceptual borrowing from empirical practices in experimental psychology and demographic sociology, often center on questions of whether the participants in a study comprise a "representative sample." Representative sampling is a statistical technique based on the idea that while it is rarely practical or even possible to get information from every person in a targeted population, the qualities of a population as a whole can be closely paralleled by a much smaller, properly selected segment. By selecting for some key variables such as gender, age, technical expertise, or socioeconomic status, and controlling for others, the representative sample becomes an effective 'stand-in' for other parts of that population. Traditionally in HCI, representative sampling emerges as a way to learn about potential users of a technology [38]. However, as HCI has accrued interdisciplinary practices into its standard methodologies (e.g., interpretive and ethnographic ways of knowing), the concern over representative sampling has itself come to represent HCI's internal tensions. As described in [33 p. 17], an ethnographer may refer to "sampling" a population to account for and interpret their experiences and practices, and may sometimes seek to make broader statements based on their interpretations without having their population "stand for" a broader statistical phenomenon in the ways that controlled demographic or experimental approaches might attempt to do.
The question of representative sampling was salient at several moments of our research that are often not accounted for in traditional research accounts. For example, when writing on the critical personal informatics work went through peer review, one issue raised by reviewers was that our sample was not representative of the population as a whole, and that the results we got might be specific to the participants we recruited. While we leveraged a rhetorical appeal to this same trope by arguing that our "sample" was representative of the audience to whom personal informatics applications are traditionally marketed (i.e., upper middle-class, tech-savvy, college educated), in retrospect, this interaction was an example of the way that critically oriented researchers negotiate and enforce HCI's evaluation norms. Representative sampling negotiations are also visible in the Affector research, where the authors point to concerns about "auto-biographical blinders" that might emerge from designers testing technology on themselves, and employed a "third party" evaluator in response to this concern [42 p. 12:22].
Stimulus-Response Causality
Experimental controls attempt to isolate various factors from one another in order to identify causal stimulus-response effects. To establish causal links, researchers often look for differences between an experimental condition and a control condition, where the two conditions are identical in every way with the exception of a single experimental manipulation: the stimulus. Thus, any differences observed between the two conditions must result directly from the stimulus. Generally, evaluations of critically oriented HCI are not situations of experimental control. Researchers rarely compare a control and an experimental group or explicitly attempt to demonstrate causal relationships between some manipulated stimulus and some independent response. That said, many of the examples discussed above, both from our work and from others', evidence an interest in showing a type of causal relationship between a stimulus (i.e., design intervention) and a response (i.e., critical thinking or reflection).
For example, based on the project's motivation, we had a vested interest in demonstrating that the frame reflection tool itself, rather than any element of our study protocol, could effectively foster reflection on the framing of political issues. Similarly, we sought to show that the critical personal informatics designs themselves, and the generalizable strategies we used to design them, and not our interview questions, provoked people to consider the value commitments on which personal tracking technologies hinge (for example, to show that deliberate malfunction provoked critical reflection on infrastructure through reverse engineering).
These evaluations were designed so that observed responses could be attributable to the system designs per se, rather than to the evaluation protocol. It is relevant to note here that the effort to evaluate towards an imaginary where prototypes exist in the world and are used by people without the designers' intervention is not unique to critical projects in HCI, and that most (if not all!) HCI deployments employ elements of speculation. However, doing so tended to limit our ability to achieve some of the very goals our studies were intended to show we had accomplished, such as the double bind that occurred in the frame reflection example. Again, we see similar attempts to engage stimulus-response causality in others' work: the low-tech sensors research argues that the design of the sensors afforded particular types of usage. For example, by explicitly not attaching instructions about how the sensors worked, and then citing how all of the participants said the sensors were easy to use, the research team rhetorically conveyed that the devices were usable by traditional HCI standards. The research team was able to appeal to this trope of stimulus-response causality while also engaging participants in reflective and interpretive practices around environmental sensing, which illustrates the methodological creativity of critically oriented researchers in negotiating different values into the evaluation of their systems.
As mentioned previously, the idea that research participants in critically oriented work can be used empirically to justify broader claims is also echoed by the very ways that we invoke the scientific language of the lingua franca to refer to participants in our writing (e.g., "P5" to refer to specific participants). This is of course not to say that researchers literally dehumanize their participants into data, but rather that engaging with empirical epistemologies can be used rhetorically to make legible, and render valuable to HCI, some of the intellectual contributions of critically oriented work.
Broadly speaking, engagement with these tropes can legitimate new research directions while also shaping the nature of system evaluation and particularly the modes of engagement between researchers and participants. As we discuss in the next sections, these ramifications are not solely limited to individual research programs, and also bear important consequences for the critically oriented community as a whole.
Engagement and Tempered Radicals
In previous sections we described how, when compared to Agre's work in AI, the challenge of simultaneously engaging two audiences emerges as a quality unique to HCI. However, negotiating relationships between different communities while challenging the status quo is a practice that also finds parallels in other fields. We have found a helpful parallel between ideas espoused in critical technical practice and a concept from organizational science, the "tempered radical" [31]. Tempered radicals are individuals who identify with and belong to a certain organization while simultaneously being committed to a cause, community, and ideology that is fundamentally different from, and at times at odds with, the dominant values of the individual's organization. Tempered radicals seek to challenge the status quo by building up legitimacy within their organization and identifying strategic opportunities to negotiate change within their institution.
In spanning these boundaries, the tempered radical adopts an "ambivalent" stance that leads to tension. However, as Meyerson and Scully describe [31], the tempered radical can also be remarkably well positioned to assess where and at which times "small wins" can be successfully enacted.
Over the course of our analysis, we have been continuously impressed at the ways in which researchers have negotiated traditional HCI rhetoric and values to make room for and legitimate new research programs. Though this work is not without risks of "loss" or re-absorption into mainstream practices (e.g., in the case of [11]), we feel that critically oriented research practice can be used, gainfully, to challenge status quo values and practices in the design and evaluation of technology.
Politics of Evaluation
An essential component of the success of the "tempered radical" is their affiliation with other like-minded members of their organization. Previously, we have described critically oriented researchers as people engaging simultaneously with two audiences: their participants and the broader HCI academic community. However, methodological decisions made by critically oriented researchers also impact a third group of people, namely other critically oriented researchers in HCI. Here we channel an argument made by Cohn et al. that the methods used in designing and evaluating systems have more general consequences for discursive and practical action, because methods enable specific discourse and forms of knowledge production [14]. In other words, just as the HCI community enforces a set of discursive norms, so do critically oriented sub-communities within HCI. In this sense, critically oriented researchers should be conscious of their role in normalizing practices for future researchers who wish to engage in related forms of subversion. We recognize that mainstream HCI publications may not always be the most appropriate venues for these discussions (although it has certainly been possible to write about tensions while couching them in the lingua franca rhetoric of success, e.g. [20]). We do believe that conversations among critically oriented researchers lead to helpful research contributions (e.g. [20, 5, 32]) and hope that our analysis can be used to articulate the trade-offs that critically oriented researchers make when engaging with HCI practices.
Conclusion
By looking back on our research and by thinking through our analysis of the values we engage and their trade-offs, it becomes possible to imagine potential future trajectories for our projects. For instance, in the frame reflection example, we could relax the reduction of demand characteristics to allow sharing information about framing that we previously withheld from participants. We could then re-negotiate the "response" portion of the stimulus-response relationship, asking not whether the system makes people reflect on framing but rather evaluating the ways in which participants conceptually engage with ideas of framing using the system. More broadly, what we hope to contribute through this analysis is not a generalizable method or framework for critically oriented evaluation or engagement with the traditional HCI lingua franca. Rather, we offer a generalizable stance or lens that critically oriented researchers can use to articulate their own tactical engagements with HCI's discursive norms. We believe that such conversations will help critically oriented researchers work with each other to acknowledge and innovate evaluation methods in HCI.
Acknowledgments
We would like to thank the NSF (Grant Nos. IIS-1110932 and IIS-1217685); the Intel Science & Technology Center for Social Computing; Melissa Mazmanian and the Doctoral Research and Writing Seminar at UC Irvine; and the anonymous reviewers for their wonderful feedback.
References
1. Agre, P. Computation and Human Experience. Cambridge University Press, 1997.
2. Agre, P., and Chapman, D. 1987. Pengi: an implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence - Volume 1 (AAAI '87). AAAI Press, 268-272.
3. Agre, P. "Toward a critical technical practice: Lessons learned in trying to reform AI." Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, Mahwah, NJ: Erlbaum (1997): 131-157.
4. Aoki, P. M., Honicky, R. J., Mainwaring, A., Myers, C., Paulos, E., Subramanian, S., & Woodruff, A. "A vehicle for research: using street sweepers to explore the landscape of environmental community action." In Proc. CHI '09.
5. Bardzell, S., Bardzell, J., Forlizzi, J., Zimmerman, J., and Antanitis, J. "Critical design and critical theory: the challenge of designing for provocation." In Proc. DIS '12.
6. Bardzell, J., Bardzell, S., and Stolterman, E. 2014. Reading critical designs: supporting reasoned interpretations of critical design. In Proc. CHI '14.
7. Bardzell, J., and Bardzell, S. 2013. What is "critical" about critical design? In Proc. CHI '13.
8. Baumer, E. P. S., Cipriani, C., Davis, M., He, G., Kang, J., Jeffrey-Wilensky, J., Lee, J., Zupnick, J., and Gay, G. K. (2014). Broadening Exposure, Questioning Opinions, and Reading Patterns with Reflext: a Computational Support for Frame Reflection. Journal of Information Technology & Politics, 11(1), 45-63.
9. Baumer, E. P. S., Halpern, M., Khovanskaya, V., & Gay, G. Probing the Market: Using Cultural Probes to Inform Design for Sustainable Food Practices at a Farmers' Market. In J. H. Choi, M. Foth & G. Hearn (Eds.), Eat, Cook, Grow: Human-Computer Interaction with Human-Food Interaction. MIT Press, 2014.
10. Blythe, M., Overbeeke, K., Monk, A., and Wright, P. (eds). Funology: From Usability to Enjoyment. Kluwer Academic Publishers, 2003.
11. Boehner, K., Vertesi, J., Sengers, P., & Dourish, P. "How HCI interprets the probes." In Proc. CHI '07.
12. Boehner, K., Sengers, P., and Warner, S. 2008. Interfaces with the ineffable: Meeting aesthetic experience on its own terms. ACM Trans. Comput.-Hum. Interact. 15, 3, Article 12 (December 2008), 29 pages.
13. Chong, D., & Druckman, J. N. (2007). Framing Theory. Annual Review of Political Science, 10(1), 103-126.
14. Cohn, M., Sim, S., and Dourish, P. 2010. Design methods as discourse on practice. In Proc. GROUP '10.
15. DiSalvo, C. Adversarial Design. The MIT Press, 2012.
16. Dourish, P. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, 2004.
17. Entman, R. M. (1993). Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication, 43(4), 51-58.
18. Gaver, W. W., Boucher, A., Pennington, S., & Walker, B. (2004). Cultural probes and the value of uncertainty. interactions, 11(5), 53-56.
19. Gaver, W. W., Beaver, J., and Benford, S. "Ambiguity as a resource for design." In Proc. CHI '03.
20. Gaver, W., Bowers, J., Kerridge, T., Boucher, A., & Jarvis, N. "Anatomy of a failure: how we knew when our design went wrong, and what we learned from it." In Proc. CHI '09.
21. Gaver, W., Boucher, A., Law, A., Pennington, S., Bowers, J., Beaver, J., Humble, J., Kerridge, T., Villar, N., and Wilkie, A. "Threshold devices: looking out from the home." In Proc. CHI '08.
22. Goffman, E. (1974). Frame Analysis. Cambridge, MA: Harvard University Press.
23. Hirsch, T. 2010. Water wars: designing a civic game about water scarcity. In Proc. DIS '10.
24. Khovanskaya, V., Baumer, E. P. S., Cosley, D., Voida, S., & Gay, G. "Everybody knows what you're doing: A critical design approach to personal informatics." In Proc. CHI '13.
25. Kuznetsov, S., Hudson, S., and Paulos, E. "A low-tech sensing system for particulate pollution." In Proc. TEI.
26. Kuznetsov, S., Davis, G. N., Paulos, E., Gross, M. D., & Cheung, J. C. 2011. Red balloon, green balloon, sensors in the sky. In Proc. UBICOMP '11.
27. Kuznetsov, S., Davis, G., Cheung, J., & Paulos, E. "Ceci n'est pas une pipe bombe: authoring urban landscapes with air quality sensors." In Proc. CHI '11.
28. Kuznetsov, S., Odom, W., Moulder, V., DiSalvo, C., Hirsch, T., Wakkary, R., & Paulos, E. HCI, politics and the city: engaging with urban grassroots movements for reflection and action. In CHI EA '11.
29. Lieberman, H. The Tyranny of Evaluation.
30. Löwgren, J. 2013. Annotated portfolios and other forms of intermediate-level knowledge. interactions 20, 1 (January 2013), 30-34.
31. Meyerson, D., and Scully, M. "Crossroads: Tempered radicalism and the politics of ambivalence and change." Organization Science 6.5 (1995): 585-600.
32. Michael, M. (2012). "What are we busy doing?" Engaging the idiot. Science, Technology & Human Values, 37(5), 528-554.
33. Olson, J., and Kellogg, W. Ways of Knowing in HCI. Springer, New York, NY, 2014.
34. Orne, M. T. (1962). On the social psychology of the psychological experiment: with particular reference to demand characteristics and their implications. American Psychologist, 17(11), 776-783.
35. Pierce, J. 2012. Undesigning technology: considering the negation of design by design. In Proc. CHI '12.
36. Pierce, J., and Paulos, E. "Counterfunctional things: exploring possibilities in designing digital limitations." In Proc. DIS '14.
37. Polletta, F., Pierski, N., Baumer, E. P. S., Celaya, C., & Gay, G. (2014). A "Peopled" Strategy of Frame Reflection. In Annual Meeting of the American Sociological Association (ASA). San Francisco.
38. Preece, J., and Maloney-Krichmar, D. "The human-computer interaction handbook." (2003): 596-620.
39. Schön, D. A., & Rein, M. (1994). Frame Reflection: Toward the Resolution of Intractable Policy Controversies. New York: Basic Books.
40. Sengers, P., Boehner, K., Mateas, M., and Gay, G. 2008. The disenchantment of affect. Personal Ubiquitous Comput. 12, 5 (June 2008), 347-358.
41. Sengers, P., Boehner, K., David, S., & Kaye, J. J. "Reflective design." In Proc. CC '05.
42. Sengers, P., and Gaver, B. "Staying open to interpretation: engaging multiple meanings in design and evaluation." In Proc. DIS '06.
43. Silberman, M., Blevis, E., Huang, E., Nardi, B. A., Nathan, L. P., Busse, D., Preist, C., and Mann, S. "What have we learned?: a SIGCHI HCI & sustainability community workshop." In CHI EA '14.
44. Suchman, L. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, 2007.
Note to Self: Stop Calling Interfaces “Natural”
Lone Koefoed Hansen & Peter Dalsgaard
Department of Information Studies and Digital Design,
Participatory Information Technology, Aarhus University
Helsingforsgade 14, Aarhus N, Denmark
[email protected], [email protected]
The term “natural” is employed to describe a wide range of novel interactive products and systems, ranging from gesture-based interaction to brain-computer interfaces, in marketing as well as in research. However, this terminology is problematic. It establishes an untenable dichotomy between forms of interaction that are natural and those that are not; it draws upon the positive connotations of the term and conflates the language of research with marketing lingo, often without a clear explanation of why novel interfaces can be considered natural; and it obscures the examination of the details of interaction that ought to be the concern of HCI researchers. We are primarily concerned with identifying the problem, but also propose two steps to remedy it: recognising that the terminology we employ in research has consequences, and unfolding and articulating in more detail the qualities of interfaces that we have hitherto labelled “natural”.
Author Keywords
Natural user interfaces; criticism; terminology.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI):
Copyright© 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
The words we use to describe technology matter. They carry with them connotations, establish expectations about the role and use of an object or an interface, and foreground particular qualities while obscuring others. In many areas of interaction design and HCI, especially the ones pertaining to technical aspects of interaction, rich terminologies are continuously being developed in order to address the intricacies of a particular field. However, in other areas, often those pertaining to the experience and perceived qualities of interfaces, the terminology is disturbingly unnuanced. One instance of this is “natural”, a catch-all term that has seen widespread use in industry as well as in academia, both as a noun, “natural user interface” (NUI), and as a verb, “natural interaction”. Viewed from the point of view of interaction design, this is rather strange, given that the experiential qualities of technologies are very often hard to describe; in order to understand and design meaningful interfaces, we need all the help we can get from the words we have at our disposal. Since we in the HCI community evaluate, design, and research relations between users and technological setups, we must be better at reminding ourselves that we also need to develop the way we talk about and characterize the nuances of this interaction.
In this paper, we continue the brief statement in [6] and argue that the use of terms such as “natural” is problematic because the terminology highlights qualities that it does not help us understand and explain adequately, while obscuring important aspects at the same time. To frame it in the spirit of the Critical Alternatives conference, we will first offer a critique and then outline alternatives.
We specifically wish to criticize the way the term natural is employed to describe user interfaces—an issue that we see not only in marketing, but also in our own work and interaction with colleagues in the field. Even though [13] attempted to dismiss the term in 2010, we nevertheless see university courses on designing for NUIs, workshops at academic conferences with NUI in the title, as well as an increasing number of publications in the ACM Digital Library that explicitly mention NUI (from 19 publications in 2007 to 85 in 2014), of which only a few (e.g. [14]) seek to discuss and clarify its use. At present, a wide array of interfaces bears the label. The most common are touch screens, gesture-based interaction, and speech recognition, but the term also encompasses stereo 3D and haptic interfaces [12]. It even extends to specific types of interactions with well-known instruments, such as the trackpad on a MacBook Air that can be adjusted to offer users a “natural scroll direction” (see Figure 1).
The above list shows that the “naturalness” of an interface is neither defined nor delimited by the underlying technologies, nor does it refer to something that exists in, or emerges from, nature, as it is clearly man-made and artificial. Rather, the common conception of a natural user interface refers to the experience of interacting with or through the interface, as Microsoft Research Labs’ description states: “People already use gesture and speech to interact with their PCs and devices; such natural ways to interact with
technologies make it easier to learn how to operate them” [11]. On Wikipedia, a NUI is described as “a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word natural is used because most computer interfaces use artificial control devices whose operation has to be learned.” In this definition we have at least three different concepts of what “natural” might mean: it is unnoticeable (“effectively invisible”) since it does not involve a physical (“artificial”) input device; it is a walk-up-and-use interface (it does not have to be learned); and becoming a super-user is easy. This definition clearly echoes Weiser’s vision of the computer for the 21st century, which would “disappear into the background” [17]; indeed, Weiser himself employed metaphors from nature, stating that using computers would become as refreshing “as taking a walk in the woods”.
Figure 1: Scroll Direction Tutorial for Apple Trackpad.
None of the above readings are intended by those who use the term, obviously, but this only points to a core problem and to our purpose with this paper: words are not only descriptive but also formative—they are important because they help constitute the way we perceive (our possibilities to act in) the world. Central to many language theories is the idea that the world is discursively constructed—we understand the world, as well as our possibilities within it, through the way we put it into words. Lakoff’s cognitive linguistics is an example of this; in [9] he and Johnson famously highlight how metaphors shape our lived experience of the world. Likewise, Foucault’s concept of “discourses” places language as an important part of power structures that change over time, enabling certain developments of society whilst making others less likely [4]. On a micro as well as a macro level, these theories all highlight that it is important to question the terminologies we use. This is done with critique, here understood both in the Frankfurt School tradition of using critical theory to investigate hidden structures, and in the etymological tradition of tracking the meaning of a particular word through time in order to understand its connections to other words.
Naturalized representations
An important part of Foucault’s discourse theory is that a discourse is practically invisible in the period in which it exists—it is often naturalized. In a Foucauldian sense, the dominant discourse of a given period (which can extend over hundreds of years) makes it hard to recognize that the way the world is framed and experienced could be in any other way than it is. Though operating on a smaller scale, French semiotician Roland Barthes’ concept of the naturalized connotation is similar; it describes when a connotation, i.e. a representation or understanding of something, is mistaken for a denotation, i.e. a fact, a given, a natural occurrence [1]. What Barthes saw was that this communication strategy is used widely in politics and advertising, and he argued that a central aspect of being a modern citizen is to be a skilled sign-reader, i.e. to learn how to spot and challenge attempts at deception.
These strategies of ‘deception’ are not always deliberate, though, as becomes clear when we turn to feminist theories and critiques, where making naturalizations visible in order to change them is a central goal. With language and discourse analyses as a “weapon,” the goal is to make visible how particular ideas of gender, minorities, etc. are part of power structures that are deeply embedded in a given culture. “We must be wary of the suggestion that anything is natural,” is how feminist philosopher Haslanger summarizes this stance, and although she specifically argues that not everything is a cultural construction, language is [7]. Language is understood as an encoder of meanings and norms, for instance when the word “man” means both a person of the male sex and the human race in general. Since terminology can be a barrier to accurate communication, the argument is that words must be chosen very carefully, allowing for greater nuance in our understanding of the world and the possibilities we have for changing it. See [15] for an excellent overview of the intersection of feminist critique and language.
In summary, these examples of discourse critique from fields other than HCI show that a constant focus on representation and language in general, and on terminology in particular, is necessary if we do not want the words we use to describe the world to prevent us from exploring its potential fully. Following this, we argue below that the term
“natural”, when referring to user interfaces, is problematic
in at least three different respects, namely in regards to the
unclear meaning of the term, the issues that it foregrounds,
and even more so the issues that it obscures.
What does “natural” mean?
The first and arguably most obvious critique to raise is that
the meaning of the word natural is unclear and imprecise.
Considering the definitions outlined earlier, it is difficult to
decipher exactly what the common denominator for natural
user interfaces is, unless we move to a very abstract level:
NUIs require no instruments, they are easy to learn, and
they disappear into the background. These are dubious
claims for most NUIs. Furthermore, “natural” is being employed in ways that contradict this very broad definition, as
exemplified by the “natural scroll direction”, which clearly
indicates that instruments do exist and that there are certain
“natural” ways of using them, although, ironically, this way
needs to be demonstrated in a tutorial.
In many NUIs, instruments are quite clearly still in play: a gesture-based interface such as the Leap Motion controller may obviate the need for a mouse, but it is still clearly present as an instrument that requires the user to maneuver the hands in a specific manner within a constrained interaction zone; likewise, speech recognition may obviate the need for a keyboard, but it still needs to be activated and deactivated in a specific manner, e.g. by picking up a phone and uttering particular activation commands, be it “OK Google” or the anthropomorphic counterpart “Siri”. Moreover, these technologies have not eliminated the need for learning how to interact, but rather introduced new interaction modes for us to learn. We need to learn how to do micro-gestures that can be sensed with a reasonable level of precision by the Leap Motion controller; we need to learn how to speak in a monotonous, robotic voice (preferably without accents) to minimize speech recognition errors. In short, we need to learn how to use our bodies as instruments in new ways to accommodate the new “natural” interfaces. Thus, neither the claim of getting rid of instruments nor the claim of not having to learn them seems to hold true. Furthermore, the broad use of the term natural user interface makes “user” a generic concept, implying that some users are more (un)natural than others—as when a Kinect makes it impossible for a user missing an arm or sitting in a wheelchair to operate a “natural” interface.
Looking beyond these murky definitions of what makes an interface natural, there also appears to be an underlying assumption that the use of instruments is unnatural, or at least that some instruments are more natural than others. These assumptions, too, are problematic. Firstly, they run counter to the study of human evolution [as discussed in e.g. 5], which indicates that the use of instruments is a crucial aspect of human nature. Secondly, the distinction between more or less natural interfaces is questionable and potentially untenable; e.g. touch and gesture interfaces are referred to as natural user interfaces, but it is unclear how a four-finger swipe is a more natural way of switching to a new desktop space than moving a mouse pointer to a hotspot corner, or how it is more natural to wave your arms in front of a camera mounted on a TV screen in order to calibrate camera tracking for controlling an in-game cartoon avatar than to use a game controller. Thirdly, they ignore the body of work in HCI and beyond that addresses how the use of instruments becomes internalized into human action [e.g. 3]. In summary, the definition of natural in the realm of interaction design and HCI is unclear and imprecise at best, and self-contradictory at worst.
What does “natural” foreground, what does it obscure, and why does it matter?
Applying discourse analysis to this field, it is clear that the term “natural” is not neutral; rather, it bears with it primarily positive and desirable connotations. Weiser’s walk in the woods is “refreshing”, the Leap Motion “senses how you naturally move your hands [so that you can do] things you never dreamed possible” [10], and NUIs are “more intuitive, engaging and captivating” [12:1]. From a pragmatic point of view, the desire to label interfaces as natural is understandable. Owing to the imprecise definition of the term, which easily accommodates new technologies as they emerge, this is a non-committal way of imbuing a novel and unfamiliar product with positive associations. Especially in a marketing discourse, “natural” handily employs the naturalization trick that Barthes described, marking objects as inescapably good; who wants something unnatural if you can get the natural? However, this approach is unsuitable in a research discourse, where—following Barthes—a central aspect of being a reflective researcher is to be a highly skilled reader and critic of signs, given that words shape the way we give agency to the world.
By using as biased and imprecise a term as “natural”, there is a risk of conflating these two very different discourses in HCI, which makes it particularly pertinent for researchers to strive for terminological precision—a prerequisite for the development of a research field. Precision prompts designers to go beyond the surface of a phenomenon, examining and articulating in greater detail how and why it unfolds as it does. Further, precision makes arguments presented in a research community contestable and opens them to joint elaboration, discussion, and development—an essential part of a research field.
While this may appear to be terminological nit-picking, we
argue that the words we use matter, even if we are more
interested in developing technologies than in discussing
semantics. Words have real consequences for how we understand the interfaces we make; for how well we can convey what they do and why to the others; and for how we as
a field of both researchers and practitioners can work together to further develop them. The problem with using a
term such as natural is that it emphasizes purportedly positive qualities of an interface, but it does not help us understand if, how, and why the interface works. If we are interested in anything beyond promoting a novel interface—and
as HCI researchers and practitioners, we ought to be—then
we need a better vocabulary to understand, discuss, criticise, and develop these interfaces.
Changing a discourse is difficult, but that should not keep
us from proposing alternative courses of action, especially
in a field that transforms as rapidly as HCI. We consider
two intertwined steps towards addressing the over-reliance
on the term “natural”. The first step is to raise awareness
that this is in fact problematic. We have laid out a series of
criticisms here, from theoretically grounded arguments to
more practically oriented ones. While we consider it highly
relevant to discuss this topic in the context of the Critical
Alternatives conference, we need to bring these criticisms
to a wider HCI audience, including venues that might be
less amenable to them, if we are to have any hope of having
an impact. For this to be constructive, it must be combined
with the second step, namely to develop a more refined
vocabulary for articulating and addressing what is currently
covered by the umbrella term “natural”.
The types of interfaces currently labelled natural are being developed and adopted at a rapid pace, and we also need to develop and adapt a language for interfaces that can match their complexity. We propose that this can be inspired and informed by recent contributions seeking to unfold the notion of “intuitive interfaces”. As with natural, intuitive is a term laden with positive connotations and attached to many an interface, but also one that in itself does not say much about the qualities of an interface and its use. However, recent contributions, including [2;8;16], have examined in more depth what intuitive interaction might mean, and in turn what can make an interface intuitive. At an overarching level, [2] examines how prior cultural, sensori-motor, and acquired knowledge influence what is perceived as intuitive; in an attempt to develop tools for examining and refining specific interfaces, [16] has developed a questionnaire for studying if and how users of a given interface conceive of it as being intuitive; and [8] examines what intuitive interaction means in installations in public spaces, drawing among others on the work of [2]. These contributions highlight complementary approaches to developing a more extensive vocabulary of interaction in ways that are theoretically well founded as well as of value to HCI practitioners, since a richer understanding of the notion of intuitive interaction can in turn help developers create better products. The study of the so-called natural user interfaces is in need of similar contributions.
Academic fields outside of HCI have long-standing traditions of discussing the terminology applied within the field itself. Barthes is famous for pointing out how many of the things we say and do are ideologies in camouflage, although often unintentionally so. This is akin to Luhmann’s notion of the blind spot, as well as Marxist and feminist cultural critiques, all focusing on how the way we talk about things shapes our perception of and actions in the world. In this paper, we pursue a similar line of critique in order to examine the notion of “natural” user interfaces. We have argued that the term “natural” does not at all suffice for articulating and understanding these types of interfaces; the use of the term is unnuanced and marred by imprecision and contradictions. Our objective is not to critique the development of novel interfaces; on the contrary, we are advocating the development of a discourse that is rich enough to address the intricacies of these interfaces.
References
1. Barthes, R. 1993 [1957]. Mythologies. Vintage, London.
2. Blackler, A.L. & Hurtienne, J. 2007. Towards a unified view of intuitive interaction: definitions, models and tools across the world. MMI-Interaktiv, 13, pp. 36-54.
3. Bødker, S. 1991. Through the Interface: A Human Activity Approach to User Interface Design. Lawrence Erlbaum, Hillsdale, NJ.
4. Foucault, M. 2001 [1966]. The Order of Things: An Archaeology of the Human Sciences. Routledge.
5. Gibson, K.R. & Ingold, T. (eds.) 1993. Tools, Language, and Cognition in Human Evolution. Cambridge University Press.
6. Hansen, L.K. 2014. What's in a word?: Why natural isn't objectively better. interactions, 21, 1, pp. 22-23.
7. Haslanger, S. 2000. Feminism in metaphysics: Negotiating the natural. In Hornsby & Fricker (eds.), The Cambridge Companion to Feminism in Philosophy. Cambridge University Press.
8. Hespanhol, L. & Tomitsch, M. 2015. Strategies for Intuitive Interaction in Public Urban Spaces. Interacting with Computers.
9. Lakoff, G. and Johnson, M. 1980. Metaphors We Live By. University of Chicago Press, Chicago.
10. Leap Motion. “Leap Motion, Product”, accessed 2015-02-16.
11. Microsoft Research. “NUI: Natural User Interface”, focus/nui/default.aspx, accessed 2015-02-16.
12. Murphy, S. 2013. White Paper: Design Considerations for a Natural User Interface (NUI). Texas Instruments.
13. Norman, D. 2010. Natural user interfaces are not natural. interactions, 17, 3.
14. O'Hara, K., Harper, R., Mentis, H., Sellen, A. and Taylor, A. 2013. On the naturalness of touchless: Putting the “interaction” back into NUI. ACM Trans. Comput.-Hum. Interact., 20, 1, Article 5.
15. Saul, J. 2012. Feminist Philosophy of Language. In Zalta, E.N. (ed.), The Stanford Encyclopedia of Philosophy (Winter 2012 Edition).
16. Ullrich, D. & Diefenbach, S. 2010. INTUI. Exploring the Facets of Intuitive Interaction. In J. Ziegler & A. Schmidt (eds.), Mensch & Computer 2010 (pp. 251-260). Oldenbourg, München.
17. Weiser, M. 1991. The computer for the 21st century. Scientific American, Sept. 1991, pp. 94-104.
Gaza Everywhere: exploring the applicability
of a rhetorical lens in HCI
Omar Sosa-Tzec
Indiana University
901 E. 10th Street
Bloomington, IN 47408
[email protected]
Erik Stolterman
Indiana University
901 E. 10th Street
Bloomington, IN 47408
[email protected]
Martin A. Siegel
Indiana University
901 E. 10th Street
Bloomington, IN 47408
[email protected]
An enthymeme is an argument that provides the claim or
conclusion to the audience, but leaves one premise unstated; it is thus also known as a truncated syllogism. It then becomes the task of the audience to fill in that premise [7, 13].
In the case of a visual enthymeme, the observer detects or
interprets a claim or conclusion from the observation of the
composition, whose elements serve as the stated premise,
and fills in the unstated premise through reasoning derived
from such observation [13].
By examining application software as a type of rhetorical
artifact, it is possible to highlight its social, ethical and
moral implications. In this paper, we explore one possibility
for such a lens: application software functioning as a visual
enthymeme. To explore the applicability of that concept in
HCI, we analyze one web application as a first step. In our
analysis, we observe that interaction and usability are two
features that support an application in functioning as a visual enthymeme. Also, online sharing could help the user
take the role of the arguer. Our analysis allows us to outline
the elements of a user-centric persuasive experience and
shows promise for further explorations regarding the applicability of rhetoric in HCI.
In this paper we argue that rhetoric is an appropriate and
useful lens in HCI. To explore the applicability of this lens,
we focused on one possibility: application software functioning as a visual enthymeme. In particular, we focus on
applications with a GUI. As a first step of this exploration,
we chose to analyze a web application known as “Gaza
Everywhere” which compares the Strip of Gaza with other
territories in the world [16]. During news coverage regarding the Gaza Strip in the summer of 2014, relevant tweets,
and an online publication from The Independent [3] suggested that Gaza Everywhere could function as a visual
enthymeme. The application’s interface contains demographic statistics about the Gaza Strip. It also contains a
Google Maps widget that overlaps the Gaza Strip with any
territory selected by the user. Moreover, it allows the user
to adjust the position and orientation of the Gaza Strip in
the map. Tweets about the application are also shown in the
interface. Through the analysis of the application’s interface, interaction, and usage, we observe that Gaza Everywhere illustrates a case of visual enthymeme. Also, we notice that interaction and usability are two features through
which the user can fill in the unstated premise. Moreover,
online sharing provides a means for the user to take the role
of someone making an argument.
Author Keywords
Visual Enthymeme; Rhetoric; Persuasive Technology; Experience Design; Design Theory
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI):
User Interfaces – Graphical User Interfaces (GUI), Screen
Design, Theory and Methods
Rhetoric is a critical human ability that permeates all forms
of human communication [7,10]. Rhetoric can mobilize
people; it can influence their perception of reality and truth
[7]. Many people are exposed to symbolic, persuasive, and
visual artifacts every day [8,12,13]. Some of these artifacts
are intended to make a point, increasing awareness about a
particular situation, and thus change the beliefs, attitudes or
values of people. For example, the 1996 advertisement of
United Colors of Benetton showed three hearts, labeled
“White, Black, Yellow,” intended to point out that people
are equal regardless of their skin color.1 This is an example
of a visual enthymeme, a form of argument aimed at persuading an audience, rather than finding truth. These visual
enthymemes can be found in printed advertisements, televised political campaigns, and documentary photographs.
This paper is structured as follows. First, we present our
case study. Second, we discuss some observations derived
from the analysis of Gaza Everywhere. Third, we emphasize possible ways in which a rhetorical lens could be advantageous in the analysis and design of software.
Object of analysis
The interface of Gaza Everywhere (GE) is mostly composed of a Google Maps widget. It also includes a Twitter
widget and some demographic information about the Gaza
Strip. At the beginning of the interaction, the map shows a
translucent red shape of the Gaza Strip overlaying its territory (Fig. 1). As a result of a search, the map places the
translucent shape on the territory of interest. At any time
during the interaction, the user can zoom and drag the map,
as well as drag and rotate the translucent shape. Also, the
software allows the user to share the current state of the
map online or take a screenshot of it. The user can access
the source code through GitHub or embed the application in
a webpage.
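The interaction model just described, a map view plus a draggable, rotatable translucent shape whose current state can be shared online, can be sketched as a small state object. All names and the URL scheme below are hypothetical illustrations, not GE's actual implementation:

```python
from dataclasses import dataclass, replace
from urllib.parse import urlencode

@dataclass(frozen=True)
class OverlayState:
    """Hypothetical model of GE's shareable state: the map view plus
    the position and rotation of the translucent Gaza shape."""
    center_lat: float
    center_lng: float
    zoom: int
    shape_lat: float
    shape_lng: float
    shape_rotation_deg: float

    def rotated(self, delta_deg: float) -> "OverlayState":
        # Rotating the shape leaves all other view parameters intact.
        return replace(
            self, shape_rotation_deg=(self.shape_rotation_deg + delta_deg) % 360
        )

    def share_url(self, base: str = "https://example.org/gaza-everywhere") -> str:
        # "Sharing the current state of the map" amounts to serializing
        # the state into URL parameters another user can open.
        params = urlencode({
            "lat": self.center_lat, "lng": self.center_lng, "z": self.zoom,
            "slat": self.shape_lat, "slng": self.shape_lng,
            "rot": self.shape_rotation_deg,
        })
        return f"{base}?{params}"

# Initial state: the shape overlays Gaza itself (approximate coordinates).
initial = OverlayState(31.4, 34.4, 10, 31.4, 34.4, 0.0)
# Dragging the shape to another territory and rotating it yields a new,
# shareable state without mutating the original.
moved = replace(initial, shape_lat=56.2, shape_lng=10.2).rotated(45.0)
```

The design choice worth noting is that sharing reduces to serializing an immutable state, which is what lets another user reproduce, and then re-interpret, exactly the comparison the first user composed.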
According to A. Nassri, author of GE, the web application
is intended to “help visualize Gaza’s relative size in comparison with other territories in the world.” Most of the
tweets shown in the interface’s Twitter widget align with
this description. Yet, some of them express a political opinion, including a note from i100, an online publication from
The Independent [3]. In that note, images derived from the
interaction with GE are used to support the note’s claim,
“Gaza Everywhere app highlights true scale of humanitarian crisis”. The web application is embedded at the end of
the note, so the user can explore the claim herself.
Figure 1. Initial state of GE. Screenshot from the WWW.
Unlike other traditional forms of visual enthymeme, GE allows the user to play with the composition at will.
That means altering the GUI as a result of interaction with
the application. Traditional forms of visual enthymeme tend
to be static [12,13]. However, the visual comparisons in GE
can be done at any time and place as long as the user has
access to the Internet and a device with a web browser.
Those visual comparisons could elicit experiential knowledge, particularly about the territories in which the user has lived.
In that regard, GE could elicit sensations and emotions as a
consequence of interaction, supporting not only the user
experience but also the detected or interpreted claim. Thus,
interaction and experiential knowledge help the user to fill
in the unstated premise in order to support the detected or
interpreted claim. Nevertheless, the user’s awareness is
affected after each interaction and by other sources of information. Consequently, the user might revisit the detected
or interpreted claim, which makes the persuasive effect of
GE evolve with the user.
Analysis and Observations
The impact of GE relies on its GUI. The physicality of the
interface depends on the device being used to interact with
the web application (e.g., smartphone). However, the device
has no major effect on the visual information or the interaction (i.e., search, interaction with the map, and sharing online). GE does not employ audio in its composition. However, the visual information conveyed in the map is inaccurate, since Google employs a variation of the Mercator projection. In this sense, GE’s intent could be regarded as illustrative rather than affirmative; it is not oriented to providing a territorial truth.
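The scale of this inaccuracy can be made concrete with a back-of-the-envelope calculation. The latitudes below are illustrative assumptions, not figures from GE or from this paper: in a Mercator projection, on-screen areas are inflated by roughly 1/cos²(latitude), so a shape traced at Gaza's latitude and dragged, without rescaling, to a northern European latitude visually overstates the ground area it covers.

```python
import math

def mercator_area_scale(lat_deg: float) -> float:
    """Relative area inflation of a Mercator projection at a given
    latitude: on-screen areas are stretched by 1/cos(lat)^2 compared
    with the equator."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Illustrative latitudes (assumed, approximate):
gaza_lat = 31.4    # Gaza Strip
aarhus_lat = 56.2  # Aarhus, Denmark

# A shape drawn at Gaza's latitude and dragged, unrescaled, to
# Aarhus's latitude misstates its apparent ground area by this factor:
distortion = mercator_area_scale(aarhus_lat) / mercator_area_scale(gaza_lat)
print(f"apparent-area inflation: {distortion:.2f}x")
```

With these assumed latitudes the inflation factor comes out somewhere above 2x, which illustrates why the comparison is best read as illustrative rather than affirmative.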
The user might be familiar with Google Maps, which makes
the interaction with GE simple. Looking for a particular
territory and adjusting the map is a task that should not represent a major usability issue. GE’s usability supports filling in the unstated premise. Furthermore, it is relatively easy for the user to share an image or a link of the current map. Another user who observes that map follows the same dynamics as the first: detecting or interpreting a claim and validating it based on her awareness about the Gaza Strip and the territory shown on the map. In a similar fashion, access to the
source code allows a knowledgeable user to modify GE in
order to reflect her vision of the web application and thus
affect other users. These characteristics give GE a versatility that is not commonly seen in traditional forms of visual
Regardless of the stated intent of GE, each user has a different awareness regarding the conflict in the Gaza Strip,
which affects the perception of the application’s intent. For
example, some users might see GE as a means to emphasize
political issues. The note from The Independent’s i100 is an
exemplar of this situation. Gaza Everywhere gives room for
several interpretations that are related to that awareness.
The interpretations correspond to claims that can only be supported through the combination of reasons derived from
the user’s awareness and reasons provided through the interface’s information. These characteristics allow GE to be
classified as a visual enthymeme [13].
Regarding the interface, the detected or interpreted claim is
supported by the overlap shown by the map in juxtaposition
with the demographic information shown on the interface.
Yet, such a claim is only accessible after interaction with
the web application. The initial state of GE represents a
visual identity wherein Gaza is equal to Gaza since the
translucent shape and the territory shown on the map are the
same, including their dimension and orientation. Nonetheless, the shape becomes a metaphorical visual unit of measurement after interaction with the web application. Later,
the capability of manipulating the shape helps the user in-
The case of Gaza Everywhere exemplifies how application software can work as a visual enthymeme. An enthymeme, either linguistic or visual, is rooted in probability; it relies on the beliefs, presuppositions, and experiences of the audience [7]. Consequently, software functioning as a visual enthymeme not only relies on its design in order to be persuasive; it also relies on the user's beliefs, presuppositions, and experiences in order to leverage its persuasive character. In that sense, the user is placed as the central element of a persuasive experience. Based on our analysis, placing the user in that position causes the persuasive character of the application software to be conditioned by: 1) the user's computer and information literacy; 2) the user's beliefs, presuppositions, and experiences, especially those related to the situation for which the application was intended; and 3) the effect of the particular characteristics found in the context of use, including the influence of the discourse [10] about the situation for which the application software was intended and the rhetorical exchange with other people (Fig. 2).

Moreover, none of these conditions are static. After each interaction with the application software, the user gains knowledge and a certain rhetorical agency to influence the discourse related to the situation for which the software was intended. That influence occurs at least at a personal level. In terms of persuasive technology, this means inheriting an autogenous persuasive intent [9]. However, GE illustrates that the technical features of an application (e.g., online sharing), as well as its usability, could expand the scope of that influence. One user could persuade other users to interact with the application. In terms of persuasive technology, the user of software functioning as a visual enthymeme could become an element of the exogenous persuasive intents [9] for other potential users. Moreover, the impact of such an influence could affect the designer and hence future designs. As we learned from GE, software working as a visual enthymeme outlines a user-centric persuasive experience, a set of complex, evolving relations of people, technology, and discourse (Fig. 2).

Figure 2. Elements of the user experience when interacting with software functioning as a visual enthymeme.

Throughout this paper, we have illustrated how application software can function as a visual enthymeme. The case of GE is pertinent to HCI because it illustrates that design is inherently persuasive [19] and functions as an argument [5,6], regardless of the designer's intent. Certainly, the case of GE provides a glimpse of the implications of application software functioning as a visual enthymeme. Yet, it doesn't completely illustrate the generative applicability of the concept. For instance, this analysis does not address the use of topoi, the lines of reasoning for inventing persuasive arguments [7], nor rhetorical figures for the composition of software functioning as a visual enthymeme. Thus, we point out the opportunity to explore the interrelation and applicability of rhetoric to the design of application software and the so-called user experience. Below, we highlight some implications of adopting a rhetorical lens in HCI and call for further research efforts in this direction.

A rhetorical lens could help analyze or generate a design of application software. Efforts related to analyzing a design would contribute to the existing body of knowledge in HCI focused on the persuasive and phenomenological aspects of technology [e.g., 2,9,19 and Fallman in 24]. Such an analysis could reveal aspects that help the designer understand the dynamics and intricacies of a persuasive user experience (e.g., Fig. 2). A rhetorical lens could aid the designer in criticizing and reflecting upon the design of application software, especially in the comprehension of denotations and connotations conveyed through the design, and thus the meaning that emerges during the user experience [6,15,22].

Efforts related to generating a design can contribute to many aspects of HCI. Here, we will emphasize how it contributes to the existing body of knowledge in HCI focused on design pedagogy [11,23]. A rhetorical lens would allow the designer to understand the design of application software as a composition intended to address a (rhetorical) situation that needs to be solved or improved [6,7,14,20]. Additionally, it provides vocabulary for the designer to characterize the participants involved in the user experience and their interrelations [e.g., 1,6,18 and Christensen & Hasle in 24]. This might support reflection-in-action [21] during the design process due to the designer's awareness of the current rhetorical situation and the people involved, as well as the possible impact of the design approach used in composing the solution [e.g., 14,20,22].

Considering the design of application software as a composition encourages the exploration of tropes and schemes, the so-called figures of speech or rhetorical figures, as a generative tool for the designer. In graphic design, such an exploration has helped teach designers to conceptualize before execution, and to be aware of the social, moral and political dimensions of design [8]. An understanding of rhetorical figures could help the designer not only better comprehend the concepts of metaphor and metonymy, probably the most used rhetorical figures in HCI, but also notice and conceptualize interfaces and interactions beyond these two figures.

Besides understanding the rhetoricity of HCI, rhetorical knowledge and experience could expand the designer's set of competencies [17]. The designer could apply rhetorical knowledge in order to convey a design argument before peers, the client, and other stakeholders during the design process. Consequently, the designer could strengthen her ability to reflect-on-action [21]. Rhetoric entails persuasion, either in the form of communication that brings about change in people [4] or in identification with common ideas, attitudes or material possessions [10]. This could help the designer achieve an empathetic state of alignment not only with the client but also with the user. Knowing about rhetoric reinforces the following goals: that the agency of all the people involved in the process should be managed, trust created, and a common understanding achieved [17].

In this paper, we took a first step exploring one way through which a rhetorical lens can be applied to the analysis of software. We analyzed Gaza Everywhere, which illustrates the possibility of application software functioning as a visual enthymeme. This analysis allowed us to outline the set of complex, evolving relations involved in such a case, introducing a schema of user-centric persuasive experience. We consider that a rhetorical lens could bring awareness of the social, moral and ethical implications not only of a design but also of the user experience. Providing such a lens to designers could help them comprehend, or become sensitive to, the pervasiveness of everyday rhetoric, intentional and unintentional. The user experience could be interpreted as a rhetorical interface between one person, other people, and artifacts. Through this interface, discourses are shaped: discourses that go back to the designers and influence the way they identify, and with whom.

Acknowledgments
This work is supported in part by the National Science Foundation (NSF) Grant Award no. 1115532. Opinions expressed are those of the authors and do not necessarily reflect the views of the entire research team or the NSF.

References
1. Arvola, M. The Mediated Action Sheets: A Framework for the Fuzzy Front-End of Interaction and Service Design. Proc. European Academy of Design Conference.
2. Bardzell, J. Interaction Criticism: An Introduction to the Practice. Interacting with Computers 23, 6 (2011), 604-621. Elsevier.
3. Barlett, E. Gaza Everywhere app highlights true scale of humanitarian crisis. i100 from The Independent (August 14, 2014). Shortened URL:
4. Bostrom, R.N. Persuasion. Prentice-Hall, 1983.
5. Buchanan, R. Declaration by Design: Rhetoric, Argument, and Demonstration in Design Practice. Design Issues 2, 1 (1985), 4-22.
6. Carnegie, T.A.M. Interface as Exordium: The Rhetoric of Interactivity. Computers and Composition 26, 3 (2009), 164-173. Elsevier.
7. Covino, W.A., & Jolliffe, D.A. Rhetoric: Concepts, Definitions, Boundaries. Allyn and Bacon, 1995.
8. Ehses, H., & Lupton, E. Rhetorical Handbook: An Illustrated Manual for Graphic Designers. Design Papers 5 (1988), 1-39.
9. Fogg, B.J. Persuasive Computers: Perspectives and Research Directions. In Proc. CHI '98. ACM Press/Addison-Wesley (1998), 225-232.
10. Foss, S.K., Foss, K.A., & Trapp, R. Contemporary Perspectives on Rhetoric. 2nd Edition. Waveland Press.
11. Getto, G., Potts, L., & Salvo, M.J. Teaching UX: Designing Programs to Train the Next Generation of UX Experts. SIGDOC '13. ACM Press (2013), 65-69.
12. Handa, C. Visual Rhetoric in a Digital World. Bedford/St. Martin's Press, 2004.
13. Hill, C.A., & Helmers, M. (Eds.) Defining Visual Rhetorics. Lawrence Erlbaum Associates, 2004.
14. Hullman, J., & Diakopoulos, N. Visualization Rhetoric: Framing Effects in Narrative Visualization. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2231-2240.
15. Kannabiran, G., & Graves Petersen, M. Politics at the Interface: A Foucauldian Power Analysis. In Proc. NordiCHI 2010. ACM Press (2010), 695-698.
16. Nassri, A. Gaza Everywhere (2014, August 8). URL:
17. Nelson, H.G., & Stolterman, E. The Design Way. MIT Press, 2012.
18. Price, J. A Rhetoric of Objects. SIGDOC '01. ACM Press (2001), 147-151.
19. Redström, J. Persuasive Design: Fringes and Foundations. In W. IJsselsteijn et al. (Eds.) PERSUASIVE 2006, LNCS 3962 (2006), 112-122. Springer.
20. Rosinski, P., & Squire, M. Strange Bedfellows: Human-Computer Interaction, Interface Design, and Composition Pedagogy. Computers and Composition 26, 3 (2009), 149-163. Elsevier.
21. Schön, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, 1984.
22. Sosa-Tzec, O., & Siegel, M.A. Rhetorical Evaluation of User Interfaces. NordiCHI '14. ACM Press (2014), 175-178.
23. Torning, K., & Oinas-Kukkonen, H. Persuasive System Design: State of the Art and Future Directions. Persuasive '09. ACM Press (2009).
24. de Kort, Y., IJsselsteijn, W., Midden, C., Eggen, B., & Fogg, B.J. (Eds.) Persuasive Technology. Second International Conference on Persuasive Technology, PERSUASIVE 2007, Revised Selected Papers. Springer (2007).
Human-computer interaction as science
Stuart Reeves
Mixed Reality Lab
School of Computer Science
University of Nottingham, UK
[email protected]
between Newell, Card, Carroll and Campbell around the
deployment of cognitive psychology for designing user
interfaces, and the prospects of developing “a science of
human-computer interaction” [43, 12, 44]. Since then there
have been sporadic expressions—a tendency if you will—
towards cultivating some element of ‘scientific
disciplinarity’ for HCI. This may be seen in the form of
panels and workshops on matters like scientific replication
[58, 59] or interaction science [32] that have surfaced at the
ACM CHI conference in the last few years. Most recently
Liu et al. [36] and Kostakos [35] have argued that HCI is a
poor scientific discipline when measured against other bone
fide examples (such as those of the natural sciences or
disciplines with ‘science’ in their title). In this analysis HCI
is found devoid of central motor themes that are taken as a
signature of thoroughbred scientific disciplines, thus
representing a presumed failure of the HCI programme.
Echoing the calls of Greenberg and Thimbleby in 1992
[27], work is thus required to make HCI “more scientific”
The human-computer interaction (HCI) has had a long and
troublesome relationship to the role of ‘science’. HCI’s
status as an academic object in terms of coherence and
adequacy is often in question—leading to desires for
establishing a true scientific discipline. In this paper I
explore formative cognitive science influences on HCI,
through the impact of early work on the design of input
devices. The paper discusses a core idea that I argue has
animated much HCI research since: the notion of scientific
design spaces. In evaluating this concept, I disassemble the
broader ‘picture of science’ in HCI and its role in
constructing a disciplinary order for the increasingly
diverse and overlapping research communities that
contribute in some way to what we call ‘HCI’. In
concluding I explore notions of rigour and debates around
how we might reassess HCI’s disciplinarity.
Author Keywords
Science; disciplinarity; cognitive science.
ACM Classification Keywords
In exploring these complex debates, this paper addresses a
range of cognate concerns in HCI: ‘science’, ‘disciplinarity’
and ‘design’. The argument I present in this paper contends
that the status anxiety over HCI as an academic object has
its origins in the early formulation of HCI’s research
practice. This practice blended the application of cognitivist
orientations to scientific reasoning with Simon’s view of
design [56], in order to establish a particular research
idea—what I refer to as the scientific design space. This
guides both what human-computer interactions are, and
how we investigate them. This idea, I argue, has configured
how many HCI researchers relate to interactive artefacts in
their work practices and thus shaping HCI’s disciplinary
circumstances and discussions.
H.5.m. Information interfaces and presentation (e.g., HCI):
Human-computer interaction (HCI) is represented by a
large and growing research community concerned with the
study and design of interactive technologies. It rapidly
emerged from the research labs of the 1970s, matching the
lifetime of the Århus Decennial conferences. Perhaps
characteristic of all ‘new’ research communities, anxieties
have been expressed over its status as an academic object
from the very beginning (indeed, as an early career
researcher I myself have often felt a similar confusion about
what HCI is as an academic object). Many of these
anxieties centre around disciplinary shape and how that
shape relates to ‘science’—the topic of this paper.
It is not the intention of this paper to suggest that cognitivist
scientific reasoning is the only orientation to reasoning
present in HCI research. It is also not within the scope of
this paper to fully map out the landscape of different forms
of reasoning in HCI (e.g., the ‘designerly’), nor to evaluate
the claims of different approaches compared with their
achievements. Neither is it the intention to imply that
disciplinary anxiety is solvable. Instead, this paper focusses
upon cognitivist scientific reasoning and its expression
through the scientific design space, arguing that this has
been an important and persistent force in the broader logics
of significant portions of HCI’s programme. This is despite
Discussion about the role of science both in and of HCI can
be traced to various formative exchanges in the early 1980s
Copyright© 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
repeated suggestions of transformative intellectual changes
that HCI as a whole may have gone through.
Law?), and no obvious shared commitment to a certain set
of problems or ways of approaching them [36].
This paper firstly unpacks the debates that I broadly
subsume into questions over the status of HCI as an
academic object and its relationship to ‘science’; through
this I detail two important anxieties that have animated this
debate. I then relate these to a core concept—the scientific
design space—that I argue emerged in HCI’s formative
years at the confluence of cognitive science and interface
engineering challenges. In this I revisit HCI’s relationship
to cognitive science, and the introduction of a particular
‘picture of science’, tracking subsequent influence of this
on attempts at crafting HCI’s disciplinary architecture.
Finally, the paper’s discussion returns to evaluate these
Debate about this incoherence problem has two poles of
discussion: descriptions of ‘how things are’ (which often
include explanations), and prescriptions of ‘how things
should be’. A key way of describing present incoherence to
HCI is the accumulation of diversity in its approaches over
time. As Rogers states, “HCI keeps recasting its net ever
wider, which has the effect of subsuming [other splinter
fields such as CSCW and ubicomp]” [50 p. 3]. As a matter
of this accumulation, HCI researchers have tended to ‘bolt
on’ new approaches repeatedly. This accumulation has
involved accommodating new epistemological perspectives
and the disciplinary objects that come with them. This is
perhaps best illustrated by theory and theorising in HCI,
which has been accorded importance given its association
as a signature of established disciplines. Beginning with a
particular cohort of theories drawn from cognitive science
and psychology, ‘theory’ in HCI has increasingly been
adopted from diverse origin disciplines [50, 29]. This has
introduced a great diversity of technical senses in which
‘theory’ is meant (see [50 pp. 16-17] for a descriptive
account). Question marks are raised over what qualifies,
what a theory is useful for, how they are to be organised,
what the relationships between them actually are, how new
theories should (or can) be developed, amalgamated,
divided, or simply decided as ‘good’ or ‘bad’, relevant or
out of scope. As Bederson and Shneiderman admit prior to
an attempt to define theory in information visualisation and
HCI, “a brief investigation of the language of theories
reveals its confusion and inconsistent application across
disciplines, countries, and cultures” [2].
An external view of HCI’s disciplinary status could assume
that it is secure. For instance, in CHI 2007’s opening
address, Stuart Feldman (then president of the ACM)
described the HCI research as “absolutely adherent to the
classic scientific method” [1]. But the picture from within
HCI seems radically different. By reviewing broad
discussions around HCI’s disciplinarity in this section, I
intend to sketch a background for subsequently addressing
the specifics of ‘science’ in HCI.
Questions over HCI’s disciplinarity emerged early in its
development. In 1987 ergonomist and HCI pioneer Brian
Shackel asked during his INTERACT conference keynote
whether “HCI was a discipline, or merely a meeting
between other disciplines” [15]; a couple of years later,
Long, Dowell, Carroll and others discussed what kind of
discipline HCI might be described as [38, 11]. Although
Carroll characterises this and his exchanges with Newell
and Card as the “theory crisis” of the mid-1980s [11 p. 4],
one only need glance at a standard textbook to notice that
HCI seems still to be routinely presented by an ambiguous
constellation of overlapping disciplinary descriptors (e.g.,
interaction design, user experience, etc.). The term itself is
also problematised, and HCI can be taken to perhaps
subsume or compose these various related descriptors. For
example, one position adopted by a key textbook—
Interaction Design: Beyond Human-Computer Interaction
[51]—formulates HCI as a contributing academic discipline
to a broader field of interaction design.
With theory also comes attendant epistemological
commitments (which may or may not be honoured, of
course) and other associated objects. For instance, these are
the methods (e.g., experimental design, experience
sampling, anthropological ethnography) and corresponding
instruments for administering these methods (e.g., NASA
TLX, social network analysis metrics [23]).
This “remarkable expansion” [50 p. xi] tends to be taken
sometimes as indications of success (in the form of rich
emerging discipline) but perhaps more significantly as a
signifier of problems, such that—as Carroll states—“an
ironic downside of the inclusive multidisciplinarity of HCI
is fragmentation” [10]. Similar views are expressed perhaps
most commonly within the program committees of the SIG
CHI conference, and in other public fora e.g., Interactions
magazine [29]. An absence of uniformity of theory, method
and instrument is unsettling when compared to formal
accounts of how disciplines should be, particularly the
disciplines with coveted scientific status. The choice thus
seems stark; Rogers raises the question of prescribing
disciplinary order: i.e., whether to “stem the tide and
impose some order and rules or let the field continue to
expand in an unruly fashion” [50 p. 2]. The dichotomy
Here I sketch out two features of HCI’s disciplinary
anxieties: incoherence and inadequacy. Later on I argue that
invocations of ‘science’ are often attempts to remedy these
perceived problems in HCI.
In essence, the core of the incoherence problem lies in the
idea that HCI seemingly has few ‘secured’ propositions that
researchers generally agree upon (except, perhaps Fitts’s
presented is now a familiar one where the opposite of
disciplinary prescription is “unruly”.
specifically, when considering the prospects of HCI’s
position amidst ‘the disciplines’, this debate is often
configured as a disciplinary contrast between the ‘hard’ and
‘soft’ sciences, and correspondingly, between the rigorous
and the less disciplined, between the quantitative and the
qualitative, between maturity and immaturity, and between
the ideal-scientific and the striving-scientific. Bederson and
Shneiderman present a paradigmatic expression of this in an
attempt to explain the inadequacies: “Mature scientific
domains, such as physics and chemistry, are more likely to
have rigorous quantitative laws and formulas, whereas
newer disciplines, such as sociology, or psychology, are
more likely to have qualitative frameworks and models” [2]
(also see Card’s comments on this [55]). Such comparisons,
we might note, are of course performed under the
assumption of an essential disciplinary comparability, in
spite of the questions over the relevance of this
endeavour—a point I return to later.
What of the response to this in HCI? One possibility is to
redescribe HCI in such a way that creates some semblance
of order. This includes attempts to rationalise the existing
range of work that occupies the HCI space, perhaps most
visibly represented in discussions around ‘turns’ [50]
‘waves’ [5] and ‘paradigms’ [30]. For instance, Rogers
offers four key turns: design (early 1990s), culture (late
2000s), ‘the wild’ (mid 1990s) and embodiment (early
2000s) [50].
Another way is to prescribe standardisation; and where
standardisation comes, so do calls for ‘scientific’ ways of
establishing order. These calls are largely intended, I think,
to strengthen HCI’s disciplinary coherence. For example,
Whittaker et al. argue that HCI’s “radical invention model”
works against “the development of a ‘science’ of HCI”,
with their proposed solution being the development of
standardised sets of “reference tasks” to support
“cumulative research” [57]. This view is consonant with
programmatic statements arguing for developing a more
science-like approach in HCI through practices like routine
replication [27, 26, 58, 59, 31] or other prescribed forms of
evaluation [45]. Relatedly, Liu et al. have also argued for
the need of prescriptive standards of order via the
development of what they term as shared “motor themes”
In order to understand the twin anxieties of incoherence and
inadequacy, I think it helps to return to HCI’s formation. By
examining how some key features of present-day HCI
emerged from the research labs of North America in
particular, this section discusses the intellectual origins of
an orienting concept: the scientific design space. As part of
this I am interested in the relationship to both the ‘picture of
science’ in HCI as well as ideas of architecting HCI as a
(possibly) scientific discipline. In closing, this section then
turns to assess some of the intellectual foundations of this
concept, based in Simon’s perspective of design and
The second and interrelated expression of anxiety is that of
HCI’s (intellectual) inadequacy when positioned as an
academic discipline against a roster of other, better
established disciplines. In 2002 panel, positions on this
were represented in a panel of key HCI figures
(Shneiderman, Card, Norman, Tremaine and Waldrop)
discussing 20 years of the CHI conference [55].
Shneiderman argued the importance of “gain[ing]
widespread respect in the scientific communities”, Norman
commented on HCI as a “second-class citizen”, while Card
reflected on his aspirations for HCI to graduate to
“something somewhere in the lower half of the engineering
disciplines, maybe civil engineering”. The situation seems
to be unchanged since 2002: in a foreword to a recent
(2012) handbook for HCI, Shneiderman reflects that “HCI
researchers and professionals fought to gain recognition,
and often still have to justify HCI’s value with academic
colleagues or corporate managers” [54].
Designing the mouse
Prior to broad recognition of HCI as a distinctive, nameable research activity—and naturally prior to debate about
its status even as a ‘discipline’—the design of the humancomputer interface was primarily approached by pioneering
research labs as a construction problem involving the
provision of some possible control of a computer system.
Many early fundamental interaction techniques were
initially conceived of in this way (e.g., direct manipulation,
the mouse, windowing environments [42, 41])—i.e., as
primordially technological engineering endeavours that
aimed to produce task-functional and efficient user
interfaces. Verification of the usability of such interfaces
tended to be applied only after design decisions had been
made and implemented [10 p. 2]. Challenges were mounted
to this software engineering focussed approach by nascent
HCI. From this emerged a pairing between design work and
cognitive scientific work [9]. Norman described this as
“cognitive engineering” [46], although he retained a
separation between the different roles of design and
cognitive science. Yet, as we will see, the situation of this
pairing in its formation was ambiguous in terms of whether
it also is a conflation of the two.
These concerns are political ones—they are motivated by
the ways in which HCI is seen to be judged by others
(academics, research funders, etc.). As part of this it seems
standard practice to perform disciplinary comparisons
between HCI and disciplines that are formally labelled as
‘sciences’, including physics (which remains the favoured
model of scientific purity in positivistic philosophy of
science), chemistry or biology [50 p. xii, 31]. More
The practical foundations of the scientific design space (i.e.,
a science of design [9], not the scientific study of design
[14]) erupted from research within Xerox PARC, perhaps
illustrated most emblematically by Stuart Card and his
pioneering work on the computer mouse. In the book
Designing Interactions [41]—documenting the history of
the design of interactive systems and devices using firsthand accounts from its key players—Moggridge states of
the computer mouse: “Stu [Card] was assigned to help with
the experiments that allowed [mouse developers Doug
Englebart and Bill English] to understand the underlying
science of input devices”. Card’s aim was, in his own
words, to develop a “supporting science” that would
undergird the design activity of emerging interface
technologies like the mouse or the desktop metaphor based
graphical user interface. While earlier human factors and
software engineering influenced work in HCI had been
concerned with the use of ergonomic theory, its role in
relation to design was generally verificationist [9, 10 p. 2],
i.e., not used in predictive ways that serviced design. Basic
atheoretic trial-and-error engineering approaches were
typically being used at the time: in optimising the mouse’s
design Card states that “the usual kind of A-versus-B
experiments between devices” were no longer sufficient
since “the problem with them was that [English, the
designer] did not really know why one [mouse design] was
better than the other” [41 p. 44]).
hand-eye coordination than the device itself. In addition to
its explanatory power, Card found that applying cognitive
scientific concepts to conceptualise and shape the design
space could also be generative; it could offer different
design possibilities through prediction. For example, it was
found that by designing a mouse-like input device that
incorporated “putting fingers together you can maybe
double the bandwidth [of input precision]”. This resulted in
some clear advice for designers: to “put your transducer in
an area that is covered by a large volume of motor cortex”
[41 p. 45]—i.e., direct the design towards the capacities of
the cognitive motor processor and its relationship to the rest
of the cognitive subsystem. From this example it becomes
clear how the cognitivist orientation was applied not only to
the user, but also in shaping the idea of a scientific approach
to the design space itself. Card, Mackinlay and Robertson
later summarised this approach thus: “Human performance
studies provide the means by which design points in the
space can be tested. We can use this basic approach as a
means for systematizing knowledge about human interface
technology, including the integration of theoretical, human
performance, and artifact design efforts.” [6].
The scientific design space approach and HCI’s
disciplinary architecture
The scientific design space of Card and others offers a
consistent and strong visionary prescription for how
research in HCI can proceed. That Card and colleagues
have had a large influence on much of HCI’s development
in uncontroversial, but I argue that some aspects of this
influence on key forms of reasoning in HCI have been
overlooked. In this section, I explore its role in the broader
endeavour of describing and prescribing HCI’s
disciplinarity. In this sense, I am interested in how the
scientific design space offers solutions to some of the
disciplinary anxieties of HCI.
Card instead saw a role for “hardening” the “soft sciences
of the human-computer interface” [43] through applying
cognitive science as a way of explaining why interfaces
failed or succeeded and thus predictively guiding design
work. Cognitive science, offering a representational theory
of the mind, could be deployed in order to construct a
correct “theory of the mouse” [41 p. 44] based on theories
of interoperating mental structures that were being
developed in cognitive science research. Although initially drawing upon the non-cognitive behaviourist model of Fitts's Law and Langolf's work on it, Card integrated this material into a fuller assembly of cognitive units (e.g.,
perceptual processors, motor processors, memory stores,
etc.). Cognitive psychology, with its mappings between
human action and cognitive units, offered explanations for
how Card might formally rationalise a ‘design space’ of the
mouse so as to guide the mouse designers’ work along the
right pathway according to the predictions of cognitive
science. In this sense, cognitive science was employed to
‘tame’ the apparent ‘irrationality’ of design work.
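The predictive 'taming' at issue here can be illustrated with a sketch in the spirit of Card, Moran and Newell's Keystroke-Level Model, which predicts expert task time by summing standard operator times. The operator values below are the commonly cited averages; the task sequence is a hypothetical example, not one drawn from Card's mouse studies:

```python
# Sketch of a Keystroke-Level-Model-style prediction (after Card, Moran & Newell).
# Operator times (seconds) are the commonly cited average values; the task
# sequence below is a hypothetical illustration.
OPERATORS = {
    "K": 0.20,   # keystroke or button press (average skilled typist)
    "P": 1.10,   # point with the mouse to a target
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def predict_time(sequence):
    """Predict execution time (s) for a sequence of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: move hand to mouse, think, point at a menu item, click.
task = ["H", "M", "P", "K"]
print(round(predict_time(task), 2))  # 0.40 + 1.35 + 1.10 + 0.20 = 3.05
```

The point of such a model, on Card's account, is precisely that it renders design alternatives comparable in advance of building them.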
In 1983's The Psychology of Human-Computer Interaction, Card, Moran and Newell sought to develop "a scientific
foundation for an applied psychology concerned with the
human users of interactive computer systems” [8]. Card
would later point to the development of the mouse as an
“ideal model of how the supporting science could work”
[41 p. 45]. Although this hinted at a separation between
design space exploration and cognitive science, the nature
of that relationship remained largely unspecified. Yet,
drawing out the implications of this “ideal model” meant
extending the scientific design space of the mouse to a
scientific design space approach for human-computer
interactions more generally. The first step was to consider
input devices as a whole [6], but linked to a broader
prescriptive programme of “discovering the structure of the
design space and its consequences” [7].
At times Card's novel application of cognitive psychology challenged assumptions about what was actually important for the design decisions being made for input devices like the mouse—for example, his analysis suggested that the differences between mouse designs turned on factors other than those designers had assumed mattered.
As a guiding notion of hierarchical disciplinary order, this work seems to have been influential in sparking attempts at descriptions of HCI. Firstly, Carroll, while in disagreement with Newell and Card's presentation of the role of psychology in HCI [12], elsewhere concurs to some extent with this organisation in terms of "level of description. Thus, perception and motor skill are typically thought of as 'lower' levels of organization than cognition; cognition, in turn, is a lower level of organization than social interaction." [10].
Shneiderman also adopts this hierarchical descriptive model
of relationships between more fundamental and ‘higher’
sciences. For instance, he distinguishes between "micro-HCI", where researchers "design and build innovative
interfaces and deliver validated guidelines for use across the
range of desktop, Web, mobile, and ubiquitous devices”,
and “macro-HCI”, which deals with “expanding areas, such
as affective experience, aesthetics, motivation, social
participation, trust, empathy, responsibility, and privacy”
[53]. Macro and micro HCI have “healthy overlaps” yet
have different “metrics and evaluation methods”.
Finally, Rogers presents a description of HCI that seems
oriented by a similar hierarchical sensibility. In HCI
Theory: Classical, Modern and Contemporary, HCI’s
(scientific?) disciplinary structure is a logical, hierarchical
arrangement of “paradigms, theories, models, frameworks
and approaches” that vary in scale, “level of rigor,
abstraction and purpose” [50 p. 4]. In this scheme
paradigms are the large scale “shared assumptions,
concepts, values and practices” of a research community,
while a theory indicates an empirically “well-substantiated”
explanatory device that resides within a particular
perspective or theoretical tradition (and is presumably
associated with a particular paradigm). Beneath this sit
models, which are predictive tools for designers, of which
Fitts’ Law is the most familiar.
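To make the predictive character of such models concrete, Fitts's Law in its common Shannon formulation, MT = a + b * log2(D/W + 1), can be sketched as follows. The device constants and target dimensions here are invented for illustration; only the formula itself is standard:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Predicted movement time (s) under the Shannon formulation of Fitts's Law:
    MT = a + b * log2(D/W + 1), where (a, b) are empirically fitted device
    constants and log2(D/W + 1) is the task's index of difficulty in bits."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Hypothetical constants for two pointing devices (illustrative only):
# a lower b means a higher effective bandwidth, hence Card's interest in it.
mouse = dict(a=0.10, b=0.20)
trackball = dict(a=0.10, b=0.30)

target = dict(distance=210.0, width=30.0)  # D/W + 1 = 8, so ID = 3 bits
print(round(fitts_movement_time(mouse["a"], mouse["b"], **target), 2))
print(round(fitts_movement_time(trackball["a"], trackball["b"], **target), 2))
```

It is this kind of parametric prediction that lets a model "put bounds" on a design space before any particular device is built.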
Figure 1: Architecting the discipline—positioning HCI
within a scientific order (figure reproduced from [43]).
A clearer scientific disciplinarity for HCI was also developed in subsequent statements. In 1985 Newell and Card
presented a vision for the role of psychological science in
HCI. This work offered a descriptive account of a layered
model in which temporal bands (between decades and
milliseconds) map to different action units, associated
memory and cognitive capacities, and the relevant
theoretical bands which apply [43] (see Figure 1). Within
this schema HCI’s phenomena of interest sit largely within
the psychological and bounded rationality bands, happening
to coincide precisely with the concerns of cognitive science
of the time and ordering the rest of the space in terms of
various related sciences.
Unpacking the idea of the scientific design space
Returning again to the core idea of the scientific exploration
of the design space, here I want to unpack its orienting
ideas. In doing so, I think we can better understand the lines
of reasoning being deployed in its pursuit.
The significance of this layered model should not be
underestimated. It offers a unified, reductive organisation
that is similar to positivistic philosophy of science where
higher order sciences are reducible to lower ones
(ultimately physics) [20]. In this scheme, it is social and
organisation science, various ‘levels’ of the cognitive
sciences, neurosciences, and biological sciences, that fill in
this order. Critically, this model for HCI is also cumulative:
thus, later developments such as Information Foraging
Theory [48]—an influential theory that explains the
information search and collection behaviours of web users
(for example)—fit within the ‘rational’ band, yet build upon
lower, more foundational bands from which information
foraging “gains its power and generality from a
mathematical formalization of foraging and from a
computational theory, ACT-R, of the mind” [32].
The central idea of design spaces and their systematic and
empirical investigation as a scientific matter seems to have
a strong resonance with the work of Herbert Simon. Simon
was a frequent collaborator of Newell's, whose student Card had been. Simon's influential book, The Sciences of the Artificial [56], is important for this concept in that it not only lays out a programme for a scientific approach to
design but also discusses this within the context of Simon’s
prior work around solution-searching within bounded
rationality (e.g., ‘satisficing’).
In essence Simon argues that the phenomenon of the design activity itself demands a scientific approach of its very own: a new "science of design" [56 ch. 5], but one that is
unshackled from the tendency to employ methods from the
natural sciences. Simon’s conception of this new science of
design shares a link to the formulations of Card, Newell and
others that I described earlier. Crucially, both these
approaches conceptualise design as an optimisation
problem. As an optimisation problem, the design of “the
artificial” (here meaning human-constructed objects such as
interactive digital artefacts) may be rendered as dimensions,
enumerated, rationalised, and essentially made docile. This
step gives rise to the application of a spatial metaphor of
design: i.e., the idea of a design space that is
“parametrically described” and populated with “device
designs as points” [7]. Card positions cognitive science as a
scientific way to “structure the design space so that the
movement through that design space was much more rapid”
[41 p. 45]. Simon offers a more generalised version of this:
design spaces not only of artefacts but also economic,
organisational and social systems. This view is broad and
encompassing, and it is perhaps a conclusion drawn from a
fundamentally cognitive conception of the mind. Hence
Simon states that “the proper study of mankind is the
science of design” [56 p. 138]. For Simon a scientific
approach to designing such things involves constructing all
design problems as computational spaces—ones that are
amenable to formalisation and therefore computational
search, optimisation and so on. And it is this notion which I
argue undergirds the design space concept as it has found
its way into forming a scientific approach to HCI research.
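Simon's rendering of design as search through a formalised space can be caricatured in a few lines: parameter dimensions define the space, an evaluation function scores each point, and 'satisficing' halts at the first good-enough design rather than the global optimum. The dimensions, scores and threshold below are all invented for illustration:

```python
from itertools import product

# A toy 'design space': each dimension is a named parameter with discrete
# values. The dimensions, scoring and threshold are invented for illustration.
DIMENSIONS = {
    "buttons": [1, 2, 3],
    "grip": ["palm", "finger"],
    "sensor": ["ball", "optical"],
}

def score(point):
    """A made-up evaluation function standing in for empirical measurement."""
    s = 0
    s += {1: 1, 2: 3, 3: 2}[point["buttons"]]
    s += {"palm": 2, "finger": 1}[point["grip"]]
    s += {"ball": 1, "optical": 3}[point["sensor"]]
    return s

def satisfice(threshold):
    """Return the first design point whose score meets the threshold
    (Simon's 'good enough' stopping rule), not the global optimum."""
    names = list(DIMENSIONS)
    for values in product(*DIMENSIONS.values()):
        point = dict(zip(names, values))
        if score(point) >= threshold:
            return point
    return None  # no point in the space is good enough

print(satisfice(7))
```

The caricature makes the conflation visible: once a design problem is written this way, 'design' just is computational search, which is exactly the move at issue.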
This influence was initially made possible through the
importance of cognitive science in HCI’s formation.
Conceptually the prospect of transforming design problems
into reductive computational spaces (to be addressed by
scientific methods) was facilitated by the central
implications of a ‘strong’ cognitive science position.
Specifically this position offers an isomorphism between
the computer and the human (i.e., as a cognitive object,
with various input / output modalities). There are two key
conflations that follow: firstly that the human is
computational, or can be described with computational
concepts; and secondly that the designer and therefore
design itself is computational. This underlying idea in HCI
has enabled the adoption process of the design space model.
The practical expression of these ideas may be readily found in the research outputs of many HCI conference venues
like CHI and UIST, but also in those of related
communities such as Ubicomp. Following Card and
colleagues, it has become the approach of choice for work
that evaluates novel input or output devices, but also for
innovative interaction techniques for established interface
forms (e.g., GUIs, touch and gestural interfaces, etc.). The
‘ideal expression’ of the scientific design space is often
conducted under the broader glossed label of psychology. It
characteristically involves engaging in task-oriented
interface evaluations via a hypothesis-driven experimental
format—being often classed as ‘usability evaluation’ [26].
In building hypotheses and delivering their results, classic
features of cognitive science theory are recruited, for
example cognitive objects like memory, task load and so
on. This might also include methods where the rationale is
grounded in cognitive science reasoning, such as ‘think
aloud’ techniques. Hypothesis testing enables an organised
and systematic traversal of the design space particular to the
class of device and interface under investigation [6, 7]. As
part of this, cumulative, replicable, generalisable and
theoretically-informed findings are delivered both to HCI as
a whole but also potentially back to cognitive science as
instances of applied research. It is also possible that
attempts to form novel cognitive theory specific to
interactive systems may result, for example Information
Foraging theory presents one well-known instance of this
(also note its relationship to the ACT-R cognitive
architecture [48]).
Returning to the disciplinarity question once again, the
scientific design space idea in its broader construction from
Simon seems to offer both security for the status of HCI (in
terms of academic adequacy and coherence), and also
answers some of the questions presented earlier regarding
the status of HCI as a discipline—whether engineering,
craft or science [38]. The idea effectively responds by
reformulating design problems in ways that let them be
“reduced to declarative logic” [16 p. 176], meaning that
“design [becomes] a system with regular, discoverable
laws” [24 p. 26]. We will return to this topic of ‘design’ in
the closing of the discussion section.
Having firstly described HCI’s disciplinary anxieties, and
then covered formative concepts around scientific design
spaces, the discussion now broadly explores the relations
between them. I will initially do this through picking apart
the intellectual coherence of the design space approach.
First I argue that this idea has been very important in HCI
research, and moreover, still is particularly in the evaluation
of novel input devices. Then I wish to discuss the
conceptual problems of the ‘scientific design space’, and in
doing so must necessarily also turn to tackle the more
general issue of the role of 'science' in HCI. Finally, I turn to review the introduction of 'designerly' perspectives in HCI and examine how this relates to scientific design spaces.
The adoption of scientific design spaces in HCI
Via Card and others, it seems that Simon's way of conceptualising the design of artefacts has been a significant contributor to HCI's 'DNA'. Yet this impact of Simon on HCI is only occasionally acknowledged [9].
As an adoption phenomenon in HCI, this approach seems very well-established. But complaints have emerged about the trappings of this approach being prioritised at the expense of "rigorous science" [26]. For instance, Greenberg and Buxton articulate this trend as one of "weak science" in HCI that is more concerned with the production of one-off "existence proofs" of designs than systematic rigour that can "put bounds" on the wider design space under test [26]. In comparison, Card, Mackinlay and Robertson emphasise the importance of uncovering design space structure [7], and through this taking different device designs as inputs or "points in a parametrically described design space" [7]—in other words, "systematizing" [6]. In contrast, the prioritisation of existence proofs has meant theory-free meandering explorations of an unspecified design space with little clear epistemic justification for taking a design space approach in the first place—or to put it another way, the methods of this approach end up deciding the problems to be tackled [26]. The scientific design space is something of a 'muscle memory' for HCI's relationship to interactive devices and systems.
A further conceptual issue is the lack of clarity in this notion around design and science as activities (I shall unpack 'science' and 'design' in the sections that follow). Specifically, we can distinguish between three forms: 1. design as a (cognitive) scientific activity; 2. the application of (cognitive) science in design—i.e., retaining some notional separation between these activities; 3. the development of scientific understandings of design itself (see [14 ch. 7]).
The question for the scientific design space in HCI is based
around some blurring between the first two forms. In some
ways the initial programme of Card’s recounted in
Designing Interactions appears different to Simon’s, yet in
others seems to have a similar end result. At first glance
Card seems to position his work as offering a “supporting
science” that does not replace design (i.e., the application of
cognitive science to design). He reminds us that it still
requires “good designers to actually do the design” [41 p.
45]. Yet, I think the picture is somewhat more complex than
this. In later work, such as morphological design space
analysis [7], extant designs become ‘inputs’ to a design
space which itself generates the parameters along which
“good designers” must work. In this way designers must
engage with the predictive authority of the (cognitive)
scientific shaping of the design space—so as to “alter the
designer's tools for thought" [44]. (Card, Mackinlay and
Robertson’s story of the “headmouse” user is also worth
reflecting upon in this regard [7 p. 120].)
I must note, of course, that there are broader issues at play
here in HCI beyond the intellectual framework of design
spaces. For instance, publishing cultures that reward
quantity and the ‘least publishable unit’ will also tend to
tolerate existence proofs. This is likely encouraged or at
least further enabled by the idea of ‘scientific’ cumulative
progress in HCI.
Design spaces and scientific method
Now that I have discussed the adoption of the scientific
design space mode of HCI research, I wish to tackle the
very idea itself in two ways. In this section here I will argue
that this mode of work borrows from the natural sciences in
spite of Simon’s original articulation. In the section after
this I will link the discussion to much more general issues
around the notion of ‘science’ itself in HCI’s discourse.
HCI’s ‘picture of science’
The first point is that curiously, even given his call for
genuinely new sciences of the artificial that were
specifically not derivative of the natural sciences, the
position Simon outlines nevertheless relies upon this
strategy anyway. It seems that this is a product of the way
in which Simon conceives of and specifies design as an
activity—and how this could help knowledge about design
move on from what he saw as “intellectually soft, intuitive,
informal, and cook-booky” ways of conceiving of it at the
time [56 p. 112].
At the core of the design space notion—I believe—is a
desire to bring some aspect of (or perhaps all of) HCI closer
towards a scientific disciplinarity. Doing so offers the
promise of addressing anxieties over incoherence and
inadequacy. Yet in order to better understand this I argue
that we must start to unpack what is meant by ‘science’ and
how it is used in HCI’s discourse. I also wish to bracket off
‘science’ and ‘science talk’ more generally and look at what
is done with it. After examining this, I then move on to
contrast these deployments with understandings of
scientific practice from philosophy of science and empirical
studies of scientists’ work—contrasts that reveal a
problematic dissonance between the two.
As Ehn expresses it, Simon performs something of a ‘trick’
which “poses the problem of design of the artificial in such
a way that we can apply the methods of [formal] logic,
mathematics, statistics, etc., just as we do in the natural
sciences” (original emphasis) [16 p. 175]. In other words,
because of Simon’s conceptualisation of design activities in
terms of spatial, computable design spaces that are
essentially reduced to search problems, the deck becomes
stacked in such a way that ‘textbook’ understandings of the
methods of the natural sciences just happen to turn out to be
the relevant choice. And, following Simon, we find the
science of the design space in HCI also relies upon
perceived methods of the natural sciences. Interestingly,
this perception is itself second-hand: it is actually that of
cognitive science’s version of the methods of the natural
sciences not the actual practices per se.
Perhaps the first debates over what kind of science might be
relevant to HCI can be found in the Newell, Card, Carroll
and Campbell exchanges of the early 1980s [43, 12, 44].
Here, Carroll and Campbell took issue over Newell and
Card’s characterisation of “hard science” (“quantitative or
otherwise technical”) and “soft, qualitative science” [43] in
their arguments about the possible role of science (cognitive
psychology) in HCI. As Carroll and Campbell disputed, this
was a “stereotype of hard science” and psychology itself; a
false dichotomy around science was created by Newell and
Card in order to support a positivist programme [12].
Since then, discourse on ‘science’ in HCI research broadly
has featured a web of conflicting deployments of the term.
These deployments can offer both descriptions of HCI as
scientific and prescriptions about ensuring that HCI is
scientific (in some way). It is possible to offer two contrasting poles of this so as to illustrate the difficulties.
At one end ‘science’ can be deployed in its loosest meaning
as a synonym of rigour. For instance, I would suggest that
Carroll often uses this formulation (see [11]); it is an
invocation that seems to avoid scientism or positivist
assumptions. At the other end of the scale we see ‘science’
being used to denote a specific set of ‘scientific qualities’
that are seen as gold standards of being scientific.
Elsewhere I have summarised what is typically meant here
in terms of three linked concepts [49]: 1. accumulation—
science’s work is that of cumulative progress (e.g., [44,
57]); 2. replication—science’s work gains rigour from
being reproducible by ‘anyone’ [28]; and 3.
generalisation—science’s cumulative work builds toward
transcendent knowledge. As part of this, various other
descriptions of what it means to be “more scientific” [35]
are pointed to: an adherence to the scientific method,
empiricism and the “generation of testable hypotheses”
[32]. In this sense ‘science’ is produced as a description of a
general knowledge production procedure that offers
standardisation and guarantees against ‘bias’. In this
account ‘science’ represents a self-correcting incremental
process of knowledge accumulation: “the developing
science will gradually replace poor approximations with
better ones” [34].
Other studies of scientific work in astronomy, physics, chemistry and biology unpack the 'lived order' of these practices, describing how natural phenomena must be socially 'shaped' and transformed into scientific "Galilean objects" for presentation in academic papers and so on [21,
Following this, I think that HCI discourse around ‘science’
broadly tends to make a common mistake in expecting
formal accounts and material practices to be
interchangeable, leading to various confusions (elsewhere
this necessary dissonance has even led to accusations of
fraud [40]). For instance, the replication of results as a
necessity in principle for HCI has been described as “a
cornerstone of scientific progress” [59]. This notion trades
on the idea that formal accounts should provide an
“adequate instruction manual for replication work” [28].
Yet when we turn to studies of the natural sciences, from
which the principle is ostensibly extracted, we find that
firstly most scientific results do not get replicated and
secondly that where it is used, replication is a specifically
motivated, pragmatic action for particular contested,
relevant cases [13].
Not only is there the potential for confusion between formal
accounts and material practices, but studies of the natural
sciences also question the notion of the homogeneous way
in which ‘science’ broadly is conceptualised, further
problematising the term as we find it in HCI. This use tends
to presuppose the existence of a coherent specific set of
nameable enumerable procedures that make up ‘the
scientific method’; procedures that are also held to be
representative of a standardised approach to science in
general. Philosophy of the sciences instead argues that there
are sciences, plural, each sui generis, with no uniform
approach or necessary promise of ultimate unity [20]. The
notion of ‘the scientific method’ is itself questionable.
Drawing attention to problems of an induction-based model of scientific progress, Feyerabend has instead pointed to counterinduction in scientific practice that is not visible in formal accounts [18]. This is not to say that 'anything goes'
methodologically, but rather that adherence to formal
accounts of method alone cannot explain how science
progresses. As Bittner puts it, the natural sciences have
“tended to acquire arcane bodies of technique and
information in which a person of average talent can partake
only after an arduous and protracted preparation of the kind
ordinarily associated with making a life commitment to a
vocation” [3]. Hence it becomes potentially distorting to
compress such a diverse set of practices into a singular but
unspecifiable method of ‘science’ so as to draw out
principles for being “more scientific” [35].
There are of course many other ways of talking about
‘science’ which have found their way into HCI [49]. The
most obvious would be in vernacular usage—where ‘being
scientific’ is used in place of ‘being professional’, ‘being
reasonable’, ‘being careful’ or ‘being scholarly’ and so on.
Often these forms may be political, for example, using
‘science’ as a rhetorical strategy to assert epistemic or
moral authority (‘science’ as good work or a transcendent
truth). It could also be used as a way of categorising
research as scientific and non-scientific. Or it can be an
aspirational label that requests peer recognition—for
example, computer science rather than informatics [49]. It
seems, then, that the overall 'picture of science' in HCI is a conflicted and heterogeneous one.
I wish now to compare this picture with that presented by empirical studies of the natural sciences. The first general
point that I address to the broad HCI description /
prescription of ‘science’ is the difference between formal
accounts of science (e.g., scientific papers) and the material
practices that are carried out to produce them. In
ethnomethodological accounts of scientific practice the
relationship between the two, and their essential
inseparability, is explored. Livingston, for instance,
describes the ad hoc and profoundly local achievement of
performing everyday practical laboratory tasks (e.g.,
determining chemical soil composition). He makes the
observation that “From an external point of view, the
procedures themselves determined the results. From the
point of view of those working in the lab, our joint
experiences helped us find what those procedures were in
practice and how we could make them work.” [37 p. 155].
HCI’s relationship to design
In this section I want to discuss another key element of the
design space besides its ‘scientific’ sensibilities—‘design’.
Of course, design has always been a concern for HCI
research. As Card, Newell and Moran stated, “design is
where the action is in the human-computer interface” [8 p.
11]. The question is what kind of design—the deployment
of the term ‘design’ is itself somewhat like ‘science’, i.e., a
potential source of great confusion [61]. Regardless of the
conceptual challenges I raised, Card and colleagues
nevertheless very clearly articulated a strong sense of
design following Simon. Yet I have argued this adoption at
the same time provides an intellectual foundation to some
of HCI’s recurrent concerns about developing a scientific
disciplinarity. The formalised rigour of the scientific design
space has meant this particular conceptualisation of design
has flourished in HCI; this conceptualisation is grounded in
background(ed) and unreflective scientific framing devices.
It is for this reason that many novel interactive systems and
technologies are still evaluated as inputs to the design space
although there have been distinctions made between design
practice and critical design [61].
Further, as they are expressed within HCI, designerly perspectives rehearse similar arguments around matters of disciplinary order, anxieties [19], and the
relevance of scientific disciplinarity to design as an activity.
For example, Zimmerman, Stolterman and Forlizzi
specifically call for research through design to “develop
protocols, descriptions, and guidelines for its processes,
procedures, and activities” and find its own sense of
“reliability, repeatability, and validity” [62]. Gaver
characterises this as designerly approaches succumbing to
HCI’s tendency towards scientism [22]. It seems that the
debate around these designerly perspectives is thus no less
susceptible to HCI’s orienting conversations around
(scientific) disciplinarity that I have discussed in this paper.
However, perhaps because of the struggle for recognition in
HCI, designerly perspectives do tend to consistently
emphasise the importance of assessing and valuing
designerly research correctly—the argument being that the
products of this work do not always fit with how HCI
values research. This is highlighted by Gaver (“appropriate
ways to pursue our research on its own terms” [22]), Wolf
et al. (“its own form of rigor” [60]), Fallman and
Stolterman (“rigor and relevance have to be defined and
measured in relation to what the intention and outcome of
the activity is” [19]), and Zimmerman, Stolterman and
Forlizzi (“a research approach that has its own logic and
rigor” [62]). Yet this argument about rigour has not been
generalised to HCI as a whole and remains stuck within the
frame of the designerly perspective attempting to gain
legitimacy within the HCI community—in concluding I want to argue for that generalisation.
Since at least the late 1990s and certainly the early 2000s,
however, HCI has seen the development of its own
subcommunity of researchers concerned with what I will
gloss here as ‘designerly’ perspectives (a gloss that I will
later problematise). As they appear in HCI, designerly
perspectives work with a range of terms like ‘design
research’, ‘research through design’, etc.; to highlight a few
examples I discuss here: [17, 60, 61, 19, 62, 22]. This
relatively small but distinct subcommunity is one of the few
places in HCI that actually foregrounds Simon’s conception
of design—which is suggested by some to be a
“conservative account” [17]. In this account, Simon
effectively democratises and deskills design by arguing that
“[e]veryone designs who devises courses of action aimed at
changing existing situations into preferred ones” [56 p.
111]. This logically flows from his bounded rationality
view of design as search.
Yet within the context of this designerly perspective, it has
been argued that traditions along the lines of the scientific
design space model tend to mask wider discussion about the
nature of the design activity itself; as Fallman argues,
“Design is thus a well-established and widespread approach
in HCI research, but one which tends to become concealed
under conservative covers of theory dependence, fieldwork
data, user testing, and rigorous evaluations” [17]. In short,
the absence of designerly alternatives to Simon has meant
design is “at best limiting and at worst flawed” in its usage
in HCI [60].
In this paper I have sought to examine disciplinary anxieties
in HCI through picking apart its ongoing relationship to
‘science’. This has meant identifying the idea of the
scientific design space—an approach conceiving of
designed artefacts as scientific objects, influenced by
formative early applications of cognitive science to input
devices. This significant approach to design seems to have
subsequently configured much HCI research discourse, leading to discussions around scientific qualities of accumulation, replication, and generalisation. Yet, as I have
tried to show, matters of science and scientific disciplinarity
in this perspective are somewhat problematic conceptually
and far from settled. Further, in spite of announcements
over HCI’s various ‘turns’ and successive ‘waves’ [5] of
development, or even ‘paradigms’ [30], I have contested
that some of the key assumptions of HCI have been quite
resilient to such apparent changes, even with the
introduction of designerly perspectives to HCI that
challenge Simon’s conceptualisation of design—indeed,
there we also find similar debates played out around
(design) science and (design) disciplinarity.
Yet, following the pattern of eclectic importations of new
literatures to HCI, these debates around designerly
perspectives typically (perhaps necessarily) offer a
somewhat simplified presentation. As a point of contrast,
Johansson-Sköldberg detects five interrelated but different
discourses of “designerly thinking”: as creation of artefacts
(Simon), as reflexive practice (Schön), as problem-solving
(Buchanan), as sense-making (Cross), or as meaning-creation (Krippendorff) [33]. Within HCI accounts of design, the literature tends to brush over such nuances.
In concluding I wish to suggest two implications. Firstly,
that HCI researchers should—with some caveats—stop
worrying about ‘being scientific’ or engaging in ‘science
talk’ and instead concern themselves with working in terms
of appropriate forms of rigour. Secondly, that HCI
should—again, with some caveats—stop worrying about
disciplinary order or ‘being a discipline’ and instead engage
with the idea of being interdisciplinary and all the potential
difficulties and reconfigurations that requires.
As Greenberg and Buxton argue, the choice of method "must arise from and be appropriate for the actual problem or research question under consideration" [26].
There are other purposes to which 'science' terms may be put in HCI that might be necessary—albeit as a double-edged sword. For example, in political, rhetorical, or persuasive uses, 'science' may have
importance for communicating how HCI fits within
research funding structures that adhere to the normative
descriptive / prescriptive forms that this paper has
questioned. The danger here, of course, is the apparent
cynicism of this approach.
From science to rigour
The first point turns on ‘science’ in HCI. To summarise, the
paper has presented the role that ‘science’ plays in
descriptions of HCI, e.g., accounts of HCI research as
having scientific qualities. Secondly the paper has
highlighted the use of ‘science’ in building prescriptions for
HCI research, e.g., programmes by which HCI research can
be conducted as a scientific discipline of design. These
descriptions and prescriptions pertain to HCI research
practice. Yet, in both cases I have tried to show that these
are problematic when we consider how these articulations
compare with what the model—the natural sciences—looks
like as a set of lived practices. I have argued that the model
being invoked by HCI—i.e., formal accounts—and the
everyday material practices of natural scientists do not
match up (for good reason), meaning that the case for
employing said formal accounts of ‘science’ as a descriptor
of or prescription for HCI seems weak and potentially
confusing. At the most charitable we might say that
‘science’ could be used as a synonym for rigour, albeit a
highly loaded one.
From discipline to interdiscipline
The second concluding point turns rethinking disciplinarity
in HCI and concerns about constructing a rationalised
multidisciplinarity has long been identified in HCI (e.g.,
[10]), what I argue for here is underlined by Rogers’s
characterisation of HCI (for good or bad) as an “eclectic
interdiscipline” [50 p. xi]. In other words, the difficulties of
assembling HCI into some disciplinary order may be the
natural state, the key characteristic of HCI. Reflecting upon
this, Blackwell has suggested that HCI could be best
conceived of as a catalytic interdiscipline between
disciplines, rather than a discipline that engages in the
development and maintenance of a stable body of
knowledge [4]. If HCI is to be a rigorous interdiscipline
then it will require working more explicitly at the interface
of disciplines. We will need more reviews of and reflections
upon the landscape of different forms of reasoning in HCI
and through this better ways of managing how potentially
competing disciplinary perspectives meet together. This
paper has touched only one part of the landscape, but there
are many more.
In abandoning the formal-scientific, I want to emphasise the
notion of appropriate rigour. This is the idea that rigour in
HCI must be commensurate with the specific intellectual
origins of the work; e.g., this may be (cognitive, social, etc.)
psychology, anthropology, software engineering, or, more
recently, the designerly disciplines. This runs counter to the
standardisation, or other forms of positivist reductionism in
HCI that I have discussed in this paper. Firstly, such
orderings will necessarily foreground contradictory
accounts of rigour, and secondly, invocations of ‘science’
will tend to replace focus on seeking the relevant frame of
rigour. Instead, appropriate rigour is achieved “not through
the methods by which data is collected, but through the
ways in which the data can be kept true [...] during the
analysis” [52].
At the same time it should be noted that there are dangers
here too: being an interdiscipline can mean that HCI
research diffuses into contributing disciplines and ‘credit’ is
never recognised for HCI. This suggests that in addition to
reconceptualising HCI as an interdiscipline, we must think
of new and perhaps radical ways to characterise HCI as it is
presented to the outside world.
This work is supported by EPSRC (EP/K025848/1).
Elements of this paper developed through conversations
with Bob Anderson, to whom I am grateful. Thanks also to
those who conversed with me on the paper and / or the
topic: Susanne Bødker, Andy Crabtree, Christian
Greiffenhagen, Sara Ljungblad, Lone Koefoed Hansen,
Alex Taylor, Annika Waern, and anonymous reviewers.
To highlight the notion of appropriate rigour I point to the
“damaged merchandise” controversy of the late 1990s,
where the reliability and validity of well known usability
methods in prominent studies were critiqued [25]. Perhaps
one of the reasons why this critique produced a significant
response [47] was that it took the studies to task using the
intellectual framework that had been implicitly ‘bought
into’ (cognitive psychology). As Greenberg and Buxton
argue, “the choice of evaluation methodology—if any—
EPSRC Research Data Statement: All data accompanying
this publication are directly available within the publication.
References
1. Bartneck, C. 2008. What Is Good? – A Comparison Between The Quality Criteria Used In Design And Science. In Proc. CHI ‘08 Extended Abstracts. ACM, New York, NY, USA, 2485-2492.
2. Bederson, B., Shneiderman, B. 2003. The Craft of Information Visualization, chapter 8, p. 350, Elsevier.
3. Bittner, E. 1973. Objectivity and realism in sociology. In Psathas, G. (ed.), Phenomenological Sociology. Chichester: Wiley, 109-125.
4. Blackwell, A. F. 2015. HCI as an Inter-Discipline. In Proc. CHI ‘15 Extended Abstracts. ACM, New York, NY, USA, 503-516.
5. Bødker, S. 2006. When second wave HCI meets third wave challenges. In Proc. NordiCHI ‘06. ACM Press.
6. Card, S. K., Mackinlay, J. D., Robertson, G. G. 1990. The design space of input devices. In Proc. CHI ‘90. ACM, New York, NY, USA, 117-124.
7. Card, S. K., Mackinlay, J. D., Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (April 1991), 99-122.
8. Card, S. K., Newell, A., Moran, T. P. 1983. The Psychology of Human-Computer Interaction. L. Erlbaum Assoc. Inc., Hillsdale, NJ, USA.
9. Carroll, J. M. 1997. Human–computer interaction: Psychology as a science of design. International Journal of Human-Computer Studies 46, 4 (April 1997), 501-522.
10. Carroll, J. M. (ed.) 2003. HCI Models, Theories, and Frameworks: Towards a Multidisciplinary Science. Elsevier, 2003.
11. Carroll, J. M. 2010. Conceptualizing a possible discipline of human-computer interaction. Interact. Comput. 22, 1 (January 2010), 3-12.
12. Carroll, J. M., Campbell, R. L. 1986. Softening up Hard Science: reply to Newell and Card. Human–Computer Interaction 2, 3 (1986), 227-249, Taylor and Francis.
13. Collins, H. M. 1975. The seven sexes: A study in the sociology of a phenomenon, or the replication of experiments in physics. Sociology 9, 2 (1975), 205-224.
14. Cross, N. 2007. Designerly Ways of Knowing. Birkhäuser GmbH, Oct. 2007.
15. Dix, A. 2010. Human-computer interaction: A stable discipline, a nascent science, and the growth of the long tail. Interact. Comput. 22, 1 (January 2010), 13-27.
16. Ehn, P. 1990. Work-Oriented Design of Computer Artifacts. L. Erlbaum Assoc. Inc., Hillsdale, NJ, USA.
17. Fallman, D. 2003. Design-oriented human-computer interaction. In Proc. CHI ‘03. ACM, New York, NY, USA, 225-232.
18. Fallman, D., Stolterman, E. 2010. Establishing criteria of rigour and relevance in interaction design research. Digital Creativity 21, 4, Routledge, 2010, 265–272.
19. Feyerabend, P. K. 2010. Against Method. 4th ed., New York, NY: Verso Books.
20. Fodor, J. A. 1974. Special sciences (or: The disunity of science as a working hypothesis). Synthese 28, 2 (1974), Springer, 97-115.
21. Garfinkel, H., Lynch, M., Livingston, E. 1981. The Work of a Discovering Science Construed with Materials from the Optically Discovered Pulsar. Philosophy of the Social Sciences 11 (June 1981), Sage.
22. Gaver, W. 2012. What should we expect from research through design? In Proc. CHI ‘12. ACM, New York, NY, USA, 937-946.
23. Golbeck, J. 2013. Analyzing the Social Web. Elsevier.
24. Goodman, E. 2013. Delivering Design: Performance and Materiality in Professional Interaction Design. PhD thesis, University of California, Berkeley, 2013.
25. Gray, W. D., Salzman, M. C. 1998. Damaged merchandise? A review of experiments that compare usability evaluation methods. Hum.-Comput. Interact. 13, 3 (September 1998), 203-261.
26. Greenberg, S., Buxton, B. 2008. Usability evaluation considered harmful (some of the time). In Proc. CHI ‘08. ACM, New York, NY, USA, 2008.
27. Greenberg, S., Thimbleby, H. 1992. The weak science of human-computer interaction. In Proc. CHI ‘92 Research Symposium on HCI, Monterey, California.
28. Greiffenhagen, C., Reeves, S. 2013. Is replication important for HCI? In Workshop on Replication in HCI (RepliCHI), CHI ‘13.
29. Grudin, J. 2015. Theory weary. interactions, blog post (posted 14/03/14), retrieved 15th June, 2015, from:
30. Harrison, S., Tatar, D., Sengers, P. 2007. The Three Paradigms of HCI. In Proc. alt.chi, CHI ‘07, San Jose, CA, May 2007.
31. Hornbæk, K., Sander, S. S., Bargas-Avila, J. A., Simonsen, J. G. 2014. Is once enough?: on the extent and content of replications in human-computer interaction. In Proc. CHI ‘14. ACM, New York, NY, USA, 3523-3532.
32. Howes, A., Cowan, B. R., Janssen, C. P., Cox, A. L., Cairns, P., Hornof, A. J., Payne, S. J., Pirolli, P. 2014. Interaction Science Spotlight, CHI ‘14. ACM, New York, NY, USA, 2014.
33. Johansson-Sköldberg, U., Woodilla, J., Çetinkaya, M. 2013. Design Thinking: Past, Present and Possible Futures. Creativity and Innovation Management 22, 2 (June 2013), 121–146.
34. John, B. E., Newell, A. 1989. Cumulating the science of HCI: From S-R compatibility to transcription typing. In Proc. CHI ‘89. ACM, New York, NY, USA, 109-114.
35. Kostakos, V. 2015. The big hole in HCI research. Interactions 22, 2 (February 2015), 48–51.
36. Liu, Y., Goncalves, J., Ferreira, D., Xiao, B., Hosio, S., Kostakos, V. 2014. CHI 1994-2013: mapping two decades of intellectual progress through co-word analysis. In Proc. CHI ‘14. ACM, New York, NY, USA.
37. Livingston, E. 2008. Ethnographies of Reason. Ashgate.
38. Long, J., Dowell, J. 1989. Conceptions of the discipline of HCI: craft, applied science, and engineering. In Proc. 5th Conference of the BCS-HCI Group on People and Computers V, Sutcliffe, A., Macaulay, L. (eds.). Cambridge University Press, 9-32.
39. Martin, A., Lynch, M. 2009. Counting Things and People: The Practices and Politics of Counting. Social Problems 56, 2 (May 2009), 243-266.
40. Medawar, P. B. 1963. Is the scientific paper a fraud? The Listener 70 (12 September), 377–378.
41. Moggridge, B. 2006. Designing Interactions. MIT Press, Oct. 2006.
42. Myers, B. A. 1998. A brief history of human-computer interaction technology. interactions 5, 2 (March 1998).
43. Newell, A., Card, S. K. 1985. The prospects for psychological science in human-computer interaction. Hum.-Comput. Interact. 1, 3 (September 1985), 209-242.
44. Newell, A., Card, S. 1986. Straightening Out Softening Up: Response to Carroll and Campbell. Hum.-Comput. Interact. 2, 3 (1986), 251-267.
45. Newman, W. M. 1997. Better or just different? On the benefits of designing interactive systems in terms of critical parameters. In Proc. DIS ‘97. ACM, New York, NY, USA, 239-245.
46. Norman, D. A. 1986. Cognitive engineering. In Norman, D. A., Draper, S. W. (eds.), User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Erlbaum Associates.
47. Olson, G. M., Moran, T. P. 1998. Commentary on “Damaged merchandise?”. Hum.-Comput. Interact. 13, 3 (September 1998), 263-323.
48. Pirolli, P. 2007. Information foraging theory: Adaptive interaction with information, vol. 2. Oxford University Press, USA, 2007.
49. Reeves, S. 2015. Locating the “Big Hole” in HCI Research. Interactions 22, 4 (July 2015).
50. Rogers, Y. 2012. HCI Theory: Classical, Modern and Contemporary. Synthesis Lectures on Human-Centered Informatics 5, 2, Morgan & Claypool, 2012, 1-129.
51. Rogers, Y., Sharp, H., Preece, J. 2011. Interaction Design: Beyond Human—Computer Interaction, 3rd Edition. Wiley, 2011.
52. Rooksby, J. 2014. Can plans and situated actions be replicated? In Proc. CSCW ‘14. ACM, New York, NY, USA, 603-614.
53. Shneiderman, B. 2011. Claiming success, charting the future: micro-HCI and macro-HCI. Interactions 18, 5 (September 2011), 10-11.
54. Shneiderman, B. 2012. The Expanding Impact of Human-Computer Interaction. Foreword to The Handbook of Human-Computer Interaction, Jacko, J. A. (ed.), CRC Press, May 2012.
55. Shneiderman, B., Card, S., Norman, D. A., Tremaine, M., Waldrop, M. M. 2002. CHI@20: fighting our way from marginality to power. In Proc. CHI ‘02 Extended Abstracts. ACM, New York, NY, USA, 688-691.
56. Simon, H. A. 1996. Sciences of the Artificial. 3rd ed., MIT Press, Sept. 1996.
57. Whittaker, S., Terveen, L., Nardi, B. A. 2000. Let’s stop pushing the envelope and start addressing it: a reference task agenda for HCI. Hum.-Comput. Interact. 15, 2 (September 2000), 75-106.
58. Wilson, M. L., Mackay, W. 2011. RepliCHI—We do not value replication of HCI research: discuss (Panel). In Proc. CHI ‘11 Extended Abstracts. ACM, New York.
59. Wilson, M. L., Resnick, P., Coyle, D., Chi, E. H. 2013. RepliCHI: The workshop. In CHI ‘13 Extended Abstracts. ACM, New York, NY, USA, 3159-3162.
60. Wolf, T. V., Rode, J. A., Sussman, J., Kellogg, W. A. 2006. Dispelling “design” as the black art of CHI. In Proc. CHI ‘06. ACM, New York, NY, USA, 521-530.
61. Zimmerman, J., Forlizzi, J., Evenson, S. 2007. Research through design as a method for interaction design research in HCI. In Proc. CHI ‘07. ACM, New York, NY, USA, 493-502.
62. Zimmerman, J., Stolterman, E., Forlizzi, J. 2010. An analysis and critique of Research through Design: towards a formalization of a research approach. In Proc. DIS ‘10. ACM, New York, NY, USA, 310-319.
Designed in Shenzhen:
Shanzhai Manufacturing and Maker Entrepreneurs
Silvia Lindtner
University of Michigan
Ann Arbor, MI-48104
[email protected]
Anna Greenspan
NYU Shanghai
Shanghai, China-200122
[email protected]
David Li
Hacked Matter
Shanghai, China-200050
[email protected]
Copyright © 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM.
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus, Denmark

Abstract
We draw from long-term research in Shenzhen, a manufacturing hub in the South of China, to critically examine the role of participation in the contemporary discourse around maker culture. In lowering the barriers of technological production, “making” is being envisioned as a new site of entrepreneurship, economic growth and innovation. Our research shows how the city of Shenzhen is figuring as a key site in implementing this vision. In this paper, we explore the “making of Shenzhen” as the “Silicon Valley for hardware.” We examine, in particular, how maker-entrepreneurs are drawn to processes of design and open sharing central to the manufacturing culture of Shenzhen, challenging conceptual binaries of design as a creative process versus manufacturing as its numb execution. Drawing from the legacy of participatory design and critical computing, the paper examines the social, material, and economic conditions that underlie the growing relationship between contemporary maker culture and the concomitant remake of Shenzhen.

Author Keywords
Maker culture, industrial production, manufacturing, participation, open source, DIY, China, Shanzhai.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

Introduction
Critical scholarship of computing has long been committed to questioning the apparently strict separation between production and consumption, design and use. One of the most widely known and impactful approaches has been participatory design (PD). With roots in the Scandinavian labor movement in the 1970s, PD emerged alongside outsourcing, automation and the introduction of Information Technology into the workplace. PD sought to intervene in these processes, promoting the view that the user and the larger social context and surrounding material culture should be central to considerations and processes of design [3, 6, 7, 27, 43]. Drawing on this work, this paper argues that contemporary processes of technology design necessarily include the place and culture of production.

Today, PD’s call for the involvement of users in the design process is not only accepted in popular design approaches such as human-centered design, but has also morphed into a business strategy. Bannon and Ehn, for instance, document the ways in which corporations promote the view that users and designers co-create value [3]. They illustrate the expansion of “a managerial version of user driven design” rooted in “market-oriented business models removed from PD concerns” [3]. Closed company innovation has increasingly given way to “open innovation” models, where the creativity, knowledge and expertise of users are leveraged for company profit.

PD’s call for critical intervention is further complicated by the recent flurry of devices and tools, ranging from social media apps to smart devices (or the Internet of Things), whose value depends on the participation of users. While companies like Facebook mine behavior data online to sell it back to their users in the form of ads, newer companies like Misfit see the value of smart wearables in the sensitive data their users generate and share by wearing the device while sleeping, walking, driving, working, exercising, etc.

Advocates of the “maker movement” also celebrate a new formulation of user participation. By providing the tools, machines and platforms that enable people to make their own technologies, “makers” hope to turn passive consumers into active participants not only in technological design, but also in economic processes and civic matters (for prior work see e.g. [2, 18, 25, 31, 33, 39, 40, 45]). Open hardware platforms like the Arduino and affordable CNC machines like the 3D printer are envisioned to enable otherwise passive consumers to produce their own devices, tools, and eventually machines.

This contemporary promotion of “participatory production” [3] has critical gaps, as a return to the original concerns central to participatory design makes clear. Although design has been opened up to include and benefit from the participation of users (as elaborated above), the question of who is considered a legitimate participant in the design process has remained fairly limited. In particular, there is often an unspoken separation between what happens in the design studio, makerspace, hardware incubator or home office (the site of ideation, co-creation, appropriation, and day-to-day use) and what happens on the factory floor (the site of manufacture, production, and wage labor). The “human” in human-centered design, the “participant” in participatory design, and the “maker” who advocates the “democratization” of production concentrate on the designer-user/producer-consumer relationship, but rarely on the relationship to the factory worker, producer, mechanical engineer, and so on. This is particularly ironic considering PD’s original concern to intervene in processes of outsourcing, deskilling of labor, and the re-organization of work [3, 6, 7]. The central argument of what follows is that ‘participation’ in the design process does not only include the social context of the end user, but also, crucially, the material, socio-economic and cultural context of production. This paper demonstrates this by focusing on the manufacturing hub of Shenzhen, China, as a crucial agent in much of the design and creation of contemporary technology.

In this paper, we build on prior work that has begun to challenge simplistic binaries of design-production, examining processes and cultures of design, making, and repair in regions outside of the United States and Europe [3, 23, 25, 39]. Drawing from research with mobile repair workers in rural Namibia, for example, Jackson et al. [25] focus on mundane sites of repair, breakdown and reuse as important, but often neglected, sites of design. In engaging with these often overlooked places, commonly thought of as technologically, economically and socially “behind,” scholars have argued for an approach that challenges models of technological production in which design and innovation are seen to emerge predominantly from global epicenters in the West (e.g. Silicon Valley) [2, 3, 13, 23, 25].

Our work builds upon this research by taking seriously manufacturing as a site of expertise, design and creative work. We draw from long-term ethnographic research with factories, makers, and hardware start-ups in Shenzhen, a global hub of electronic manufacturing located in Southern China. In this paper, we analyze the social, technological, and economic processes of manufacturing in Shenzhen, rooted in a culture of tinkering and open source production that has evolved in the shadows of global outsourcing and large-scale contract manufacturing. We demonstrate that a growing number of maker entrepreneurs have begun to intersect with this manufacturing ecosystem, experimenting with modes of design, production, and collaboration. Examining these intensifying collaborations enables a deeper and more nuanced conceptualization both of design and of the ongoing transformation of Shenzhen.

Shenzhen & the maker movement
In the last years, there has been a growing interest in the potential impact of a so-called “maker” approach to technological innovation, education, and economic growth [29]. “Making” is thought to enable a move from tinkering and play, to prototyping and entrepreneurship and, finally, to help revive industries and sites of manufacturing lost due to histories of outsourcing. Making is drawing investment from governments, venture capitalists, and corporations around the world. While the US government promotes digital fabrication and making as a way to return to the “made in America” brand (with the White House hosting its own Maker Faire) [33, 36], the European Union has introduced formal policies aimed at rebuilding manufacturing capacities and know-how in order to sustain their knowledge economies [15]. Large international corporations have also started to invest. In 2013, Intel introduced the Arduino-compatible Galileo board, an “Intel inside” microcontroller platform aimed at branding Intel as a champion of the maker approach.

“Making” is often celebrated as a method that might revitalize industrial production in Western knowledge economies, e.g. [1]. In reality, this is not a straightforward or easy process. Many hardware start-ups face difficulties in transitioning from hobby to professional making and manufacturing [16, 49]. A number of businesses have tried to capitalize on these difficulties by providing maker-entrepreneurs with access to manufacturing in China. Take, for instance, Highway1, a hardware incubator in San Francisco, which promises start-ups a smooth transition into mass manufacturing without having to spend substantial amounts of time at their China-based manufacturing sites. Here, engaging with manufacturing expertise is rendered a problem space and an inconvenient hurdle for designers, makers and start-ups. Implicit in this approach is a widespread conception of technology production that splits manufacturing and design along geographical lines, in which technology is conceived and designed in the West, and then manufactured in low-wage regions with loose regulatory environments. The evidence of this idea of design is emblazoned on the iPhone: “Designed by Apple in California. Assembled in China.” Designers, here, are understood as the agents, with their ideas being executed elsewhere. In its most extreme formulation this division corresponds to a Cartesian-inspired ‘mind-body dualism’ in which an active rational mind in the West guides a passive, inert body in the so-called developing world.

Our work challenges the dominant narratives of maker culture by critically investigating the relationship between making, designing, and manufacturing. We argue for a return to one of the most fundamental concerns of PD, i.e. to foreground the expertise, tacit and situated knowledge of everyday work practice [43, 46]. Our focus is on the ways in which the city of Shenzhen has emerged as a central player in the broader imaginary as making shifts from hobby to entrepreneurial practice. Shenzhen figures in the global maker imaginary as a “maker’s dream city” or “the Silicon Valley for hardware,” where visions of technological futures get implemented today. Until recently, few technology researchers and people in the broader IT media sector paid much attention to Shenzhen. This began to change when a growing number of “makers” traveled to the coastal metropolis to turn their ideas into end-consumer products. Well-known examples of these made-in-China devices are the virtual reality goggles Oculus Rift and the Pebble smart watch. In 2012, one of the first hardware incubator programs, HAXLR8R (now renamed HAX), opened its offices in Shenzhen. Other investment programs such as Highway1, Bolt, and Dragon Innovation followed suit. Shenzhen draws not only makers and hardware start-ups, but also large corporations such as Intel, Texas Instruments, Huawei, and more. Intel, for instance, has invested 100 million USD in what the company calls the “China Technology Ecosystem” in Shenzhen [22]. Since 2013, the MIT Media Lab has organized tours for its students through Shenzhen’s electronic markets and factories. In a recent blog post Joi Ito, head of the Media Lab, records his impressions, describing local factories as “willing and able to design and try all kinds of new processes to produce things that have never been manufactured before” [24].

How did Shenzhen, once known as a site of cheap and low quality production, become the place to be for contemporary hardware innovation? How have design processes, such as those that Ito speaks of, developed and fed into the culture of manufacturing that has emerged in the city over the past three decades? Who is considered a legitimate participant, and what sites of expertise and design are rendered invisible?

The findings presented in this paper challenge the common binary of “made in China” versus “designed in California” that inherently associates the West with creativity and innovation and China with low quality production. We argue that what we witness in Shenzhen today has an important impact on the relationship between making, manufacturing and design. This paper contributes by shedding light on a situated practice of design, prototyping and ideation that emerges from within manufacturing. The paper, thus, provides new insights into histories and cultures of professional design and making that have emerged outside of more familiar IT hubs such as Silicon Valley [41, 47]. Our aim is to foster an engagement with mundane sites of contemporary industrial production – like Shenzhen – in order to advance a critical inquiry of design, maker production, global processes of technology work and labor, and participation.

We draw from long-term research about technology production in China in order to examine the cultural and technological processes that unfold at the intersection of design and manufacturing. This includes in-depth ethnographic research conducted over 5 years, hands-on participation in maker and manufacturing projects, and the hosting of a series of interdisciplinary workshops and conferences that brought together scholars and practitioners concerned with making and manufacturing. Ethnographic research conducted by the first author included long-term participant observation in five hackerspaces and at over thirty maker-related events such as Maker Faires, Maker Carnivals, Hackathons, Barcamps, and Arduino workshops across the cities of Shanghai, Beijing, and Shenzhen, as well as several months of ethnographic fieldwork at a hardware incubator in Shenzhen, following the day-to-day workings of ten start-ups and their journeys of moving from idea into production. Participant observation at hackerspaces included joining daily affairs such as prototyping, space management, member meet-ups, open houses, and the organization of workshops. The research at the hardware incubator included daily observations at the office space as well as accompanying start-ups during sourcing, prototyping, and manufacturing.

Between 2012 and 2014, we made numerous trips to Shenzhen to focus on the history and culture of the region’s local manufacturing industry. We hosted a series of hands-on workshops and intensive research trips in Shanghai and Shenzhen (in total 5 over the duration of two years). These events enabled us to bring together an interdisciplinary network of 120 scholars, makers and industry partners from China, the United States, South-East Asia, and Europe concerned with “making.” The backgrounds of our participants spanned the fields of HCI, the arts, design, engineering, manufacturing, science fiction writing, and philosophy.

Throughout these events, we collated hundreds of hours of video and audio material from interviews, field visits, panel discussions, hands-on workshops and discussion sessions. In total, we conducted over 150 formal interviews with relevant stakeholders, including makers, members and founders of hacker and maker spaces, organizers of maker-related events, factory workers, owners, and managers, government officials and policy makers, employees in design firms and large IT corporations who were invested in making and manufacturing, artists and urban planners, entrepreneurs and investors. As is common in ethnographic research, we prepared sets of interview questions, which we expanded and modified as we went along and identified emergent themes and new questions. We combined discourse analysis, situational analysis [11], and research through design [5, 51]. Although we have interviewed people from a wide range of backgrounds, for the purposes of this paper we draw on a subset of our interviews, conducted with people active in Shenzhen’s manufacturing industry as well as those from the global maker scene who are intersecting with manufacturing. As many of our interviewees are public figures, we refer to them by their real names when they spoke in a public context (e.g. at workshops, conferences, Maker Faires, etc.). We anonymized all informal conversations and interviewees who preferred not to be named.

Our research team comes from a mixed background including interaction design, HCI, cultural anthropology, China studies, urban studies, philosophy, entrepreneurship, and physical computing. This has proven to be effective in allowing an in-depth engagement with both the technological and social practices of making and manufacturing. All of us speak Mandarin Chinese (one of us is a native speaker and the other two have received formal language training for more than 5 years). Interviews were conducted in both English and Chinese. All formal interviews were professionally translated and transcribed.
As contract manufacturers grew in size, and began catering
predominantly to large brands, a network of entrepreneurs
saw an opportunity to establish themselves in the gaps of
the global economy. A dense web of manufacturing
businesses emerged in Shenzhen, catering towards less
well-known or no-name clients with smaller quantities, who
were not of interest to the larger players. This less formal
manufacturing ecosystem (known as shanzhai 山寨 in
Chinese) is comprised of a horizontal web of component
producers, traders, design solution houses, vendors, and
assembly lines. They operate through an informal social
network and a culture of sharing that has much in common
with the global maker movement (though largely motivated
by necessity rather than countercultural ideals). We now
turn, in greater detail, to this local manufacturing culture.
Shenzhen is a young city; the build-up of its urban
landscape dates back only 30 years, to when a series of
village collectives began to be transformed into one of the
world’s largest manufacturing hubs [14, 34]. This was
in part enabled by the implementation of a government
policy that declared Shenzhen a Special Economic Zone
(SEZ) [19, 30]. In 1979, when the SEZ policy went into
effect, Shenzhen had a population of under 50,000; by 2010
it had morphed into a metropolis of over 10 million people.1
Shanzhai 山寨
Shanzhai translates into English as mountain stronghold or
mountain fortress, and connotes an informal, outlaw
tradition. The term has been in use in China for a long time
and features most prominently in folk stories like the
Shuihuzhuan (Water Margin), which tells the adventures of
108 rebels, who hide in the mountains and fight the
establishment. Building on this common narrative, Jeffrey
describes shanzhai as the story of “outlaws who have gone
away to the mountains, doing things within their own rules.
There's an element of criminality about shanzhai, just the
way that Robin Hood is a bit of an outlaw. But it's really
about autonomy, independence, and very progressive
survival techniques” [26].
The growth of Shenzhen coincided with, and was propelled
by, an outsourcing boom, which, to quote Lüthje et al.,
“emerged from the massive restructuring of the US
information technology industry that began in the 1980s”
[30]. Throughout this period, companies in the US and
Europe moved their manufacturing facilities into low-cost
regions of the so-called developing world. Shenzhen
constituted a particularly attractive site; as an SEZ the
barriers to entry for foreign corporations were significantly
lowered, with a range of incentives including tax
reductions, affordable rents and investments aimed at
integrating science and industry with trade. The outsourcing
of factories and manufacturing clusters radically reshaped
the high-tech districts of the United States. As a result, by
the 1990s, with the rise of the “new economy,” the IT
industry was “no longer dominated by vertically integrated
giant corporations such as IBM but rather was shaped along
horizontal lines of specialized suppliers of key components
such as computer chips, software, hardware disk drives, and
graphic cards” [30].
Scholars speculate that the term was first applied to
manufacturing in the 1950s to describe small-scale family-run
factories in Hong Kong that produced cheap, low-quality
household items, in order to “mark their position
outside the official economic order” [19]. They produced
counterfeit products of well-known retail brands such as
Gucci and Nike, and sold them in markets that would not
buy the expensive originals. As electronic manufacturing
migrated to Shenzhen the informal network of shanzhai
manufacturing found a perfect product in the mobile phone.
Shanzhai production includes not only copycat versions of
the latest iPhone, but also new creations and innovations of
phone design and functionality (see Figure 1).
With the gradual upgrade of technological and
organizational skills in former low-cost assembly locations,
a process of vertical re-integration began to take place. By
the late 1990s, Taiwanese ODMs (original design
manufacturers) such as Acer, HTC, Asus and Foxconn,
which designed the manufactured product on behalf of their
brand-name customers, started to develop substantial
intellectual property rights on their own [30]. One
particularly famous example is the ODM HTC that entered
the market with its own branded cell phone. This shift
began to challenge the global leadership of established
high-tech economies.
Within China, shanzhai devices cater to low-income
migrant populations that cannot afford more
expensive branded products. Shanzhai phones have a strong
global market, targeting low-income populations in India,
Africa, and Latin America [20, 48]. As the shanzhai
ecosystem matures, we are beginning to see the
development of branded phones. Xiaomi (小米), to
take but one example, is an affordable smart phone that
comes with a chic design and makes use of sophisticated
branding techniques. Although it grew by leveraging the
shanzhai industry, Xiaomi is rarely associated with it.
1 We can’t do justice here to the complexity of Shenzhen’s history
and direct the reader to work by Mary Ann O’Donnell, Juan Du,
Winnie Wong, Josephine Ho, Carolyn Cartier, and others [9, 14,
19, 34, 35, 50].
It is particularly ironic that while open hardware hacking in
the West is celebrated as an enabler of future innovation,
the open manufacturing mechanism of shanzhai is often
denounced as holding China back on its modernization path
due to its lack of principles and norms such as international
copyright law [19]. In the next
section we describe in greater detail the particularities of
shanzhai’s open production.
During our research in Shenzhen, we met and interviewed
many different players in shanzhai production, ranging from
component producers, vendors, and traders to assembly
lines and design solution houses. One consistent element that we
found to be at the core of shanzhai was the production of
so-called “public boards,” called gongban (公版) in
Chinese: production-ready boards designed for end-consumer
electronics as well as industry applications.
Gongban are typically produced in independent design
houses that link the component producers (e.g. a chip
manufacturer) and the factories that assemble the different
parts into phones, tablets, smart watches, medical devices,
and so on.
Figure 1. Examples of four shanzhai phones (from left to
right): a phone shaped as an apple, phones shaped after a
children’s toy and a Chinese alcohol brand, and a phone that
also functions as flashlight and radio. Photos taken by authors,
2012–2014.
Rather, Xiaomi has become widely accepted as a national
phone brand that many Chinese are proud of.
While some people associate shanzhai with stealing and
low quality goods [48], there is a growing endorsement of
shanzhai as a prime example of Chinese grassroots
creativity that has innovated an open source approach to
manufacturing. One strong proponent is Bunnie Huang,
who gained widespread recognition when he hacked the
Xbox in 2003. In a series of blog posts, Huang details the
workings of shanzhai as a unique “innovation ecosystem
[that developed] with little Western influence, thanks to
political, language, and cultural isolation” [21]. Huang here
refers to a highly efficient manufacturing ecosystem that
rests on principles of open sharing that are different from,
but also compatible with, more familiar open sharing
practices.
During our research, we followed closely the process of one
of the region’s largest distributors and their internal design
house that produces about 130 gongban per year. The
design house does not sell any of these reference boards,
but rather gives them out to potential customers for free,
alongside a list of components that go into making the
board as well as the design schematics. The company
makes money by selling the components that go into the
boards. As such, it is in their interest to support as many
companies as possible to come up with creative “skins” and
“shells” (called gongmo in Chinese) that are compatible
with their boards. Their customers, then, take a gongban of
their liking as is or build on top of it. The boards are
designed so that the same board can go into many different
casings: e.g. one board can make many different smart
watches or many differently designed mobile phones. Since
2010, years before the Pebble Watch or the Apple Watch
made news, thirty-some companies in Shenzhen were shipping
their own smart watches based on this open production
mechanism (see Figure 2).
Shanzhai is neither straightforward counterculture nor
pro-system. As a multi-billion-USD industry, it is deeply
embedded in contemporary modes of capitalist production.
At the same time, with its roots in and ongoing practices of
piracy and open sharing, shanzhai challenges any inherent
link made between technological innovation and the tools,
instruments, and value systems of proprietary, corporate
research and development. As Jeffrey and Shaowen
Bardzell argue, analysis that upholds strict boundaries
between critical design and affirmative design, or between
resistance culture and capitalist culture, is often too simplistic [4].
The gongban public board functions like an advanced
version of an open source hardware platform such as the
Arduino, yet differs in that it constitutes a bridge into
manufacturing. “We call this shanzhai in Shenzhen. It’s a
mass production artwork,” explained Larry Ma
(anonymized), the head of the aforementioned distributor’s
design house. To Larry Ma, there is no question that
shanzhai is different from simple copycat. “First, shanzhai
needs creativity: it is something only a person with a quick
reaction who knows the industry chain very well can do.
Shanzhai producers are acutely aware of the global market
economy, and have developed incisive and canny strategies
to negotiate, subvert, criticize, ironize, and profit from it
[19]. The early and affordable shanzhai versions of the
smart phone, for instance, were designed for customer
segments that could not afford the expensive and branded
phones on the market. Shanzhai disrupted who gets to
define new markets and customers, and how tech business
is done. In other words, issues of concern in critical
and reflective design practice – such as “passivity,”
“reinforcing the status-quo,” “illusion of choice” [4] – are
as salient in shanzhai production as they are in conceptual
design.

Important decisions with regard to investment, release
dates, and collaboration partners are often made over
informal dinner meet-ups and weekend gatherings. These
social connections are central to getting business done in
Shenzhen, as we discuss in greater detail in the next section.
Many of our interlocutors saw themselves as belonging to a
grassroots community and maintained that it was the mutual
support of Shenzhen’s open manufacturing culture that
enabled their competitive advantage.
Shenzhen’s population comes from elsewhere: more than
95% of the city’s residents are migrants. Shenzhen’s
technology sector grew from the intersection of two early
flows. The first were technological entrepreneurs from
Taiwan, involved in the early chip industry, who sought to
take advantage of China’s economic opening and its initial
experiments with SEZs. This stream of capital cross-fed
into a giant internal movement throughout the Mainland, in
which a vast ‘floating population’, freed from the controls
of the command economy, poured into the coastal cities
looking for work. This dynamic is still very much at work
today. In the summer of 2014, Foxconn was reported to be
recruiting 100,000 workers to build the iPhone 6.
Figure 2. Gongban (public/open board) and Gongmo
(public/open casing) of a smart watch, Shenzhen, China. Photo
taken by authors, April 2014.
Shanzhai makers are asking themselves what the normal
people will need next… It is very important that you are
very familiar with the upstream and downstream industry
chain. And there is a kind of hunger. These three elements
together make it an art work… it’s about being hungry for
the future.”
Larry Ma’s R&D unit is one of many corporate entities in
the shanzhai ecosystem that have grown over the years into
substantial businesses. This growth has occurred outside the
traditional IP regime, using an open manufacturing
ecosystem rooted in open reference boards, and a culture in
which the bill of materials (a list of all the materials that
go into making a particular device, something that a
company like Apple keeps strictly closed) is shared. This
open culture of production has enabled local chip
manufacturers such as Allwinner and Rockchip to compete
with renowned international corporations like Intel. At the
crux of this manufacturing process is their speed to market,
driven by what Larry Ma describes as “hunger.” In the
shanzhai ecosystem, ideation, prototyping and design
happen alongside the manufacturing process. Products are
designed in relation to the demands of a fast changing
market. Rather than spending months or years deliberating
over the next big hit, shanzhai builds on existing platforms
and processes, iterating in small steps. In this way, shanzhai
brings new products to the market with remarkable speed.
In Shenzhen, cellphones can go from conceptual designs to
production-ready in 29 days. Products are market-tested
directly by throwing small batches of several thousand
pieces of a given product into the market. If there is demand
and they sell quickly, more will be produced. There is a
commitment to never building from scratch (an approach
that is shared by the open source community). Prototyping
and consumer testing occur rapidly and alongside the
manufacturing iteration process, rather than occurring
beforehand (where it is commonly placed in Western-centric
design models).
It is not only the promise of a better income, but the hopes
for a different future that motivate hundreds of thousands of
migrant workers every year to seek employment in
Shenzhen, often far away from their home towns and
families, sending back remittances. Though, as is widely
reported, there is an issue of sweatshop labor in Shenzhen,
many of the people we met during our research promote
Shenzhen as full of opportunities, a dream city, a place
where “you can make it” in China today. Violet Su, for
instance, worked her way up from a part-time job to
personal assistant to the CEO of Seeed Studio.2 “Shenzhen is a
good place to live,” she says. “If you go to another city,
people treat you like outsiders. But here everyone belongs.
It’s like as if everyone was born here. When I first came to
Shenzhen I really liked one of the city’s slogans that
decorated the bus: ‘When you come to Shenzhen, you are a
local person.’”
Many who enter the shanzhai ecosystem do not come from
privileged socio-economic backgrounds. Take, for instance,
Ye Wang (anonymized), the manager of a shanzhai tablet
company. Wang is one of the few who “made it.” His
company has revenue of several million USD a year,
shipping tablets to South America, Eastern Europe, Russia,
and the United States. Wang originally came to Shenzhen at
the urging of a relative who was working at the Chinese car
manufacturer BYD (Build Your Dream).
2 Seeed Studio is a Chinese hardware facilitator that sells open
hardware products and educational kits, and connects makers
driven to move from prototyping into production with Shenzhen’s
manufacturing ecosystem.
A particular social dynamic is crucial to this
design-in-manufacturing process: personal and business lives blend.
This relative helped Wang get a corporate scholarship that funded his college
education. After college, Wang entered what he calls “the
shanzhai community.” He made a name for himself by
leading a development team that produced one of the first
copycat versions of the Apple iPad. The localized, slightly
altered version of the tablet was introduced into the Chinese
market before Apple had officially released the iPad in the
United States. This did not go unnoticed by bigger
players in the shanzhai ecosystem. Wang explained how,
once one has gained trust and made a name for oneself,
it is easy to find partners who are willing to freely share
resources: “Shenzhen is working just like this. You can
understand it as crowdfunding. It works differently from
crowdfunding via online social networking … you must be
firmly settled in the industry, be recognized, have a good
personality… Everybody in the industry chain gives you
things for free, all the materials, and only when you have
sold your product, you do the bills [and pay back].”
Newcomers are drawn to the city’s abundance of materials
and the production processes located here. For many of
them, the first stop in Shenzhen is the markets of
Huaqiangbei, a 15-by-15 city-block area filled
with large department store buildings. Each mall contains a
labyrinth of stalls spread over several floors (see Figure 3).
Malls specialize in everything from basic components such
as LEDs, resistors, buttons, capacitors, wires, and boards to
products such as laptops, phones, security cameras, etc. For
makers, the markets provide immediate access to tools,
components and expertise. Ian Lesnet from Dangerous
Prototypes, a company that sells open hardware kits,
describes the lure of Huaqiangbei and Shenzhen as a whole:
“The wonderful thing about Shenzhen is that we have both
horizontal and vertical integration. In Huaqiangbei, you can
buy components. Go a little bit further out, people sell
circuit boards. A little bit further out, there are people who
manufacture things and attach components to circuit boards.
So you can actually have something built. And a little
further out there are people who make product cases. A
little further out you have garages with large-format printers
who make labels for your products and a little further out
they recycle it back down again. So you can build
something, design it entirely, have it manufactured, sell it,
and then break it down into its components and recycle it back
into the center of the markets. You have all the skills and all
the people who can do that and they are all here in one
place. And that’s what’s really enticing about Shenzhen.”
Wang, here, describes an important funding mechanism that
enables people who lack the financial resources to
nevertheless receive support from within the larger
shanzhai network. People become part of this social
network by participating in both informal face-to-face
gatherings (over dinner, lunch, at the manufacturing site)
and networking via mobile social media platforms such as
WeChat (微信). Much of the offline activity
takes place over alcohol-infused meals and in KTV bars and
massage parlors, establishments that are frequented by a
largely male clientele (all of which speaks to a strong
gender hierarchy that infuses shanzhai culture). People in
shanzhai think of themselves as driven and hard working,
committed to improving their standard of living and to
making money. Many considered the level of entrepreneurial
possibilities unique to Shenzhen: “there is no other place
like this in China. Here you find a lot of opportunities, you
can become yourself, you can realize your dream, you can
make a story out of your life.”
“Living in Shenzhen is like living in a city-size techshop,”
echoes Zach Smith, one of the co-founders of the 3D-printer
company MakerBot. Smith first came to Shenzhen when MakerBot
started to collaborate with a local manufacturing business.
Since then he has spent many years working and living in
the city and has learnt to adapt to what he calls Shenzhen’s
“native design language.” “If you come to Shenzhen, you
are going to take your American design language and you
Shanzhai production is fast and nimble mostly due to this
unique social fabric through which decisions about new
products, design and pricing are made collaboratively. This
process requires people to be “on” 24/7. Every personal
interaction, no matter if offline or online, is also about
furthering a collective goal: the expansion and spread of
business opportunities, the discovery of niche markets and
the distillation of new mechanisms that will generate
additional sales. In this way shanzhai production culture is
not dissimilar from Silicon Valley, with its male-dominated
management and entrepreneurial leadership, hard-driven
work ethic and peer pressure, all of which forms a close-knit
community of informal socializing and information
sharing [41].
Figure 3. Huaqiangbei markets (upper left to bottom right):
USB sticks shaped as plastic figurines, stacks of wires,
assortment of magnets, department store building view from
the top.
In the last few years, Shenzhen has begun to draw yet
another wave of migrants – mobile elites such as tech
entrepreneurs, hackers, makers, geeks and artists.

Designs are tuned through bodily reactions to the size of a
button or the feel of a knob, as Ian Lesnet elaborates: “When you design
electronics, it's not just an engineering problem. It's a
design process. Being able to just walk into Huaqiangbei,
touch buttons, push them, be like, ‘Oh, this one is weak.
This one is strong.’ Choosing things. Holding things. Get
this amount of knowledge that you don't get sitting at a
computer sitting somewhere else in the world” (see Figure
4). Many agreed that this tacit and embedded learning had
become central to their design process and was something
they learned only after they had arrived in Shenzhen. “In
school, they don't teach you DFM, design for
manufacturing, at all,” says Antonio Belmontes from Helios
Bikes, “the factory helps us bring our ideas down to design
for manufacturing. They also help you save money.
Especially when you approach them during the design
process.”
are going to have to translate it,” Smith explains. “If you
are out here you can start to learn that local design
language, and start using it in your own designs… It helps
you make designs that are easier to manufacture, because
you are not substituting a bunch of stuff… People out here
can build their designs in this native way. As you go and
meet with manufacturers you understand their design
process, how they want to build things, or what they are
capable of building. This changes the way you want to do
your design, because as a designer, if you are a good
designer, you are going to try and adapt to the techniques
instead of making the techniques adapt to you.”
What Smith describes here was something many of the
makers we interviewed experienced: transforming their
designs through interactions with factories, engineering
processes, machines and materials. Manufacturers and
makers work together to prototype, test materials and
functionality, continuously altering everything from the
shapes of product casings to PCB (printed circuit board)
design. Together, they iterate and shape the design of the
final product through a process that typically spans several
months of frequent, often weekly, meetings. Take, for
instance, maker entrepreneur Amanda Williams, one of the
few women active in the scene. She has been working
closely with several different manufacturing units in
Shenzhen during the process of designing an interactive
lamp. Williams reflects on these collaborations as follows:
“sometimes you find out from a factory that this won't work
or that won't work, or you can't use this size because you
need a certain amount of wall thickness or this material's
gonna break… working with the factories, we understand
how to modify our design, in order to make it better for
mass manufacturing.”
What draws tech entrepreneurs, makers and designers to
Shenzhen is that phases of ideation, design, market testing,
and industrial production evolve together in an iterative
process (as opposed to design practices in which ideation
and prototyping are thought of as phases that precede and
then guide processes of execution). What emerges is a
tactile and deeply embodied design practice that requires
close connections with both materials and the local skillsets
that many describe as a highly professionalized form of
making in action. John Seely Brown, former director of
Xerox PARC, during a visit to Shenzhen, reflected upon this
process by speaking of tacit versus explicit knowledge.
“What you are really doing,” he said speaking of hardware
production in Shenzhen, “is modulating a conversation
between your tools and the materials you are working on
for some end result. And you are overseeing that dance in
its own right.”
Makers working in Shenzhen are brought closer to the
tactility that lies at the heart of hardware design. In molding
their visions whilst enmeshed in – rather than removed from –
the context of manufacturing, their designs become tuned to
the materiality of the hardware.
Much of what we see with regards to maker
entrepreneurialism in Shenzhen today goes back to the early
efforts of Seeed Studio, a Chinese hardware facilitator that
connects Shenzhen’s world of manufacturing with the
global maker scene. Seeed Studio was founded in 2008 by
the then 26-year-old Eric Pan, and grew quickly
from a two-person start-up into a successful business that
now has more than 10 million USD in annual revenue and
over 200 employees. Seeed Studio sells hardware kits,
microcontroller platforms, and custom-made printed circuit
boards to makers. It also provides highly personalized
services. One of Seeed Studio’s core businesses is to enable
maker start-ups to move from an idea to mass production by
identifying what Eric Pan calls “pain points”—moments of
transition, where a company lacks the knowledge of how to
scale up. Seeed Studio products have gained a worldwide
reputation. They are offered for purchase online, on
maker-specific platforms, and in mainstream retailers in the US.
When HAXLR8R opened its doors as one of the first
hardware incubator programs in Shenzhen in 2012, it was
with the help and in the offices of Seeed Studio.
Figure 4. In Huaqiangbei: makers getting a “feel” for
different components. Photos by first author, 2013.
Scaling up means moving from making one thing to making
hundreds of thousands or millions of things. While Dougherty
emphasized processes of tinkering and play, Tong and Lin
focused on the role of design in the professionalized
manufacturing process, or as Lin put it: “the process of
making just one thing is very different from continuous
production. It requires cross-disciplinary work. Hardware
is different from the Internet. You need to think about
design from the beginning. Design is central to all steps of
the process of manufacturing including differentiation,
customization, standardization… You also need to design
for future manufacturing, for the next assembly you need to
think about this from the beginning of the design process.”
Figure 5. “Innovate with China,” product label by Seeed Studio.
The Shenzhen Maker Faire was Dougherty’s first visit to
China. When we interviewed Dougherty during his visit, he
reflected on the differences between making in the US and in
China. “It’s an indeterminate problem of ‘how do I get this
made?’” he said, speaking of the difficulties many hobbyist
and professional makers face in the US, “where should I go
to find the parts?” Makerspaces address part of the issue,
he further elaborated, but scaling up was almost impossible:
“They don't necessarily have the context, skill sets or
knowledge to make. Even, ‘What are the right things to
make or not make at all?’ Part of it is that American
manufacturing is geared to large companies, and so those
interfaces aren't there for a small company.” Dougherty,
here, counters the overly euphoric narratives that view
making as enabling an easy return to the “made in
America” brand. “I see this as an information problem,” he
says “you might find out while being here that if you
manufacture it this way, you should have designed it
differently.”
Eric Pan has become an influential voice of China’s maker
scene eager to demonstrate that “made in China” can mean
something more than just copycats and cheap labor. The
first thing one reads, when entering the offices of Seeed
Studio, is the tagline “innovate with China,” painted on a
large mural wall. A pun on the “made in China” brand, it is
also the label that adorns Seeed Studio products (see Figure
5). “When I came to the US in 2010, people there knew us
and liked our products, but nobody wanted to believe that
we are a Chinese company,” Pan recalls, “nobody had
thought that cool and innovative products could come out of
China. That’s why, ever since, we have been using
‘innovate with China’ on our product labels to demonstrate
that manufacturing in China can mean ‘partnership’ and
innovation instead of cheap labor and low quality.”
“Innovate with China” was also the slogan of China’s first
featured Maker Faire that took place in April 2014,
organized and hosted by Seeed Studio. The Maker Faire
constituted an opportune moment for Seeed Studio to
demonstrate its vision of China’s creative role in the world
of making and manufacturing. People who attended the
Maker Faire were well-known figures in the maker
community, and included amongst others Dale Dougherty,
founder of MAKE magazine, Chris Anderson, who
authored the book Makers, Tom Igoe who co-founded
Arduino, Jay Melican who carries the informal title “Intel’s
Maker Czar,” Eri Gentry from BioCurious, and Vincent Tong
and Jack Lin from Foxconn.
In Shenzhen, design on the factory floor is not unique to
shanzhai, as those involved in the process know well. For
instance, as we learned from a couple of visits to a big
contract manufacturer (anonymized), even companies like
Apple have their designers and engineers (just like maker
entrepreneurs) work side by side with the designers and
engineers at the factory, iterating together until the very last
minute, when the product is frozen for release. This is in
contrast to a common perception of Apple as the creator
who outsources to the cheap labor provided by Chinese
factories.
The talks and presentations at the Shenzhen Maker Faire
were wrapped between two keynote speeches: Dale
Dougherty, considered by many to be the founding father of
the US maker movement, gave the opening speech, while
Vincent Tong and Jack Lin from Foxconn gave
the closing plenary. Dougherty, in his talk, focused on the
creativity that lies in making one thing. He emphasized the
culture of hobbyist creation and tinkering that went into the
early stages of development of the first Apple computer,
and described making as an adventure where the outcomes
are uncertain. Tong and Lin, on the other hand, talked about
the opportunities and challenges that lie in scaling up.
“Apple computers are this really big example. Designed in
California, made in Shenzhen. We pride ourselves on
design and we don't have to do that other work. Remember
the paperless office. Things would just be designed on
computers and then made. It was almost like we didn't need
that dirty world near us. It could be in China… But physical
things have properties that speak to us intuitively that we
cannot just analyze on a computer screen, no matter how
much resolution we have. That's calling into question that
split between designed here and made there.”
This split rests on an abstract representation of production
[17]. It separates the
designer and maker from the embedded and embodied
practice of production and the tacit knowledge that is
essential to cultures of production documented here. Our
aim is to challenge a mythic structure of technology
innovation in which the “creative” work of design is
highlighted, while the work of manufacturing remains at
arm’s length. In short, we follow Bannon and Ehn in
arguing, alongside the tradition of design anthropology, that
the insights “from an understanding of material culture” be
“more directly fed on to the practices of participatory
design” [3]. A rigorous participatory design practice not
only includes a deep engagement with the social context of
users, but also with the material and social conditions of
contemporary production.
(Dale Dougherty, Interview with the
authors, April 2014)
This paper sets out to question a prevailing myth of
technological production in which design is separated from
what Dougherty, here, calls the “dirty world” of
manufacturing. It does so by focusing on the culture of
open production and design that has developed in Shenzhen
over the last 30 years. More specifically, our research has
concentrated on how the ecosystem of shanzhai emerged
alongside the more well-known processes of outsourcing
and governmental policy that opened up the region to
foreign investment.
In doing so, our work challenges some of the prevalent
discourses and practices around making and its engagement,
however implicit, with participatory design. Central to the
early efforts of participatory design, and critical scholarship
of computing more broadly, has been an emphasis on the
user and a desire to empower those who might have less say
in technological production. Prominent figures of the maker
movement have turned this call for individual
empowerment into a powerful business strategy, e.g. [1].
Many maker kits and smart devices are marketed as
educational in that they train their consumers to become
producers themselves. Today, many users of digital
fabrication tools and open hardware platforms are indeed
producing a wide and rich variety of software code,
electronic schematics, 3D designs, and so on. Committed to
the culture and spirit of open source, many of these users
also freely share their design contributions. Maker products,
in this sense, function much like social media apps such as
Facebook or virtual worlds like Second Life, in which the
value of the product is significantly shaped by what people
“make” with it [8]. While this certainly broadens the range
and number of “participants” in the design of technology it
is also subject to a growing critique of the “sharing
economy,” in which, “the labor of users, fans, and
audiences is being put to work by firms” [45].
Maker entrepreneurs who come to Shenzhen to turn visions
of smart and networked devices into products are
intersecting with these embedded and tactile processes of
production. Indeed, it is the close proximity to the processes
and materials of production that makes the city so enticing
to makers. As we have shown in this paper, it is not just
access to tools and machines, but a particular process of
design that draws makers into Shenzhen; prototyping is part
and parcel of fabrication, rather than preceding it; and
testing and designing evolves through daily interactions
with the workings of machines, materials, components, and
tools. From the electronic markets and craftsman
workshops to assembly lines and design solution houses,
Shenzhen immerses technology designers in a mode of
prototyping that is tied to the feel and touch of materials as
well as the concrete processes of manufacturing. Many of
the people we interviewed agreed that “being in it” was
crucial to learning, understanding, and working with what
they considered to be an open, informal and highly
professionalized design practice.
The goal of this paper has been to critically unpack
contemporary maker discourse by examining the remake of
Shenzhen. In so doing, we question the imaginary of
Shenzhen as the “Silicon Valley for Hardware,” that has
been fueled by promotional campaigns of hardware
incubators and corporate investment in the region. These
often linear stories of progress, which assume that
Shenzhen is “catching up” with innovation centers like
Silicon Valley, tend to be void of the intricacies of the
region’s production processes described in this paper; from
its history of outsourcing and piracy to the global scale of
contemporary shanzhai production. We have shown that
innovation, design and production are necessarily situated,
evolving in close relation to particular histories of
technological, economic and social development. In this,
the paper follows the call to locate design [9, 23, 25, 46] so
as to include the site of industrial production itself. Efforts
in critical computing have long called upon researchers and
designers to reflect upon “the values, attitudes, and ways of
looking at the world that we are unconsciously building into
our technologies” as well as the “values, practices and
Moreover, digital fabrication tools such as the 3D printer or
the CNC milling machine, which are envisioned to enable a
broader audience to engage with processes of making, often
keep the designer at arm’s length from the kind of tacit
knowledge necessarily involved in the manufacturingcentered design process we have described in this paper.
While digital fabrication tools provide techniques for rapid
prototyping in a design studio, they do not engage one with
the situated and embodied processes of manufacturing on a
large scale. What becomes increasingly clear from our
engagement with Shenzhen is that, to repeat Dougherty’s
point stated above, “physical things have properties that
speak to us intuitively that we cannot just analyze on a
computer screen, no matter how much resolution we have.”
Thus, whilst the promotion of a return to hands-on making
is pervasive (“everyone is a maker”), many of the software
applications aimed at bringing designers into the production
of hardware have been oriented around creating an abstract
experiences that are unconsciously, but systematically left
out” [42].
8. Boellstorff, T. 2008. The coming of age in Second Life:
an anthropologist explores the virtually human.
Princeton University Press.
Clearly this extends well beyond the common user-designer
relationship. What values, norms and attitudes towards
manufacturing and production do we consciously or
unconsciously build not just into our designs, but also into
our critical theories and practices? What new possibilities
are opened up if we take seriously diverse and distributed
cultures of production? Who is considered a legitimate
participant in the “maker” revamp of industrial production?
What expertise and work is rendered invisible as makers
turn visions of networked objects into mass-produced
artifacts? These questions recall the central concerns of
early theorists of participatory design: a deep engagement
with sites of production, labor, and manufacturing.
9. Cartier, C. 2002. Transnational Urbanism in the
Reform-era Chinese city: Landscapes from Shenzhen.
Urban Studies 39: 1513-1532.
10. Chan, A. Networking Peripheries: Technological
Futures and the Myth of Digital Universalism. MIT
Press (2014).
11. Clarke, A. 2005. Situational Analysis: Grounded Theory
after the Postmodern Turn. SAGE Publications.
12. Dourish, P. 2001. Where the Action Is. The Foundations
of Embodied Interaction. MIT Press.
13. Dourish, P. and Mainwaring, S.D., "Ubicomp’s Colonial
Impulse," Proc. of UbiComp'12, Springer (2012), 133142.
We would like to thank everyone who contributed to this
research, particularly those at Seeed Studio, Chaihuo
DFRobot, and XinCheJian
as well as all the
makers, entrepreneurs, and shanzhai producers, who shared
with us their time and insights. This research was in part
funded by the National Science Foundation (under award
#1321065), the Lieberthal-Rogel Center for Chinese
Studies, and the Intel Science and Technology Center for
Social Computing.
14. Du, J. Shenzhen: Urban Myth of a New Chinese City.
Journal of Architectural Education, Volume 63, Issue 2,
15. Factory in a day.
16. Greenspan, A., Lindtner, S., Li, D. 2015. Atlantic
17. Hartman, B., Doorley, S., Klemmer, S. Hacking,
Mashing, Gluing: Understanding Opportunistic Design.
IEEE Journal of Pervasive Computing, 2008, Vol. 7, No.
3, pp. 46-54.
1. Anderson, C. 2012. Makers. The New Industrial
Revolution. Crown Publishing Group, New York.
18. Hertz. G. 2013. Critical Making. Last accessed
September 2014.
2. Ames, M., Bardzell, J., Bardzell, S., Lindtner, S., Mellis,
D. and Rosner, D. Making Cultures: Empowerment,
Participation, and Democracy - or Not? In Proc. Panel
at CHI'14, ACM Press (2014).Anderson, C. Makers: the
new industrial revolution. Random House, 2012.
19. Ho, J. 2010. Shanzhai: Economic/Cultural Production
through the Cracks of Globalization. Crossroads:
Cultural Studies Conference.
20. Houston, L. Inventive Infrastructure: An Exploration of
Mobile Phone, Dissertation: Lancaster University,
3. Bannon, L. and Ehn, P. 2012. Design Matters in
Participatory Design. In: J. Simonsen and T. Robertson
(eds) Routledge Handbook of Participatory Design, pp.
21. Huang, B. 2013. The $12 Gongkai Phone., last
accessed September 2014.
4. Bardzell, J. and Bardzell, S. 2013. What is “Critical”
about Critical Design. ACM Conf. Human Factors in
Computing Systems CHI 2013 (Paris, France), pp.
22. Intel Newsroom. 2014. Intel CEO outlines new
computing opportunities, investments and collaborations
with burgeoning China Technology Ecosystem.
blog/2014/04/01/ last accessed June 3, 2015.
5. Bardzell, J., Bardzell, S., and Hansen, L.K. 2015.
Immodest Proposals: Research Through Design and
Knowledge. Proc. of ACM Conference Human Factors
in Computing Systems CHI'15 (Seoul, South Korea).
23. Irani, L., Vertesi, J., Dourish, P., Philip, K., and Grinter,
R. 2010. Postcolonial Computing: A Lens on Design
and Development. Proc. ACM Conf. Human Factors in
Computing Systems CHI 2010 (Atlanta, GA), 13111320.
6. Beck, E. 2002. P for Political: Participation is Not
Enough. Scandinavian journal of Information Systems,
14 (1), pp. 77-92.
7. Bødker, S. 1996. Creating Conditions for Participation –
Conflicts and Resources in Systems Development.
Human-Computer Interaction, (11:3), pp. 215-236.
24. Ito, J. 2014. Shenzhen trip report – visiting the world’s
manufacturing ecosystem. LinkedIn Pulse.
6-1391 last accessed September 2014.
25. Jackson, S.J., Pompe, A. and Krieshok, G. Repair
worlds: maintenance, repair, and ICT for development
in rural Namibia. In Proc. the ACM 2012 conference on
Computer Supported Cooperative Work (CSCW 2012),
ACM Press (2012), 107-116.
38. Philip, K., Irani, L., and Dourish, P. 2012. Postcolonial
Computing: A Tactical Survey. Science, Technology,
and Human Values. 37(1), 3-29.
26. Jefferey, L. 2014. Mining Innovation from an
Unexpected Source: Lessons from the Shanzhai. Lecture
at the Shenzhen Maker Faire, 2014.
39. Rosner, D. and Ames, M. 2014. Designing for repair?:
infrastructures and materialities of breakdown. In Proc.
the 17th ACM conference on Computer supported
cooperative work and social computing, 319-331.
27. Kensing, F. and Blomberg, J. 1998. Participatory
Design: Issues and Concerns. Computer Supported
Cooperative Work (7:3-4), pp. 167-185.
40. Rosner, D. Making Citizens, Reassembling Devices: On
Gender and the Development of Contemporary Public
Sites of Repair in Northern California. Public Culture
26, 1 72 (2014), 51-77.
28. Norman, D. and Klemmer, D. 2014. State of Design:
How Design Education must change. LinkedIn Pulse.
41. Saxenian, A. Regional Advantage: Culture and
Competition in Silicon Valley. Harvard University Press
29. Lindtner, S., Hertz, G., Dourish, P. 2014. Emerging
Sites of HCI Innovation: Hackerspaces, Hardware Startups, Incubators. In Proc. of the ACM SIGCHI
Conference on Human Factors in Computing Systems
CHI'14 (Toronto, Canada), pp.439-448.
42. Sengers, P., Boehner, K., David, S., Kaye, J. 2005.
Reflective Design. AARHUS’05, pp.49-58.
43. Shapiro, D. 2005. Participatory Design: the will to
succeed. AARHUS’05, p.29-38.
30. Luethje, B., Huertgen, St., Pawlicki, P. and Stroll, M.
2013. From Silicon Valley to Shenzhen. Global
Production and work in the IT industry. Rowman &
Littlefield Publishers.
44. Sivek, S.C. "We Need a Showing of All Hands"
Technological Utopianism in MAKE Magazine. Journal
of Communication Inquiry 35, 3 (2011), 187-209.
45. Söderberg, J. 2013. Automating Amateurs in the 3D
printing community: connecting the dots between
‘deskilling’ and ‘user-friendliness.’ Work organization,
labour and globalization, vol 7, no. 1, pp. 124-139.
31. Mellis, D.A. and Buechley, L. Do-it-yourself
cellphones: an investigation into the possibilities and
limits of high-tech diy. In Proc. the 32nd annual ACM
conference on Human factors in computing systems
(CHI 2014), ACM Press (2014), 1723-1732.
46. Suchman, L. 2002. Located Accountabilities in
technology production. Scandinavian Journal of
Information Systems, 14 (2): 91 – 105.
32. Milian, M. The Sticky Situation that delayed the Pebble
Smartwatch. Global Tech, Bloomberg, Sep 16, 2013.
47. Turner, F. 2006. From Counterculture to Cyberculture:
Stewart Brand, the Whole Earth Catalogue, and the Rise
of Digital Utopianism. Chicago: University of Chicago
33. Nguyen, Josef. MAKE Magazine and the Gendered
Domestication of DIY Science. Currently under the
review for: Perspectives on Science.
34. O’Donnell, M-A. 2010. What exactly is an urban village
anyway? last accessed June
1, 2015.
48. Wallis, C. and Qiu, J. 2012. Shanzhaiji and the
Transformation of the Local Mediascape of Shenzhen.
In Sun, W. and Chio, J. (eds). 2012. Mapping Media in
China, pp. 109-125, London: United Kingdom:
35. O’Donnell, M-A. 2011. Utopian Shenzhen 1978-1982. last accessed June 1, 2015.
49. Williams, A. and Nadeau, B. 2014. Manufacturing for
Makers: From Prototype to Product. Interactions XXI,
Nov-Dec 16, p.64.
36. Office of the Press Secretary, the White House (2013).
Remarks by the President in the State of the Union
Address. (
50. Wong, W.W.Y. Van Gough on Demand: China and the
Ready Made. University of Chicagor Press, 2014.
51. Zimmerman, J., Forlizzi, J., Evenson, S. 2007. Research
through design as a method for interaction design in
HCI. In Proc. of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 493-50.
37. Office of the Press Secretary, the White House (2012).
Remarks by the President on manufacturing and the
economy. (
Material Speculation: Actual Artifacts for Critical Inquiry

Ron Wakkary1,2, William Odom1, Sabrina Hauser1, Garnet Hertz3, Henry Lin1
Simon Fraser University, Surrey, British Columbia, Canada1
Eindhoven University of Technology, Eindhoven, Netherlands2
Emily Carr University of Art and Design, Vancouver, British Columbia, Canada3
{rwakkary, wodom, shauser, hwlin}; [email protected]; [email protected]

Copyright © 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM.
5th Decennial Aarhus Conference on Critical Alternatives, August 17-21, 2015, Aarhus, Denmark.

ABSTRACT
Speculative and fictional approaches have long been implemented in human-computer interaction and design techniques through scenarios, prototypes, forecasting, and envisionments. Recently, speculative and critical design approaches have reflectively explored and questioned possible and preferable futures in HCI research. We propose a complementary concept, material speculation, that utilizes actual and situated design artifacts in the everyday as a site of critical inquiry. We see the literary theory of possible worlds and the related concept of the counterfactual as informative to this work. We present five examples of interaction design artifacts that can be viewed as material speculations. We conclude with a discussion of the characteristics of material speculations and their implications for future design-oriented research.

Author Keywords
Material Speculation; Speculative Design; Design Fiction; Critical Inquiry

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI)

INTRODUCTION
Interaction design and human-computer interaction (HCI) have long borrowed from fiction in design techniques like scenarios, personas, enactments, and even prototyping. Speculative inquiries in design like futuring, forecasting, and envisionments have also deeply incorporated practices of fiction. Recently, design fiction has emerged as a uniquely productive approach to speculative inquiries. Most importantly, design fiction has extended the speculative aim of design, its future orientation, into more reflective realms that critically challenge assumptions we hold about design and technology. This is a valuable step in interaction design research toward offering approaches to more critical speculative inquiries.

In considering the productive pairing of design and fiction to advance critical speculation, there is an opportunity to explore other forms of fiction-informed practices that might nurture and expand interaction design research efforts. To date, fictional thinking in design has focused on science fiction and scenarios, and on conceptual artifacts like non-functioning prototypes, storytelling props, and fictional objects. The HCI community has paid less attention to other theories of fiction beyond science fiction. Relatedly, HCI researchers have largely overlooked the role that actual and situated artifacts in the everyday can play in speculative and critical inquiries in design. This shift in attention to actual and situated artifacts would reveal design artifacts and everyday settings to be sites for speculative and critical inquiry.

This paper introduces a complementary concept to design fiction that we call material speculation. This concept draws on the literary theory of possible worlds [cf. 48]. Material speculation emphasizes the material or mediating experience of specially designed artifacts in our everyday world by creating or reading what we refer to as counterfactual artifacts. Material speculation utilizes physical design artifacts to generate possibilities to reason upon. We offer material speculation as an approach to critical inquiries in design research. In plain fashion, for this paper we consider speculative inquiries that aim to generate progressive alternatives to be critical inquiries.

Our work builds on speculative and critical design, which can be seen as broad yet established approaches to design aimed at exploring and questioning possible, plausible, probable, and preferable futures [18, 19, 24]. Notions of speculative and critical approaches to design have a long history that extends across several disciplines and continues to be the subject of ongoing theorization and debate [4, 18, 41, 30, 1, 50, 27]. A primary goal of this paper is to contribute to the growing relevance of and interest in a speculative and critical position on design in the HCI community. We do this by proposing material speculation as a conceptual framing for reading and creating design artifacts for critical inquiry. It is important to note that we do not propose material speculation as a means of classification or definition of artifact types or design approaches. In this sense, aspects of material speculation may well overlap with design fiction, or related notions like speculative design or critical design. Our aim is not to develop a mutually exclusive term or to create a hierarchy among concepts. However, we believe there are unique benefits and outcomes in using the conceptual framing of material speculation to understand design artifacts and to create them.
“Design Fiction is making things that tell stories. It’s like
science-fiction in that the stories bring into focus certain
matters-of-concern, such as how life is lived, questioning
how technology is used and its implications, speculating
about the course of events; all of the unique abilities of
science-fiction to incite imagination-filling conversations
about alternative futures.”
Our contributions in this paper are multi-fold. We extend
the critical and reflective speculation in interaction design
and HCI research by articulating the concept of material
speculation. This concept advances the notion that the
material existence of specifically designed artifacts situated
in the everyday represent a unique and productive approach
to critical inquiry. This paper also contributes a theorized
understanding of material speculation and, by extension,
fiction-focused practices in design research through the
reasoned adaption of possible worlds theory to interaction
design research. More broadly, the paper can be seen as
another step forward in supporting the need for more
reflective and critical forms of knowledge production in
HCI and interaction design. Importantly, we stress that this
type of critical inquiry occurs through the conceptualizing
and crafting of design artifacts to generate theoretical
articulations and intellectual argumentation.
For Dourish and Bell [17], science fiction provides a
representation of a practice in which technical and material
developments will be understood. It is not only that science
fiction stories offer imaginary prototypes of things to be but
also that science fiction creates environments in which
these things are discussed, understood, and used. The
fictional embedding of design and technology with fictional
people and in practices brings to the fore the cultural
questions of these futures and the roles of technologies.
Dourish and Bell [17] argue that these cultural issues are
inherent in our notions of design and technology. Science
fiction reveals our prior cultural commitments before any
implementation of design or technology. What emerges in
their readings of science fiction is an “imaginative and
speculative figuring of a world” in which new things and
technologies will inhabit; and the bringing into focus of the
“central role of sociological and cultural considerations”
that are often obscured in our techno-centric reasoning of
actual technologies [17]. What is evident is that science
fiction affords an enhanced form of critical reasoning on
technologies and design.
The review of related work that follows is intended to
establish two main points: 1) the potential of critical
speculation in design, 2) the crafting and material strategies
in speculative inquiries in design that serve as antecedents
and inspirations for material speculation.
Similarly, Reeves [42] sees design fictions as texts to be
productively read and unpacked. Reeves argues a greater
role should be given to fiction in the futuring activities of
design that he refers to as envisioning. In his view, a more
critical envisioning would disentangle the aspects of fiction
from the less productive qualities of forecasting and
extrapolations [42, p.1580] (see [10] as an exemplary
approach to envisioning through fiction). Reeves
specifically cites Bleecker’s method of design fiction that
sets out the goals of not only reading but generating design
fictions that express multiple futures and by that let go or
challenge assumptions about the direction and breadth of
progress. Bleecker and Reeves see in design fictions a
design method that engages assumptions of the future as a
means to derive critical understandings of the present.
Speculation and fiction in design
Fiction has had a long trajectory of use in interaction design
research, particularly as a means to aid the process of
interaction design and more recently as a mode of critical
inquiry. Design as a discipline is concerned with change
and preferred futures. As a result there is a natural
orientation towards the future and the use of futuring
activities in design. For example, the creation of personas fictional characters representing potential users – [15] and
scenarios – narrated descriptions of future design details to
inform design rationales – [14]. In parallel, prototyping has
been a useful technique in design leading to final (future)
designs through mockups and models. These practices of
design can be seen to draw on fictional thinking or include
fictional components. In design fiction, the use of fictional
practices in design becomes more explicit and a source for
investigating the speculative potential of design.
Similarly, Bardzell and Bardzel [3] see how fiction and
science fiction can reinvigorate visioning in HCI through
what they conceptualize as cognitive speculation.1 They
propose a form of speculative thinking that is grounded in
the realities of current science but rely on imaginative
extrapolation that is intellectually rigorous and reasoned.
This methodological approach mobilizes science fiction to
Design fictions relate to representations of the future from
science fiction to design scenarios that detail “people,
practice and technology” [8, p.133]. Discussions on
technological futures have been well established within
Science and Technology Studies (STS) [13, 52, 43], yet
these discussions are relatively new to interaction design
and HCI. The term design fiction arose in a presentation
given by Julian Bleecker in 2008 [9]. Bleecker sees in the
idea of science fiction a genre-methodology for design [9]:
It should be noted that our use of the name “material
speculation” is largely coincidental to the name of Bardzell
and Bardzell’s concept.
critically inquire and envision technological futures that
foreground the lived experiences of the future world.
speculate on industrial progress and consumer culture, and
on the nature of design itself in these contexts. Dunne
makes clear that the ‘functionality’ of these design artifacts
is to act as materialized props through which alternative
stories emerge that operate in the space between rationality
and reality, where “people can participate in the story,
exploring the boundaries between what is and what might
be” [20, p. 67]. In other words, through leveraging the
seductive language of design, design artifacts are crafted to
provoke people to imagine it in use and the possible future
that would manifest around it—to “become lost in a space
between desire and determinism” [20, p. 67].
These deeply articulated discussions help to reveal the
significance and potential of science fiction in design and
critical inquiry. In essence, the practices of science fiction
bring to design research the reasoning on multiple futures
that challenge assumptions and the sociological, cultural,
and political tendencies that underlies our representations
and considerations of design and technology.
In considering a design orientation to critical inquiry,
cognitive speculation can be seen to have higher-level
concerns that run parallel to our comparatively grounded
concerns with material speculation. With respect to design
fiction, its limitation is its emphasis on the creation or
reading of fictional texts that are embedded with references
to design and technologies. The artifact in design fictions is
a mere reference, prop, or non-functioning prototype
referred to as a diegetic prototype based on Kirby [29]. To
paraphrase Kirby, a diegetic prototype is a technology or
technological artifact that only exists in the fictional world
but fully functions in that world. In opposition to that, our
discussion of material speculation opens the critical
functioning of alternative futures in design through the
crafting of material artifacts that operate and exist in the
actual world. This shift to materialized and crafted
speculations draws on the work we generally refer to as
speculative design.
Dunne and Raby [18] develop this strategy further through
their notion of physical fictions (p. 89), where design
artifacts extend beyond being merely props for films never
made, to being situated as things in exhibition spaces that
“prescribe imaginings” and “generate fictional truths” (see
p. 90). The aim of physical fiction design artifacts is to
critically project different possibilities and to “challenge the
ideas, values, and beliefs of our society embodied in
material culture” (p. 90). Similar to Sengers and Gaver’s
[49] aim to shift the site of meaning-making from the maker
to the user, physical fictions aim to open up moments of
suspended belief and, in doing so, shift our role from user
to imaginer. However, similar to para-functional objects
these are discursive objects—crafted interventions to create
discussions. Dunne and Raby [18] refer to these as
“intentional fictional objects” with no aim to be real:
“physical fictions that celebrate and enjoy their status with
little desire to become ‘real’” (p.89). For Dunne and Raby,
the ideal dissemination for their research and
experimentations is in the form of exhibitions in museums
and galleries, which “function as spaces for critical
reflection” [18, p. 140].
Crafting material speculations
Speculative artifacts have played important and ongoing
roles in design-oriented research in and outside of HCI. For
example, Sengers and Gaver [49] unpack a range of
speculative design artifacts that critically inquire into—and
often complicate or unsettle—the relationship between
functionality and user interpretation in interactive systems
design. While these design artifacts were diverse and
targeted various contexts, they are united in their aim to
speculatively open up situations that subvert a single
authoritative interpretation of a system in the service of
provoking people to arrive at their own self-determined
understanding of the meaning and ‘use’ of a system. Across
these cases, ambiguity is leveraged as a resource to create
embodied, functional systems to provoke dynamic, varied,
and speculative interpretations of the design artifacts from
the perspective of the user.
Designers Auger and Loizeau (see create speculative design projects that, similar
to Dunne and Raby, take the form of installations within
galleries. Drawing on a range of projects from the Royal
College of Art, Auger [2] offers insights into the crafting of
speculation in design. Borrowing from a range of
techniques in the humanities and sciences, Auger reflects
on important dimensions surfaced from these speculative
design projects—from generating tension to conflict with
engrained systems in our familiar everyday ecologies to
carefully managing the uncanniness of the design artifact to
provoke viewers to engage with the issue(s) it speculates
on. These dimensions are important for constructing what
Auger calls the perceptual bridge: “In effect, a design
speculation requires a bridge to exist between the
audience’s perception of their world and the fictional
element of the concept” [2, p. 2].
Dunne’s earlier notion of para-functionality [20] predates
the related work of Sengers and Gaver [49], where
speculative design artifacts are intentionally crafted to
encourage reflection on how technological devices and
systems shape (and often constrain) people’s everyday
lives, behaviors, and actions in the world. In articulating
para-functionality, Dunne draws on a wide range of
examples from furniture exhibitions to radical architecture
proposals to satirical design projects to unpack how design
artifacts can construct social fictions that critically
The work discussed above reveals the role the materialized
design artifact can play in critical inquiry and to shift the
authority of the interpretation to the “imaginer” or “user”. A
further lesson is that the material forms, along with the
concepts, mutually shape the inquiry and through this
process become unique or specialized types of artifacts. In
all cases the artifact serves as a bridge between our current
world and an imagined critical alternative or transformed
view of our world.
advantageously occupy a creative space at the boundary
between actual and possible worlds. We also elaborate on
how counterfactual artifacts generate possible worlds
through encounters with people. As a consequence of these
features, material speculation acts as a form of critical
The intent and shaping in Dunne and Raby, and Auger and
Loizeau are rhetorical strategies aimed at material artifacts
as discursive interventions. As such, these designers situate
their work in exhibitions arguing it occupies a critically
reflective space between the real and the unreal. We argue
through material speculation that the converse is equally
insightful—that material speculations find a critical space
of inquiry by occupying the actual or everyday world as
opposed to a gallery space.
Possible worlds theory
Possible worlds is a philosophical concept developed in the
latter twentieth century by the analytical school, including
philosophers Saul Kripke and David Lewis [34, 48] and
was later adopted by literary theorists [cf. 38, 22, 45].
Philosophically, possible worlds is an approach to the
problem of counterfactual statements in modal logic. For
example, Kripke asks what is the truth condition of the
statement that Sherlock Holmes “does not exist, but in other
states of affairs he would have existed” [31]; or this
counterfactual statement by Ryan [48], “if a couple hundred
more Florida voters had voted for Gore in 2000, the Iraq
war would not have happened.” In modal logic, the
question is how is each of these counterfactual statements
interpreted to be true or false. The philosopher David Lewis
who bridged analytical philosophy to literary theory [32], offered the position that propositions such as counterfactual statements can be seen to be either true or false depending on the worlds in which the statement is true and the worlds in which it is false [34]. This allows a reasoned argument to be made about the inevitability of the Iraq war had Al Gore indeed been elected president in 2000, despite the fact that he was not. It likewise allows the fictional world of Sherlock Holmes to unfold such that any faltering of the detective’s deductive reasoning would be perceived as false, or as a negative development in the character’s intellect.
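Lewis’ position sketched above has a compact standard formulation in possible worlds semantics; we restate it here for reference (the notation is ours):

```latex
% Lewis's truth condition for a counterfactual "if A were the case,
% C would be the case" (written A \square\!\!\rightarrow C) at a world w:
% it is (non-vacuously) true at w iff some world where A and C both hold
% is closer to w than any world where A holds but C does not.
\[
  w \Vdash (A \mathrel{\square\!\!\rightarrow} C)
  \iff
  \exists w' \big( w' \Vdash A \wedge C \ \text{ and } \
  \forall w'' \, ( w'' \Vdash A \wedge \neg C \;\Rightarrow\; w' <_w w'' ) \big)
\]
```

Here $<_w$ orders worlds by comparative similarity to $w$; the Gore example is true just in case the worlds most similar to ours in which Gore wins are worlds without the Iraq war.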
In relation to the open-endedness and leveraging of ambiguity in design in the work of Sengers and Gaver [49], work like the Prayer Companion [23] serves as an antecedent to material speculation: by crafting and situating a very particular material and functional form in the everyday world, it speculates and reasons in a highly critical fashion.
The use of science fiction in design has extended it into realms
of critical inquiry that have productively opened new
territory. Design fiction makes explicit the potential of
fiction in combination with design to challenge and
reconsider assumptions and underlying issues in order to
more critically and reflectively consider next steps and
advances in design and technology. Moving beyond
fictional texts that embed references to design and
technologies, crafted speculations are equally critical and
have the capacity and dexterity to tackle broad topics for
inquiry that may reflect back on design or focus beyond the
field. Crafted speculations reveal the potential of shifting
interpretation and meaning making to users and audiences
of design through an openness and provocation embodied in
the design artifacts. In the next section, we begin our
descriptions of material speculation and how it contributes
to this body of work.
Counterfactuals are central to the theory of possible worlds.
By virtue of contradicting one world (e.g., the world in
which Al Gore lost the presidential election), they elicit and
open up another possible world (e.g., a world in which Al
Gore won the presidential election). Lewis describes
counterfactuals as similar to if…then operators that create
conditional modes in which possible worlds may exist [32].
In what follows, we begin with an introduction of the
literary theory of possible worlds. We follow this with a
description of five examples of interaction design artifacts
that can be viewed as material speculations. Lastly, we
describe and interpret characteristics of a material speculation.
Possible worlds theory relies upon the idea that reality comprises all that we can imagine and that it is composed of the “actual world” and all “possible worlds” [47]. Philosophically, there are different approaches to this idea; however, Lewis’ view tends to prevail and is most influential with respect to literary theory [47]. Lewis sees the actual world as the concrete physical universe over time. In most respects, possible worlds are materially similar to the actual world; however, they are not connected to it in any way spatially or temporally [33]. Importantly, Lewis also views the actual world as having no privilege over possible worlds; rather, the actual world is simply our world, the one we inhabit. The actual world is indexical. It merely refers to the world of the inhabitant or the one who is
Manifestations of possible worlds
Here we articulate how particular design artifacts can be
seen to generate possible worlds. We draw on key concepts
from possible worlds theory to support our idea of material
speculation. These include the notion of actual versus
possible worlds and the notion of the counterfactual. We
discuss how design artifacts can be seen as counterfactual
artifacts while still being material things. We argue that the
material actuality of counterfactual artifacts enables them to
speaking within a given world. In this sense, all worlds, like the actual world, hold their own internal logic. Ryan [48] referred to this potential in her principle of minimal departure, in which she argues that in the case of fiction a reader construes a possible world to conform as much as possible to his or her actual world. In other words, the reader departs from his or her perceived reality only when necessary. The obvious benefit for fictional authors is that there need be no accounting for the rising and setting of the sun; if it’s not described in the text, a reader can assume
the daily rotation of the planet. In addition, critical
differences can be focused upon such as in a reference to a
winged horse—a reader can imagine the combination of a
known horse with known wings and speculate on that
difference between the possible and the actual. These
aspects give a critical functioning to the boundary between
the actual and the possible. Truth conditions of the possible
are seen to be relevant to the actual or at least open to be
speculated upon. Further, there is a set of relational
propositions that are automatically considered such as:
What kind of saddle might a winged horse have? Where do
flying horses migrate and settle? Is there a whole new
biological class between the classes of mammals and birds?
The theorist Thomas Pavel [38] referred to this as the
adoption of a new ontological perspective that gives
possible worlds a degree of autonomy.
Metaphorical transference of possible worlds to interaction
design in material speculation
Amongst literary theorists there is the question of the
legitimacy of considering fictional worlds as possible
worlds [e.g. 44]: Would analytical philosophers validate the
idea of fictional worlds as possible worlds? Ryan is equally
content with the notion of a metaphorical transference
between disciplines [48].
She cites fellow theorist Lubomír Doležel to argue that even if it is considered a metaphorical transference, the validity of the application of possible worlds theory lies in its potential to identify unique features of fiction that other approaches do not [48]. It is in
this spirit that we extend possible worlds theory to
interaction design and HCI.
Material speculation is the adaptation of possible worlds
theory to design research. As we have discussed, when
considering possible worlds and counterfactuals, in
philosophy or fiction we are concerned with either a
statement of logic or a text. In design, we are concerned
with a material thing. In design and HCI, a counterfactual is a virtual or tangible artifact or system rather than a statement or text. Hence we refer to it as a counterfactual artifact. The notion of an actual counterfactual is a
departure from Lewis’ criterion that possible worlds have
no spatial or temporal connections to the actual world—
they are remote. Yet, here we view this departure as advantageous rather than problematic.
Given that counterfactual artifacts sit on the actual side of the boundary between the actual and possible worlds, this sense of a new ontological perspective is arguably more
pressing. An actual counterfactual artifact not only opens
up speculation on the artifact but on its conditions as well.
When encountering a material speculation, potential
reasoning would include not only ‘what is this artifact’ but
also ‘what are the conditions for its existence’ (e.g.,
including the systemic, infrastructural, behavioral,
ideological, political, economic, and moral). Material
speculation probes the desirability of the truth condition of
the proposition and the conditions bound to it.
The creative boundary between the actual and the possible
There is a productive and creative space at the boundary
between the actual and possible worlds, or the real and the
fictional. There are many examples from fiction in literary
texts, theatre or film where authors intentionally blur the
distinction between actual and possible worlds for its
creative possibilities. Whole genres have emerged like
mystery or interactive dinner theatres that directly involve
audiences in the fictional world. Live action role-playing
games, augmented reality games or alternate reality games
actively transgress the boundary between the real and
fictional. Janet Murray’s notion of the “fourth wall” in interactive media aimed to cross the divide between theatrical illusion and actuality [35]. In these cases, interactivity is the
counterfactual action that crosses the divide: fictional
characters are not supposed to interact with actual people or
in the actual world. In material speculation, it is making the
counterfactual into an actual artifact that crosses the divide
between the actual and possible worlds since, as we
discussed earlier (see Possible worlds theory),
counterfactuals are not supposed to exist in the same time
or place as the actual world.
The counterfactual artifact as proposition and generation of
possible worlds
In material speculation we can see the counterfactual artifact as an embodied proposition, similar to propositions in counterfactual statements in analytical philosophy. It is
helpful to think of the counterfactual artifacts as being
if…then statements as we discussed earlier (see Possible
worlds theory). In this sense, the counterfactual artifacts
trigger possible world reasoning that extends beyond them.
In other words, the possible world or fictional account is not embodied fully in the counterfactual artifact; rather, it is generated by interactors in the encounter or experience of
the counterfactual artifact. It is not a limitation that the
counterfactual artifact is of our actual world, rather it is this
very actuality that provokes or catalyzes speculation by
being at the boundary of the actual and the possible.
In fiction, the discussion of where the possible world is situated is more complex. Since the influence of post-structuralist thinking on literary theory, namely in the concepts of the open work by Umberto Eco [21] and of textuality by Roland Barthes [7], meaning and fiction are seen to be generated in the act of reading by readers and not solely by the author. The importance of this for material speculation is that those interpreting or reasoning upon the counterfactual artifacts also generate possible worlds in multiplicity. Eco viewed a literary text as “a machine for producing possible worlds” [21, p.246] and in this sense we view a counterfactual artifact as a “machine” for producing possible worlds.
our aim is to illustrate how they exemplify counterfactual artifacts. We only hint at the multitude of possible worlds each may generate, since this is part of the lived experiences of each example. Our accounts of possible lived worlds of these material speculations are not intended to be exhaustive, since this would require a separate and more in-depth treatment of each that is beyond the aims of this paper.
Inaccessible Digital Camera [40]—The inaccessible digital
camera is a digital camera that is made of concrete; all
photos are stored locally inside of its concrete case. The
only way for the owner to view the photos stored on the
camera is to, in effect, break the camera and retrieve the
memory card stored inside (see figure 1). The inaccessible
camera is part of a larger set of ‘counterfunctional devices’
designed to explore how enforcing limitations in the design
of interactive technologies can potentially open up new
engaging possibilities and encounters. In a later project
[39], elements of the inaccessible camera design were
embodied in another counterfunctional camera, named the
Obscura 1C, which again had a form comprised of cement
that required its owner to break it to access the digital
photos stored on a memory card inside. Participants were exposed to the capsule camera in a lab setting and, later in a different study, the Obscura 1C was handed out or sold to people via an online website.
The transference of criticality to interaction design
The adaptation of possible worlds gives fiction an immense
criticality. Fictional texts can speculatively yet critically
inquire upon our world. As Ryan argues [48], fiction has the capacity for truth and falsity, giving it more consequence than when it is perceived as artistic lies or fantasy. Fiction
assumes a real world shared between actual and possible
worlds giving it a perch for relevance and critical insights
into our actual world. However, through counterfactuals, it
does not mimic the actual world, rather readers and authors
alike construct possible worlds different than the actual
world leading to creative and reasoned speculations [48]. In
interaction design, counterfactual artifacts can also be seen
to gain a perch in this critical inquiry space of consequential
propositions rather than matters of functionality or
The critical nature afforded to fiction with recourse to
possible worlds is that its embodied propositions can be
accepted as truthful under certain conditions. With respect
to the actual world, or our world, the choice can be to regard a proposition as false under the conditions of our actual world. Or it can be seen as a critical alternative, which is to change the conditions of our actual world to make the proposition truthful. This, in essence, is the model
for material speculations in interaction design research as a
mode of critical inquiry.
The inaccessible camera can be seen as counterfactual in
that it draws its owner into a familiar device and
interaction—taking a photo with a camera. However, its
form and composition depart into an alternative situation in
which one must destroy the digital device recording one’s
life experiences in order to access these digital records. In
our contemporary world of constant availability and
connectedness, these counterfunctional cameras project a
critical stance on ‘functionality’—one based on inhibiting,
restricting or removing common or expected features of a
technology. To initiate consumption of one’s digital
photographs, one must first encounter the discomfort of
destruction. On a broader level, encounters with the
inaccessible camera invite critical reflection on one’s own
practices contributing to unchecked digital content
production and the almost unnoticed or assumed eventual
obsolescence and disposal of everyday digital devices.
In summary, we can see how possible worlds theory, enabled by the work of literary theorists, can be applied to
interaction design research to develop the notion of material
speculation. The basic outlines of material speculation can
be summarized as the manifestation of a counterfactual in a
material artifact that we refer to as a counterfactual artifact.
As a material thing it occupies the boundary between actual
and possible worlds. The counterfactual artifact is also an
embodied proposition that, when encountered, generates
possible world accounts to reason on its existence. These
two aspects combined afford material speculations a
position in critically speculating on the actual world.
Examples of material speculation
In what follows we provide an overview of examples of
interaction design artifacts that can be read as material
speculations. We aim to emphasize their actual material
existence situated in everyday settings. With our examples,
we focus largely on the description of the design artifacts as
Figure 1. The Inaccessible Digital Camera [40].
Rudiment #1 [26]— Rudiment #1 is a small machine
encased in wood and plastics that affixes magnetically to
surfaces, such as a refrigerator door (see figure 2). This
machine consists of two main parts or ‘modules’ that are
connected by a flexible cable. One part moves across a
magnetic surface with magnetic wheels when its narrow-range infrared detector senses peripheral movement. The speed and direction are also randomly changed each time these sensors detect nearby movement. The second part is
wired to the first part; it provides the first module with
power and also signals further movement when its own
wide-range infrared detector senses movement. Both parts
are able to detect when an edge or obstacle is reached and
are programmed to change direction if either are
encountered. The authors’ aims for designing Rudiment #1
(and also its cousins Rudiment #2 and #3) were to
speculatively explore how interactive machines might
exhibit autonomy, and how this might be interpreted and
speculated on by people living with them. Two households
in the southeast region of the United Kingdom experienced
Rudiment #1 for roughly four weeks each.
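The behavior just described (movement on sensed peripheral motion, random changes of speed and direction, reversal at edges and obstacles) can be sketched as a single control step. This is our illustration of the described logic, not the authors’ implementation; all names are ours:

```python
import random

def rudiment_step(narrow_ir_triggered, wide_ir_triggered, at_edge, state):
    """One illustrative control step for Rudiment #1 (names and
    structure are our assumptions, not the authors' firmware).
    `state` holds 'speed', 'direction' (+1/-1), and 'moving'."""
    if at_edge:
        # both modules reverse when an edge or obstacle is reached
        state["direction"] *= -1
    if narrow_ir_triggered or wide_ir_triggered:
        # nearby movement randomizes speed and direction and starts motion
        state["speed"] = random.uniform(0.1, 1.0)
        state["direction"] = random.choice([+1, -1])
        state["moving"] = True
    return state
```

The sketch only captures the paper’s description; the real device coordinates the two cabled modules, which is elided here.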
Figure 3. The table-non-table [54].
participants to more broadly consider their relationships to
other machines and objects in the home. The Rudiments
effectively struck the balance between offering relatively
familiar materials and formal aesthetics, while operating in
unfamiliar ways that opened up new possibilities for
thinking about human relations to everyday computational
things. Rudiment #1 itself, along with the speculations
triggered by household members’ lived-with encounters,
helped develop and advance a speculative space for moving
beyond the design of “machines for our own good, to a
possibility of interactive machines that might exhibit
autonomy, but not as we know it” [26, p. 152].
From the beginning household members struggled to make
sense of Rudiment #1 when they encountered it. Interactive
technologies and machines commonly occupy our everyday
environments, but the combination of an unclear ‘function’
or purpose paired with unfamiliar, yet resolved aesthetics
prompted a range of speculations on what Rudiment #1 is
and the nature of its intelligence and autonomy. These
design qualities provoked and challenged household
members to encounter a world in which machines may
exhibit and enact a form of autonomy that is very different
from what we know and understand in our world today.
This was evident in household members’ initial use of ‘pet’
metaphors in attempts to describe their relations to the
Rudiments. However, participants eventually migrated to
focus on qualities of function and engagement to make
sense of the autonomy exhibited by the Rudiments. In one
case household members perceived that Rudiment #1 could
be networked to other machines within their home, and in
other cases ongoing encounters with the Rudiments led
Table-non-table [37, 54]—The table-non-table is a slowly
moving stack of paper supported by a motorized aluminum
chassis (see figure 3). The paper is common stock (similar
to photocopy paper). Each sheet measures 17.5 inches by
22.5 inches with a square die cut in the middle to allow it to
stack around a solid aluminum square post that holds the
sheets in place. There are close to 1000 stacked sheets of
paper per table-non-table, which rest on the chassis about
one half-inch from the floor. The movement of the table is
in short durations (5-12 seconds) that occur once during a
longer period of time (a random selection between 20 to
110 minutes). The table-non-table lived with one household
for five months, became part of two households for six and
three weeks respectively, and became part of two
households in a preliminary deployment for several days in
Vancouver, British Columbia.
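The pacing described above (stillness for a random 20 to 110 minutes, then 5 to 12 seconds of motion) can be sketched as a simple control loop; this is a minimal illustration under our own naming, not the artifact’s actual firmware:

```python
import random
import time

def run_table_non_table(move, cycles=3, time_scale=1.0):
    """Illustrative control loop for the table-non-table's pacing
    (our sketch, not the artifact's implementation): wait a random
    20-110 minutes, then move for a random 5-12 seconds.
    `move` is a callback taking the motion duration in seconds;
    `time_scale` shrinks the waits for demonstration purposes."""
    log = []
    for _ in range(cycles):
        wait_minutes = random.uniform(20, 110)       # period of stillness
        time.sleep(wait_minutes * 60 * time_scale)
        duration = random.uniform(5, 12)             # short burst of motion
        move(duration)
        log.append((wait_minutes, duration))
    return log
```

The long random interval between brief motions is what lets the artifact recede into the background of domestic life while still occasionally being noticed.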
Figure 2. Rudiment #1 vertically affixed to a surface [26].
In some ways similar to Rudiment #1, the table-non-table provoked a range of speculations as participants attempted to make sense of its purpose and place within their homes. While initially its owners attributed anthropomorphic qualities to the table-non-table (e.g., perceiving it had the abilities to ‘hide’ or ‘pretend’) [54], over time different relations emerged as encounters with it accumulated. The
table-non-table became an artifact that was curiously
computational and clearly required electricity, yet many of
the ways participants used and related to it mirrored
manipulations and reconfigurations more commonly
associated with non-digital things. The flat surface of the
table-non-table opened it up to being subtly drawn on, at
times in unknowing ways as other objects were stacked on
top and it slowly became just another thing in the
background of domestic life. When its movement was
noticed the owners often relocated it to different locations
in the house or apartment as if trying to reveal different
understandings of the artifact. In other cases, the subtle yet
persistent movement of the table-non-table catalyzed
emergent, creative interactions by people and their pets as a
way of “resourcing” the table-non-table. For example, cats
alternated between using it as a bed and viewing it as
another entity with either caution or curiosity. In fact, one cat began to treat a heater appliance next to the table-non-table in a similar fashion, as if similarly constituted objects were now alive. Both pets and people played with the sheets of paper, ripping them. People made drawings on the paper and turned them into large snowflakes [54].
The table-non-table can be seen as radically departing from
how many people experience domestic technology on an
everyday basis. In this way, people, pets and their material
environments were reconfigured over and over again to try
to incorporate the table-non-table alongside other domestic
artifacts, spaces, and experiences over time.
small opening in the panel to allow a photo to drop onto the
central platform of the box. The Photobox’s behavior is
enacted through an application, which runs on a laptop that
wirelessly connects to the embedded printer via Bluetooth.
At the start of each month, Photobox indexes its owner’s Flickr archive. In random fashion, it selects four (or five) photos and generates four (or five) timestamps that specify the print time and date for each photo; at print time, the matching photo is printed. The Photobox was created to
speculatively explore how an interactive artifact could
critically intervene in experiences of digital overload (i.e.,
the proliferation of digital photos) and, more generally,
what might comprise a material instantiation of the slow
technology design philosophy [25]—an interactive thing
whose relationship with its owner could emerge and evolve
over time. Three nearly identical Photoboxes were designed and implemented; three households in Pittsburgh, Pennsylvania, USA subsequently lived with a Photobox for fourteen months each.
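The monthly behavior described above amounts to a small scheduling routine. The sketch below is our illustration of that logic; the function and variable names are assumptions, not the Photobox code:

```python
import random
from datetime import datetime, timedelta

def schedule_month(photo_ids, year, month):
    """Sketch of the Photobox's monthly behavior (our illustration,
    not the original implementation): pick four or five photos at
    random and assign each a random print time within the month."""
    count = random.choice([4, 5])
    chosen = random.sample(photo_ids, count)
    start = datetime(year, month, 1)
    # first day of the next month: jump past the month end, reset to day 1
    end = (start + timedelta(days=32)).replace(day=1)
    span = (end - start).total_seconds()
    times = sorted(start + timedelta(seconds=random.uniform(0, span))
                   for _ in range(count))
    return list(zip(times, chosen))  # (print_time, photo) pairs
```

The owner’s inability to predict these timestamps is the source of the “uncontrollable” slow pacing discussed below.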
The Photobox combines a recognizable form—a wooden
chest—and a common experience—viewing and engaging
with digital photos—within a networked design artifact that
provoked both familiar and alien experiences across all
households as they encountered this design artifact over
time. The Photobox can be seen as counterfactual in that
well-worn wooden chests do not manifest material
rendering of one’s online photo collection, at least in the
world as we experience it today. Nonetheless, it was
perfectly functional and developed a unique character and
configuration within each of the three households that
owned one. In addition to stimulating reflections from its
owners about the memories it surfaced from deep within
their digital photo archives, its unfamiliar (and uncontrollable) slow pacing, paired with its classification among participants as ‘a technology’, triggered a range of reflections about participants’ relations to other technologies
in the home and what values ought to constitute a domestic
technology. The Photobox clearly departed from any kind
of familiar combination of form, materials, and
computational behavior that typically characterize domestic
technologies. As a result, participants speculated on the nature of the artifact and technology in everyday life through lived-with encounters over a long period of time. These encounters opened a productive space for framing future speculative design inquiries.
Photobox [36]—The Photobox is a domestic technology
embodied in the form of a well-worn antique chest that
prints four or five randomly selected photos from the
owner’s Flickr collection at random intervals each month
(see figure 4). The two main components of Photobox are
an oak chest and a Bluetooth-enabled Polaroid Pogo printer
(which makes 2x3 inch photos). All technological
components are embedded in an upper panel in the chest in
an effort to hide the ‘technological’ components from view.
The printer is installed in an acrylic case that secured it to a
Figure 4. The Photobox largely took the form of a European oak chest [36].
Mediated Body [28]—Mediated Body is a symbiotic system consisting of a human (“the performer”) wearing custom-built technology (“the Suit”). The system offers a play session for a single participant (i.e., a person who is not the performer). The role of the technology is to sense physical bare-skin connection between the performer and the participant, where the sensing yields analogue values that range from a few centimeters from actual touch, to light touch, to full contact (see figure 5). The values are converted into a relatively complex soundscape, which is played back in the headphones that both the performer and
the participant wear. Thus, from the participant’s point of
view, the performer is a musical instrument that she can
play by touching. However, due to the design of the system,
the instrument can also play its player: When the performer
touches the participant, the soundscape is affected in the
same way. The headphones make the interactive
soundscape a shared experience between performer and
participant, and they also serve to limit surrounding sounds
and thus make the experience more intimate and private for
the two players. Further, the suit includes bright lights on
the performer’s chest, which serve two purposes. First, the
lights enhance the interactive properties of touch by
changing color and pulse when a touch is sensed. Second,
they broadcast some of the interaction dynamics of the
ongoing session to the surrounding area. The mediated
body was encountered at the week-long Burning Man
Festival and in public spaces, such as the subway in Berlin
(see Figure 5).
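The sensing-to-output mapping described above can be sketched as a function from an analogue contact value to soundscape and light parameters. The specific mapping, ranges, and names below are our assumptions for illustration, not the Mediated Body implementation:

```python
def map_contact(contact):
    """Illustrative mapping from the Suit's analogue contact reading
    (0.0 = a few centimeters from touch, 1.0 = full skin contact) to
    soundscape and chest-light parameters. The curve shapes and
    parameter names are our assumptions, not the authors' system."""
    contact = max(0.0, min(1.0, contact))  # clamp the sensor reading
    return {
        "sound_intensity": contact,              # louder with more contact
        "sound_brightness": contact ** 2,        # timbre shifts near full touch
        "light_pulse_hz": 0.5 + 2.5 * contact,   # chest lights pulse faster
    }
```

Because the same mapping responds whether performer touches participant or vice versa, the “instrument” can also play its player, as the authors describe.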
engaged in making sense of this encounter in ways that
differ considerably from the performer and participant (see
Figure 5). The Mediated Body speculates on many issues
pertaining to the mobile experience of new media, the
cultivation and expression of personal space in public
places, the human body as a technical interface, and the
richness and tensions entangled across all of these themes.
Characteristics of material speculation
Based on the related works, the adaptation of possible worlds theory, and the accounts of material speculation examples, we summarize our conceptual framing with a series of characteristics.
Material speculation is the coupling of counterfactual artifacts and possible worlds—Material speculation is the sum of the counterfactual artifact that is designed to exist in the everyday world to be encountered and the multitude of possible worlds it generates through those encounters.
Counterfactual artifacts exist in the everyday world—The counterfactual nature of material speculations relies on the contradiction of the artifact not appearing to “fit the logic of things” in the everyday world yet undeniably existing in the actual world to be encountered. Counterfactual artifacts
situated in the everydayness of our world offer a new
ontological perspective that over time makes more visible
assumptions, implications, and possible change. It is
important for the depth and quality of the emergent
possibilities that material speculations be a lived experience
rather than simply an intellectual reflection. More diverse
and deeper possibilities are generated through cohabitation,
interactions, and constant encounters over time.
While different in many ways in terms of its materials, form
and interactivity, the Mediated Body leverages familiar
interactions (e.g., touching another human) to venture into
unfamiliar territory similar to all of the prior examples. It
was evident that encounters with the Mediated Body not
only continually reconfigured relations between the
performer and participant, but also the evolving social and
material ecology encompassing these interactions. It
generated encounters in which issues of social conformity
became peripheral for the performer and participant in favor
of direct, intimate engagements in public spaces. However,
these engagements extended beyond the two people directly
involved in the interaction as those around them also
Counterfactual artifacts are generators of possible
worlds—Counterfactual artifacts in material speculations do not embody possible worlds; rather, they act as propositions
that, if considered, generate lived-with engagements with
new possibilities encapsulated within possible worlds. As
we discussed earlier, counterfactual artifacts are machines
that generate possible worlds. These include the world(s) as
imagined by the designers, world(s) imagined by those who
encounter the counterfactual artifact, and most
speculatively, the counterfactual artifact itself can be
understood to imagine a world.
Counterfactual artifacts in material speculations are
specially designed artifacts. They are crafted with the intent
and purpose to inquire on new possibilities. This is not a
straightforward practice; it requires expertise and design
judgment to create an artifact that successfully contradicts
and deviates from the world around it, yet is entertained as
a viable proposition in our everyday world. As evident
across our examples, counterfactual artifacts are carefully
shaped and designed through materials, form and
computation such that the artifact is balanced between
“falsely” existing in the actual world while being “true” in a
possible world.
Figure 5. Top: Mediated Body [28] in use at the Burning
Man festival. Bottom: The Mediated Body performed on a
Berlin subway among many onlookers.
Material speculation is critical inquiry—Counterfactual
artifacts by nature challenge the actual world since they are
designed to occupy the boundary between the actual and the
possible. The criticality of a material speculation can arise
from the quantity of possible worlds it opens up or the
quality in which it suggests fewer possible worlds. In either
case this speaks to the nature of the critical space revealed.
The precision and promise of the critical inquiry is
mediated through the crafting of the counterfactual artifact,
which can be shaped and directed toward critical or needed
spaces of inquiry.
the position that things bear knowledge in distinct and
complex ways. While there are important differences in
their own epistemological commitments, emerging
theoretical notions such as Bogost’s [11] carpentry—
constructing artifacts that do philosophy, where their real
and lived existence embodies intellectual argumentation—
and Baird’s [6] thing knowledge—artifacts can embody and
carry knowledge prior to our ability to theorize or reason
through language—offer intriguing perspectives that can be
seen both as critical and generative mechanisms framing
how we do and how we unpack approaches like material
speculation within the interaction design community.
In our discussion we review our contributions and the
importance for HCI and interaction design research of the
centrality of the actual and situated artifact. We also
consider the potential future of speculative materiality as
part of everyday practices.
Speculative materiality of everyday practices
If critical speculative research in design shifts the emphasis
of interpretation to the design audience or users, might we
expect this same audience to go beyond interpretation and
appropriation to the active engagement in crafting
speculation themselves? Similar concerns have been
explored previously in considerations of the intersection of
social practices and design fiction. For example,
Tanenbaum et al. [53] looked at the practice of Steampunk
with the lens of design fiction. They discussed how
Steampunk enthusiasts manifest both making and fictional
practice that serves as an exemplar of the meeting of social
practices and speculative inquiries through design. The
authors argue that this intersection of fiction with
materiality through making verges on making a possible
world actual. Despite the impossibility of the Steampunk
world becoming real it forms an ongoing social practice of
non-designers centered on making such impossibility a
reality. Wakkary et al. [55] explore the relation between
speculation and material practices in a different manner.
These authors describe how speculative inquiries in
sustainable design like hydroponic kitchen gardens of the
future or vertical gardens enabled a ‘generative approach.’
These speculations are taken up within the existing
competences and materials of Green-DIY enthusiasts and
realized today through, for example, Ikea-hacks for the
hydroponic kitchen and reclaimed truck pallets turned into
vertical gardens. Here, the meeting of the speculative and
the material takes an everyday turn that is distinct but
related to our discussion of material speculative inquiries
for design-oriented research.
Actual and situated artifacts as knowledge
A core goal of this paper is to complement the nascent and
growing interest in design fiction in HCI and interaction
design research through offering an alternative conceptual
framing for critical and speculative inquiries in design. We
aimed to expand on the criticality that design fiction
brought to speculation in design research by motivating and
developing the role that actual design artifacts can play in
critical inquiries. Further, we sought to build on the
traditions of craft and material work of speculative and
critical design, as well as the shifting of responsibility for
interpretation to users by including situatedness and
lived-with experience as sites for critical inquiry. A further
contribution we made was to provide an in-depth theoretical
account to nurture the development of material speculation
within HCI. We aimed to be as clear and transparent in our
use of theory alongside the material speculation concept
such that other researchers can not only refine and revise
our concept, but also refine and revise our theoretical work,
which may in turn lead to other insights.
On a disciplinary level, fundamental to the concept of
material speculation is the centrality of the actual and
situated artifact as a producer of knowledge in interaction
design research. This articulation can be seen to support the
increasing turn within interaction design and HCI research
to develop forms of knowledge production centered on the
essential role of designed artifacts. This advances the notion
of making ‘things’ as a site of inquiry that produces
insights, theories, and argumentation that is unique to
interaction design research and distinct from critical art
practice and the humanities [5, 51, 30]. We see our work as
contributing to nascent and growing interest in design
artifacts as generators of knowledge (e.g. annotated
portfolios [12], critical making [41] and adversarial design
[16], among others).
This paper has motivated and articulated material
speculation as a conceptual framing to further support
critical and speculative inquiries within HCI and interaction
design research. In this, we have reviewed and synthesized
a theoretical account of possible worlds theory and the
counterfactual, described and interpreted a set of examples
of material speculations, and proposed characteristics of
material speculations. Importantly, our aim is not to be
prescriptive nor conclusive; rather, we intend to provide a
conceptual framing to inspire future generative work at the
intersection of critical and speculative inquiries targeted at
More broadly, our work parallels movements emerging
outside of HCI and interaction design that critically advance
the everyday in HCI. We concluded with actual and
situated artifacts as knowledge and speculative materiality
of everyday practices as opportunity areas for framing
future contributions of material speculations in HCI and
design. In our future work, we aim to refine and expand
these concepts, both materially and theoretically. As the
HCI community continues to seek out critical alternatives
for exploring the nature of interactive technology in
everyday life, we hope material speculation can be seen as a
complementary framing for supporting these initiatives and,
more broadly, the need to recognize and develop ways of
practicing more reflective forms of knowledge production.
Acknowledgments
The Social Sciences and Humanities Research Council of Canada (SSHRC), Natural Sciences and Engineering Research Council of Canada (NSERC), Banting Postdoctoral Fellowships, and Canada Research Chairs supported this research. We thank the many authors who gave us permission to reprint their work and images.
References
1. Agre, P.E. Toward a critical technical practice: lessons learned in trying to reform AI. In Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, Bowker, G. et al. (eds). Erlbaum, Mahwah, NJ, 1997, 131-157.
2. Auger, J. Speculative design: crafting the speculation. Digital Creativity, 24, 1 (2013), 11-35.
3. Bardzell, J. and Bardzell, S. A great and troubling beauty: cognitive speculation and ubiquitous computing. Personal and Ubiquitous Computing, 18, 4 (2014), 779-794.
4. Bardzell, J. and Bardzell, S. What is "critical" about critical design? In Proc. CHI 2013, ACM Press (2013).
5. Bardzell, S., Bardzell, J., Forlizzi, J., Zimmerman, J. and Antanitis, J. Critical design and critical theory: the challenge of designing for provocation. In Proc. DIS 2012, ACM Press (2012), 288-297.
6. Baird, D. Thing Knowledge: A Philosophy of Scientific Instruments. University of California Press, Berkeley, CA, 2004.
7. Barthes, R. and Howard, R. S/Z: An Essay. Hill and Wang, New York, NY, 1975.
8. Bell, G., and Dourish, P. Yesterday's tomorrows: notes on ubiquitous computing's dominant vision. Personal and Ubiquitous Computing, 11, 2 (2007), 133-143.
9. Bleecker, J. Design fiction: A short essay on design, science, fact and fiction. 2009. Retrieved June 16, 2015.
10. Blythe, M. Research through design fiction: narrative in real and imaginary abstracts. In Proc. CHI 2014, ACM Press (2014), 703-712.
11. Bogost, I. Alien Phenomenology, or What It's Like to Be a Thing. University of Minnesota Press, Minneapolis, MN, 2012.
12. Bowers, J. The Logic of Annotated Portfolios: Communicating the Value of 'Research Through Design.' In Proc. DIS 2012, ACM Press (2012), 68-77.
13. Brown, N., Rappert, B., and Webster, A. (eds). Contested Futures: a sociology of prospective technoscience. Ashgate, Surrey, UK, 2000.
14. Carroll, J.M. Scenario-based design. In Handbook of Human-Computer Interaction. Helander, M., Landauer, T.K., and Prabhu, P. (eds). Elsevier, Amsterdam, NL, 1997, 383-406.
15. Cooper, A. The inmates are running the asylum. Macmillan Publishing Company Inc., Indianapolis, IN.
16. DiSalvo, C. Adversarial Design. The MIT Press, Cambridge, MA, 2012.
17. Dourish, P., and Bell, G. Resistance is futile: reading science fiction alongside ubiquitous computing. Personal and Ubiquitous Computing, 18, 4 (2014), 769-778.
18. Dunne, A., and Raby, F. Speculative everything: design, fiction, and social dreaming. MIT Press, Cambridge, MA, 2013.
19. Dunne, A., and Raby, F. Design noir: The secret life of electronic objects. Springer Press, New York, NY.
20. Dunne, A. Hertzian tales: Electronic products, aesthetic experience, and critical design. MIT Press, Cambridge, MA, 1999.
21. Eco, U. and Robey, D. The Open Work. Harvard University Press, Cambridge, MA, 1989.
22. Eco, U. The Role of the Reader: Explorations in the Semiotics of Texts. Indiana University Press, Bloomington, IN, 1984.
23. Gaver, W., Blythe, M., Boucher, A., Jarvis, N., Bowers, J., and Wright, P. The prayer companion: openness and specificity, materiality and spirituality. In Proc. CHI 2010, ACM Press (2010), 2055-2064.
24. Gaver, B., and Martin, H. Alternatives: exploring information appliances through conceptual design proposals. In Proc. CHI 2000, ACM Press (2000), 209-216.
25. Hallnäs, L. and Redström, J. Slow Technology – Designing for Reflection. Personal and Ubiquitous Computing, 5, 3 (2001), 201-212.
26. Helmes, J., Taylor, A.S., Cao, X., Höök, K., Schmitt, P., and Villar, N. Rudiments 1, 2 & 3: design speculations on autonomy. In Proc. TEI 2011, ACM Press (2011).
27. Hertz, G. Critical Making: Manifestos. Telharmonium, Hollywood, CA, 2012.
28. Hobye, M., and Löwgren, J. Touching a stranger: Designing for engaging experience in embodied interaction. International Journal of Design, 5, 3 (2011).
29. Kirby, D. The Future Is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-World Technological Development. Social Studies of Science, 40, 1 (2010), 41-70.
30. Koskinen, I., Zimmerman, J., Binder, T., Redstrom, J., and Wensveen, S. Design research through practice: From the lab, field, and showroom. Elsevier, Amsterdam, NL, 2011.
31. Kripke, S.A. Semantical Considerations on Modal Logic. Acta Philosophica Fennica, 16 (1963), 83–
32. Lewis, D. Truth in fiction. American Philosophical Quarterly, 15, 1 (1978), 37-46.
33. Lewis, D.K. On the plurality of worlds. Blackwell, Oxford, UK, 1986.
34. Menzel, C. Possible Worlds. In The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Zalta, E. (ed.), 2015. Retrieved June 16, 2015.
35. Murray, J.H. Hamlet on the holodeck: The future of narrative in cyberspace. Simon and Schuster, New York, NY, 1997.
36. Odom, W.T., Sellen, A.J., Banks, R., Kirk, D.S., Regan, T., Selby, M., Forlizzi, J., and Zimmerman, J. Designing for slowness, anticipation and re-visitation: a long term field study of the photobox. In Proc. CHI 2014, ACM Press (2014), 1961-1970.
37. Odom, W.T. and Wakkary, R. Intersecting with Unaware Objects. In Proc. C&C 2015, ACM Press (2015), in press.
38. Pavel, T. Possible Worlds in Literary Semantics. Journal of Aesthetics and Art Criticism, 34, 2 (1975).
39. Pierce, J., and Paulos, E. Making multiple uses of the obscura 1C digital camera: reflecting on the design, production, packaging and distribution of a counterfunctional device. In Proc. CHI 2015, ACM Press (2015), 2103-2112.
40. Pierce, J., and Paulos, E. Counterfunctional things: exploring possibilities in designing digital limitations. In Proc. CHI 2014, ACM Press (2014), 375-384.
41. Ratto, M. Critical Making: Conceptual and Material Studies in Technology and Social Life. The Information Society: An International Journal, 27, 4 (2011), 252-260.
42. Reeves, S. Envisioning ubiquitous computing. In Proc. CHI 2012, ACM Press (2012), 1573-1582.
43. Retzinger, J.P. Speculative visions and imaginary meals. Cultural Studies, 22, 3-4 (2008), 369-390.
44. Ronen, R. Possible worlds in literary theory (Vol. 7). Cambridge University Press, Cambridge, UK, 1994.
45. Ryan, M.-L. The Modal Structure of Narrative Universes. Poetics Today, 6, 4 (1985), 717-756.
46. Ryan, M.-L. Possible Worlds, Artificial Intelligence and Narrative Theory. University of Indiana Press, Bloomington, IN, 1991.
47. Ryan, M.-L. Possible-Worlds Theory. In Routledge Encyclopedia of Narrative Theory. Herman, D. et al. (eds). Routledge, London, UK, 2010, 446-450.
48. Ryan, M.-L. Possible Worlds. In the living handbook of narratology. Hühn, P. et al. (eds.), 2012. Retrieved June 16, 2015.
49. Sengers, P., and Gaver, B. Staying open to interpretation: engaging multiple meanings in design and evaluation. In Proc. DIS 2006, ACM Press (2006).
50. Sengers, P., Boehner, K., David, S., and Kaye, J.J. Reflective design. In Proc. CC 2005, ACM Press (2005), 49-58.
51. Stolterman, E. and Wiberg, M. Concept-Driven Interaction Design Research. Human-Computer Interaction, 25, 2 (2010), 95-118.
52. Sturken, M., Thomas, D., and Ball-Rokeach, S. Technological Visions: Hopes and Fears That Shape New Technologies. Temple University Press, Philadelphia, PA, 2004.
53. Tanenbaum, J., Tanenbaum, K., and Wakkary, R. Steampunk as design fiction. In Proc. CHI 2012, ACM Press (2012), 1583-1592.
54. Wakkary, R., Desjardins, A., and Hauser, S. Unselfconscious Interaction: A Conceptual Construct. Interacting with Computers (2015), in press.
55. Wakkary, R., Desjardins, A., Hauser, S., and Maestri, L. A sustainable design fiction: Green practices. ACM Transactions on Computer-Human Interaction, 20, 4 (2013), Article No. 23.
Charismatic Technology
Morgan G. Ames
Intel Science and Technology Center for Social Computing
University of California, Irvine
[email protected]
To explain the uncanny holding power that some technologies seem to have, this paper presents a theory of charisma as attached to technology. It uses the One Laptop per Child project as a case study for exploring the features, benefits, and pitfalls of charisma. It then contextualizes OLPC's charismatic power in the historical arc of other charismatic technologies, highlighting the enduring nature of charisma and the common themes on which the charisma of a century of technological progress rests. In closing, it discusses how scholars and practitioners in human-computer interaction might use the concept of charismatic technology in their own work.
Author Keywords
Charisma, childhood, education, history of technology, ideology, One Laptop per Child, religion, science and technology studies, technological determinism, utopianism.
ACM Classification Keywords
K.4.0. Computers in Society: general.
Copyright© 2015 is held by the author(s). Publication rights licensed to Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
Scholars have noted the holding power that some technologies seem to have – a power that goes beyond mere form or function to stimulate devotion, yearning, even fanaticism [2,39,44,62]. While Apple products, especially iPhones, are often held up as the most common example of this holding power [11,31,37,40,52,56], it exists in various forms for many technologies, from sports cars to strollers.
This paper describes this holding power as charisma. Applying Weber's theory of charismatic authority [68] to objects, it presents a case study of a technology that was highly charismatic to its makers and supporters (and remains so to a devoted core): the One Laptop per Child (OLPC) project's "XO" laptop. With about two and a half million in use globally, OLPC's green-and-white XO remains a focal point for discourses about children, technology, and education, even a decade after its 2005 debut. This analysis explores the roots of the laptop's charisma and the important role that charisma played in OLPC's heady early days. It then reflects on the charismatic elements present in this project in relation to the charisma of past technologies. This historical perspective highlights the ideological commonalities between all of these charismatic objects. It also suggests that far from being a new phenomenon, charismatic technologies have been captivating their users and admirers for decades, and will likely continue to do so for decades to come. We will see that charismatic technologies help establish and reinforce the ideological underpinnings of the status quo through utopian promises [39] – promises that persist even when the technology does not deliver.
The goal of this paper is to expose the ideological stakes that buttress charismatic technologies. Those who create, study, or work with technology ignore the origins of charisma at their own peril – at the risk of always being blinded by the next best thing, with little concept of the larger cultural context that technology operates within and little hope for long-term change. Recognizing and critically examining charisma can help us understand the effects it can have and then, if we choose, to counter them. However, it is also important to acknowledge that charisma can smooth away uncertainties and help us handle contradictions and obstacles. As such, the purpose of this paper is not to 'prove' charisma 'wrong,' because its rightness or wrongness is beside the point. As we will see, what matters is whether a technology's charisma is still alive.
This paper provides a framework for understanding how charisma operates in relation to technologies, how it might be identified, and what is at stake when we are drawn in. It provides tools for identifying charismatic technologies and teasing out the implications of this charisma, from the hardware and software of the object itself to the ensembles, agendas, and trajectories of the globalized organizations around it [18], and back down to the ways that those same groups shift, contest, or perpetuate the object's charisma.
This borrows from Actor-Network Theory the idea that nonhuman "actors" have agency in technosocial discourses [33], and from Value-Sensitive Design a normative examination of the ways in which myriad values influence design and use [19]. This analysis adds to these theories a detailed case study of the role that charismatic authority plays in the design and use of technologies, digging beneath professed values to identify the ideological underpinnings upon which values, and charisma, rest.
This paper draws on archival research, interviews, and ethnographic observations conducted between 2008 and 2015. This includes an investigation of the forty-year development of the ideas behind One Laptop per Child (OLPC) through a review of the project's mailing lists, wikis, and discussion boards; the publication history of its founders; and interviews with some developers. The author also conducted seven months of fieldwork on an OLPC project in Latin America (see [5,6,53]), but this data is not directly included here. Analysis followed an iterative, inductive approach common in anthropology and cultural studies, combining the themes that emerge ground-up from a thorough understanding of participants' worldviews with a critical interpretation of these themes as 'texts' able to expand or contest broader theoretical questions [10].
This paper contextualizes the patterns noted in OLPC's rhetoric and design within the broader arc of technological development, as told by historians of technology. The combination of historical and contemporary data lends itself to reaching beyond the often bounded scope of qualitative research to answer more long-ranging questions about the trajectory of technological development and use.
To explain the holding power that OLPC's laptop has had on technologists and others around the world, I develop the idea of a charismatic technology. This section defines charisma, outlines the salient features of charismatic technologies, and details the connection between charisma and related concepts from social theory including fetishism, religion, technological determinism, and ideology.
Charisma as a sociological construct was theorized by Max Weber to describe the exceptional, even magical, authority that religious leaders seem to have over followers. In contrast to (though sometimes subsumed by) other types of authority, such as traditional or legal/rational, charismatic authority is legitimized by followers through a belief that a leader has extraordinary, even divine, powers that are not available to ordinary people [68].
Though charisma is generally used to describe the power of humans, not objects, it has been applied to nonhumans as well. Maria Stavrinaki, for instance, describes the Bauhaus African Chair as a 'charismatic object' within the Bauhaus community [57]. Relatedly, Anna Tsing [59] discusses how the idea of globalization has been charismatic to some academics, who uncritically naturalize or even reinforce globalist agendas by characterizing globalization as universal and inevitable. Tsing's model of charisma – a destabilizing force that both elicits excitement and produces material effects in the world (even if these effects differ from those that were promised) – is at play here as well.
By treating an object as charismatic, these approaches, and this paper as well, utilize perspectives from actor-network theory (ANT), which subjects both human and non-human 'actors' to the same analytical lens in a mutually-constituted network of relations [33]. ANT, however, tends to neglect the beliefs that underlie these networks, especially if these beliefs do not take material form [34:2]. Thus, while ANT provides tools for analyzing the 'scripts' that designers build into technologies [1], it falls short of providing the means to account for how ideological frameworks animate or inhabit these products – a gap charisma can fill.
Distinct from fetishism's focus on form, which stipulates a fixation on the materiality of the (presumably) passive object itself as a source of power, a charismatic object derives its power experientially and symbolically through the possibility or promise of action: what is important is not what the object is but what it promises to do. Thus, the material form of a charismatic technology is less important than how it invokes the imagination. As sociologist Donald McIntosh [36] explains, "charisma is not so much a quality as an experience. The charismatic object or person is experienced as possessed by and transmitting an uncanny and compelling force." A charismatic technology's promises are likewise uncannily compelling, evoking feelings of awe, transcendence, and connection to a greater purpose [39,44].
Charisma moreover implies a persistence of this compelling force even when an object's actions do not match its promises – hence the magical element of charisma. This is where a charismatic technology's link to religious experience is especially strong, as a system built on faith that is maintained and strengthened outside of (or even counter to) the auspices of 'evidence' [2]. This is also one of the places that the consequences of charisma are most visible, as a technology's devotees maintain their devotion even in the face of contradictions.
This potential for charisma to override 'rational' thought is something that is not lost on marketers. While they may not call it as much, their promotions often tap into the charisma that certain technologies have, and journalists often echo it as well. In fact, religion scholar William Stahl examines the popular discourses about technology and concludes that "our language about technology is implicitly religious" [56:3]. Other scholars have also documented the connections between technology and religion, highlighting the prevalence of religious language in branding discourse [37,40,52,56], revealing parallels between religious and engineering practice [2], or showing how specific technologies are rhetorically connected to divinity and redemption in use [11,28,31,39]. Thus, while charismatic objects construct and reinforce their own charisma, media are often implicated in amplifying it, as we will see below.
In their often utopian promises of action, charismatic technologies are deceptive: they make both technological adoption and social change appear straightforward instead of a difficult process fraught with choices and politics. This gives charismatic technologies a determinist spirit, where technological progress appears natural, even inevitable. This naturalizing element can lead us to underestimate the sustained commitment needed for technological adoption. In building out the railroad in the mid-nineteenth century, for instance, Nye shows that the charisma of the locomotive led to the U.S. paying an enormous price in resources and lives in an attempt to realize the utopian promises of rail [44]. By the same token, charisma's naturalizing force can make critique and debate appear unnatural.
operates within and little hope for long-term change.
Recognizing and critically examining charisma might, as
Mosco explains, “help us to loosen the powerful grip of
myths about the future on the present” [39:15].
Still, it is also important to recognize that charisma plays an
important, even indispensable, role in our lives. Charisma –
whether from leaders or technologies – can provide direction
and conviction, smoothing away uncertainties and helping us
handle contradictions and adversities. Rob Kling asserts that
faith in technology can play a major role in cultural cohesion:
“During the 1930s,” he explains, “this almost blind faith in
the power of the machine to make the world a better place
helped hold a badly shattered nation together” [30:48]. As
such, the purpose of this paper is not to ‘prove’ charismatic
technologies ‘wrong,’ because matters here is whether
charisma is still ideologically resonant.
However, charisma contains an irony. A charismatic
technology may promise to transform its users’
sociotechnical existence for the better, but it is, at heart,
fundamentally conservative. Just as charismatic leaders
confirm and amplify their audiences’ existing worldviews to
cultivate their appeal [36], a charismatic technology’s appeal
is built on existing systems of meaning-making and largely
confirms the value of existing stereotypes, institutions, and
power relations. This unchallenging familiarity is what
makes a charismatic technology alluring to its target
audience: even as it promises certain benefits, it
simultaneously confirms that the worldview of its audience is
already ‘right’ and that, moreover, they are even savvier to
have this technology bolster it.
OLPC’s laptop, called the ‘XO,’ was the first of its kind to
combine a rugged design, an open-source educational
software suite, and full (though purposefully underpowered,
in an attempt to prolong battery life) computer functionality,
with the goal of overhauling education across the Global
South. The project is a culmination of over forty years of
work at the MIT Media Lab and its predecessors, particularly
the intellectual legacies of MIT professors Seymour Papert
and Nicholas Negroponte. While many were involved in
OLPC, Negroponte and Papert together were largely
responsible for establishing the direction of the project.
This worldview that charisma reinforces is what social
theorists refer to as an ideology: a framework of norms that
shape thoughts and actions. Cultural theorist Stuart Hall
describes an ideology as a “system for coding reality” that
“becomes autonomous in relation to the consciousness or
intention of its agents” [22:71]. We live with many
ideologies, reinforced not just by charisma but across our
socio-political landscape: neoliberal economics, patriarchal
social structures, and Judeo-Christian ethics are among the
many dominant ideologies in the United States, for instance.
Both Negroponte and Papert are themselves charismatic, and
both used it to build the charisma of the OLPC project and
the XO laptop. While Negroponte has been the public face of
the project, glibly flinging XOs across stages at world
summits to demonstrate their ruggedness [64] and talking
about “helicopter deployments” of laptops to remote areas
[66], Papert was the project’s intellectual father. His whole
career focused on the idea of computers for children, leading
to the development of LOGO, Turtle Graphics, Lego
Mindstorms, and, finally, One Laptop per Child. Though the
results of these projects have been lackluster at best
[4,6,51,55,67], Papert is still often considered a central figure
in education and design (e.g. [8,20]), and his books remain
foundational to the curriculum at the MIT Media Lab [2].
Hall notes that ‘ideology’ has been useful in social theory as
“a way of representing the order of things which endowed its
limited perspectives with that natural or divine inevitability
which makes them appear universal, natural and coterminous
with ‘reality’ itself” [22:65]. What is important in this
definition is the way that ideology fades into the background:
by one metaphor commonly used in anthropology, ideologies
are as invisible to most people as we imagine water is to a
fish [15:123]. In Hall’s words, an ideology “works” because
it “represses any recognition of the contingency of the
historical conditions on which all social relations depend. It
represents them, instead, as outside of history: unchangeable,
inevitable and natural” [22:76]. By referencing and
reinforcing existing ideological norms, charismatic
technologies by extension can also appear ‘unchangeable,
inevitable and natural.’
While the charisma of these two men were important to
OLPC, I also want to emphasize that OLPC’s XO laptop
itself included a host of charismatic features that were
eagerly discussed in the tech community and media alike,
even when some never actually existed and others did not
work in practice. In this way, the machine itself embodied
and performed its charisma, and the discussion around the
machine amplified and perpetuated these promises.
Thus, charismatic technologies help establish and reinforce
the ideological underpinnings of the status quo. They do so
through promises that may persist among true believers even
when the technology does not deliver. The task of this paper,
then, is to “break the spell of the present” [56], exposing the
ideological stakes that underpin charisma. Technologists
ignore the origins of charisma at their own peril – at the risk
of always being blinded by the next best thing, with little
concept of the larger cultural context that technology
One of the most charismatic features of the initial proposal
for the laptop was a hand crank for manually charging it.
Though this feature never existed in a working prototype and
all laptops in use were charged via standard AC adapter [53],
journalists still mention the hand crank or make claims based
And yet, none of these niggling realities mattered. For several
years after its debut and even today in the popular press, the
project and its charismatic green machine were the darlings
of the technology world. High-profile technology companies
including Google, Sun, Red Hat, and others donated millions
of dollars and hundreds of hours of developer labor to the
project from 2005 to 2008. The open-source community,
wooed by the promise of a generation of children raised on
free software, continue to enthusiastically contribute to the
development of OLPC’s custom-built learning platform,
Sugar. Unsubstantiated and quite likely apocryphal stories
about children teaching parents to read, of impromptu laptopenabled classrooms under trees, or of the laptop’s screen
being the only light source in a village (never mind how the
laptop itself was recharged) were echoed enthusiastically
across the media and tech worlds. Even today, with ample
evidence that OLPC has not lived up to these promises, they
are still repeated, and XO laptops have been added to the
collections of major art museums.
These hyperbolic claims themselves, whether based on
design features or more general claims about how the laptop
would change the world, became part of the machine’s
charisma. The project’s leaders, like many who lead
development projects, had the utopian, if colonialist
[6,17,27], desire to transform the world – not only for what
they believed would be for the better but, as we will see, in
their own image. Negroponte infamously said that the project
and laptop “is probably the only hope. I don’t want to place
too much on OLPC, but if I really had to look at how to
eliminate poverty, create peace, and work on the
environment, I can't think of a better way to do it” [63].
Though Negroponte made an easy target by often saying
outrageous things that sometimes left even others on the
project shaking their heads or backpedaling, many on the
project echoed variations on this particular theme.
Figure 1. The XO in standard laptop configuration, showing the
first-generation Sugar interface (promotional picture).
on its presumed existence. Two antennae “ears” on either
side of the screen, which also acted as latches and covers for
ports, were added to visually replace it (Figure 1). They also
served to anthropomorphize the laptop, as did the person-like
“XO” name/logo which, when turned on its side, was meant
to look like a child with arms flung wide in excitement.
The XO included a “view source” keyboard button, a feature
hailed as revolutionary even though web browsers had long
included a similar function. Moreover, the button often did
not work and I never saw it used in months of observations.
The laptop’s hardware was meant to be similarly accessible,
and OLPC leaders boasted about the laptop being particularly
easy to repair. However, compromises with manufacturers
and cost-cutting made this promise unattainable as well, and
projects were plagued with breakage [4,53].
This raises the question, though, of why these worlds so
enthusiastically accepted these claims. Here is where the
charismatic authority of the XO laptop becomes important.
Again, my goal here is not to ‘prove’ OLPC’s charisma
‘wrong’: highlighting the project’s continued charm despite
its many failures is simply a method for demonstrating that
charisma is indeed at play. In the next three sections, I will
show how this charisma was based on specific ideas about
childhood, school, and computers. As I describe in more
detail in [3], OLPC hoped to replace a school experience that
Papert claimed had ‘no intrinsic value’ [49:24] with a
computer to encourage children to learn to think
mathematically [48]. This was aimed at elementary school
students who were assumed to be precocious, scientifically-inclined, and oppositional. Not coincidentally, this target
audience matched developers’ conceptions of their own
childhoods. In the following sections, I will briefly describe
each of these and why they were important to the charisma of
OLPC’s XO laptop. Then I will show that these same themes
undergird a number of other historically charismatic
technologies, demonstrating charisma’s conservatism.
The laptop had an innovative screen, invented by fellow MIT
Media Lab professor and founding OLPC contributor Mary
Lou Jepsen, that could be swiveled and flattened and its backlight turned off, like an e-book – though I found that the
screen was also the second-most-common component to
break and the second-most-expensive to replace [53]. The
first-generation XO included a mesh network so that laptops
could connect to one another directly without an access point,
though in practice the underpowered laptops would grind to a
halt if more than a few were connected and the feature was
dropped from later versions of the laptop’s software [53,65].
Even the original name of the project – the “hundred-dollar
laptop” – was never achieved: the lowest price was $188
[6,53]. This in part was because OLPC’s goal of having
hundreds of millions of laptops in use across the Global
South also never came to fruition – instead, about 2.5 million
have been sold, most of them in Latin America [6].
The charisma of childhood
group that has managed to hold onto the magic of childhood:
independent thinkers, lifelong learners, and, not
coincidentally, many in the hacker community. They
generalized from experiences with largely white, middle-class American youth – or from their own idiosyncratic
childhoods – that all, most, or at least the most ‘intellectually
interesting’ [49:44–50] children are innately drawn to
tinkering with computers and electronics, or in Papert’s
words, ‘thinking like a machine’ [48].
One Laptop per Child explicitly stated and implicitly built
into their laptop the idea that children are born curious and
only need a small impetus (such as a computer or an
electronics kit) to keep that curiosity alive and growing.
OLPC moreover specified the kinds of learning that children
are naturally inclined to do: engineering-oriented tinkering.
In this section, we will briefly consider the cultural narratives
that Papert, Negroponte, and others draw on about what
childhood should mean and what constitutes a good one,
narratives that have become deeply rooted in middle-class
American culture and reflect American cultural values such
as individualism and (certain kinds of) creativity.
The anti-charisma of school
The One Laptop Per Child project frames itself and its laptop
as a radical break from an unchanging global educational
tradition [7,43,48–50]. In contrast to the naturally curious
state of children, OLPC leaders paint school as a stultifying
rote experience that has not changed in over a century
[42,43,48,49]. In his writing, Papert villainizes what he calls ‘instructionism’ in education, arguing that it creates people ‘schooled’ to think in limited ways and to seek validation from others out of ‘yearners’: children in their innate, creative, playful state, thinking independently, not caring what others think, and seeking answers via many routes to questions in which they are personally interested [49:1].
This ideology of childhood, though seemingly universal, is
historically, geographically, and socioeconomically situated.
Childhood came to be understood as a distinct developmental
state in nineteenth-century Europe and America
[12,26,38,70,71], an ideological shift that was justified with
Romantic-era ideals of childhood as a noble state closer to
nature. These ideologies of childhood spread to mainstream
middle-class parenting culture by the mid-twentieth century
[26,32,70], when the idea of nurturing childhood creativity,
play, and individualism through consumerism (via large
numbers of aspirational toys) gained popularity among
middle-class white American families [45,46].
Papert is unequivocal about his disdain for school, calling the
classroom ‘an artificial and inefficient learning environment’
[48:8–9] with ‘no intrinsic value,’ its purpose molding
children out of their natural state and into a more socially
‘desirable’ form [49:24]. With the exception of a few brave
teachers who fight against the establishment [49:3], Papert’s
description of a monolithic ‘School’ (always capitalized) is
of an institution unchanged for over a century, ‘out of touch
with contemporary life,’ and shamelessly ‘impos[ing] a
single way of knowing on everyone’ [49:1–6].
Around this time, toy manufacturers began to reinforce the
idea that engineering was a space for natural masculine
creativity through construction toys aimed at boys [16,45,46],
even though those patterns of play were relatively new and
far from universal [12]. Alongside these shifts, ‘healthy
rebellion’ also became an accepted part of American boy
culture [38,46]. Far from the more ideologically intimidating
rebellion that threatens to actually change the status quo, the
rebellion that is sanctioned in boyhood is often tolerated as
‘boys will be boys’ or even encouraged as free-thinking
individualism. From Mark Twain’s Tom Sawyer to today,
popular culture has linked these relatively harmless forms of
rebellion against school and society with creative confidence,
driven by ‘naturally’ oppositional masculine sensibilities.
Papert is not alone in expressing scorn for education.
Many of OLPC’s contributors, whether affiliated with MIT
(including MIT professors) or the open-source software
community, describe similar sentiments (e.g. [9,25,41]).
Even though not all people in the OLPC community actually
rejected school (and many, in fact, excelled), they published
and shared narratives about how boring, stifling, and
unfulfilling classroom education was.
The charisma of OLPC’s laptop references this ideology of
childhood. As I show in [3], though the XO was ostensibly
for children of any gender, its design echoed tropes of
boyhood specifically: its mesh network embodied a
resistance to authority and its programming-focused
applications echoed the engineering toys that companies had
been marketing to boys for almost a century before [46].
Papert’s and Negroponte’s writing also often praised
iconoclastic or free-thinking children – generally boys – who
took to computers (and to their experiments) easily. In the
process, they glossed over the many complexities of
childhood – not to mention how things like household
instability or food insecurity affect learning – by
universalizing children as natural ‘yearners’ [49].
These narratives resonated in the technology community as
well as across American culture more broadly, where it is
common, and even encouraged, to disparage public education
and recount tales of terrible teachers (while excellent ones are
often forgotten, and the role of school as a social leveler or cultural enricher is similarly unmentioned). In this way, the
anti-charisma of school has become a common cultural trope,
and we will critically examine it later in this paper. For
OLPC, it served two purposes. First, it aligned the project
with this broader backlash against public school. Second, it
provided a rhetorical foil to ideologies of childhood – an opportunity to reinforce the individualism, (technically-inclined) play, and rebellion central to the idea of childhood on which OLPC relies.
These actors also referenced these ideologies of childhood in
discussing themselves. In his writing, for example, Papert
describes himself and others like him as part of a rarefied
The charisma of computers
childhoods to other children around the world [3]. Papert, for
one, explicitly credits his own formative experiences with
computers as inspiration for the project. “I realized that
children might be able to enjoy the same advantages” with
computers as himself and other MIT hackers, Papert explains
in his book The Children’s Machine – “a thought that
changed my life” [49:13].
As an alternative to school, Papert proposes giving each child
a ‘Knowledge Machine,’ a clear forerunner to OLPC’s XO
laptop. Indeed, many who have watched a child with a
touchscreen or a videogame have marveled at how children
seem to be naturally enamored with technology. Stories
abound of precocious young children, especially boys, who
seem to take to electronics fearlessly and naturally
[41,48,49]. Both Papert and Negroponte rhapsodize about the
‘holding power’ that computers can have. Papert says,

The computer is the Proteus of machines. Its essence is its universality, its power to simulate. Because it can take on a thousand forms and can serve a thousand functions, it can appeal to a thousand tastes. [48:viii]

The stories OLPC tells about the potentials of its charismatic laptop roll into one package many of the promises connected to computers and cyberspace: of newfound freedoms and potentials for computer-based self-governance, of the inversion of traditional social institutions (putting, of course, the computer-savvy like themselves at the top), of the “flattening” of bureaucracies, of the end of geographic inequity, of a “reboot” of history [30,39,44,54,60,61,69]. Even though computerization has largely entrenched existing power structures, these ideals of the computer age live on, especially among technologists whose cocooned middle-class existence may not bring them in contact with those whose lives and livelihoods have not benefited from these technologies or who have been actively excluded from the wealth in the technology world.
In conversations with OLPC developers and those who identify as ‘hackers’ across Silicon Valley and in Boston, I often encountered the story of ‘teaching myself to program.’ These actors describe learning about computers as something they did on their own, driven by feelings of passion and freedom, and independent of any formal instruction [24,25,41,58] – just as OLPC’s intended beneficiaries are meant to do. Shunning school in favor of this libertarian learning model, these people saw their excitement kindled and powers developed from their personal relationship with computers, with hours spent tinkering with Commodore 64s or Apple IIs, chatting with others in BBS’s or Usenet groups.
But as I dug deeper into how this self-learning worked, I
found that in all cases they benefited from many often-unacknowledged resources. This included a stable home
environment that supported creative (even rebellious) play,
middle-class resources and cultural expectations, and often
(though not always) a father who was a computer
programmer or engineer. Moreover, many also
acknowledged, sometimes readily, that they were not typical
among their peers in their youth, and were often shunned for
their unusual interests or obsessions. Even among peers who
also had computers at home they were often unique in their
interest in learning to ‘think like a machine,’ while many
others with the same access were not nearly as captivated
by computers.
The connection between rebellion and computing cultures is
also well-established, particularly as countercultural norms of
the 1960s were embraced by early cyberculture communities
[60,61]. Narratives about the kinds of ‘all in good fun’
rebellion that computers could enable were popularized with
the establishment of the ‘hacker’ identity in the 1980s (e.g.
[21,35]). This imbued cyberspace with metaphors of the
Wild West, of Manifest Destiny, of a new frontier of radical
individualism and ecstatic self-fulfillment [60,61,69]. It also
encouraged a libertarian sensibility, where each actor is
considered responsible for their own actions, education, and
livelihoods – conveniently ignoring the massive
infrastructures that make the cyberspace ‘frontier’ possible.
However, neither Negroponte nor Papert discuss the
possibility that they, and their students at MIT, enjoyed
privileged and idiosyncratic childhoods, nor do they dwell on
the sociotechnical infrastructure that enabled that privilege. If
anything, Papert’s accounts of employing his theories in
classrooms or other settings (e.g. see [48,49]) reinforce
notions of exceptionalism by focusing exclusively on the few
engaged children, those rare emblems of ‘success’ who
appear to prove his theories, and ignoring the rest.
These ideals of computers and cyberspace motivated OLPC’s
commitment to provide computers to children around the
world, and helped the project resonate with many, from
journalists to governments, similarly enamored with
technology. Imbued with this infinite potential, laptops could
take priority over teachers, healthcare, even food and water.
And they took on yet greater power in utilizing ideas of
childhood, school, and computers together, as we see next.
OLPC’s idea of the self-taught learner who disdains school
for computers also discounts the critical role that various
institutions – peers, families, schools, communities, and more
– play in shaping a child’s educational motivation and
technological practices. Instead, Papert and other OLPC
developers essentialize the child-learner and make the child
and the laptop the primary agents in this technosocial
assemblage, favoring technological determinism – all it takes
is the right kind of computer to keep kids as ‘yearners’ – over
the complicated social processes involved in constructing and
negotiating childhood.
The nexus of childhood, computers, and learning: the
charisma of the self-taught hacker
These three ideologies come together in One Laptop per
Child through the narrative of the self-taught hacker and
developers’ desire to bring the experiences of their own
This reliance on personal experience underscores an
ideological slippage between the ideas taken up by XO
laptop designers and children’s activities on the ground [5,6].
OLPC aimed to provide access to pedagogical materials with
the assumption that children’s interests would take care of the
rest, but did not account for their own idiosyncratic
childhoods and ultimately reinforced existing socioeconomic
and gender inequalities [5,6,53]. Their reliance on personal
experience also enabled their naturalized images of
childhood to justify a particular set of normative social
objectives. Locating these slippages, and the tenacity with which supporters held to their ideals even when confronted
with them [2], provides a powerful method of identifying
charisma and uncovering its possible consequences.
progress and Western expansion, and a guarantor of
economic development for all it reached [44].1
Even when railroads cost vastly more time, money, and
human lives to construct than anticipated, and even when
they did not bring endless prosperity and worldliness to their
termini, the locomotive’s charisma persisted for some time,
in the same way that subsequent charismas did over a century
later as highways were laid down alongside or over rail lines
and as airports replaced train stations as the newest symbols
of progress and modernity [39,44,54]. Indeed, these same
utopian discourses were applied to many other technological
advances over the next century – many of which we now take
for granted – including the steamboat, canals, bridges, dams,
skyscrapers, the telegraph, electricity, the telephone, the
radio, the automobile, television, cable television, and more
[16,39,44,54,62]. Even the airplane was hailed as a “winged
gospel” that would “foster democracy, equality, and
freedom” – despite its terrifying wartime role [69].
As we have seen, OLPC’s charisma has the ability to evoke
awe, a feeling of spiritual transcendence, and utopian visions
for the kind of world that its laptops might make possible. In
particular, the XO rolled into one package many of the
promises also connected to computers and cyberspace, to
childhood and play, and to learning outside of school. A key
component of this charisma was its ability to evoke the
nostalgic stories that many in the technology world tell about
their own childhood experiences with computers. The project
has the utopian, if colonialist [17,27], desire to remake the
world in its own image.
While it may be easy to discount these past charismatic
technologies given the perspective and tarnish of time, they
contain two lessons, one about the enduring importance of
charisma to the modern cultural imagination, and the other
about its limits. In particular, there is a striking parallel
between the charisma of computers that OLPC draws on and
these earlier charismatic technologies. Historian Howard
Segal notes that the rhetoric of the power of the Internet to
spread democracy “was identical to what thousands of
Americans and Europeans had said since the nation’s
founding about transportation and communications systems,
from canals to railroads to clipper ships to steamboats, and
from telegraphs to telephones to radios” [54:170]. Vincent
Mosco also notes these similarities in a call for both more
empathy for the past and more skepticism of the present.
“We look with amusement, and with some condescension,”
he writes, “at nineteenth-century predictions that the railroad
would bring peace to Europe, that steam power would
eliminate the need for manual labor, and that electricity
would bounce messages off the clouds, but there certainly
have been more recent variations on this theme” [39:22].
However, OLPC’s XO laptop is but the latest in a long line
of charismatic technologies. Historians of technology show
that from the railroad in the mid-nineteenth century to the
Internet today, many new technologies have been hailed as
groundbreaking, history-shattering, and life-redefining.
“Since the earliest days of the Industrial Revolution,”
Langdon Winner writes, “people have looked to the latest,
most impressive technology to bring individual and
collective redemption. The specific kinds of hardware linked
to these fantasies have changed over the years … [b]ut the
basic conceit is always the same: new technology will bring
universal wealth, enhanced freedom, revitalized politics,
satisfying community, and personal fulfillment” [69:12–13].
Here, I briefly describe the history of charismatic
technologies as a way to locate and bound OLPC’s charisma.
In short, a historical perspective helps (in William Stahl’s
words) “break the spell of the present” [56]. It demonstrates
that today’s charismatic technologies are neither natural nor
inevitable, but are ideologically conservative: even as they
promise revolution, they repeat the charisma of past
technologies and ultimately reinforce the status quo. This, in
turn, allows us to better identify new charismatic
technologies and to understand charisma’s consequences.
The next section examines a charismatic technology that
provides these lessons for One Laptop per Child: the radio.
Historian David Nye identifies the first time feelings of
redemption were linked to a new technology in his
description of the locomotive of the mid-nineteenth century.
From the inauguration of America’s first rails in Baltimore in
1828 to the completion of the transcontinental line at Utah’s
Promontory Point in 1869, Nye describes a nation gripped
with railroad mania. The railroad was implicated in dozens of
hyperbolic claims, including the “annihilation of space and
time” [29] with its sustained speed, the demise of manual
labor with steam, a liberation of humankind from
provincialism via increased contact and communication, the
triumph of reason and human ingenuity, an ‘engine’ of
1 Nye also notes the flip side of the sublime: a small group decried
the locomotive as a “device of Satan” because of its ‘unholy’ speeds
[44]. While a discussion of dystopianism is beyond the scope of this
paper, it is the flip side of utopianism: though ruled by fear instead
of hope, it is still beholden to the same ideologies [30,39,69].
Lessons from the charismatic radio
dreams. These “Radio Boys,” as Mosco identifies them, were
initially the “heroes” of the Radio Age, “youngsters who lent
romance and spirit to the time by building radios, setting up
transmitters, and creating networks” [39:2]. But as
compelling as their messages may be to anyone sympathetic
to contemporary hacker culture, the dreams of the Radio
Boys did not prevail. As radio became more commercialized,
the connection between technological tinkering and utopian
thinking largely receded into niche communities such as
HAM radio [23] until reappearing, with a new set of actors
but some remarkably similar practices, around the networked
personal computer [35,58,61] – the actors that Papert says
inspired his work on designing a ‘children’s machine.’
Of all the charismatic technologies of the past, the one that
has the strongest resonances with the charisma of computers,
education, and childhood – charismas that OLPC relies on –
is the radio. Aside from an increasingly marginalized culture
of HAM radio operators, it can be hard to imagine radio in
the U.S. (today so often a commercialized audio wasteland of
top-40 songs on repeat, with a few public stations limping
from one pledge drive to the next) as an intensely charismatic
technology. But radio took 1920s America by storm,
capturing the country’s collective imagination with promises
blending technological miracles and manifest destiny.
Historian Susan Douglas explains that radio, as envisioned in
1924, “was going to provide culture and education to the
masses, eliminate politicians’ ability to incite passions in a
mob, bring people closer to government proceedings, and
produce a national culture that would transcend regional and
local jealousies” [16:20]. Commentators described the
replacement of telegraph wires with radio waves with
psychic metaphors and compared it to magic [16:41].
Historians have argued that remarkable parallels between this
group and the ‘hackers’ who designed personal computers,
the Internet, and OLPC’s XO laptop are no accident. The
charisma of radio, computers, the Internet, locomotives, and
more draws on the same set of utopian stories about
technology, youth, masculinity, rebellion, and self-determination. For instance, the individualist strains in these
earlier technologies found voice in the cyber-libertarianism
of recent decades [61]. The new frontiers of the imagination
that the railroad opened in mid-nineteenth century America
are rhetorically echoed in the new frontiers of radio in the
1920s and cyberspace in the 1980s, spaces of radical
individualism and ecstatic self-fulfillment [60,61,69]. The
charismatic appeal of tinkering that 1920s ‘Radio Boys’
championed found voice again in computer hacking, in
projects like OLPC, and most recently in maker culture [3].
Thus, while the technologies that these charismas are
attached to may shift with time, the charisma lives on.
Many of the amateur enthusiasts and educators who
pioneered radio were especially excited by the medium’s
apparent ability to transcend political and economic controls,
enabling virtual communities and informed populism – all
hopes that have been echoed more recently about computers,
the Internet, and OLPC. After all, Mosco observes wryly,
“How could any material force get in the way of invisible
messages traveling through the ether?” [39:27]
However, many things can and did get in the way of radio’s
utopian dreams. Douglas notes that until the Internet, the
radio was the main medium where struggles between
commercialism and anti-commercialism played out, in a
cyclic pattern, over decades [16:16]. When nascent radio
stations and other businesses realized how much money
could be made selling advertising time on the radio, they
advocated for commerce in the ‘ether.’ Governments, too,
wanted to control this new technology, and most either took
complete control or shared bandwidth with industry,
sidelining educators, amateur operators, and other early
enthusiasts with little leverage to realize their own hopes.
Even by the 1930s, a mere decade after radio’s most
charismatic days, “radio was no longer the stuff of
democratic visions” [39:27] – much like OLPC has receded
into the fog of history for many former supporters a decade
after its debut, or the Internet, now decades old, has gradually been co-opted by commercial interests.
What could be problematic about the feelings that these
charismatic technologies can evoke, at least when still
relatively new? After all, many of these technologies did, in
time, transform the technosocial worlds in which they
existed. However, Vincent Mosco argues that we actually
prevent these technologies from having their full effect as
long as we remain enthralled by their charisma. It is not until they recede into the ‘mundane’ and we understand how they fit into the messy realities of daily life, rather than making us somehow transcend it, that they have the potential to become a strong social force [39:2,6,19]. If this is the case,
then perhaps it is just now, with the spell of OLPC broken,
that the XO laptop can start to show its lasting effects among
those using it. Time, and independent analysis [6], will tell.
Lessons from technologies in educational reform
Still, the small group of dedicated enthusiasts who had
pioneered the medium – almost all men – continued to use
radio to “rebel against the technological and programming
status quo in the industry” [16:15], flaunting intellectual
property, government rules, and big business in the process
[39:2]. This group continued to conceptualize their pastime
of tinkering, listening, and programming the radio as a
“subversive” activity [16:15] and maintained their equally
subversive hopes for the future of the wireless medium. And
it is understandable that they would be loath to give up their
Education is of course not immune to the draw of charismatic
technologies. In fact, educational reform has been a target of
techno-utopian discourse long before the One Laptop per
Child project was announced, even before Papert began his
career of studying children and computers. Charismatic
technologies from radio to the Internet have been hailed as
saviors for an educational system seemingly perpetually on
the brink of failure [62]. The hyperbolic promises of OLPC
echo these ongoing efforts to combine the twin promises of
charismatic technology and school reform to reach for utopia.
Because learning was the central goal of OLPC, examining
the history of charisma in this context lets us contextualize
the project and identify other charismatic technologies that
pervade education. Moreover, a survey of the history of
school reform shows that much of the anti-charisma of
school described above is based more in stereotypes than
fact, reinforced by a century of technological failures in
education. Finally, we will see that education reforms like
OLPC are often compelled to use charismatic technologies to
promise utopia in order to secure attention and funding,
which then sets them up for failure and short-lived projects.
As reformers then shift to the next charismatic technology,
charisma will continue to impede real, if incremental, change.
Many media and communication technologies including early motion pictures,
radio, television, and computers were (and, in the latter case,
still are) often championed as fast cure-alls for educational
woes by innovators who were often not involved in public
schools themselves. “Impatient with the glacial pace of
incremental reform, free of institutional memories of past
shooting star reforms” that left no effect in day-to-day
schooling, and “sometimes hoping for quick profits as well as
a quick fix,” Tyack and Cuban explain, these reformers
“promised to reinvent education” with technology [62:111].
Like the more general utopian visionaries of technology
discussed in previous sections, many of these innovators
over-promised the scope and ease of change and lacked a
nuanced understanding of the day-to-day social, cultural, and
institutional roles of the actors most directly involved in the
worlds they wanted to transform. When the messy,
expensive, time-consuming realities of using technology in
the classroom inevitably clashed with hyperbolic promises,
disillusioned innovators, along with the media and the
general public, would often blame schools and especially
teachers for not solving problems with technological
adoption that were, in reality, beyond their reach. Then,
research on the effects of the new technology in the
classroom would start to roll in, showing that the technology
was, as Tyack and Cuban describe, “generally no more
effective than traditional instruction and sometimes less”
[62:121–122]. Meanwhile, attention and resources had been
diverted from more complicated, expensive, or politically
charged social or educational reforms that did not promise a
quick fix [62:3] and were thus less ‘charismatic.’
Over at least the last century and a half, American schools
have been simultaneously held up as the foundation for
reforming society and under constant pressure to reform
themselves. Starting in the 1830s, upper-class activists such
as Horace Mann evangelized the idea of ‘common schools’
as a savior from moral ruin, detailing the horrors that could
result (and, in some cases, were resulting, for instance via
unregulated child labor) if the country did not adopt universal
public schooling [14:8,62:1, 16, 141–142]. Far from the
‘factory worker mills’ framed in some more recent
educational reform literature (see [48,49] for examples from
Papert), public schools and subsequent innovations in
schooling such as age-organized instruction (and sometimes
the abandonment of it), curriculum-based instruction (and
sometimes the abandonment of it), cheap paper and
textbooks, kindergarten, subject-separated instruction, middle
school, school lunch, desegregation, televised instruction,
and even standardized tests were all framed by socially and
politically elite reformers for more than a century and a half
as a cure-all for assimilating immigrants and marginal
populations, ending child labor, modeling good citizenship,
and instilling morality in the next generation [13,14,62].
This is not to say that reform or technological adoption in
education is impossible. Though the broader institutional
structures of schools and many of the strategies teachers use
to reach students appear to be unchanged, there have been
many successful innovations that have altered the daily lives
of students and teachers, sometimes dramatically. Still, even
more realistic reforms well-grounded in the realities of
students and schools sometimes have difficulty gaining broad
popular support outside of the school unless they add a
charismatic gloss of rapid, revolutionary change.
The panacean promise of education gradually shifted in the late twentieth century from the moral to the economic. In
introducing “Great Society” educational reforms meant to
eliminate poverty in the 1960s, President Johnson avowed
that “the answer to all our national problems comes down to
a single word: education” [62:2]. Two decades later, as
perceptions of the United States’ waning influence prompted
a sense of panic about reforming education to achieve
economic competitiveness, President Reagan’s National
Commission on Educational Excellence and Reform took up
the specter of impending dystopia in an alarmist report titled
A Nation at Risk: The Imperative For Educational Reform
that to this day motivates U.S. educational reform [13,14,62].
This charismatic pressure can put even open-eyed
educational reformers in a catch-22 [6]. They must promise
dramatic results to gain the social and financial support for
reforms, and then they must either admit to not achieving
their goals, or pretend that they did achieve them. Either way,
funders will declare that the project is finished and withdraw
financial support, and then researchers and other observers
will begin to note the discrepancies between reformers’
promises and their own observations. Thus, projects that rely
on charismatic technologies are often short-lived, cut off
before charisma recedes into the background and the
technology can become part of everyday classroom
experience. This catch-22 has dogged educational reform for
well over a century, and as the educational technology
community moves on to the next charismatic technologies
Throughout these efforts, reformers often implicated the
technologies most charismatic at the time as integral to
achieving their visions. Some technological innovations, like
chalkboards, inexpensive paper, overhead projectors, and
computer-assisted standardized testing, have, in fact, had
large and lasting effects on schools. But some of the most
charismatic technologies have not. Many media and
(whether they be MOOCs or makerspaces), it will continue
to hamper the possibility of real, if incremental, change.
In glossing over the many complexities of childhood with universalizing concepts like ‘yearners’ and ‘schoolers,’ Papert and others in OLPC drew on cultural narratives about what childhood should mean and what constitutes a good one, narratives that have become deeply rooted in American culture and reflect American cultural values such as individualism and (certain kinds of) creativity. These actors have generalized from their experiences with largely white, middle-class American youth – or from their own idiosyncratic childhoods – that all, most, or at least the most ‘intellectually interesting’ [49:44–50] children are innately drawn to tinkering with computers and electronics, or in Papert’s words, ‘thinking like a machine’ [48]. As a result, their narratives not only glorified childhood; they specified the kinds of learning that children are naturally inclined to do.
What is the alternative to this catch-22 of charismatic
education reform? Incremental reforms, what Tyack and
Cuban call “tinkering,” are more effective in the long-term,
even if they are not charismatic. “It may be fashionable to
decry such change as piecemeal and inadequate, but over
long periods of time such revision of practice, adapted to
local contexts, can substantially improve schools,” they
explain. “Tinkering is one way of preserving what is valuable
and reworking what is not” [62:5].
The case of OLPC shows us why it is dangerous to ignore the
origins of charisma: one risks being perpetually blinded by
the newest charismatic technology as a result. Indeed, those
who pin reform efforts on charismatic technologies are often
caught in a catch-22 where their projects are cut short
whether they register success or not, because the promises of
charisma are ultimately unattainable.
Through an analysis of One Laptop per Child and a survey of
past charismatic technologies, this paper exposes the
ideological stakes that underpin charisma – the ability for
technologies (or, as originally theorized, people [68]) to
evoke feelings of awe, transcendence, and connection to a
greater purpose. It shows how the promises that charismatic
technologies make are ideologically charged, how they can
be identified, and what is at stake when we are drawn in.
While it may be easy to discount examples from the past
given the perspective and tarnish of time, taking a historical
perspective on charismatic technologies shows us how
conservative charisma actually is – the same kinds of
promises have been made over and over, with different
technologies – and also how unattainable its promises are.
Moreover, as long as we are enthralled by charisma we might
actually prevent these technologies from becoming part of
the messy reality of our lives, rather than helping us
transcend it. We must remember that charisma is ultimately a
conservative social force. Even when charismatic
technologies promise to quickly and painlessly transform our
lives for the better, they appeal precisely because they echo
existing stereotypes, confirm the value of existing power
relations, and reinforce existing ideologies. Meanwhile, they
may divert attention and resources from more complicated,
expensive, or politically charged reforms that do not promise
a quick fix and are thus less ‘charismatic.’
Examining charisma can help us understand its effects and,
through understanding, counter them. This analysis is meant
to help ‘make the familiar strange,’ in Stuart Hall’s words
[22], helping researchers in human-computer interaction to
identify the ideological commitments of the technology
world. Analyzing a technology’s charisma helps us recognize
ideologies that may otherwise be as invisible as water is to
the proverbial fish. While concrete design suggestions are not
the goal of this paper, this analysis may also help designers –
who often hope to ‘do good’ through technological
intervention in ways similar to those analyzed here – identify
their own ideological commitments.
Still, it is also important to recognize that charisma plays an
important role in smoothing away uncertainties,
contradictions, and adversities. As such, the purpose of this
paper is not to ‘prove’ charisma ‘wrong.’ What matters is
whether a technology’s charisma is ideologically resonant –
whether it taps into deep-seated cultural values and identities,
as OLPC does with childhood, school, play, and technology.
This paper’s intention is likewise not to advocate for an
eradication of ideologies; just as it is impossible to escape the
bounds of our own subjective points of view, so too is it
impossible to operate entirely outside of the frameworks of
ideologies. However, a large body of Marxist theory (e.g. see
[22]) notes that becoming cognizant of the ideological
frameworks in which we operate allows us to evaluate
whether they are really serving the purposes we hope or
assume they are. Only by way of this cognizance can we shift
them if they are not.
We saw that, to OLPC’s contributors, the charisma of the XO laptop affirmed their belief in the power of computers in
childhood, imposed coherence and direction on their work,
and gave them reasons to push back against doubters, even in
the face of what might otherwise feel like overwhelming
odds or ample counterevidence [2]. On the other hand, we
saw that charisma could also have a blinding effect. It
prevented those on the project from recognizing or
appreciating ideological diversity, much less constructively
confronting problems of socio-economic disparity, racial and
gender bias, or other issues of social justice beyond most of
their personal experiences [47]. OLPC’s XO laptop was
charismatic to them because it mirrored their existing
ideologies and promoted a social order with them at the top.
I am indebted to many friends and mentors for helping me
develop the ideas presented in this paper. In particular, Fred
Turner and Eden Medina pushed me to articulate the
importance of charismatic technologies, and Paul Dourish,
Lilly Irani, Daniela Rosner, Ingrid Erickson, and Marisa
Brandt helped me flesh out the salient features of charisma. I
also thank all of the participants involved in my research on
One Laptop per Child.
1. Madeleine Akrich. 1992. The De-Scription of Technical Objects. In Shaping Technology, Wiebe E. Bijker and John Law (eds.). MIT Press.
2. Morgan G. Ames, Daniela K. Rosner, and Ingrid Erickson. 2015. Worship, Faith, and Evangelism: Religion as an Ideological Lens for Engineering Worlds. Proc. CSCW 2015, ACM Press.
3. Morgan G. Ames and Daniela K. Rosner. 2014. From drills to laptops: designing modern childhood imaginaries. Information, Communication & Society 17, 3, 357–370.
4. Morgan G. Ames, Mark Warschauer, and Shelia R. Cotten. 2016. The “Costly Lesson” of One Laptop per Child Birmingham. In When School Policies Backfire, and What We Can Learn, Michael A. Gottfried and Gilberto Q. Conchas (eds.). Harvard Education Press.
5. Morgan G. Ames. Learning Consumption: Media, Literacy, and the Legacy of One Laptop per Child. The Information Society, in press.
6. Morgan G. Ames. 2014. Translating Magic: The Charisma of OLPC’s XO Laptop in Paraguay. In Beyond Imported Magic: Essays in Science, Technology, and Society in Latin America, Eden Medina, Ivan da Costa Marques, Christina Holmes (eds.). MIT Press, 369–407.
7. Walter Bender. 2007. OLPC: Revolutionizing How the World’s Children Engage in Learning.
8. Paulo Blikstein. 2013. Seymour Papert Tribute at IDC 2013 Panel Avant-propos: Thinking about Learning, and Learning about Thinking.
9. Chris Blizzard at OLPC Analyst Meeting. 2007.
10. Michael Burawoy. 1998. The Extended Case Method. Sociological Theory 16, 1, 4–33.
11. Heidi A. Campbell and Antonio C. La Pastina. 2010. How the iPhone Became Divine: New media, religion and the intertextual circulation of meaning. New Media & Society 12, 7, 1191–1207.
12. Howard P. Chudacoff. 2007. Children at Play: An American History. NYU Press.
13. Larry Cuban. 1986. Introduction. In Teachers and Machines: The Classroom Use of Technology Since 1920. Teachers College Press.
14. Larry Cuban. 2001. Oversold and Underused: Computers in the Classroom. Harvard Univ. Press.
15. Sidney I. Dobrin. 1997. Constructing Knowledges: The Politics of Theory-Building and Pedagogy in Composition. SUNY Press.
16. Susan Douglas. 2004. Listening In: Radio and the American Imagination. U. Minnesota Press.
17. Paul Dourish and Scott D. Mainwaring. 2012. Ubicomp’s colonial impulse. Proc. UbiComp ’12, ACM Press.
18. James Ferguson. 2006. Global Shadows: Africa in the Neoliberal World Order. Duke Univ. Press.
19. Batya Friedman, Peter H. Kahn, and Alan Borning. 2006. Value-Sensitive Design and Information Systems. In Information Systems: Foundations, P. Zhang and D. Galletta (eds.). M.E. Sharpe, Inc.
20. Gerald Futschek and Chronis Kynigos (chairs). 2014. Constructionism and Creativity conference.
21. William Gibson. 1984. Neuromancer. Ace Books.
22. Stuart Hall. 1982. The Rediscovery of “Ideology”: The Return of the Repressed in Media Studies. In Culture, Society, and the Media, Michael Gurevitch, Tony Bennett, James Curran, Janet Woollacott (eds.). Methuen.
23. Kristen Haring. 2007. Ham Radio’s Technical Culture. MIT Press.
24. Carlene Hempel. 1999. The miseducation of young nerds.
25. Benjamin Mako Hill. 2002. The Geek Shall Inherit the Earth: My Story of Unlearning.
26. Diane M. Hoffman. 2003. Childhood Ideology in the United States: A comparative cultural view. International Review of Education 49, 1-2, 191–211.
27. Lilly Irani, Janet Vertesi, Paul Dourish, Kavita Philip, and Rebecca E. Grinter. 2010. Postcolonial Computing: A Lens on Design and Development. Proc. CHI 2010, ACM Press, 1311–1320.
28. Kevin Kelly. 1999. Nerd theology. Technology in Society 21, 4, 387–392.
29. Scott Kirsch. 1995. The incredible shrinking world? Technology and the production of space. Environment and Planning D: Society and Space 13, 5, 529–555.
30. Rob Kling. 1996. Computerization and Controversy: Value Conflicts and Social Choices. Morgan Kaufmann.
31. Pui-Yan Lam. 2001. May the Force of the Operating System be with You: Macintosh Devotion as Implicit Religion. Sociology of Religion 62, 2, 243–262.
32. Annette Lareau. 2003. Unequal Childhoods: Class, Race, and Family Life. U. California Press.
33. Bruno Latour. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford Univ. Press.
34. John Law. 1992. Notes on the Theory of the Actor Network: Ordering, Strategy and Heterogeneity. Systems Practice 5, 4, 379–393.
35. Steven Levy. 1984. Hackers: Heroes of the Computer Revolution. Anchor Press/Doubleday.
36. Donald McIntosh. 1970. Weber and Freud. American Sociological Review 35, 5, 901–911.
37. Scott Forsyth Mickey. 2013. Constructing The Prophet: Steve Jobs and the Messianic Myth of Apple.
38. Steven Mintz. 2004. Huck’s Raft: A History of American Childhood. Harvard Univ. Press.
39. Vincent Mosco. 2005. The Digital Sublime: Myth, Power, and Cyberspace. MIT Press.
40. Albert M. Muniz Jr. and Hope Jensen Schau. 2005. Religiosity in the Abandoned Apple Newton. Journal of Consumer Research 31, 737–747.
41. Nicholas Negroponte. 1996. Being Digital. Vintage Books.
42. Nicholas Negroponte. 1998. One-Room Rural Schools. WIRED Magazine.
43. Nicholas Negroponte. 2006. No Lap Un-Topped: The Bottom Up Revolution that could Redefine Global IT.
44. David Nye. 1996. American Technological Sublime. MIT Press.
45. Amy F. Ogata. 2013. Designing the Creative Child: Playthings and Places in Midcentury America. U. Minnesota Press.
46. Ruth Oldenziel. 2008. Boys and Their Toys: The Fisher Body Craftsman’s Guild, 1930-1968, and the Making of a Male Technical Domain. Technology and Culture 38, 1.
47. Nelly Oudshoorn, Els Rommes, and Marcelle Stienstra. 2004. Configuring the User as Everybody: Gender and Design Cultures in Information and Communication Technologies. Science, Technology & Human Values 29, 1, 30–63.
48. Seymour Papert. 1980. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books.
49. Seymour Papert. 1993. The Children’s Machine: Rethinking School in the Age of the Computer. Basic Books.
50. Seymour Papert. 2006. Digital Development: How the $100 Laptop Could Change Education. USINFO Webchat, via OLPC Talks.
51. Roy Pea. 1987. The Aims of Software Criticism: Reply to Professor Papert. Educational Researcher 16, 5, 4–8.
52. Brett Robinson. 2013. Apple and the religious roots of technological devotion. LA Times.
53. Daniela K. Rosner and Morgan G. Ames. 2014. Designing for Repair? Infrastructures and Materialities of Breakdown. Proc. CSCW 2014, ACM Press.
54. Howard P. Segal. 2005. Technological Utopianism in American Culture. Syracuse Univ. Press.
55. Tim O’Shea and Timothy Koschmann. 1997. Review, The Children’s Machine: Rethinking School in the Age of the Computer (book). Journal of Learning Sciences 6, 4, 401–415.
56. William A. Stahl. 1999. God and the Chip: Religion and the Culture of Technology. Wilfrid Laurier Univ. Press.
57. Maria Stavrinaki. 2010. The African Chair or the Charismatic Object. Grey Room 41, 88–110.
58. Douglas Thomas. 2002. Hacker Culture. U. Minnesota Press, Minneapolis, MN.
59. Anna Tsing. 2000. The Global Situation. Cultural Anthropology 15, 3, 327–360.
60. Fred Turner. 2006. How Digital Technology Found Utopian Ideology: Lessons from the First Hackers Conference. In Critical Cyberculture Studies: Current Terrains, Future Directions, David Silver, Adrienne Massanari, Steven Jones (eds.). NYU Press.
61. Fred Turner. 2006. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. U. Chicago Press.
62. David Tyack and Larry Cuban. 1995. Tinkering toward Utopia: A Century of Public School Reform. Harvard Univ. Press.
63. Wayan Vota. 2007. Is OLPC the Only Hope to Eliminate Poverty and Create World Peace? OLPC News.
64. Wayan Vota. 2009. Negroponte Throws XO on Stage Floor “Do this with a Dell” - Everyone a Twitter. OLPC News Forums.
65. Wayan Vota. 2010. Mesh Networking Just Doesn’t Work on XO-1.5 Laptop. OLPC News.
66. Wayan Vota. 2011. XO Helicopter Deployments? Nicholas Negroponte Must be Crazy! OLPC News.
67. Mark Warschauer and Morgan G. Ames. 2010. Can One Laptop per Child Save the World’s Poor? Journal of International Affairs 64, 1, 33–51.
68. Max Weber. 1947. Charismatic Authority. In The Theory of Social and Economic Organization, Talcott Parsons (ed.). The Free Press.
69. Langdon Winner. 1997. Technology today: Utopia or dystopia? Social Research 64, 3, 989–1017.
70. Geraldine Youcha. 1995. Minding the Children: Child Care in America from Colonial Times to the Present. Da Capo Press.
71. Joseph L. Zornado. 2001. Inventing the Child: Culture, Ideology, and the Story of Childhood. Garland.
An Anxious Alliance
Kaiton Williams
Information Science
Cornell University
[email protected]
This essay presents a multi-year autoethnographic perspective on the use of personal fitness and self-tracking technologies to lose weight. In doing so, it examines the rich and
contradictory relationships with ourselves and our world that
are generated around these systems, and argues that the efforts to gain control and understanding of one’s self through
them need not be read as a capitulation to rationalizing forces,
or the embrace of utopian ideals, but as an ongoing negotiation of the boundaries and meanings of self within an anxious
alliance of knowledge, bodies, devices, and data. I discuss
how my widening inquiry into these tools and practices took
me from a solitary practice and into a community of fellow
travellers, and from the pursuit of a single body goal into a
continually renewing project of personal possibility.
Author Keywords
self-tracking; autoethnography; experience; quantified-self
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous.
At the beginning of 2012 I decided that it was time to lose
some weight, and I turned to a set of digital health & fitness
tools to help me on my way. I managed to lose the weight,
but my journey with these systems continued and I substantially changed my patterns and everyday experiences in ways
that ranged from the pedestrian to the sublime. Although this
project was not initially intended as research but as part of an
effort to address a real personal need, I believe that the journey has provided me with a new professional perspective on
systems like these, and has taken me from a solitary practice
into a community of fellow travellers, and from the pursuit
of a specific body goal to a continually renewing project of
personal possibility.
I embraced a personal, existential crisis as an opportunity
to try solutions that I had been critical of professionally or
avoided personally. I considered this as an opportunity to develop an experiential understanding that was tied to a real personal need. Surely, I told myself at the outset of this project,
this would be better than critical analysis lobbed in from the outside.
The route would prove uncertain. I wanted to be in better
control of my self and I wanted to understand how it felt to be
healthy and fit. Yet, as I began to make progress towards my
goals, I wondered about the changes in mind and body that
were accompanying my technologically guided, deliberate
movement. Was the quest for more information about myself
and my immediate world helping me find either? And how
might any acquired tracking tools, and the ability to navigate
them, fit into my understanding of what it meant to be a modern connected person? Furthermore, where did the approach I
was using—experiments and experiences within one’s self—
fit within the toolkit of methods in use within HCI?
My retelling of that experience is presented here as a personal
essay, building on Bardzell’s articulation of the value of the
essay form for a CHI audience [2], and inspired by Taylor’s
essay on the politics of research into cultural others in HCI
[30]. My hope is that placing my self directly in the frame of
analysis will highlight a tangle of personal and professional
interests that, I believe, should be embraced as we interact with, design, and sustain a discourse on systems that are
ever more integrated into our everyday lives. These frames in
which we pursue our health and fitness are neither fixed nor
external but are created by, and depend on, our perceptions
and beliefs, social norms, and our collective sense of technological possibility.
These unavoidably intimate concerns lend an approach centred on the self-as-site, self-as-subject a ready allure. The
autoethnographic approach used here (and to various degrees
previously in [14, 8, 18]), with its synthesis of postmodern
ethnography and autobiography, is intended to call into question the objective observer position, the realist conventions
of standard ethnography, and to disturb notions of the coherent, individual self and its binaries of self/society and objective/subjective [26]. In doing so, it has the potential to provide
a conduit to what McCarthy and Wright refer to as the “rich
interior world of felt life” necessary for producing empathetic
understanding [36]. But while undoubtedly productive, the
approach presented significant challenges—which manifest
here—both in the shaping of a research agenda through the
course of a personal transformation, and in the reporting of
those results.
My primary goal is to generate an empathetic engagement
with my experience, and through it, a connection with what
could be the concerns and life paths of others in similar situations. But I intend my account of my experience to be an
illustration, not general evidence or directly comparable to
that of others. This is a distinctly idiosyncratic account, but
it is through the production of idiosyncratic accounts that I
believe we can find the empathy—as a view of our lives as
entangled with others—that makes for a rich understanding
of the role of personal devices in our lives. This requires
highlighting, not suppressing, the vulnerability, motivations,
personality traits, contradictions and stumbles that made up
both my everyday experience and my attempts at analysing it.
This is not to say that matters of concern of a larger scale
are not important. It is vital that we understand the hybrid
and structural phenomena around health and big data that knit
together nations, social classes, forms of knowledge and inquiry with Latourian [16] hitches. In fact, if others were not
so ably investigating these matters, I would have no room to
write an essay about the everyday and unspectacular.
This is then an attempt to perform a balancing act of personal
revelation within the tent of scholarly discourse. And just
how big that tent should be is also a matter that this essay
takes into consideration. My contention is that, to faithfully
convey that messy, polyvocal world that finds its berth in personal experience, the tent’s boundaries need to be reshaped.
Just more than 1,377,584 measured calories ago I embraced
a simple goal: to lose weight. This is one made, I’m certain,
by millions of others every day. Yet, more than any high-minded investigation into technologies, or into a community
and its practices, it was this that drove my interest and kept
me going.
In reflecting on the essay’s denigrated place in the CHI/HCI
canon, Bardzell suggests that the essay is the best form for doing just this work: for revealing “a process of thinking” that
is still “shaped and crafted as a work of writing”[2]. He argues that the essay is positioned as only opinion within CHI,
but he places it as part of a millennia-long tradition of trying
to deliver exactly the kind of analytic understanding of experience that the field solicits. And that interpretative efficacy
is delivered through the playing up of a sense-making subject apprehending “an object of study...[that] must therefore be, in some sense, confused, incomplete, cloudy, contradictory.” This is necessarily the state of affairs in situations like
these, where the subject and subjectivity are the sources of insight.
I made the decision after the Christmas holidays, when my
sister had come to visit. I prepared a feast and we dined sumptuously for several days. At the end of the visit, we went to
the Pacific shore and took a few photos on instant film. It was
then, as I watched those photos develop, that I realised how
bad things had gotten. I had been slowly but steadily gaining weight even though I thought I was in control of my diet
and getting enough exercise. I looked, in my own estimation,
This is more than a secondary note about construction. The
essay’s production is embedded in the transforming frame of
the journey it describes and reflects that. The shifts in tense
and tone that follow reflect concerns that transform, in stages,
from a matter of physicality, to one of self-understanding, and
into an ontological project of expanding possibility pursued
in alliance with others.
At the same time, I remain faithful to an exploration of the pedestrian, and what Georges Perec referred to as the “infra-ordinary” or “endotic”: the small, quotidian experiences that accumulate and become our lives. Perec charged his readers to join him and together “found our own anthropology, one that will speak about us, will look in ourselves for what
for so long we’ve been pillaging from others” [24]. He contrasted this exploration of small matters of concern against
the big stories, the front page headlines and the spectacular.
His exploration was to be one of the seemingly obvious and
It matters little to me that these questions [of the everyday] should be fragmentary, barely indicative of a
method, at most of a project. It matters a lot to me that
they should seem trivial and futile: that’s exactly what
makes them just as essential, if not more so, as all the
other questions by which we’ve tried in vain to lay hold
on our truth.
Granted, as physical issues go, this was a minor calamity.
Still, I decided then that I needed to not only get back in
shape, but to find a place of steady balance. To be completely
honest, I cannot say “back.” I was only ever in shape for brief
moments. I had only passed, momentarily, through shape. I
wanted to remain there; wherever there was.
So as much as I loathed New Year’s resolutions, I decided to
get going a few days later. The immediate problem was that I
didn’t know how exactly to go about it. I knew the accepted
wisdom was that I should eat less and exercise more but I
didn’t really know what I should eat less of, or what exercises
I should perform.
I had done some research on health and fitness previously and knew that a range of consumer tools existed. I knew
about BMI, and even though I knew that number had contingencies, I knew that mine was too high. I knew what calories
were but I didn’t know how many I should be consuming. I
more or less still ate according to ratios and formulae mimicked from my childhood.
But life was different in the here and now. I was surrounded
by a stunning variety of foods, eating styles, and advertising
schemes. A steady compass was hard to come by. I knew
enough to avoid an all-cake diet, but where would I find salvation? Low-fat? Low-carb? I didn’t know which to follow,
or how to keep in line.
And what could I reasonably achieve? I had some vague goals
from a variety of sources but I wasn’t sure what good targets
and limits should be. And I definitely had little formal idea of
how to manage my consumption to meet them.
There was an overwhelming amount of information to consume, much of it contradictory. In contrast, many of the applications I surveyed offered simple solutions. Download, launch, follow the prompts, and hit the suggested numbers. This was immensely attractive, even with my professional reservations.
This remains something that I consider with a fair amount
of irony. I was among a group of researchers who had been
critical of this persuasive and reductive logic that powered
many of the popular diet control and tracking systems [25].
But now I found myself in need of them.
This was the time and place of the first of many conflicts. As a researcher seeking to modify my body, how could I be both a participant in systems like these—systems backed by what I saw as rationalist agendas threatening self-knowledge, intuition, and self-reliance—and still champion a resistance against them? How could I inhabit and speak from what [12] refers to as the normalised position of a diet participant, and one of a resister?
Would I be taking these systems and their ideologies down from the inside? Maybe, I told myself, I’d get to that after I got my 6-pack, washboard abs. Then it would be down with the tyranny of rational reporting systems and self-surveillance.
But first, I needed a truce with that inside me that I thought was working against my best interest. And it appeared that the best way for me to do that was to swallow my pride and call in a mediator. And so, like any well-meaning, well-connected person of my age, means, and technological background would, I chose the Internet and my smartphone to help me do so. And, as it turned out, there were many apps for that.
It was 22 months into my project, and I had exceeded my early goals only to find myself frustrated with stalled progress in newer, ever-more fantastic schemes of self-transformation. In the pursuit of pinpoint diagnosis, I lay still on a hard counter in a dark room breathing into a small device of bleeps and bloops as it tried to measure my basal metabolic rate. 1950. Should I be relieved at this number? How did I get here?
It began with a scale. I told it my age and my height, stepped on, and in reply it told me my weight and body composition: 5’8”, 174 lbs, and 24% fat. This was, for me, an uncomfortable conversation. I felt a stranger to this self.
A week later, I downloaded and started using Lose It!, a weight-loss and calorie tracking application for the smartphone and web. At the time, it had several features broadly representative of dominant calorie tracking and weight loss systems: the ability to track overall calories along with nutrients, budgeted according to reported weight and estimated metabolism; a large database of food items and the ability to scan packaged, bar-coded foods into that catalog; and the ability to further extend that local database by creating new ingredients or saving food combinations as recipes. It also had two other features that I avoided, primarily around motivation. One was a simple social network designed to promote encouragement between users and to route encouragement to and from Facebook & Twitter, and the other was a sponsored motivation system of reminders, with badges (Figure 1) awarded to users at particular milestones, for example: the 10lb club; the regular user (every day for 2 weeks); the hardcore (26 weeks); the exercise buff, hound, and king; and, curiously, the Tea Time badge (“tea powered motivation brought to you by SPLENDA ESSENTIALS No Calorie Sweetener Products”) for those who recorded enough cups of tea.
Figure 1. A sampling of the author’s badges
I never used these features; they were antithetical to why I
used the tool in the first place. I had no desire to share my
progress with others, and motivation through competition or
badges held little appeal. I learned to carve out a space between calls to upgrade for premium features and the emails
announcing my new badges.
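The budgeting logic these systems share—a daily calorie allowance derived from reported weight and estimated metabolism—can be sketched roughly as follows. This is an illustrative reconstruction, not Lose It!’s actual algorithm: it uses the published Mifflin-St Jeor equation for resting metabolism, a standard sedentary activity factor, and the conventional (if contested) 3500 kcal-per-pound rule; the age used in the example is a guess, not a figure from this essay.

```python
# Illustrative sketch only: NOT the actual algorithm of Lose It! or any
# other tracker. Shows how a daily calorie budget can be derived from
# reported weight and an estimated metabolism.

def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, male=True):
    """Estimated resting metabolic rate (kcal/day), Mifflin-St Jeor."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years
    return base + (5.0 if male else -161.0)

def daily_budget(weight_lb, height_in, age_years, male=True,
                 activity_factor=1.2, loss_lb_per_week=1.0):
    """Calorie budget: estimated maintenance minus a deficit for the
    target weekly loss (3500 kcal is the conventional estimate for one
    pound of body fat, so 1 lb/week ~ a 500 kcal daily deficit)."""
    weight_kg = weight_lb * 0.4536
    height_cm = height_in * 2.54
    maintenance = bmr_mifflin_st_jeor(
        weight_kg, height_cm, age_years, male) * activity_factor
    deficit = loss_lb_per_week * 3500.0 / 7.0
    return round(maintenance - deficit)

# The essay's starting stats (5'8", 174 lb); the age is hypothetical.
print(daily_budget(weight_lb=174, height_in=68, age_years=30))
```

The point of the sketch is how little the user supplies: a few numbers in, a single authoritative-looking daily target out, with all the contested assumptions (activity factor, 3500 kcal rule) hidden inside.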
This work of finding spaces in a single app expanded into
the formation of alliances, and then a peace-keeping mandate
within an ad hoc assembly. As the weeks and months went
by, I auditioned, added, and managed many other mobile applications, physical tools, and fragments of knowledge. To
name a few: Calorie Count, a nutritional database that I would
consult to convert food into calorie and macronutrient values; Endomondo, my exercise tracking app that would report calories burned and detailed performance numbers from
my bicycle rides; the Moves app and a FitBit device helped
me keep track of places visited and steps taken; Gym Hero,
PUSH, and Strong helped me track weightlifting workouts.
Finally, a small scale, combined with measuring cups and
spoons, provided predictability in the kitchen.
With this shifting team I set out to change my life, navigating
a landscape of other tools and systems, online discussions,
and professional advice. I consulted with trainers, physicians
and nutritionists. I read internet forums of every kind. I
scheduled physicals and full body density scans in cutting-edge machines (see figure 2).
Digital and paper journals helped me keep track of and reflect on my struggles and progress. Treating this as an ethnographic encounter with my self, I began by taking frequent
field-notes, recording and reflecting on goals, my progress
towards them, and my developing relationship with these systems. I kept these notes in parallel with my expanding data
recording, but those bars, curves, and data points were my
reflections as well, my successes and trepidation encoded in
their patterns and rhythms. The just-so upward slope of my body-weight readings weeks after a holiday remains to me as evocative as a madeleine.

Figure 2. The results of a Dual-energy X-ray absorptiometry (DXA) scan
As I recorded I worried about how to explain and recount
an experience steeped in numbers yet both fleeting and embodied. Should I try to separate my selves as recorder and
recorded? In [20, 36], the authors channel Bakhtin and
Dewey to hold that this sort of aesthetic experience might be
the paradigm for all experience, reflecting on Dewey’s definition of experience as a double-barreled “unanalyzed totality”
that “recognises in its primary integrity no division between
act and material, subject and object”[20]. [33] describes this
as a circularity, “where knowledge comes to be inscribed by
being with the ‘designing-being’ of a tool, thus in turn modifying (designing) the being of the tool-user”. I relaxed.
I drew inspiration from Neustaedter and Sengers’ investigation of autobiographical design as a method that can provide detailed and experiential understanding of a design space
[23]. My collection and curation of this menagerie of devices, applications, algorithms and APIs, and my fashioning
and querying of reports and visualisations revealed itself as a
form of design. I didn’t write the software in my conglomerate but I do understand my role here as a designer, and this
as a form of research through design [10, 37]. I am designing myself, in collaboration with the systems I’ve cobbled together. In this we exist as anxious allies: body; device; data;
cloud; others; algorithms; entrepreneurs.
This design is critical, reflective, ontological, and tinkering—
what [33, 34] position as fundamental to being human
wherein “we design...we deliberate, plan and scheme in ways
which prefigure our actions and makings [and] in turn...are
designed by our designing and by that which we have designed” [33]. This ontologically oriented understanding of
design is “a philosophical discourse about the self—about
what we can do and what we can be” necessarily “both reflective and political, looking backwards to the tradition that
has formed us but also forwards to as-yet uncreated transformations of our lives together” [34]. This design is building as
nurturing, not just constructing. And as we pervade the things
we build, they in turn pervade us in ways more fundamental
than any single designer could consciously intend [33].
Rearrange the icons on my home screen and my thumb is
lost. It quivers, bent searchingly and confused. We had
a system, it seemed to say. (fieldnote excerpt)
I enjoyed the everyday accounting. Entering in Lose It! the
food I ate helped me reflect both on my food choices and on
the events that surrounded them. The manual repetition of entries underwrote this: launch app, my thumb to the top corner
of screen; “Add Food”, type in food; choose the amount, arrange ingredients and meals in orders temporal and strategic.
40 taps and scrolls to enter a meal of broccoli, roasted
chicken, and an arugula salad dressed simply. The result:
461 calories. An immediate question might be, could this
be faster? And maybe in the beginning, I would have wanted
that. I can even imagine someday an app will allow me to
take a picture of my food and return structured data in a flash.
But even the seemingly tedious has its attractions. Out for a
meal, far away from measuring cups and scales, I gradually
found play in reverse-engineering a dish back into composite
ingredients and macronutrients.
The practice of recording is soothing and has its charms. I
could reflect on the day’s flow, wordlessly. I could watch
the numbers accumulate and balance, as charts filled in and
lines extended; a retelling of my day in a new narrative form.
I found myself launching the app several times a day, often
only to scroll laps through the day’s huddled data.
Untended data pervades me as anxiously as an untended garden might, resulting in a “thinking as being with” [33]. The
daily weeding, pruning and nurturing of data not only pays
dividends over the long term but is rewarding and meditative in its own right. It is action with rewards at once past,
present, and future. I take a quiet pleasure from lying in bed
at the end of the day and looking up and confirming locations
I’ve visited, all tracked in Moves. I rarely look back at the
data analytically; rarely seek to recall where I was on a Monday night, 8 weeks before. But I’m happy to know that data’s
there, to feel its weight accumulating over time: the results of
making paths in new cities, and retracing old ones in a familiar geography. They are all constellations in my private data sky.
An idiosyncratic self-knowledge
Despite my success and its attendant numeric pleasures, I’ve
yet to enjoy contemplating my self as a precarious balance of
inputs and outputs: what I eat versus how much I exercise.
I would have preferred to accomplish my self-transformation
within broader measures, and I still long for that: to comprehend my body in longer and less granular scales—seasons
instead of hours, calories banished altogether. A friend, at
once svelte and blissfully ignorant, remarked in a conversation about my calorie goal that day: “I have no idea of calories / What amounts to 10 much less 1500.” Even with my
fitness goal surpassed, my body seemingly in the balance I
sought, I could only reply: “That’s actually beautiful.”
Because it’s as hard to imagine going back as it is to imagine going forward. I can’t un-know the weight of things.
As Dewey notes, “every experience both takes up something
from those which have gone before and modifies in some
way the quality of those which come after” [7, p27]. I had
come to realise that an attention to ever smaller matters of
concern could yield effective results, but at the same time I
couldn’t help but imagine myself bound to a future where I
could scarcely ignore them. At least, not if I wanted to remain in my present condition. To know was now, in many
ways, to be bound to manage and optimise. I was coming
to view the insights my tools had begun to provide me, and
my developing dependence and relationship with them, as unsteady.
But aren’t we all wrapped up in similarly conflicted
relationships—what Bourdieu referred to as “the Benthamite
calculation of pleasures and pains, benefits and costs” [5]?
Our modern life is as much defined by asceticism and denial
as by gratification. To reach our goals we undertake broad
changes in our lives: no meat; no dairy; no to the second
drink; lock our WiFi router in a safe when we need to focus
[28]. Why should this relationship be any different?
Why did I continue with a program that contradicted my ideals and left me in an unsteady position? For one, it worked,
and worked quickly. The efficacy of the approach was difficult to deny. After the first 4 weeks I had lost 5.6 lbs and I felt
that I was arriving at a workable understanding of how what
I ate affected my body. I felt healthy, and I was performing
small experiments and getting valuable feedback: tuning the
right amount of fibre, getting enough protein, monitoring my
sugar intake. Still, I would tell myself that there were a number of other reasons that could have explained the changes to
my body: I had a renewed awareness of my self and what I
was eating; I was feeling less stressed now that I felt more
in control; I was starting to use exact measures rather than estimates.
The numbers and their precision continued to have a mesmerising effect over me. Intellectually, I had serious concerns
about the value of calories as the sole or even dominant basis
for weight management—that all calories were equal—and
though I tried to focus more on the ratios of nutrients or on
the quality of my food, Lose It!’s calorie budget readout, with
its scary red zone indication, remained a metaphysical hurdle
difficult to clear. Even as my attachment to these tools has
waned, I still view that threshold warily, finding soothing reward in almost always being several calories under it—safe
in the green zone (see figure 3)—even though I both theoretically and practically know it to be imprecise and perhaps even arbitrary.
This was a surprising revelation. I knew intellectually that
the numbers displayed in my app and used behind the scenes
in its calculations were based on rough estimates or theories.
And I knew how futile the notion of managing one’s self to
precise calorie budgets must be. I knew that, by describing our bodies as precise systems that can go out of sync over small discrepancies, the health industry and app creators benefit: their tools and systems are positioned as indispensable and necessary guides in our lives. Yet, months after
that internal conversation, I continued to ask myself, “can our
bodies really optimise down to a few calories?” while measuring out exactly 7 evenly-sized almonds for a quick snack.
I had begun to read non-weight-related features of my life
through what [13] refers to as the “prism of weight loss.” My
days had been reshaped through those units of measure. My
tools became my oracles, and I consulted them before planning meals or an evening out. Through them I acquired a
fluency in the forms and verse of macronutrients.
Over the months I steadily made my life more calculable by
streamlining my diet to in turn streamline how I input data
into my tools. I prioritised certain foods and recipes, and
avoided others to work best within the capabilities of the food
database. I weighed my food or measured it by the fluid ounce.

Figure 3. Calorie thresholds: days marked green are those with calories at least 1% below the threshold; those marked red are more than 1% over it; those in cream are within ±1% of the app's suggested goal

I groaned at foods that weren't in the database or
occasionally at recipes that were too complicated to assess
calories per serving once complete.
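The green/red/cream colouring described in the Figure 3 caption amounts to a simple banding rule. A minimal sketch, assuming the app compares logged calories against its suggested goal with a ±1% band; the function name and exact boundary handling are assumptions:

```python
# A sketch of the day-colouring rule from Figure 3: green when logged
# calories are at least 1% under the app's suggested goal, red when more
# than 1% over, cream when within +/-1%.

def classify_day(calories: float, goal: float) -> str:
    if calories <= goal * 0.99:   # at least 1% below the goal
        return "green"
    if calories > goal * 1.01:    # more than 1% over the goal
        return "red"
    return "cream"                # within +/-1% of the goal

# e.g. against a 1500 kcal goal:
# classify_day(1400, 1500) -> "green"
# classify_day(1510, 1500) -> "cream"
# classify_day(1600, 1500) -> "red"
```

The narrow cream band is what makes the readout feel so binary in practice: a day sits either safely under the threshold or conspicuously over it.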
But still I found freedom in this calculation and control and
room in its reduction. Regardless of whether the I/O (calories in vs. calories out) model was correct over the long term, I
had found reassurance in its immediate results. Before I began this study I didn’t know how many calories I was eating
overall or how many calories were even in particular foods. I
wasn’t even aware of the mix of nutrients that I was eating.
The apps provided me with expanded capabilities so that I
was no longer eating “blindly.” Pushed to spend more time
reading nutritional labels and scanning food databases to enter into my tools, I realised how little I knew about what I had
been eating.
I had developed a strong sense of fidelity to my accumulated system; an ordained-from-Logos desire to keep the record
true (read: often, precise). At the same time, I knew that I occasionally kept a false dialogue: over-estimating food items,
or under-entering exercise. Why would I perform these small
acts of rebellion even though they might ostensibly be against
my self-interest? Why did I trust the system when it confirmed my “bad behaviour” but conspire to undermine its reports of when I was “good?” Even as I continued performing them, I wondered if these minor insurrections were how I
came to terms with what seemed like an increasing rationality
in my life.
Together, we—my system and I—had co-constructed a digital model of my self that I fully bought into and managed:
managing myself, it seemed later, by proxy. Within this, I
had found a practical way to consume much of the advice and
research I would come across. With my model in hand, I felt
a relationship between my data and my body, though I struggled to decide how much of that connection was real or imagined. When I didn’t eat “enough” protein I felt weaker, and
when I had too much sugar I felt fatter. These were delayed
reactions—a re-reading of my body from the model. Had I
sacrificed one balance for another, or balanced my model instead of my self? Did that final distinction even matter?
Stepping Back

As my study's initially projected run was coming to an end, my earlier fear of gazing too deeply at the messiness and viscera of being human was replaced with a gnawing worry about the feasibility of going it alone, even with my developing intuition. My reasons for continuing with this project were complicated: personal and professional; subject, steward, and researcher. Projects must end, but what about the relationships I had developed within my alliance?

What would I do without my calorie tracker? How would I wean myself away, even now that I had met some of the goals that I had developed? I still think a lot about the transformation. I am now "in shape." I feel strong and healthy. This is a place that I had scarcely considered before, but these numbers seem to hold as much fear as pleasure. Could I maintain this state without "outside" help?

Relying on my existing alliance was no sure path to reassurance either:

Back around 153, and it felt like it took too much doing. I'm not sure if I would have naturally made it back there or what the cause was. Was it a couple weeks eating [Lose It!'s] recommended 2000 calories? The flour of Denmark? The week in Seattle? There's a lot that happens in between the numbers.

[Two months later...] Confused with my weight. Moved too many variables and I don't know how to unwind it all. (fieldnote excerpt)

And in the present, if I do cast these systems aside, would doing so really lead to any better engagement with my self? Granted, I am far better at mechanical feats of estimation and proportion, handily verifying my estimates when I enter them. And when I don't track, I survive, mostly without worry, because I've internalised and made habit of my food behaviours. The notion of a pure, originary, sealed-off self felt increasingly untenable.

As my project shifts from daily tracking and self-evaluations, and I continue to pursue personal goals, I realise that I can never truly divorce my personal and professional interests. I feel forever embroiled in an all-encompassing exploration of personal possibility and continuing improvement that pays no heed to disciplinary borders.

Yesterday, after the gym, the scale showed 152.0 lbs and 15% body fat. My weight has been hovering from 150 to 152 but with the fat level, it's safe to say this is the trimmest I've ever been. It's frightening to now reflect on what I carried around on my frame before. The task is now to keep keeping this weight off and to cement my progress with a good workout regimen. It's going to be a long fight but hopefully a happy one. (fieldnote excerpt)

After a year, I had lost 24 lbs, but my will to improve continued and expanded beyond weight loss. On one morning, after some reading on the Internet, news of Anterior Pelvic Tilt¹ arrived in my life. I wrote later:

This is the weekend that I became aware, and seemingly a victim, of anterior pelvic tilt. The exercises to correct this have begun. Like many conditions, I doubt if I have it, though it would explain a great deal. And, as I've learned about other conditions, learning that you have something becomes an exhausting ride towards fixing it. (fieldnote excerpt)

¹ A posture defect

Figure 4. Tracked days: both shades of green represent days that were tracked. Slices marked orange were days that I did not track. Days marked with a light green shade had their food entries edited retrospectively
The universe of self-improvement was unfolding before me.
Being healthy had become more than just avoiding obesity,
sickness or early death. It had become what Rose refers to
as the “optimization of one’s corporeality;” a model of fitness
that is “simultaneously corporeal and psychological” [27]. As
I came to revel, with some conflict, in my new-found self-understanding and weight loss, I also began to wonder what
I couldn’t achieve given the right companion tools. I began
weightlifting, tracking workouts and shifting my diet into performance mode; I experimented with tracking and optimising
my sleep. Did coffee improve my focus? Wonder, track, and
see. Track to peruse; then track to improve.
This new model of fitness and health embraces what Rose
refers to as “overall well-being,” inclusive of happiness,
beauty, sexuality and more. What he concludes is that,
spurred on by the privatisation of the health care industry and
state efforts at health promotion, our personhood is increasingly being defined not just by ourselves, but by a complex of
others; a complex that includes not just our immediate community but the application-creators, entrepreneurs and new
media companies that are building on our desire to improve
our selves. These collective decisions helped reshape how I
perceived and contested my possibilities and limits. My algorithms and applications fit within the frame of this developing
relationship with my self that was equally epistemological,
attentive, and despotic [9].
By allowing insight into bodily processes, systems like mine
increase our capacities and abilities: to critically assess “correct” functionality, to gauge inputs and outputs, and to help
us regulate our processes or our models of them. At the same
time though, as I felt, they encouraged our participation in
networks of power that constrain us and might decrease our
will or ability to exercise our capacities in the future [12].
That these networks of power are not just external (between
us and the world) but also internal (between us and our selves)
is what made a first-person evaluation so compelling yet simultaneously complex.
In my case, losing weight by controlling diet and exercise
required an ability to re-construct my body as obedient and
submissive through attention to small details and newly revealed units of measure. This is a “relentless surveillance”
[3] wherein I partitioned my body into new dimensions. Yet,
through this control and restriction, I became aware of exactly
what I consumed and its effects, and came to realise the positive embodied and ontological effects that can accompany
changing patterns in my life [13]. And the accomplishment
of those goals, even if those goals might have been subliminally directed by, and entangled with, a complex of external sources—corporate or social—was still an enabling act
of self-transformation that honed an ability to enhance capacities and to develop new skills. As my study progressed
through its early phases, I had taken seriously Thoreau’s admonition to avoid becoming “the tools of our tools” or putting
too much, and too often, between ourselves and a direct connection to the world through our intuition [31]. At the same
time, I had to admit that as my self-tracking practice developed, it helped me increase my awareness and intuitive sense
of my self and my world.
That I had the advantage of a first person perspective is important to understand, but that I was party to an experience
that transformed, over several months, the way in which I
acted on, thought of, and was affected by the world through
my body is equally important. Our bodies express our “double status as object and subject—as some-thing in the world
and as a sensibility that experiences, feels, and acts in the
world” [29]; yet the argument from that somaesthetic view
is that the goal of objectivity and a nobler (in the enlightenment sense) version of humanity produces a one-sided focus on intellectual goals that marginalises studies of the body.
But embodied considerations inform how we choose to form
our existence, assess our quality of life, and make intellectual
decisions. This is not just an anti-Cartesian position. Connecting with and attending to our bodies—even through these
systems—might be a way to recuperate the kinds of aesthetic
experience that can help us connect to others.
My experience has shown me that such a path is far from
straightforward. Criticisms of these technologies show how
they, or at least the imaginaries that give birth to them, promote and cement a division of mind and body, and work to
establish a self-legible body that is always at risk yet best addressed through objective “information” [32]. At the same
time, even as a prosthetic, that information system expanded
my ability to make sense of the world and in doing so increased my sense of freedom (see [17] for a related example). Even the idea of the prosthetic felt limited, and the disjunctions between body and mind, tool and human, felt hard
to maintain. Data, device, and person at times felt instead
transduced and interwoven, and enabling new possibilities,
capabilities, and connections to emerge [19]. Transduced and
transformed, not only can I not cleave the entanglements between my self as subject and researcher, I am no longer the
same person who began this study.
Taking Thoreau’s notion of deliberate living seriously meant
attending not only to the practical realities of my body and
its immediate environment, but also to the social relations
around me. For Thoreau, starting with the self was not a
retreat but a strategy for re-engagement with the world. He
wrote that “not till we are completely lost...not till we have
lost the world, do we begin to find ourselves, and realise
where we are and the infinite extent of our relations” [31].
My project changed my relationship to the world at large. In
small ways, my conversations with those around me often
centred on changes to my physical proportions, or changes
to my diet. To others, my dietary choices and my measuring
and data entry habits were, at best, curious, and, at their worst,
alienating. Still, my admission of these quirks and personal
challenges provided a common ground to discuss other struggles. They may not have been self-trackers or body-hackers
but they all had goals they wanted to achieve.
As I presented ongoing work and told friends and family
about my approach, responses were varied. Two responses
stood out. I was either pitied: seen as someone crippled
through an embrace of technology, bound by a system of measures and restraints. Or I was envied, my control of my diet
and exercise taken as signs of a far reaching mastery of my
larger life (no such power exists).
For those with whom I found similar ground, we were connected not because we used the same suite of tools, but because of a shared commitment to a subjunctive understanding
of our systems. This was a view of the tools at our disposal
as a range of approaches that provided craggy but passable
conduits to an old self, a new self, or even a better grasp of
a current self. Focusing on my personal, experiential knowledge provided a starting point for those conversations.
Thoreau’s pursuit of self-knowledge had become a vehicle
for self-confrontation and a vantage point from which to confront his world with fresh perspective. His deliberate stripping down of his life and activities was done in order to recreate a more empathetic one, and he built to political action
through this alternating retreat from and engagement with society. His concern was the development of a capacity to improve ourselves and our world. Our role as citizens to engage
a wider culture can begin from movements both big, and in
my case, small. I believe a cycling “retreat” into first-person
studies is a similar and valid methodology for engagement
with particular communities, one that fits within the frame
and support demonstrated by third-person and structural approaches such as [11, 25, 6].
My evolving relationship with my self and system has required me to be more critical of how I imagine my role. Is it as
a critic and “defender”? Is it as a willing (and perhaps pitied)
general participant? Is it as someone attempting a fused role
somewhere between? As critics, if we decide to defend users
against technology and social formations that we see as bad,
where do we position ourselves? Are we part of the laity?
Or are we standing apart, too savvy to be caught in the same
traps? Do our writing, strategies, and commitments reflect
that we too are in liminal positions? In [30], Taylor calls for
us to view technology and its design as a “means of participating in unfolding ways of knowing, being and doing” rather
than standing back here reporting on others out there. Might
deeply personal accounts help us focus on what he refers to as
the mutuality of our “unfolding enactments of ordering, classifying, producing and ultimately designing technology”?
When I began my project, I knew nothing of the Quantified
Self (QS), a community of users and makers of self-tracking
tools2 perhaps best known through its stated commitment to
“self-knowledge through numbers.” And yet, 2 years later,
with only slightly more of a clue, I found myself giving a
talk and running a workshop at an annual QS conference.
In the years since I began this study, popular conversation
and critical discourse on the QS movement in particular, and
on the wider practices of self-tracking in general, have increased, while its related devices and approaches have entered the mainstream. It’s now commonplace to meet people equipped with some form of automated personal tracking
clipped to waist or wrist.
An outsider view of this community might likely mirror the
view that others sometimes take of me, and that I had initially
taken of it: as narcissistic or self-indulgent; as trapped in a
corporate designer’s ideology; or as a member of a cabal determined to increase rationalisation in our lives and to make
data of our bodies. It was perhaps too easy for me as a critic
to see my charge as rescuing the naive from the great evil of
such an imagined neo-liberalist scheme of big data, surveillance, and individualising institutions.
Of course, there are reasons to decry and worry about such
an encroaching future of personal panopticons, and individuals alienated from supporting institutions that are divesting
themselves from social responsibility for issues of health. But
instead in QS I found positive possibilities among selves and
other—where other might be a device, a database, or a community of people. And it helped me see the boundaries that I
initially considered between myself and my devices, and between myself and others, as imaginary and limiting. I have
now begun to rethink tracking practice as something more
akin to a cyborg communion: with devices, others, and self.
Does it stretch the bounds of the scholarly tent too far to think
of us as bridged by data, or as connected by a joint recognition and quest for the sublime?
I don’t mean to paint the pursuit of personal informatics as a
religious one, although I’m sure it has its zealots. And I don’t
maintain that all or even many members of the Quantified Self
community would agree with my characterisation. In fact, my
presence and that of other critics who wished to inject more
reflection on QS’s effect on the wider public provoked some
understandably negative reactions. After all, why should individuals in a community have to worry about the consequences
that might spiral indirectly from their individual practice?
I’ve wondered the same. Friends and family have now followed my lead, tracking activity and intake. They’ve seen
great results but I worry that I’ve caught them in a subjunctive net that I can give little advice on how to fully escape.
What I found QS to be was markedly different from the majority of the popular media and critical academic discourse about the movement. Their relationship with data was profoundly different from what I had come to expect, yet profoundly similar to how I felt. Our questions were similar:
what data is important? Who gets to say so? What does it
mean for me? In a breakout session I led—provocatively titled Can Data Make Us More Human?—I asked what values
and considerations were important to the participants. The
responses were instructive:
ease of use, transformation, openness,
freedom, honesty and directness, reflexivity,
presence, constancy, comfort, anchor, exchange,
to find better questions, to create my own narrative, how
do I want to live?
objectivity, to make order, tangibility, discipline, selfcontrol, autonomy,
diversity, transparency, accountability,
value-free, judgement-free, guilt-free,
living better in the moment,
diversity, fragility, empathy
world” rather than a reliance on rigid distinctions between
the mechanics of capture and the formation of knowledge.
The alternative—“the possibility of pure transport”—is an illusion: we cannot escape the methods we use to frame and
capture our data, nor can we as self-trackers “ever be quite
the same on arrival at a place as when [we] set out.” My earlier understanding of the community and my own developing
practice as a collection of numbers missed this rich ontological designing, and saw transport in attempts at wayfaring.
I had a single, simple goal when I began this project: to lose
some weight. To achieve this goal, I brought a range of systems within the limits of my self, and we made more goals in
collaboration; mutually designed as we were designing. At
various points in this journey I felt lost: out of touch with my
numbers, my narrative, and even my collective. But I found
that losing myself was essential to continuing onward.
Nafus and Sherman [22] argue that QS practices are actually
“soft resistance” to institutional models of producing and living with data: “a readiness to evolve what constitutes meaning as it unfolds.” With this form of resistance, they find that
QS participants “work to dismantle the categories that make
traditional aggregations appear authoritative”. Using systems
like Lose It!, that were built with seemingly helpful badges,
gamification, and databases flush with particular kinds of
food, helped me understand firsthand those struggles of being
enrolled in, while simultaneously wrestling with, embedded
ideologies of how best to monitor and manage one’s health
and person. The community members I met were in a similar
struggle, also attempting to work around systems largely designed to meet purposes with which they did not completely
identify. This was not a site where self-authority was being
ceded nor one where a joint search for an objective, external
locus of meaning making was being launched. It was a place
where data and devices were being used to reflect on what
may or may not be happening inside our bodies, and to insist
on the “idiosyncrasy of individual bodies and psyches” [22].
This difference echoes Ingold’s [15] distinction between wayfaring and transport. For Ingold, transport is distinguished
from wayfaring not through the use of devices (mechanical
means) but by what he refers to as the “dissolution of the intimate bond that in wayfaring couples locomotion and perception.” The transported traveller is a passenger, one who “does
not himself move but is rather moved from place to place,”
while the wayfarer goes along and threads a way through
the world rather than routing across the surface from point to
point. This leads to a fundamental difference in the integration of knowledge, where wayfarers “grow into knowledge of
the world about them” while in transport, knowledge is upwardly integrated.
The QS discussions and presentations that I’ve been privy to
are less about reading results and more about following a trail
where participants, following Ingold, know as they go: sharing an always-bound trifecta of what they did, how they did
it, and what they learned. This is a way of knowing that reflects what he refers to as “a path of movement through the
The use of my self as subject was an important vehicle for
getting close to the difficulty of mapping and communicating such an experience: a model of knowledge acquisition
equally epistemological and existential. Knowing logically
what to expect during this project did little to prepare me for
the daily mixture of sadness, elation, and fear that I experienced. I knew that numbers, and the intimate devices that deliver them, exert a powerful force on us, but to experience that
somehow-embodied tug was important. And that I did this
with personal and professional consequences at stake is a productive complication that I am still unraveling. The last years
have been more than an academic research opportunity. They
also represented an important effort and transformational experience linked to my own ideas of self-worth.
My particular choices helped me reveal and begin to come
to terms with the nuances of an internally conflicted position. Long-term usage coupled with an existing personal need
modulated my position on health and fitness technologies,
confirming some of my criticisms but substantially reframing my prior estimations. While I remain cautious of the approaches of many of these systems, I learned that my conflicted position need not be hypocritical but rather reflects the joys
and capabilities that can be found in reduction, and the freedom to be found in control. I had been wary of the attractions
of these technologies, and found that confirmed in my connection to them; I was not in a comfortable space of ironic
distance. Many of the theoretical issues that concerned me
prior to my project, such as the reductionism inherent in a
calorie-based point of view and the limitations of the modelled “truths” that these technologies are predicated on, remained with me throughout, but were experienced anew.
My intent has not been to present a thorough record of this
journey but to demonstrate a struggle to make sense of it.
What I tried to provide was a form of extensive, genuine usage that responds to what [23] refers to as “detailed, nuanced,
and experiential understanding of a design space.” But this is
a design space in which I exist as subject, object, and verb—a
perspective that, at once privileged and problematic, elastically collapses subject and researcher.
Through the essay I tried to make two main contributions
(topical, methodological), but without distinct separations of
section, which I feared would too easily mirror these binaries
against which the essay also labours. Given the underlying
approach (autoethnography, research through an ontological
design), only discussing the method without showing it in action would likely not make a compelling argument, nor would
deploying it without also arguing significantly for its validity.
As such, this essay leaves much unresolved. In this, though, it
is faithful to my underlying experience, and responsive to the
calls to action in [20] and [30].
I hope that writing about this project in a first-person, autobiographical voice helps to fulfil a commitment to support a “fuller range and richness of lived experiences” [4], wherein we draw on aesthetic principles and techniques [1, 21, 35] to achieve that goal. As a method for both understanding others and the self, I believe this voice marries well with a desire to deeply describe and engage with the moral, political, emotional, and ineffable experiences in our work. That said, the value for me has not been in whether I could reveal different things than a more traditional third-person approach might, but whether I might be allowed a look at the same things in a different way [3, 26].

I don’t assume, though, that my experiences will be the same as others’. I hope to read other accounts that will be markedly different. This has been an unabashedly person-centred attempt to present a perspective on these systems and communities that is not common within HCI, and in doing so to hopefully expand its tent. I tried to present as much of an inner view as possible, but that view is inescapably complex, obscured, and contradictory. Instead of attempting to transform an idiosyncratically personal account into a directly generalizable one, I opted to draw out and demonstrate a connection between a personal and embodied foundation and a broader politically and socially engaged position—one that should not require us to gloss over or reduce our individual distinctions.

It may be that to live with these technologies is to be necessarily—not just until the next version—in alliances marked by these capabilities and productive contradictions, and by the frequent negotiating and renegotiating of perspectives and boundaries. In this, they are perhaps but another set of relationships among many others in our lives; a collection of others that have always been expanding the limits of our

Acknowledgments
This work was funded through the Intel Science & Technology Centre for Social Computing and NSF Grant IIS-1217685. I’m particularly grateful for the thoughtful feedback and support of Phoebe Sengers over the years of the study, and for the helpful comments, criticisms, and suggestions of the anonymous reviewers.

References
1. Bardzell, J. Interaction criticism and aesthetics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, ACM (New York, NY, USA, 2009), 2357–2366.
2. Bardzell, J. HCI and the Essay: Taking on “Layers and Layers” of Meaning. In CHI 2010 Workshop on Critical Dialogue (Sept. 2010).
3. Bartky, S. L. Toward a Phenomenology of Feminist Consciousness. Social Theory and Practice 3, 4 (1975).
4. Boehner, K., Sengers, P., and Warner, S. Interfaces with the ineffable: Meeting aesthetic experience on its own terms. ACM Trans. Comput.-Hum. Interact. 15, 3 (Dec. 2008), 12:1–12:29.
5. Bourdieu, P. Distinction: A Social Critique of the Judgement of Taste. Harvard University Press, Cambridge, Mass., 1984.
6. Brynjarsdottir, H., Håkansson, M., Pierce, J., Baumer, E., DiSalvo, C., and Sengers, P. Sustainably unpersuaded: How persuasion narrows our vision of sustainability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, ACM (New York, NY, USA, 2012), 947–956.
7. Dewey, J. Experience and Education. The Macmillan Company, New York, 1938.
8. Efimova, L. Weblog as a personal thinking space. In Proceedings of the 20th ACM Conference on Hypertext and Hypermedia, HT ’09, ACM (New York, NY, USA, 2009), 289–298.
9. Foucault, M., Martin, L. H., Gutman, H., and Hutton, P. H. Technologies of the Self: A Seminar with Michel Foucault. University of Massachusetts Press, Amherst.
10. Gaver, W. What should we expect from research through design? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, ACM (New York, NY, USA, 2012), 937–946.
11. Grimes, A., Tan, D., and Morris, D. Toward technologies that support family reflections on health. In Proceedings of the ACM 2009 International Conference on Supporting Group Work, GROUP ’09, ACM (New York, NY, USA, 2009), 311–320.
12. Heyes, C. J. Foucault goes to Weight Watchers. Hypatia 21, 2 (2006), 126–149.
13. Heyes, C. J. Self Transformations: Foucault, Ethics, and Normalized Bodies. Oxford University Press, Oxford.
14. Höök, K. Transferring qualities from horseback riding to design. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI ’10, ACM (New York, NY, USA, 2010).
15. Ingold, T. Lines: A Brief History. Routledge, 2007, ch. Up, Across and Along, 72–103.
16. Latour, B. We Have Never Been Modern. Harvard University Press, Cambridge, MA, 1993.
17. Leshed, G., Velden, T., Rieger, O., Kot, B., and Sengers, P. In-car GPS navigation: Engagement with and disengagement from the environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’08, ACM (New York, NY, USA, 2008), 1675–1684.
18. Ljungblad, S. Passive photography from a creative perspective: “If I would just shoot the same thing for seven days, it’s like... what’s the point?”. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, ACM (New York, NY, USA, 2009), 829–838.
19. Mackenzie, A. Transductions: Bodies and Machines at Speed. Continuum, London, 2006.
20. McCarthy, J., and Wright, P. Technology as Experience. MIT Press, Cambridge, Mass., 2004.
21. McCarthy, J., and Wright, P. Putting ‘felt-life’ at the centre of human–computer interaction (HCI). Cognition, Technology & Work 7, 4 (2005), 262–271.
22. Nafus, D., and Sherman, J. Big data, big questions — This one does not go up to 11: The quantified self movement as an alternative big data practice. International Journal of Communication 8 (2014).
23. Neustaedter, C., and Sengers, P. Autobiographical design in HCI research: Designing and learning through use-it-yourself. In Proceedings of the Designing Interactive Systems Conference, DIS ’12, ACM (New York, NY, USA, 2012), 514–523.
24. Perec, G. Species of Spaces and Other Pieces. Penguin Books, London, England; New York, N.Y., USA, 1997.
25. Purpura, S., Schwanda, V., Williams, K., Stubler, W., and Sengers, P. Fit4Life: The design of a persuasive technology promoting healthy behavior and ideal weight. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, ACM (New York, NY, USA, 2011), 423–432.
26. Reed-Danahay, D. Auto/Ethnography: Rewriting the Self and the Social. Berg, Oxford, 1997.
27. Rose, N. The politics of life itself. Theory, Culture & Society 18, 6 (2001), 1–30.
28. Schüll, N. D. The folly of technological solutionism: An interview with Evgeny Morozov. September.
29. Shusterman, R. Thinking through the body, educating for the humanities: A plea for somaesthetics. The Journal of Aesthetic Education 40, 1 (2006), 1–21.
30. Taylor, A. S. Out there. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, ACM (New York, NY, USA, 2011), 685–694.
31. Thoreau, H. D. Walden, Or, Life in the Woods. Penn State Electronic Classics Series. The Pennsylvania State University, 2006.
32. Viseu, A., and Suchman, L. Wearable Augmentations: Imaginaries of the Informed Body. In Technologized Images, Technologized Bodies, J. Edwards, P. Harvey, and P. Wade, Eds. Berghahn, 2010.
33. Willis, A.-M. Ontological designing: Laying the ground. In Design Philosophy Papers Collection Three. Team D/E/S Publications, Ravensbourne (Qld, Aust), 2007, 80–98.
34. Winograd, T., and Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Ablex Publishing Corp., Norwood, N.J., 1986.
35. Wright, P., and McCarthy, J. The value of the novel in designing for experience. In Future Interaction Design. Springer (2005), 9–30.
36. Wright, P., and McCarthy, J. Empathy and experience in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’08, ACM (New York, NY, USA, 2008), 637–646.
37. Zimmerman, J., Forlizzi, J., and Evenson, S. Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’07, ACM (New York, NY, USA, 2007), 493–502.
The User Reconfigured: On Subjectivities of Information
Jeffrey Bardzell
Informatics and Computing
Indiana University
Bloomington, IN 47408
[email protected]
Shaowen Bardzell
Informatics and Computing
Indiana University
Bloomington, IN 47408
[email protected]
theorizing subjectivity in the humanities. Using critical and
empirical methods, we explore sexual subjectivity as designed, and reflect back on the benefits and limits of subjectivities of information as a formulation of computing use.
Foundational to HCI is the notion of “the user.” Whether a
cognitive processor, social actor, consumer, or even a nonuser, the user in HCI has always been as much a technical
construct as actual people using systems. We explore an
emerging formulation of the user—the subjectivity of information—by laying out what it means and why researchers are being drawn to it. We then use it to guide a case
study of a relatively marginal use of computing—digitally
mediated sexuality—to holistically explore design in relation to embodiment, tactual experience, sociability, power,
ideology, selfhood, and activism. We argue that subjectivities of information clarifies the relationships between design choices and embodied experiences, ways that designers
design users and not just products, and ways to cultivate
and transform, rather than merely support, human agency.
Although at first blush “the user” might seem to be very
obvious in its meaning—it’s the person who is using a system!—in fact HCI researchers and practitioners operate
with evolving understandings that have been informed for
years by technical understandings, grounded in epistemological, ethical, and methodological commitments. Even the
following brief survey makes this clear. In their 1983 classic, Card, Moran, and Newell [8] characterize the user thus:
a scientific psychology should help us in arranging this
interface so it is easy, efficient, error-free—even enjoyable…. The key notion is that the user and the computer
engage in a communicative dialogue whose purpose is
the accomplishment of some task…. The human mind is
also an information processing system. (pp.1, 4, 24)
Author Keywords
User; subjectivity; sexuality; theory; criticism
ACM Classification Keywords
H5.m. Information systems: Miscellaneous.
In 1985 Gould and Lewis [16] proposed a three-part methodology for systems design: an early focus on users, empirical measurements, and iterative design. And in 1988,
Norman [26] advised, “make sure that (1) the user can figure out what to do, and (2) the user can tell what is going
on” (p.188). Common to these is the user understood as an
individual completing a task with a system, where the human is abstracted as a cognitive processor well defined
needs; tasks are understood as behavioral sequences that
can be explicitly defined; and interaction is understood as a
dialogue between this cognitive processer and the task
needing to be done—and all of the above were available to
and measurable by empirical methods.
A decennial conference beckons us to reflect with a wider
perspective than our everyday research often affords. We
reflect in this paper on the ways that HCI has theorized the
user. We begin in the early 1980s, when notions of “usability” and “user-centered design” were first consolidating a
notion of the user—one that would quickly be criticized and
reworked, beginning an ongoing activity that continues today. We will turn our attention to an emerging formulation
in HCI of the user that is informed by recent philosophy:
the user as a subjectivity of information. After characterizing what this formulation means, we then seek to test it. We
do so by investigating a marginal use of computing, digitally mediated sexuality. We choose this use because it
brings onto the same stage a diverse range of concerns that
HCI is increasingly seeking to take on: design, embodiment, tactual experience, sociability, power, ideology, selfhood, and activism. Sexuality has also centrally figured in
But it was not long before these formulations were critiqued
and expanded. In 1986 and 1987, respectively, Winograd
and Flores [46] and Suchman [40] challenged the view of
the human constructed by these earlier notions of the user,
leveraging philosophy to rethink what it means to act in a
situation. Bannon & Bødker [1] wrote that in classic HCI,
the user had been “hacked into” a “disembodied ratiocinator,” referring to cognitive modeling disengaged from social
and material praxis. Kuutti [22] and Cockton [10] would
later trace three HCI “users” emerging in the research discourse from the 1980s to the 2000s: the user as source of
error, as social actor, and as consumer, broadly corresponding to changes in computing paradigms over the decades,
Copyright© 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
from one-user-one-terminal systems, towards systems supporting workplace cooperation, and finally towards experience-oriented systems. More recent work has added the
“non-user” to the mix as well [31].
requiring another: teacher-student, mother-daughter, policecitizen, designer-user. Subjectivities are more diverse and
situated: two students can occupy the same subject position
and yet perform distinctly different behaviors and attitudes.
A subject position thus constrains possible behaviors, attitudes, etc., but it also provides generative “rules” for creative, stylized, and individual performances within them.
Any individual occupies many different subject positions: a
schoolchild is also a daughter, a citizen, a goalie, etc. Subject positions can come into conflict, thus shaping (and
helping explain) how an individual negotiates them.
More radically, Cooper & Bowers [11] analyzed “the user”
as a discursive structure, rather than an actual person:
The representation of the user [in HCI discourses] can be
analytically broken down into three constituent and mutually dependent moments [as follows…] In the first, the
user is created as a new discursive object that is to be
HCI’s concern […] In the second, HCI tends to construct
cognitive representations of the user […] In the third,
HCI represents the user in the political sense of ‘representation’: as Landauer puts it, people within HCI should
be the ‘advocates of the interests of users’. [p.50]
We can apply this concept to computer use. Common IT
subject positions include power users, novices, early adopters, and non-users; social actors; gamers, trolls, and n00bs;
standard users and admins; expert amateurs and everyday
designers; quantified selves; makers and hackers; social
media friends; and many more. It is clear that all of these
positions comprise both technical and social roles. The concept of subjectivities gets at how these roles are diversely
enacted, embodied, or performed. For example, we understand how step- and calorie-counting apps construct quantified self subject positions, but how those apps are subjectively experienced (e.g., as a source of empowerment, anxiety, and/or curiosity) and how they are enacted or stylized
(e.g., integrated into one’s lifestyle, displayed and/or hidden
from the view of others) is a different type of question. Subject positions are analytically available to research, while
subjectivities are empirically available.
The present work continues this thinking, analyzing the user in relation to abstract structural positions or roles that are
inhabited by actual people using systems in real situations.
Situated Actors and/or Structured Roles
The key conceptual move is to distinguish the user qua an
individual person—an embodied actor in a given situation—from the user qua structural roles, with anticipated
needs and capabilities. We make this move because the
former—the user qua embodied individual—maps (perhaps
too intuitively) onto everyday notions of persons as already
given: that is, that the individual is already there, and our
job as designers is to support that individual in achieving a
need. What the latter—the user qua structural role—gives
us is not only a type of individual whom we can support,
but also an analytic means to think about designing the role
itself, i.e., designing not just the system but also the user.
(We believe that designers already design both systems and
users, but some HCI theory, fusing with ordinary language
notions of personhood, obfuscates that we also design the
user—consider, again, how most will parse Norman’s dictum: “make sure that the user can figure out what to do….”)
We understand subjectivity of information to refer to any
combination of (structural) subject positions and (felt and
performed) subjectivities relevant to the context of information technology use. We do not mean to reify a subjectivity
of information as somehow distinct or independent from
other subjectivities. We simply understand the concept in
its usual theoretical sense, applied in the domain of IT design and use. That is, we seek to understand “users” insofar
as they are structurally subjected to certain IT systems and
associated social practices, and how they become subjects
of (i.e., agents of) those systems and social practices. These
are analytically distinct from “users” as commonsense persons because a given person can inhabit multiple subject
positions and can experience/perform them not only differently from other individuals, but even differently from
themselves at different times or in different situations.
Our argument is that to resist the analytical conflation of
situated actors and discursive constructs into “the user,”
that we should analyze “the user” as a “subjectivity of information,” that is, structural roles that are designable and
designed, which users-as-situated-actors take up and perform. In what follows, we will argue that such a view has
practical advantages for HCI researchers and designers, as
well as important methodological implications.
We argue that the notion of subjectivities of information
has three benefits. It can give us a tighter analytic coupling
between specific design decisions and particular social experiences, because the design decisions and social experiences can both be analyzed using an analytic vocabulary
common to both, i.e., the structures that comprise that subject position, understood as an amalgam of computational
rules and social structures, including rights and responsibilities, laws, mores, etc. Second, it makes clear that we can
(and do) design subjects as well as interfaces, products, and
services (while recalling that the parallel concept of subjec-
Subjectivities of Information
The concept of subjectivity is derived from the philosophy
of Lacan, Althusser, Foucault, Delueze & Guattari, Butler,
and others. Two related concepts are at the core of this theory: the notion of subject positions, which are social roles
that people are thrust into, and subjectivity, which is the felt
experience and creative agency of individuals within that
situation. Subject positions are typically relational, that is,
tivities means that individuals have plenty of agency beyond what we can prescribe as subject positions). And finally, such a view supports design practices aimed at cultivating and transforming, rather than merely supporting or
extending, human agency; one use of subjectivity in critical
theory has been to serve activist purposes, to imagine and
pursue the transformation of society.
the way she did. But as a feminist, Radway must acknowledge and contend with the value that actual women find in
these texts.
Radway’s dilemma is HCI’s as well: if researchers cede the
authority of identifying the sociotechnical significances of
systems to actual users, we risk overlooking to the very
subtle roles of ideology and false pleasure in systems, an
approach that is ultimately regressive. If HCI researchers
instead become more active critics, we risk speaking for
users, substituting our own values for theirs. Thus we believe something akin to the methodology that Radway
sought to undertake, which stages a dialogue between
equally weighted critical and empirical studies of the same
phenomenon, might be an appropriate strategy for investigating the meanings of interactive products—and the subject positions and subjectivities inscribed within them.
Methodologies for Information Subjectivities
The user as subjectivity of information has methodological
implications, because “the subject” is shaped by sociotechnical structures that include individual algorithms and system structures (e.g., permissions, task sequences), and also
social structures, including shared practices, intersubjective
understandings, policies, and broader sociological structures including gender, racial, and class ideologies. As such,
the subject is not a simple given, and cannot be rendered
available to understanding merely with data, but rather it
must be interpreted, a critical activity that makes sense of
multiple sources, including systems, empirical studies of
users, and analysis of data—all of which can be supported
by relevant theories. One methodological challenge, then,
becomes how we weigh differing theoretical, data, and critique-based resources in interpreting the subject.
The shift to envisioning users as sociotechnical subjects is
anything but new: it is the outcome of a generation of research and theory throughout HCI, explored in CSCW, participatory design, in traditional user research, and so on (see
e.g., [12]; the term “subjectivities of information” itself was
coined by Paul Dourish). Because we see analysis of sociotechnical subjects as involving (at least) three planes of
meaning (i.e., the values and subject positions inscribed in
designs; the skilled practices of actual users; and the critique of domain experts), we undertook a research initiative
that engages all three.
Adapting from feminist film theory [43], we can distinguish
three sources of the subject: cultural objects (i.e., interactive technologies for our purposes) that express expectations about their ideal user who will get the most out of
them; actual users, who have their own experiences, purposes, and perspectives about their use of an object; and
researchers/critics, who offer a trained expert perspective
on objects. Most research only accesses one or two of these
positions. For example, if a work of interaction criticism
[2,5] asserts that a system manipulates users, two of the
three positions are active—the critic and the inscribed
user—but not the second (actual users). When HCI researchers study the actual aesthetic judgments that users
make (e.g., [41]), they are activating the second but not the
first and third positions.
The research focuses on a paradigmatic change that began
about a decade ago in sex toy design. Prior to it, sex toy design was linked to the porn industry, marketed to men,
manufactured cheaply, and called “novelty products” to
avoid regulation under medical standards. Since then, the
industry has been transformed by designers with backgrounds in high-end industrial design, including Apple, Ericsson, Frog Design, and so on; the toys are manufactured
according to the standards of modern consumer electronics
(e.g., MP3 players) and in most cases using medical grade,
body-safe materials; they are marketed to women as luxury
accessories; their price point is generally higher and in
some cases much higher; and they align themselves with
feminist sex positive activists who seek to transform the
meanings and consequences of sexual body practices (see
[3]). Broader cultural meanings of sex toys also changed in
this time, with episodes of television shows, such as Sex &
the City, presenting them in a positive light.
Once we acknowledge these three positions, another research challenge comes into focus: they often do not agree.
The “actual” meaning of the design, then, must be some negotiation of these different positions. A methodological
strategy proposed by feminist researchers to study subjectivity has been to deploy a critical-empirical approach. For
example, in a landmark study of romance novels, feminist
literary critic Janice Radway [29] read the romance novel
genre as patriarchal (because it offers narratives that reassure and satisfy women while preserving an unequal status
quo), and then she ethnographically studied working class
women readers. Not surprisingly, their literary understandings did not agree. The dilemma for Radway, then, is this:
as a scholar trained in feminist theory, Radway is in a position to see “false pleasures” that everyday readers may not
be able or want to see; one cannot dismiss Radway’s reading simply because the everyday readers did not see things
Our overall research program on sex and HCI has included
an interview study with sex toy designers and feminist sex
activists, critical analysis of sex toys, and empirical studies
of digitally mediated sexual life in virtual worlds, on social
media, and with digital sex toys. Some of this research has
been reported elsewhere [3], so we focus here on a criticalempirical study of sex toys and their sexual subjectivities.
In conducting this research, we are able to focus on the interrelations of critical and empirical epistemologies for design and user research, and we use this as an opportunity to
better understood subjectivity holistically in its designed,
technological, embodied, political, and social dimensions.
trix of interpersonal humanity and responsibility…. Erotic experience is experiencing the other as a person in and
through experiencing oneself as a person. The epiphany
of the other is the foundation of ethics [30, pp.76, 85]
Sexual subjectivity. Part of the process of socio-sexual maturity is acquiring skills of sexual perception. Sexual perception is skilled, because it must be learned, and it is mediated by our learning of related conceptual vocabularies
[39]. For example, a discriminating palette for tasting wine
requires not just tasting many different wines but also developing a conceptual vocabulary to name the distinctions
that one learns to perceive (e.g., notes of oak, chalkiness,
cherry; the mouthfeel, the finish). For Shusterman, sensory
pleasure, aesthetic appreciation, and the formation of the
self are all contingent on one’s skill with one’s body. We
develop our body skills through training, understood as “actually engaging in programs of disciplined, reflective, corporeal practice aimed at somatic self-improvement” [37],
including diets, exercise programs, and erotic arts. From
Cosmo to The Joy of Sex, there is plenty of evidence that
millions cultivate their sexual abilities.
Theory Background: Sex and the Subject
Studies of sexuality in HCI have increased since 2005, albeit sometimes in the face of resistance, which, along with
dominant social norms, often wants to bracket sexuality
aside irrelevant to computing, usability, and software design, except for very narrowly construed sexual technologies. But sexuality researchers in health sciences [18], sociology [6,33,38], philosophy [4,30,36], and the fine arts [45]
offer a different point of view, one in which sexuality is not
compartmentalized but rather is a basic aspect of our social
life, body knowledge, and body habits. Attention to sexuality allows us to recuperate “a range of often discounted or
forgotten social actors, movements, and landscapes” [15].
Our research program leverages two related ideas from this
literature and explore their applicability to HCI: sexual sociability and sexual subjectivity.
Sexuality research helps us understand the interrelations
among a complex set of tactual and social skills, body-body
interactions, and body habits. HCI, CSCW, and ubiquitous
computing research is already taking up such themes. To
them we add that the embodied subject of interaction is
gendered and sexual all the time, not just in the bedroom. It
is only through such a formulation that we can begin to hear
the voices of sexual subjects: “Those who have been marginalized, oppressed and pathologized begin to speak, begin
to make their own demands, begin to describe their own
lives, not in a spirit of scientific objectivity, but with passion, indignation, and exhalation” [18, p.187].
Sexual sociability. Sexuality is not merely a private, but it
also has an outward-turning dimension. Yet, physical
pleasures are commonly assumed to be private and even
antisocial. Philosopher Richard Shusterman challenges this
view: “Most pleasure,” he writes, “does not have the character of a specific, narrowly localized body feeling (unlike a
toothache or stubbed toe). The pleasure of playing tennis
cannot be identified in one’s running feet, beating heart, or
swing of the racket hand” [36, p.41]. Shusterman adds that
“Feeling bien dans sa peau can make us more comfortably
open in dealing with others” [36, p. 41]. Shusterman tightly
couples physical pleasure and social competence as inseparable aspects of human body practices, and he concludes
that we should cultivate, rather than repress, our capacity to
experience socio-physical pleasure. This contrasts with traditional morality, in which sexual pleasures are seen as selfish and antisocial, exemplified by the myth, frequently critiqued by sex educators, that vibrators are fine for when one
is alone, but not when one is in a relationship [34]. On the
sociability of sexuality, psychotherapist Horrocks writes,

I construct my sexual identity not simply to obtain pleasure or love, but also to communicate who I am…. But I
do not exist simply as an individual: the reason I am able
to use sexuality in this complex manner is because I take
part in a socialized sexual system" [18, p.191]

This social view of sex is mainstream in the scholarly
community. It stresses that characteristics deemed "natural"
about sex are in fact historically contingent and hegemonic
[33]. The scholarly view has deep ethical implications:

[In] erotic life, we experience our bodies as the locus of
the communication between us, and the site of our answerability to each other's perspective…. Sexuality is not
'animal,' and it is not 'amoral.' It is … the originary ma-

Critical-Empirical Methodology
For this study, we deployed both a critical analysis of sex
toys and an empirical investigation of how different people
react to them, partly in hopes of better understanding how
these two planes of meaning relate to one another, and
specifically to understand how sexual subjectivities are projected by, and how they contribute to an understanding of,
these toys.

Critical analysis. We offer an expert analysis of five of the
toys used in the empirical portion of the study, three recognized as in the "designerly" paradigm and two from the earlier paradigm. From that analysis we produce an interpretation that helps us understand the ways that this new generation of sex toys inscribes within its toys a new (to sex toys,
if not to discourse) sexual subjectivity. We use critique in
the same way it is used in the humanities, i.e., as an interpretive activity that seeks to accomplish some combination of the following: to provide a value judgment supported with reasons and evidence and to help a particular
audience or public appreciate (dis)value in works of the cultural domain [9]; to reveal values and assumptions inscribed
in works and subject them to interrogation [19]; to account
for a work's most general organization and the ways that it
"presents its subject matter as a focus for thought and attention" [13]. We refined this general activity to generate a set
of critical prompts (summarized below) by leveraging the
sexuality and subjectivity theories introduced above.
Empirical analysis. This portion of the study was lab-based: subjects interacted with 15 different sex
toys chosen, in consultation with an AASECT-certified sex
educator, to represent the range of toys available when the
toys were purchased. Subjects participated in two activities:
a self-guided talk-aloud exploration of the 15 toys over the
course of roughly 20-30 minutes, followed by a semi-structured interview lasting 15-30 minutes. 17 study sessions were conducted with a total of 25 subjects: 9 individual sessions and 8 conducted in pairs, at the subjects' preference. The
subject population consisted of 9 males and 16 females, 22 to
33 years of age. 11 of the subjects identified themselves as
heterosexual, 10 as homosexual, and 3 as queer/other, and
22 of the 25 subjects had prior experience with
sex toys. Sessions were video-recorded with subjects' permission; afterwards, the audio from the sessions was
transcribed, and the videos were used to annotate the transcriptions to clarify what was going on (e.g., which toy a participant was talking about when it was not obvious from the audio).
Our critical goal in this project is to offer an interpretation
about how designs propose relationships between technology and the body-as-subject in sexually specific ways—
organized around a sexual subjectivity. To interpret the designs on these terms, we began our analyses of each toy
with three orienting questions: How might one characterize
this as an object? What are the product features, semantics,
functions; what does this design claim as pleasure and what
does it ask users to give up; in what ways is it precise? How
does the product design subjects? What agencies are
granted or constrained; how do toys assert a relationship to
the human body; what demands on perception does the toy
make; what subject positions are implied? What sociocultural contingencies are expressed as norms in these toys?
What would likely happen if a friend or parent discovered
it; in what ways does the toy reflect, reify, or resist attitudes
toward sex and issues of sexual social justice; in what ways
are social relationships inscribed?

Figure 1. The five toys that were critiqued

The five toys we analyzed (Fig. 1) are, from left to right, the
OhMiBod Freestyle, a wireless vibrator that converts MP3
audio into vibration patterns; the Jimmyjane Form 6, a jelly
bean-shaped vibrator; the VixSkin Johnny, a non-vibrating
dildo that simulates a penis in detail; the Maximus Power
Stroker Viper, a vibrating penis sleeve; and Nomi Tang's
Better Than Chocolate, a clitoral vibrator with a touch-sensitive UI. Following industry experts, we divide these
toys into two groups. Freestyle, Form 6, and Better Than
Chocolate all represent the "designerly" turn in sex toys,
which brought about the dramatic change in the industry.
Johnny and Maximus represent the earlier era. They range
from USD $90-$185, so all might be seen as on the high end of the mainstream commercial spectrum.

We acknowledge that lab-based interactions with 15 sex
toys do not provide the same insight that in-the-wild
interactions would. Because we obviously could not observe and interview participants during sex with each of the
toys, we pursued this as a practical alternative. That said,
given the expansive view of sexuality that we have just
summarized—one that comprises much more than actual
sexual acts and sees sexuality instead as a basis for sociality and
perception—we also believed that participants would not
"turn off" their sexualities in the lab. That many of them
brought their sexual partners, used the language of sexual attraction/repulsion during the study, and linked their perceptions of these toys to prior experiences all provides evidence that this is true. Moreover, the Kinsey Institute—one
of the most renowned sexuality research labs in the world—
conducts most of its studies in labs, for the same reasons.
So while we do not provide data concerning reports of sexual acts with these toys, we do have reason to believe that
our subjects engaged these toys as sexual subjects.

Group 1: "Designerly" Sex Toys
OhMiBod Freestyle. The Freestyle's distinctive feature is
its wireless connectivity to an iPod, so that audio signals
from the iPod can be converted into vibrations: the idea is
that the user masturbates with it to music. Its physical form
is abstract and inorganic. Its shape is geometrically cylindrical, and it is symmetrical along its axes. At 21cm, it is
considerably longer than the average human penis length,
which is 12-13cm. The silver band that visually separates
the handle end (i.e., the end that one holds, which also features the UI and the electrical jack) has no anatomical analogue, and it is clearly branded and contains UI icons of its
own. The visual language of the Freestyle is that of consumer electronics at least as much as it is of the penis.

The penis-like form, its size, and the iPod connectivity all
suggest that the OhMiBod is targeted at a solo female—solo
because of the personal music player, female because of its
pink color and form suitable for clitoral or vaginal massage.
It literalizes cultural links between pop music and sexuality.
The success of the device largely depends on the music one
selects; beyond personal preferences, we noticed that music
with a well-defined beat produces a more structured vibration massage than music without. But the design also asks
its users to give up something: social intimacy with a partner, because it invites its user to retreat into a private world
of personal music and individual genital pleasure. By associating itself with iPods and consumer electronics, it also
asserts itself as a middle-class product, hip and youthful. In
categorizing itself as another iPod-like device, it denies
stigmas about sex and specifically masturbation.
Jimmyjane Form 6. This design is like the Freestyle in its
color and consumer electronic aesthetic. But Form 6 is ambiguously shaped, not clearly resembling a body part nor
obviously intended to couple with any particular body part.
Its curvy asymmetrical shape connotes without denoting the
organic. Form 6 is all about ambiguity, with its two motors,
nearly invisible controls, lack of any obvious grip or handle, and diverse affordances. The larger end is large, heavy,
and powerful enough to serve as a non-erotic massager,
e.g., for the back. It also has rounded ridges running along
its length, which do not in any literal sense represent penile
ridges, but they add an organic look-feel and suggest that
they might feel good when the product is inserted and rotated; the ridges also make it easier to hold, which is important for a device that is likely to have lubricant on it.

At USD $185, the Form 6 is by far the most expensive of
the five toys we critiqued. It is full featured and heavy, connoting product quality. Its price, weight, formal ambiguity,
and pretentious name all connote luxury and taste: this toy
suggests that sex is valuable enough to invest in, both financially and imaginatively. It demands of the user sufficient imagination to try it out in different places and different ways. But it also asks its user to give something up: its
pampered, sensual, refined sexuality is intimate like a bubble bath, and it thereby resists the raunchy and gritty connotations of sexuality—bestial passion, getting dirty, etc.

Nomi Tang's Better Than Chocolate. Better Than Chocolate
is placed over the vulva to mediate between the hand and
the clitoris. Its bottom half conforms to the pubic mound,
while its top half is fit for the hand. Its dramatic curves
connote the body. One of the key features of the Better
Than Chocolate is that rather than controlling vibration intensity or pattern through buttons, it uses a 4cm touch
slider: stroking the slider changes the vibration. When the device is in place, the slider is located directly above the clitoris (on the top side of the device). Thus, the device is controlled with similar gestures, and in a similar place, to
stroking the clitoris itself. Sex expert Cory Silverberg (personal communication) describes this design technique as
"kinaesthetic onomatopoeia," because its user interface
mimics the physical action it augments. In this way, Better
Than Chocolate introduces another reinterpretation of the
organic, but through its interactive vocabulary and not just
its form.

Better Than Chocolate's ergonomics are clever, but also
inflexible; that is, the device won't fit well with any other
part of the body, or in any other orientation than the single
intended one. Of all the designs, it is arguably the most
striking in its visual form—it looks like a cross between a
medical device and a sculpture. Its implied subject is a solo
female. Better Than Chocolate constrains the user to a single (albeit very clever) use—clitoral stimulation. With its
clever name and formal beauty, it would make a nice gift,
again proposing that masturbation can be sophisticated and
tasteful as well as immediately pleasurable. In this way, like
the Freestyle, it also rejects stigmas about masturbation.

Taken as a group, several themes emerge. All three designs
are tasteful and sophisticated. Though "pleasure objects"
(as designer Lelo puts it), they are not explicit. They come
in feminine colors, look like cosmetic products, and merely
allude to their function. Their names are playful and girlie
or pretentious—but not kinky or raunchy. All are made with
body-safe materials—and heavily marketed as such. They
not only resemble consumer electronics, but they are made
using their manufacturing processes [3]. They are expensive
and they look expensive. Their subject is a classy and sophisticated woman who attends to her sexual needs like
other body needs, say, skincare or feminine hygiene. Sex,
then, is not special or hidden; it is just another aspect of the
body needing care, and sexual taboos don't even seem to
exist, or ever to have existed, in this world.

Group 2: Traditional Sex Toys
We now turn to two more traditional designs, which propose different sexual subjects.

VixSkin Johnny. The Johnny is spectacularly literal: it has
an anatomically modeled glans, veins, circumcision scars, a
wrinkly scrotum, and human skin color. Whereas the three
preceding toys have a thin soft skin covering an extremely
firm device, the Johnny is rigid but gives realistically to
pressure. It is also an unusually large penis: 21cm tall and
4cm in diameter, made of medical grade materials. At USD
$126, the Johnny is not cheap, and yet it sells briskly.

Johnny's user is someone who straightforwardly wants a
big penis to play with. In its formal and visual fidelity, it
seems pornographic. As a dismembered body part, it can be
used hand-held or with a strap-on. In its literalness, it asks
users to give up any sense of mystery; it has no pretensions
to be other than what it is. Its sheer size suggests that some
people may view it as a sexual challenge: what is it like to
be penetrated by something so big? It implies a view of
sexuality that is based around anatomy, athleticism, and animal intensity. It neither denies nor seeks to undermine the
stigma of sex: it embraces it as a source of heat.

Maximus Power Stroker Viper. The Maximus is a "beaded
power masturbator" featuring a black, white, and gray color
scheme, a symmetrical design, and visibly cheap materials and
construction (indeed, it caught on fire during a lab session).
The opening in which the penis is to be inserted is white
plastic with 5mm-long nubs in ordered rows and columns.
The penis sleeve itself is surrounded by cables with 8mm
steel bearings on them, which massage the penis. Its
mechanisms are all visible through its exterior shell. The
product is inorganic in its design language: its shape, color,
and materials do not reference the body. Its color scheme,
material connotations (e.g., steel bearings), and perfect rows
and columns all suggest a mechanical aesthetic.

This toy suggests a masculine subject, from its masculine
name to its color scheme and look-under-the-hood aesthetic. In calling itself a "masturbator" (as opposed to, say, a
"pleasure object") it is direct about what it is for. Thus, in
its product form, semantics, and marketing, this toy asks its
user to give up tenderness, allusiveness, and femininity. Indeed, like many machines, it favors functionality over aesthetics, and this functionality is well understood, well defined, and very clearly solved: put the penis in the hole, turn
it on, masturbate, remove, and clean. Sexuality seems to be
an engineering problem with an engineered solution.

It is not our intention to argue that the theory of sexuality
inscribed in one category of designs is intrinsically superior
to that of another. But it certainly is clear that these two
toys differ from the first grouping of toys. They are much
more explicitly sexual, understood in the sense of genital
attraction and stimulation and the pleasures immediately
associated with them. Both of these products represent and
indulge "base" or "raunchy" notions of sexuality.

Discussion of Critiques
Obviously, sexual subject positions are not created solely
by the sex toy industry. Sociocultural norms have a lot to
say about how we understand sexuality. Sex toys reify these
norms in concrete and specific ways, which relate to but are
also distinct from the sexual norms presented in Sunday sermons, porn movies, advice columns, therapy sessions, and
so on. In the West, there are many taboos surrounding our
understanding of sexuality: the notion that sex is shameful
and should be kept private is pervasive. Much of sexual
practice, especially visible in consensual BDSM, involves
exploring the sexual power of taboos—that which is forbidden becomes especially hot. Johnny and Maximus both position themselves within such a sexual subjectivity.

What becomes clear in contrast is that the new paradigm
toys do not. They associate themselves with non-sexual
forms of body care (and increasingly they are showing up in
the body care aisles of department stores, where one would
never find Johnny), which they define in allusive and
vaguely positive ways: "pleasure," "feeling sexy," "lifestyle." As critics, we are concerned with alienation, how
products reify ideologies that alienate us from goods that
are rightfully our own. As sex positive feminists, we view
traditional sexual morality as having alienated us from sexual pleasures that are rightfully ours. But the politics here
are complex. We welcome the new paradigm toys for their
celebration of sexual pleasure, and they can be (and frequently are) read as exemplars of sex positive feminism.

But these toys can also be read as a form of denial: for they
deny that their users are subject to sexual taboo, and by extension they also deny both the political imperative to resist
repressive sexual ideologies (or suggest that by buying
these one already is resisting), and also, perversely, the sexual pleasures of transgressing them. Moreover, they frame their denial in a classist way, where high-class sexuality (reflected
in the pricing, naming, and taste of the toys) is barely recognizable as sex, and by implication not subject to the social
powers that govern sex, while (by implication) low-class
sexuality is raunchy sex and thereby subject to social censure. Such a reading suggests that the new paradigm sex
toys can be read as regressive, reinforcing both a social
class system and the social control of sexual life—just not
for those who have the income to buy a Form 6. We will
not try to resolve the progressive vs. regressive readings of
these toys here, but we hope we have demonstrated that designers design subjects as well as objects, and that analysis
helps us get at some of the sociopolitical consequences of
that design.

Our empirical study was designed to help us understand
sexual subjectivities: our research question was not "which
toys did users like?" but rather "how did users express themselves as sexual subjects through their interactions with
these toys?" Linking back to the theory summarized earlier,
we focus here on participants' perceptiveness, social meaning-making, and boundary-drawing.

Though study participants obviously made use of their perceptual abilities to notice the objective features of toys—
colors, shape, texture—such characterizations only infrequently appear in their words. Instead, the focus of their
perceptions was on the fit of toys in their lifeworlds, and
objective features of the toys, rather than being seen in and
of themselves, were raised in a more subjective sense. A
simple example of this is how subjects articulated size:

The pocket vibrator like it could be hung, it could carry that
around in a purse, you could almost carry that around in
those little itty bitty like go to a bar purses. [F, 24,
heterosexual, Y]
So on the side of my bed I have a little box… And it's a
really nice box and I'm never like oh like I have to hide this.
And oh so when I'm shopping for one I look for something
that's going to fit in there. [F, 24, heterosexual, Y]

[Note: the annotation after each quote refers respectively to
gender (M/F/GQ [gender queer]), age, orientation (the subject's
own self-description), and whether or not they have used
sex toys in their personal life (Y/N).]

In both quotes, the objective quality is size, but this quality
is understood in relation to other artifacts in the toy's anticipated ecology, here the containers that participants
imagine themselves putting the toys in. These evaluations
tacitly imply intentions (e.g., portability and making available) and social predispositions (e.g., the need for discretion).

Of course, like any other design, a sex toy is meant to be
used, and participants' commentary on toys showed some
specific ways that people anticipate use:

I don't like vibrating things because the motors make them
sound like they're always screaming. I don't want to have
something inside of me that's yelling unless it's yelling in
pleasure. [M, 22, homosexual, Y]

You know, if you're masturbating or having some sort of
sex, do you really want to change the batteries like
midstream? It just makes everything unsexy. [F, 30, lesbian,

There seems to be too much of an imitation of the penis,
and the penis…doesn't do that much. So mimicking the
penis is not very imaginative because it doesn't really get
you what females really want or need. [M, 26, straight, Y]

A range of sexual norms is articulated in these quotes: that
vibrators should not be too loud, that sexual activity is optimally experienced as a flow, and that sex toys should be
imaginatively designed. Yet none of these norms is "natural" or "animal"; all of them are situated in everyday social
life. Sociologist William Simon writes, "Desire is the
scripting of potential futures … drawing upon the scripting
of the past desire as experienced in the contingent present.
Desire, in the fullness of its implicit ambiguity, can be described as the continuing production of the self" [38,
p.139]. These comments do not merely reference such
scripts, as if the scripts were already whole and stored
somewhere: rather, they enact and embody those scripts.
So, for example, the participant is not saying that a noisy
vibrator would probably turn her off were it to be introduced into a hypothetical sexual situation; instead, it is already turning her off in the moment she is saying it—even
in the lab. Nor does this scripting end with the sex act:

I'd be worried about getting water and moisture in here….
that's like my number one thing is can I clean it…. I look
for nice, solid, something that says it's waterproof that you
can submerge the entire object in water because then I
know the entire surface is able to be cleaned. [F, 28,
straight, Y]

When I [have] money I will get the ones that are $100 or so
and you just toss it in the…dishwasher…. [Y]ou wash them
like that, and you can use them without having to go
through the steps of putting a condom on and cleaning it
out. I think mine is starting to stain now, even though I use
a condom…so I think yeah, that's a good criteria. [M, 22,
homosexual, Y]

These comments focus on hygiene, which is a practical
consideration for sex toys. But hygiene is a culturally specific activity [35,44]. Thus, hygiene is not an afterthought,
but primary ("my number one thing"); hygiene justifies
spending the money to get in the range where it is dishwashable—a culturally specific sexual hygiene practice if
ever there was one.

These three sets of examples (toy size and its place in everyday life, anticipated uses, and concerns about hygiene)
make clear that toys are never interpreted in terms of "just
sex." This helps explain why the designerly toys—which
situate themselves in relation to personal care and pleasure—are more desirable; they speak to a more holistic sociosexual subject. The one negative comment (that the VixSkin is
too literal) indicated that the toy is not legible beyond sex
acts. We also see here that participants' imaginative exploration of the toys is temporally structured (to be carried and
stored; to be used; to be cleaned up). Such characterizations
are compatible with people's understandings of intimacy
[28] and of experience itself [24]. The toys' meanings say as
much about the participants as they do about the toys themselves. Yet our participants did not use the language of
critical theory; although in our critiques we discussed ideology, false pleasures, semantics, inscribed users, etc., such
issues seldom explicitly came up with the users. Yet user
comments that express or respond to sexual norms do
bridge the two discourses, as we see in the following sections.

Social Meaning-Making
The theory of the self as an outward-facing sexual subject outlined earlier in this paper emphasizes the role of intersubjective experience as an input to and outcome of our sexualities. We saw plenty of evidence that intersubjective experience is central to how participants think about toys:

My partner and I for a really long time … didn't use sex
toys because it wasn't necessary…. It wasn't until we had
been together for a while that we were like, hey, let's throw
something else into the mix. [F, 30, lesbian, Y]

It's just kind of one more thing for me to offer my partner
or myself… Yeah it's just kind of enhancing what I'm able
to give her [F, 22, lesbian, Y]

If I'm looking for something for me and my partner, then we
pick something that we're both going to like, and her tastes
are a little bit different than mine, so we…compromise. [F,
30, lesbian, Y]

Sex toys here are meaningful inasmuch as they enhance or
mediate a sexual partnership with another. Each speaker
has a collaborative, not individual, relation to the toy. This
sort of concrete interaction defines the self as a subject [27].
As [32, p.204] writes, "embodied subjects develop direction
and purpose on the basis of the practical engagements they
have with their surroundings and through the intentionality
they develop as a result of the situatedness of embodied existence." The subject's practical engagements are not limited to those between partners; our bodies are also socially
positioned. The presentation and interpretation of our bodies is restrained by external forces [7,32]. One external
force is body norms, as the following quotes exemplify:
mantics, provide more room for expressive agency (e.g.,
most people would presumably rather be discovered with a
Form 6 than a VixSkin Johnny).
The problem with things like this is the belts are not
made…for people who are fat. [GQ, 21, Lesbian, Y]
The sex toy assumes a normative weight, and when the participant falls outside of it, she feels “fat.” Quotes like these
help bridge to the critiques, because they so clearly point to
ideological issues (here, norms of slenderness as beauty).
We also wanted to explore the language of acceptance or
rejection of sex toys to understand what they reveal about
the self and its boundaries as a sexual subject. We begin
with several rejections of sex toys:
It’s not adjustable in any way, so for a man to get full
pleasure out of that, he has to be a particular size not any
bigger not any smaller. [F, 28, Straight, Y]
it’s lime green and purple so like I’m already putting
something unnatural inside my body. [F, 24, heterosexual;
Here again hegemonic norms are foregrounded. Penis size
is meaningful, frequently linked to notions of masculinity,
self-esteem, and racism (e.g., the perceived threat or special
appeal of the stereotypically well-endowed black man). Our
bodies are not merely biological but are “inscribed, marked,
engraved by social pressures external to them” [17], and our
subjects fluently understood such inscriptions.
I don’t know if I like the size of it, and it’s really hard also
so I don’t know if I would enjoy putting it in any holes
because it might end up hurting me. [M, 22, homosexual,
She hates this material. She says it like absorbs bacteria
[…], and over time it will discolor. [F, 28, straight, Y]
In each of these quotes, the relationship between the toy
and the body is extremely intimate, as if the toy is a candidate to become a part of the body. Theorist Julia Kristeva
proposed the concept of the abject to refer to objects in the
world that challenge the borders between our bodies and the
outside world: body excretions, blood, sweat, corpses, vomit, etc. [21]. Our repulsion to the abject is protective mechanism, a way of reinforcing the boundaries between self and
outside when those borders are threatened. As Body theorist
Blackman writes, the abject “demarcates a number of key
boundaries, such as the inside and outside, the natural and
the cultural, and the mind and the body…. [The body] is
engendered or brought into being through two key
concepts: individuation and separation.” [6]
Shame, implied in both of the size quotes above, is an intersubjective emotion, defined as the reaction when one
sees oneself through the eyes of another in a negative way
[23]. In addition to being ashamed of body properties, people can also be ashamed of its needs or pleasures:
I think that a lot of my friends and I would much more go
for something smaller; something more compact something
that you could hide. [F, 24, heterosexual, Y]
I recently found out my friends would find these in their
parents’ rooms, their mothers’ rooms. The divorced
mothers, and I think everyone has that dirty little secret.
[M, 22, homosexual, Y]
the cat actually found this in my sister’s room. Yeah, she
had gone away to college and like the cat was playing with
it for a good month or so and my mom came home and was
like, ‘What is this?’… If I knew what I know now back then,
I would have not thought it was funny but disgusting. [M,
22, homosexual, Y]
It is interesting to see that dislike was often specifically expressed as a threat, and rejection in terms of self-protection;
sex toy likes and dislikes are often far stronger than mere
preference statements, e.g., for flavors of ice cream. Our
application of the abject also suggests specifically where
participants perceive their own borders between what could
be accepted into the self versus those that could not. In the
above quotes, the unnatural, the physically large, and the
discolored are all offered as boundaries. Yet as Kristeva’s
theory suggests, boundaries are movable. The same physically large toy that is intimidating to one is an object of curiosity or even a sexual challenge to another. The following
quotes all explore boundary reactions, in which some aspect
of the toy is threatening, but rather than rejecting it, the participant seeks ways to accommodate it:
These quotes suggest three ways vibrators and masturbation
are social: by involving a partner, inscribing social norms
(and provoking personal reactions), and causing embarrassment/amusement when discovered. Social structures—
from body ideals to racism—are part of the toy, and so the
toys not only bear the capacity for erotic arousal and intimate creativity, but also pride/shame, disgust, indignation,
and hilarity. Moreover, none of these perceptions or reactions are mutually incompatible; the same person can be
subjected to and become subjects of any or all of them. An
analytic understanding of such a complex response benefits
from a conceptual vocabulary that can account for the copresence of diverse selves (qua subjects) as well as the
complex enactments and performances that they occasion in
different social situations. In such a view, it becomes clear
why the more designerly toys, with their more nuanced se-
But it seems like you would have to really lubricate well
just because it like, it kind of like pulls. The texture of it
kind of pulls so you’d have to definitely makes sure that you
use lots of lube so there’s no like pulling or tearing. [F, 22,
lesbian, Y]
this one seems like it would be a little too much because I
think like too much vibration can kind of just be numbing
[M, 20, homosexual, Y]
Subjectivity theory, developed in the context of postmodernism, might come across as radical, proposing a “fragmented self” that lacks unity or coherence, in stark contrast
to our intuitive sense of ourselves as unified wholes. On the
other hand, the idea that our sense of self is shaped by our
participation in social structures (parent-child, teacher-student, partner-partner), and that we have some agency in
how we embody and perform such roles, seems intuitive.
We argue for subjectivity theory in HCI not in a metaphysical sense, as if to assert that “subjectivities of information”
somehow correspond better to reality as the “hidden truth”
about how we “really are.” Rather, we argue that this formulation has pragmatic benefits for research and design.
So if it’s any bigger than that then I probably wouldn’t
want it at least not for penetration purposes. [F, 23,
heteroflexible, Y]
Size, texture, and vibration are all qualities that in some
measure are acceptable and in other measures are rejected.
Along the borders, then, toys perceived as challenging (as
opposed to outright threatening) can be managed, e.g., with
more lubricant or by changing its use. A possible outcome
of management efforts is an increasingly inclusive and
more specific drawing of the boundaries, which appear to
reflect a certain amount of self-awareness:
The strength of this theory is its ability to account for the
mutability of selfhood, as it negotiates the boundaries of
internal experience and intention and our (designed) environments and social reality. If we view the self as unified
and even fixed, it limits design to “supporting” and “augmenting” existing capabilities (and much of HCI literature
uses these terms). If we can understand the mechanisms by
which selves change—as embodied social beings situated in
sociotechnical environments and practices—then we can
look beyond supporting and augmenting towards the hope
and the ideal of social change. From environmental sustainability to social media, and from participatory design to action research and feminist HCI, the field has been aspiring
to contribute to social change. In doing so, it has already
begun to explore many humanistic concepts and methodologies. For this reason, we believe that many in the field
are already embracing, if tacitly, some version of subjectivities of information, from Turkle’s Life on the Screen to research on the “quantified self.” Our hope here is to offer an
argument, a set of concepts, an example, as a provocation
that helps frame and consolidate these developments.
I would rather have a lot, a larger range of vibrations
rather than a larger range of pulsing. [F, 30, lesbian, Y]
For vaginal stimulation it’s not really a big deal; but for
clitoral stimulation I have to have a big range…just so I
can have a good variation [F, 23, heteroflexible, Y]
These are not “natural” embodied reactions but rather
learned and practiced ones. Each speaker has a developed
sensibility regarding the nature of vibration and its tactual
effects on her body. That the toy challenges boundaries—
not only physically as body penetration, but somatically as
an extended body part—helps explain why sex toys are
greeted with such extreme reactions—desire, repulsion—
and give rise to the sober skill of managing the toy’s
power in one’s own bodily ecosystem.
Discussion of Empirical Findings
The critical perspective was strong at analyzing links between design choices and sociocultural issues, e.g., how toys
embody and replicate class norms and ideologies, how toys
participate in activist challenges to traditional morality, and
how sexual experiences can be offered up as/in products.
The empirical findings also provided linkages between specific design choices and experiences, but in more concrete
terms. They revealed how design choices exposed (or
pushed) the boundaries of participant sexuality: what was
desirable, what was possible under what conditions, and
what must be rejected outright both from a physical perspective (e.g., too big, wrong color) and from a social perspective (e.g., too shameful to acknowledge, reifies regressive body ideals). The findings also reveal how sexuality
relates to other parts of life, e.g., storage and hygiene. The
emphatic reactions emphasized how much these toys are
literally incorporated, taken into the body, and how powerful, for better and for worse, that move is. The body is
changed, both to oneself and to the social world. This
change is a powerful experience to the self; the discovery
that one’s partner (or mother!) has been subjected to such
change is socially significant; as is the realization that one’s
body is ineligible for such change (e.g., one is “too fat” or
“too small”).
Near the beginning of this essay, we argued that this theory
provides a tight analytic coupling between specific design
decisions and particular socio-subjective experiences; a vocabulary to reason about the ways that we can design subjects as well as interfaces, products, and services (without
denying human agency enacted in subjectivities); and a
means to get at design aimed at cultivating and transforming, rather than merely supporting or extending, human
agency. We conclude by returning to these three claims.
Linking Design Choices with User Experiences
Aristotle in his Poetics exemplifies an analysis that links
formal features of literary genres (e.g., tragedy and epic) to
subjective experiences and social significances, e.g., how
the form of a tragic reversal in a tragedy causes feelings of
pity and fear in the audience, which then purge them of
their own emotions, reestablishing their rationality. Our
critical analysis likewise seeks to understand how the product semantics of certain contemporary sex toys cohere with
particular kinds of experiences, themselves situated
in certain social meanings. More specifically, we followed
experts in distinguishing between two categories of sex
toys—earlier adult novelty products vs. more recent “designerly” lifestyle accessories—to interpret how the designs
project subject positions as part of their meaning.
ing becomes a reality, we are constructing new environments and human organisms will adapt. Part of this means
the acquisition of new skills of perception and a precise
conceptual vocabulary to grasp and to express them: no one
is born with a capacity to explicate a sex toy feature’s
physical, sexual, romantic, social, and ideological consequences (e.g., reification of body weight norms and penis
size). Part of it also means researchers and designers improving their own conceptual grasp: for example, the way
that our subjects perceived and interpreted the significance
of toys as (prospectively) part of their bodies would seem to
raise potentially generative concepts about wearable technologies.
In our empirical study, we gained a sense of how individuals
perceive, interpret, and tell stories about sex toys. A common statement involved a participant pointing to something
specific about the design (a shape, color, texture) and then
relating it to experience (how it would feel, its implications
for hygiene, how one might hide it). Such statements can
add up to a poetics in their own right. Thus, even though we
view subject positions as inscribed in objects and available
to researchers analytically through interpretation, and we
view subjectivities as part of human experiences and available to researchers via empirical studies, they need not be incommensurate. To the extent that these two modes of analysis overlap, our confidence in the findings increases.
Our study of sex toys, which comprises expert interview
studies, critical analysis of toys, and empirical studies of
sexual experiences, has helped us grasp not only the emerging paradigm of designerly sex toys compared to what came
before it in design historical terms, but more importantly to
understand its tremendous range of significance: from intensely personal experiences to forms of public activism,
literal orgasms to public shame, product design history to
third wave feminism, and law enforcement to pop music.
Each of these is an opening to forms of agency: to try out a
new sexual experience, to become active in sexual politics,
to jump into sex toy design, to become more sex educated
and/or an educator. It is not our position that design alone
causes such transformations, but it is our position that design can contribute meaningfully to such transformations,
and subjectivity theory, and a critical-empirical research
methodology, can help us understand them better. Because
designers in HCI are hoping to contribute to such transformations, we believe a reconceptualization of the user as a subjectivity of information will be salutary.
Designing Subjects as Well as Interfaces
Both our critical and empirical studies pointed to two different sexual subjectivities. Earlier sex toys propose a raunchy and transgressive sexual subject, who finds pleasure in
genital attraction/sensation in precisely the ways that are
taboo in traditional morality. The toys construct this subjectivity through their blunt visual and functional appeal to sex
acts, including literal replication of anatomy, pornographic
packaging, overt problem-solving functionalism, as well as
more subtle features, such as vibrator controls that face
away from a woman (i.e., so her partner can control them).
As sex positive feminism gained momentum over the decades, however, an alternative sexual subject was offered: a
body-conscious woman, who takes care of her physical
needs with taste, discrimination, and consumer choice. The
designerly sex toys speak to this subject, in their forms, materials, and marketing. Such features include more abstract
shapes that connote but do not denote sexual organs, feminine color palettes instead of skin tones (especially bright
pinks and purples), formal allusions to cosmetics (e.g., the
silver band on the Freestyle), emphasis on health (e.g., body
safe materials), and euphemistic marketing.
We thank Cory Silverberg, Gopinaath Kannabiran, and Katie Quehl for their contributions to this study. We thank our
peers in the Intel Science and Technology Center for Social
Computing for their contributions to this research, and to
Intel for supporting the research. We also thank our reviewers for their tough questions.
It is important to stress that any individual can choose between these subjectivities, embracing one, the other, both,
or neither—and that this choice can be made again and
again, changing over time. One can be in the mood for
raunchy and transgressive sex one day, and be in the mood
for something more sensual and synesthetic the next. It is
also possible to start predominantly in one place and then to
change one’s tastes, or form new desires, over time. Designs—and the images of sexual life inscribed within
them—can help effect such changes.
Cultivating and Transforming Human Agency
Dewey frequently reminds us in his writings that humans are organisms, that we adapt to our environments. As ubiquitous computing, the Internet of Things, and wearable comput-
References
1. Bannon, L., and Bødker, S. (1991). Beyond the interface: Encountering artifacts in use. In Carroll, J. (ed.), Designing Interaction: Psychology at the Human-Computer Interface. Cambridge UP, 227-253.
2. Bardzell, J. (2011). Interaction criticism: An introduction to the practice. Interacting with Computers 23(6): 604-621.
3. Bardzell, J., and Bardzell, S. (2011). “Pleasure is your birthright”: Digitally enabled designer sex toys as a case of third-wave HCI. Proc. of CHI’11. ACM.
4. Bell, S. (2010). Fast Feminism. Autonomedia.
5. Bertelsen, O., and Pold, S. (2004). Criticism as an approach to interface aesthetics. Proc. of NordiCHI’04. ACM Press, 23-32.
6. Blackman, L. (2008). The Body. Berg.
7. Butler, J. (1993). Bodies That Matter: On the Discursive Limits of Sex. Routledge.
8. Card, S., Moran, T., and Newell, A. (1983). The Psychology of Human–Computer Interaction. Lawrence Erlbaum Associates.
9. Carroll, N. (2009). On Criticism. Routledge.
10. Cockton, G. (2008). Revisiting usability’s three key principles. CHI’08 EA. ACM, 2473-2484.
11. Cooper, G., and Bowers, J. (1995). Representing the user: Notes on the disciplinary rhetoric of human-computer interaction. In Thomas, P. (ed.), The Social and Interactional Dimensions of Human-Computer Interfaces. Cambridge UP.
12. Dourish, P., et al. (2012). Research Themes: Intel Science and Technology Center for Social Computing.
13. Eldridge, R. (2003). An Introduction to the Philosophy of Art. Cambridge UP.
14. Goodman, E., and Vertesi, J. (2012). Designing for X? Distribution choices and ethical design. CHI’12 EA. ACM Press.
15. Gordo López, Á., and Cleminson, R. (2004). Technosexual Landscapes: Changing Relations Between Technology and Sexuality. Free Association Books.
16. Gould, J., and Lewis, C. (1985). Designing for usability: Key principles and what designers think. Communications of the ACM 28(3): 300-311.
17. Grosz, E. (1994). Volatile Bodies: Toward a Corporeal Feminism. Indiana University Press.
18. Horrocks, R. (1997). An Introduction to the Study of Sexuality. Macmillan.
19. How, A. (2003). Critical Theory. Palgrave Macmillan.
20. Irani, L., Vertesi, J., Dourish, P., Philip, K., and Grinter, B. (2010). Postcolonial computing: A lens on design and development. Proc. of CHI’10. ACM.
21. Kristeva, J. (1980). Powers of Horror: An Essay on Abjection. Columbia UP.
22. Kuutti, K. (2001). Hunting for the lost user: From sources of errors to active actors – and beyond. Cultural Usability Seminar, Media Lab, University of Art and Design Helsinki, 24.4.
23. Laine, T. (2007). Shame and Desire: Emotion, Intersubjectivity, Cinema. Peter Lang.
24. McCarthy, J., and Wright, P. (2004). Technology as Experience. MIT Press.
25. Nielsen, J. (1993). Usability Engineering. Morgan Kaufmann.
26. Norman, D. (2002). The Design of Everyday Things. Basic Books.
27. Nussbaum, M. (1999). Sex and Social Justice. Oxford University Press.
28. Pace, T., Bardzell, S., and Bardzell, J. (2010). The rogue in the lovely black dress: Intimacy in World of Warcraft. Proc. of CHI’10. ACM.
29. Radway, J. (1984). Reading the Romance: Women, Patriarchy, and Popular Literature. University of North Carolina Press.
30. Russon, J. (2009). Bearing Witness to Epiphany: Persons, Things, and the Nature of Erotic Life. SUNY Press.
31. Satchell, C., and Dourish, P. (2009). Beyond the user: Use and non-use in HCI. Proc. of OzCHI’09. 9-16.
32. Shilling, C. (1993). The Body and Social Theory. Sage.
33. Seidman, S. (2003). The Social Construction of Sexuality. W.W. Norton & Company.
34. Semans, A. (2004). The Many Joys of Sex Toys: The Ultimate How-to Handbook for Couples and Singles. Broadway Books.
35. Shove, E. (2003). Comfort, Cleanliness, and Convenience: The Social Organization of Normality. Berg.
36. Shusterman, R. (2008). Body Consciousness: A Philosophy of Mindfulness and Somaesthetics. Cambridge UP.
37. Shusterman, R. (2011). Somaesthetics. In Soegaard, M., and Dam, R.F. (eds.), Encyclopedia of Human-Computer Interaction. Aarhus, Denmark: The Interaction Design Foundation.
38. Simon, W. (1996). Postmodern Sexualities. Routledge.
39. Sonneveld, M., and Schifferstein, H. (2008). The tactual experience of objects. In Schifferstein, H., and Hekkert, P. (eds.), Product Experience. Elsevier.
40. Suchman, L. (1987). Plans and Situated Actions: The Problem of Human–Machine Communication. Cambridge University Press.
41. Sutcliffe, A. (2010). Designing for User Engagement: Aesthetic and Attractive User Interfaces. Synthesis Lectures on Human-Centered Informatics. Morgan & Claypool.
42. Taylor, A. (2011). Out there. Proc. of CHI’11. ACM.
43. Thornham, S. (1997). Passionate Detachments: An Introduction to Feminist Film Theory. Arnold.
44. Vigarello, G. (1998). Concepts of Cleanliness: Changing Attitudes in France since the Middle Ages. Cambridge University Press.
45. Watson, G. (2008). Art & Sex. I.B. Tauris.
46. Winograd, T., and Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Ablex, Norwood, NJ.
Creating Friction: Infrastructuring Civic Engagement
in Everyday Life
Matthias Korn & Amy Voida
School of Informatics and Computing
Indiana University, IUPUI
{korn, amyvoida}
Civic engagement encompasses the myriad forms of both
individual and collective action that are geared toward
identifying and addressing issues of public concern.
Human-Computer Interaction (HCI) researchers employ
diverse methods and design strategies to study, support, and
provoke civic engagement. Some researchers work within
mainstream politics to improve the efficiency with which
citizens can engage with the state through e-government
services [6, 73]; to improve access to voting [22, 69]; to
seek input and feedback from citizens on public planning
issues [24, 42]; or to foster dialogue, debate, and
deliberation among citizens and with the state [5, 41, 65].
Researchers also work to foster civic engagement outside of
the political mainstream, supporting the work of activists,
protestors, and grassroots movements [2, 18, 35, 37, 51].
The sites in which political life takes place are not only
sites of government, but also cities, neighborhoods, and
This paper introduces the theoretical lens of the everyday to
intersect and extend the emerging bodies of research on
contestational design and infrastructures of civic
engagement. Our analysis of social theories of everyday life
suggests a design space that distinguishes ‘privileged
moments’ of civic engagement from a more holistic
understanding of the everyday as ‘product-residue.’ We
analyze various efforts that researchers have undertaken to
design infrastructures of civic engagement along two axes:
the everyday-ness of the engagement fostered (from
‘privileged moments’ to ‘product-residue’) and the
underlying paradigm of political participation (from
consensus to contestation). Our analysis reveals the dearth
and promise of infrastructures that create friction—
provoking contestation through use that is embedded in the
everyday life of citizens. Ultimately, this paper is a call to
action for designers to create friction.
Individual technologies and systems, and the civic
engagement they support, are enabled by and based upon a
myriad of different, ‘layered’, interwoven, and complex
socio-technical infrastructures [37, 47, 52, 68]. Yet,
infrastructures of civic engagement are a particularly
challenging site for HCI as there are competing forces at
play. On one hand, infrastructures of civic engagement are
fundamentally about engaging people; and even more so,
they may be designed to engage people to enact change. On
the other hand, infrastructures are typically invisible; they
remain in the background and are taken for granted by their
various users [68]. Even further, Mainwaring et al. warn
that infrastructures, which are so conveniently at-hand, can
breed complacency and stasis [52]. Infrastructures of civic
engagement, then, must counter not only the challenges of
provoking civic engagement through everyday life; they
must also overcome challenges of complacency and stasis.
These are the dual challenges that we take up in this paper.
Author Keywords
Civic engagement; everyday life; infrastructuring; friction.
ACM Classification Keywords
K.4.2. Computers and Society: Social Issues; K.4.3.
Computers and Society: Organizational Impacts.
The object of our study is everyday life, with the idea,
or rather the project (the programme), of
transforming it. [..] We have also had to ask ourselves
whether the everyday [..] has not been absorbed by
technology, historicity or the modalities of history, or
finally, by politics and the vicissitudes of political life.
([49]: 296, vol. 2)
The distinction between everyday life and political life is a
highly problematic one for social theorists such as
Lefebvre. And it is a distinction that is increasingly being
challenged by researchers designing to support or provoke
civic engagement in everyday life.
We advocate for the construct of friction as a design
strategy to address the dual challenges of civic engagement.
Following Tsing [70] and Hassenzahl et al. [30], we
maintain that friction produces movement, action, and
effect. Friction is not exclusively a source of conflict
between arrangements of power; it also keeps those
arrangements in motion [70]. In the infrastructuring of civic
engagement, we believe that frictional design can help to
expose diverging values embedded in infrastructure or
Copyright© 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
values that have been left aside during its design. We also
contend that frictional design can help to provoke people
not only to take up more active roles in their communities
but to question conventional norms and values about what it
means to be a citizen as well.
Lefebvre contends that contact with the state has become a
superficial and apolitical one in modern society:
Not only does the citizen become a mere inhabitant,
but the inhabitant is reduced to a user, restricted to
demanding the efficient operation of public services.
[..] Individuals no longer perceive themselves
politically; their relation with the state becomes
looser; they feel social only passively when they take a
bus or tube, when they switch on their TV, and so on—
and this degrades the social. The rights of the citizen
are diluted in political programmes and opinion polls,
whereas the demands of users have an immediate,
concrete, practical appearance, being directly a
matter for organization and its techniques. In the
everyday, relations with the state and the political are
thus obscured, when in fact they are objectively
intensified, since politicians use daily life as a base
and a tool. The debasement of civic life occurs in the
everyday [..]. ([49]: 753–754, vol. 3)
In the remainder of the paper, we introduce the theoretical
lens of the everyday to extend the emerging bodies of
research on contestational design and infrastructures of
civic engagement. Our analysis of social theories of
everyday life suggests a design space that distinguishes
‘privileged moments’ from the more holistic ‘product-residue’ of everyday life. We analyze various efforts that
researchers have undertaken to design infrastructures of
civic engagement along two axes: the everyday-ness of the
engagement fostered and the underlying paradigm of
political participation. Our analysis reveals the dearth and
promise of infrastructures that create friction—provoking
contestation and debate through use that is embedded in the
everyday life of citizens. Ultimately, this paper is a call to
action for designers to create friction.
For Lefebvre, one of the most fundamental concerns about
civic engagement in modern society is that it has been
confined to “privileged moments” ([49]: 114, vol. 1)—
special occasions or punctuated feedback cycles on public
servants and service provision. Civic engagement has been
degraded in the product and residue of everyday life.
Lefebvre argues that “use must be connected up with
citizenship” ([49]: 754, vol. 3)—that the everyday
demonstration of concern for public services is as essential
a facet of being a citizen as, e.g., voting or debate.
Our research synthesizes strands of scholarship about
everyday life, infrastructuring, and contestational design.
We describe each strand of scholarship below, with a focus
on their relationships to civic engagement.
Everyday Life and Civic Engagement
As technology has been woven into “the fabric of everyday
life” ([77]: 94), it surrounds us in ever smaller and more
invisible ways [20, 25] and reaches into multiple spheres of
our lives [10]. But what is everyday life? Two social
theorists—Lefebvre and de Certeau—have been most
central in emphasizing everyday life as a legitimate site of
study—a site that follows an underlying logic that is both
relevant and discoverable.
The significant tensions in everyday life that Lefebvre
characterizes as playing out between personal and social
rhythms are reprised somewhat, albeit with different
framing, in the work of de Certeau. De Certeau [13]
theorizes about the everyday as interplay between the social
forces of institutional rituals and routines, “strategies,” and
the “tactics” opportunistically employed by ordinary
people, who subvert these strategies as they go about their
everyday lives. The performative and embodied nature of
everyday life is also an emphasis taken up by more recent
work in cultural studies [25].
According to Lefebvre—operating in post-war democratic
France—the everyday is the space in which all life occurs
([49]: 686ff., vol. 3).1 It is not merely the stream of
activities in which people engage over the course of their
days (thinking, dwelling, dressing, cooking, etc.). Rather,
everyday life more holistically understood is also the space
between which all those highly specialized and fragmented
activities take place. It is the residue. And further, everyday
life must also be understood as the product of these
activities, the conjunction and rhythms of activities that
render meaning across fragmented activities.
Where strategies accumulate over time through the exertion
of power (e.g., standardization), tactics live in and
capitalize on what is in the moment ([13]: 37). The
institutional rituals, routines, and their underlying
representations, which influence and are influenced by the
performance of the everyday, also, then, begin to form
infrastructures for everyday life. Even amidst the push-and-pull interdependence of the personal and the political, the
tactics and the strategies, people still manage to carve
boundaries amidst and around the everyday. And these
boundaries, which are “normally taken for granted and, as
such, usually manage to escape our attention” ([80]: 2), are
important factors for consideration in the infrastructuring
work that needs to be done if civic engagement is to be
released from its confinement to privileged moments.
For Lefebvre, everyday life is dialectically defined by
contradictions between the body’s personal rhythms and the
rhythms of society or at the “intersection between the sector
man [sic] controls and the sector he does not control” ([49]:
43, vol. 1). Everyday life includes the political; however,
The three volumes of Lefebvre’s Critique of Everyday Life
[49] were published in 1947, 1962, and 1981, respectively.
Infrastructuring Civic Engagement
disempowerment [52], which is particularly problematic for
the infrastructuring of civic engagement.
Infrastructures are the predominantly invisible and taken-for-granted substrates, both technical and social, that enable
and support local practices [48, 68]. Infrastructures of civic
engagement are those socio-technical substrates that
support civic activities.
Contestational Design and Civic Engagement
Critical Design Roots
There is a strong interest within the field of HCI to critique
and provoke, to question and reflect in and through design
—either on established social and cultural norms in general
[3, 30, 66], or within socio-political domains more
particularly [1, 17, 37]. Dunne and Raby [21] distinguish
between two modes of design—affirmative design and
critical design:
Infrastructures are relational. They are embedded in social
structures, learned as part of membership in social
structures, and shaped by the conventions of social
structures [68]. The relationship between infrastructures
and practices of use is dialectic, characterized by a dynamic
interplay between standardization and the local practices
they support at each site of use [68]. Infrastructure must,
therefore, be understood as processual, evolving over time.
What is infrastructure in one moment may become a more
direct object of attention and work in the next, particularly
when breakdowns occur—either when the technology
ceases to work or when local practices change and deviate
from standards implemented in infrastructure [58, 68].
The former reinforces how things are now, it conforms
to cultural, social, technical, and economic
expectation. [..] The latter rejects how things are now
as being the only possibility, it provides a critique of
the prevailing situation through designs that embody
alternative social, cultural, technical, or economic
values. ([21]: 58)
Because infrastructures reflect the standardization of
practices, the social work they do is also political: “a
number of significant political, ethical and social choices
have without doubt been folded into its development” ([67]:
233). The further one is removed from the institutions of
standardization, the more drastically one experiences the
values embedded into infrastructure—a concept Bowker
and Star term ‘torque’ [9]. More powerful actors are not as
likely to experience torque as their values more often align
with those embodied in the infrastructure. Infrastructures of
civic engagement that are designed and maintained by those
in power, then, tend to reflect the values and biases held by
those in power.
Critical design aims to question the status quo, revealing
hidden values and agendas by designing provocative
artifacts that adopt alternative values not found in
mainstream design. Critical design aims to “enrich and
expand our experience of everyday life” ([21]: 46) by
“generat[ing] dilemmas or confusions among users in such
a way that users are encouraged to expand their
interpretative horizons or rethink cultural norms” ([4]: 289).
By so doing, critical design directs attention to issues of
public concern. In the context of civic engagement, critical
design introduces dilemmas and confusions into the
experience of civic issues with the aim to surface multiple,
alternative values and spur debate. By problematizing
values that are ‘hard-wired’ into infrastructures of civic
engagement, critical design can counter the stasis,
complacency, and inertia that can result from the at-hand
experience of infrastructure.
Infrastructures are also relational in a second sense. As they
increasingly extend into everyday spaces, infrastructures
are shaped by the spaces of everyday life and shape our
encounters with those spaces; they are increasingly
experienced spatially [19]. Infrastructures of civic
engagement, then, must also contend with these spatio-relational experiences insofar as they engage with everyday
space and the physical environment (e.g., [40]).
Critical Design for the Everyday
Dilemmas and confusions can also be designed into
artifacts of everyday use—furniture or government ID
cards, for example [15, 21]. Hassenzahl et al. advocate for
designing everyday artifacts following an ‘aesthetics of
friction’ as opposed to an aesthetics of convenience and
efficiency [29, 30, 44]. Based on the psychology of
motivation, they recommend designing artifacts that will
lead to momentary transformation, subsequent reflection
and meaning making, and, taken together, longer-term
behavior change [30]. In contrast to many strategies of
persuasive design, the emphasis here centers on building
from small, mundane, everyday moments of transformation:
“The primary objective is, thus, not necessarily […] consumption per se, but supporting people with realizing the goals they find worthwhile to pursue, but hard to implement.” ([44]: 2)

These momentary transformations are provoked by a class of technology that they term ‘pleasurable troublemakers’ [30]. These troublemakers create small but pleasurable obstacles in targeted moments of everyday life, i.e., they create friction. Rather than designing to help or to make everyday life easier, Hassenzahl et al.’s designs engineer pause and provoke reflection, using friction to emphasize the active role individuals should have in constructing meaning of their experiences [30, 44].

The processual and evolving character of infrastructure has led participatory design researchers to examine the tentative, flexible, and open activities of ‘infrastructuring’ with the goal of empowering sustainable change [47, 58, 67]. Researchers exploring civic engagement have increasingly focused on similar issues of empowerment and sustainability, moving beyond individual applications to engage more systematically with the infrastructures that support civic engagement (e.g., [47, 74]). This research has emphasized challenges associated with the fragmentation and interoperability of infrastructures currently supporting civic engagement [74]. However, the research community’s critical engagement with infrastructure has also found that the full-service, ready-to-hand nature of many infrastructures may also invite complacency or stasis.
From Critical Design to Contestational Design

Contestational design examines how design can provoke and engage ‘the political’—the conflictual values constitutive of human relations [17, 31]. It aims to challenge beliefs, values, and assumptions by revealing power relationships, reconfiguring what and who is left out, and articulating collective values and counter-values [17].

Contestational design takes a more explicitly confrontational approach than critical design does, reflecting an activist stance: “it is an openly partisan activity that advances a particular set of interests, often at the expense of another” ([31]: 11). Based on the political theory of Chantal Mouffe [56], contestational design sees dissensus and confrontation as inherent yet productive aspects of democracy, drawing attention to the plurality of viewpoints that fundamentally can never be fully resolved. Rather than working to resolve differences, contestational design embraces pluralism and seeks ways to engage critically with contentious issues of public concern.

Synthesizing the literature about contestational design and the everyday, we introduce two cross-cutting dimensions that, together, provide a framework for understanding the infrastructuring of civic engagement. Our analysis foregrounds a design space of untapped potential for HCI researchers to provoke civic engagement through contestational design in everyday life.

Theories of the everyday foreground two perspectives on how politics and civic engagement can be experienced—as confined to ‘privileged moments’ or as experienced through ‘product-residue.’ These two perspectives form the first axis of our design space.

Privileged Moments

Everyday life includes political life […]. It enters into permanent contact with the State and the State apparatus thanks to administration and bureaucracy. But on the other hand political life detaches itself from everyday life by concentrating itself in privileged moments (elections, for example), and by fostering specialized activities. ([49]: 114, vol. 1)

There is genealogical resonance between theories of the everyday and theories of critical design. Lefebvre’s Critique of Everyday Life [49] was a foundational influence on the Situationists, whose art tactics have been taken up by designers and critical theorists in HCI [46]. Lefebvre’s critique of the dilution of citizens’ everyday agency in politics and civic engagement, which we take up here, parallels his criticism of the passive role imposed on individuals by a consumer society. Whereas the Situationists created spectacles to raise awareness and intervene in the consumerist status quo, we advocate for creating friction to raise awareness and intervene in the depoliticized status quo.

According to Lefebvre, the depoliticization of everyday life confines civic engagement to privileged moments. Foremost, privileged moments are privileged through their status as special activities that occur only infrequently. Yet, privileged moments are also privileged through invitation by institutions of power. When citizens are mere users, “the state is of interest almost exclusively to professionals, specialists in ‘political science’” ([49]: 754, vol. 3). That is, Lefebvre argues, the state extends the privilege to participate to citizens only when needed. Such privileged moments may be activities during which structures of power designed by experts are merely refined with input and feedback provided by users.

Privileged moments can further be understood reflexively by returning to de Certeau’s ‘strategies’ ([13]: 35ff.). For de Certeau, those with power—i.e., with proprietorship over a space and able to delineate a space of their own distinct from others—are the ones who employ ‘strategies’ (e.g., laws and regulations, urban design). Strategies of the state, then, define the frame within which citizens can go about their lives. Citizens do not themselves have power over these structures; in order for citizens to influence the frame itself, i.e., the structures of power, the state has to extend opportunities of influence to citizens and it does so only as deemed appropriate—i.e., in privileged moments. Consequently, civic engagement and the political are often relegated to the periphery of everyday life and to specialists who define strategies and identify and construct privileged moments.

An understanding of the everyday as product-residue, in contrast, emphasizes the ways in which political life is everyday life ([49]: 114, vol. 1). For Lefebvre, an everyday political life is much more than privileged moments; it is both the product of meaning constructed across specialized and fragmented civic activities (including privileged moments) and the residue of civic life lived between these activities.

An understanding of Lefebvre’s product-residue can be expanded through theoretical resonances with de Certeau’s ‘tactics’. Tactics embody the product-residue. De Certeau characterizes tactics as the practices by which people appropriate the structures they are confronted with, i.e., structures framed through ‘strategies’ ([13]: 35ff.). By putting these structures to use, people invariably produce understandings, interpretations, and opportunities—spaces become neighborhoods, from entertainment people derive values, and grammar and vocabulary are put to use in language [13]. People, then, are not mere consumers of structures of power (of politics). Within and through the ordinary and everyday, people are producers of their own civic lives, engaging with these political structures and thereby actively appropriating them.

De Certeau’s tactics embody the product and the residue because they concern conscious and unconscious practices, the activities and their results, the given structures and their derived meanings. In being tactical, people actively engage with, adapt, and appropriate the physical, social, cultural, and political structures in the ordinary and everyday. This appropriation and enactment is fundamentally political—moving people from passive consumers to active producers of issues of public concern.

Paradigms of Political Participation

Scholars in contestational design have pointed toward alternative understandings of civic engagement and the role that technology and design might play in countering the political status quo. Drawing from the political theory of Mouffe [56], they argue for an understanding of civic engagement based on two contrasting theories of and approaches to democracy and participation—what might best be described as a consensual and a contestational view [2, 17, 31].

Consensus and Convenience

The consensus and convenience paradigm of political participation emphasizes rationality and consensus as the basis for democratic decision-making and action (e.g., [26, 60, 61]; see [55]). It subscribes to the idea that rational compromise and consensus can be arrived at through the deliberation of diverging arguments and viewpoints. Efforts to foster civic engagement following this paradigm typically focus on involving citizens in an efficient and inclusive manner.

As DiSalvo [17] argues, a typical trope of e-democracy initiatives within this paradigm is to improve mechanisms of governance generally and to increase participation of the citizenry through convenience and accessibility. In order to better support the administrative operations of government, initiatives within this trope often translate traditional democratic activities into online tools for participation (e.g., e-deliberation, e-voting, etc.; see [2]). The main concerns of such initiatives center around issues of efficiency, accountability, and equitable access to information and means of ordered expression and action such as petitions, balloting, or voting. They seek to retrofit or replace existing civic activities in order to realize established political ideals and maintain the status quo (see [2]).

Contestation and Critique

A contrasting perspective on civic participation understands democracy as a condition of forever-ongoing contestation and ‘dissensus’ [17, 56]. Political theorist Chantal Mouffe has called this ‘agonistic pluralism’ [56]: a fundamental multiplicity of voices inherent in social relations that are forever contentious and will never be resolved through mere rationality. Agonistic pluralism is a perspective that seeks to transform antagonism into agonism, moving from conflict among enemies to constructive controversies among ‘adversaries’ forever in disagreement. Agonistic pluralism emphasizes the non-rational and more affective aspects of political relations. It sees contestation and dissensus as integral, productive, and meaningful aspects of democratic society. As DiSalvo [17] points out:

From an agonistic perspective, democracy is a situation in which the facts, beliefs, and practices of a society are forever examined and challenged. For democracy to flourish, spaces of confrontation must exist, and contestation must occur. ([17]: 5)

Designers have drawn from Mouffe’s theoretical position in multiple ways: directly in the form of contestational or adversarial design [17, 31] and indirectly in designing for activist technologies and technologies for protest [2, 33, 35, 37]. In this view, reflection and critical thinking are at the core of civic processes and activities, and provocation and contestation are seen as means to attain these values.

The two cross-cutting dimensions described above provide the theoretical framework for understanding the infrastructuring of civic engagement. Through our analysis of socio-technical research in the domain of civic engagement, we characterize each of the four quadrants in that design space. These four approaches—deliberation, situated participation, disruption, and friction—have been taken up, albeit unevenly, by designers or researchers to foster civic engagement (Figure 1).

E-government and e-democracy research has embraced opportunities offered by emerging ICTs and broader-based internet access to translate offline activities of civic engagement into computer-mediated online counterparts [63]. Novel platforms that support the deliberation of civic issues such as urban planning or public policy are common within these bodies of research [7, 23]. Here, the focus of design is often on fostering discourse among stakeholders with differing viewpoints in order to arrive at some form of actionable consensus. Other prototypical systems include platforms for collective decision making—often a more formal conclusion to deliberative processes either affirming, rejecting, or choosing among various alternatives—in the form of e-voting [22, 62].
This strand of research often reflects an understanding of civic engagement in which an explicit invitation for participation is extended from the state to the citizen, invitations that are proffered only periodically, either at major electoral junctures or for significant public projects. We see less research supporting discourse about citizen-originated issues in this quadrant; those discourses are more commonly initiated, instead, through everyday, product-residue infrastructures of engagement.

Research supporting deliberation has been foundational in moving civic engagement online, and much of this research has shown promise in finding ways to increase the participation in and quality of civic discourse, enabling more broad-based, actionable forms of consensus. But this strand of research still relegates citizenship to the periphery of everyday life, taking place only in privileged moments.

Figure 1. Approaches to designing for civic engagement.

Situated Participation

The strand of research fostering civic engagement through situated participation has capitalized on opportunities to embed civic engagement in everyday life through novel networked, mobile, and ubiquitous technologies. These systems resonate with Weiser’s vision of ubiquitous computing by emphasizing technology that dissolves into the everyday [77], interleaving civic engagement between and across temporal, social, and spatial contexts of activity. Through this embedding, research supporting situated participation has aimed at aggregating local knowledge and understanding local needs of citizens that can be valuable resources in planning processes but are often difficult to obtain.

Increasingly, research in e-democracy and e-government has shifted from facilitating state-initiated invitations to deliberation to leveraging online platforms for anytime, ongoing civic interactions [6, 11, 38, 73]. This research has transformed previously discrete engagement mechanisms into infrastructures supporting ongoing dialogue (temporal embedding) about a variety of civic issues, from neighborhood living to public service provision. This continuous involvement stands in contrast to civic participation only in ‘privileged moments’.

With the rise in popularity of social media, other research in this vein integrates civic engagement, and particularly advocacy, into people’s online and offline social networks (social embedding) [16, 28, 53, 65]. Such research seeks to build, extend, or connect communities of interest around shared causes. Its focus ranges from generating awareness of civic issues [28], to enabling debate and discussion [53, 65], to motivating and rallying for action [16, 53].

A third strand of embedded approaches to fostering civic engagement is to spatially align engagement opportunities with people’s whereabouts in the city (spatial embedding)—particularly in domains where spatiality plays a central role, such as urban planning [40, 64]. Spatially situated engagement seeks to make relevant and meaningful to people the issues and topics of discussion that are in their close proximity as they move about their day [40]. Spatially situated technologies for civic engagement range from stationary (installed at places of interest; e.g., [36, 64, 69, 75]), to mobile (typically location-based and often with rich media capturing capabilities; e.g., [5, 27, 41, 50]), to ubiquitous (more deeply embedded into the fabric of the city in the form of sensors, smaller pervasive displays, ubiquitous input/output modalities, etc.; e.g., [43, 72]).

While researchers designing for situated participation are concerned with unobtrusively integrating civic engagement into people’s daily lives, they are, at the same time, nudging and reminding people of participation opportunities that may be relevant to their interests through strategies such as personalization or notifications [5, 40].

Some of the research focused on embedding civic engagement into everyday life manifests a view of civic engagement as a form of one-way issue or bug reporting on public services and infrastructures [24, 42]. Citizens are encouraged to report the small, mundane nuisances that get in the way of leading a ‘productive’ everyday life. This research often mines local knowledge, relying on citizens—as ‘users’ of a city—to maintain, repair, and improve the efficient operation of public services and infrastructures. A related, but larger-scale, strand of research leverages large crowds of people in a more automated fashion (e.g., as ‘sensors’) to contribute data about issues of civic relevance such as problems with physical infrastructure or shared environmental concerns [54, 57]. In contrast to these two strands, other approaches seek to foster active dialogue, community, and more permanent relationships among citizens and with the state [5, 41, 50].

In sum, research in this quadrant focuses on rational dialogue and harmonious relationships—both among communities and with the state. And it does so while embodying the product-residue. This research emphasizes the in-between aspects of everyday life. Designs target people’s commuting, going about, and everyday curiosity, seeking to only minimally disrupt by providing options for quick feedback and the capturing of small memory cues for later. Research here also targets the holistic product of the fragmented activities of everyday life by fostering productive dialogue, community building, and sustained relationships among citizens and with the state.
Privileged moments of civic disobedience are also reflected
in Participatory Design research [15, 18, 51]. Inspired by
Agre’s [1] ‘critical technical practice’ and Ratto’s [59]
‘critical making’—which emphasize linking socio-political
critique with technical projects through hands-on
construction and situated reflection—this research invites
and encourages citizens to co-construct alternative values
during participatory design workshops. It concerns itself
with exploring alternative values related to environmental
sustainability, local neighborhood communities, or privacy
in governmental identification schemes [15, 18].
Research fostering disruption provides mechanisms for
citizens to reveal, address, reflect on, and/or call into
question the status quo of values, assumptions, and beliefs
held by a community. It does so by focusing on ‘privileged
moments’ of dissensus, protest, and civic disobedience.
According to Castells [12], moments of collective civic
disobedience arise when people overcome anxiety through
moments of outrage and anger, when others experience
similar moments of injustice that they also share, and when
enthusiasm emerges and hope rises through identification
and togetherness ([12]: 13–15).
Whereas this contestational strand of research in civic HCI
began by supporting individual protest activities and
moments of civic disobedience, it has increasingly
acknowledged recurring practices, the re-appropriation of
technologies, and the need for infrastructural support for
continued and ongoing activism.
Research employing friction as a design strategy embodies
both an engagement with the product-residue of the
everyday and with a philosophy that politics is
fundamentally contestational. Here, civic engagement is
emancipated from privileged moments—those fragmented
or, even, recurring fragmented activities—and interleaved
into the product and residue of everyday life. This transition
shifts the unit of analysis from the activity to the smallerscale gaps and spaces in between activities and the largerscale implications of those activities. The contestational
design approach provokes citizens to reflect and question
the status quo with respect to the conditions of the productresidue of civic life. We find a dearth of research that
embodies this approach to designing for civic engagement,
and so we engage with the only two examples we can find
in more detail.
Researchers have studied the use of technologies during
demonstrations, occupations of public squares, and protest
actions at sites of interest to the local community [2, 35,
71]. Research in this vein frequently focuses on
communication and coordination practices. Such practices
concern, e.g., the sharing, mobilization, and dissemination
of causes and protest actions on social media [2, 71], or the
coordination work required during decentralized forms of
protest, e.g., via FM radio, mobile phones, and text
messaging [33, 35].
Activist moments of civic engagement typically entail
responding opportunistically to dynamic political, legal, and
technical environments [33]. Hence, activist technologies
often support immediate, short-lived campaigns and events
that result in a number of individual protest actions [2, 33].
However, research has also begun to focus beyond the
individual system to larger infrastructural goals. Hirsch [32]
has found that while activists appropriate technologies for
individual projects in a quick-and-dirty fashion, they often
aim to serve multiple communities or protest actions on the
basis of a single development effort. In addition, activists
adopt and implement these technologies in ways that create
infrastructural redundancies and resilience [33]. Asad and
Le Dantec [2] have similarly argued for open, flexible, and
underdetermined design of platforms for activists that
support a range of practices over time.
Clement et al. [15] create friction through the adversarial
redesign of identity infrastructures that receive everyday
use. Their speculative overlays for government-issued ID
cards allow citizens to temporarily black out information
unnecessarily exposed by default in numerous situations
(e.g., buying alcohol requires sharing your photo and date
of birth but not your name or address). The overlays present
small obstacles in a larger infrastructure that citizens are
confronted with on an everyday basis, provoking them to
question the means of government identification and reflect
on privacy more generally.
We find another example of friction in research that has
explored alternative and oppositional media [32, 34, 51].
Activists have long sought to facilitate the exchange of
alternative voices—particularly, but not exclusively so
within state-controlled (i.e., monitored and/or censored)
media landscapes. Research in alternative media explores
ways to bypass government control and allow the
dissemination of potentially dissenting information via
alternative news channels. By building on top of other
In addition to supporting individual or repeated activities of
protest and activism, research has also studied situations of
ongoing crisis, military occupation, and open conflict [71,
78, 79]. Wulf et al. [78], e.g., found that social media came
to be used as a way to organize regular demonstrations, to
communicate and interact with local and global networks of
supporters, and to offer information to the broader public.
stable and pervasive infrastructures (e.g., the web [34] or
mobile phone networks [32]), at times in parasitic
relationships, activists subvert these infrastructures and put
them to new and alternative use. Here, alternative media
use escapes privileged moments by undercutting the power
relationship that allows the state to define what constitutes
residue between activities and in doing so counter the
stasis and complacency caused by typical infrastructures.
3. Designs for friction are naïve and inferior. They are
not intelligent.
Designs for friction do not take agency away from the
citizenry. They do not make use of ‘intelligent’
algorithms to anticipate and represent individuals in civic
participation. Rather than staking a claim to values and
ideals identified a priori by designers, they provoke
individuals to articulate and stake a claim to their own
values and ideals. Within infrastructures of civic
engagement, designing for friction is not primarily about
exerting or extending more power per se, but about
making room for and creating opportunities for citizens to
take power and ownership over issues of public concern.
Both examples of design for friction foreground the
significant role of infrastructuring when the everyday is
emphasized. Yet, as infrastructures are slow to change and
susceptible to inertia, particular design strategies have been
employed to work through and around existing
infrastructures, applying contestation, provocation, and
critique to question the status quo and counter inertia.
Given the dearth of examples of friction applied in designs
for civic engagement and the promise suggested by these
initial examples, we turn next to expand on ‘friction’ as a
strategy for bringing everyday provocation to the
infrastructuring of civic engagement.
4. Designs for friction are not absolute. They
acknowledge that although change is desirable, it still
lies in the hands of individuals with agency.
Designs for friction, because they acknowledge the
agency of citizens, cannot impose change. Citizens
ultimately retain agency over if and when change
happens and if or when reflection about civic life
happens. Designs for friction provide opportunities, not
mandates. There is always an alternative to
acknowledging, responding to, or using infrastructures of
civic engagement. Frictional infrastructures do not stop
citizens from carrying on as intended; they do not break
down completely in the face of inaction or disinterest.
Rather, they serve to make citizens pause, to be
disruptive without bringing things to a halt. Friction
happens in a flicker of a moment in the product-residue
of everyday life.
Hassenzahl et al. have identified four principles for
designing persuasive artifacts within an ‘aesthetics of
friction’, which we take as a starting point for exploring
how to create friction through the infrastructuring of civic
engagement [29, 30, 44]:
1. Designs for friction take a position or a stance. They
are not neutral.
No technologies are value-neutral. And, certainly, any
designers who intend to foster civic engagement have
taken something of an activist stance in their research,
even if implicitly. But designs for friction take an explicit
stance toward their users. In the case of infrastructuring
civic engagement, designs for friction take the stance that
users should be citizens and that the most foundational
and requisite work of infrastructuring civic engagement
has to come from building the social infrastructure of
citizenry—provoking individuals to identify themselves
not as users but as citizens in the most active sense
Design Strategies
Previous research that we see as embodying infrastructural
friction, both within and outside the domain of civic
engagement, has employed a variety of different strategies
for doing so. Synthesizing this existing research, we
identify and characterize an initial suite of design strategies
for infrastructuring civic engagement by creating friction.
• Infrastructuring through intervention
When citizens do not maintain control over existing
infrastructures, strategies of subversive intervention have
been used to create what we would identify as friction.
For example, research has sought to intervene in
established, large-scale infrastructures by manipulating
the representations of infrastructure that citizens hold in
their hands (e.g., overlays for government ID cards [15]).
This case of everyday appropriation of ‘imposed’
infrastructure is reminiscent of de Certeau’s ‘tactics’, and
emphasizes infrastructure’s relational and potentially
evolving character at the interlay between standardization
and local practices. Research has also taken to “graft a
new infrastructure onto an existing one” ([37]: 616) in
order to reveal previously invisible relationships between
various actors within the existing infrastructure,
2. Designs for friction want to cause trouble. They do not
want to help you; rather, they place little obstacles in
your way.
Designs for friction do not merely blend activities of
civic engagement seamlessly into everyday life, enabling
participation ‘anytime, anywhere’. They do not make
civic engagement more efficient or convenient for
citizens. Rather, in the vein of contestational design,
creating friction in infrastructures of civic engagement
means making citizens pause and reflect—reflect on
alternative civic values, reflect on the status quo of civic
doing and acting, reflect on the viewpoints of others, and
reflect on one’s agency as a citizen… not user. The little
obstacles of friction carve out space for reflection in the
empowering a central but previously disenfranchised
stakeholder population in their everyday (work) lives.
Strategies of intervention, then, shift the arrangements of
power and, in doing so, nudge the infrastructures
themselves toward change.
infrastructures of civic engagement in the hands of those in
power, and in their best interest, in order to counter issues
of stasis and complacency and to reach toward the ideal of
the ‘more active citizen’.
• Infrastructuring by creating alternatives
Other research has created what we would identify as
friction by building alternative infrastructures in parallel
to existing ones to facilitate a pluralism of voices (e.g.,
alternative media channels [32, 51] or parallel communication
infrastructures typically ‘piggy-back’ on existing, stable,
and pervasive infrastructures [33], introducing additional
layers and ways around established forms of power
embedded in infrastructure. This design strategy, then,
suggests a mechanism for undercutting and questioning
power relationships that define what constitutes privilege
and, thus, the structures underlying privileged moments
that constrain civic engagement.
We see three open questions as being productive areas for
future research.
Facilitating a shift from user to citizen is a cultural process
through which peripheral participants are scaffolded or
provoked toward more expert participation [45]. While
Lave and Wenger caution against understanding situated
learning as a movement from the periphery to the center,
we do imagine that the movement from user to citizen, from
periphery to expert, will interact in some significant ways
with where one stands with respect to the institutions of
influence over infrastructure. Bowker and Star [9] warn that
individuals farther removed from these centers of power
will more strongly experience torque from the values
embedded into the infrastructure. Studies of infrastructures
of civic engagement, then, will need to pay particular
attention to understanding how legitimate peripheral
participation can be supported while avoiding the
disenfranchisement that comes with being too far at the
periphery. Frictional infrastructures are likely to be a good
first step as the little obstacles they place in the everyday
may more readily suggest possibilities of opting out than
other infrastructures (see also [52]), but this is a question
that necessitates more empirical work.
• Infrastructuring by making gaps visible
To foreground and leave open (rather than to close and
remove) gaps or seams within and between infrastructures is another strategy that we identify from previous
research as embodying friction (e.g., [52]). ‘Seamful
design’ does so by “selectively and carefully revealing
differences and limitations of systems” ([14]: 251) and
the underlying infrastructural mechanisms between them.
If appropriated for friction, moments of infrastructural
breakdown can become moments of awareness,
reflection, and questioning about the activities that
infrastructures enable and the values inscribed in them,
moving beyond the mere feedback cycles of ‘users’ of
public services. For infrastructures of civic engagement,
then, seams and gaps provide both the space in which to
design for friction as well as the substance of everyday
residue on which citizens may be provoked to reflect.
In this paper, we advocate for frictional design contesting
the status quo of citizen–state relationships. Yet, Lefebvre
—and the Situationists he inspired—as well as de Certeau
also deeply engage with questions of ‘passive’ consumers
as political actors [13, 49]. Research has already sought to
inject and intervene in the prevailing metaphor of markets
[37, 39]. Infrastructuring corporate arrangements of power,
then, is a direction for which future research may fruitfully
mobilize frictional design strategies.
• Infrastructuring by using trace data for critique
A final strategy amenable for creating friction is one of
employing trace data of infrastructural use in order to
critique those infrastructures, or even to reveal them in
the first place. For example, researchers have visualized
the traces individuals leave behind when surfing the web
in order to uncover and ultimately critique large-scale,
commercial data mining practices [39]. Working with
trace data hints at such a strategy’s power to emphasize
the relationship between the residue of activities and their
product. Working with trace data related to civic
engagement could foreground issues of data ownership
and opportunities for challenging power dynamics [76].
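This strategy can be illustrated with a small sketch (the trace format and host names are invented; this is not the visualization pipeline of [39]): given a log recording which third-party hosts were contacted during each page visit, a few lines suffice to surface the trackers that recur across sites and thereby make a normally invisible data-mining infrastructure available for critique.

```python
from collections import Counter

# A hypothetical browsing trace: (visited page, third-party hosts contacted).
trace = [
    ("news.example",  ["ads.tracker.example", "cdn.example"]),
    ("shop.example",  ["ads.tracker.example", "metrics.example"]),
    ("forum.example", ["ads.tracker.example", "metrics.example"]),
]

def recurring_third_parties(trace):
    """Count, for each third-party host, how many distinct first-party
    sites it was contacted from -- the cross-site reach of a tracker."""
    reach = Counter()
    for _page, hosts in trace:
        for host in set(hosts):   # count each host once per site visited
            reach[host] += 1
    return reach

reach = recurring_third_parties(trace)
```

In this invented trace, `ads.tracker.example` appears across all three sites; it is exactly this kind of aggregate, made visible from one's own residue, that can provoke reflection on data ownership.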
Lastly, we assume quite pragmatically that the research
community will need to explore a balance between the
benefits and annoyances of friction. Friction is likely not to
scale, for example, to every situation of civic engagement
in which we expect individuals to find themselves. An
empirical question for future research, then, is for whom, in
what contexts, and over what periods of time is friction
useful—and for whom, in what contexts, and after what
periods of time does it stop being useful—for addressing
the dual challenges of infrastructuring civic engagement.
There are also related questions about how one’s identity
and participation as a citizen evolves, as we assume that
designs for friction will need to be nimble enough to engage
citizens through the ebb and flow of engagement. From an
infrastructuring perspective, this is ultimately a question
about sustainability.
The four design strategies described above primarily act as
subversive forces, working to enact change from outside the
centers of power over these infrastructures. However,
frictional design does not necessarily have to be applied
exclusively from the outside. Rather, we posit friction to be
a mechanism that is also applicable to the design of
Bowker [8] has argued for research to undertake
‘infrastructural inversion’—subverting the traditional
figure/ground relationship that perpetuates the analytic
invisibility of infrastructures. In this research, we reposition
infrastructural inversion as an approach to design rather
than as ethnographic practice. The use of friction when
infrastructuring civic engagement is a designerly enactment
of infrastructural inversion. Friction is positioned to
foreground infrastructures through everyday obstacles that
counter the potential of stasis and complacency.
In this paper, we have introduced theories of the everyday
to the emerging bodies of research on contestational design
and infrastructures of civic engagement. Our research
contributes a design space distilling and describing four
distinct approaches to designing for civic engagement:
deliberation, disruption, situated participation, and friction.
We argue that there is untapped potential for designing for
friction—for leveraging critique and contestation as a means
of re-unifying politics and the everyday.
ACKNOWLEDGMENTS
We thank Susann Wagenknecht for feedback on early drafts
of the paper. This material is based upon work supported by
the National Science Foundation under Grant Number
1360035. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the
National Science Foundation.
REFERENCES
1. Agre, P. (1997). Toward a critical technical practice:
Lessons learned in trying to reform AI. In G. Bowker,
S.L. Star, W. Turner, & L. Gasser (Eds.), Social
Science, Technical Systems, and Cooperative Work:
Beyond the Great Divide. Hillsdale: Erlbaum, 131–158.
2. Asad, M. & Le Dantec, C. (2015). Illegitimate civic
participation: supporting community activists on the
ground. In Proc. CSCW 2015. ACM Press, 1694–1703.
3. Bardzell, J. & Bardzell, S. (2013). What is “critical”
about critical design? In Proc. CHI 2013. ACM Press,
3297–3306.
4. Bardzell, S., Bardzell, J., Forlizzi, J., Zimmerman, J., &
Antanitis, J. (2012). Critical design and critical theory:
the challenge of designing for provocation. In Proc. DIS
2012. ACM Press, 288–297.
5. Bohøj, M., Borchorst, N.G., Bødker, S., Korn, M., &
Zander, P.-O. (2011). Public deliberation in municipal
planning: supporting action and reflection with mobile
technology. In Proc. C&T 2011. ACM Press, 88–97.
6. Borchorst, N.G., Bødker, S., & Zander, P.-O. (2009).
The boundaries of participatory citizenship. In Proc.
ECSCW 2009. Springer, 1–20.
7. Borning, A., Friedman, B., Davis, J., & Lin, P. (2005).
Informing public deliberation: value sensitive design of
indicators for a large-scale urban simulation. In Proc.
ECSCW 2005. Springer, 449–468.
8. Bowker, G. (1994). Information mythology: The world
of/as information. In L. Bud-Frierman (Ed.),
Information Acumen: The Understanding and Use of
Knowledge in Modern Business (pp. 231–247). London:
Routledge.
9. Bowker, G.C. & Star, S.L. (1999). Sorting Things Out.
Cambridge: MIT Press.
10. Bødker, S. (2006). When second wave HCI meets third
wave challenges. In Proc. NordiCHI 2006. ACM Press,
1–8.
11. Carroll, J.M. & Rosson, M.B. (1996). Developing the
Blacksburg electronic village. Commun. ACM 39(12),
12. Castells, M. (2012). Networks of Outrage and Hope:
Social Movements in the Internet Age. Cambridge:
Polity Press.
13. de Certeau, M. (1984). The Practice of Everyday Life.
Berkeley: University of California Press.
14. Chalmers, M. & Galani, A. (2004). Seamful
interweaving: heterogeneity in the theory and design of
interactive systems. In Proc. DIS 2004. ACM Press,
243–252.
15. Clement, A., McPhail, B., Smith, K.L., & Ferenbok, J.
(2012). Probing, mocking and prototyping: participatory
approaches to identity infrastructuring. In Proc. PDC
2012. ACM Press, 21–30.
16. Crivellaro, C., Comber, R., Bowers, J., Wright, P.C., &
Olivier, P. (2014). A pool of dreams: Facebook, politics
and the emergence of a social movement. In Proc. CHI
2014. ACM Press, 3573–3582.
17. DiSalvo, C. (2012). Adversarial Design. Cambridge:
MIT Press.
18. DiSalvo, C., Nourbakhsh, I., Holstius, D., Akin, A., &
Louw, M. (2008). The Neighborhood Networks project:
a case study of critical engagement and creative
expression through participatory design. In Proc. PDC
2008. ACM Press, 41–50.
19. Dourish, P. & Bell, G. (2007). The infrastructure of
experience and the experience of infrastructure:
meaning and structure in everyday encounters with
space. Environment and Planning B: Planning and
Design 34(3), 414–430.
20. Dourish, P. & Bell, G. (2011). Divining a Digital
Future: Mess and Mythology in Ubiquitous Computing.
Cambridge: MIT Press.
21. Dunne, A. & Raby, F. (2001). Design Noir: The Secret
Life of Electronic Objects. Basel: Birkhäuser.
22. Everett, S.P., Greene, K.K., Byrne, M.D., Wallach, D.S.,
Derr, K., Sandler, D., & Torous, T. (2008). Electronic
voting machines versus traditional methods: improved
preference, similar performance. In Proc. CHI 2008.
ACM Press, 883–892.
23. Farina, C., Epstein, D., Heidt, J., & Newhart, M. (2014).
Designing an online civic engagement platform:
Balancing ‘more’ vs. ‘better’ participation in complex
public policymaking. International Journal of E-Politics
5(1), 16–40.
24. Foth, M., Schroeter, R., & Anastasiu, I. (2011). Fixing
the city one photo at a time: mobile logging of
maintenance requests. In Proc. OzCHI 2011. ACM
Press, 126–129.
25. Galloway, A. (2004). Intimations of everyday life:
Ubiquitous computing and the city. Cultural Studies
18(2/3), 384–408.
26. Habermas, J. (1996). Between Facts and Norms.
Contributions to a Discourse Theory of Law and
Democracy. Cambridge: MIT Press.
27. Halttunen, V., Juustila, A., & Nuojua, J. (2010). Technologies to support communication between citizens and
designers in participatory urban planning process. In S.
Wallin, L. Horelli, & J. Saad-Sulonen (Eds.), Digital
Tools in Participatory Planning (pp. 79–91). Centre for
Urban and Regional Studies, Aalto University.
28. Han, K., Shih, P.C., & Carroll, J.M. (2014). Local News
Chatter: augmenting community news by aggregating
hyperlocal microblog content in a tag cloud.
International Journal of Human-Computer Interaction
30(12), 1003–1014.
29. Hassenzahl, M. (2011). Towards an Aesthetic of
Friction. TEDxHogeschoolUtrecht.
ehWdLEXSoh8 (Retrieved: Dec. 10, 2014).
30. Hassenzahl, M. & Laschke, M. (2015). Pleasurable
troublemakers. In S.P. Walz & S. Deterding (Eds.), The
Gameful World: Approaches, Issues, Applications (ch.
6, pp. 167–195). Cambridge: MIT Press.
31. Hirsch, T. (2008). Contestational Design: Innovation for
Political Activism. Unpublished PhD Dissertation,
Massachusetts Institute of Technology.
32. Hirsch, T. (2009). Communities real and imagined:
designing a communication system for Zimbabwean
activists. In Proc. C&T 2009. ACM Press, 71–76.
33. Hirsch, T. (2009). Learning from activists: lessons for
designers. interactions 16(3), 31–33.
34. Hirsch, T. (2011). More than friends: social and mobile
media for activist organizations. In M. Foth, L. Forlano,
C. Satchell, & M. Gibbs (Eds.), From Social Butterfly to
Engaged Citizen (pp. 135–150). Cambridge: MIT Press.
35. Hirsch, T. & Henry, J. (2005). TXTmob: Text
messaging for protest swarms. In Ext. Abstracts CHI
2005. ACM Press, 1455–1458.
36. Hosio, S., Kostakos, V., Kukka, H., Jurmu, M., Riekki,
J., & Ojala, T. (2012). From school food to skate parks
in a few clicks: using public displays to bootstrap civic
engagement of the young. In Proc. Pervasive 2012.
Springer, 425–442.
37. Irani, L.C. & Silberman, M.S. (2013). Turkopticon:
Interrupting worker invisibility in Amazon Mechanical
Turk. In Proc. CHI 2013. ACM Press, 611–620.
38. Kavanaugh, A., Carroll, J.M., Rosson, M.B., Zin, T.T.,
& Reese, D.D. (2005). Community networks: where
offline communities meet online. Journal of Computer-Mediated Communication 10(4).
39. Khovanskaya, V., Baumer, E.P.S., Cosley, D., Voida,
S., & Gay, G. (2013). “Everybody knows what you’re
doing”: a critical design approach to personal informatics.
In Proc. CHI 2013. ACM Press, 3403–3412.
40. Korn, M. (2013). Situating Engagement: Ubiquitous
Infrastructures for In-Situ Civic Engagement.
Unpublished PhD Dissertation, Aarhus University.
41. Korn, M. & Back, J. (2012). Talking it further: from
feelings and memories to civic discussions in and about
places. In Proc. NordiCHI 2012. ACM Press, 189–198.
42. Korsgaard, H. & Brynskov, M. (2014). City bug report:
urban prototyping as participatory process and practice.
In Proc. MAB 2014. ACM Press, 21–29.
43. Kuznetsov, S. & Paulos, E. (2010). Participatory sensing
in public spaces: activating urban surfaces with sensor
probes. In Proc. DIS 2010. ACM Press, 21–30.
44. Laschke, M., Hassenzahl, M., & Diefenbach, S. (2011).
Things with attitude: Transformational products. In
Proc. Create11: The Interaction Design Symposium,
London, UK, 23rd June 2011,
0with%20attitude.pdf (Retrieved: Dec. 10, 2014).
45. Lave, J. & Wenger, E. (1991). Situated Learning:
Legitimate Peripheral Participation. Cambridge:
Cambridge University Press.
46. Leahu, L., Thom-Santelli, J., Pederson, C., & Sengers,
P. (2008). Taming the Situationist beast. In Proc. DIS
2008. ACM Press, 203–211.
47. Le Dantec, C.A. & DiSalvo, C. (2013). Infrastructuring
and the formation of publics in participatory design.
Social Studies of Science 43(2), 241–264.
48. Lee, C.P., Dourish, P. & Mark, G. (2006). The human
infrastructure of cyberinfrastructure. In Proc. CSCW
2006. ACM Press, 483–492.
49. Lefebvre, H. (2014). Critique of Everyday Life. London: Verso.
50. Lehner, U., Baldauf, M., Eranti, V., Reitberger, W., &
Fröhlich, P. (2014). Civic engagement meets pervasive
gaming: towards long-term mobile participation. In Ext.
Abstracts CHI 2014. ACM Press, 1483–1488.
51. Lievrouw, L.A. (2006). Oppositional and activist new
media: remediation, reconfiguration, participation. In
Proc. PDC 2006. ACM Press, 115–124.
52. Mainwaring, S.D., Cheng, M.F., & Anderson, K.
(2004). Infrastructures and their discontents:
Implications for Ubicomp. In Proc. Ubicomp 2004.
Springer, 418–432.
53. Mascaro, C.M. & Goggins, S.P. (2011). Brewing up
citizen engagement: the Coffee Party on Facebook. In
Proc. C&T 2011. ACM Press, 11–20.
54. Monroy-Hernández, A., Farnham, S., Kiciman, E.,
Counts, S., & De Choudhury, M. (2013). Smart
societies: from citizens as sensors to collective action.
interactions 20(4), 16–19.
55. Mouffe, C. (2000). Deliberative Democracy or
Agonistic Pluralism. Political Science Series 72.
Vienna: Institute for Advanced Studies.
56. Mouffe, C. (2013). Agonistics: Thinking the World
Politically. London: Verso.
57. Paulos, E., Honicky, RJ, & Hooker, B. (2009). Citizen
science: Enabling participatory urbanism. In M. Foth
(Ed.), Handbook of Research on Urban Informatics (ch.
XXVIII, pp. 414–436). Hershey, PA: IGI Global.
58. Pipek, V. & Wulf, V. (2009). Infrastructuring: Toward
an integrated perspective on the design and use of
information technology. Journal of the Association of
Information Systems 10(5), 447–473.
59. Ratto, M. (2011). Critical making: Conceptual and
material studies in technology and social life. The
Information Society 27(4), 252–260.
60. Rawls, J. (1971). A Theory of Justice. Cambridge:
Harvard University Press.
61. Rawls, J. (1993). Political Liberalism. New York:
Columbia University Press.
62. Robertson, S.P. (2005). Voter-centered design: Toward
a voter decision support system. ACM Trans. Comput.-Hum. Interact. 12(2), 263–292.
63. Robertson, S.P. & Vatrapu, R.K. (2010). Digital
government. Ann. Rev. Info. Sci. Tech. 44(1), 317–364.
64. Schroeter, R., Foth, M., & Satchell, C. (2012). People,
content, location: sweet spotting urban screens for
situated engagement. In Proc. DIS 2012. ACM Press,
65. Semaan, B.C., Robertson, S.P., Douglas, S., &
Maruyama, M. (2014). Social media supporting political
deliberation across multiple public spheres: towards
depolarization. In Proc. CSCW 2014. ACM Press,
66. Sengers, P., Boehner, K., David, S., & Kaye, J. (2005).
Reflective design. In Proc. Aarhus 2005: Critical
Computing. ACM Press, 49–58.
67. Star, S.L. & Bowker, G.C. (2006). How to infrastructure.
In L.A. Lievrouw & S. Livingstone (Eds.), Handbook of
New Media: Student Edition (pp. 230–245). Thousand
Oaks: Sage.
68. Star, S.L. & Ruhleder, K. (1996). Steps toward an
ecology of infrastructure: Design and access for large
information spaces. Information Systems Research 7(1), 111–134.
69. Taylor, N., Marshall, J., Blum-Ross, A., Mills, J.,
Rogers, J., Egglestone, P., Frohlich, D.M., Wright, P., &
Olivier, P. (2012). Viewpoint: empowering communities
with situated voting devices. In Proc. CHI 2012. ACM
Press, 1361–1370.
70. Tsing, A.L. (2005). Friction: An ethnography of global
connection. Princeton: Princeton University Press.
71. Tufekci, Z. & Wilson, C. (2012). Social media and the
decision to participate in political protest: Observations
from Tahrir Square. Journal of Communication 62(2), 363–379.
72. Vlachokyriakos, V., Comber, R., Ladha, K., Taylor, N.,
Dunphy, P., McCorry, P., & Olivier, P. (2014).
PosterVote: expanding the action repertoire for local
political activism. In Proc. DIS 2014. ACM Press, 795–804.
73. Voida, A., Dombrowski, L., Hayes, G.R., &
Mazmanian, M. (2014). Shared values/conflicting
logics: working around e-government systems. In Proc.
CHI 2014. ACM Press, 3583–3592.
74. Voida, A., Yao, Z., & Korn, M. (2015). (Infra)structures
of volunteering. In Proc. CSCW 2015. ACM Press,
75. Wagner, I., Basile, M., Ehrenstrasser, L., Maquil, V.,
Terrin, J.-J., & Wagner, M. (2009). Supporting
community engagement in the city: urban planning in
the MR-tent. In Proc. C&T 2009. ACM Press, 185–194.
76. Weise, S., Hardy, J., Agarwal, P., Coulton, P., Friday,
A., & Chiasson, M. (2012). Democratizing ubiquitous
computing: a right for locality. In Proc. UbiComp 2012.
ACM Press, 521–530.
77. Weiser, M. (1991). The computer for the 21st century.
Scientific American 265, 94–104.
78. Wulf, V., Aal, K., Kteish, I.A., Atam, M., Schubert, K.,
Rohde, M., Yerousis, G.P., & Randall, D. (2013).
Fighting against the wall: social media use by political
activists in a Palestinian village. In Proc. CHI 2013.
ACM Press, 1979–1988.
79. Wulf, V., Misaki, K., Atam, M., Randall, D., & Rohde,
M. (2013). 'On the ground' in Sidi Bouzid: investigating
social media use during the Tunisian revolution. In
Proc. CSCW 2013. ACM Press, 1409–1418.
80. Zerubavel, E. (1991). The Fine Line: Making
Distinctions in Everyday Life. Chicago: University of
Chicago Press.
Not The Internet, but This Internet:
How Othernets Illuminate Our Feudal Internet
Paul Dourish
Department of Informatics
University of California, Irvine
Irvine, CA 92697-3440, USA
[email protected]
ABSTRACT
What is the Internet like, and how do we know? Less
tendentiously, how can we make general statements about
the Internet without reference to alternatives that help us to
understand what the space of network design possibilities
might be? This paper presents a series of cases of network
alternatives which provide a vantage point from which to
reflect upon the ways that the Internet does or does not
uphold both its own design goals and our collective
imaginings of what it does and how. The goal is to provide
a framework for understanding how technologies embody
promises, and how these both come to evolve.
Author Keywords
Network protocols; network topology; naming; routing;
media infrastructures.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI):
Miscellaneous.
INTRODUCTION
What does it mean to say, as many have, that “the Internet
treats censorship as damage and routes around it” [14]? Or
what does it mean to argue, as is also common, that the
Internet is a democratizing force as a result of its
decentralized architecture [8]? Or that it’s a platform for
grass-roots community-building rather than mass media
messaging [35]?
Statements like these have been critiqued for their treatment
of topics such as democracy, censorship, broadcasting, or
community – all complicated terms that are perhaps
invoked and conjured more than they are critically
examined [19]. However, the term that I want to focus on in
these statements is “the Internet”. When we attribute
characteristics to the Internet, specifically, what do we
mean? Do we just mean “digital networks”? Do we mean
digital networks that implement the Internet protocols? Or
do we mean the one very specific network that we have
built – the Internet, this Internet, our Internet, the one to
which I’m connected right now?
I ask these questions in the context of a burgeoning recent
interest in examining digital technologies as materially,
socially, historically and geographically specific [e.g. 13,
15, 36, 37]. There is no denying the central role that “the
digital,” broadly construed, plays as part of contemporary
communications. Computational devices may be
concentrated in the urban centers of economically
privileged nations, but even in the most “remote” corners of
the globe, much of everyday life is structured, organized,
and governed by databases and algorithms, and “the digital”
still operates even in the central fact of its occasional
absence, the gap that renders particular spots “off grid.”
Where digital technologies were once objects of primarily
engineering attention, their pervasive presence has meant
that other disciplines – anthropology, political science,
communication, sociology, history, cultural studies, and
more – have had to grapple with the question, “what are the
cultural consequences of ‘the digital’?” The problem,
however, has been to get a grip on what ‘the digital’
itself is. Rather too often, ‘the digital’ is taken simply as a
metaphor for regimentation and control, or it is used to
name some amorphous and unexamined constellation of
representational and computational practices. The
somewhat casual assertion that “the Internet” has some
particular properties runs this danger. It’s difficult to know
how to read or assess these assertions without, for example,
understanding something of the scope of what is being
claimed through some kind of differential analysis of “the
not-Internet.” We need to take an approach to “the Internet”
that begins with an understanding of its technical reality,
although not one that reduces it to simply that. In other
words, we want to maintain a focus on the social and
cultural reality of technologies such as the Internet, but in a
way that takes seriously its material specificities.
Two Transitions
To make this argument more concrete, let me begin by
describing two cases from my own working history, two
digital transitions that illustrate the complexity of talking
about the Internet as a force for decentralization.
Copyright© 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives
August 17 – 21, 2015, Aarhus Denmark
The first of these transitions occurred in the early 1990s
when I worked for Xerox. Xerox had been a pioneer in
digital networking. Early research on Ethernet and
distributed systems constituted an important precursor to
the development of the Internet Protocol (IP) suite, and the
research systems such as PUP [4] and Grapevine [3] had
subsequently given rise to a protocol suite called XNS
(Xerox Network Services), which was the basis of a line of
digital office systems products that Xerox sold in the
marketplace. Xerox’s own corporate network spanned the
globe, linking thousands of workstations and servers
together using the XNS protocols. In the 1990s, as Unix
workstations began to dominate the professional
workstation market, and as the arrival of distributed
information services such as Gopher, WAIS, and WWW
spurred the accelerated growth of the Internet, many inside
Xerox became interested in using TCP/IP on internal
networks too. Particularly at research and development
centers, various groups began to run TCP/IP networks
locally and then increasingly looked for ways to connect
them together using the same leased lines that carried XNS
traffic between sites. What began as renegade or illicit
actions slowly became organizationally known and
tolerated, and then organizationally supported as TCP/IP
became a recognized aspect of internal corporate
networking within Xerox.
XNS was a protocol suite designed for corporate
environments. Although technically decentralized, it
depended on an administratively centralized or managed
model. The effective use of XNS was tied together by a
distributed database service known as the Clearinghouse,
which was responsible for device naming, address
resolution, user authentication, access control, and related
functions. Users, servers, workstations, printers, email lists,
organizational units, and other network-relevant objects
were all registered in the Clearinghouse, which was
implemented as a distributed network of database servers
linked via a so-called “epidemic” algorithm by which they
would keep their database records up to date. Access
control mechanisms distinguished administrators, who
could update Clearinghouse databases, from regular users,
who could look up names but couldn’t introduce new ones.
The Clearinghouse service was central enough to the
operation of XNS services that this administrative access
was needed for all sorts of operations, from adding new
users to installing new workstations.
By contrast, the TCP/IP network, and the Unix workstations
that it generally linked, was much less centrally
administered. For the Unix machines, users could be
defined locally for each computer, and similarly,
workstations could maintain their own machine addressing
and network routing information. Even when systems were
interconnected, much less coordination was required to get
machines connected together effectively in the IP
network than in the XNS network. As a result, the rise of
the IP network provided a route by which people could to
some extent become more independent of the corporate IT
management structure through which the XNS network was
operated. Since XNS was the dominant technology for
organizational communication, it wasn’t entirely possible
for people using TCP/IP to “route around” the corporate
network, but it started to provide some independence.
The second transition was also a transition to IP, but in a
very different context. This transition was going on while I
briefly worked at Apple in the late 1990s. As at Xerox, the
rise of the Internet in general was reflected in the increasing
use of TCP/IP in a network that had originally been put
together through a different network protocol – in this case,
AppleTalk. AppleTalk was a proprietary network suite that
Apple developed to connect Macintosh computers; it had
evolved over time to operate over the Ethernet networks
commonly deployed in corporate settings, although it had
originally been developed for linking computers together in
relatively small networks. One important feature of the
AppleTalk networking protocols is their so-called “plug-and-play”
approach, which allows a network to be deployed
with minimal manual configuration. For example,
AppleTalk does not require that network addresses be
pre-assigned or that a server be available for network resource
discovery; these features are managed directly by the
networked computers themselves. Accordingly, setting up
AppleTalk networks requires little or no administrative
intervention. TCP/IP networks, on the other hand, do
require some services to be set up – DHCP servers to
allocate addresses, name servers to resolve network
addresses, and so on. (In fact, the contemporary networking
technologies known as Bonjour or Zeroconf are
mechanisms designed to re-introduce AppleTalk’s
plug-and-play functionality into TCP/IP networking.) So, where the
transition from XNS to TCP/IP was a decentralizing
transition at Xerox, one that increased people’s
independence from corporate network management, the
transition from AppleTalk to TCP/IP at Apple moved in the
other direction, creating more reliance on network
infrastructure and hence on network administration.
These examples illustrate two important concerns that
animate this paper. The first is that statements about “the
Internet” and its political and institutional character suffer
for a lack of contrast classes (“critical alternatives,” if you
will). The Internet may well be decentralized – but
compared to what, exactly? More decentralized
internetworks could be imagined and have existed. We
might be able to make some more fruitful observations if
the “Internet” that we are trying to characterize weren’t
such a singular phenomenon. In this paper I will briefly
sketch some cases that might serve as points of comparison
that provide for more specific statements about
contemporary phenomena by showing how they might be,
and have been, otherwise. The first concern, then, is to
provide a framework within which the characterizations can
take on more meaning. The second concern is that the
singular nature of the Internet makes it hard for us to
distinguish the conceptual object of our attention, the object
that we are characterizing. Given that the Internet – the
specific Internet to which we can buy or obtain access today
– is generally the only Internet we have known, it’s hard to
be able to pin down just what object it is that we are
characterizing when we talk about, say, decentralization.
Do we mean that a world-spanning network-of-networks is
inherently decentralized? Or is decentralization a
characteristic of the specific protocols and software that we
might use to operate that network (the Internet protocols)?
Or is it rather a characteristic of the specific network that
we have built, which doesn’t just use those protocols, but
implements them in a specific network made of particular
connections, an amalgam of undersea fiber-optic cables,
domestic WiFi connections, commercial service providers,
and so on? Is it our Internet that’s decentralized, while we
could still imagine a centralized one being built? Or is our
Internet actually less decentralized than it might be, failing
to achieve its own promise (if that’s something we want)?
read and post messages. In the US, where local phone calls
were generally free, BBSs flourish in the late 1970s and
early 1980s, as the home computer market grew. With very
simple software, they allowed people to communicate by
dialing in to the BBS at different times and posting
messages that others would read later. While one certainly
could make a long-distance call to connect to a BBS, most
BBS use was local to take advantage of toll-free calling, so
that most BBS activity was highly regionalized.
Fido was the name of BBS software first written by Tom
Jennings in San Francisco in 1984 and then adopted by
others elsewhere in the US. Before long, Fido was updated
with code that would allow different Fido BBS systems to
call each other to exchange messages; Fidonet is the name
both of this software and of the network of BBS systems
that exchanged messages through this mechanism.
Fidonet’s growth was explosive; from its start in 1985,
Fidonet had around 500 nodes by the end of 1985, almost
2,000 by 1987, 12,000 by 1991 and over 35,000 by 1995.
Each of these nodes was a BBS that server ten to hundreds
of users, who could exchange email messages, files, and
discussions on group lists.
In order to provide the resources to think about these
questions fruitfully, I will approach the topic from two
perspectives. The first is to briefly catalog some alternatives
to “the” Internet. Some of these are entirely alternative
networks; some are small components of the broader
Internet that do not always operate the same way as the
whole. The second is to take in turn some key aspects of
network function – routing, naming, and so on – and
examine their contemporary specificities, with particular
focus on the relationship between specific commercial and
technical arrangements and the openness or range of
possibilities encapsulated by the network design. Taken
together, these allow us to develop an understanding of the
landscape of potential network arrangements within which
our current arrangements take their place, and perhaps more
accurately then target or assess statements about what “the
Internet” is or does.
Fidonet’s design was (with one critical exception) radically
decentralized. Based as it was on a dial-up model rather
than an infrastructure of fixed connections, it employed a
model of direct, peer-to-peer communication. The Fidonet
software was originally designed with a flat list of up to 250
nodes; system of regions, zones, and networks was
introduced within a year of the original software when it
became clear that the system would very soon grow beyond
that capacity. This structure, which mapped the topology of
the network geographically, provided a message routing
structure which reduced costs (by maximizing local calling
and by pooling messages for long-distance transfer) but
with a minimum of fixed structure; direct communication
between nodes was always central to the Fidonet model.
The structure essentially exhibited a two-level architecture;
one of conventional structures (that is, the conventional
pattern of daily or weekly connections between sites) and
an immediate structure, made up of those nodes
communicating with each other right now. (The Internet –
being made up in its current form primarily of fixed
infrastructure and broadband connectivity – largely
conflates these two.)
Our starting point are what I have been calling “othernets”
– non-Internet internets, if you will. Some of these are
networks of a similar style but which happen to use
different protocols; some of them are radically different
arrangements. Some of them are global networks, and some
more localized; some are specific networks and some are
ways of thinking about or approaching network design. The
collection listed here is far from comprehensive.1 What they
do for us here, though, is to flesh out an internetworking
space of possibilities, one that helps to place “the Internet”
in some context.
Within a year, Fidonet was in danger of hitting an initial
limit of 250 nodes, and so the network protocols and
software were redesigned around the concepts of regions
and nets. Until this point, Fidonet had used a single, “flat”
list of nodes, which was directly maintained by Jennings.
The introduction of regions and nets allowed for a more
decentralized structure. This was simultaneously an
addressing structure and a routing structure, linked together
– the structure of the network was also the structure that
determined how messages would make their way from one
place to another.
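The way a single structure can serve simultaneously as an addressing scheme and a routing scheme can be sketched in a few lines. This is an illustrative assumption, not a reconstruction of the actual Fidonet software: the familiar zone:net/node address form is used, and the example numbers are hypothetical.

```python
import re

def parse_fidonet_address(addr):
    """Parse a Fidonet-style address of the form zone:net/node.

    A sketch only: the zone:net/node form is the address format the
    region/net redesign eventually settled on; the numbers below are
    illustrative, not historical.
    """
    m = re.fullmatch(r"(\d+):(\d+)/(\d+)", addr)
    if m is None:
        raise ValueError("not a zone:net/node address: %r" % addr)
    zone, net, node = map(int, m.groups())
    # The same components that name a node also route to it: a message
    # is passed toward the right zone, then the right net, then the node.
    return {"zone": zone, "net": net, "node": node}

print(parse_fidonet_address("1:170/918"))
```

The point of the sketch is that no separate routing table is needed: decomposing the address is itself the routing decision.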
A Bulletin Board System (BBS) hosts messages and
discussions, generally on a quite simple technology
platform such as a conventional home PC with a modem
that allows people to call into it over regular phone lines to
Most problematically, local and community “DIY” networks
usefully illuminate the issues in question but had to be omitted for
space [e.g. 1, 16, 21, 30].
Fidonet was built around a file transfer mechanism that
would allow files to move from one node to another. Other
facilities could be built on top of this mechanism, such as
electronic mail. One of the major features of Fidonet from
the user perspective was the discussion groups known as
“echoes”. Echoes allowed users to post messages that
would be distributed to all users on the system interested in
a topic. A “moderation” mechanism allowed echo managers
to discourage off-topic posts, but this was a post-hoc
mechanism (rather than an approach which required explicit
approval before a message was sent out). As in systems
such as The Well [39], the topically-oriented discussion
forums provided by echoes were the primary site of
interaction and community building across Fidonet
(although unlike The Well, Fido echoes were networked
rather than centralized.)
connection; our example bang path only works if “mcvax”
is one of the computers that “seismo” regularly connects to
directly. One couldn’t, for instance, route a message along
the path “seismo!ukc!itspna!jpd” because seismo only dials
up to certain other computers, and ukc isn’t one of them.
Two aspects of this are worth noting. The first concerns the
dynamics of network structure in the presence of this route-based addressing mechanism. Route-based addressing via
bang paths means not just that you need to understand how
your computer is connected to the network; everybody
needs to understand how your computer is connected to the
network, if they want to reach you, and they need to
understand how all the intermediate computers are
connected to the network too. This arrangement does not
allow, then, for frequent reconfiguration. Should seismo
stop talking to mcvax, then people using that connection as
part of their routing process will find their routes break.
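The hop-by-hop consumption of a bang path, and the way a route breaks when two hosts stop talking, can be sketched as follows. The neighbour topology here is a hypothetical stand-in reusing only the host names from the text; the `route` helper is mine, not part of any historical software.

```python
def route(bang_path, neighbours):
    """Simulate uucp-style source routing: each host on the path must
    actually dial the next one directly.

    `neighbours` maps a host to the set of hosts it calls directly;
    the topology is illustrative, not historical.
    """
    hops = bang_path.split("!")
    user = hops.pop()  # the final component names the user, not a host
    for here, there in zip(hops, hops[1:]):
        if there not in neighbours.get(here, set()):
            raise RuntimeError("%s does not dial %s" % (here, there))
    return "deliver to user %r on host %r" % (user, hops[-1])

neighbours = {"seismo": {"mcvax"}, "mcvax": {"ukc"}, "ukc": {"itspna"}}
print(route("seismo!mcvax!ukc!itspna!jpd", neighbours))
# A path through a pair of hosts that do not connect directly fails,
# which is why reconfiguring one link broke other people's routes.
```

Removing the seismo–mcvax entry from `neighbours` makes the first call fail, exactly the breakage described above.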
Usenet is a somewhat informal term for a worldwide
network of computers linked by a series of mechanisms
built on top of a facility provided by versions of the Unix
operating system [18]. The facility was known as uucp,
which stands for “Unix-to-Unix copy.” In Unix’s
command-line environment, “uucp” was a command that
users could use to copy files between computers. It was
designed by analogy with the standard “cp” command for
copying files; just as a user might use “cp” to copy a file
from a source to a destination filename, they might also use
“uucp” to copy a file from a source to a destination, where
either the source or destination file location was in fact on a
different computer.
The second noteworthy aspect, a partial consequence of the
first, is that within the open, pairwise connection structure
afforded by Usenet, a backbone hierarchy of major sites
soon arose, at first through conventional practice and later
through explicit design. These were major sites that
engaged in significant data transfer with each other or with
other groups, including ihnp4 at AT&T’s Indian Hill site,
seismo at the Center for Seismic Studies in northern
Virginia and mcvax at the Mathematisch Centrum in
Amsterdam, which effectively became the primary trans-Atlantic gateways, and national sites such as ukc at the
University of Kent at Canterbury, which effectively became
the gateway to the United Kingdom. Significantly, some of
these also served as “well-known nodes” for routing
purposes; one might quote one’s email address as
“…!seismo!mcvax!itspna!jpd”, with the “…” essentially
standing for the directive “use whatever path you regularly use to reach here.” The design of the network,
then, is unstructured but the practice requires a commitment
to some form of well-understood centralization.
Uucp was developed as a basic user file-copy facility, but
the core idea of file-based interconnections between
minicomputers was general enough that many other
facilities could be built on top of it. For instance, uucp
could be used to support email messaging between sites,
exchanging individual messages as files that the remote
system would recognize as drafts to be delivered by email
locally. The same mechanisms that named remote files,
then, might also be used to name remote users.
Uucp mail, with its explicit use of bang paths, was not the
primary or most visible aspect of Usenet, however. A
distributed messaging service which came to be known as
Usenet news was first deployed in 1980, using the same
underlying uucp mechanism to share files between sites.
Unlike email messages directed to specific users, however,
articles in Usenet news were open posts organized into
topically-organized “newsgroups.” Via uucp, these would
propagate between sites. Usenet connected many of the
same academic and industrial research sites that came to be
incorporated into ARPAnet or its successors, and so over
time, internet protocols became a more effective way for
messages to be exchanged, at which point, Usenet
newsgroups were distributed over TCP/IP rather than uucp.
Since it had been designed initially simply to provide a user
interface to a dial-up mechanism for file exchange, Usenet
provided no global naming mechanism to identify sites,
files, objects, or users. Rather, its naming mechanism was a
sequence of identifiers (separated by exclamation points or
“bangs”) that explained how a message should be routed.
So, for instance, the path “seismo!mcvax!ukc!itspna!jpd”
directs a computer to deliver the file first to a computer
called “seismo”, at which point the path will be rewritten to
“mcvax!ukc!itspna!jpd”; subsequently, it will be delivered
to a computer called “mcvax”, then to one called “ukc,” and
so on. “jpd” is the user, whose account is on the computer
“itspna”. To send a message correctly, then, required that
one knew not only the destination, but the route that the
message should take – the series of peer-to-peer
connections that must be made. Each path describes a direct
Digital Equipment Corporation (DEC, or just “Digital”)
was a leading designer and vendor of minicomputers
through the 1970s and 1980s. Indeed, many of the
computers connected to the early ARPAnet/Internet were
DEC machines, especially systems in the DEC-10, DEC-20,
PDP, and VAX ranges, which were widely used in the
academic research community. At the same time, DEC had
its own networking system, delivered as part of its RSX and
VMS operating systems. This network system, known as
DECnet or the Digital Network Architecture (DNA), was
initially introduced in the mid-1970s as a simple point-to-point connection between two PDP-11 minicomputers.
Subsequently, the network architecture evolved to
incorporate new technologies and new capabilities. The
design and development of DECnet was, then, largely
contemporaneous with the development of the Internet
protocols. The last fully proprietary versions were DECnet
Phases IV and IV+, in the early 1980s [10]; DECnet
Phases V and V+ maintained compatibility with the
proprietary DECnet protocols but moved more in the
direction of support for the ISO-defined OSI protocol stack.
computer on a network needs to have a different address,
the size of the address is a limit upon the size of the
network. With 16-bit addresses, DECnet network
implementations were limited to 64449 hosts.
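The arithmetic behind that figure is worth spelling out. Assuming the commonly documented split of the 16-bit DECnet Phase IV address into a 6-bit area number and a 10-bit node number, with zero unusable in each field:

```python
# DECnet Phase IV packed a 6-bit area number and a 10-bit node number
# into one 16-bit address; with area 0 and node 0 reserved, the usable
# address space works out to the figure quoted in the text.
AREA_BITS, NODE_BITS = 6, 10
areas = 2**AREA_BITS - 1    # areas 1..63
nodes = 2**NODE_BITS - 1    # nodes 1..1023
print(areas * nodes)        # 64449
```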
These three features of the DECnet design speak to
a particular context of use. They highlight the expectation
that DECnet deployments would be uniform, well-regulated
and actively managed. This makes perfect sense in the
context of DEC’s sales in corporate settings, where network
implementation can be phased, planned, and centrally
directed. Effective use of the shared file facilities, for
instance, requires a coordinated approach to the layout and
conventions of filesystems across machines, while the
network management infrastructure suggests that this was a
key consideration in the settings for which DECnet was
designed. An odd little implementation quirk in DECnet
Phase IV similarly supports this. To make routing and
discovery easier, the network software running on a DECnet
computer operating over an Ethernet network would
actually reset the hardware network address of the computer
to an address that conformed to the DECnet host address.
This would cause considerable difficulty when DECnet was
running in the same environment as other protocols, but in a
highly managed environment where uniformity could be
guaranteed, it was less troublesome.
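The quirk can be sketched as a small derivation. The AA-00-04-00 prefix and little-endian byte order follow the commonly documented DECnet Phase IV convention; the helper name is mine, and this is a sketch of the address mapping only, not of the actual network software.

```python
def decnet_mac(area, node):
    """Derive the Ethernet MAC address a DECnet Phase IV host would
    assign to itself: a fixed DEC prefix followed by the 16-bit DECnet
    address in little-endian byte order (a sketch of the documented
    AA-00-04-00-xx-yy convention)."""
    addr = (area << 10) | node          # 6-bit area, 10-bit node
    return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, (addr >> 8) & 0xFF)

# Host 1.5 (area 1, node 5): DECnet address 1029 = 0x0405
print(decnet_mac(1, 5))   # AA-00-04-00-05-04
```

Because the hardware address is overwritten deterministically from the protocol address, two protocol stacks disagreeing about what the hardware address should be is exactly the conflict described above.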
Given that DECnet was designed at roughly the same time
as the Internet protocol suite, and given that it connected
many of the same computer system types as the Internet
protocols, it is again a useful point of comparison. DECnet
was based on much the same “layered” protocol model that
was the contemporary state of the art, and its basic
architecture – a point to point connection layer, a routing
layer, a layer for reliable sequenced delivery, and so on – is
similar to that of systems like PUP, XNS, and TCP/IP.
However, some key differences reveal the distinct character
to the contexts in which DECnet was expected to operate.
In sum, although DECnet was based on the same
connectivity that characterizes the Internet protocol suite,
its specific configuration of that approach is one that was in
practice designed for the highly managed, highly controlled
setting of corporate IT.
One of these is that DECnet incorporated a sophisticated
management interface, and indeed, that facilities for
network management were designed into the protocol stack
from an early stage. That is, DECnet was entirely imagined
to be deployed in a managed environment. TCP/IP has to be
managed too, of course, but the management of TCP/IP
networks is not a network function in itself. (The Internet
protocol suite includes a protocol, SNMP, for network
management uses, but network-wide management is not a
key consideration.)
The conventional historical account of the emergence of
today’s Internet traces its beginnings to the ARPANET.
ARPANET was both a research project and a facility; that
is, the ARPANET project developed the networking
technologies that are the underpinnings of the contemporary
Internet and it also operated a facility – a network – based
on those underpinnings. So this was not simply a research
project funded by ARPA; it was also a facility that
supported ARPA research. ARPA is the research arm of the
US Department of Defense (DOD), and so in order to
qualify as a site to be connected to ARPANET, one had to
be a DOD site or a DOD contractor. At the time, this
included many prominent research universities and
computer science departments, but by no means all, and not
even most.
A second suggestive distinction lies within the sets of
services standardized within DECnet. These included
services, like network terminal access, similar to TCP/IP,
but also some that the Internet protocol suite did not
natively attempt to support, such as seamless remote
filesystem access, in which disks attached to one computer
would appear to be virtually available to users of other,
connected computers. Remote file access of this sort (which
was also a feature that had been part of the Xerox network
system) goes beyond simply file transfer by providing users
with the illusion of seamless access to both local and
remote files. (Email, on the other hand, was not one of the
standardized protocols, although networked email services
were available through operating system applications.)
Recognizing the value that the DOD-contracting
universities were deriving from their participation in the
ARPANET effort, wanting to expand beyond the smaller-scale network arrangements upon which they were already
depending [6], and concerned that ‘the ARPANET
experiment had produced a split between the “haves” of the
ARPANET and the “have-nots” of the rest of the computer
science community’ [9], a consortium of US computer
A third – although relatively trivial – distinction was that
DECnet addresses were just 16 bits long. Since each
science departments partnered with the National Science
Foundation and other groups to put together a proposal for
the Computer Science Research Network, or CSNET.
CSNET integrated multiple different network technologies,
with a store-and-forward email system over regular phone
lines as the base level of participation, but leased line and
X.25 networking available for higher performance
other important, user-facing senses, distinct networks,
suggesting intriguingly that there may be more to being “an
Internet” than running TCP/IP on interconnected networks.
Briefly examining some of these “othernets” lets us place
our contemporary experience of the Internet – in its
multiple capacities as a configuration of technologies, a
constellation of services, and an object of cultural attention
– in context. Some of that context is a “design space” – that
is a space of possibilities that are available as outcomes of
specific design decisions. Some lies in “historical
circumstance” – that is, ways that different configurations
arose reflecting their own historical trajectories. In order to
reflect on these in a little more detail, we can take a
different cut at the same question – of the nature of the
Internet within a space of alternatives – by approaching it
in terms of the kinds of facilities that networks can offer.
While the development of CSNET produced many
important technical contributions, CSNET’s most
significant legacies might be historical and institutional, in
that, first, CSNET represented a significant expansion in the
spread of TCP/IP in the academic research community, and
second, it was designed in collaboration with and in order
to support the mission of the National Science Foundation
(rather than, say, the military backers of ARPA and
ARPANET). The CSNET effort laid the foundation for a
later NSF-funded network called NSFNet, and it was the
NSFNet backbone that was later opened up to commercial
traffic, and then in 1995 replaced entirely by private service
providers. The importance of this point is the explicit link at
the time between institutional participation and
connectivity, and the very idea that a network is conceived
around an understanding of quite who and what will be
permitted to connect.
Consider even this simple question: does a computer
“know” its own name? In the naming arrangement of
traditional TCP/IP, the answer is no. A computer knows its
address but not necessarily the name by which it might be
addressed by others, and in fact can operate in the TCP/IP
environment quite effectively without one, as signaled by
the fact that both TCP and IP use addresses, not names,
internally. So, facilities are built into the Domain Name
Service (DNS) protocols [26] that allow a computer, on
booting, to ask a network server, “What’s my name?”
Naming is entirely delegated to the name service; that is,
the network authority that answers the query “what is the
address at which I can reach this name?” is also the
authority that tells a computer what its own name is in the
first place.
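The delegated arrangement described above can be sketched with a toy name service. This is a minimal illustration of the organizational point, not of the actual DNS protocol; the names and addresses are hypothetical.

```python
class NameService:
    """Toy sketch of DNS-style delegated naming: one authoritative
    table answers both the forward query ("what is the address of
    this name?") and the query a booting host asks about itself
    ("what is my name?")."""
    def __init__(self, table):
        self.forward = dict(table)                       # name -> address
        self.reverse = {a: n for n, a in table.items()}  # address -> name

    def address_of(self, name):
        return self.forward[name]

    def whats_my_name(self, my_address):
        # The booting host knows only its address; authority over
        # its name lies entirely with the server.
        return self.reverse[my_address]

ns = NameService({"host-a.example": "10.0.0.1"})  # hypothetical entries
print(ns.whats_my_name("10.0.0.1"))               # host-a.example
```

The contrast with AppleTalk-style self-naming, discussed below, is that here the host holds no authority over its own name at all.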
The more interesting technical point for us here though
concerns the relationship between ARPANET and CSNET.
In a minor sense, CSNET was “in competition with”
ARPANET; it was, after all, designed as a network for
those who were institutionally denied access to ARPANET.
However, in all the ways that matter it was entirely
collaborative with ARPANET; key players in the
ARPANET project, such as TCP/IP co-designer Vint Cerf
at Stanford, participated in the CSNET effort, and the
networks were bridged together. The basis for that bridging
was CSNET’s adoption of the TCP/IP protocols that had
been designed within the ARPANET effort. Through this
bridging, ARPANET became a subnet of CSNET.
We normally think of arrangements of subnetting
and internetworking as providing seamless interconnection,
but the interconnection between CSNET and ARPANET
was not quite so seamless, since they adopted different
protocols for delivering services at the higher levels. For
instance, the MMDF network messaging facility developed
as part of CSNET [7] needed to be able to bridge
between the “phonenet” and TCP/IP components of
CSNET, and that meant that messages destined for CSNET
recipients would need to be routed explicitly to CSNET
rather than simply dispatched using the usual protocols used
on purely TCP/IP networks (such as SMTP, the standard
Internet email transfer protocol). In other words – both
ARPANET and CSNET implemented the Internet protocols
(TCP/IP) but not all the other protocols of what is
sometimes called the “Internet protocol suite”; accordingly,
even though they were connected together through a
common TCP/IP infrastructure, they remained in some
By contrast, other networking protocols – like AppleTalk,
for example – delegate to individual computers the right to
assign themselves names. This can be implemented in
different ways – by having each computer register a name
with a naming facility, for example, or by entirely
distributing the name lookup process rather than electing
particular computers to implement a name or directory
service – and these different mechanisms have their own
differences in terms of both technological capacities and
organizational expectations. The abilities for a computer to
assign itself a name and to advertise that name to others are
facilities that the designers of the IETF “Zeroconf”
protocol suite felt it important to add, in order to support the
server-free “plug-and-play networking” approach of
AppleTalk [38].
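The self-naming alternative can be sketched as a probe-and-retry loop. This is an illustrative abstraction of the AppleTalk/Zeroconf style of name claiming, not an implementation of either protocol; `names_in_use` stands in for what a real host would learn by probing the network.

```python
def claim_name(desired, names_in_use):
    """Sketch of self-assigned naming: a host proposes its own name,
    checks for conflicts, and appends a numeric suffix until the name
    is unique, then advertises its claim."""
    name, n = desired, 2
    while name in names_in_use:
        name = "%s-%d" % (desired, n)
        n += 1
    names_in_use.add(name)   # advertise the successful claim
    return name

in_use = {"printer", "printer-2"}
print(claim_name("printer", in_use))   # printer-3
print(claim_name("scanner", in_use))   # scanner
```

Note the organizational inversion relative to DNS-style naming: authority sits with the individual host, and the network merely arbitrates conflicts.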
The issue of naming raises a series of questions that
highlight the relationship between specific technical
facilities and the social organization of technological
practice. Who gets to name a computer? Who is responsible
for the names that network nodes use or call themselves?
How is naming authority distributed? How visible are
names? How centralized is control, and what temporalities
shape it? Even such a simple issue as naming presents a
microcosm of my larger argument that questions of online
experience need to be examined in their technical and
historical specificities.
Elsewhere [12], I discuss the case of Internet routing and
explore the history of the development of routing protocols
as being entwined with the history of the expansion of the
Internet infrastructure. That paper contains a fuller
argument, with a particular focus on the materiality of
routing, but I want to summarise from it one relevant
element here, which is the role of so-called “autonomous
systems” in today’s Internet routing.
Figure 1. The Arpanet in 1972, exhibiting a net-like structure.
If there is one cornerstone of the argument that the Internet
is a “decentralized” phenomenon, it is the case of network
routing. Rather than requiring a central authority to
understand the structure of the network and determine how
traffic should flow, the design of the IP protocol, which gets
Internet packets from one host to another, relies on each
computer along a path deciding independently which way a
piece of information should travel to take it closer to its
destination, and creates the conditions under which
coherent behavior results from this distributed process.
Routing protocols are the protocols by which the
information is distributed upon which these local decisions
are to be made.
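The local decision each router makes can be sketched as a lookup in a locally held forwarding table. The string-prefix "addresses" and router names here are illustrative assumptions that keep the sketch simple while preserving the idea of most-specific-route matching; coherent end-to-end paths emerge only because a routing protocol keeps every router's table consistent.

```python
def forward(packet_dest, forwarding_table, default=None):
    """One router's independent decision: choose the next hop for a
    destination from a local table, preferring the most specific
    (longest) matching prefix. A sketch, not real IP forwarding."""
    best = max((p for p in forwarding_table if packet_dest.startswith(p)),
               key=len, default=None)
    return forwarding_table[best] if best else default

table = {"10.1.": "router-east", "10.1.2.": "router-lab", "10.": "router-core"}
print(forward("10.1.2.7", table))   # router-lab (most specific match)
print(forward("10.9.0.1", table))   # router-core
```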
the practical realities of the technology as it had developed;
but in the evolutionary cycle of network protocol
development, this meant baking those arrangements right
into the protocol.
The emergence of autonomous systems as a consideration
for routing information has an analogy in the emergence of
structure within the interconnection arrangements of the
network itself.
The abstract model of interconnection that the Internet
embodies is one of decentralized and amorphous structure.
Indeed, the very term “network” was originally coined to
convey not the idea of computers connected together, but the
topology of their connection as a loose and variable net-like
mesh, in contrast to the ring, star, or hierarchical
arrangements by which other connection fabrics had been
designed. The basic idea of Internet-style networking is that
computers can be connected together in whatever
arrangement is convenient, and data units (“packets”) will
nonetheless find their way from one point to another as long
as some path (or route) exists from point A to point B. In a
fixed topology, like a ring, star, or hierarchy, the path from
one node to another is fixed and pre-defined. The key
insight of the packet-switching design of the Internet is that
there need be no prior definition of the route from one place
to another; packets could find their own way, adaptively
responding to the dynamics of network topology and traffic
patterns. Amorphous topology is arguably one of the key
characteristics of internet-style technologies.
Early internets relied on fairly simple protocols for
distributing routing information. As the Internet has grown
from a small-scale research project to an international
utility, the complexities of routing, and of the distribution
of routing information, have also grown, and different
protocols have arisen to solve emerging problems. The
major routing information protocol operative in our current
Internet is BGP, the Border Gateway Protocol. The
particular relevance of BGP here is that, like its predecessor
protocol (the Exterior Gateway Protocol or EGP), BGP is
designed not to distribute the information necessary to route
between networks but rather to route between so-called
“autonomous systems.” The essential consideration here is
that corporations, network infrastructure providers,
universities, and so on, have networks that are themselves
both quite complicated and autonomously managed. This in
turn leads to the idea that the unit of network routing should
be one of these systems. This is in many ways a recognition
of a fundamental feature of our Internet, which is that it is
not simply a network of networks, but a network of
institutions and semi-corporate entities, each of which wishes
to maintain control over the organization of and access to
their own networks. The protocols by which routing
information is passed around are protocols that reflect this
particular arrangement, encoding
“autonomy” as much as they encode “connection.” As our
Internet grew, the “network” was no longer an effective unit
at which routing information could be encoded, and
“autonomous system” arose as an alternative that reflected
Early network diagrams from the days of ARPAnet, the
predecessor to the Internet, do indeed display this net-work
character (figure 1). Over time, though, more hierarchical
arrangements have arisen. These hierarchical patterns arise
not least from two sources – geography and economics. The
geographical considerations produce a distinction between
metropolitan and long-haul networks, where metropolitan
networks support relatively dense connectivity within small
regions (for instance, the homes and businesses connected
to the network in a town or neighborhood), while long-haul
networks provide high-speed and high-capacity connections
between urban regions [24]. The different demands upon
these different kinds of connection are best met with
different technologies. This is where the economic
considerations also come into play. Ever since the
decommissioning of the NSFNet backbone in 1995 and the
emergence of the commercial Internet, Internet service
provision has been largely in the hands of commercial
operators. To function as a competitive marketplace,
network provision must involve multiple competing
providers, each of whom in turn specialize in different
aspects of service provision. One result of this
specialization is a hierarchical segmentation of network
service provision.
Netflix paying ISP Comcast for access to its facilities and
networks have largely ignored the fact that the problem to
which Netflix was responding was a breakdown in relations
between Comcast and Cogent, one of the transit carriers
that Netflix pays to transmit its traffic to ISPs [32]. A
dispute arose between Comcast and Cogent concerning
whose responsibility it was to address bandwidth
limitations when Cogent began to send more traffic onto
Comcast’s network than their peering agreement allowed.
In the face of this ongoing dispute, Netflix arranged to
locate its servers with direct access to Comcast’s subscriber
network, eliminating their dependence upon Comcast’s
transit network. While this raised the specter in the popular
press and in technical circles of an end-run around “net
neutrality” arguments, the source of the problem in a
dispute between carriers – and in particular at the boundary
between an ISP and a transit carrier – is in some ways more
Broadly, we can distinguish four different commercial
entities – and hence four different forms of service
provision and four different technological arrangements –
in contemporary Internet service. The four are content
providers, content distribution networks (CDNs), internet
service providers (ISPs), and transit carriers.
While the inspiration for the design of the internet protocols
was to allow the creation of a network of networks, the
emergent structure is one in which not all networks are
equal. It is no surprise, of course, that networks come in
different sizes and different speeds, but the structure of
contemporary networking relies upon a particular structural
arrangement of networks and different levels of network
providers, which in turn is embodied in a series of
commercial and institutional arrangements (such as
“settlement-free peering” [23]). As in other cases, what we
see at work here is an evolutionary convergence in which
the network, as an entity that continues to grow and adapt
(both through new technologies becoming available and
new protocols being designed) does so in ways that
incorporate and subsequently come to reinforce historical
patterns of institutional arrangements.
Content providers are corporations like Netflix, Amazon,
Apple, or others whose business comprises (in whole or in
part) delivering digital material to subscribers and
consumers. Internet Service Providers, by and large, sell
internet services to end-users, either individual subscribers
or businesses. We think of our ISPs as delivering the
content to us, but in truth they are generally responsible
only for the last steps in the network, in our local cities or
neighborhoods. Transit carriers are responsible for getting
data from content providers to the ISPs. While the names of
ISPs like Verizon and Comcast are well known, the names
of transit carriers – such as Level 3, Cogent, or XO – are
much less familiar, despite the central role that they play in
bringing information to any of our devices. Content
providers will typically contract directly with transit
carriers for their content delivery needs. Transit carriers and
ISPs connect their networks through a set of technical and
economic arrangements collectively known as “peering” (as
in the connection between two “peer” networks).
Gillespie [17] has discussed the importance of the so-called
“end-to-end principle” [33] in not just the design of the
Internet platform but also in the debates about the
appropriate function of a network. The end-to-end principle
essentially states that all knowledge about the specific
needs of particular applications should be concentrated at
the “end-points” of a network connection, that is, at the
hosts that are communicating with each other; no other
network components, such as routers, should need to know
any of the details of application function or data structure in
order to operate effectively. This implies too that correctly
routing a packet should not require routers to inspect the
packet contents or transform the packet in any way. So, for
instance, if data is to be transmitted in an encrypted form,
the intermediate nodes do not need to understand the nature
of the encryption in order to transmit the packets. There is a
trade-off in this of course; on the one hand, the network can
be simpler, because every packet will be treated identically,
but on the other, distinctions that we might want to make,
such as between real-time and non-real-time traffic are
unavailable to the network.
Peering arrangements are almost always bilateral, i.e. an
agreement between two parties. They are generally
bidirectional and often cost-free, although they may have
traffic or bandwidth limits beyond which charges are
levied. The term “peer” is a hold-over from the military and
non-profit days preceding the development of the
commercial Internet, and speaks directly to the notion of
“inter-networking” (i.e., connecting together different
networks and transmitting traffic between them). The
“peers” that are now linked by peering arrangements,
though, are not peer academic or research networks but
commercial entities engaging in economic relations. These
arrangements are largely hidden from the view of end users,
but become broadly visible when disputes arise around the
adequacy of peering relationships, corporate responsiveness
to changing conditions upon them, or the responsibilities of
carriage associated with them. For example, recent (early
2014) debates around Internet streaming movie provider
The end-to-end principle was originally formulated as a
simple design principle, but it had, as Gillespie notes, many
consequences that are organizational and political as well as
technical. For instance, the separation between application
semantics and packet transit also essentially creates the
opportunity for a separation between application service
providers (e.g. Netflix or Facebook) and infrastructure
providers (e.g. ISPs like Time Warner or Verizon) by
ensuring that applications and infrastructure can evolve
independently. Arguably, these aspects of the Internet’s
design have been crucial to its successful evolution.
However, the end-to-end principle is under assault as a
practical rubric for design. Two examples of obstacles to
the end-to-end principle are the rise of middleboxes and
assaults on net neutrality.
deploying technologies like so-called “deep packet
inspection” (mechanisms that examine not just the headers
of a packet but its contents in order to decide how it should
be treated) that violate the end-to-end principle.
As discussions such as those of Saltzer et al. [33] make
clear, the end-to-end principle was a foundational and
significant principle in the design of the Internet,
representing not just a specific technical consideration but
also a commitment to an evolutionary path for network
service provision. There is no question then that the Internet
was designed around this principle. However, as the rise of
middleboxes and challenges to network neutrality make
clear, that doesn’t imply that the end-to-end principle is a
design feature of our (contemporary) Internet. More
generally, this case illustrates that technical design
principles are subject to reinterpretation and revocation as
the technology is implemented and evolves (cf. [28]).
“Middlebox” is a generic term for a device that intervenes in
network connections. Unlike routers, which, designed
according to the end-to-end principle, never examine packet
internals or transform transmitted data, middleboxes may
do both of these. Perhaps the most common middleboxes
are gateways that perform Network Address Translation or
NAT. Almost every residential network gateway or WiFi
base station is actually a NAT device. NAT is a mechanism
that partially resolves the problem of scarce network
addresses. Most people buy a residential internet service
that provides them with just a single network address, even
though they have multiple devices using the service (e.g. a
laptop and a smartphone). Network Address Translation
allows both of these devices to use a single network address
simultaneously. To the outside world, the laptop and the
smartphone appear like a single device that is making
multiple simultaneous connections; the NAT device keeps
track of which connections are associated with which device and delivers responses appropriately. NAT violates the end-to-end principle, however, with various consequences. For example, because the individual devices are not technically end-points on the public internet, one cannot make a connection to them directly. Further, network flow control algorithms may be confused by the presence of multiple devices with essentially the same network address. Finally, NAT gateways achieve their effects by rewriting packet addresses before the packets have been delivered to their end-points.
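The translation-table bookkeeping described above can be sketched as follows. This is a deliberately simplified model with invented names; real NAT devices rewrite live packets in the network stack rather than manipulating dictionaries:

```python
# Minimal sketch of Network Address Translation (invented class and port
# numbers; real NAT happens in the kernel's packet-processing path).

class NatGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}      # (private_ip, private_port) -> public_port
        self.reverse = {}    # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing connection to use the single public address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """Demultiplex a response back to the right internal device."""
        return self.reverse[public_port]

nat = NatGateway("203.0.113.7")
laptop = nat.outbound("192.168.1.10", 51000)
phone = nat.outbound("192.168.1.11", 51000)
# Both devices appear to the outside world as the single address 203.0.113.7:
assert laptop[0] == phone[0] == "203.0.113.7"
# ...but responses are delivered back to the correct device:
assert nat.inbound(laptop[1]) == ("192.168.1.10", 51000)
```

The sketch also makes the end-to-end violation visible: the gateway, not the end-point, holds the state on which delivery depends, and no outside host can initiate a connection to the laptop or phone without an entry in that table.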
The emergent structure of our Internet – the market niches
of transit carrier and ISP, the practical solution of CDNs,
the fragmentation produced by middleboxes, the
relationship between mobile carriers, telecommunications
firms, and media content producers, and so on – draws
attention to a simple but important truth of internetworking:
the Internet comprises a lot of wires, and every one of them
is owned by someone. To the extent that those owners are
capitalist enterprises competing in a marketplace, then the
obvious corollary is that, since all the wires can do is carry
traffic from one point to another, the carriage of traffic must
become profit-generating. Traffic carriage over network connections becomes profitable basically either through a volume-based, per-byte charge (a toll for traffic) or through a contractual arrangement that places the facilities of one entity’s network at the temporary disposal of another (a rental arrangement). This system of rents and tolls provides the basic mechanism by which different autonomous systems, each provisioning its own services and managing its own infrastructure, engage in a series of agreements of mutual aid.
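The toll-versus-rental distinction can be illustrated with some back-of-envelope arithmetic. All prices here are invented for the sketch; real transit and peering contracts are far more intricate:

```python
# Illustrative arithmetic only: comparing a per-byte (here, per-gigabyte)
# toll with a flat rental arrangement for traffic carriage.
# All prices are invented for this sketch.

def toll_cost(gigabytes, price_per_gb):
    """Pay-as-you-go carriage: a toll on every unit of traffic."""
    return gigabytes * price_per_gb

def rental_cost(flat_monthly_fee):
    """Capacity rental: a fixed fee regardless of traffic volume."""
    return flat_monthly_fee

price_per_gb = 0.02   # hypothetical toll, in dollars per GB
flat_fee = 2000.0     # hypothetical monthly rental of capacity

# Below the crossover volume the toll is cheaper; above it, the rental wins,
# which is one reason high-volume carriers favor long-term arrangements.
crossover_gb = flat_fee / price_per_gb
assert abs(crossover_gb - 100_000) < 1e-6  # about 100 TB/month
assert toll_cost(50_000, price_per_gb) < rental_cost(flat_fee)
assert toll_cost(150_000, price_per_gb) > rental_cost(flat_fee)
```

The crossover point is what gives the system its feudal flavor: once an entity's volumes push it past the toll regime, it is drawn into standing, rental-like commitments rather than one-off market transactions.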
If this system seems familiar, it is not so much that it encapsulates contemporary market capitalism but more that it is essentially feudal in its configuration. Marx treated feudalism and capitalism as distinct historical periods, each linked to its material means of production: “The hand-mill gives you society with the feudal lord; the steam-mill society with the industrial capitalist” [25]. Recent writers, though, have used the term “neofeudal” to describe the situation in late capitalism in which aspects of public life increasingly become private, gated domains – everything from toll lanes on the freeway and executive lounges at the airport, on the small end, to gated communities and tradeable rights to pollute the environment issued to large corporations at the other (e.g. [34]).
A second assault on the end-to-end principle is the
emergence of threats to network neutrality. Net neutrality is
a term to express the idea that network traffic should be
treated identically no matter what it contains, and no matter
where it originates or terminates. However, many
organizational entities have reasons to want to violate this
principle. For instance, a network provider which is owned
by a media company might want to prioritize the delivery of
its own media traffic to its customers, giving it a higher
priority than other traffic to ensure a smoother service, or an ISP might want to limit the bandwidth available to high-traffic users in order to encourage more equitable access to the network. These kinds of situations often involve treating packets differently according to their content, origin, or destination, in direct violation of the end-to-end principle.
The essential consideration in this neofeudal picture is the erasure of public infrastructure and the erection of a system of tariffs, tolls, and rents that govern the way we navigate a world made up of privatized but encompassing domains, within which market relations do not dominate.
National and regional concerns arise in a variety of ways – in the provision of specific linguistic content [36],
in the regional caching of digital content, in the question of
international distribution rights for digital content (e.g.
which movies can be viewed online in which countries), in
assertions of national sovereignty over information about
citizens (e.g. Vladimir Putin’s public musings that data
about Russian citizens should be stored only on servers in
Russia [22]), in the different regimes that govern
information access (e.g. the 2014 EU directive known
popularly as the “right to be forgotten”), and in local
debates about Internet censorship (from China’s “Great
Firewall” and Singapore’s self-censorship regime to
discussions of nationwide internet filters in Australia and
the UK). The very fact that a thriving business opportunity
exists for commercial Virtual Private Network (VPN)
services that allow users to get online “as if” they were
located in a different country signals the persistent
significance of nation-states and national boundaries in the
experience of our Internet. Similarly, significant debate has
surrounded the role of national interests in Internet
governance [e.g. 27], and the International Telecommunications Union or ITU – a United Nations
organization whose “members” are not technical experts or
corporations but nation-states – remains a significant body
in the development of network technologies and policy. Just
as feudalism reinforced and made all aspects of everyday
life subject to the boundaries of the manor, the shire, and
the nation, so too does our Internet – not necessarily any
Internet, but certainly our Internet – retain a significant
commitment to the relevance of similar geographical,
national, and institutional boundaries.
Beyond the general use of the term “neofeudal” to refer to
the privatization of public goods, let me take the metaphor
of the feudal Internet more seriously for a moment to point
to a couple of significant considerations.
The first is that the operating mechanism of feudalism is not the
market transaction but rather long-standing commitments of
fealty, vassalage, and protection. These are not the
instantaneous mutual engagements of market capitalism but
temporally extended (indeed, indefinite) arrangements with
little or nothing by way of choice or options. Indeed, the
constraints upon feudal relations are geographical as much
as anything else: infrastructural, if you will. One can see,
arguably, some similarities to the way that geographical and
infrastructural constraints lead to a pattern of relations
between internet providers that also relies upon long-term,
“residence”-based partnerships. The ties that bind individuals to their service providers in the semi-monopolistic conditions of the US broadband market, or perhaps even
more pointedly, the links that connect large-scale data
service providers such as Netflix with transit carriers like
Level 3 are not simply conveniently structured as long-term
arrangements, but rather can only operate that way because
of the infrastructure commitments involved (such as the
physical siting of data stores and server farms.) Similarly,
the need for physical interconnection between different
networks makes high-provider-density interconnection
nodes like One Wilshire in downtown Los Angeles (see,
e.g., [12, 40]) into “obligatory passage points” in Callon’s
language [5] – that is, to get interconnection between
networks, you need to be where all the other networks are,
and they all need to be there too. For all that we typically talk about the digital domain as fast-paced and ever-changing, these kinds of arrangements – not simply commitments to infrastructure but commitments to the institutional relationships that infrastructure conditions – are not ones that can change quickly, easily, or cheaply. These relations are more feudal than mercantile.
A second point to be emphasized here is the way that these are simultaneously social and technical arrangements – not social arrangements that give rise to technological designs, nor technological designs that provoke social responses.
Middleboxes, deep packet inspection, and the autonomous
systems of BGP should be regarded as, and analyzed as,
both at once. This requires, then, a view that both takes
specific technological configurations (rather than principles,
imaginaries, and metaphors) seriously as objects of analytic
attention, and that identifies and examines the social,
political, and economic contexts within which these come
to operate. To the extent that a feudal regime of hierarchical relations based on long-term structures of mutual commitment can be invoked as an account of our Internet, it can be done only within the context of a sociotechnical analysis that is both historically specific and symmetric.
A third interesting point that a feudal approach draws attention to is the persistence of pre-existing institutional structures – perhaps most obviously, the nation-state. Although John Perry Barlow’s [2] classic “Declaration of the Independence of Cyberspace” famously argues that the “governments of the industrial world… [have] no sovereignty” in the realm of the digital, and notwithstanding the IETF’s famous motto that “rejects kings [and] presidents” in favor of “rough consensus and running code” [20], the truth is that governments and presidents continue to manifest themselves quite significantly in not just the governance but the fabric of our Internet.
The larger project of which the current exploration forms a
part is an attempt to take seriously the materialities of
information and their consequences. It is critical to this
project that we move beyond accounts simply of information infrastructure to recognize also the relevant
materialities of representation, and their consequences [11,
13]. As Powell [31] has argued in her study of open
hardware projects, patterns of historical evolution torque
the design principles that often provide not only technical
arrangements but also the social imaginaries that are
mobilized in our discussions of technology. To bring this
perspective to contemporary networking, then, means not
simply that we need to think about the pragmatics of
undersea cables [37], satellite downlinks [29], and server
farms [40], but also the way that specific representations of
action and encodings of data are designed to be
manipulated, transmitted, and moved in particular sorts of
ways, with consequences for the resultant experience of the
network as offering particular opportunities for individual
and collective action.
1. Antoniadis, P., Le Grand, B., Satsiou, A., Tassiulas, L.,
Aguiar, R., Barraca, J.P., and Sargento, S. 2008.
Community building over Neighborhood Wireless Mesh
Networks. IEEE Society & Technology, 27 (1): 48-56.
2. Barlow, J. 1996. A Declaration of the Independence of
Cyberspace. Davos, Switzerland. Accessed June 4, 2014.
3. Birrell, A., Levin, R., Needham, R., and Schroeder, M.
1982. Grapevine: An Exercise in Distributed
Computing. Communications of the ACM, 25(4), 260-274.
In other words, the primary concern is to see networked
arrangements as historically particular crystallizations of
not just technical but also institutional, economic, and
political potentialities. To do this, particularly with respect
to the technology that we rather glibly call “the Internet,” I
have suggested that two moves are needed.
4. Boggs, D., Shoch, J., Taft, E., and Metcalfe, R. 1980. Pup: An Internetwork Architecture. IEEE Transactions on Communications, 28(4), 612-624.
5. Callon, M. 1986. Elements of a sociology of translation:
Domestication of the Scallops and the Fishermen of St
Brieuc Bay. In Law (ed.), Power, Action and Belief: A
New Sociology of Knowledge? 196-233. London: Routledge & Kegan Paul.
The first move is from the idea of “the Internet” to that of
“an Internet” – that is, to re-encounter our contemporary
network as not the only possible Internet that could have
been built, but as one of a range of possible networks.
When we consider a number of possible networks, we start
to pay attention to the range of expectations, institutional
arrangements, policies, technical configurations, and other
dimensions that might characterize the space of potentials.
The second move is from “an Internet” to “this Internet” –
that is, to narrow down once more and so grapple with the
historical, geographical, political and social specificities
that constrain and condition the particular network with
which we are engaged at any particular moment in time.
This Internet is not the one that was conceived of by those
like Paul Baran or Donald Davies, designed by those like
Vint Cerf or Bob Taylor, or opened to commercial
operation by the NSF – it has elements of each of those, but
it is a historically specific construction which has
encompassed, transformed, extended, and redirected any of
those particular networks. This Internet is something that
we can grapple with empirically. We are not in much of a
position to make general statements about “the Internet”;
but when we ask questions about “this Internet,” we may
have a starting point for investigation.
6. Comer, D. 1983. The computer science research
network CSNET: a history and status report.
Communications of the ACM, 26(10), 747-753.
7. Crocker, D., Szurkowski, E., and Farber, D. 1979. An
Internetwork Memo Distribution Facility – MMDF.
Proc. ACM Conf. Computer Communication
SIGCOMM, 18-25.
8. Dahlberg, L. 2001. Democracy via cyberspace:
examining the rhetorics and practices of three prominent
camps. New Media & Society, 3, 187–207.
9. Denning, P., Hearn, A., and Kern, W. 1983. History and
overview of CSNET. Proc. Symp. Communications
Architectures & Protocols (SIGCOMM '83), 138-145.
10. Digital Equipment Corporation. 1982. DECnet
DIGITAL Network Architecture (Phase IV): General
Description. Order number AA-N149A-TC. Maynard,
MA: Digital Equipment Corporation.
11. Dourish, P. 2014. NoSQL: The Shifting Materialities of
Databases. Computational Culture.
12. Dourish, P. 2015. Packets, Protocols, and Proximity:
The Materiality of Internet Routing. In Parks, L. and
Starosielski, N. (eds), Signal Traffic: Critical Studies of
Media Infrastructures, 183-204. University of Illinois Press.
Many key ideas here arose in conversation with Jon
Crowcroft, Alison Powell, and Irina Shklovski. I am
indebted to Morgan Ames, Panayotis Antoniadis, Sandra
Braman, Marisa Cohn, Courtney Loder, Lilly Nguyen,
Katie Pine, and Chris Wolf, each of whom offered valuable
critique at key junctures. This work is part of a larger
project on the materialities of information in which Melissa
Mazmanian has been a key partner, and it has been
supported in part by the Intel Science and Technology
Center for Social Computing and by the National Science
Foundation under awards 0917401, 0968616, and 1025761.
13. Dourish, P. and Mazmanian, M. 2013. Media as
Material: Information Representations as Material
Foundations for Organizational Practice. In Carlile,
Nicolini, Langley, and Tsoukas (eds), How Matter
Matters: Objects, Artifacts, and Materiality in
Organization Studies, 92-118. Oxford, UK: Oxford
University Press.
14. Elmer-Dewitt, P. 1993. First Nation in Cyberspace.
Time, 49, Dec 6.
28. Oudshoorn, N. and Pinch, T. 2003. How Users Matter:
The Co-construction of Users and Technology.
Cambridge, MA: MIT Press.
15. Fernaeus, Y. and Sundström, P. 2012. The Material
Move: How Materials Matter in Interaction Design
Research. Proc. ACM Conf. Designing Interactive
Systems, 486-495.
29. Parks, L. 2012. Satellites, Oil, and Footprints: Eutelsat,
Kazsat, and Post-Communist Territories in Central Asia.
In Parks and Schwoch (eds), Down to Earth: Satellite
Technologies, Industries, and Cultures. New Brunswick:
Rutgers University Press.
16. Gaved, M., & Mulholland, P. 2008. Pioneers,
subcultures, and cooperatives: the grassroots
augmentation of urban places. In Aurigi, A. and De
Cindio, F. (eds.), Augmented urban spaces: articulating
the physical and electronic city, 171-184. Aldershot, England: Ashgate.
30. Powell, A. 2011. Metaphors, Models and
Communicative Spaces: Designing local wireless
infrastructure. Canadian Journal of Communication.
31. Powell, A. 2014. The History and Future of Internet
Openness: From ‘Wired’ to ‘Mobile’. In Swiss and
Herman (eds), Materialities and Imaginaries of the Open
Internet. Routledge.
17. Gillespie, T. 2006. Engineering a Principle: ‘End-to-End’ in the Design of the Internet. Social Studies of Science, 36(3), 427-457.
18. Hauben, M. and Hauben, R. 1997. Netizens: On the
History and Impact of Usenet and the Internet.
Wiley/IEEE Computer Press.
32. Rayburn, D. 2014. Here’s How the Comcast & Netflix
Deal is Structured, With Data and Numbers. Streaming
Media Blog, February 27, 2014.
19. Hindman, M. 2008. The Myth of Digital Democracy.
Princeton University Press.
33. Saltzer, J., Reed, D. and Clark, D. 1984. End-to-End
Arguments in System Design. ACM Transactions on
Computer Systems, 2(4), 277-288.
20. Hoffman, P. 2012. The Tao of IETF. Accessed June 4, 2014.
21. Jungnickel, K. 2014. DIY WIFI: Re-imagining
Connectivity. Palgrave Pivot.
34. Shearing, C. 2001. Punishment and the Changing Face
of Governance. Punishment and Society, 3(2), 203-220.
22. Khrennikov, I. and Ustinova, A. 2014. Putin’s Next
Invasion: The Russian Web. Accessed June 4, 2014.
35. Shirky, C. 2008. Here Comes Everyone: The Power of
Organizing Without Organizations. Penguin.
36. Shklovski, I. and Struthers, D. 2010. Of States and
Borders on the Internet: The Role of Domain Name
Extensions in Expressions of Nationalism Online in
Kazakhstan. Policy and Internet, 2(4), 107-129.
23. Laffont, J.-J., Marcus, S., Rey, P., and Tirole, J. 2001. Internet Peering. American Economic Review, 91(2), 287-291.
37. Starosielski, N. 2015. The Undersea Network. Durham,
NC: Duke University Press.
24. Malecki, E. 2002. The Economic Geography of the
Internet’s Infrastructure. Economic Geography, 78(4).
38. Steinberg, D. and Cheshire, S. 2005. Zero Configuration
Networking: The Definitive Guide. Sebastopol, CA:
O’Reilly Media.
25. Marx, K. 1847 (1910). The Poverty of Philosophy. Tr:
Quelch, H. Chicago, IL: Charles H. Kerr & Co.
39. Turner, F. 2006. From Counterculture to Cyberculture:
Stewart Brand, the Whole Earth Network, and the Rise
of Digital Utopianism. Chicago, IL: University of
Chicago Press.
26. Mockapetris, P. 1987. Domain Names – Concepts and
Facilities. RFC 1034, Network Working Group.
27. Mueller, M. 2010. Networks and States: The Global
Politics of Internet Governance. MIT Press.
40. Varnelis, K. 2008. The Infrastructural City: Networked
Ecologies in Los Angeles. Barcelona: Actar.
Interacting with an Inferred World: The Challenge of
Machine Learning for Humane Computer Interaction
Alan F. Blackwell
University of Cambridge
Computer Laboratory
[email protected]
Two changes of scale matter here: the scale and complexity of systems’ internal models, and the economics of network connectivity (the range of data from which models can be derived). One
set of concerns about those changes of scale is now familiar
as an implication of the phrase ‘big data’ [9], but I contend
that the real problem is more insidious - that it results from
changes in the technical underpinnings of artificial
intelligence research, which are in turn changing designers’
conceptions of the human user. The danger is not the
creation of systems that become maliciously intelligent, but
of systems that are designed to be inhumane through
neglect of the individual, social and political consequences
of technical decisions.
Classic theories of user interaction have been framed in
relation to symbolic models of planning and problem
solving, responding in part to the cognitive theories
associated with AI research. However, the behavior of
modern machine-learning systems is determined by
statistical models of the world rather than explicit symbolic
descriptions. Users increasingly interact with the world and
with others in ways that are mediated by such models. This
paper explores the way in which this new generation of
technology raises fresh challenges for the critical evaluation
of interactive systems. It closes with some proposed
measures for the design of inference-based systems that are
more open to humane design and use.
The structure of the paper is as follows. The first section
discusses the nature of the intellectual changes that have
accompanied developments in AI technology. For many
years, researchers in Human-Computer Interaction (HCI)
have been concerned about the consequences of AI, but I
argue that it is time for some fundamental changes in the
questions asked. The second section takes a specific
research project as a focal case study. That project was not
designed as an interactive system, but suggests future
scenarios of interaction that seem particularly disturbing. I
use the case study as a starting point from which several
key concerns of humane interaction can be explored,
introducing questions of human rights, law and political
economics. The final section is more technically oriented,
describing practical tests available to the critical technical
practitioner, followed by a selection of research themes that
might be used to explore more humane interaction modes.
Author Keywords
Machine learning; critical theory
ACM Classification Keywords
H.5.2. User interfaces; I.2.0 Artificial intelligence
Anxiety about artificial intelligence (AI) has pervaded
Western culture for 50 years - or perhaps 500. As I write,
the BBC News headline reads “Microsoft’s Bill Gates insists AI is a threat”, while the Cambridge Centre for Existential
Risk, led by past Royal Society President and Astronomer
Royal Sir Martin Rees, lists AI first among the technologies that may represent ‘direct, extinction-level threats to our species’. The legend of the Golem and other
examples from fiction demonstrate human unease about
machines that act for themselves. Of course, this ability is
shared by devices as humble as the thermostat, the sat-nav,
or predictive text – all somewhat mysterious to their users,
and all possessing some kind of ‘mind’ – that is, an internal
model of the world that determines system behavior.
In recognition of the long view taken at this conference, I
will set out the development of these concerns over a
relatively extended time-frame. The phrase big data has
only recently become popular as a marketing term,
promoting the potential for statistical analysis to model the
world by extracting patterns from data that is becoming
more readily available. The Machine Learning techniques
used to create such models have supplanted earlier AI
technology booms such as symbol-manipulation methods in
the Expert Systems of the 1980s – now remembered
nostalgically as GOFAI, for ‘Good Old-Fashioned AI’.
The goal of this paper is to analyze the ways in which our relationship with such devices and systems is changing, in response to the continued developments arising from Moore’s Law (namely, the scale and complexity of their internal models) and the economics of network connectivity.
Copyright© 2015 is held by the author(s). Publication rights licensed to
Aarhus University and ACM
5th Decennial Aarhus Conference on Critical Alternatives, August 17 – 21, 2015, Aarhus, Denmark
These changing trends are significant to Human-Computer Interaction (HCI) because GOFAI has long had a problematic relationship with HCI – as a kind of quarrelsome sibling. Both fields brought together knowledge from Psychology and Computer Science, to an
extent that in the early days of HCI, it was difficult to
distinguish HCI from AI or Cognitive Science. The
program of work at Xerox PARC that resulted in the
seminal book The Psychology of Human-Computer
Interaction [10] was initiated by a proposal from Allen
Newell filed as Applied Information-Processing Psychology
Project Memo Number 1 [25], while Norman and Draper's
UCSD book (User Centered System Design) [28] punningly
emerged from the Cognitive Science department at UCSD
(UC San Diego). Many early HCI researchers had taken
degrees in AI or Cognitive Science (including the current
author). Even today, popular accounts of HCI research,
referencing both psychology and computing, often result in
the naive assumption that we must be doing AI.
Expert Systems (1980s)  |  Big Data (today)
Table 1. Structural analogy of the new technical context for HCI, with the focus of this paper as a new critical question.
However, the years of the 1980s Expert Systems boom
were also associated with a critical reaction. The ‘strong
AI’ position anticipated computers that could pass the
Turing Test and simulate people. This possibility was
challenged not only by philosophers of mind, but by
researchers concerned with HCI such as Winograd and
Flores [39], Gill [16], and Suchman [35], all of whom noted
the ways in which symbolic problem-solving algorithms
neglected issues that were central to human interaction,
including social context, physical embodiment, and action
in the world. Each of these researchers had distinct
concerns, but these can be summarized in their reception by
AI researchers as problems of situated cognition – the
failure of formal computational models of planning and
action to deal with the complexity of the real world, where
actions seem more often improvised in response to events
and situations [1, 29].
To summarize those changes, and also the central analogy
leading to the concern of this paper, Table 1 outlines the
relationship between the technoscience landscape of
GOFAI that laid the context for HCI in the 1980s, and the
corresponding scientific, technical and business trends to
which we should be responding today.
The critical challenges to symbol-processing GOFAI were
that the symbols were not grounded, the cognition was not
situated, and there was no interaction with social context.
These are not the critical problems of the ML landscape. In
contrast to those earlier critiques, machine learning systems
operate purely on ‘grounded’ data, and their ‘cognition’ is
based wholly on information collected from the real world.
Furthermore, it appears that ML systems are interacting
with their social context, for example through the use of big
data collected from social networks, personal databases,
online business and so on. However, this question of
interacting with social data introduces the most important
critical concern of this paper.
Although the phrase ‘situated cognition’ has become tainted
by debate and controversy, these debates between HCI and
symbol-processing AI have been underlying concerns of
major theoretical contributions in HCI, such as Dourish's
discussions of context and embodiment [14, 15], and
concepts of interaction offered as design resources in a
critical technical practice [2, 20].
This concern can be framed in terms of the Turing Test. In
that test, judges are asked to distinguish between a human
and a computer. In the classic formulation, we wish to see if
the computer will appear sufficiently human-like that the
two cannot be distinguished. However, I am more
concerned with the reverse scenario. What if the human and
computer cannot be distinguished because the human has
become too much like a computer?
The New Critical Landscape
The goal of this paper is to explore the ways in which this
continued central tension in HCI is now changing in
fundamental ways, because of the technical differences
between the methods of GOFAI, and those that are now
predominant in Machine Learning (ML). Where symbol-processing approaches failed to take account of the rich
information available in the world, ML algorithms have
access to huge quantities of such information. This has
resulted in enormous changes, by comparison to the
technical and commercial context in which the field of HCI
was formed.
The original version of the Turing Test fails for all the
reasons identified in the 1980s HCI critiques of AI. The
new version ‘passes’ the Turing Test, but in a way that
demands a new critique. My concern is that reducing
humans to acting as data sources is fundamentally
inhumane. A serious additional concern is that technical
optimists appear to be blind to this problem – perhaps
because of their excitement as they finally seem to be
approaching the scientific ‘goal’ of the Turing Test. HCI
researchers and other critical practitioners should be alert to
these technical developments, and be ready to draw
attention to their consequences.
This is particularly important because, whereas Expert
Systems attracted much technical excitement in the 1980s,
they were not widely deployed – at least, not to the extent
that ordinary people would interact with them every day. In
contrast, the statistical techniques of ML are now widely
deployed in interactive commercial products. Everyday
examples include the Pagerank algorithm of Google,
Microsoft's Kinect motion-tracking interface, many
different mobile predictive text entry systems, Amazon's
recommendation engine and so on. If there is a critical
problem, it is not simply an academic or philosophical
concern about a speculative future of intelligent machines.
On the contrary, I believe this may be a fundamental
problem for contemporary society.
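The Pagerank algorithm mentioned above can be caricatured by a toy power iteration. The link graph, parameters, and scale are drastically simplified; this is a sketch of the idea, not Google's implementation:

```python
# Toy power-iteration sketch of the PageRank idea; the production algorithm
# operates at web scale with many refinements not shown here.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
# "b" receives links from both "a" and "c", so it ranks highest.
assert max(ranks, key=ranks.get) == "b"
```

Even this toy version exhibits the property at issue in the text: the output is a statistical summary of observed structure, with no symbolic account of *why* any particular page deserves its rank.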
More subtle still is interaction mediated by a model that purports to explain the user, yet cannot explain itself.
This second concern has been elaborated in debate between
Norvig and Chomsky [30]. Chomsky questions the value of
models that contain no symbols for us to inspect, while
Norvig observes that these models clearly work, and have
supplanted symbolic models as a result. As with Breiman’s
Two Cultures essay, this is partly an appeal to technological
pragmatism over epistemological reservations – Breiman
himself says “the goal is not interpretability, but accurate
information” [7, p 210]. However, if this is the case, it is
reasonable to ask whether the most pragmatic approach will
necessarily be the most humane.
These questions are located in a complex nexus of technical
and psychological considerations. Before exploring them
further, I illustrate that context with a specific case study of
research within this nexus. In the following discussion, the
case study also serves as a useful (and, by intention, slightly
distanced from HCI) concrete example.
To summarise this section, it has drawn comparisons
between the technical trends that inspired the HCI critiques
of the 1980s, and the technical trends of today. Much theory
in contemporary HCI originally emerged from those 1980s
critiques of symbol-processing AI. But whereas the core
problem of symbol-processing AI was its lack of
connection to context – the problem of situated cognition –
the core problem of machine learning is the way in which it
reduces the contextualised human to a machine-like source
of interaction data. Rather than cognition that is not situated, our new concern should be interaction that is not humane.
My case study comes from the work of Jack Gallant’s
research group, described as ‘reconstructing visual
experiences from brain activity’ [27]. Generating wide
public interest from the disturbing suggestion that his team
had created a mind-reading machine, this ML system
retrieved film clips from a database according to the
similarity of fMRI readings taken while people watched the
films. Press reports and publications were accompanied by
images such as that in Fig. 1, showing (on the left) a still
from the film that had been shown to an experimental
subject, together with (on the right) a ‘mind-reading’ result
that was often interpreted as the rendering of an image
captured from within the brain of the subject.
The key question is whether ML systems, which create their
own implicit internal model of the world through inference
rather than an explicit symbolic model, carry new
consequences for those interacting with them. There are
some existing concerns regarding the epistemological status
of such systems.
One is summarised by Breiman [7] as resulting from ‘two
cultures’ of statistical modeling. Breiman contrasts the
traditional practice of a statistician building and testing a
mathematical model of the world, to ML techniques in
which the model is inferred directly from data. He observes
that “nature produces data in a black box whose insides are
complex, mysterious, and, at least, partly unknowable” [7,
p.205]. As a result, rather than following Occam’s razor, he
believes that “the models that best emulate nature in terms
of predictive accuracy are also the most complex and
inscrutable” [7, p.209].
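Breiman's contrast can be made concrete in a toy sketch (the data and task here are entirely invented, not drawn from [7]): one model is an explicit rule a statistician could state and defend, the other is inferred directly from data and offers nothing to inspect.

```python
# Toy illustration of Breiman's 'two cultures' of statistical modeling.
# Labeled observations: (hours_of_daylight, temperature) -> season label.
data = [
    ((8.0, 2.0), "winter"), ((9.0, 4.0), "winter"),
    ((16.0, 22.0), "summer"), ((15.0, 25.0), "summer"),
]

def symbolic_model(x):
    """Culture 1: an explicit, inspectable rule written by a statistician."""
    daylight, temp = x
    return "summer" if daylight > 12 else "winter"

def inferred_model(x):
    """Culture 2: behavior inferred from data (1-nearest neighbour).
    No human-readable rule exists; the 'model' is the data itself."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(data, key=lambda item: dist(item[0], x))[1]

query = (10.0, 6.0)
print(symbolic_model(query))  # the rule can be read and explained
print(inferred_model(query))  # the prediction can only be traced to data
```

Both models agree on easy cases; the difference lies in what each can tell us about why.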
Breiman’s analysis, if correct (there are objectors), suggests
that predictive accuracy is more important than
interpretability. Systems built around such models therefore
predict the state of the world and the people in the world,
including the actions of the user, without offering an
explicit symbolic explanation. One consequence of
collecting data without explaining why is to trigger the
now-familiar concerns of surveillance and privacy
associated with ‘big data’. More subtle is the consequence
of interacting with the world through the mediation of a statistically inferred model.
Figure 1. A visual image reconstructed from brain
activity, as reported in [27] (image reproduced with
permission of the authors, extracted from, and
reused under CCA license). The left side of the figure
shows a film scene presented to an experimental subject.
The right side is a reconstructed image informed by EEG
measurements from the subject’s brain.
The visual rhetoric in these scientific illustrations is
compelling. The captured mind-image on the right of Fig. 1
resembles an oil painting by Goya or by Turner, an artistic
rendering of the deep unconscious. The image on the left is
a scene from a Steve Martin film – reminding us of
Martin’s classic riffs on disembodied brains (The Man with
Two Brains, 1983) and mind-body exchange (All of Me,
1984). But on closer inspection, the resemblance between
the right and left halves is rather slim. Why does the
reconstructed image have dark hair rather than white? Why
is it not wearing a white shirt? In fact, all we can say for
sure is that there appears to be a human figure on the right
of the frame. We are sure because we see its face – as
plainly as we see a face in the moon!
There are essential considerations here for the critical
assessment of ML models. I will not dwell further on the
visual rhetoric of scientific discourse, despite the fact that
the case raises fascinating questions in that area too.
Of course, humans (in the Northern hemisphere) observe a
face in the moon because human brains, as with the brains
of all social mammals, have evolved to detect faces well. If
one were to record EEG signals while humans observe a
wide range of different images, we might be confident that
images of faces would result in some of the most distinctive
signals. We also know that each hemisphere of the brain
responds to only one half of the visual field, so the simplest
possible inference is to determine the side of the head on
which a signal has been detected.
The central part of this paper explores the resulting
questions. The following sections draw out these questions
in turn, starting with those that arise directly from the
domain of film – art works and their readers – but then
moving on to the economic and psychological networks in
which artworks are embedded.
From Case Study to Critical Questions
This case study has illustrated typical techniques from ML
and neuroscience that might (if they worked in the manner
implied) provide a basis for new models of user interaction
that would be able to predict the user’s needs. A ‘natural’
brain-computer interface of this kind has not yet been
proposed – though it may come soon! However, the case
study can also be used as a starting point for critical questions.
GOFAI systems maintained a clear distinction between data
(symbolic representations of the world), and algorithms that
processed this data. In the modern digital economy, business
and legal frameworks still maintain a clear distinction
between (data-like) content and (algorithm-like) services.
However, the behavior of ML systems is derived from data,
through the construction of the statistical model.
So, this is a point at which one might ask more closely how
this image of the unconscious was painted. Might it be
possible to construct such an image, if the ML model were
so crude as to encode only the presence of a face on one
side or other of the visual field? It seems, from close
reading of the research described, that the right hand image
is a blurred average of the 100 film library scenes most
closely fitting the observed EEG signal. That is, it is a
blurred combination of 100 film scenes in which a face
appears in the right hand side of the frame. The other visual
features, given this starting point, are unexceptional. The
location of the face within the right hand side follows a
completely conventional film frame composition. The face
is located by the rule of thirds, and the gaze of the
figure is directed inward to the center of the frame. In fact,
this could be a scene from any film at all.
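The retrieval-and-averaging procedure described above can be sketched in a few lines, with invented numeric 'scenes' and 'signals' standing in for film clips and brain readings:

```python
# Sketch of reconstruction by retrieval: find the library scenes whose
# recorded signals best match an observed reading, then average them
# pixel by pixel. All data here is random and illustrative only.
import random
random.seed(0)

SIZE = 8  # 8-pixel 'scenes' keep the sketch readable

# A library of scenes, each paired with the signal it evoked (here random).
library = [([random.random() for _ in range(SIZE)],
            [random.random() for _ in range(4)]) for _ in range(500)]

def reconstruct(observed_signal, k=100):
    """Average the k scenes whose signals are closest to the observed one.
    The result is a blur of existing footage, not a captured mental image."""
    def dist(sig):
        return sum((a - b) ** 2 for a, b in zip(sig, observed_signal))
    best = sorted(library, key=lambda item: dist(item[1]))[:k]
    return [sum(scene[i] for scene, _ in best) / k for i in range(SIZE)]

image = reconstruct([0.5, 0.5, 0.5, 0.5])
print(["%.2f" % p for p in image])  # every pixel is pulled toward the mean
```

The averaging step is where the painterly blur comes from: the output can only ever be a composite of what was already in the library.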
The image in Fig. 1 appeared meaningful because it was
derived from actual movie scenes. Similarly, ML-based
interactive technologies such as the Microsoft Kinect game
controller are able to recognize human actions because their
models have been derived from a large database of human
actions [33]. In one sense, these statistical models can be
regarded as an index of the content that created them,
allowing the system to look up an interpretation that was
originally created by a human author.
This close relationship between index and authorship has
been a focus of critical inquiry in the past, for example in
Borges’ Library of Babel [8], which contained every
possible book in the universe that could be written in an
alphabet of 25 characters. If there were a catalogue to this
library, its index would have to contain the full text of the
book itself, to distinguish each book from the volume that is
identical apart from a single character. Borges’ Library was
a thought experiment, but we do now have ML algorithms
that index (nearly) every book in the world, meaning that
their models incorporate a significant proportion of those
books’ content. However, the algorithm collecting an index
does not care whether the data is copyrighted – despite the
fact that the copyrighted content is in some way mashed-up
into the structure of the resulting model.
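The cataloguing problem can be sketched directly (the shelf contents are invented): to distinguish a volume from its near-duplicate, an index key must extend at least to the position where the two differ.

```python
# Sketch of the index problem in Borges' Library: two books identical
# apart from a single final character can only be told apart by an
# index entry that reproduces nearly the whole text.

def distinguishing_prefix(book, shelf):
    """Shortest prefix of `book` that no other volume on the shelf shares."""
    others = [b for b in shelf if b != book]
    n = 1
    while any(b.startswith(book[:n]) for b in others):
        n += 1
    return book[:n]

text = "in the beginning was the word"
variant = text[:-1] + "s"  # identical apart from the final character
shelf = [text, variant, "axaxaxas mlo"]

key = distinguishing_prefix(text, shelf)
print(len(key), "of", len(text), "characters needed")
```

For the near-duplicate pair, the 'index entry' is the full text; for an unrelated volume, a single character suffices.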
The rhetorical skill of the researchers in constructing this
striking image is to have chosen just the right number of
film scenes to average. A composite of only one or two
scenes would retain sufficiently clear visual features to
make it obvious if we were not looking at the right film. On
the other hand, an average of 10,000 scenes would be no
more than a brown smear, losing the powerful hints of
contour and shadow that remind us of dream paintings.
Even the level of detail used in the averaging process seems
carefully chosen – the unit pixels, the hints of vertical lines
suggesting scenery – are the size and shape of brush strokes
to further evoke painterly rendering.
In this case study, we can admire the skill of the researchers
in presenting such a compelling story. However, this
example also provides an opportunity to discuss some broader issues.
To consider the implications, imagine a dynamic audio
filter trained on a single song – perhaps John Lennon's
Imagine. If allowed to process sufficient random noise, this
filter could select enough sounds to reproduce the song. If
applied to any other song, it selects only those parts of the
song that resemble Imagine. The model is not far from
being a copy. But what if we trained the model on two
Lennon songs, or on the whole Beatles repertoire, to the
extent that it selects or simulates that repertoire? This is not
dissimilar to the ‘reconstructed’ film scene described in the
experiment above, although we are closer to technical
feasibility with audio than with film (e.g. [12]).
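The thought experiment can be sketched as code, with named pitches standing in for audio components (the pitch sets are invented, not derived from any actual song):

```python
# Sketch of a 'filter' trained on one song: the model is nothing more
# than the set of components heard in training, and applying it keeps
# only what resembles the training material.

def train_filter(song):
    return set(song)  # the 'model' is just the pitches of the song

def apply_filter(model, audio):
    """Keep only the components that resemble the training song."""
    return [p for p in audio if p in model]

imagine = ["C", "E", "G", "B", "A", "F"]        # hypothetical pitch set
model = train_filter(imagine)

noise = ["C", "X", "G", "Q", "E", "Z", "B"]     # random input
other_song = ["D", "F#", "A", "C", "E"]

print(apply_filter(model, noise))       # selects fragments of the original
print(apply_filter(model, other_song))  # keeps only what resembles it
```

The point of the sketch is how little separates this 'process' from a copy: the model is constituted entirely by its training content.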
If ML models are interpreted (and
applied) as processes, this is a challenge for attribution.
Although they resemble a purely algorithmic construct, they
are also a kind of intertextual content derived from the data
used to train them (just as a John Lennon ‘filter’ represents
all possible John Lennon songs, even including those not
written, but predicted from his body of work).
This situation is exacerbated by the fact that, when we
interact with computer systems, we often over-attribute
intelligence to the system, failing to recognize the fact that
apparent intelligence, in a system that is in some way faulty
or ambiguous, may arise from our own judgments. Collins
and Kusch describe this dynamic as ‘Repair, Attribution
and all That’ (RAT) [11]. RAT explains the enthusiastic
popular reception of the mind-reading demonstration in
Figure 1. Although clearly a poor reproduction of the
original scene, the human observer ‘repairs’ this,
interpreting the computer output as an accurate portrayal of
a dream, and attributing their own subsequent interpretation
of that ambiguous image as resulting from the apparent
intelligence of the system that produced it.
The ethics of copyright are a common enough topic in
critiques of digital media. However, rather than purely
considering the professional arts (loudly defended by
copyright holders in the media industries), we should also
consider the ways in which every digital citizen is an
‘author’ of their own identity, because of the ways that
persistent traces of our experiences and interactions with
others are now extending beyond our own memories into
digital representations. The human self is a narrative of
experiences and relations with others, and ownership of this
narrative is a critical condition of being human. I return to
this issue later.
In symbolic systems, where the system behavior has been
specified and encoded directly by a human designer, the
user can apply a semiotic reading in which the user
interface acts as the ‘designer’s deputy’ [34]. In contrast, if
the system behavior is encoded in a statistical model,
derived by inference from the work of many authors, and
presented to users in a context where any faults are repaired
and attributed to the intelligence of the system itself, then
this humane foundation of the semiotic system is undermined.
The logic of digital copyright would assert that the content
of the original material captured in an ML model or index
should still be traced to the authors – the scale of the
appropriation is not the key legal point. However, indexing
of the Internet is still heavily influenced by rather utopian
perspectives derived from champions of the public domain
in which it is often asserted that ‘information wants to be
free’ (attributed to Stewart Brand). But if we acknowledge
that ‘information’ represents the work and property of
individuals, then ‘freedom’ might simply mean the freedom
to appropriate that work by those wishing to encode it in the
form of ML models, especially if there are no copyright
holders leaping to defend their license revenue.
Models derived from big data defy symbolic representation,
because the scale and complexity of the data processing
algorithms make it very difficult (or even impossible) to
‘run them backward’ and recover the necessary links for
human attribution. We can choose to treat such dynamics as
a necessary sacrifice for the public good of the creative
commons, supporting a massive global project of
postmodern collaging. However, if we are accelerating the
death of the author through technical means, then who will
get paid?
Attribution is already problematic in digital media, as a
result of postmodern collaging practices – remixes,
mashups and so on. Experts in forensic musicology report
that court decisions are contingent on the availability of
uncontestable symbolic representations [3]. Lyrics are easy
to defend. Melodies likewise, so long as they can be written
out as notes on a scale. However, reprocessed samples are
more ambiguous, and distinctive digital production
techniques almost impossible to verify without separate
evidence of provenance. The law is a symbolic system, and
it works well only with symbolic data.
Attribution of authorship is considered an inalienable
human right [37]. However, as noted by Scherzinger [32],
the digital public domain pays lip service to attribution of
authorship, while actually providing unfettered access to
commercial interests. Global indexing and data-centric
service models represent a new era of enclosures, echoing
the enclosure of common grazing land by the British
aristocracy. In particular, Scherzinger notes that there is a
tendency for the information assets of the global South to
be incorporated into the ‘public domain’ administered from
the North, while the revenue in services derived from those
assets continues to flow from the South to the North.
The commercial logic applied in digital music licensing (in
particular, within sample-based genres) is a logic of
contamination – the inclusion of any data fragment results
in a derived work, meaning that attribution is required and
license fees payable. In contrast, the application of
processes and algorithms (whether Autotune, or a fuzzbox)
does not imply that the inventor of the fuzzbox owns the song recorded with it.
I have already discussed the way in which the separation of
data and algorithms in the systems of the GOFAI era has
become far less distinct in the statistical models that
underlie the behavior of ML devices such as Kinect. There
is a commercial analog to this technical change, in the
relationship between content and services. In the
contemporary digital economy, we retain a notional
separation between content and services. However, in
practice, the corporations responsible for digital
infrastructure ‘ecosystems’ find it useful to blur those
boundaries. Apps for the iPad and iPhone often prevent the
user from inspecting stored data other than through the
filters of the application itself. The market models of
interactive digital products (such as the AppStore) are
gradually integrated with those of digitized content delivery
(such as iTunes), and the company deploys proprietary
services such as cloud storage, user account authentication
and so on, on top of these. The ecosystem players – Apple,
Google, Facebook and Microsoft – are all attempting to
establish their control through a combination of storage,
behavior and authentication services that are starting to
rely on indexed models of other people’s data.
The previous section has offered a relatively traditional
Marxian analysis of what we might consider to be humane,
in relation to economic exploitation of many people by a
few. However, the original case study also draws attention
to the ways in which ‘mind-reading’ technologies hold
implications for the psychological identity of the user. This
section considers implications of Machine Learning
technologies for the self, as a psychological rather than
purely legal and economic construct.
Sense of Agency
The first of these is the perception of agency – the sense of
one’s self as an undivided and persisting entity, in control
of one’s own actions. This is a key element of mental
health, and is often disrupted in those suffering from
delusional syndromes such as schizophrenia. In previous
research, we have shown that diagnostic devices used to
measure reduced sense of agency in psychiatric illnesses
can also be used to measure the user’s sense of agency
when interacting with ML systems [13].
The behavior of many ML-based systems is determined, not
only by models of the external world, but by statistical user
models that predict the user’s own actions. Those
predictions may be based on data collected from other
people (as in the case of Kinect), or one’s own past habits.
However, in all of these cases, one frequent outcome is that
the resulting system behavior becomes perversely more
difficult for the user to predict. Rather than an explicit rule
formulated and symbolically expressed by a designer, the
behavior is encoded in the parameters of a statistical model
– the kind of model that Breiman [7] describes as
“complex, mysterious, and, at least, partly unknowable”.
The result is frequently useful, but can also be surprising,
confusing or irritating – as often noted of auto-correct [22].
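The dynamic can be sketched with a toy auto-correct (the vocabulary is invented, and nearest-match-by-edit-distance is my assumed correction rule, not a description of any real product): behavior learned from 'everyone' overrides input that is distinctive but legitimate.

```python
# Minimal sketch of why inferred behavior surprises the user: a toy
# auto-correct replaces any unseen word with the nearest word in its
# training vocabulary, repairing typos and novelties alike.

VOCAB = {"duck", "luck", "ducking", "definitely"}  # learned from the crowd

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def autocorrect(word):
    if word in VOCAB:
        return word
    return min(VOCAB, key=lambda v: edit_distance(v, word))

print(autocorrect("duck"))   # a known word passes through
print(autocorrect("duyck"))  # a typo is repaired, as intended
print(autocorrect("ducke"))  # but a deliberate novel word is also 'repaired'
```

Nothing in the system distinguishes the second case from the third; that judgment existed only in the user's head.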
This is a more serious problem than the commonplace
observation that “if you are not paying for it, you’re not the
customer; you're the product being sold” (e.g. [19]). While
national and international legal frameworks focus on
outdated models of content protection (through copyright
and licensing) and service provision (through free trade and
tariff agreements), the primary mechanism of control over
users comes through statistical index models that are not
currently inspected or regulated. The regulated revenue
models of whole industries are being disrupted by digital
alternatives that bypass retail distribution with proprietary
indexing and access (e.g. Spotify, YouTube and others).
The underlying models are transnational, with the
corporations increasingly resembling the zaibatsu of
William Gibson’s cyberspace, rendering national
jurisdictions irrelevant in comparison with the internal
structure of ML models.
Whether or not the result is immediately useful, the work of
Coyle et al [13] shows that these ML-based predictions
reduce the user’s sense of agency in his or her own actions.
Furthermore, some classes of user may be excluded from
opportunities to control the system, because the prior data
from which the trained model has been constructed does not
take account of their own situation. One such example is
those Kinect users whose body shapes are not consistent
with the training data of the system, for example because
they have absent limbs.
This is a concern that is likely to become far more pressing
after implementation of the proposed Transatlantic Trade
and Investment Partnership, which would allow
corporations to sue a country whose laws obstruct their
interests. As a result, the significance of this analysis
extends beyond the purely commercial, to the political
foundations of our economic system [24]. When we replace
content with services that obscure individual authorship
through the construction of statistical models, Carlo
Vercellone [38] observes that we are developing a new
form of cognitive capitalism, in which access to the
proprietary infrastructure encoded in models and indexes
operates as a rent – extracting surplus value from the labor
of the majority. The owner of the statistical model used to
index and rank content, as in the case of Google’s
PageRank, thus becomes a rentier, rather than a proprietor,
of digital content [31].
These cases draw attention to the ways in which, when
interacting with an inferred model, the user is effectively
submitting to a comparison between their own actions and
those of other people from which the model has been
derived. In many such comparisons, the statistical effect
will be a regression toward the mean – the distinctive
characteristics of the individual will be adjusted or
corrected toward the expectations encoded in the model.
A well-known early example of this challenge was
expressed in the Wall Street Journal through advice on ‘If
TiVo thinks you are gay, here's how to set it straight’ [40].
The choice of this specific theme drew attention to the
implications of surveilling sexual preference, thereby
conflating the concerns of privacy with those of control. In
later reports, and in a classic meme formulation, the
problem has been simplified purely as a matter of
surveillance: ‘my TiVo thinks I’m gay’. However the TiVo
developers at the time experimented with a ‘Teach TiVo’
function that could be used to modify the inferred model.
Although briefly released in a beta version, the company
eventually focused on refining the algorithms, rather than
offering explicit control to users¹.
Construction of Identity
This is the second of the ‘psychological’ problems that I
suggest arise when interacting with an inferred world. The
processes through which we construct our own individual
identity depend on the perception that we are distinct
individuals rather than interchangeable members of a
homogeneous society. Processes of individuation, in which
we separate ourselves from family and from others, are
central to the achievement of maturity and personhood.
These processes can be damaged through institutional and
systemic constraints, leading to a widespread concern for
self-determination as a fundamental human right.
If the construction of one’s personal identity is achieved
substantially through narratives in digital media – through
Facebook profiles, photo streams, blogs, Twitter feeds and
so on – then the behavior of these systems becomes a key
component of self-determination. To some extent, digital
media users are highly aware of the need to ‘curate their
lives’ in the presentation of such content. However, they are
less able to control the ML models that are inferred from
their online behavior. At a trivial level, these models may
record unintentional or private actions that the user would
prefer to disown. At a more profound level, regressions to
the mean result in personal identities that are trivialized
(cute animals and saccharine sunsets), or that pander to a
lowest common denominator of mass-market segmentation
(prurience and porn).
The problem of how a user can express the behavior they
want extends also to the legal relationship between users
and service providers. As already noted in the earlier
discussion of authorship, the structure of legal frameworks
relies on symbolic representation rather than statistical
patterns. Service providers now offer end-user license
agreements that describe the procedures for collecting data
rather than the implications of the model that will be trained
from that data. Observation of user behavior, and data-mining from content, have become completely routine, to
the extent that it is hardly possible for users to opt out of
this functionality if they want the product to work.
At present, these universal license agreements do not help
the user to understand what benefits they (or others) will
obtain from the resulting inferred models [18]. They also
provide no option for customizing or restricting either the
model or the contract – the user may opt in or out, but not
select any other trade-off between self-determination and
convenience. This appears to be a joint failure of technical
and legal systems, failing to recognize the interdependence
of the two that arises from interacting with the world
through an inferred model.
The previous sections have referred to examples of
interactive software, although the original case study was
not itself proposed as an interactive system. This section
considers two specific issues that arise when a user needs to
operate products that have been built using ML techniques.
The anxieties regarding loss of control in inference-based
interfaces are already well-established: small-scale
behaviors such as Microsoft’s Clippy, Amazon
recommendations, or predictive text errors become the
object of popular ridicule, tinged only slightly with the
anxiety of disempowerment.
The ‘mind-reading’ case study that provided my initial
example has the character of a parlor trick, albeit one that I
have used to introduce some genuine ethical and legal
issues. To reiterate the real concern for interactive systems:
when an ML system sees the world we see, and also
observes our responses, the expectation is that it will make
predictions about us, and about what we want and need.
This inferred model mediates our interaction with the world
and with others.
However, case studies in which the user’s own needs are
modeled point to the issues that arise when more complex
or larger-scale system behaviors are determined through
ML techniques. Since the system behavior is derived from
data, if the user wishes to control or modify that behavior,
they must do so at one remove, by modifying the data rather
than the model. As argued in [4], if the user wishes to
change the behavior of the system more permanently, then
ML-based techniques can inadvertently make the task more
challenging. Rather than simply learning the conventions of
a scripting or policy language in which to express the
desired behavior, the user must second-guess an inference
algorithm, trying to select the right combination of data to
produce a model corresponding to their needs.
I have drawn attention to numerous ways in which the shift
from direct symbolic expression to inferred statistical
models of the world has changed the nature of the
relationship between interactive digital systems and the
people that use them. These include questions about the
source of the data in those statistical models, attributions of
authorship and agency, the political consequences of shifts
to non-symbolic expression, and the effects on the identity
of the individual as constructing their own self and
controlling their digital lives.

¹ Personal communication, Jennifer Rode, 20 Feb 2015.
I have suggested that these shifts result in systems that are
less humane, because of the ways in which the relationship
between the system behavior and human activities has
become obscured through the scale and complexity of the
modeling process. ML models have become less
accountable, open to exploitation by commercial actors,
closed to legal inspection, and resistant to the directly
expressed desires of the user.
Figure 2. Two images synthesized such that they will
result in a classification judgment of ‘school bus’ (from
[26], reproduced with permission of the authors).
A recent publication illustrated this
phenomenon with images such as Fig. 2, and the title ‘Deep
neural networks are easily fooled: High confidence
predictions for unrecognizable images’ [26].
However, the goal of this paper has been to offer a
technically-informed critical commentary, as a supplement
to the many existing critiques that address more traditional
dystopian anxieties of control and surveillance. Rather than
simply sounding further warning alarms, or lamenting the
loss of a golden age of symbolic transparency, a
technically-informed critique should be able to draw
attention to opportunities for technical adjustment, caution,
correction and allowance.
Although neither of the images in Fig. 2 is recognizable as a
school bus, the image on the right offers a more visually
salient account of those features in the training data set
which have been encoded in the training process. They
allow a human viewer to recall, for example, that school
buses in the USA are painted in a characteristic color, and
furthermore to speculate that this ML classifier might not
be effective in other countries where those colors are not
used. In contrast, the image on the left in Fig. 2 does not
provide a basis for this kind of assessment (the original
paper includes many similar noise-based images that
represent different categories, while being indistinguishable
to a human viewer).
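For the simplest possible classifier, a linear score, the effect shown in Fig. 2 can be reproduced in a few lines (the weights and images below are invented): the input that maximizes the class score is dictated entirely by the weights, and need not resemble any natural example.

```python
# Sketch of why an unrecognizable image can receive a high-confidence
# classification: for a linear scorer, pushing every input dimension to
# its extreme in the direction of its weight maximizes the score.

weights = [0.9, -0.4, 0.1, -0.8, 0.5, -0.2]  # one class, six 'pixels'

def score(image):
    return sum(w * p for w, p in zip(weights, image))

# An input built from the weights alone: noise to a human viewer.
adversarial = [1.0 if w > 0 else 0.0 for w in weights]

natural = [0.8, 0.1, 0.3, 0.0, 0.7, 0.1]  # a plausible in-class example
print(score(adversarial), ">", score(natural))
```

Deep networks are not linear, but the same logic of score maximization underlies the synthesized images in [26].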
In the following sections, technical considerations are
therefore presented as ways in which the structure of the
inferred model might be opened up – more open to
understanding by users and to critical assessment by
commentators. Where symbolic systems offered direct
representations of knowledge, ML systems must be
inspected in terms of their statistical structure.
Contrasts of this kind point toward the design opportunity
for more humane ML systems that reveal the nature of the
features from which judgments have been derived. It is
often the case that such features do not satisfy the symbolic
expectations underlying representational user interfaces.
The semiotic structure of interaction with inferred worlds
can only be well-designed if feature encodings are
integrated into that structure.
A classic student exercise in the GOFAI days was to ask
how a machine vision system might recognize a chair. If the
initial answer described four legs, with a seat and a back,
the tutor would ask what about a stool, or one with three
legs, or a bean bag²? Eventually the student might offer a
functional description – something a person is sitting on.
But then what about a person sitting on a table, or resting
on a bicycle? The discussion might end with
Wittgensteinian reflections on language, but the key
insights are a) that judgments are made in relation to sets of
features, and b) that accountability for a judgment is
achieved by reference to those features.
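The classroom exercise translates directly into code (the feature names are my own invention): a symbolic recognizer whose features are explicit, so that every judgment, and every counterexample, can be traced to a named feature.

```python
# A GOFAI-style chair recognizer: the rule is wrong in familiar ways,
# but each failure is accountable to a specific, inspectable feature.

def is_chair(obj):
    return (obj.get("legs", 0) >= 3
            and obj.get("has_seat", False)
            and obj.get("has_back", False))

chair = {"legs": 4, "has_seat": True, "has_back": True}
stool = {"legs": 3, "has_seat": True, "has_back": False}
beanbag = {"legs": 0, "has_seat": True, "has_back": False}

print(is_chair(chair))    # True: and the rule can say exactly why
print(is_chair(stool))    # False: the missing back is the failed feature
print(is_chair(beanbag))  # False, though people sit on it all the same
```

The rule is a poor classifier, but a fully accountable one, which is precisely the property that inferred feature sets tend to lose.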
The models underlying many ML-based systems have been
constructed from sets of training data, where each example
in the data set is labeled according to a so-called ‘ground
truth,’ often an expert judgment (this is characteristic of
supervised learning algorithms – I discuss unsupervised
learning below). The inferred model, however complex it
might be, is essentially a summary of those expert
judgments. However, it is expensive to label large data sets,
so the availability of suitable labels often determines the
models that can be obtained. For example, in the case of
natural language processing systems, the ML model that is
most often used to assign a part-of-speech (POS) to
individual words (POS-tagging) is based on a training set
from the Wall Street Journal. When applied to the speech
of people who do not talk like the WSJ, the accuracy of this model will be reduced.
In the case of statistically inferred models, the features can
often be far more surprising. One of the greatest technical
changes in the transition from GOFAI to ML systems has
been the discovery that many very small features are often a
reliable basis for inferred classification models (e.g. [21]).
However, the result is that it becomes difficult to account
for decisions in a manner recognizable from human
perception.

² This example is taken from a class taught by Peter Andreae at
Victoria University of Wellington in 1986.
But it is unlikely that expert judges
would invest expensive time labeling (for example) the
street patois of a Brazilian favela, even if there were a
standard textbook description of its linguistic syntax.
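The domain-shift problem can be sketched with a miniature tagger (the 'corpora' here are invented miniatures, and a unigram most-frequent-tag model is the simplest possible stand-in for a real POS tagger):

```python
# Sketch of the labeled-data problem: a unigram tagger trained on a tiny
# 'newspaper' corpus degrades to blind guessing outside that register.
from collections import Counter, defaultdict

training = [("the", "DET"), ("market", "NOUN"), ("rose", "VERB"),
            ("shares", "NOUN"), ("fell", "VERB"), ("the", "DET")]

counts = defaultdict(Counter)
for word, tag_label in training:
    counts[word][tag_label] += 1

def tag(word):
    if word in counts:
        return counts[word].most_common(1)[0][0]
    return "NOUN"  # out-of-vocabulary fallback: guess the commonest tag

print([tag(w) for w in ["the", "market", "rose"]])   # in-domain: fine
print([tag(w) for w in ["tha", "mandem", "vexed"]])  # out-of-domain: guesses
```

Whatever speech the labelers never saw, the model can only handle by fallback; the reported accuracy says nothing about those speakers.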
Many systems of everyday interaction, including predictive
text and search engines, offer the user a list of choices that have been ranked
according to relative confidence. However, these systems
do not currently scale the ranked choices in proportion to
the magnitude of the prediction. MacKay’s Dasher is an
alternative example of a model-based predictive text entry
system that directly exposes the confidence of the
prediction in the user interface, by varying the size on
screen of the different choices [36].
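The design idea can be sketched in a few lines (the candidate words and probabilities are invented; this is a text rendering in the spirit of Dasher, not its actual interface):

```python
# Sketch of exposing model confidence in the interface: candidates are
# drawn at sizes proportional to their predicted probability, rather
# than as an undifferentiated ranked list.

predictions = {"hello": 0.6, "help": 0.25, "helix": 0.1, "helm": 0.05}

def render(preds, width=40):
    """Draw each candidate with a bar proportional to its probability."""
    out = []
    for word, p in sorted(preds.items(), key=lambda kv: -kv[1]):
        out.append(f"{word:<6} " + "#" * max(1, round(p * width)))
    return out

lines = render(predictions)
for line in lines:
    print(line)  # the eye can now weigh the options by confidence
```

Even this crude bar display communicates something a flat list conceals: how much more confident the model is in its first choice than its fourth.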
The phrase ‘ground truth’ implies a degree of objectivity
that may or may not be justified for any particular labeled
data set. If the interpretation of a case is ambiguous, then
that training item must either be excluded from the data set
(a common expedient), or labeled in a way that discounts
the ambiguity – perhaps because the expert has not even
noticed the problem. Furthermore, expert judges may
approach a data set with different intentions from those who
will interact with the resulting model. One might ask
whether these experts are representatives of the same social
class as the user, or whether their judgments are dependent
on implicit context, perhaps uninformed by situations that
they have never experienced.
The challenge for incorporating confidence in an interactive
system is to do so unobtrusively, allowing the user to take
account of relevant cues without information overload.
However, in order to establish this as a design opportunity,
we first need to acknowledge that confidence does vary,
and that probabilistic inferred models should not be
presented as though they were deterministic.
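One unobtrusive possibility, offered here only as a sketch under an assumed threshold, is to surface an alternative only when the decision was genuinely close:

```python
# Show the runner-up only when the winning margin is small, so a close
# call looks different to the user from a confident one. The 0.2
# threshold is an arbitrary assumption for illustration.

def present(choices, margin=0.2):
    """choices: {option: probability}. Returns the options to display."""
    ranked = sorted(choices.items(), key=lambda kv: -kv[1])
    (best, p_best), (second, p_second) = ranked[0], ranked[1]
    if p_best - p_second < margin:
        return [best, second]   # close call: expose the alternative
    return [best]               # confident: a single suggestion suffices

close = present({"recieve": 0.51, "receive": 0.49})   # both shown
sure = present({"receive": 0.99, "recieve": 0.01})    # one shown
```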
Moreover, labeling large data sets is tedious and expensive,
to a degree that those with broader expertise of judgment
might be reluctant to spend their own valuable time in such
activity. As a result, many researchers resort to the use of
online labor markets such as Amazon’s Mechanical Turk
(AMT), to commission ‘human intelligence tasks’. This
strategy casts further doubt on the presumption of ground
truth, through the economic relations in which it is
embedded. For example, when the AMT service was
introduced, it was formally available only to users in North
America, with the result that statistical models labeled by
AMT workers might incorporate an embedded form of
cultural imperialism, perhaps of the kind illustrated in the
black and yellow ‘school bus’ category of Fig. 2.
Decisions made on the basis of an inferred model will
include errors. Research results in ML conventionally
report the degree of accuracy in the model (80%, 90%, 99%,
etc.). However, the user’s experience of such models is
often determined by the consequence of the errors, rather
than the occasions on which the system acts as expected.
90% accuracy is considered a good result in much ML
research, but using such models in an interactive system
means that one in ten user actions will involve correcting a
mistake. User experience researchers understand the need to
focus on breakdowns, rather than routine operation,
although in the past these have tended to result from
indeterminacy in human behavior, rather than in the
behavior of the system itself. It is important to recognize
that departures from routine are more costly to manage than
routine operation, because they require conscious attention
from the user [5]. A system that mostly behaves as
expected, with occasional departures, may be less useful
than one that has no intelligence at all. Furthermore, it is
possible that a 1% error rate will be even more dangerous
than a 10% error rate, because the operator may become
complacent or inattentive to the possibility of error.
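The asymmetry between routine operation and breakdowns can be put as back-of-envelope arithmetic. The cost figures below are assumptions for illustration, not measurements:

```python
# Expected cost per action: routine use is cheap, but each error demands
# conscious correction. With these assumed costs, a 90%-accurate
# assistant is worse overall than doing the task unassisted.

def expected_cost(accuracy, routine_cost, correction_cost):
    return accuracy * routine_cost + (1 - accuracy) * correction_cost

manual_cost = 2.0   # seconds per action without the assistant (assumed)
assisted = expected_cost(0.90, routine_cost=0.5, correction_cost=20.0)
# 0.9 * 0.5 + 0.1 * 20.0 = 2.45 seconds: the breakdowns dominate
```

By the same arithmetic a 99%-accurate system looks cheaper per action, yet if rare errors escape the operator's attention entirely, their cost is no longer bounded by a correction time.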
Many of the symbolic models created in the early days of
GOFAI were deterministic – a particular output was known
to have resulted from a given range of inputs, and those
inputs were guaranteed always to produce the same output.
In contrast, the behavior of inference-based systems is
probabilistic, with a likelihood of error that varies
according to the quality of the match between the inferred
model and the observed world. This match will always be
approximate, because the model is only a summary of the
world, not an exact replica. (In fact, training a model until it
does replicate the observed world is ‘over-fitting’ – it
results in a fragile model that performs poorly at handling
new data.)
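Over-fitting can be demonstrated with a deliberately extreme toy model, one that replicates its training data exactly (the data here is invented for illustration):

```python
# A 'model' that memorizes its training set scores perfectly on seen
# items but can only guess on anything new: a fragile exact replica
# rather than a useful summary.
import random

def memorizing_model(train):
    table = dict(train)                    # exact replica of the data
    labels = sorted({y for _, y in train})
    def predict(x):
        return table.get(x, random.choice(labels))  # guess when unseen
    return predict

train = [(i, i % 2) for i in range(10)]    # underlying rule: parity
model = memorizing_model(train)
train_accuracy = sum(model(x) == y for x, y in train) / len(train)
# train_accuracy == 1.0, yet nothing about parity has been learned that
# would transfer to inputs outside the table.
```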
Deep Learning
The above discussion of features and labeling applies to the
ML research techniques most popular in the early 2000s
(and now widely applied in commercial practice), but it
should be noted that recent algorithms in the broad category
of ‘Deep Learning’ (including deep belief networks,
convolutional neural networks and many others) raise
somewhat different issues. Deep Learning techniques aim
to be less dependent on explicit feature encoding, and also
emphasize the importance of unsupervised learning, so that
a labeled training set is not needed. However, each of these
attributes leads to further questions for the critical technical practitioner.
Despite the fact that inferred judgments can carry varying
degrees of confidence, many interactive systems obscure
this fact. In situations where the behavior of the system
results from choosing the most likely of several
possibilities, it might benefit the user to know that this
behavior resulted from one 51% likelihood being compared
to a 49% alternative, as distinct from another case where
the model predicts one choice with 99% likelihood.
The first problem is that, just as it is not possible for a
human to gain information about the world unmediated by
perception, it is difficult for a Deep Learning algorithm to
gain information about the world that is unmediated by
features of one kind or another. These ‘perceptual’ features
may result from signal conditioning, selective sampling,
optical mechanics, survey design, stroke capture – because
every process for capturing and recording data implicitly
carries ‘features’ that have been embedded in the data
acquisition architecture through technical means. If the
features have not been explicitly encoded as a component
of the ML system, then it is necessary for the critic to ask
where they have been encoded. The questions already asked
with regard to obfuscation of the model perhaps become
more urgent, in that only the designers of the associated
hardware may be able to provide an answer.
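The point that features can live in the acquisition architecture, rather than in the learning algorithm, can be illustrated with a toy pipeline (the quantization steps are assumptions for illustration):

```python
# Two recording pipelines applied to the same underlying signal embed
# different 'features' (here, the quantization step) in the data before
# any learning algorithm ever sees it.

def acquire(signal, step):
    """Simulate acquisition hardware that quantizes to a fixed step."""
    return [round(x / step) * step for x in signal]

world = [0.12, 0.37, 0.58, 0.91]      # the 'same' world both times
coarse = acquire(world, step=0.5)     # [0.0, 0.5, 0.5, 1.0]
fine = acquire(world, step=0.1)       # a different record of that world
# The recorded data differ although the world did not: part of the
# feature encoding was decided by the capture hardware.
```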
The second challenge in assessing Deep Learning systems
is that, if the judgments are not made by humans, they must
be obtained from some other source. In one of the most
impressive applications of convolutional neural networks,
the staff of DeepMind Technologies [23] demonstrated a
system that can learn to play early Atari video games
without any explicit human intervention (other than
drawing attention to the score – which is a crucial factor).
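The role of the score can be seen in even the simplest score-driven learner. The sketch below is bandit-style trial and error, far simpler than DeepMind's deep reinforcement learning, but it shows the same dependence: everything the system 'knows' is whatever the scoring rule rewards.

```python
import random

def learn_from_score(actions, score, episodes=2000, epsilon=0.1, seed=0):
    """Trial-and-error learning guided only by a numeric score signal."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}   # running estimate of each score
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(actions)                    # explore
        else:
            a = max(actions, key=lambda x: value[x])   # exploit
        counts[a] += 1
        value[a] += (score(a) - value[a]) / counts[a]  # update running mean
    return max(actions, key=lambda a: value[a])

# A toy 'game': the score alone defines what counts as good play.
best = learn_from_score(["left", "right", "fire"],
                        score=lambda a: {"left": 0, "right": 1, "fire": 3}[a])
```

No human labels the actions, but the choice of scoring rule is itself a human judgment, which is exactly why drawing attention to the score is the crucial factor.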
It is possible for users to interpret and interact with such
models in a way that places more emphasis on human
concerns, but this requires designs that communicate
essential characteristics of the model. Important aspects
include the features that have been used to train the model,
the source of the data in which those features were
observed, the expert judgments that were applied when
labeling the ground truth, the degree of confidence in any
particular application of the model, the specific likelihood
of errors in the resulting behavior, the infrastructure
through which input data was acquired, and the semiotic
status of the representational worlds in which an
unsupervised model apparently acts.
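One way to communicate these characteristics, sketched here as an assumed schema rather than an established standard, is to attach an explicit provenance record to any inferred model:

```python
from dataclasses import dataclass

@dataclass
class ModelProvenance:
    training_features: list         # features used to train the model
    data_source: str                # where the observations came from
    labeling_judges: str            # who supplied the 'ground truth'
    acquisition_infrastructure: str # how the input data was captured
    error_rate: float               # expected likelihood of errors

    def summary(self):
        return (f"Trained on {self.data_source} via "
                f"{self.acquisition_infrastructure}; labeled by "
                f"{self.labeling_judges}; expected error rate "
                f"{self.error_rate:.0%}.")

card = ModelProvenance(
    training_features=["local image gradients"],
    data_source="photographs labeled through an online labor market",
    labeling_judges="anonymous crowd workers",
    acquisition_infrastructure="consumer phone cameras",
    error_rate=0.10)
```

A summary of this kind could be surfaced wherever the model's behavior is consequential for the user.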
This paper has presented an historical argument for a new
critical turn in HCI, one that steps aside from the
preoccupations of symbolic GOFAI and draws attention to
the consequences of interacting with the inferred models
derived from ‘big data’. It has investigated a number of
specific problems that arise in such models, where these
problems are also attuned to the ways in which distinctions
between data and control, or content and services, are
changing in the digital economy and regulation of the internet.
Examples of this kind are often discussed with the
expectation that the next step after a video game will be
action in the real world. Similar assumptions were often
made in the GOFAI era, although that focus on ‘toy worlds’
was eventually abandoned, in recognition that operating ‘in
the wild’ was overwhelmingly more challenging. This quite
obviously applies to the case of Atari game worlds, and
perhaps such toy applications do not seem a matter for
serious concern. However, we do have reason to be
concerned if similar algorithms are applied to some of the
other representational ‘games’ played in contemporary
society, such as the representational game worlds of
corporate finance, audience ratings, or welfare benefits, and
the ‘scores’ that are assigned to them in financial markets or
monetarist government.
This analysis suggests the need for design considerations
that might help users to engage with inferred models in a
way that is better informed, more effective, and supports
their human rights to act as individuals. We need improved
conceptual constructs that can be used to account for a new
designed relationship between user intentions and inferred
models. The following suggestions are drawn from work
carried out in the author’s research group, in order to
provide concrete illustrations of the kind of design research
that might take account of these considerations.
One such construct is the notion of agency – if the machine
acts on the basis of a world model that is derived from
observations of other users (or from the assumptions of an
expert labeler), then this will be perceived by the user as a
proportionate loss of personal agency through control of
one's own actions. Fundamental human rights of identity,
self-determination and attribution are implicated in this
construct. If the inferred model obfuscates such relations,
then they should be restored through another channel.
So critical questions in the analysis of Deep Learning
systems can be set alongside those of earlier ML
techniques: 1) what is the ontological status of the model
world in which the Deep Learning system acquires its
competence; 2) what are the technical channels by which
data is obtained; and 3) in what ways do each of these differ
from the social and embodied perceptions of human
observers? Each of these questions represents a deeply
humane concern with respect to the representational status
of inferred models and the degree to which we are obliged
to interact with such models.
A second construct is the interaction style previously
described as programming by example – where future
automated behaviors are specified by inference from
observations of user action. Often promoted as an idealized
personal servant, many such systems struggle to allow the
basic courtesies of personal service, such as asking for
confirmation of a command, or responding to a change of
mind. Empowering users through such techniques will
involve explicit representation of the inferred requirements
and actions.
The relationship between inferred models, and the data that
they are derived from, is complex. The model is already a
summarized version of the original data, although this is not
a summary that is directly readable in the manner of a
symbolic representation.
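A hypothetical sketch of this construct: infer a rule from observed examples, but represent the inferred requirement explicitly and ask before acting, so the user can confirm, or change their mind. All names here are invented for illustration.

```python
def infer_rule(examples):
    """Infer a shared 'append SUFFIX' rewrite from (before, after) pairs."""
    suffixes = {after[len(before):] for before, after in examples
                if after.startswith(before)}
    return suffixes.pop() if len(suffixes) == 1 else None

def apply_with_confirmation(rule, item, confirm):
    """State the proposed action explicitly; act only on a yes."""
    proposal = item + rule
    return proposal if confirm(f"Rename {item!r} to {proposal!r}?") else item

rule = infer_rule([("draft1", "draft1.bak"), ("notes", "notes.bak")])
# rule == ".bak": the inferred requirement is explicit and inspectable
declined = apply_with_confirmation(rule, "thesis", confirm=lambda q: False)
# the user declined, so nothing changes
```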
Doing so requires a philosophical framework in which
labour, identity and human rights are recognized as central
concerns of the digital era – concerns that are directly
challenged by recent developments in engineering thinking.
In short, we need a discipline of humane computer interaction.
A third construct is the recognition that, although
approximate and errorful inferred models of the user’s
intentions are problematic and worrisome, humans
themselves also develop world models on the basis of
incomplete and selective data. Kahneman's investigations of
heuristics and biases in human decision-making [17] offer a
mirror to the inferred world model of ML systems. There is
a valuable opportunity to create user interfaces that
acknowledge and support such human reasoning styles,
rather than attempting to correct the user on the basis of
unseen data or expert design abstractions.
I am grateful for feedback and suggestions from Luke
Church, Moira Faul, Leonardo Impett, Elisabeth Leedham-Green, David MacKay, Becky Roach, Advait Sarkar, and
anonymous reviewers of earlier versions.
A fourth construct is to reconsider the role of the state, in
an era when neither intellectual property nor legal policies
need be explicitly formulated as symbolic representations.
The new status of content that underlies inferred models
throws new light on the role of public service broadcasters,
who should be in a position to establish and protect genuine
public value in the public domain [6].
1. Agre, P. (1997) Computation and Human Experience,
Cambridge University Press.
These four illustrative examples are not proposed as the
basis for a unified theoretical framework to be adopted by
future design researchers. The intention is rather to provide
a relatively pragmatic set of observations and suggestions,
showing connections between the ideas in this paper and
established topics within mainstream interaction design and
digital media studies. Hopefully there are many other such
opportunities, which may indeed come together to offer a
basis for future design frameworks and methods.
3. Bennett, J. (2013) Forensic musicology: approaches and
challenges. In The Timeline Never Lies: Audio
Engineers Aiding Forensic Investigators in Cases of
Suspected Music Piracy. Presented at the International
Audio Engineering Society Convention. New York,
USA, October 2013.
2. Agre, P. E. (1997). Towards a critical technical practice:
lessons learned in trying to reform AI. In Bowker, G.,
Star, S.L. & Turner, W. (Eds.), Social Science, Technical
Systems and Cooperative Work. Lawrence Erlbaum,
Mahwah, NJ, pp. 131-157.
4. Blackwell, A.F. (2001). SWYN: A visual representation
for regular expressions. In H. Lieberman (Ed.), Your
wish is my command: Giving users the power to instruct
their software. Morgan Kaufmann, pp. 245-270.
5. Blackwell, A.F. (2002). First steps in programming: A
rationale for attention investment models. In Proc. IEEE
2002 Symposia on Human Centric Computing Languages
and Environments, pp. 2-10.
The central technical assumptions that underpinned the
design of software applications for the first 50 years of the
computer industry are now largely outdated. The
intellectual agenda of data processing and communications,
in which users either interact with each other or make
choices between defined system functions, has not been
succeeded by the autonomous human-like AI that was
anticipated in the 1950s. Of course, HCI has always resisted
such ambitions, drawing attention to the pragmatic human
needs of social conversation and embodied usability.
6. Blackwell, A.F. and Postgate, M. (2006). Programming
culture in the 2nd-generation attention economy.
Presentation at CHI Workshop on Entertainment media
at home - looking at the social aspects.
7. Breiman, L. (2001). Statistical modeling: the two
cultures. Statistical Science 16(3), 199-231.
In the new technical environment of the 21st century, users
increasingly interact with statistical models of the world
that have been inferred from a wide variety of data sources
rather than explicit design judgments. This situation forces
us to attend to the politics of information and action, as well
as the attributes and limitations of the inference systems
themselves. Just as the technical competence required of
engineers is shifting from data and algorithms to
information theory and stability analysis, so user experience
designers must reconceive the relationship between content
and services as constituting an ‘inferred world’ that stands
in rich semiotic relation to individual and collective
8. Borges, J.L. (1941/tr. 1962) The Library of Babel.
Trans. by J.E. Irby in Labyrinths. Penguin, pp. 78-86.
9. boyd, d. and Crawford, K. (2012). Critical questions for
big data. Information, Communication & Society, 15(5), 662-679.
10. Card, S.K., Newell, A. and Moran, T.P. (1983). The
Psychology of Human-Computer Interaction. Lawrence
Erlbaum Associates, Hillsdale, NJ.
11. Collins, H. and Kusch, M. (1998). The Shape of Actions:
What Humans and Machines Can Do. MIT Press.
12. Cope, D. (2003). Virtual Bach: Experiments in musical
intelligence. Centaur Records.
13. Coyle, D., Moore, J., Kristensson, P.O., Fletcher, P. &
Blackwell, A.F. (2012). I did that! Measuring users'
experience of agency in their own actions. Proceedings
of CHI 2012, pp. 2025-2034.
27. Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y.,
Yu, B., & Gallant, J. L. (2011). Reconstructing visual
experiences from brain activity evoked by natural
movies. Current Biology, 21(19), 1641-1646.
14. Dourish, P. (2001). Where the Action Is: The
Foundations of Embodied Interaction. MIT Press.
15. Dourish, P. (2004). What we talk about when we talk
about context. Personal and Ubiquitous Computing
8(1), 19-30.
28. Norman, D.A. and Draper, S.W. (Eds.) (1986). User
Centered System Design. Lawrence Erlbaum Associates,
Hillsdale, NJ.
29. Norman, D.A. (1993). Cognition in the head and in the
world: an introduction to the special issue on situated
action. Cognitive Science 17(1), 1-6.
16. Gill, K.S. (Ed.) (1986). Artificial Intelligence for
Society. Wiley.
17. Gilovich, T., Griffin, D., & Kahneman, D. (Eds.).
(2002). Heuristics and biases: The psychology of
intuitive judgment. Cambridge University Press.
30. Norvig, P. (2011). On Chomsky and the two cultures of
statistical learning. Online essay.
31. Pasquinelli, M. (2009). Google’s PageRank algorithm: a
diagram of cognitive capitalism and the rentier of the
common intellect. In K. Becker & F. Stalder (eds), Deep
Search. London: Transaction Publishers.
18. Gomer, R., schraefel, m.c. and Gerding, E. (2014).
Consenting agents: semi-autonomous interactions for
ubiquitous consent. In Proc. Int. Joint Conf. on
Pervasive and Ubiquitous Computing: (UbiComp 14).
19. Kepes, B. (2013). Google users - you're the product, not
the customer. Forbes Magazine, 4 December 2013.
32. Scherzinger, M. (2014). Musical property: Widening or
withering? Journal of Popular Music Studies 26(1),
20. Leahu, L., Sengers, P., and Mateas, M. (2008).
Interactionist AI and the promise of ubicomp, or, how to
put your box in the world without putting the world in
your box. In Proc. 10th int. conf. on Ubiquitous
computing (UbiComp '08), pp. 134-143.
33. Shotton, J., T. Sharp, A. Kipman, A. Fitzgibbon, M.
Finocchio, A. Blake, M. Cook, and R. Moore. (2013).
Real-time human pose recognition in parts from single
depth images. Communications of the ACM 56(1), 116-124.
21. Lowe, D. G. (1999). Object recognition from local
scale-invariant features. In Proc 7th IEEE Int. Conf. on
Computer Vision, pp. 1150-1157.
34. de Souza, C.S. (2005) The Semiotic Engineering of
Human-Computer Interaction. The MIT Press.
35. Suchman, Lucy (1987). Plans and Situated Actions: The
Problem of Human-machine Communication.
Cambridge: Cambridge University Press.
22. Madison, J. (2011). Damn You, Autocorrect! Virgin Books.
23. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A.,
Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013).
Playing Atari with deep reinforcement learning. arXiv:1312.5602
36. Ward, D.J., Blackwell, A.F. & MacKay, D.J.C. (2000).
Dasher - a data entry interface using continuous gestures
and language models. In Proc. UIST 2000, pp. 129-137.
37. United Nations General Assembly. (1948). Universal
Declaration Of Human Rights, Article 27.
24. Monbiot, G. (2013). Transatlantic trade and investment
partnership as a global ban on left-wing politics. The
Guardian, 4 Nov 2013.
38. Vercellone, C. (2008). The new articulation of wages,
rent and profit in cognitive capitalism. Paper presented
at The Art of Rent Feb 2008, Queen Mary University
School of Business and Management, London.
25. Newell, A. (1974). Notes on a proposal for a
psychological research unit. Xerox Palo Alto Research
Center Applied Information-processing Psychology
Project. AIP Memo 1
39. Winograd, T. and Flores, F. (1986). Understanding
computers and cognition: A new foundation for design.
Intellect Books.
40. Zaslow, J. (2002) If TiVo thinks you are gay, here's how
to set it straight. Wall Street Journal Online, Nov. 26.
26. Nguyen, A., Yosinski, J. and Clune, J. (2014). Deep
neural networks are easily fooled: High confidence
predictions for unrecognizable images. arXiv:1412.1897
