CONTEXT-AWARE ASSISTIVE SYSTEMS FOR AUGMENTED WORK.
A FRAMEWORK USING GAMIFICATION AND PROJECTION

Oliver Korn
Thesis approved by the Faculty of Computer Science, Electrical Engineering
and Information Technology of the University of Stuttgart
for the conferral of the degree of Doctor of Philosophy (Dr. phil.)
submitted by
OLIVER KORN
First examiner: Prof. Dr. Albrecht Schmidt
Second examiner: Prof. Dr. Fillia Makedon, PhD
Date of the oral examination: 21.05.2014
Institut für Visualisierung und Interaktive Systeme (VIS) der Universität Stuttgart
[Institute for Visualization and Interactive Systems, University of Stuttgart]
2014
ABSTRACT
While context-aware assistive systems (CAAS) have become ubiquitous in cars and smartphones, most workers in production environments still rely on their skills and expertise to make the right choices and movements. The quality gates currently established by the industry successfully remove "waste", i.e. failed work results, from the workflow, but they usually operate at a spatial and temporal distance from the workplace and the worker. Thus workers lack the opportunity to learn from problems on the fly and to improve their work habits.
Today's motion recognition systems already make it possible to continuously analyze work actions. The corresponding middleware can interpret these actions in real time, and micro projectors can display work-relevant information directly in a worker's field of vision. Thus the technical prerequisites for CAAS with instant feedback directly at the workplace have essentially been established.
Although every worker would benefit from context-aware assistive systems, impaired persons and elderly persons with reduced physical and mental capabilities need such systems the most. CAAS have the potential to empower these persons to do more complex work or to remain in regular jobs longer. Thus they combine economic benefits with inclusion and address the demographic change.
After an overview of the relevant background from ethics, psychology, computer science and engineering as well as the relevant state of the art, we establish requirements which result in a model for ideal CAAS. As the framework aims to improve not only the work results but also the work process and thus the workers' motivation, the model incorporates elements from game design. This process of "gamifying" real-world actions is called gamification.
We describe an exemplary implementation covering essential aspects of the model. The effects of both the augmentation by projection and by gamification are evaluated with impaired persons in a sheltered work organization, using empirical methods common in human-computer interaction. An additional focus lies on the ethical implications of assistive systems which supervise and model their users in real time.
While this thesis represents a starting point for research on CAAS in workplaces, important aspects like real-time error tracking or the integration of emotion detection are still subjects of future research. We hope that future CAAS based on this work will help to make work not only more productive but also more enjoyable and motivating for workers.
ZUSAMMENFASSUNG [GERMAN SUMMARY]
While context-aware assistive systems (CAAS) have become ubiquitous in smartphones and cars, workers in production environments still mostly rely on their skills and experience to make decisions and to carry out work actions correctly. The quality inspections currently established in industry successfully remove rejects (i.e. faulty products) from the workflow, but they are usually operated at a spatial and temporal distance from the workplace. Workers therefore lack the opportunity to learn from problems during the work process and to improve their work habits.

Current motion recognition systems already make it possible to analyze work actions continuously. The corresponding middleware can evaluate these actions in real time, and projectors can display work-relevant information directly in the worker's field of vision. The technical prerequisites for workplace-integrated CAAS with real-time feedback have thus been established.

Although every worker would benefit from such context-aware systems, the need is greatest among persons with disabilities and older employees with reduced physical and mental capabilities. CAAS have the potential to enable these users to perform more complex work or to pursue their regular work for a longer time. CAAS thus combine economic benefits with inclusion and address the demographic change.

Based on an overview of foundations from the fields of ethics, psychology, computer science and mechanical engineering as well as the relevant state of the art, requirements are derived which result in a model of ideal context-aware systems. Since these systems aim to increase not only the work results but also the workers' motivation, the model integrates elements from computer game development. This process is called gamification.

The exemplary implementation of the essential aspects of the model is described. The effects of augmentation by projection and by gamification are evaluated with impaired persons in a sheltered work organization, using empirical methods common in human-computer interaction. An additional focus is the ethical implications of assistive systems which monitor and model their users in real time.

This work represents a starting point for research on CAAS in work environments. Important aspects such as real-time error detection and emotion recognition are still open research questions. We hope that future CAAS building on this work will help to make work not only more productive but also more pleasant and motivating for workers.
ACKNOWLEDGEMENTS
In the past three years I have had the opportunity to work with a number of exceptional people. The following list is not exhaustive but mentions those I worked with most closely. First I want to thank my supervisor Albrecht Schmidt. He has been a constant supporter and enabler; Albrecht's idea of leading by giving freely has been an inspiration to me that goes well beyond academic honors. My second supervisor Fillia Makedon from the University of Texas at Arlington is one of the founders of the field of assistive technologies. Her support for my work began when we first met at a PETRA conference in Crete and has continued ever since. If I still cannot perform a proper Sirtaki, it is entirely due to my own shortcomings.
The foundations for the use of motion recognition in assistive contexts were established during the BMBF-funded project motivotion60+ at the Korion GmbH (2009–2012, framework Ambient Assisted Living). I want to thank my friend and colleague Robert Konrad for hacking the first Kinect and for being the most reliable programming genius I know. His basic findings on motion recognition are still used in this work. This also applies to Bodo Runde, whom I want to thank for helping to develop the first motion-based exergames for elderly people. The gamification component used in the assistive system would not look and work the way it does without the help of Valentin Schwind, an excellent graphic designer and developer.
Much of the research presented in this work is related to the funded project ASLM (assistive systems for persons with impairments working in assembly). It was acquired by Thomas Hörz from the University of Applied Sciences Esslingen (HE), where I spent my first year doing basic research on motion recognition and assistive technologies in production. I want to thank him for giving a computer scientist access to the Faculty of Engineering; it is also thanks to his industriousness that a patent application on the algorithm used in the system was filed and that we received the Gips-Schüle Award for applied research in the field "Human and Technology" in 2013.
Many chapters of this thesis are based on the hard work of excellent students. I especially want to thank Björn Böhmert, Daniel Kaupp and Benjamin Knapp. Björn worked with me on the basic research on motion recognition and did a great job implementing the "adjoined spheres model" for observing 3D areas in workspaces. Daniel worked on the development of the "Assistive Systems Experiment Designer" and assisted during the field studies. Benjamin also helped during the studies and developed a tool that automatically analyzes the experiments' extensive log files. Finally, Adrian Rees did an excellent job proof-reading.
The extensive studies would not have been possible without the support of the sheltered work organization BWH in Heilbronn. I want to thank its CEO Alfred Grimm for giving management support and supervising the study's ethical compliance, as well as Christof Sanwald and Jürgen Kleiner for giving valuable expert input in the design phase and heart-warming on-site support during the study. The assembly table used during these studies was constructed and built by the Schnaithmann GmbH. I want to thank its CEO Karl Schnaithmann and the project team: Volker Sieber, Daniela Bühler, Bernhard Bader and Marc Hentzner.
Seven student project groups had to endure my supervision at the HE. They provided market data, tested and integrated hardware and sensors, and conducted basic studies on competence levels and object complexity. I want to thank them for their dedication and good work: Tobias Scheffel, Stefan Leyh, Stephan Kazmierczak, Fabian Dörre, Christian Löffler, Max Korn, Roman Rinas, Adrian Starczewski, Julian Besserer, Tobias Reim, Johannes Hirneise, Julian-Matthias Gaiser, Simon Schweikert, Philipp Sigle, Torben Kamphans, Marc Dellenbach, Thilo Anhorn and Axel Baier. In the last phase of the ASLM project it is thanks to my former colleague Manuel Kölz that the subsequent student groups were perfectly supervised.
In February 2013 I continued my research at the University of Stuttgart, where I had the pleasure of receiving valuable advice for the final stage of my thesis from fellow PhD students. Stephan Abele gave valuable support for the statistical analysis. At Albrecht's Institute for Visualization and Interactive Systems (VIS) my colleague Markus Funk worked closely with me. I was inspired by his straightforward character and benefited from his technical expertise in motion and object recognition.
The large-scale federal project motionEAP, funded by the Federal Ministry for Economic Affairs, made it possible to continue the research seamlessly and also supported the print version of this work. The project was encouraged by my long-standing mentor Thomas Wahl, the vice-president of DLR's project section. Thanks for supporting me since my first steps in project work in the early noughties at Fraunhofer.
I want to thank my parents Elfi and Ulrich Korn for many things – but especially for not insisting on prohibiting computer games. The days and nights spent in front of the ZX81 and the Commodore Amiga laid the foundation for the fascination with computers and software that still fuels my business and research interests today. I also want to thank my mother-in-law Gisela Hainbuch for selflessly supporting me and my family during the last years: you have been our personal assistive system.
Finally I want to thank my family: my wife Silja Korn, my daughter Helena and my son Leander. Completing a PhD in one's late thirties is a joint effort. Thanks, Silja, for letting me spend dozens of nights at the lab while caring for the kids alone. Thanks for enduring numerous conference travels, and sorry for all the evenings and weekends spent writing papers or this text. You are the light of my life.
TABLE OF CONTENTS
1 Introduction
   1.1 Motivation
      1.1.1 Automation versus Manual Assembly
      1.1.2 Increased Demand for Impaired Workers
      1.1.3 Demographic Change
      1.1.4 Pursuing the Ethical Good
   1.2 Thesis Outline
   1.3 Research Questions
   1.4 Methodology
      1.4.1 State-of-the-Art and Requirement Studies
      1.4.2 Prototypes
      1.4.3 Evaluation
   1.5 Research Contributions
   1.6 Publications
2 Background
   2.1 Related Disciplines
   2.2 Demography and Targeted Users
      2.2.1 Elderly Persons
      2.2.2 Disabled and Impaired Persons
      2.2.3 Aging and Disability
      2.2.4 Sheltered Work and Supported Work
   2.3 Ethical Dimensions
      2.3.1 Value of Technology
      2.3.2 Value of Work
      2.3.3 Implications for the Work of Persons with Disabilities
   2.4 Psychology
      2.4.1 HAAT-Model
      2.4.2 Cognitive Workloads
      2.4.3 Flow
   2.5 Computer Science
      2.5.1 Human-Computer Interaction
      2.5.2 Assistive Technology
      2.5.3 Augmented Reality
      2.5.4 Artificial Intelligence
   2.6 Engineering and Production Management
      2.6.1 Measuring Work Processes with MTM
      2.6.2 Assembly Workplaces
      2.6.3 Digital and Virtual Factory, Cyber-physical Systems
3 Related Work
   3.1 Motion Recognition
   3.2 Projection
   3.3 Gamification
   3.4 Assistive Systems at Workplaces
      3.4.1 Assistive Systems Using Motion Recognition
      3.4.2 Assistive Systems Using Projection
      3.4.3 Assistive Systems Using Gamification
   3.5 Ethical Standards for Assistive Technology
4 Requirements
   4.1 Standards for Designing Interactive Systems
   4.2 Constraints and Adaptations
      4.2.1 Focus of Interaction
      4.2.2 Total Quality
      4.2.3 Universal Design
   4.3 Requirements Study
5 Model and Architecture
   5.1 Adapted HAAT-Model
   5.2 Framework for an Adaptive Game System
   5.3 Flow Curves
   5.4 Model for Context-Aware Assistive Systems
6 Implementation
   6.1 System Overview
   6.2 Physical Integration in the Work Environment
   6.3 Motion Recognition
      6.3.1 Technical Restrictions and Solution
      6.3.2 The Adjoined Spheres Approach
   6.4 Instruction and Performance Analysis
      6.4.1 Visual Output and Projection
      6.4.2 Designer Component
      6.4.3 Runner Component
   6.5 Gamification Component
      6.5.1 Designing Flow
      6.5.2 Implementing Flow
7 Studies and Evaluation
   7.1 Participants and Preparatory Study
   7.2 Procedure
      7.2.1 Questionnaire Design
      7.2.2 Assembly Task
   7.3 Apparatus
      7.3.1 Time Measurement
      7.3.2 Quality Measurement
   7.4 Data
   7.5 Experiment Results and Discussion
      7.5.1 Overall Results
      7.5.2 Analysis of Time
      7.5.3 Analysis of Errors
      7.5.4 Sub-Populations
   7.6 Questionnaire Results and Discussion
      7.6.1 Pre-Experiment Results
      7.6.2 Generic Post-Experiment Results
      7.6.3 Specific Post-Experiment Results
   7.7 Qualitative Findings
   7.8 Discussion
      7.8.1 Similarities between the Scenarios
      7.8.2 Differences between the Scenarios
8 Conclusion
   8.1 Summary of Research Contributions
   8.2 Ethical Implications: Towards Humane Work
   8.3 Future Research
      8.3.1 Technical Perspectives
      8.3.2 Ethical Perspectives
9 Supplement
   9.1 Implementation Details
   9.2 Questionnaires in Detail
   9.3 Study Result Details
10 References
11 Index
LIST OF FIGURES

Figure 1: Relation between automation, lot size and the application of CAAS.
Figure 2: Disciplines related to context-aware assistive systems.
Figure 3: Percentage of persons aged 60 and over in 2012 and 2050.
Figure 4: Development of age groups 1950 to 2100 in more / less developed regions.
Figure 5: ICF classification incorporating environmental and personal factors, WHO 2011.
Figure 6: Disability prevalence rates for thresholds 40 and 50, derived from multi-domain functioning levels in 59 countries, by country income level, sex, age, place of residence, and wealth, WHO 2011.
Figure 7: Age-specific disability prevalence, derived from multi-domain functioning levels in 59 countries, by country income level and sex, WHO 2011.
Figure 8: Historical development of the concepts work and technology.
Figure 9: HAAT-model by Cook & Hussey, 1995.
Figure 10: Different problems related to working memory.
Figure 11: Mental states resulting from the interaction between level of challenge and skill.
Figure 12: Consoles Wii with Wii Remote (left) and Xbox 360 with Kinect (right).
Figure 13: Virtual continuum according to Nilsson & Johansson 2008.
Figure 14: Three systems of MTM based on various levels of detail.
Figure 15: Industrial assembly table constructed by the Schnaithmann GmbH.
Figure 16: Tools and processes for the Digital Factory and the Virtual Factory.
Figure 17: The software Process Designer within Siemens Tecnomatix allows the creation of detailed simulations of production processes.
Figure 18: The Microsoft Kinect (bottom) and the Asus Xtion (top).
Figure 19: SoftKinetic DepthSense 311.
Figure 20: Projected assessment game based on the UbiDisplays framework.
Figure 21: Wii with Wii Remote (left), Wii Remote with Wii Motion Plus (right).
Figure 22: A senior citizen using an exergame developed for balance training.
Figure 23: An implementation of the Poka Yoke concept using an impression.
Figure 24: The Quality Assist System based on ultrasonic waves.
Figure 25: Pick-to-light display "PickTerm Flexible".
Figure 26: Projection of visual data on welding spots.
Figure 27: An assistive system in the medical domain using projected interfaces.
Figure 28: MEESTAR-model by Manzeschke et al. 2013.
Figure 29: Number of processes at a single workplace.
Figure 30: Requested features of a new assistive system.
Figure 31: Adapted HAAT-model.
Figure 32: Potential framework for an adaptive game system by Charles et al.
Figure 33: Flow curves.
Figure 34: CAAS-model – abstract.
Figure 35: CAAS-model – detailed.
Figure 36: System overview.
Figure 37: Experimental assembly table with motion recognition and projection.
Figure 38: Illustration of the Adjoined Spheres Approach.
Figure 39: Target area states: occluded (red), neutral (blue), occupied (green).
Figure 40: Detecting three-dimensional object orientation with interactive areas.
Figure 41: Architecture of the ASED software.
Figure 42: Instructions on a monitor (left) and as a projection (right).
Figure 43: Class hierarchy of designer elements in ASED.
Figure 44: Class hierarchy of panels used in ASED.
Figure 45: Screenshot of the ASED designer view.
Figure 46: Screenshot: configuring a 3D trigger area.
Figure 47: ASED Designer with maximized "Augmented Camera Preview".
Figure 48: Screenshot of the ASED Runner surveillance monitor.
Figure 49: Design study of the gamification element (left) and final version (right).
Figure 50: Screenshot of the instruction (left) and the gamification component (right) with shadow brick.
Figure 51: The gamification component's visual feedback makes use of smileys.
Figure 52: Example: process durations of one user in eight sequences.
Figure 53: Lego objects with various degrees of complexity in the pre-study.
Figure 54: Histogram of the three test populations.
Figure 55: Test populations' mean wage-level (left) and age (right) with SD.
Figure 56: Instructions in the scenarios Gamification (left) and Projection (right).
Figure 57: Logfile Analyzer list view before optimization.
Figure 58: Logfile Analyzer list view after optimization.
Figure 59: Undercarriage assembled correctly (left) and with 100% error rate (right).
Figure 60: Sequence completion time of the SotA population.
Figure 61: Error rate per sequence of the SotA population.
Figure 62: Sequence completion time of the Projection population.
Figure 63: Error rate per sequence of the Projection population.
Figure 64: Sequence completion time of the Gamification population.
Figure 65: Error rate per sequence of the Gamification population.
Figure 66: Histogram of average assembly time (eight sequences).
Figure 67: Histogram of average error rate (eight sequences).
Figure 68: Mean sequence durations with SD of the test populations.
Figure 69: Development of mean sequence completion times over eight sequences.
Figure 70: Development of mean process completion times.
Figure 71: Mean error rates with SD of the test populations.
Figure 72: Development of mean sequence error rates.
Figure 73: Development of mean process error rates.
Figure 74: Development of mean error rates of process 6 in eight sequences.
Figure 75: Production times (left) and errors (right) in the complete population (left column) and the faster group (right column).
Figure 76: Temporal development of mean error rates in the subgroups.
Figure 77: User checking the instruction and the visual gamification element.
Figure 78: User comparing the assembled product with the 1:1 model.
Figure 79: A future setup with multiple sensors combining motion and object recognition.
Figure 80: Detailed process times of the SotA population: graph.
Figure 81: Detailed process times of the Projection population: graph.
Figure 82: Detailed process times of the Gamification population: graph.
LIST OF TABLES

Table 1: Thesis chapter structure.
Table 2: Summary of research questions.
Table 3: Example of MTM analysis for basic movements.
Table 4: Comparison of motion detection sensors available in early 2012.
Table 5: Requirements for an ideal CAAS.
Table 6: Conditions for flow and corresponding design approach.
Table 7: Feedbacks of the gamification component.
Table 8: Scenario-specific questions posed after the assembly phase.
Table 9: Instructions for the eight assembly steps.
Table 10: Means and standard deviations of sequence durations and errors.
Table 11: Questionnaire item 1: General condition.
Table 12: Questionnaire item 2: Nervousness.
Table 13: Questionnaire item 3: Anticipation.
Table 14: Questionnaire item 4: Computer experience.
Table 15: Questionnaire item 5: Experience with computer or console games.
Table 16: Questionnaire items 6 and 10: Difficulty.
Table 17: Questionnaire items 7 and 8: Acceptance.
Table 18: Questionnaire items 9 and 15: Usability: ease of handling.
Table 19: Questionnaire items 13 and 16: Usability: learning.
Table 20: Questionnaire items 11 and 12: Generic user interface.
Table 21: Questionnaire item 14: Self-perception and strain.
Table 22: Questionnaire items 17, 18 and 19: Instructions.
Table 23: Questionnaire item 20 (Projection): Instruction.
Table 24: Questionnaire items 21 and 22 (Projection): Additional product model.
Table 25: Questionnaire items 20, 21 and 24 (Gamification): Game.
Table 26: Questionnaire items 22 and 23 (Gamification): Comments.
Table 27: RQ 1: Requirements of CAAS.
Table 28: Generic model for CAAS.
Table 29: Implementation of motion recognition.
Table 30: Implementation of projection.
Table 31: Implementation of gamification.
Table 32: Quantitative impact of augmentations on speed and quality.
Table 33: Quantitative impact on users.
Table 34: Qualitative impact on users.
Table 35: Ethical dimension of CAAS.
Table 36: Pre-experiment questions – identical in all scenarios.
Table 37: Post-experiment questions: SotA.
Table 38: Post-experiment answers: SotA.
Table 39: Post-experiment questions: Projection.
Table 40: Post-experiment answers: Projection.
Table 41: Post-experiment questions: Gamification.
Table 42: Post-experiment answers: Gamification.
Table 43: Detailed process times of the SotA population: values.
Table 44: Detailed process times of the Projection population: values.
Table 45: Detailed process times of the Gamification population: values.

LIST OF EQUATIONS, CODE AND DATA

Equation 1: Z-values in sphere.
Equation 2: Comparison with reference values.
Equation 3: Transformation of distorted projection using CMAT.
Equation 4: Computation of the distances to the center of a trigger box.
Equation 5: Log entry generated by ASED Runner.
Equation 6: Process time assessment in the gamification component.
Equation 7: Log entry generated by ASED Runner.
Equation 8: Video frame grabber function for RGB data.
Equation 9: OpenNI and NITE configurator function for depth data.
Equation 10: Setup of the trigger box.
Equation 11: Check of the trigger box.
LIST OF ABBREVIATIONS
English translations in [square brackets]
AAL: Ambient Assisted Living
ACM: Association for Computing Machinery
AI: Artificial Intelligence
AIR: Adobe Integrated Runtime
ANOVA: Analysis of Variance
API: Application Programming Interface
AR: Augmented Reality
ASLM: Assistenzsysteme für leistungsgeminderte Menschen in der Montage [Assistive systems for persons with impairments working in assembly]
ASED: Assistive System Experiment Designer
BWH: Beschützende Werkstätte Heilbronn [Sheltered Work Organization Heilbronn]
CAD: Computer Aided Design
CAM: Computer Aided Manufacturing
CAP: Computer Aided Planning
CAAS: Context-aware Assistive System
CMAT: Concordance-Based Medial Axis Transform
CNC: Computerized Numerical Control
CPS: Cyber-Physical Systems
DIN: Deutsches Institut für Normung [German Institute for Standardization]
ERP: Enterprise Resource Planning
EU: European Union
FACS: Facial Action Coding System
GOMS: Goals, Operators, Methods and Selection (rules)
GPS: Global Positioning System
HCI: Human-Computer Interaction
HE: Hochschule Esslingen [University of Applied Sciences Esslingen]
HUD: Head-up Display
HMD: Head-Mounted Display
HMI: Human-Machine Interaction
HTN: Hierarchical Task Network (Planning)
ICF: International Classification of Functioning, Disability and Health
IEEE: Institute of Electrical and Electronics Engineers
ISO: International Organization for Standardization
MEESTAR: Model for the Ethical Evaluation of Socio-Technological Arrangements
MR: Mixed Reality
MTM: Methods-Time Measurement
NC: Numerical Control
NI: Natural Interaction
NITE: Natural Interaction Technology for End-user
OpenCV: Open Source Computer Vision Library
OpenNI: Open Natural Interaction
PHP: Hypertext Preprocessor, formerly Personal Home Page Tools
PLC: Programmable Logic Controller
PLM: Product Lifecycle Management
POCL: Partial-Order Causal Link (Planning)
PPS: Production Planning and Steering
R: Requirement
REFA: Verband für Arbeitsgestaltung, Betriebsorganisation und Unternehmensentwicklung
[Association for Ergonomics, Company Organization and Corporate Development]
RFID: Radio-Frequency Identification
RQ: Research question
RGB: Red Green Blue
SD: Standard Deviation
SotA: State-of-the-art
SUS: System Usability Scale
TMU: Time Measurement Unit
ToF: Time-of-Flight
UN: United Nations
UX: User Experience
VDI: Verein Deutscher Ingenieure [Association of German Engineers]
VIS: Institute for Visualization and Interactive Systems (University of Stuttgart)
VR: Virtual Reality
WHO: World Health Organization
1 INTRODUCTION
1.1 Motivation
Technological advances must not widen the gap between people who are digital immigrants, who are older or who suffer from impairments¹ and those who are not. Sometimes these advances open radically new perspectives and even lead to the "full inclusion of [these] individuals […] in the mainstream of society" (Cook, 2010). In the case of assistive technology, systems which are able to consider both the context and the user in real time would offer better assistance, increase productivity, and make work safer and potentially even more enjoyable.
In the form of route guidance systems, context-aware assistive systems (CAAS)² have become ubiquitous in cars and smartphones, and travelling has ultimately become easier and more accessible. In work environments, however, context-aware assistance focusing on the worker remains the exception. While the quality gates established in modern production lines succeed in removing "waste" (i.e. failed products) from the workflow, they operate at a spatial and temporal distance from the workplace and the worker. Thus workers have to rely on their skills and expertise to make the right choices and movements, and they lack the opportunity to learn from problems on the fly and to improve their work habits.
By establishing a feedback system close to the worker, context-aware assistive systems in production environments can potentially improve learning, increase productivity and even enhance motivation. The following subchapters explain the need for such systems in more detail.
¹ In concordance with the International Classification of Functioning, Disability and Health (ICF), the term "impairment" is used in a general sense in the following, including disabilities and problems resulting from old age.

² Assistive systems are sometimes called "wizards" because they assist users in work processes, much like the broom in Goethe's poem The Sorcerer's Apprentice (Der Zauberlehrling). In this work the terms "assistive system" and "wizard" are used synonymously.
1.1.1 Automation versus Manual Assembly
Increasing efficiency is the central goal when optimizing production. For many companies, being competitive implies reducing production costs, and these are linked to two major factors: the workers' skill level and the cost of automation. Manual work is generally slower and more prone to human errors – and some technology enthusiasts already envision a world where most repetitive tasks are taken over by AI systems and robots (Ford, 2013). So why should it not be possible to automate production to an extent where human errors are rendered impossible? As a general rule, large lot sizes of simple tasks are indeed taken over by automation, whereas for smaller lot sizes manual assembly is often the more economical option, since the costs of automation are high. This relation is illustrated in Figure 1.
[Figure 1: Relation between automation, lot size and the application of CAAS. The diagram plots degree of automation against lot size: the economic advantage of automation grows with lot size, while the "sweet spot" for CAAS in production lies at small lot sizes with a low degree of automation.]
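To make the trade-off behind Figure 1 tangible, the following minimal sketch (not part of the thesis; all cost figures are invented placeholders) computes the per-unit cost of automated versus manual assembly and shows where the break-even lot size lies:

```python
# Hypothetical cost model illustrating Figure 1; all figures are invented
# placeholders, not data from the thesis.
def cost_per_unit(lot_size: int, fixed_cost: float, unit_cost: float) -> float:
    """Total cost per produced unit: the fixed setup cost is spread over the lot."""
    return fixed_cost / lot_size + unit_cost

AUTOMATION = {"fixed_cost": 50_000.0, "unit_cost": 0.50}  # high setup, cheap units
MANUAL = {"fixed_cost": 500.0, "unit_cost": 4.00}         # low setup, costly units

for lot_size in (100, 1_000, 10_000, 100_000):
    auto = cost_per_unit(lot_size, **AUTOMATION)
    manual = cost_per_unit(lot_size, **MANUAL)
    cheaper = "automation" if auto < manual else "manual assembly"
    print(f"lot size {lot_size:>7}: automation {auto:8.2f}/unit, "
          f"manual {manual:6.2f}/unit -> {cheaper}")
```

With these placeholder figures the break-even point lies near 14,000 units; lot sizes below it fall into the "sweet spot" where manual assembly – potentially augmented by CAAS – remains the more economical choice.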
Although we refer to our society as a digital society, the demand for customized products has been increasing in recent years, and so has the need for manual production. Production methods like "build-to-order" and "design-to-order" have led to an increasing number of product variants (Kluge, 2011). A further implication is the trend towards smaller lot sizes, which encourages manual production. In some domains we can even witness the return of "manufactories".
Another issue is a direct result of globalization. Many countries have established high customs duties on fully assembled products; to remain profitable, many high-tech manufacturers were therefore forced to create sets of parts for components instead of selling the complete product. These sets are sent to the buyers' countries and assembled there.
The degree of assembly that is economical for the manufacturers varies from country to country, depending on the specific customs regulations. Since these regulations are subject to frequent changes, export-oriented companies like Daimler or Porsche have established entire departments specialized in analyzing customs regulations and determining the adequate degree of assembly for various product components. Sheltered work organizations are ideally suited for packing as well as pre-assembling these components to fit the requirements of various countries. As an example, 1,000 products delivered in 30 parts each might result in ten deliveries to ten countries with ten different degrees of assembly, ranging from 30 parts down to two parts each.
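The packing logic of this example can be illustrated with a minimal sketch (the per-country part counts are invented; only the totals follow the example above):

```python
# Toy model of the example above: 1,000 products, ten destination countries,
# each with a different customs-mandated degree of assembly. The part counts
# per country are invented; 30 = fully disassembled, 2 = almost fully assembled.
TOTAL_PRODUCTS = 1_000
ORIGINAL_PARTS = 30
parts_per_product = [30, 27, 24, 21, 18, 15, 12, 8, 4, 2]  # one entry per country

products_per_country = TOTAL_PRODUCTS // len(parts_per_product)

for country, parts in enumerate(parts_per_product, start=1):
    # Joining 30 original parts into `parts` delivered sub-assemblies takes
    # (30 - parts) joins per product, so the more assembled a delivery has to
    # be, the more pre-assembly work falls to the sheltered work organization.
    joins = products_per_country * (ORIGINAL_PARTS - parts)
    print(f"country {country:2d}: {parts:2d} parts per product, "
          f"{joins:5d} pre-assembly joins for {products_per_country} products")
```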
These developments make wizards for production work more important, because the number of applications in their "sweet spot" – small lot sizes and a low level of automation – grows (see Figure 1).
1.1.2 Increased Demand for Impaired Workers
The market for production work by persons with impairments is becoming increasingly important. This is mainly due to three reasons:

• Many industrial countries oblige companies to employ a certain percentage of disabled and impaired employees; e.g. the 1990 Americans with Disabilities Act (DeLeire, 2000) prohibits job-related discrimination against people with disabilities. The legal obligations resulting from these and analogous laws can be met by offering integrated working solutions or by contracting "sheltered work" or "supported work" organizations (see chap. 2.2.4), which focus on providing adequate conditions for impaired workers.

• The increased cost and time requirements for transport when outsourcing production processes to geographically remote locations make lean production with little storage capacity more difficult, so reasonably priced regional alternatives become more attractive.

• The absolute and relative number of impaired persons is increasing for demographic reasons (see chap. 2.2 Demography and Targeted Users).
Both the company departments integrating impaired workers and the sheltered work organizations are eager to establish systems that empower their employees to meet the rising customer demands and thus become more profitable (Kronberg, 2013). CAAS offer a way to document and improve skill development and to empower workers with impairments to produce more complex components faster and with lower error rates. Ideally they allow some of them to move from sheltered work to supported work, or to stay in regular work contexts when impairments occur – which is especially relevant for the elderly workforce.
1.1.3 Demographic Change
The growing demand for CAAS is also a result of demographic development: especially in European countries, the percentage of older employees is growing (see chap. 2.2.1 Elderly Persons). The prolongation of working lifetime requires many people to work at an older age. The establishment of assistive systems in various areas of life is an efficient way to meet this demographic challenge, and recently the need for adequate assistance has also become apparent in the workplace (Brach & Korn, 2012).
While older employees often excel in knowledge and experience (Dul et al., 2012), it has long been established that they tend to suffer from a gradual reduction of short-term memory (Anders, Fozard, & Lillyquist, 1972), resulting in a decrease of learning abilities (Satre, Knight, & David, 2006). One of the results is an increase of human errors in manual production tasks. Since in industrial production these errors are detected and traced back to the person who made them, many slightly impaired elderly workers decide to retire early – often feeling ashamed and humiliated. CAAS offer a simple and discreet way to augment the decreasing short-term memory. Thus they have the potential to empower impaired workers to do more complex work or to remain in regular jobs for a longer time. In this way CAAS provide economic benefits and address the demographic change by preventing an early loss of an experienced and potentially highly productive workforce.
While in a first step CAAS are clearly developed for elderly and impaired users, in a second step – in the sense of universal design – they are intended to be used by everyone working in production (see chap. 8.3 Future Research).
1.1.4 Pursuing the Ethical Good
Although this work primarily pursues technological questions, we have also been confronted with ethical problems concerning the relation between technology, work and human beings. Since CAAS need to "come close" to their users to gather sensitive data, their design should be subject to ethical standards.
CAAS are being developed and will continue to be developed because current sensor and computer technology allows it. It is our intention to integrate ethical aspects into this development and thus contribute to a future state of the art that incorporates both ethical and technological concepts. We do so by scaling the assistance according to the user's individual physical and emotional needs and by implementing motivating elements that counter the disembodiment of work (Cardoso Machado, 2011). After all, we highly appreciate the idea of creating motivation and having fun while working.
1.2 Thesis Outline
This thesis consists of 11 chapters. The following table uses abbreviations and simplifications, but it gives an overview of the work's structure on outline levels 1 and 2 and shows which chapters are more elaborate.
Table 1: Thesis chapter structure.

1. Introduction: 1.1 Motivation | 1.2 Thesis Outline | 1.3 Research Questions | 1.4 Methodology | 1.5 Contributions | 1.6 Publications
2. Background: 2.1 Related Disciplines | 2.2 Demography & Users | 2.3 Ethical Dimensions | 2.4 Psychology | 2.5 Computer Science | 2.6 Engineering
3. Related Work: 3.1 Motion Recognition | 3.2 Projection | 3.3 Gamification | 3.4 Assistive Systems at Work | 3.5 Ethical Standards
4. Requirements: 4.1 Standards | 4.2 Constraints | 4.3 Study
5. Model: 5.1 Adapted HAAT | 5.2 Adaptive Game System | 5.3 Flow Curves | 5.4 CAAS Model
6. Implementation: 6.1 Overview | 6.2 Physical Integration | 6.3 Motion Recognition | 6.4 Instructions & Performance | 6.5 Gamification
7. Studies & Evaluation: 7.1 Participants | 7.2 Procedure | 7.3 Apparatus | 7.4 Data | 7.5 Experiment Results | 7.6 Questionnaire Results | 7.7 Qualitative Findings | 7.8 Discussion
8. Conclusion: 8.1 Implications on Work | 8.2 Ethical Implications | 8.3 Future Research
9. Supplement: 9.1 Implementation Details | 9.2 Questionnaire Details | 9.3 Study Result Details
10. References
11. Index
The introduction chapter describes the motivation for this work, outlines the research questions and methodology and lists the major research contributions.

The background chapter contains an overview of the relevant concepts and methods from ethics, psychology, computer science and engineering. The related work chapter adds more detailed information about state-of-the-art technical implementations (e.g. motion recognition and projection) and describes models and methods directly used in the CAAS framework (gamification, assistive systems and ethical standards).

The requirements chapter describes how the requirements were established and presents a market study. It is followed by the model chapter, which addresses the established requirements by proposing an ideal model for CAAS.

In chapter 6 we describe an exemplary implementation of the CAAS covering essential aspects of the requirements and the model, such as motion recognition, projection and gamification.

The prototype is extensively evaluated in chapter 7. The conclusion sums up the findings and returns to the research questions.

The supplement section contains additional material that might be of interest for subsequent studies and implementation efforts based on this work. The reference section is finally followed by the index section (chapter 11), which provides quick access to relevant parts of this work and shows where concepts or persons are first introduced.
1.3 Research Questions
The societal developments mentioned above constitute a need for context-aware assistive systems (CAAS) in industrial work, i.e. systems that provide cognitive assistance for impaired and elderly workers. The aim of the research presented here is to specify the requirements for such systems, abstract a generic model from them, describe the implementation of a prototypical augmented system for production workplaces and evaluate the effects these augmented systems have on work and workers. An additional focus is on the ethical implications of CAAS.
Table 2: Summary of research questions.

RQ1: Which requirements should CAAS in production environments address? (chapters 4, 8)
RQ2: How can the requirements be abstracted into a generic model for CAAS? (chapters 5, 8)
RQ3: How can motion recognition be implemented in CAAS? (chapters 6.3, 6.4, 8)
RQ4: How can projection be implemented in CAAS? (chapters 6.4.1, 7.2.2, 8)
RQ5: How can gamification be implemented in CAAS? (chapters 6.5, 8)
RQ6: What is the quantitative impact on work speed and quality when work is augmented by projection and gamification? (chapters 7.5, 8)
RQ7: What is the quantitative impact on users when work is augmented by projection and gamification? (chapters 7.6, 8)
RQ8: What is the qualitative impact on users when work is augmented by projection and gamification? (chapters 7.7, 8)
RQ9: What is the ethical dimension of CAAS and how can they be designed to be humane? (chapters 2.3, 8.2)
1.4 Methodology
CAAS augmenting the work directly at the workplace are a new field of research. While a large part of this work is based on technologies which have been available for decades (assistive technology using computers, projector systems and computer game design), a decisive element has only recently emerged: motion recognition with 3D body tracking that is applicable outside of research labs. Affordable marker-less human body tracking was a prerequisite for research on process-oriented assistance at workplaces. It allows the kind of implicit and natural interaction (NI) required for a low-key assistance which is "always on". Thus the development of CAAS was initially technology-driven.

The lack of adequate reference systems in the production domain led to a bottom-up approach in which the requirements shaped the model, the model determined the implementation, and the evaluation measured the system's effects on work and workers. While this practical approach was extremely useful for identifying and addressing fundamental challenges, several iterations of this process will be needed to create a market-ready CAAS addressing all requirements.
1.4.1 State-of-the-Art and Requirement Studies
The first approach was to look at the stakeholders in the domain: providers of assistive systems for work (see chap. 3.4) and production companies potentially using such systems. These preliminary studies were necessary to establish the state-of-the-art of assistive systems in production and to identify the delta between this state and the technological possibilities as well as the desires of the production companies. Many undergraduates contributed to this process. Of particular value was the cooperation with the colleagues from production management. The derived requirements are described in chapter 4.
1.4.2 Prototypes
The only way to find out how the augmentations described in the requirements affect work and workers was to build a prototype and test it in the field. This was accomplished within the project ASLM (Assistive Systems for Persons with Impairments Working in Assembly) and subsequently in the project motionEAP (System for Increased Efficiency and Assistance in Production Processes in Factories Based on Motion Detection and Projection). Thus many students as well as the engineering company Schnaithmann contributed to the prototypes' development (see Acknowledgements). The implementation details are described extensively in chapter 6.
1.4.3 Evaluation
Evaluating the behavior of workers with a new prototype is problematic because production is characterized by repetitions and standardized patterns of behavior. However, it was considered an organizational and financial impossibility to accustom each worker to the prototype over several days before testing his or her "normal" behavior. The solution to this problem was to additionally map the state-of-the-art in the prototype and thus "standardize" the alienating effect of the new device across all groups and scenarios. Since the tested workers at the sheltered work organization had not used workplaces with displays before, the state-of-the-art was a new experience for them. The evaluation benefited from the support of several undergraduates and the expertise of the works managers and supervisors at the Beschützende Werkstätte Heilbronn (see Acknowledgements).
1.5 Research Contributions
This thesis makes several contributions to the field of human-computer interaction (HCI) with a focus on assistive technologies:

• we identify and describe the requirements for future context-aware assistive systems (CAAS) at the workplace
• we provide a generic model for CAAS
• we describe the implementation of a prototype
• we evaluate the prototype and identify future research areas
• we analyze and describe the qualitative and quantitative impact of the prototype's two augmentations, projection and gamification
• we discuss the ethical dimension of CAAS and outline a humane approach for the establishment of CAAS at workplaces
1.6 Publications
The work presented here is based on the following publications (in reverse chronological order, starting with the most recent):
• Korn, O., Funk, M., Abele, S., Hörz, T., & Schmidt, A. (2014). Context-aware Assistive Systems at the Workplace: Analyzing the Effects of Projection and Gamification. In PETRA '14: Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments. New York, NY, USA: ACM. [in print]

• Funk, M., Korn, O., & Schmidt, A. (2014). An Augmented Workplace for Enabling User-Defined Tangibles. In CHI EA '14: Extended Abstracts of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 1285–1290). New York, NY, USA: ACM.

• Funk, M., Korn, O., & Schmidt, A. (2014). Assistive Augmentation at the Manual Assembly Workplace using In-Situ Projection. In CHI '14 Workshop on Assistive Augmentation, April 27th, 2014.

• Korn, O., Abele, S., Schmidt, A., & Hörz, T. (2013). Augmentierte Produktion: Assistenzsysteme mit Projektion und Gamification für leistungsgeminderte und leistungsgewandelte Menschen [Augmented production: assistive systems with projection and gamification for persons with impairments or changed abilities]. In Boll, S., Maaß, S., & Malaka, R. (eds.): Tagungsband der Konferenz Mensch & Computer 2013 (pp. 119–128). Munich: Oldenbourg Verlag.

• Korn, O., Schmidt, A., & Hörz, T. (2013). Augmented Manufacturing: A Study with Impaired Persons on Assistive Systems Using In-Situ Projection. In PETRA '13: Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 21:1–21:8). New York, NY, USA: ACM. doi:10.1145/2504335.2504356

• Korn, O., Schmidt, A., & Hörz, T. (2013). The Potentials of In-Situ-Projection for Augmented Workplaces in Production: A Study with Impaired Persons. In CHI EA '13: Extended Abstracts of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 979–984). New York, NY, USA: ACM. doi:10.1145/2468356.2468531

• Korn, O., Brach, M., Schmidt, A., Hörz, T., & Konrad, R. (2012). Context-Sensitive User-Centered Scalability: An Introduction Focusing on Exergames and Assistive Systems in Work Contexts. In S. Göbel, W. Müller, B. Urban, & J. Wiemeyer (eds.), E-Learning and Games for Training, Education, Health and Sports (Vol. 7516, pp. 164–176). Berlin, Heidelberg: Springer. http://www.springerlink.com/index/10.1007/978-3-642-33466-5_19

• Korn, O., Schmidt, A., Hörz, T., & Kaupp, D. (2012). Assistive System Experiment Designer ASED: A Toolkit for the Quantitative Evaluation of Enhanced Assistive Systems for Impaired Persons in Production. In ASSETS '12: Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 259–260). Boulder, Colorado, USA: ACM Press. doi:10.1145/2384916.2384982

• Korn, O. (2012). Industrial Playgrounds: How Gamification Helps to Enrich Work for Elderly or Impaired Persons in Production. In EICS '12: Proceedings of the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (pp. 313–316). New York, NY, USA: ACM. doi:10.1145/2305484.2305539

• Korn, O., Schmidt, A., & Hörz, T. (2012). Assistive Systems in Production Environments: Exploring Motion Recognition and Gamification. In PETRA '12: Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 9:1–9:5). New York, NY, USA: ACM. doi:10.1145/2413097.2413109
2 BACKGROUND
Since its foundation in the nineties, as described in Assistive Technologies: Principles and Practice (Cook & Hussey, 1995), assistive technology has always been an interdisciplinary field. While it originally focused on persons with motoric or cognitive impairments, assistive technology has long transcended these traditional boundaries. Route guidance systems in cars have become regular aids in everyday life, and more recent developments like Google Glass show that the desire for "enhancements" or "augmentations" of the human body is not only a technical possibility but market-driven. In spite of these rapid advances, Vanderheiden comes to a skeptical conclusion on the future of assistive technologies in an article on "ubiquitous accessibility":

   However, the high cost of assistive technologies, especially assistive technologies that can cope with ever-evolving mainstream technologies, is putting the assistive technology that will work with newer mainstream technologies out of the reach of many or most. (Vanderheiden, 2008)

The following sub-chapters will show that this view has to be revised: the "ever-evolving mainstream technologies" partly diffuse into the realm of assistive technologies, allowing new and comparatively low-priced applications. The combination of low-price motion recognition with low-price projectors, as described and evaluated in this work, is a good example of this development. In fact, these advances allow assistive technologies with capabilities so far beyond established forms of assistance that the ethical implications of their usage have to be carefully considered (see chapters 2.3 Ethical Dimensions and 8.2 Ethical Implications: Towards Humane Work).
2.1 Related Disciplines
To make this work more accessible to readers from various disciplines, we will briefly outline relevant concepts for the design and development of context-aware assistive systems (CAAS) in the following sub-chapters:

• Engineering: production management, digital factories and cyber-physical systems (CPS), methods-time measurement (MTM), assembly workplaces and assistive systems in production environments

• Computer Science: human-computer interaction (HCI), assistive technologies, user-centered software design or user experience design (UX), augmented reality (AR), motion recognition, implicit interaction, natural interaction (NI) and artificial intelligence (AI)

• Psychology and Ethics: assisted working, cognitive workloads, flow and motivation, ethical issues
[Figure 2: Disciplines related to context-aware assistive systems. The diagram arranges the concepts listed above, grouped into Engineering, Computer Science, and Psychology & Ethics, around the central topic of context-aware assistive systems.]
Although these disciplines partly overlap and finally converge in the research topic presented here, they are vast research areas. For this reason the background and state-of-the-art chapters focus only on areas and findings relevant for the research on context-aware assistive systems.
2.2 Demography and Targeted Users
2.2.1 Elderly Persons
According to the United Nations World Population Prospects (United Nations, Department of Economic and Social Affairs, Population Division, 2013), even on a global scale the population aged 60 or over (“the elderly”) is the fastest-growing group. In the more developed regions, the ratio of the elderly is increasing at 1.0% per year before 2050 and 0.11% annually from 2050 to 2100; the group is expected to grow by 45% by the middle of the century, rising from 287 million in 2013 to 417 million in 2050 and to 440 million in 2100.

In the less developed regions, the population aged 60 or over is currently increasing at the fastest pace ever, 3.7% annually in the period 2010-2015, and is projected to increase by 2.9% annually before 2050 and 0.9% annually from 2050 to 2100; its numbers are expected to rise from 554 million in 2013 to 1.6 billion in 2050 and to 2.5 billion in 2100.
Figure 3: Percentage of persons aged 60 and over in 2012 and 2050. Adapted from United Nations, Department of Economic and Social Affairs, Population Division, Population Ageing and Development 2012.
The UN World Population Prospects also indicates reasons for this development:

• increased life expectancy
• the baby boom generation reaching their sixties
• an inversion in the population pyramid concentrated around the sixties

Accordingly, one of the most important challenges governments around the world are facing is maintaining or improving the quality of life for the elderly. This is especially relevant for the “more developed regions”3, where the proportion of the cohort aged 60 and over is growing the fastest:
Figure 4: Development of age groups 1950 to 2100 in more / less developed regions. Adapted from United Nations, Department of Economic and Social Affairs, Population Division, Population Ageing and Development 2012.
As Figure 4 shows, the population graphs move almost in parallel in the less developed regions, while in the more developed regions the 60+ cohort is growing the fastest and from 2025 onwards will be the largest group. The rapid growth of this cohort is referred to as “demographic change” by the affected countries; it will continue until about 2060, when it will level off into a sideward movement with only slight growth.
3 In UN terminology the more developed regions comprise all regions of Europe and Northern America, as well as Australia, Japan and New Zealand. The less developed regions comprise all regions of Africa, Asia (excluding Japan) and Latin America and the Caribbean, as well as the regions of Melanesia, Micronesia and Polynesia.
In the next 50 years the more developed regions will have to find technological, social and ethical solutions to the ever-growing relative and absolute numbers of elderly people. A good measurement is the “old-age support ratio”: the number of persons aged 15-64 per person aged 65 or over. These working-age persons are the potential caretakers and also the potential taxpayers supporting the social and medical institutions used by the elderly. In Germany, Italy and Sweden there are only about 3 working-age individuals for each older person; the only country with even fewer (about 2.5) is Japan. By contrast, countries such as Bahrain or the United Arab Emirates have over 30 persons of working age per older person. European countries tend to cluster at the lower end of support ratios, while countries from Western Asia, South-Central Asia and sub-Saharan Africa tend to be at the higher end. Most countries in the world are in an intermediate transitional phase, with old-age support ratios between 5 and 20 persons of working age per older person.
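Expressed as a formula, the support ratio is simply the working-age population divided by the population aged 65 or over. A minimal sketch, using illustrative population figures rather than UN data:

    # Minimal sketch: computing the old-age support ratio as defined above.
    # The population figures are illustrative placeholders, not UN data.

    def old_age_support_ratio(pop_15_to_64, pop_65_plus):
        """Number of working-age persons (15-64) per person aged 65 or over."""
        return pop_15_to_64 / pop_65_plus

    # Example: 50 million working-age persons and 16 million persons aged 65+
    # yield a ratio of about 3.1, comparable to Germany, Italy or Sweden.
    print(round(old_age_support_ratio(50_000_000, 16_000_000), 1))  # 3.1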
Obviously the European Union (EU) is the first cluster of nations to experience and to address the challenges of a rapidly increasing ratio of elderly people. Accordingly, the “Ambient Assisted Living Joint Programme” (AAL) has been installed both on the European level and on the national levels of the 22 participating European countries. The AAL programme tries to bring together the special user requirements of elderly persons and technological solutions. After focusing on health, care and assisted homes, in 2013 the 6th call “ICT-based Solutions for Supporting Occupation in Life of Older Adults” broadened the research focus to integrate “the domain of work in an office, a factory or any working environment […] and to strengthen the industrial base in Europe” (AAL Contents Working Group, Task Force, 2013).
2.2.2 Disabled and Impaired Persons
When we talk about “the elderly”, it has been established that the term refers to persons aged 60 and above. However, when we talk about “disabilities”, the classification is far more difficult – especially since recent approaches aim to integrate the interaction of disabled individuals with the society they live in. In the International Classification of Functioning, Disability and Health (ICF), the WHO (World Health Organization) defines disabilities as follows:

Disability is an umbrella term for impairments, activity limitations and participation restrictions. It denotes the negative aspects of the interaction between an individual (with a health condition) and that individual’s contextual factors (environmental and personal factors).
(World Health Organisation, 2001)
The ICF categorizes problems with human functioning in three interconnected areas:

• impairments are problems in body function or alterations in body structure, e.g. paralysis or blindness;
• activity limitations are difficulties in executing activities, e.g. walking or eating;
• participation restrictions are problems with involvement in any area of life, e.g. facing discrimination in employment or transportation.
A major advantage of this classification is the widespread acceptance of the ICF: it was officially endorsed by all 191 WHO Member States in the 54th World Health Assembly in 2001. It marks a decisive shift in the understanding of the concept of disability, best described by the ICF itself:

The ICF puts the notions of ‘health’ and ‘disability’ in a new light. It acknowledges that every human being can experience a decrement in health and thereby experience some degree of disability. Disability is not something that only happens to a minority of humanity. The ICF thus ‘mainstreams’ the experience of disability and recognises it as a universal human experience. By shifting the focus from cause to impact it places all health conditions on an equal footing allowing them to be compared using a common metric – the ruler of health and disability. Furthermore ICF takes into account the social aspects of disability and does not see disability only as a 'medical' or 'biological' dysfunction. By including Contextual Factors, in which environmental factors are listed, ICF allows to record the impact of the environment on the person's functioning.
(World Health Organisation, 2001)
Disability now refers to the negative aspects of the interaction between individuals with a health condition (such as cerebral palsy, Down syndrome, depression) and personal and environmental factors (such as negative attitudes, inaccessible transportation and public buildings, and limited social supports). The following model shows how a combination of internal and external factors affects activities:

Figure 5: ICF classification incorporating environmental and personal factors, WHO 2011.
In many countries this holistic approach or “bio-psycho-social model” has not yet been translated into national law. The authoritative German Code of Social Law IX (Sozialgesetzbuch IX) defines disabilities as follows:

Human beings are disabled if their bodily function, their mental abilities or their emotional health with a high probability deviate from the typical state in their age for more than six months and for that reason their participation in societal life is compromised.4
(SGB IX, 2001, § 2.1)

In this text, disability is still described as an impairment of the individual which might have societal consequences – whereas the ICF’s definition allows an impairment in itself to be a result of societal behavior, e.g. mental and physical barriers. Germany’s Code of Law is no exception: in most countries the ICF’s universal understanding of disabilities is just beginning to be established, both in the existing medical frameworks and in the organizations working with impaired persons.
Accordingly, the harmonization and standardization of question sets for the assessment of health and disability across different nations is in progress but far from established. The data presented here are based on the World Health Survey (World Health Organization, 2004), conducted from 2002 to 2004, which is “the largest multinational health and disability survey ever using a single set of questions and consistent methods to collect comparable health data across countries” (World Health Organisation & World Bank, 2011, p. 25). The following passages are based on this survey.

A total of 70 countries were surveyed, of which 59, representing 64% of the world population, had weighted data sets that were used for estimating the prevalence of disability in the world’s adult population aged 18 years and older. The survey aggregated a disability score ranging from 0 to 100, where 0 represented “no disability” and 100 “complete disability”.

The average prevalence rate derived for the adult population aged 18 years and over was 15.6% (some 650 million people of the estimated 4.2 billion adults aged 18 and older in 2004), ranging from 11.8% in higher income countries to 18.0% in lower income countries. This figure refers to adults who experienced significant functioning difficulties in their everyday lives. The average prevalence rate for adults with very significant functioning difficulties was estimated at 2.2% or about 92 million people in 2004.
4 German original: Menschen sind behindert, wenn ihre körperliche Funktion, geistige Fähigkeit oder seelische Gesundheit mit hoher Wahrscheinlichkeit länger als sechs Monate von dem für das Lebensalter typischen Zustand abweichen und daher ihre Teilhabe am Leben in der Gesellschaft beeinträchtigt ist.
Figure 6: Disability prevalence rates for thresholds 40 and 50, derived from multi-domain functioning levels in 59 countries, by country income level, sex, age, place of residence, and wealth. WHO 2011.
The alternative Global Burden of Disease study in its 2004 update (Lopez, Mathers, Ezzati, Jamison, & Murray, 2006) arrives at similar numbers: it estimates that 15.3% of the world population (some 978 million people of the estimated 6.4 billion in 2004) had “moderate or severe disability”, while 2.9% or about 185 million experienced “severe disability”.

Based on the population estimates for 2010 (6.9 billion people, with 5.04 billion aged 15 years and over and 1.86 billion under 15 years) and the 2004 disability prevalence estimates (World Health Survey and Global Burden of Disease), there were around 785 million (15.6%) to 975 million (19.4%) persons 15 years and older living with disability. Of these, around 110 million (2.2%) to 190 million (3.8%) experienced significant difficulties in functioning. Including children, over a billion people (or about 15% of the world’s population) were estimated to be living with disability.
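The ranges quoted above follow directly from multiplying the population base with the two prevalence estimates; a minimal check (figures taken from the text, small deviations are due to rounding):

    # Minimal check of the prevalence ranges quoted above.
    pop_15_plus = 5.04e9      # persons aged 15 and over in 2010
    low, high = 0.156, 0.194  # World Health Survey vs. Global Burden of Disease

    print(round(pop_15_plus * low / 1e6))   # 786 million (text: ~785)
    print(round(pop_15_plus * high / 1e6))  # 978 million (text: ~975)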
2.2.3 Aging and Disability
Since disabilities are usually interpreted in relation to what is considered “regular functioning”, several studies found differences between self-reported and measured aspects (Andresen, Fitch, McLendon, & Meyers, 2000; Ikeda, Murray, & Salomon, 2009). Elderly persons frequently do not think of themselves as being disabled because they consider their deficits “normal” or appropriate for their age. This also depends on the degree of support and on social standards: the prevalence of disability among people aged 60 years and above was 43.4% in lower income countries compared with 29.5% in higher income countries (World Health Organisation & World Bank, 2011, p. 27).

In spite of these findings the overall relationship between ageing and disability is straightforward: there is a higher risk of disability at older ages, with the higher disability rates among older people reflecting the accumulation of health risks across a lifespan of disease, injury, and chronic illness (Australian Institute of Health and Welfare, 2004). The correlation between age and disability prevalence is obvious in both low- and high-income countries:
Figure 7: Age-specific disability prevalence, derived from multi-domain functioning levels in 59 countries, by country income level and sex. WHO 2011.
Given this strong correlation between age and disability prevalence, global ageing has a major influence on disability trends: both the absolute and the relative number of persons with disabilities will grow. This growth is amplified by the fact that the fastest-growing age cohort worldwide, increasing at 3.9% a year, is that of persons aged 80 to 89 years – and at this age the rates of disability are very high (Robine & Michel, 2004).

While elderly persons are still at work, the awareness of impairments is heightened: although elderly employees often excel in knowledge and experience, it was established 40 years ago that their short-term memory continually declines (Anders et al., 1972), which results in an increase of human errors in complex activities (Salthouse, 1990) as well as a decrease of learning abilities (Satre et al., 2006). The effects of cognitive deficits are discussed in more detail in chap. 2.4.2 Cognitive Workloads.

Since cognitive deficits currently are hardly countered by technological methods5, many elderly workers decide to retire early as soon as they feel the impact on their regular functioning. In production environments, such deficits are recognized very early, because time criticality is combined with frequent quality controls (see chap. 4.2.2 Total Quality). The decision to retire early has negative impacts on the economy (a former worker and taxpayer becomes a pensioner and the old-age support ratio declines) and potentially also on the individual (work is an essential component of self-esteem, see chap. 2.3.2 Value of Work). For this reason cognitive deficits should actively be countered by assistive systems which support the workers cognitively – both in regular workplaces and in sheltered or supported work.
2.2.4 Sheltered Work and Supported Work
The concept of sheltered or supported work offers a protected environment to persons with impairments who cannot compete on the regular job market. There are many different forms of sheltered employment, ranging from dedicated institutions to daycare centers. Virtually all of them can be classified into two types (Kregel & Dean, 2002):

• Supported work: transitional employment programs intended to provide training and experience to individuals in segregated settings, so that they will be able to acquire the skills necessary to succeed in subsequent competitive employment.

• Sheltered work: extended employment programs designed for long-term or permanent placements, allowing individuals to use their abilities to earn wages in the segregated setting.
The majority of adults with cognitive disabilities work in segregated, sheltered employment settings. Although sheltered work organizations fulfill a social task in providing sheltered working conditions, in many countries they are also required by law to operate economically, e.g. in Germany (Kronberg, 2013). However, from an economic perspective it would be beneficial to empower impaired persons to move to transitional programs, because sheltered work organizations normally depend on cash benefit programs.

5 Instead, organizational methods are used, e.g. reducing the complexity of the work tasks. This perceived loss of status is often combined with a transfer into another team, which again causes social stress. As a result many workers prefer to quit their job completely.

From the perspective of inclusion, i.e. the integration of persons
with functioning deficits into regular contexts of living, an empowerment of the users would also be beneficial. It has been shown that, once in sheltered employment, only very few persons are able to return to competitive employment (Murphy & Rogan, 1995). A ten-year longitudinal analysis of 877 individuals with cognitive disabilities produced significant evidence indicating that integrated as opposed to sheltered employment improves the employment outcomes. The objection that two different sets of users were compared has been refuted, as “surprisingly little difference exists in the demographic profiles of persons working in integrated versus sheltered employment settings” (Kregel & Dean, 2002, p. 81).

Context-aware assistive systems (CAAS) could improve the work conditions for impaired persons in sheltered work organizations by empowering them to produce more complex products at better quality. This results in higher pay for the worker, increases the sheltered work organizations’ profits and decreases the public funding – a potential “win-win-win situation”. CAAS also have the potential to ease the transition of persons from sheltered work to integrated work. A third advantage is that these assistive systems can help users to remain in regular work contexts even when impairments occur – a perspective which is especially relevant for the elderly workforce.
2.3 Ethical Dimensions
According to the WHO World Report on Disability, “disability is increasingly understood as a human rights issue” (World Health Organisation & World Bank, 2011). This makes the use and development of assistive technology an ethical subject.

To understand the value of assistive technology in work contexts and the ethical questions arising from its use, it is helpful to look back in history and see how the concepts of work and technology changed over time. This analysis can only be very brief; it is not intended for readers with an academic background in philosophy or ethics but for interested readers from other disciplines.
2.3.1 Value of Technology
There was a time when science, technology and art were one – the mythological times. From ancient Greece there is a famous dialogue between the rhapsodist Ion of Ephesus and Socrates, in which the latter divides these three concepts (Schlaffer, 2005). He does so by proving to Ion that while he may be able to sing and recite Homer’s songs, i.e. apply the art of poetry (poietike), he can neither perform the arts or techniques (techne) described in the songs himself, nor does he know the concepts or the science beneath those arts (episteme). He proves this using examples like medicine, the military, ship navigation and driving chariots. Thus Socrates gives an early definition of professionalism (knowledge and ability) and separates science and technology from the fine arts and poetry, which in his understanding are not part of the world of reason.

However, in ancient Greece this did not mean the arts were inferior – instead they were considered gifts of the gods. Socrates’ view did not represent the mainstream – and many Greeks were persuaded that all important things could be learned by reading Homer’s epic poems, the Iliad and the Odyssey. In fact there is an established fourfold argument of ancient technology criticism (Mitcham, 1994, p. 282):

• the will to technology or the technological invention often involves turning away from faith or trust in nature or providence
• technical affluence and the concomitant processes of change tend to undermine individual striving for excellence and societal stability
• technological knowledge likewise draws human beings into intercourse with the world and obscures transcendence
• technical objects are less real than nature

With some exceptions, this skeptical attitude towards technology and science persisted for centuries, through the Middle Ages until the Renaissance.
2.3.2 Value of Work
The historical development of the value of work resembles the described development in the perception of technology. In antiquity, work was understood as a basic necessity best done by slaves. Even administrative work like the management of estates was considered a necessary evil. The aim was to spend time with art, religion or simply with life’s pleasures. The only exceptions were architecture, medicine, politics and the military – but, as the historian of art and culture Jacob Burckhardt explained, these occupations were not considered work but special forms of art (Burckhardt, 1999).

This low esteem of work persisted into Christian times. In a central passage of the Bible, work is described as a form of punishment: “By the sweat of your face you shall eat bread, till you return to the ground, for out of it you were taken; for you are dust, and to dust you shall return.” (Genesis 3:19, English Standard Version). Until late medieval times the doctrine “ora et labora” (pray and work) was not only common monastic practice but also the church’s guide to a humble life agreeable to God. Working was by no means considered a form of self-fulfillment but a form of submission.

This changed when the Protestant reformer Martin Luther (1483-1546) re-interpreted the concept of “Schickung” (approximate translation: divine dispensation). He taught that salvation is not earned by good deeds but received through faith as a free gift from God. At first glance this seems to decrease the value of work even further. However, in the influential essay “The Protestant Ethic and the Spirit of Capitalism” (1904-1905), the German sociologist, philosopher and economist Max Weber showed that Luther dignified mundane professions as adding to the common good and thus being blessed by God (Weber, 2009). Lutheran Protestants perceived working as devoting effort to the praise of God.

In the English-speaking part of Europe a similar line of argument was put forward by Francis Bacon (1561-1626), who maintained that “God has given humanity a clear mandate to pursue technology [by working] as a means for the compassionate melioration of the suffering of the human condition” (Mitcham, 1994, p. 284).

This Protestant work ethic (or, from a historical perspective, Renaissance work ethic) was very successful. Viewing work as an end in itself or as a “calling” made work more meaningful and potentially increased work motivation. At the same time this work ethic corresponded to the capitalistic model of the world. It started to emancipate itself from its religious roots and shaped the rise of a new form of economy in which a dishwasher could become a millionaire. The historical rise of the value of work and technology is illustrated in Figure 8.
[Figure 8: Historical development of the concepts work and technology – a timeline from Greek and Roman antiquity through medieval times and the Renaissance to today, with Luther, Bacon and the rise of capitalism marking the increasing value of both concepts.]
While the Protestant work ethic raised the social prestige of work, there were strong countervailing forces pushing it downwards. The early days of industrialization, when social security, regulated working hours and safety standards were largely unknown, took a high death toll. Also on the conceptual level, industrialization implies a division of labor and thus a decontextualizing or, as Polanyi puts it, a “disembedding” of work actions (Polanyi, 1957). “Disembedded” work is driven by forces independent of the human body, and often the work results do not relate to concepts within the workers’ minds, i.e. the workers do not know what the product component they are working on is good for.

Today many work conditions have changed for the better; however, increasing specialization still results in a high level of disembedding in most jobs. In fact the number of critical voices regarding today’s eminent prestige of work is growing. Recent discussions about the increasing number of “burnout” patients (Brand & Holsboer-Trachsler, 2010) and the loss of “work-life balance” indicate that the prestige of work has reached a plateau and might even have come to a point where a change in trend is a likely future scenario.
Nevertheless, there currently still is a broad social consensus that a reasonable amount of work is “good” and meaningful. Using the day reconstruction method, in which activities are assessed on an affective scale (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004), the positive effect of work on well-being has been shown empirically by comparing 600 employed and unemployed persons (Knabe, Rätzel, Schöb, & Weimann, 2010). Interestingly, the positive effect is not derived from the time spent working – in fact the activity of working is rated lowest on the affective scale – but from the resulting income and its positive effect on overall life satisfaction (and probably the social prestige).
2.3.3 Implications for the Work of Persons with Disabilities
If we look at the historical development of the value of work and technology, both have reached a peak in prestige. Work has transcended the time when it was just a means to earn a living, and technology has transcended the phase when it was just a tool to an end.

It has already been described that in recent years there has been a shift in the understanding of what being disabled means (see chap. 2.2.2 Disabled and Impaired Persons). Instead of focusing on needs and using a “rhetoric of compassion” (Rogers & Marsden, 2013), the new focus lies on empowerment. This shift implies a rising importance of the environment or the context. Designing for accessibility or designing for diversity (see chap. 2.5.2 Assistive Technology) leads to questions like: what would a work environment look like that allows impaired persons to be included?

Context-aware assistive systems (CAAS) that augment regular workspaces without physically changing them clearly will be a huge step towards universal access. In due time, implementing such systems will rise from a vision to a requirement in the industrialized countries. Already in 2008 the proactive approach of inclusion was supported by the United Nations (UN), when in the “Convention on the Rights of Persons with Disabilities” work was established as a right (Article 27: Work and Employment):

States Parties recognize the right of persons with disabilities to work, on an equal basis with others; this includes the right to the opportunity to gain a living by work freely chosen or accepted in a labour market and work environment that is open, inclusive and accessible to persons with disabilities. States Parties shall safeguard and promote the realization of the right to work, including for those who acquire a disability during the course of employment, by taking appropriate steps, including through legislation, to, inter alia […] (United Nations, 2008)

The idea to interpret work as a human right is a direct result of the risen prestige of the concept of work. It implies that national laws will begin converting this new approach into binding legislation. The ethical consequences of this process are discussed in chap. 8.2 Ethical Implications: Towards Humane Work.
2.4 Psychology
When humans interact with computers, human behavior and thus psychology comes into play. In their interactions with computers, humans frequently follow their emotions and instincts. Even in their most primitive forms, computers with their ascribed intelligence (see chap. 2.5.4 Artificial Intelligence) stirred emotions ranging from disgust and bewilderment to ecstasy. Marvin Minsky’s famous 1970 statement6 that “in from three to eight years we will have a machine with the general intelligence of an average human being […] that will be able to read Shakespeare […] tell a joke, have a fight” pointedly illustrates the exuberant expectations of these early years.

Over time the expectations became more rational, and the awareness grew that computers are machines that will fundamentally differ from humans for a long time to come. In the last decade, however, advances in the fields of sensors and algorithms allowed an ever-growing part of the spectrum of human abilities to be analyzed, interpreted and partially mirrored by computers. Thus the user experience (UX) can be designed to fit the human mindset.

In the following we will introduce a model that describes the interactions between humans and assistive technology. The model is complemented by the concept of cognitive workloads and finally by the concept of flow. Their combination offers a conceptual framework for designing interactive systems that affect human emotions in a positive way, as described in the CAAS model presented in this work (see chap. 5 Model and Architecture).
2.4.1 HAAT-model
When human activity is assisted, behavior patterns have to be analyzed and interpreted to generate appropriate interventions. While the sensors integrated in CAAS provide access to real-world data, the selection and interpretation of this data depends on the underlying model.

The Human Activity Assistive Technology Model (HAAT) (Cook & Hussey, 1995) has formalized the interactions between assistive technology and users. It is based on the basic Human Performance Model (Bailey, 1989), which already separates human, activity and context; this model is changed by adding “assistive technology” and moving “context” into the frame (see Figure 9).

6 The statement was published in Life Magazine. On several occasions Minsky stated that he was misquoted. As is often the case, this counterstatement could not balance the impact of the alleged assertion.
Figure 9: HAAT-model by Cook & Hussey, 1995.

The HAAT-model describes four basic components and functions of assistive technology:

• activity (areas of performance, e.g. work, play, school)
• human (“intrinsic enabler”, including skills)
• context (setting: social, cultural, physical)
• assistive technology (“extrinsic enabler”): integrates a human-technology interface (HTI), an activity output, a processor and an environmental interface
In production environments the task might be the assembly of a specific product, the user a person with cognitive impairments and the context an assembly workshop in a sheltered work organization. While the impaired person with his or her abilities and skills is seen as the intrinsic enabler, the CAAS would be the extrinsic enabler. Interestingly, the terms Cook & Hussey use to describe the human strongly resemble the terms used to describe systems in computer science: the user gathers “sensory input”, the corresponding brain is called “central processing” (subdivided into perception, cognition, memory and motor control), and legs, hands, fingers etc. are called “effectors”.
The human-technology interface can be “unicausal or bicausal”, so a direct and natural interaction with the user is already inherent to the model – although the authors rather thought of prosthetic limbs than of augmented reality. The isomorphism regarding sensor-based computational systems makes the HAAT model a good starting point for a model for CAAS (see chap. 5 Model and Architecture).
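To illustrate this isomorphism, the four HAAT components can be written down as a simple data structure. The following is a minimal sketch with illustrative names and values; it is an interpretation made for this text, not part of Cook & Hussey’s formalization:

    # Minimal sketch of the HAAT components as data structures.
    # All class and field names are illustrative, not part of the original model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Human:
        """The 'intrinsic enabler': sensory input, central processing, effectors."""
        skills: List[str] = field(default_factory=list)

    @dataclass
    class AssistiveTechnology:
        """The 'extrinsic enabler' with its four sub-components."""
        human_technology_interface: str  # e.g. in-situ projection
        environmental_interface: str     # e.g. a motion depth sensor
        processor: str                   # e.g. middleware interpreting motions
        activity_output: str             # e.g. instant assembly feedback

    @dataclass
    class HAATInstance:
        activity: str   # area of performance, e.g. assembly work
        human: Human
        context: str    # setting, e.g. a sheltered work organization
        technology: AssistiveTechnology

    # Example instantiation for the production scenario described above:
    example = HAATInstance(
        activity="assembly of a specific product",
        human=Human(skills=["grasping", "placing parts"]),
        context="assembly workshop in a sheltered work organization",
        technology=AssistiveTechnology(
            human_technology_interface="in-situ projection",
            environmental_interface="motion depth sensor",
            processor="real-time motion interpretation middleware",
            activity_output="instant visual feedback",
        ),
    )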
2.4.2 Cognitive Workloads
Assistive systems designed for universal access also address users with impairments (see chap. 2.2 Demography and Targeted Users). With these users, some of the recommendations regarding cognitive support, like “simple error handling” or “reducing the short-term memory load” (see chap. 4.1 Standards for Designing Interactive Systems), become more important than with users in best health.

Salthouse analyzed the process of “cognitive aging” and found that the role of working memory is crucial (Salthouse, 1990) – a universal finding that can be generalized to disabled persons suffering from related cognitive deficits. With working memory defined as the part of the brain responsible for concurrent information processing and storage, he introduces the desktop metaphor to illustrate its function: like our brains, desktops are used both for processing and for storing information. Typical problems in human simultaneous processing and storage abilities can be illustrated by different “desktops”:
[Figure 10: Different problems related to working memory – the regular structure of working memory contrasted with the problem of a smaller workspace, the problem of greater processing requirements, and the problem of filtering and organizing information. Adapted from Salthouse, 1990.]
Salthouse’s experiments indicated that older adults use a smaller capacity of working memory while performing cognitive tasks and that out-of-context tasks (i.e. tasks not related to everyday activities, e.g. repeating shown numbers in reversed order) are especially problematic. Using new information and determining its veracity was also more difficult for older people.

These findings have to be taken into account when designing assistive systems with new forms of augmentation. While the augmentations can potentially support the working memory (the “storage”), they might also draw the limited processing power away from the main task.
2.4.3 Flow
One of the augmentations implemented and analyzed in the context of CAAS is gamification (see chap. 3.3). In this work gamification is seen as a means to achieve “flow” – a mental state in which a person feels fully immersed in an activity, experiences an energized focus and believes in the success of the activity. This state, where high skill and adequate challenge converge, was first proposed by Csíkszentmihályi in 1975.

In various publications (Csíkszentmihályi & Nakamura, 2002; Csíkszentmihályi & Rathunde, 1992; Csíkszentmihályi, 1975) he and other authors identify the following factors as accompanying flow:

1. A high degree of concentration on a limited field of attention (a person engaged in the activity will have the opportunity to focus and to delve deeply into it).
2. A loss of the feeling of self-consciousness, the merging of action and awareness.
3. A distorted sense of time: one’s subjective experience of time is altered.
4. A sense of personal control over the situation or activity.
5. A lack of awareness of bodily needs (to the extent that one can reach a point of great hunger or fatigue without realizing it).
6. Absorption into the activity, narrowing the focus of awareness down to the activity itself.
Figure 11: Mental states resulting from the interaction between level of challenge and skill.

The figure above shows the spectrum of mental states and links them to challenge and skill. Thus the concept of flow inherently integrates Bailey’s human performance model described at the beginning of this chapter: high skill results in high performance. However, Csíkszentmihályi adds the element of challenge and puts forward the idea that only the combination of high skill and a high challenge level can create flow.
This does not imply that flow is a mental state limited to highly skilled persons. The method to achieve flow is adjusting the challenge level to meet the user’s current skill level. This adaptation requires permanent feedback, which can be achieved either by highly interactive applications like games – or by integrating sensors that measure and interpret human behavior, as described in the HAAT model.
In a further rendition of the concept, four conditions were deemed necessary to achieve the flow state (Csíkszentmihályi, Abuhamdeh, & Nakamura, 2005):

• One must be involved in an activity with a clear set of goals. This adds direction and structure to the task.
• One must have a good balance between the perceived challenges of the task and one’s own perceived skills, and the confidence to be capable of doing the task.
• The task at hand must have clear and immediate feedback. This helps the person negotiate changing demands and allows them to adjust their performance to maintain the flow state.
• The activity is intrinsically rewarding, so there is a perceived effortlessness of action.
Clearly, achieving a flow state depends strongly on the task. Many tasks are vague and have unclear borders, e.g. writing or planning: often both time and quality are undefined, so the result inevitably feels “unfinished”. If such open tasks are designed to be measurable and clearly defined – like quests in games – they often feel artificial: both a word constraint for an essay and a time constraint for planning are not task-inherent. The process of re-designing real-world tasks to feel like games is called “gamification” (see chap. 3.3 Gamification). However, many tasks are highly structured and easily measurable in principle: these tasks are well-suited for creating conditions that allow achieving a flow state. Amongst many others, assembly work in production environments is such a task. Assembling countable parts provides both a clear objective and immediate visual feedback.
If tasks in production are re-designed to create and preserve a feeling of flow, they ideally have to scale to match a person’s changing level of performance. Otherwise a worker encountering a longer phase of underperformance (e.g. as a result of a physical or mental problem) would permanently receive negative feedback and quickly move to negative mental states like arousal, anxiety or worry. The required scaling of the performance intensity can be achieved by reducing the task complexity or by reducing the time requirements (see chap. 6.5 Gamification Component).
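A minimal sketch of such a scaling rule, assuming a hypothetical target cycle time and illustrative thresholds rather than the values of the implemented gamification component:

    # Minimal sketch of challenge scaling to preserve flow. Thresholds and
    # scaling steps are illustrative assumptions.
    from typing import List

    def adjust_target_time(target_s: float, recent_cycle_times_s: List[float]) -> float:
        """Relax or tighten the target assembly time based on recent performance."""
        avg = sum(recent_cycle_times_s) / len(recent_cycle_times_s)
        if avg > 1.2 * target_s:
            # Sustained underperformance: relax the time requirement to avoid
            # permanent negative feedback (worry, anxiety).
            return target_s * 1.1
        if avg < 0.8 * target_s:
            # Sustained overperformance: tighten the requirement to avoid boredom.
            return target_s * 0.9
        return target_s  # challenge matches skill: stay in the flow channel

    # Example: a worker averaging 46 s on a 36 s target gets a relaxed target:
    print(round(adjust_target_time(36.0, [47.0, 45.0, 46.0]), 1))  # 39.6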
2.5 Computer Science
2.5.1 Human-Computer Interaction
From a computer science perspective, an assistive system primarily is a computer-based system integrating data from users with special requirements. This makes assistive systems a sub-discipline of the vast field of human-computer interaction (HCI).

The communication and interaction between users and computers was highly formalized for decades: keyboard, mouse and joystick were the only ways to manipulate things within the system. This is the time when formalized approaches like the GOMS model (Goals, Operators, Methods, and Selection rules) (Card, Moran, & Newell, 1983) or the simplified keystroke-level model (Kieras, 1993) were developed to describe HCI. In analyzing and describing the smallest human movements, these models bear a strong resemblance to approaches used in production management to map and plan human tasks in production processes, like Methods-Time Measurement (MTM; see chap. 2.6.1 Measuring Work Processes with MTM).
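To illustrate how such models quantify interaction, the following keystroke-level sketch sums commonly cited approximate operator durations; the values are textbook estimates and should be treated as assumptions rather than authoritative figures:

    # Minimal keystroke-level model (KLM) sketch. Operator durations are
    # commonly cited approximate values and serve only as illustration.
    KLM_OPERATORS_S = {
        "K": 0.28,  # keystroke (average typist)
        "P": 1.10,  # pointing with a mouse
        "H": 0.40,  # homing the hand between keyboard and mouse
        "M": 1.35,  # mental preparation
        "B": 0.10,  # mouse button press
    }

    def klm_estimate(sequence: str) -> float:
        """Estimated execution time in seconds for a sequence of operators."""
        return sum(KLM_OPERATORS_S[op] for op in sequence)

    # Example: think (M), home to the mouse (H), point at a menu (P), click (B):
    print(round(klm_estimate("MHPB"), 2))  # 2.95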
With the success of webcams in the mass market in the early nineties, widespread sensors began integrating live data from the real world without the need for human interaction. From this point onwards the domain of HCI grew rapidly, with new forms of input (GPS, accelerometers, motion sensors) and new output devices (mobile phones, tablets, projectors) emerging in quick succession.

It is not surprising that exactly in this time the idea of “ubiquitous computing” emerged. Mark Weiser, the pioneer in this field, first described the idea that the “most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” (Weiser, 1999, p. 3)7. He also clear-sightedly envisioned the success of “tabs” (smartphones) and “pads” (tablets) and predicted “hundreds of computers per room”.

This idea of ubiquitous computing was soon expanded. “Implicit HCI” has been defined as “an action, performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input” (Schmidt, 2000). A broad understanding of the term “context” is advised, including the environment, situation, state, surroundings, task etc. The author suggests the use of sensors to equip devices with perception capabilities and provides “active environments” with IR networks as an example – an anticipation of the IR-based motion recognition of the CAAS described in this work (see chap. 5.4 Model for Context-Aware Assistive Systems).
7 First published 1991 in Scientific American, Vol. 265, No. 3, pp. 94-104.
Later the concepts of “embedded interaction” and “implicit use” emerged – they imply that information is embedded into people’s environments (Schmidt, Kranz, & Holleis, 2005). The authors describe the unobtrusive integration of context-specific information on displays in everyday contexts like wardrobes or even umbrella stands. Thus using everyday motions in work environments to implicitly interact with devices and projecting information directly into these work contexts (see chap. 3.2 Projection) are logical advancements of existing HCI lines of research.

Although the concept of implicit interaction was influential in research, it took some years until the small computers and sensors required reached a broad audience. As often happens when information technology crosses the border from specialized applications for industry and research to the mass market, the game industry was a driving force. Nintendo’s Wii, released in late 2006, marked a breakthrough: its wireless primary controller, the Wii Remote, can measure the acceleration along three axes using an accelerometer (combined with an optical sensor). Thus it can be used to interact with items on screen via gestures and pointing. This form of interaction is more direct and “natural” than using a mouse or a keyboard – for this reason it is called “natural interaction” (NI). Soon these gaming technologies were used and adapted by researchers and therapists for assistive systems (see chap. 3.3 Gamification).
Figure 12: Consoles Wii with Wii Remote (left) and Xbox 360 with Kinect (right).
Four years after the release of the Wii, a controller for another gaming console, Microsoft’s Xbox 360, repeated this revolution in HCI: Project Natal, launched as the Kinect in November 2010. The breakthrough was the capability to interpret three-dimensional human body movements in real time. While the Wii still needed the Wii Remote, the Kinect made the human body itself the controller, so the requirements for implicit interaction reached the mass market.
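A minimal sketch of the kind of implicit interaction such depth sensors enable: checking whether a tracked hand joint enters a predefined work zone. The zone layout and coordinates are illustrative assumptions, not the implementation described later in this work:

    # Minimal sketch: zone-based implicit interaction with tracked joints.
    # Coordinates are in meters in the sensor's coordinate system; the zone
    # layout is an illustrative assumption.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        """Axis-aligned box above the work surface."""
        name: str
        x_min: float
        x_max: float
        y_min: float
        y_max: float
        z_min: float
        z_max: float

        def contains(self, x: float, y: float, z: float) -> bool:
            return (self.x_min <= x <= self.x_max
                    and self.y_min <= y <= self.y_max
                    and self.z_min <= z <= self.z_max)

    # Illustrative zones above an assembly table:
    BINS = [
        Zone("box with screws", -0.4, -0.2, 0.0, 0.2, 0.8, 1.0),
        Zone("assembly area", -0.1, 0.1, 0.0, 0.2, 0.6, 0.8),
    ]

    def interpret_hand_position(x: float, y: float, z: float) -> str:
        """Map a tracked hand-joint position to a work action, if any."""
        for zone in BINS:
            if zone.contains(x, y, z):
                return "hand in " + zone.name
        return "no zone entered"

    print(interpret_hand_position(-0.3, 0.1, 0.9))  # hand in box with screws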
Since this technology plays a crucial role in the development of process-oriented and context-aware assistive systems, it is described in more detail in the related works section (see chap. 3.1 Motion Recognition).

These and other technological advances of recent years have a paradoxical effect: while they bring the computer closer to the human, they also increase the technological distance8. This new situation is a result of the more natural interaction: steering an application by gestures allows the computer to become invisible or ubiquitous, which essentially amounts to the same thing. When Weiser puts forward the argument that “sociologically, ubiquitous computing may mean the decline of the computer addict” (Weiser, 1999, p. 10), he is right regarding computer usage: the easy and intuitive use of today’s devices like smartphones makes knowledge of computers an option instead of a requirement. Accordingly, the ideal assistive system – especially for impaired persons – is ubiquitous and controlled by natural interaction (see chap. 4 Requirements and chap. 5 Model and Architecture).
2.5.2 Assistive Technology
Assistive technology is not confined to persons with impairments. Depending on how broadly the term is understood, cars are assistive systems allowing us to move faster than on foot. The use of tools to increase or “augment” (see next sub-chapter) human abilities has always driven technological development. Even in a narrower sense, today’s widely used driver assistance systems like route guidance systems or head-up displays (HUDs) re-define the design space of cars for un-impaired users (Kern & Schmidt, 2009). However, within this work we focus on research on assistive systems for persons with impairments.

The established disciplines for research on elderly persons or on persons with impairments and disabilities (for the relation between elderly and impaired persons see chap. 2.2.3 Aging and Disability) are medicine, psychology and social pedagogy. Assisting these target groups often implies combining technologies: from the folding wheelchair invented in 1933 to today’s augmented workplaces, interdisciplinary approaches were used sooner and more frequently than in other domains.
8 The lack of a physical representation (which can also be reached by miniaturization) makes the computer technologically less accessible beyond the confined borders of the running application.
In computer science, research on assistive technologies has been institutionalized in special fields of human-computer interaction (HCI) and related sub-disciplines. The terms used in this field are “designing for accessibility”, “designing for diversity9”, “universal design” or “universal access”.

The development of more accessible solutions has been furthered by legal regulations like Section 508 of the Rehabilitation Act or the 1990 Americans with Disabilities Act (DeLeire, 2000; Lazar & Hochheiser, 2013). However, concepts like universal design are not restricted to impaired persons – in fact, requirements associated with impairments, like the reduction of the cognitive load, have been imperative in HCI without having special user groups in mind (see chap. 4.1 Standards for Designing Interactive Systems). Thus it is not surprising that within both major organizations for research in computer science, the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), there are long traditions of research on assistive technologies in various groups and conferences.

The ACM features the ACCESS special interest group (Assistive technologies for persons with disabilities) founded in 1973. The ACM conferences ASSETS (Computers and Accessibility, since 1994) and PETRA (Pervasive Technologies Related to Assistive Environments, since 2008) both focus on assistive technologies.

Although the IEEE is more focused on technology, several of its currently 38 societies are related to assistive technologies, mainly the Robotics and Automation Society (since 1984), the Engineering in Medicine and Biology Society (since 196310), the Systems, Man & Cybernetics Society (since 197211) and the Society on Social Implications of Technology (since 1972).
In recent years, the potential of assistive technology has grown enormously: the success of ubiquitous computing and the accompanying price drop of sensors like accelerometers or motion detectors allowed new applications. Just as the virtual factory (see chap. 2.6.3 Digital and Virtual Factory, Cyber-physical Systems) can potentially assess almost all product parts and machines within a plant by using sensors, the activities of an impaired person can potentially be monitored at a very high level of detail. “Assistive Cyber-physical Systems” (assistive CPS) have already been proposed (Makedon, Le, Huang, Becker, & Kosmopoulos, 2009); they aim to integrate heart rate, blood pressure, eye movement, metabolic work demand, mood or affect and compensatory movements.

9 In this context the term “diversity” also includes groups like children.

10 The predecessor was the IRE Professional Group on Medical Electronics, which formed in 1952 and merged with AIEE's Committee on Electrical Techniques in Medicine and Biology in 1963.

11 The earliest predecessor was the IRE Professional Group on Human Factors in Electronics, which formed in 1958. This name was changed to IEEE Professional Technical Group on Human Factors in Electronics in 1963, to IEEE Human Factors in Electronics Group in 1964, to Man-Machine Systems Group in 1968 and to Systems, Man and Cybernetics Group in 1970.
While the CAAS prototype described in this work (see chap. 6 Implementation) mainly focuses on integrating motion depth sensors in the area of production environments, the underlying model (see chap. 5 Model and Architecture) describes a holistic sensor-based analysis of work behavior.
2.5.3 Augmented Reality
All forms of assistive systems described in the last sub-chapter try to “augment” (from the Latin augmentare: to increase or enlarge) the reality of their users. However, augmented reality (AR) implies that real-world elements are augmented by computer-generated elements. For this reason a wheelchair is not considered AR.
The term is linked to both Virtual Reality (VR) and Mixed Reality (MR). In opposition to AR, VR aims to replace the real world completely by a computer-generated representation, preferably in 3D. Mixed reality describes the “virtual continuum” (Nilsson & Johansson, 2008) from augmented reality to “augmented virtuality”, as shown in Figure 13:

Figure 13: Virtual continuum according to Nilsson & Johansson 2008.
Like VR, AR is historically focused on graphics. Established technologies like head-mounted displays (HMD) or special eyeglasses primarily developed for VR reflect this tradition. One of the first researchers who put forward the idea of an AR “that enriches rather than replaces the real world” and “annotates reality” was Steven Feiner (Feiner, Macintyre, & Seligmann, 1993, p. 53). He presented KARMA (Knowledge-based Augmented Reality for Maintenance Assistance), a system for maintenance and repair tasks based on a head-mounted, see-through display.
This strong and persistent association with graphics, and especially with 3D graphics, links AR to fields where spatial orientation and perspective play an important role – like architecture, design, navigation, the military or medicine. The design of plants has also been an area of interest, which links AR to the concept of the digital factory in the sphere of industrial production (see chap. 2.6.3 Digital and Virtual Factory, Cyber-physical Systems).

The implementation of sensors like accelerometers or global positioning systems (GPS) in handheld devices opened new areas for AR like games, personal navigation or touristic applications. Similarly, the rise of motion sensors using depth data (which undoubtedly will soon be integrated in handheld devices) simplified the requirements for interaction with AR, because no data gloves or markers are needed to track the user’s movements.
Context-aware assistive systems (CAAS) as presented in this work use sensory data (in the current implementation video and depth data, see chap. 6.3) to enhance reality either on a visual level, by projecting instructions directly into the workspace (see chap. 6.4.1 Visual Output and Projection), or on a psychological level, by adding motivating elements with a gamification component (see chap. 6.5). Especially the latter, i.e. the computer-based augmentation of the emotional state of workers, is a new rendition of AR. The use of projections is much more established and is described in some detail in the related works section (see chap. 3.2).
2.5.4 Artificial Intelligence
Artificial Intelligence (AI) is a vast area of research. It originated in parallel with the establishment of the discipline of computer science and is strongly connected to its founders Alan Turing and Marvin Minsky.

It remains a philosophical question whether a computer will ever be “truly intelligent” in the sense that it can pass the Turing Test, i.e. completely simulate or mimic human intelligence. Nevertheless, AI has attracted thousands of scientists, and as a result specialized systems have long been applied in various domains. Most of these systems only use a subset of the traits considered necessary for “true” intelligence: reasoning, knowledge, planning, learning, communication, perception and manipulation of objects (Luger, 2009). Still, these systems are useful and successful in their domain – be it chess, path finding or enemy strategies in computer games.
Assistive systems or “wizards” are an example of such a specialized area. If they are used in restricted contexts, the fraction of the real world that has to be modeled for the AI can be narrowed down, which makes it more efficient. Navigational systems in cars successfully apply path finding to the real world by combining map data with position data and traffic data. Although they are very successful, many drivers complain about their intrusiveness. This is a result of their limited capacities: if the systems incorporated learning and optical indoor sensors, they could recognize the driver and adjust the level of guidance according to his or her preferences and knowledge of the region. This ability to scale according to the user is a challenge for AI.
When assistive systems are to become “companions”, as proposed in the special research field “A Companion-Technology for Cognitive Technical Systems”12, the ability to scale is essential, since companions need to map human cognitive abilities. The formal approach taken for the companion technology was the combination of two planning techniques: partial-order causal link planning (POCL) and hierarchical task network planning (HTN) are joined into “hybrid planning” (Biundo, Bidot, & Schattenberg, 2011). Although these planning techniques are designed for applications working with humans, their robust implementation in sensor-based assistive systems remains a future research task.

On the other end of formal complexity, the applications most dedicated to user experience (UX) encounter the same AI challenge: games need to scale according to their users. This shows an important parallel between games and assistive systems: both do not strive for the best solution but for the solution best adapted to the user. Accordingly, implementing a game AI far superior to the current player would result in a feeling of losing and thus counter the desired flow state (see chap. 2.4.3). In parallel, an assistive system not trying to detect the user’s capabilities will soon under-challenge or over-challenge his or her skills and thus reduce the acceptance rate.
The CAAS framework assumes that the self-regulating or autonomous systems designed to model players can be adapted for assistive systems (see chap. 5 Model and Architecture) and that the comparatively simple13 AI strategies for goal arbitration in games (Buckland, 2005, pp. 398–414) can be adapted for goals related to service or work processes.
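To make this transfer more concrete, the following minimal Python sketch illustrates the pattern of goal arbitration by desirability scores as described by Buckland. It is purely illustrative: the goals, inputs and scoring heuristics are assumptions, not the CAAS implementation described in chapter 5.

    # Minimal sketch of game-style goal arbitration adapted to work assistance.
    # Goals, inputs and weights are illustrative assumptions.

    def desirability_speed(state):
        # Favor speed hints when the worker is slower than the target time.
        return max(0.0, state["cycle_time"] / state["target_time"] - 1.0)

    def desirability_quality(state):
        # Favor quality hints when the recent error rate is high.
        return 2.0 * state["error_rate"]

    def desirability_motivation(state):
        # Favor motivational feedback when engagement drops.
        return 1.0 - state["engagement"]

    GOALS = {
        "show_speed_hint": desirability_speed,
        "show_quality_hint": desirability_quality,
        "show_motivation_feedback": desirability_motivation,
    }

    def arbitrate(state):
        # Activate the goal with the highest desirability score.
        return max(GOALS, key=lambda goal: GOALS[goal](state))

    state = {"cycle_time": 48.0, "target_time": 40.0,
             "error_rate": 0.05, "engagement": 0.7}
    print(arbitrate(state))  # -> show_motivation_feedback

As in games, the point is not an optimal plan but a cheap, continuously re-evaluated choice of the assistance goal best adapted to the user.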
12 German original: “Sonderforschungsbereich sfb transregio 62: Eine Companion-Technologie für kognitive technische Systeme”.
13 When compared to multi-dimensional formal approaches like hybrid planning.
2.6 Engineering and Production Management
In engineering and production management the use of task-specific tools and assistive systems has a long tradition: when working with heavy weights or dangerous substances, every healthy person is “impaired” in the sense that the normal capabilities do not suffice and various augmentations are needed. Since it has always been in the best interest of production companies to minimize accidents and maximize the workers’ performance, several concepts and tools used in assistive technology and computer science originate from the fields of production and automation.
2.6.1 Measuring Work Processes with MTM
While Methods-Time Measurement (MTM) clearly draws on the concept of measur­
ing human behavior established in the social sciences and psychology, the method is
situated in the field of engineering and production management. The underlying con­
cept of measuring human movements in production environments probably goes back over a hundred years, to the time when Henry Ford14 successfully standardized work processes in the production of the Model T by using moving assembly lines (Nevins, 1954).
Like its predecessors Motion-Time Analysis (MTA) and Predetermined Motion Time Systems (PMTS), MTM analyzes and measures work processes. The first extensive system was the PMTS “WORK-FACTOR” (Quick, Duncan, & Malcom, 1962). It was published in 1938 as a response to the 1930s prohibition of stop-watches as a means of measuring work. Later MTA and PMTS were replaced by MTM, which was first described in 1948 (Maynard, Stegemerten, & Schwab, 1948). The current state is described by REFA (the German Verband für Arbeitsgestaltung, Betriebsorganisation und Unternehmensentwicklung)15 and the USA / Canada MTM Association for Standards and Research. MTM breaks human movements down into basic elements like:
- reach for an object or a location
- grasp an object, touching it or closing the fingers around it
- move an object a specified distance to a specified place
- re-grasp an object in order to locate it in a particular way
- release an object to relinquish control of it
14 More realistically the concept was developed by Ford’s employees, namely Clar­
ence Avery, Peter E. Martin, Charles E. Sorensen and C. Harold Wills.
15 In contrast to MTM, where the standard times are defined in tables based on motion analysis, the REFA approach originally aims to negotiate and adapt the times in a dialogue between employers and unions.
To speed up measurement, MTM can be applied at three levels of detail. These are achieved by combining, statistically averaging, substituting and/or eliminating certain basic motions. An example of simplification at the second level is combining the elements reach, grasp and release into the new MTM-2 element get.
Figure 14: Three systems of MTM based on various levels of detail.
The following extract from an MTM-analysis shows the first seven basic elements.
Table 3: Example of MTM analysis for basic movements.

El. | Description (left hand)      | LH   | TMU  | RH   | Description (right hand)
1   | Move hand to container       | R14C | 15.6 | R14B | Moves hand to container
2   | Grasp first transformer      | G4B  | 9.1  | G1A  | Grasps transformer
3   | Move hand clear of container | M2B  | ---  | ---  | Holds in box
4   | Move transformer to area     | M10B | 16.9 | M14C | Transformer to plate
One Time Measurement Unit (TMU) is one hundred-thousandth of an hour, i.e. 0.036 seconds. The codes in the LH and RH columns refer to those in the MTM time tables; e.g. R14C translates as “reach 14 inches to an object jumbled with other objects in a group, so that search and select occur”.
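As a back-of-the-envelope illustration, the TMU values from Table 3 can be converted to seconds in a few lines (a minimal sketch; element codes and values are taken from the table above, and the element without a time value is omitted):

    # One TMU is 1/100000 hour, i.e. 3600 / 100000 = 0.036 seconds.
    TMU_IN_SECONDS = 3600.0 / 100000  # 0.036 s

    # Measured elements from Table 3 (the element marked "---" is omitted).
    elements = {"R14C": 15.6, "G4B": 9.1, "M10B": 16.9}

    total_tmu = sum(elements.values())
    print(f"{total_tmu} TMU = {total_tmu * TMU_IN_SECONDS:.3f} s")
    # -> 41.6 TMU = 1.498 s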
As Deuse explains, all MTM time values are based on a standard performance identical to the REFA performance index of 100% (Deuse, 2010). Since many users of context-aware assistive systems will be physically or mentally impaired, it is not possible to use the regular values of the MTM framework; they have to be adapted to fit the skills of impaired users (see chap. 2.2 Demography and Targeted Users).
There has been an attempt, called GMTM (from German “Geschicklichkeits-MTM”, i.e. skillfulness MTM), to apply MTM to disabled persons; it was published in the early eighties (Dieterich, 1983). It refers to Mink’s fundamental work started in the seventies (Mink, 1975) but has not been influential in the domain of work for disabled persons, e.g. in sheltered work organizations.
MTM is also interesting regarding the history of ideas: over fifty years after its first
description similar formalized approaches were “re-invented” to describe the interac­
tions between humans and computer interfaces, e.g. the GOMS model (Goals, Oper­
ators, Methods, and Selection rules) (Card et al., 1983) or later the simplified
keystroke-level model (Kieras, 1993). Although the codes for transcribing actions dif­
fer, the idea of atomizing human behavior in order to make it describable is identical
(see chap. 2.5.1 Human-Computer Interaction).
2.6.2 Assembly Workplaces
Manual assembly has always played an important role in industrialization. The MTM approach to analyzing human movements at work shows the high granularity which has been achieved in this area. Nevertheless, manual work is generally slower and more prone to human errors, so assembly workplaces usually are replaced by automated systems as soon as this is economically feasible.
However, the growing demand for customized products also increased the need for
manual production to the point where we can witness the return of “manufactories”
(see chap. 1.1.1 Automation versus Manual Assembly). Thus in a time of digitaliza­
tion, virtual factories and cyber-physical systems (see chap. 2.6.3 Digital and Virtual
Factory, Cyber-physical Systems) the demand for manual assembly work and the cor­
responding workplaces paradoxically grows.
The typical design of an assembly workplace is based on the definition of the activity
of assembling. According to the authoritative VDI 2860 on Technology for Assembly
and Handling (VDI Verein Deutscher Ingenieure, 1990) the process consists of five
activities:
- assembling or joining (e.g. screwing, plugging, gluing)
- handling (e.g. grabbing, placing)
- checking (e.g. measuring, visual inspection)
- adjusting (e.g. setting, tuning)
- auxiliary functions (e.g. cleaning, labeling)
These activities all involve the human hands, arms and eyes, so ergonomics plays an important role.
Assembly workplaces are usually produced using aluminum profiles. They integrate
lamps and power supplies and support work plates made from various materials, de­
pending on the area of application (see Figure 15).
Figure 15: Industrial assembly table constructed by the Schnaithmann GmbH.
Especially in manual assembly scenarios the workplaces need to offer access to vari­
ous “small load carriers” which contain product components like screws, seals etc.
In the context of this work these carriers will simply be referred to as “boxes” or “containers”. This is a simplification: for production management and transportation purposes their sizes and other properties are defined in great detail by a DIN standard (DIN EN 13199-1:2000-10, 2000). This allows the boxes to be easily integrated into Kanban chains or automatic feeding systems.
2.6.3 Digital and Virtual Factory, Cyber-physical Systems
For a long time, mechanical engineering and computer science or software engineering evolved in relative isolation from each other. The potential of computer miniaturization and of its implementation in classical engineering products like cars or machines has long been underestimated (see chap. 3.4 Assistive Systems at Workplaces). A notable exception is the “digital factory”, a concept which came up in the nineties. The VDI (Verein Deutscher Ingenieure, Association of German Engineers) defines it as follows in their guideline on “Digital Factory Fundamentals”:
The Digital Factory is the superordinate concept for an extensive network of digital models, methods and tools – amongst others simulations and 3-dimensional visualization – which are integrated by a continuous data management. The aim is holistic planning, evaluation and the continuous improvement of all essential structures, processes and resources of the real factory in combination with the product.16
VDI Guideline 4499 (VDI Verein Deutscher Ingenieure, 2008)
With simulation and 3D visualization the digital factory integrates two major areas of computer science. The ideal of a “continuous data management” implies the use of networks and possibly server technologies, so a third realm of informatics is integrated. This inter-disciplinary approach made the digital factory an exotic topic for “regular” engineers while at the same time creating a field for co-operation with informatics. The approach dates back to the first computer-based systems for production planning and steering (PPS) which were established in the seventies. At about the same time the steering of machines by numerical control (NC) operations was established, which later became CNC (Computerized Numerical Control) – today’s standard in human-machine interaction (HMI). The combination of PPS and CNC in principle allowed
“digital production”. The rise of Computer Aided Design (CAD), Computer Aided
Manufacturing (CAM) and Computer Aided Planning (CAP) in the eighties contrib­
uted the visualization aspect to the digital factory approach.
To this day many of the tools developed for the digital factory are not compatible – for that reason various software solutions support different aspects of the digital factory, ranging from plant design down to the simulation of NC operations for a single machine. The two most established software solutions are Tecnomatix by Siemens
16 Author’s translation. German original: Die Digitale Fabrik ist ein umfassendes Netzwerk von digitalen Modellen, Methoden und Werkzeugen – u. a. der Simulation und der 3D-Visualisierung. Sie ist gekennzeichnet durch ein durchgängiges Datenmanagement. Ziel ist die ganzheitliche Planung, Evaluierung und laufende Verbesserung aller wesentlichen Strukturen, Prozesse und Ressourcen der realen Fabrik in Verbindung mit dem Produkt.
(incorporating Process Designer, Process Simulate and Plant Simulation) and Delmia by Dassault Systèmes.
Both solutions describe themselves as crucial for product lifecycle management (PLM), i.e. the process of managing a product’s lifecycle from conception through
design and manufacturing to service and disposal. In this holistic approach a digital
factory is the natural starting point for a product which is intended to be transparent
throughout its lifetime. A similar and partly overlapping approach is enterprise re­
source planning (ERP) which focuses more on the resources involved in a product’s
design, manufacturing and sale.
To allow any kind of computer-based simulation or visualization, the plant, the ma­
chines and the product data have to be mapped in detail. This usually is achieved by
aggregating existing data and performing additional value stream mapping. Figure 16
shows the relation between different temporal states and digital representations of a
factory as well as the relevant processes and tools:
Figure 16: Tools and processes for the Digital Factory and the Virtual Factory.
Adapted from Westkämper 2006. Additions in red.
In the context of this work the most interesting aspect in the diagram is the “Virtual
Factory”. It was introduced by Engelbert Westkämper, a renowned expert in the field
of production research (Westkämper, 2006). While the digital factory strictly speaking
is just a “snapshot” of an existing factory (although the modeled data can be used
dynamically for simulations) the virtual factory continuously integrates data from the
real factory.
In this aspect the virtual factory is a predecessor of what is currently called cyber­
physical systems (CPS): physical entities which are controlled by computers and in­
teract with each other17, or as the pioneering Edward Lee describes the concept: “em­
bedded computers and networks monitor and control the physical processes, usually
with feedback loops where physical processes affect computations and vice versa”
(Lee, 2008).
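Lee’s feedback loop can be illustrated with a minimal control-loop sketch; the simulated “plant” (a conveyor speed) and the proportional controller are illustrative assumptions, not a description of any particular system:

    # Minimal sketch of a cyber-physical feedback loop in the sense of Lee (2008):
    # computation monitors the physical state, and its output in turn affects
    # the physical process. The plant below is a simulated conveyor speed.

    speed = 0.0  # simulated physical state (m/s)

    def read_sensor():
        return speed            # physical process -> computation

    def actuate(correction):
        global speed
        speed += correction     # computation -> physical process

    def control_loop(target, cycles=20, gain=0.5):
        for _ in range(cycles):
            error = target - read_sensor()
            actuate(gain * error)   # proportional controller closes the loop
        return read_sensor()

    print(f"speed after control: {control_loop(target=1.2):.3f} m/s")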
Although CPS rely more on agent-oriented processes than the original concept of the
virtual factory, this feedback loop is exactly what separates the digital factory from
the virtual factory. As long as the flow of data is not optimized by cyber-physical
systems and context-aware assistive systems as described in this work, the digitaliza­
tion of production faces two major challenges:
- Current data management systems already use RFID chips to determine the location of products or product lots. The real-time combination with machine data would potentially allow assessing an individual product’s state at any time – however, many machines do not supply that kind of data. If data is provided, it often is more machine-specific than product-specific. Also the granularity does not meet the high demands, since often only lots or even complete orders are tracked.
- Humans in manufacturing contexts introduce high variance regarding both the quantitative and the qualitative results. Current approaches to the digital factory mostly just handle humans like robots.
The Process Designer incorporates a generic human model called Jack (and a corre­
sponding female version). The aims of the resulting “human resource simulation”
(Kühn, 2006) are: a detailed design of manual operations, checking the feasibility of
tasks, ergonomic analysis, time analysis and generating work instructions. To achieve
this, the digital representations of workers have envelopes representing their field of
vision and their maximal grasp range:
17 Although CPS are predominantly used in industrial contexts (especially in manu­
facturing and automotive) the approach can be applied to other areas including
healthcare, transportation and even “living”, i.e. consumer products like fridges or
washing machines.
Figure 17: The software Process Designer within Siemens Tecnomatix allows the creation of detailed simulations of production processes.
Integrated MTM values can be assigned to the work operations (see chap. 2.6.1 Measuring Work Processes with MTM). While this allows process modeling and time estimations based on the pre-defined times, the system lacks adequate sensors to feed information back into the live PLM or ERP systems – sensors that would populate the digital factory with “living workers” and continuous product processing data and thus turn it into a virtual factory.
An assistive system that is aware of the state of products in interaction with the worker
could help to make production work more transparent and play a significant role on
the way towards a virtual factory. The context-aware assistive systems (CAAS) de­
scribed in this work use sensors that potentially allow the expansion of the model of
the digital factory towards a virtual factory so that it includes human workers (see
chap. 3.1 Motion Recognition). The integration of “human resources” into the ERP
system or related systems today is a technological possibility. While this technological
advancement opens new potentials for assistance, quality control and continuous data
management, it also raises new ethical questions (see chap. 8.2 Ethical Implications:
Towards Humane Work).
3 RELATED WORK
3.1 Motion Recognition
Keyboard, mouse and various game controllers have been the means of interaction between humans and computers (HCI) for decades. Robust interactive body tracking establishes a new form of “natural interaction”: by using the un-augmented human body as a controller, a simple and direct way of interaction is realized that requires little training.
The technical development of markerless tracking has been greatly simplified by the
introduction of real-time depth cameras for human body tracking (Knoop, Vacek, &
Dillmann, 2006; Siddiqui & Medioni, 2010). However, until recently even the best
solutions required multi-camera setups, wearable sensors (Heinz, Kunze, Gruber,
Bannach, & Lukowicz, 2006) or markers as well as powerful hardware to allow high
performance motion recognition. As described by Shotton et al. (Shotton et al., 2011)
it was not until the launch of Microsoft Kinect in late 2010 that any solution ran at
rates allowing real-time interactions on consumer hardware while being able to handle
a full range of human body shapes and sizes undergoing general body motions.
Both the Microsoft Kinect (released November 2010) and the similar Asus Xtion (re­
leased September 2011) draw their capabilities from a chipset developed by the Israeli
company PrimeSense18. However, the Kinect combines an infra-red (IR) laser projec­
tor with 3D Audio and an RGB camera while the Xtion focuses solely on IR.
Figure 18: The Microsoft Kinect (bottom) and the Asus Xtion (top).
18 PrimeSense was bought by Apple for $350 million on November 24, 2013.
It is a common misunderstanding that the Kinect uses a time-of-flight camera (ToF),
i.e. a camera measuring the duration a light signal needs to travel between the camera
and the subject – the light transit time. In fact the Kinect uses a structured light sensor
(or, as PrimeSense calls it, “light coding”). The technology is based on an IR laser that projects a static pseudorandom pattern of infrared points (IR points) onto the environment. The 3D scene is generated by stereo triangulation, which requires two images. The first image is captured by the IR sensor; the second is “virtual” because it is based on a hardwired pattern in the IR laser. Since there is a distance between laser and sensor (and both are stable and aligned), the images correspond to different camera positions and allow stereo triangulation (Ten, 2010).
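The underlying geometry can be made explicit: with a known baseline b between laser and IR sensor and a focal length f, the disparity d of a pattern point between the captured and the “virtual” reference image yields the depth z = f * b / d. The following sketch uses illustrative calibration values, not the Kinect’s actual ones:

    # Depth from stereo triangulation: z = f * b / d.
    # f: focal length in pixels, b: baseline between laser and IR sensor in
    # meters, d: disparity of a pattern point in pixels. The values are
    # illustrative assumptions, not the Kinect's actual calibration.

    def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    for d in (10.0, 20.0, 40.0):
        print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
    # larger disparity -> smaller depth, as expected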
Since PrimeSense developed the depth sensor and the IR technology for both the Kinect and the Xtion, their NITE middleware (Natural Interaction Technology for End-user) is used by both sensors. While NITE is aimed at Java users (including Java wrappers), Microsoft developed its own middleware, the Kinect SDK, aimed at C# developers. NITE consists of an application programming interface (API) and visual processing software, described in detail by Ballard (Ballard, 2011).
The middleware processes the depth and image data and translates them into mean­
ingful data, mostly by identifying the skeleton joints. Thus it supports interaction sce­
narios like user identification and gesture control. The algorithms in NITE are based
on the cross-platform, multi-language standards defined by OpenNI, an organization
established by PrimeSense to promote a standard API and to ensure compatibility be­
tween various natural interaction devices. The aim is a single API applicable to all NI devices. A limitation of both sensors is their inability to resolve the human hand and to detect fingers – the hand is interpreted as a single point in 3D space.19
Figure 19: Softkinetic DepthSense 311.
19 During 2012 and 2013, several tools for detecting hands based on depth information were developed or adapted to integrate the Kinect, the Xtion and similar depth sensors. Good examples are TipTep Skeletonizer and Candescent NUI. These tools were not available in stable versions at the time the prototype was developed – and recent lab studies show that they still have tracking and latency issues, probably resulting from the low resolution of the depth image.
In early 2012 we were able to inspect the SoftKinetic DepthSense 311 (released December 2011). In contrast to the Kinect and Xtion, the DepthSense 311 works with a 3D ToF sensing chipset from Texas Instruments. It includes the middleware iisu (derived from the sentence “the interface is you”), which is based on C++ but includes a C# wrapper. Unlike NITE from PrimeSense, it does not natively support Java. However, it features not only full body tracking but also integrated finger and hand tracking. Thus it seemed an ideal solution for the aim of detecting the intricate finger movements common in manual assembly. The following table compares the specifications of the three sensors:
Table 4: Comparison of motion detection sensors available early 2012.

Solution         | Asus Xtion PRO                                | Microsoft Kinect                         | SoftKinetic DepthSense 311
Operating Types  | structured light                              | structured light, video, audio           | time of flight, video
Operating Range  | 80 cm - 350 cm                                | Xbox: 80 cm - 400 cm; PC: 50 cm - 300 cm | short: 15 cm - 100 cm; long: 150 cm - 400 cm
Resolution Depth | IR: 320x240 at 30 fps; IR: 640x480 at 60 fps  | IR: 640x480                              | IR: 160x120
Resolution Video | RGB: 640x320                                  | RGB: 640x480                             | RGB: 640x480
As the table shows, the SoftKinetic sensor shares a general limitation of all systems on the market in early 2012: the resolution of the depth image. In fact the DS 311 offers only a quarter of the pixels of the depth images created by the Kinect or the Xtion. The resolutions provided by the three depth cameras available on the market are well-suited for games or activities using gross motor skills rather than fine motor skills. However, for assembly work fine motor skills are needed. As a result, the implementation uses neither the skeleton joints nor tracking of the worker’s hands – instead we resorted to supervising 3D spheres or areas on the workplace (see chap. 6.3.1 Technical Restrictions and Solution).
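The zone-supervision approach can be sketched in a few lines: instead of tracking hand joints, the system counts the 3D points of a depth frame that fall into a predefined sphere, e.g. above a container. This is a simplified illustration with assumed thresholds; the actual implementation is described in chapter 6.3:

    # Minimal sketch of supervising a 3D sphere instead of tracking hand joints:
    # a zone counts as "activated" (e.g. a grasp into a container) if enough
    # 3D points fall inside it. The threshold values are assumptions.

    def sphere_activated(points, center, radius, min_points=25):
        cx, cy, cz = center
        r2 = radius * radius
        inside = sum(1 for (x, y, z) in points
                     if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r2)
        return inside >= min_points

    # Synthetic 3D points (in meters): 30 points near a container at (0.3, 0.1, 0.9)
    points = [(0.30, 0.10, 0.90)] * 30 + [(0.0, 0.0, 1.5)] * 100
    print(sphere_activated(points, center=(0.3, 0.1, 0.9), radius=0.05))  # True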
3.2 Projection
Since it became a technical possibility in the nineties, the idea that computers “weave
themselves into the fabric of everyday life” and become ubiquitous (Weiser, 1999)
has been the subject of technical studies. An important part of this concept is that
computer graphics break the confinement of screens. Projection is the key technology
that enables the augmentation of real world objects by computer-generated images,
the creation of an augmented reality. In recent years the miniaturization combined
with a rapid decline in prices finally allowed applications like mobile phones with
integrated projection.
However, most current systems are still using projection as an alternative way of vis­
ualization rather than a new way of interaction. Interaction requires additional track­
ing equipment like video cameras or even depth-sensing range cameras that work in
combination with the projection.
One of the first systems combining projection with interaction was the “DigitalDesk
Calculator” (Wellner, 1991). In this prototype of tangible interaction the camera
“sees” where the user is pointing, and the system performs adequate actions like pro­
jecting a calculator or reading parts of documents that are placed on a physical desk.
It can be seen as an early realization of what now is called “natural interaction” and
was an inspiration to the subsequent approaches including ours. However, at this early
technical stage the system was not robust enough to make industrial products feasible.
Ten years later the 2001 “Everywhere Displays Projector” (Pinhanez, 2001) was an­
other approach to make office rooms interactive. The device used a rotating mirror to
steer the light from a projector onto different surfaces of an environment and em­
ployed a video camera to detect hand interaction with the projected image using com­
puter vision techniques. It was envisioned as a permanent setup for collaborative work
in meeting rooms. Even ten years after the “DigitalDesk Calculator” the system still
required a high amount of calibration for the projection and especially the interaction
component, so again robust industrial application was not economical.
Nevertheless the time was ripe for camera-projector systems – in 2003 a workshop series started: the IEEE International Workshop on Projector-Camera Systems (Procams). This series focuses on systems that combine controllable lighting systems with light-sensing devices. Starting with projection, the focus broadened and today includes 3D scanning, flexible display walls and novel display interaction.
In 2004 a system more robust than the Everywhere Displays projector allowed direct
manipulation of digital objects by combining hand tracking with projected interfaces
(Letessier & Bérard, 2004). Although it was confined to a planar display surface, this
simplification allowed a latency below 100 ms on regular computers.
In 2009 the “Bonfire” system (Kane et al., 2009) demonstrated how robust real world
augmentation had become within only a few years. By attaching two camera-projec­
tor-units, the display area of a laptop was extended to both sides by an interactive
projection area, allowing users to employ multi-touch gestures and even interact with
objects. The projection worked in environments that vary in their physical character­
istics and lighting.
In 2010 a novel algorithm using a depth camera as a touch sensor for arbitrary surfaces was presented (Wilson, 2010). It allows interaction with projected content without instrumenting the environment. This algorithm was improved in the UbiDisplays toolkit (Hardy & Alexander, 2012) by clustering points based on neighbor density. These advances are the pre-requisite for surfaces with projections becoming as responsive to touch as the capacitive displays used in mobile devices. The algorithm is used in current research based on this work (see Figure 20 and chap. 8.3.1 Technical Perspectives).
Figure 20: Projected assessment game based on the UbiDisplays framework.
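The principle behind this depth-based touch sensing can be sketched as follows: a pixel counts as a touch candidate if its depth lies within a narrow band just above the previously calibrated surface. The band sizes below are assumptions, and the published algorithms add clustering and filtering (e.g. by neighbor density in UbiDisplays):

    # Simplified sketch of depth-based touch detection on an arbitrary surface:
    # a pixel is a touch candidate if it lies slightly *above* the calibrated
    # surface (i.e. closer to the camera), within a band roughly a finger thick.

    def touch_pixels(depth, surface, near_mm=8, far_mm=25):
        candidates = []
        for y, row in enumerate(depth):
            for x, d in enumerate(row):
                height = surface[y][x] - d  # elevation above the surface (mm)
                if near_mm <= height <= far_mm:
                    candidates.append((x, y))
        return candidates

    surface = [[1000, 1000], [1000, 1000]]  # calibrated surface depth in mm
    depth   = [[1000,  985], [1000, 1000]]  # current frame: one fingertip
    print(touch_pixels(depth, surface))     # -> [(1, 0)]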
Lately pico projectors have been used as mobile displays and interaction devices or “light beams”: everyday objects sojourning in the beam are turned into projection surfaces and tangible interaction devices (Huber, Steimle, Liao, Liu, & Mühlhäuser, 2012).
The focus of these developments mostly has been office use, home use (especially
entertainment) or mobile computing. With very few exceptions (see chap. 3.4.2 As­
sistive Systems Using Projection) the use of interactive projections in production en­
vironments has not been in the center of computer science research so far.
3.3 Gamification
Video games have always been designed for accessibility – they “operate by a princi­
ple of performance before competence [so] players can perform before they are com­
petent, supported by the design of the game” (Gee, 2007, p. 218). This intuitive and
playful approach to interaction is now common in many devices and influences con­
ventional software. Playful design has reached many parts of society and is frequently
used to support disabled and impaired or elderly people (Brach & Korn, 2012; Nunes,
Silva, & Abrantes, 2010).
The term “serious games” was established in educational contexts. The difference from regular games is that they promote “serious” purposes, i.e. purposes which are directly linked to the real world – e.g. learning a foreign language or traffic signs. If we follow the philosopher Bernard Suits’ sententious definition of gaming, that “playing a game is the voluntary attempt to overcome unnecessary obstacles” (Suits, 2005), serious games are no “real” games because they have a purpose outside of themselves – they have “necessary obstacles”. This systemic flaw of serious games is at the same time the premise for their combination of gaming and real-world problems. According to a pioneer in the field of serious games, Marc Prensky, their two major advantages are (Prensky, 2007, p. 147):
- contextualizing learning materials, thus adding engagement
- adding interaction to the learning process
The hypothesis underlying these claims is that increased engagement leads to increased learning success – or enables learning success in the first place. In Prensky’s view especially “digital natives” (persons who grew up with digital technology, as opposed to “digital immigrants”) need engaging learning experiences because the regular methods lack the level of immersion required by this target group. While this assumption surely cannot be applied to all younger persons, at least those with reduced attention spans, reduced working memory or reduced processing abilities will profit (see chap. 2.4.2 Cognitive Workloads). This includes persons with impairments (see chap. 2.2.2) and elderly persons (see chapters 2.2.1 and 2.2.3).
Serious games have long become an established field for industry and research. An­
other term frequently used is “applied games” which often is used for games address­
ing real-world problems outside of the educational domain. Also the term “games with
a purpose” has been introduced. Strictly speaking it refers to games “in which people,
as a side effect of playing, perform tasks computers are unable [or unfit] to perform”
(von Ahn & Dabbish, 2008, p. 58) so the authors propose a form of crowd-sourcing
to address problems unfit for artificial intelligence.
Unfortunately the attempts to establish clear boundaries between the terms “serious
games”, “applied games” and “games with a purpose” were frequently undermined
by their use as synonyms. The latest addition in this list of terms for gaming technol­
ogies transcending the traditional boundaries of their medium is “gamification” –
defined as an “umbrella term for the use of video game elements to improve user
experience and user engagement in non-game services and applications” (Deterding,
Sicart, Nacke, O’Hara, & Dixon, 2011).
Especially in the context of health, gamification already has a long tradition. One of the first applied games was “Re-Mission”, developed by HopeLab in 2007 – basically a shooter game where children with cancer could actively fight virtual tumor cells. It is a common misunderstanding that playing this game directly improved the children’s health, which was neither the case nor the intention. Still, the visualization of the abstract threat in the game led to a significantly more reliable medicine intake, which ultimately was beneficial in the fight against cancer (Kato, Cole, Bradlyn, & Pollock, 2008).
Figure 21: Wii with Wii Remote (left), Wii Remote with Wii Motion Plus (right).
In 2007 the “games for health” approach reached a new level with the release of Nintendo’s Wii (see Figure 21). The video game console uses the accelerometer-based Wii Remote20 to detect movements in three dimensions. The accessory Wii Balance Board allows measuring the user’s center of balance. This and other new input methods were at first viewed critically by the game industry, since “new input paradigms create immense complications when considering user-centered game design because we must now account for differing talent levels of the individual players” (Pagulayan, Keeker,
20 The concept of steering by motion was successful but the accuracy of the Wii Re­
mote was limited. For this reason in June 2009 Nintendo introduced an expansion
device called Wii MotionPlus that allows more accurate motion detection by combin­
ing a dual-axis tuning fork gyroscope and a single-axis gyroscope.
Wixon, Romero, & Fuller, 2012, p. 800). These complications were addressed, and when the Wii Fit exercise game was released in 2008 it contained more than 40 activities designed to engage players in physical exercises like strength training, aerobics or yoga poses. With almost 23 million copies sold (end of 2013), Wii Fit even became one of the best-selling console games in history.
Soon scientists and physicians started to exploit the motion analysis capabilities of the console and its input devices for therapeutic use. The effects of this early iteration of video game devices in health contexts were promising: an analysis of efficacy comparing traditional and video game based balance programs showed positive evidence for the latter (Brumels, Blasius, Cortright, Oumedian, & Solberg, 2008).
For economic reasons, hardly any games were developed explicitly for therapeutic use in the first years. Only when the natural audience for game-based approaches (kids and young adults) could be broadened by integrating elderly or disabled persons were the first explicitly therapeutic games developed. One of the examples focusing on this target group is VI-Bowling, a tactile game helping visually impaired users to significantly increase their throwing skills (Morelli, Foley, & Folmer, 2010). A representative example targeting elderly users is SilverPromenade – a game allowing players to go on virtual walks using the Wii Remote and the balance board (Gerling, Schulte, & Masuch, 2011).
When the Kinect was launched in 2010, the technological cycle of adopting and adapting video game motion technology initiated by the Wii started anew. The Kinect already included the exercise game Kinect Adventures!, which utilizes human body motion detection in a variety of minigames (as a pre-packaged game it sold 20 million copies, making it the best-selling game for the Xbox 360). Subsequently an increasing number of researchers and therapists wanted to make use of the new markerless motion tracking capabilities to “gamify” medical treatment. Similar to the situation after the release of the Wii, there were no applications deliberately developed for rehabilitation purposes, so they started using commercially available motion-based video games. An overview of their potential and their limitations is given by Putnam and Cheng, who advise using motion games for patients suffering from brain injuries (Putnam & Cheng, 2013).
One of the first motion-based video games deliberately built for therapeutic purposes
was developed from 2010 to 2012 in the motivotion60+ project. The aim was to create
engaging balance exercises that help senior citizens to prevent falls and thus reduce
painful and costly hip fractures. The applications resulting from this gamification of exercises are aptly called exergames. The motion-tracking sensor Kinect was used to allow the elderly users natural interaction with a set of mini games (Korn, Brach, Schmidt, Hörz, & Konrad, 2012).
Figure 22: A senior citizen using an exergame developed for balance training.
Images courtesy of Wohlfahrtswerk für Baden-Württemberg (left) and Korion
GmbH (right).
Another form of impairment frequently addressed by gamification approaches is stroke, whose therapy often requires highly repetitive exercises. An example is Break the Bricks, which helps stroke patients to recover their psychomotor abilities (Dömők, Szűcs, László, & Sík Lányi, 2012).
The concept of applying gamification to other domains and creating more engaging
workplaces has already been described (Reeves & Read, 2009). However, the authors
focus on general business processes and do not consider natural interaction. To our
knowledge the use of gamification for industrial workplaces has not been described
outside of the context of the research presented here.
3.4 Assistive Systems at Workplaces
Assistive systems have been accompanying work for a long time, from expert dialogue systems in customer care centers to step-by-step instructions on monitors in production environments. While new forms of interaction and assistance are readily adopted in many areas, the pervasion of the industrial domain is slow. Today human-machine interaction (HMI) still lags behind “regular” HCI: even the most advanced computer-based assistive systems in production have been said to use “suboptimal concepts of information technology” and thus are seen as unfit to ensure “efficient and ergonomic guidance of assembly workers” (Zäh et al., 2007).
This situation has hardly changed in recent years: as a recent Fraunhofer study on HMI explains, of the variety of modern interaction techniques only touchscreens have found their way to machine interfaces in production environments (Bierkandt, Preissner, Hermann, & Hipp, 2011) – and even with typical industry tools much skill and some manual adaptation is required as soon as interaction methods are to be changed.
The discrepancies may partly be a result of the traditional distance between mechani­
cal engineering and information technology, two disciplines which still are vastly sep­
arated. While CAD (Computer Aided Design) brought computers into manufacturing
companies they were considered tools for construction – and not devices for user in­
teraction. Computers were considered insecure and unfit for the special requirements
in production environments where reliability and security are essential.
Figure 23: An implementation of the Poka Yoke concept using an impression.
The ambitious aim of production companies is to produce “zero errors” or “total quality” (see chap. 4.2.2). This goal is reflected in the concept of “Poka Yoke”, where systems are designed to be “fail-safe” or “mistake-proof”. This can be realized by organizational methods or physical appliances. An impression that can only hold a part if it is assembled correctly (see Figure 23) is a good example of how Poka Yoke is applied in industrial practice.
This seeming obsession with quality becomes more plausible if the potential outcomes of errors in apps for mobile devices or regular business software are compared with the effects resulting from human errors or software bugs in production environments. In this domain errors can immediately result in severe injuries of workers and, in combination with just-in-time production, in substantial financial losses. This is probably the main reason why specialized PLCs (programmable logic controllers) have long been preferred over generic computers. Although modern PLCs can be programmed with developer software running on computers, their main focus has always been regulating and steering – interaction and especially visualization were implemented later and have long been considered peripheral. As a result most manufacturers are very conservative when changing HMI and prefer “safe and slow” over “new and intuitive”.
New forms of HCI are implemented more readily if they have become part of an accepted standard like ISO 9241 (ISO/TC 159/SC 4, 2006), which covers the “ergonomics of human-system interaction”21. Although this and related standards like ISO 14915 are updated regularly, they are not designed to describe very recent approaches: while “guidance on tactile and haptic interactions” was added to ISO 9241 as part 920 in 2009 (ISO/TC 159/SC 4, 2009), motion recognition and, accordingly, natural interaction based on movement have not been covered so far, although this type of HCI is widely used today.
Finally, in the domain of engineering innovative solutions tend to be patented rather than published – so innovative HMI solutions can be seen at fairs but are rarely described in journals or conference proceedings. Some of the more advanced forms of interaction in assistive systems in work contexts are portrayed in the next sub-chapters.
3.4.1 Assistive Systems Using Motion Recognition
The well-established “pick-by-light” systems can detect if a worker picked an assembly component from the right container. However, they cannot examine whether the product component was assembled correctly – or even assembled at all. This binary approach (right pick / wrong pick) is firmly established in the industrial domain. Most quality gates check if products or specific product parts like weld seams are “okay” or “not okay”. The possibility to obtain more detailed information about what is going on during the work process is comparatively new.
21 Originally it was titled “Ergonomic requirements for office work with visual display terminals” but since 2006 the standard tries to cover more recent forms of interaction.
There are first efforts in industry to achieve a continuous analysis of work processes. A good example is a system based on ultrasonic waves by Sarissa GmbH22. The ultrasonic waves are emitted by “trackers” attached to gloves the assembly worker is required to wear and are received by sensors mounted above the assembly table.
Figure 24: The Quality Assist System based on ultrasonic waves.
Image courtesy of Sarissa GmbH.
The system compares the worker’s motions with pre-stored motion sequences in real time. It offers a high accuracy of 5 millimeters (according to the vendor’s website). With the transmitter’s weight of 40 grams and the receiver’s hemispherical detection range of eight meters in diameter, the system offers good mobility. Still, it suffers from some shortcomings – most notably, in contrast to projection and similar to head-mounted devices (HMD), a battery-powered physical device (in this case the transmitter) has to be attached to the user. This raises usability and acceptance issues, especially in production environments and even more so with elderly or impaired persons.
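How such a comparison of live motions with pre-stored sequences might work can be sketched as follows; since the actual matching algorithm is not published, the point-by-point check against a tolerance (here the stated 5 mm accuracy) is an assumed simplification:

    import math

    # Minimal sketch of comparing a live motion trace with a pre-stored
    # reference sequence, point by point, within a spatial tolerance.
    # The vendor's actual matching algorithm is not published.

    def matches_reference(trace, reference, tolerance_m=0.005):
        if len(trace) != len(reference):
            return False
        return all(math.dist(p, q) <= tolerance_m
                   for p, q in zip(trace, reference))

    reference = [(0.10, 0.20, 0.30), (0.15, 0.20, 0.30)]
    trace     = [(0.10, 0.20, 0.30), (0.15, 0.21, 0.30)]  # 10 mm deviation
    print(matches_reference(trace, reference))  # False: exceeds 5 mm tolerance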
While the use of image-based motion detection systems for the surveillance of large areas in production environments has been proposed (Sardis et al., 2010), the use of motion detection to analyze detailed movements of production workers has not been described so far.
22 As common in the domain of engineering, technical details about the system have
not been published. The descriptions of this and subsequent systems are based on in­
terviews, brochures and information on websites.
3.4.2 Assistive Systems Using Projection
The visualization of instructions or context-specific information directly in the work­
space is an important aspect of efficient work in production environments. Most sys­
tems use regular monitors to achieve this.
Two more advanced technologies related to the field of augmented reality (see chap.
2.5.3) are head-mounted displays (HMD) and projections. Although HMDs usually
compromise a worker’s freedom of movement, their industrial potential has been stud­
ied extensively (Weaver, Baumann, Starner, Iben, & Lawo, 2010). However, in direct
comparison projection is by far the less intrusive system; but although numerous ap­
plications for office or entertainment use have been described (see chap. 3.2 Projec­
tion) this technology has rarely been applied to the industrial domain.
Interestingly, another light-based assistance has been playing an important role in pro­
duction assistance for a long time. Most current assistive systems in assembly use
“pick-by-light” – a solution where the next box a worker has to pick parts from is
marked by a small indicator lamp attached below and the pick is controlled by a light
barrier. A reason for the prevalence of this comparatively advanced and intuitive form
of HMI might be that light barriers are integrated as sensors, so this form of assistance
could easily be realized using the programmable logic controllers (PLC) common in
industry.
Figure 25: Pick to light display "PickTerm Flexible".
Images courtesy of KBS Industrieelektronik GmbH.
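The binary logic of pick-by-light can be sketched in a few lines (illustrative only; the event source and lamp control stand in for the PLC signals of a real installation):

    # Minimal sketch of pick-by-light logic: the lamp of the expected container
    # is lit, and each light-barrier event is validated against the expectation.
    # Event source and lamp control are hypothetical stand-ins for PLC signals.

    def validate_picks(pick_sequence, events):
        results = []
        for expected, picked in zip(pick_sequence, events):
            # on a real PLC, lamp(expected).on() would be called here
            results.append("ok" if picked == expected else "wrong pick")
        return results

    plan   = ["box_3", "box_1", "box_7"]  # picking order for one assembly cycle
    events = ["box_3", "box_2", "box_7"]  # light-barrier events actually seen
    print(validate_picks(plan, events))   # -> ['ok', 'wrong pick', 'ok']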
The most intuitive form of visual assistance – the projection of relevant information
directly into the workspace – is still rarely seen in industrial systems. A patent search
conducted by the University of Applied Sciences Esslingen showed three relevant pa­
tents:
- Computer Aided Works by the iie GmbH & Co. KG, Germany
- Intelligent Workplace by the Friedrich Martin GmbH & Co. KG, Germany
- Light Guide System by OPS Solutions, LLC, USA
The system Computer Aided Works is a visualization system offering several interfaces that connect to other assistive systems. The Intelligent Workplace already visualizes information within the work area using a projector. The Light Guide System seems to be the most advanced system used in industry and integrates the two approaches. However, it still relies on external devices like touch monitors for the confirmation of processes. The projections themselves are not interactive. Thus even the currently most advanced system used in industry fails to combine projection with interaction as demonstrated by Wellner’s “DigitalDesk Calculator” in 1991 (see chap. 3.2 Projection).
The concept of in-situ projection of information in production environments is also
explored in research. In automotive manufacturing, the quality of spot welding on car
bodies frequently needs to be inspected. Currently most spot welding inspections rely
on printed drawings. Zhou et al. describe a system that projects visual data onto arbi­
trary surfaces. It provides just-in-time in-situ information to a user within a physical
work-cell (Zhou et al., 2011).
Figure 26: Projection of visual data on welding spots.
Images courtesy of Jianlong Zhou and Bruce H. Thomas of the Wearable
Computer Lab, University of South Australia. Zhou et al., 2011.
Like the Light Guide System this approach attempts to solve one problem (instructions
usually require the user to change the field of vision) without addressing a related one
(confirmation of processes usually requires the user to change the field of vision). In
conclusion the advantages of projected interfaces in the industrial domain still remain
to be explored.
Apart from the research presented here, to our knowledge the only system currently
combining projection with interaction in work contexts (although not in a production
context) is an assistive system for guiding workers in sterilization supply departments
(Rüther, Hermann, Mracek, Kopp, & Steil, 2013).
In such departments medical instruments are cleaned, disinfected and sterilized. The
system projects instructions directly into the workplace and assists the workflow.
Moreover a depth sensor is used to detect the user’s movements and thus allows a
projected user interface.
Figure 27: An assistive system in the medical domain using projected interfaces.
Image courtesy of Stefan Rüther, Research Institute for Cognition and Robotics (CoR-Lab).
The huge advantage in this scenario is that a projected interface can never become “unclean”, so sterilizing the table is sufficient. The system was evaluated successfully: while the mean time required and the mean number of minor errors remained almost the same, the mean number of major errors was reduced by almost 63% in comparison to the paper-based approach used before.
3.4.3 Assistive Systems Using Gamification
Gamification is not a completely new concept but builds on established approaches like serious games (see chap. 3.3). However, as described at the beginning of this chapter, the requirements for new technologies or concepts to be integrated in production environments are high: ideally the new approach is described in an established industry standard. If that is not the case, usually “safe and slow” is preferred to “new and intuitive”.
Even if new interaction techniques (currently mainly touchscreens) are implemented,
the current assistive systems used in production environments are purely functional:
they display assembly instructions to decrease the workers’ cognitive load and reduce
the sources of errors like the use of wrong tools.
Since the enormous success of attractive mobile devices has been noticed also by the manufacturers of assistive systems for production, the awareness that user experience (UX) is important is growing. However, making work more attractive or “increasing fun” have so far not been design goals for tools used in the industrial domain. For this reason, to our knowledge, apart from the research presented here assistive systems in production have not yet been influenced by gamification.
Although this work focuses on the domain of production, it is helpful to look at other related domains. There have been several efforts to integrate gamification into work processes, especially in the service sector. Reeves and Read have described what “ingredients” help to gamify work and thus increase engagement (Reeves & Read, 2009); their work meticulously maps existing game elements like avatars, leaderboards, leveling and reputation to general business processes.
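How such a mapping of game elements to work might look can be illustrated with a minimal scoring sketch; the weights, the speed bonus and the level size are illustrative assumptions, not the scoring of the CAAS framework presented later in this work:

    # Minimal sketch of mapping work results to game elements: correct and
    # fast work earns points, points accumulate to levels. Weights and level
    # size are illustrative assumptions.

    def score_task(duration_s, target_s, errors, base=100, speed_bonus=50):
        points = base - 25 * errors            # quality dominates the score
        if errors == 0 and duration_s <= target_s:
            points += speed_bonus              # speed only pays without errors
        return max(points, 0)

    def level(total_points, points_per_level=500):
        return total_points // points_per_level + 1

    total = sum(score_task(d, 40, e) for d, e in [(38, 0), (45, 1), (39, 0)])
    print(total, "points, level", level(total))  # -> 375 points, level 1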
The approach is straightforward: several decades ago, frequent flyer programs23 or management approaches like “management by objectives” already implicitly took a step into the world of games. In gaming, missions and goals need to be stated explicitly to make them transparent for the players and measurable for the software. Thus it is not surprising that gamification was very well received in business contexts: in 2011 the research company Gartner predicted that 70 percent of Global 2000 businesses would manage at least one “gamified” application or system by 2014 (Cowie, 2013). Indeed such predictions and the corresponding high market volumes become more reasonable if “employers can use Gamification to incentivize employees by establishing clear goals and rewarding those employees that achieve those goals” (Cowie, 2013). In this very broad understanding gamification is primarily a visualization of management by objectives.
Castellani et al. illustrated how gamification can be used in work environments like call centers to help agents and supervisors manage their performance (Castellani,
23 The very first frequent-flyer program was created in 1972 by Western Direct Marketing for United Airlines. It already awarded members special bonuses. In 1979 Texas International Airlines created a program that used mileage tracking to give “rewards” to its passengers.
Hanrahan, & Colombino, 2013). They also describe a private social network designed
by PlayVox for contact centers including a gamified training system. The system has
been advertised by quoting the customer GroupOn Latin America as follows:
“PlayVox lets us detect and make a quick diagnosis of underperforming agents or
those who ignore certain important procedures in serving our customers.” Here the
emphasis clearly lies on making life easier for managers: gamification is used as a
tool to find and dismiss underperforming employees. With respect to the goals men­
tioned in the flow approach (see chap. 2.4.3) this might be seen as a perversion of
gamification. However, most games are designed to have both winners and losers –
so while the application of gamification in this example surely is unethical, it is not
unnatural.
The fascination with the gamification approach in the service sector seems to be fueled by the increased measurability. Given the comparatively good measurability of work in production, both on the process level (see chap. 2.6.1 Measuring Work Processes with MTM) and on the results level, it is probably due to the different mindset of people working in the industrial domain that gamified assistive systems have not been implemented there so far.
As the discussion of the approaches in the service sector shows, new issues arise as
soon as gamification is implemented in work contexts. Apart from obvious potentials
of misuse, there are structural and even philosophical questions. Recently the concern
was raised that replacing intrinsic rewards with explicit ones may in the long run re­
duce work motivation (Finley, 2012). This shows the ethical dimension of assistive
systems in work contexts in general and specifically of those using gamification. This
dimension is discussed in greater detail in the next sub-chapter as well as in the chap­
ters 2.3 Ethical Dimensions, 8.2 Ethical Implications: Towards Humane Work and
8.3.2 Ethical Perspectives.
3.5 Ethical Standards for Assistive Technology
To be useful assistive technology needs to “come close” to the user. The extensive use
of sensors described in the previous sub-chapters opens possibilities for a detailed
physical and potentially also emotional surveillance and guidance – a quantified self.
The potential pervasion of our lives by new sensor-based assistive technology – like
the context-aware assistive systems (CAAS) presented in this work – requires that
technical possibilities are discussed with experts from other disciplines. The early in­
tegration of an ethical perspective into the design and development process seems to
be an adequate approach.
The recently proposed MEESTAR model is a first attempt to formalize this co-operation. It describes the “ethical evaluation of socio-technological arrangements” in the domains of care and health (Manzeschke, Weber, Rother, & Fangerau, 2013).
Figure 28: MEESTAR-model by Manzeschke et al. 2013.
Image courtesy of Arne Manzeschke, TTN-Institute.
The approach separates three layers of analysis24: the societal, the organizational and
the individual. Within these layers seven topics are addressed: care, self-determina­
tion, security, justice, privacy, participation and self-concept.
Based on this analysis, the approach differentiates between four ethical verdicts:
I. The application is completely uncritical from an ethical point of view.
II. The application is ethically sensitive; however, the issues can be addressed in practical application.
III. The application is ethically highly sensitive; it either has to be permanently monitored or should not be introduced.
IV. The application should be rejected from an ethical point of view.
The distribution of verdicts alone (one neutral, three critical) already shows that the model focuses on negative effects; it explicitly states that “positive effects are not offset directly with MEESTAR” (Manzeschke et al., 2013, p. 13). This emphasis on potential negative implications also reflects the model’s focus on care and health: in the domain of work, positive effects (e.g. on performance and motivation) would structurally be ranked as more important – for example, in industrial contexts slight negative effects have always been tolerated if the overall productivity was increased.
Still, the MEESTAR model is a first approach to standardizing the ethical evaluation of assistive systems. Thus it provides important guidelines that can potentially be adapted for CAAS in the production domain.
24 Author’s translation in accordance with Arne Manzeschke. The original German terms are provided in Figure 28.
4 REQUIREMENTS
This sub-chapter is based on the following publication:
Korn, O., Schmidt, A., & Hörz, T. (2012). Assistive systems in production environ­
ments: exploring motion recognition and gamification. In PETRA ’12 Proceedings of
the 5th International Conference on PErvasive Technologies Related to Assistive En­
vironments (pp. 9:1–9:5). New York, NY, USA: ACM. doi:10.1145/2413097.2413109
Requirements engineering in our understanding means bringing the technological pos­
sibilities together with the user requirements and the task requirements. This process
is guided by established standards in human-computer interaction (HCI). Since context-aware assistive systems (CAAS) gather sensitive and detailed user data, ethical
questions are also taken into account.
4.1 Standards for Designing Interactive Systems
On an abstract level CAAS in production environments just represent a specific im­
plementation of interactive systems. Thus the first step in their design was inspecting
generic HCI standards and guidelines and seeing how they can be applied to CAAS.
We will briefly introduce three influential approaches as systems of reference for the
requirements.
An established standard in HCI is the set of requirements or “basics of dialogue design” described in ISO 9241 part 110:
- Suitability for the task
- Self-descriptiveness
- Controllability
- Conformity with user expectations
- Error tolerance
- Suitability for individualization
- Suitability for learning
A second influential HCI standard is Ben Shneiderman’s “Eight Golden Rules of Interface Design” dating back to the 1987 first edition of the textbook Designing the User Interface: Strategies for Effective Human-Computer Interaction (Shneiderman, 2010):
1. Strive for consistency.
2. Enable frequent users to use shortcuts.
3. Offer informative feedback.
4. Design dialog to yield closure.
5. Offer simple error handling.
6. Permit easy reversal of actions.
7. Support internal locus of control.
8. Reduce short-term memory load.
A third pillar of HCI engineering is Jakob Nielsen’s “Ten Usability Heuristics for User Interface Design” from the influential 1993 book Usability Engineering (Nielsen, 1993):
- Visibility of system status
- Match between system and the real world
- User control and freedom
- Consistency and standards
- Error prevention
- Recognition rather than recall
- Flexibility and efficiency of use
- Aesthetic and minimalist design
- Help users recognize, diagnose, and recover from errors
- Help and documentation
These “basics” or “rules” (which are more elaborate in the original texts than the short
versions cited here) clearly and efficiently describe how HCI generally has to be de­
signed. Many of these standards are self-evident, e.g. suitability for the task (ISO 9241
part 110), “strive for consistency” (Golden Rule) and “consistency and standards”
(Usability Heuristic).
However, standards change slowly; the ones presented here originate from the late eighties and early nineties. When they were established, the authors predominantly formulated them for the systems in use at that time. Thus new forms of interaction like natural interaction (NI) might not be addressed and require alterations – and while the standards provide a proven and tested guideline for HCI, both the intended use and the intended users of CAAS give rise to several constraints and adaptations. These are described in detail in the following sub-chapters.
4.2 Constraints and Adaptations
There are three major constraints or challenges when designing CAAS for production
environments: the focus of interaction, the demand for total quality in production en­
vironments and the ideal of universal design or universal access. Each challenge re­
sults in specific requirements.
4.2.1 Focus of Interaction
The standards described in the previous sub-chapter implicitly picture users who consciously interact with some kind of interface. However, in a production environment the main target of a user’s interaction is not a software interface but the current work component. While human-machine-interaction (HMI) plays an important role when steering advanced machines by computerized numerical control (CNC, see chap. 2.6.3 Digital and Virtual Factory, Cyber-physical Systems), manual assembly processes are much less digitalized. According to the authoritative guideline VDI 2860 Technology for Assembly and Handling25 (VDI Verein Deutscher Ingenieure, 1990), the central activities are joining, handling, fitting, controlling and auxiliary functions26 like labeling. Interacting with assistive systems is not one of them, although it might be considered an auxiliary function.
Although visual elements like instructions or even technical details can be shown on screens or projections close to the working area, the user’s focus will be distracted from the manual work if these elements are too prominent. For CAAS to be useful they have to be discreet and “stay in the back” while they are not needed. This results in the primary requirement:
R1: implicit interaction
Some form of implicit interaction has already been realized in the industrial domain: there are systems with gloves transmitting the hands’ positions by ultra-waves. While these have certain shortcomings like tracking just one point in the glove’s center (see chap. 3.4.1 Assistive Systems Using Motion Recognition), they surely are a sort of motion tracking. Obviously R1 immediately results in a derived, more technical requirement R1’:
25 Author’s translation. German original: Montage- und Handhabungstechnik; Hand­
habungsfunktionen, Handhabungseinrichtungen; Begriffe, Definitionen, Symbole.
26 Author’s translation. German original: Fügen, Handhaben, Justieren, Kontrollieren, Hilfsfunktionen.
R1’: motion recognition
Implicit or natural interaction (NI) allows the user’s regular movements to become
the predominant input required by CAAS. The technologies required to achieve this
have been described in the state-of-the-art section (see chap. 3.1 Motion Recognition).
The implementation of motion recognition automatically addresses several require­
ments: self-descriptiveness (ISO 9241 part 110), “informative feedback” (Golden
Rules) and “recognition rather than recall” (Usability Heuristic); these are adapted to
production environments with the following second derived requirement:
R1’’: detection of the current work state and speed
Just by watching the worker’s picks, motion recognition allows the CAAS to identify the current process within a sequence and thus the product state. Also, the time between picks can be measured and thus the work speed can be deduced for each process. The question whether this information is just used internally by the CAAS or communicated to the worker is addressed later in this chapter (R6: protect the user’s personal data).
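To make R1’’ more concrete, the following minimal sketch shows how pick events detected by motion recognition could be turned into a work state and an average cycle time. All class and method names are illustrative assumptions, not part of the actual implementation described later (ASED):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of R1'': deriving the current work state and speed from pick events.
    class PickAnalyzer {
        private final Deque<Long> pickTimes = new ArrayDeque<>();
        private int currentStep = 0;

        // Called by the motion recognition layer whenever a pick is detected.
        void onPick(long timestampMillis) {
            pickTimes.addLast(timestampMillis);
            currentStep++; // each detected pick advances the modeled work state
            if (pickTimes.size() > 10) pickTimes.removeFirst(); // sliding window
        }

        // Average time between the last picks, i.e. the current work speed.
        double averageCycleSeconds() {
            if (pickTimes.size() < 2) return Double.NaN;
            long span = pickTimes.peekLast() - pickTimes.peekFirst();
            return span / 1000.0 / (pickTimes.size() - 1);
        }

        int workState() { return currentStep; }
    }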
Next to implicit interaction a way to reduce distraction from the center of interaction
and thus “support the internal locus of control” (Golden Rule) is moving elements of
interest (e.g. instructions or visual feedback) closer to this center. While it would be
disadvantageous to integrate a monitor into the work plate, the use of projection has
been extensively described for office environments and recently has also been applied
to the industrial domain (see chapters 3.2 Projection, 3.4.2 Assistive Systems Using
Projection). This results in the following requirement:
R2: projection of information
4.2.2 Total Quality
A constraint that originates from the production domain is the goal of reaching 100%
quality or “total quality”. While this requirement cannot be met in research prototypes
it is important to consider that it is an actual requirement in the industry.
So when the “basics of dialogue design” require error tolerance, that is something alien to production management. Many suppliers are required to sign agreements guaranteeing a quality of 99.9% or above, and they face severe penalties if these requirements are not met. So if established control instances in production like cameras are to be replaced or supplemented by CAAS, the error controls offered are essential.
Errors are also an issue in the HCI standards described, most elaborately they are ad­
dressed by Nielsen’s request to “help users recognize, diagnose, and recover from
errors” (Usability Heuristic). This results in the following requirement:
R3: error detection
While it is comparatively easy to use the depth data generated by motion recognition to check if a box was accessed (and thus to model state-of-the-art systems using light grids), other processes are highly demanding. For analyzing the intricate movements typical for manual production, CAAS require finger detection. As discussed (see chapters 3.1 Motion Recognition, 6.3.1 Technical Restrictions and Solution), the limited resolution of the motion sensors available at the time of the prototype implementation left this requirement a subject of future research.
A potential alternative or complement to finger tracking discussed for future research is the implementation of object recognition: the image of the correctly assembled product can be compared with the actual product (Funk, Korn, & Schmidt, 2014a, 2014b). However, this pattern recognition approach is complicated by several everyday challenges in manual assembly like small objects with minimal differences or heavily jointed products. No matter how the challenge of detecting product-related errors will be solved – R3 results in the following secondary requirement for the subsequent realization of CAAS:
R3’: finger recognition / object recognition
A future CAAS with finger and / or object recognition would also address another problem: the “easy reversal of actions” (Golden Rule) is difficult to attain in production where components often are joined permanently. A system that detects errors in the making would offer “error prevention” (Usability Heuristic). Thus it would prevent products that have already progressed far along the value chain from becoming waste – since assembly often is the last step before packaging.
4.2.3 Universal Design
The general HCI standards also have to be adapted to the targeted users of CAAS – in
the first step elderly and impaired persons in production. These workers have specific
physical and cognitive needs (see chap. 2.2 Demography and Targeted Users).
However, in a second step future CAAS might be used by everyone working in pro­
duction (see chap. 8.3 Future Research). So although there are constraints resulting
from the primary target group, the overall aim is to create a universal design and uni­
versal accessibility for CAAS.
Several important design principles have already been mentioned in the first sub-chap­
ter. However general standards like “strive for consistency” (Golden Rule) are not
explicitly transferred into the CAAS requirements unless they have to be adapted.
This is the case for “self-descriptiveness” (ISO 9241 part 110) and “offer informative
feedback” (Golden Rule). Since the targeted users – both impaired and un-impaired –
do not primarily interact with the CAAS, its information density should remain low.
This is described graphically by the desktop metaphor (see chap. 2.4.2 Cognitive
Workloads) and summed up by the Golden Rule to “reduce short-term memory load”.
This results in the following requirement:
R4: adaptation to the user’s competence level
Since the time a user requires to complete a certain process is known from R1’’ (detection of the current work state and speed), the system allows an automatic measurement of work actions similar to methods-time-measurement (see chap. 2.6.1 Measuring Work Processes with MTM) on a macro level27. Thus the CAAS measures how fast a user works and detects changes in work speed. These changes will then lead to some form of adaptation (incorporating “suitability for individualization” from ISO 9241 part 110 and the usability heuristic “flexibility and efficiency of use”). For an adequate scaling the observed changes have to be interpreted correctly – a drop in speed might result from boredom or exhaustion or even a sudden cognitive problem. This process is described in the following chapters: 5.3 Flow Curves, 5.4 Model for Context-Aware Assistive Systems, 6.5.1 Designing Flow and 6.5.2 Implementing Flow.
Like every system that needs to analyze, model and predict user behavior, CAAS ben­
efit from additional data sources. The disambiguation of causes for observed behavior
is simplified by meeting the following derived requirement:
R4’: detection of excitement level and type
Knowing whether the user is excited and whether this excitement is positive (e.g.
pride, relaxation) or negative (e.g. overexertion, fear) is extremely helpful when the
CAAS is adapting to the user. This requirement could be fulfilled by detecting the
user’s heart rate and / or by analyzing facial expressions. In the prototype described
here this requirement has not been addressed, so it remains a topic of future research.
When it comes to adaptation there is an additional requirement resulting from both
ethical and economic considerations:
R5: integration of motivating elements
Work in manual production is repetitive and demanding at the same time: after dozens
of identical iterations of the same production sequence a slight change (e.g. another
color of a label) must break the routine immediately, because otherwise an error will
occur. Therefore it is economically sensible to change the monotonous nature of this
type of work by adding change and challenge. The integration of motivating elements
from game design (gamification) will increase the users’ self-efficacy by visualizing
27 In combination with finger detection even MTM-3 could be realized implicitly.
the current status of work, thus also addressing the “visibility of system status” as well
as the “match between system and the real world” (Usability Heuristics). In a second
step gamification potentially will enhance work satisfaction which is ethically desira­
ble – if the positive influences can be proven to be long-term effects (see chap. 8.3.2
Ethical Perspectives).
Several of the presented requirements – from motion recognition to emotion detection
– analyze the user in an unprecedented level of detail. While this raises ethical ques­
tions, the corresponding technical requirement primarily is “simply” data security:
R6: protect the user’s personal data
While this requirement seems obvious, data security is an issue in a world that strives
for “continuous data management” (see chap. 2.6.3 Digital and Virtual Factory,
Cyber-physical Systems). As the requirements study shows, most enterprises presup­
pose that a CAAS will be integrated in the existing data network structures. At the
same time they are aware of the fact that process-integrated sensor-based systems pro­
duce a different quality of data than the current solutions.
The requirements are summarized in the table below:
Table 5: Requirements for an ideal CAAS.

No.     Requirement
R1      implicit interaction
- R1’   automatic detection of movements
- R1’’  detection of the current work state and speed
R2      projection of information
R3      error detection
- R3’   finger recognition / object recognition
R4      adaptation to the user’s competence level
- R4’   detection of excitement level and type
R5      integration of motivating elements
R6      protect the user’s personal data
4.3 Requirements Study
The requirements for CAAS could be derived by extrapolating technical trends and adapting existing HCI approaches and solutions to the domain of industrial production. Since many of these approaches were new and alien to this domain, the views and problem awareness of potential stakeholders from the industry were evaluated in a study. 134 industrial companies in Germany were asked about present and future demands for assistive systems in industrial practice. 29 organizations (21.6%) answered the questionnaire28.
Most companies (in total 18) were small and medium-sized with under 1,000 production workers (62%), six were large with 1,000 to 10,000 workers (21%), four were very small with under 100 workers (14%) and one organization was very large with more than 10,000 workers.
All companies face the challenge of integrating workers with impairments which are either related to old age or to accidents. Four companies stated that the percentage of impaired workers is above 10%. While this study is too small to be truly representative of the industry as a whole, the fact that already 13.8% of the companies employ more than 10% impaired persons is highly relevant for the future demand for CAAS in work contexts. The high percentage of impaired workers found in our study is supported by the World Health Survey (World Health Organization, 2004) as portrayed in chap. 2.2.2: the average prevalence rate of significant functioning difficulties in the adult population was 15.6% (ranging from 11.8% in higher income countries to 18.0% in lower income countries) and the average prevalence rate for adults with very significant difficulties was estimated at 2.2%.
So even in a high income country like Germany about 13% of the working population are impaired. Considering the above-average risks and above-average physical exertion in production work, the percentage of impaired workers will very probably be significantly higher in this domain. Also, all companies clearly see that the number of elderly and impaired workers will increase further, so the level of problem awareness in the industry is very high.
28 This high response rate is a result of telephone interviews and personal interviews being used in addition to an online survey tool.
With regard to the reduced working memory of impaired persons (see chap. 2.4.2
Cognitive Workloads) and R4 (adaptation to the user’s competence level) the follow­
ing question was asked: How many processes or sub-tasks typically have to be com­
pleted by one worker at a single workplace?
Figure 29: Number of processes at a single workplace (bar chart; x-axis: number of processes per worker in the categories 1‐5, 6‐10, 11‐15, 16‐20 and > 20; y-axis: number of companies; values: 9 companies / 31% handle more than 20 processes, the remaining categories account for 8 / 28%, 5 / 17%, 4 / 14% and 3 / 10%).
As Figure 29 shows, the number of processes varies strongly. Still, in the largest group of companies (31%) the workers have to handle more than 20 processes within a production sequence. However, there are product variations and usually also completely different products: in 29% of the companies changes in the work sequence occur hourly, in 24% within a shift, in 14% daily and in 24% weekly.
This means that 90% of the production workers will have at least weekly changes, so
to work efficiently a worker will usually need a cognitive mapping of several dozen
steps. With the goal of producing “total quality” it is not surprising that impaired
workers will need assistive systems support to do this demanding work. Also CAAS
would probably allow more elderly workers to continue this line of work when mental
capacities start to decrease.
In spite of their potential benefits, only 50% of the companies asked are familiar with assistive systems supporting production work. 11 out of the 14 companies aware of such systems also use them – i.e. almost 80%. However, the assistive systems used are state-of-the-art systems with little process-orientation and sub-optimal forms of human-computer interaction (see chap. 3.4 Assistive Systems at Workplaces).
The companies asked only used assistive systems controlling the workers’ picks from
boxes (pick-by-light). More advanced systems like hand trackers based on ultra-waves
or systems which use the projection of information were not mentioned.
Figure 30 shows the features most requested from a new kind of assistive system:
Figure 30: Requested features of a new assistive system (bar chart; y-axis: number of companies; requested features: detect errors, control picks, control the number of elements picked, only instruct on error, and train new workers; bar values: 21 / 75%, 19 / 68%, 16 / 57%, 14 / 50% and 9 / 32%, with error detection being the most requested feature).
The most important industry requirement is the detection of errors (R3: error detec­
tion). However, this requirement is met by current assistive systems only on the level
of picks (i.e. a light barrier checks if a container was accessed). For error detection on
the product level, motion tracking systems with finger tracking (R3’ finger recogni­
tion / object recognition) are required. Although such hand tracking systems have been
realized using ultra-wave-based tracking (see chap. 3.4.1 Assistive Systems Using
Motion Recognition) these solutions do not allow the tracking of fingers or objects.
Thus from a technical standpoint the introduction of motion tracking cameras is inevitable. This also concerns the economic sphere: ultra-wave-based systems currently cost about ten times the price of systems with depth cameras. This is especially important when considering the finding that 16 of the 25 companies asked (64%) would only be willing to pay between €3,000 and €7,000 for a new assistive system (and five companies, i.e. 20%, would even pay less). In consequence, motion recognition is currently the only way process-oriented assistance can be realized economically.
Other potential benefits of using motion recognition were highly appreciated: 9 out of
29 companies (31%) found an assistive system that can give advice on ergonomic
issues “highly attractive” and 17 companies (59%) considered it “attractive”. When
asked about an assistive system that uses motivating elements (R5) like achievements
or high scores adapted from game design, surprisingly 3 out of 28 (11%) found this
perspective “highly attractive” and 16 (57%) found it “attractive”. So a majority of
the production companies was interested both in ergonomic feedback and gamification.
All these features – even the essential requirement to detect errors – require motion
recognition (R1’). However, when asked about “cameras” being installed at work­
places only three out of 28 (11%) companies found that unproblematic, whereas 11
(39%) found it “critical” and 10 (36%) perceived cameras as “very critical”; four com­
panies even felt their installation at the workplace was “impossible”. In this context it
is important to note that 20 out of 29 companies (69%) found that a new assistive
system needs to be connected to the company’s existing data networks like the pro­
duction planning system, whereas only nine companies (31%) would be fine with a
stand-alone system.
If the connection to the company network is seen as essential, a camera naturally becomes a great hindrance because employees could unknowingly be visually supervised. Thus for the establishment of CAAS in production a clear technical separation of the user-centered camera-based system from the company’s existing product-oriented systems will be essential (R6: protect the user’s personal data). Only a system with a “black-boxed” camera will be acceptable and in accordance with ethical guidelines (see chap. 8.2 Ethical Implications: Towards Humane Work).
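A minimal sketch of this “black-boxing” principle follows; the interface to the company network and all names below are illustrative assumptions, not part of the actual system:

    // Sketch of R6: raw depth or video frames never leave the assistive system;
    // only anonymous, product-oriented events cross the boundary to the network.
    interface CompanyNetwork {
        void send(String event);
    }

    class BlackBoxedCaas {
        private final CompanyNetwork network;

        BlackBoxedCaas(CompanyNetwork network) {
            this.network = network;
        }

        // Raw frames are analyzed internally and discarded immediately afterwards.
        void onDepthFrame(short[] depthFrame) {
            // ... motion analysis only; the frame is never stored or transmitted
        }

        // Only the product-related result of the analysis is forwarded.
        void onStepCompleted(int stepNumber, boolean withinTolerance) {
            network.send("step=" + stepNumber + ";ok=" + withinTolerance);
        }
    }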
5 MODEL AND ARCHITECTURE
This chapter is based on the following publication:
Korn, O., Funk, M., Abele, S., Hörz, T. & Schmidt, A. (2014) Context-aware Assis­
tive Systems at the Workplace. Analyzing the Effects of Projection and Gamification.
In PETRA ’14 Proceedings of the 7th International Conference on PErvasive Tech­
nologies Related to Assistive Environments. New York, NY, USA: ACM.
In the following we describe the model of an assistive system that addresses the requirements described (see chap. 4): a context-aware assistive system (CAAS) for workers in production environments. The model draws on three existing concepts from different domains: the established HAAT-model for designing assistive systems (see chap. 2.4.1), the concept of flow (see chap. 2.4.3) and the framework for an adaptive game system.
5.1 Adapted HAAT-model
The first basis of the CAAS-model is the established HAAT-model (see chap. 2.4.1
HAAT-model) which describes the basic components and functions of an assistive
system. The new model focuses on improving three aspects:
- the human-technology interface is realized by projection
- the environmental interface is realized by natural interaction (NI) based on human body tracking
- the activity output is enriched by gamification
NI implies real-time motion recognition (requirement R1’) and thus almost real-time intervention – an important requirement in work contexts, where errors can lead to accidents. Using NI also led to an addition to the model (Figure 31: Adapted HAAT-model, green dashed arrow): since the user’s body becomes the direct controller without the need for purposeful or even conscious interaction, the system’s environmental interface directly interacts with the user. This implicit interaction (R1) is helpful because errors (R3) will often be committed unconsciously and without notice.
Figure 31: Adapted HAAT-model.
At the same time motion-based implicit interaction opens a channel for continuous
performance analysis. Movement is one of the data sources which can be used to
model the user’s state and scale the challenge level to meet his or her current level of
competence (R4). This is important because especially with impaired users the per­
formance level can change significantly several times within a single day due to the
strong variation of physical and mental states.
The scaling of the challenge level is also crucial for the implementation of context-specific feedback like the gamification component: if activities are designed to create and preserve a flow state (see chap. 2.4.3) they have to adapt to changing competence levels.
The idea of adapting the level of challenge to the user is well-established in a domain where flow is not an optional feature but the major aim of the development process: games engineering. An additional advantage is that gaming approaches have always been trying to incorporate and measure the emotional side of the user (Sykes & Brown, 2003), so they directly address the requirement R4’ (detection of excitement level and type).
5.2 Framework for an Adaptive Game System
The second basis of the CAAS-model is an approach to player-centered game design. It aims “to provide a more appropriate level of challenge, smooth the learning curve, and enhance the gameplay experience for individual players regardless of gender, age and experience” (Charles et al., 2005, p. 1). The authors also imply that such adaptations “decrease task-based failure and error rates among users” – which is exactly what assistive systems at workplaces are to achieve. They propose a “potential framework for an adaptive game system”:
Figure 32: Potential framework for an adaptive game system by Charles et al. (diagram of the on-line adaptive game system: models of player types and player preferences feed into a cycle that monitors player performance, adapts the game to individuals, measures the effectiveness of adaptation and re-models the player types).
The framework first integrates the player types – typically these are modeled as
“Bartle Quotients”, referring to combinations of the four basic types of players (killer,
socializer, achiever and explorer) as proposed by Richard Bartle (Bartle, 1996). Player
preferences include preferred play styles, e.g. “casual” versus “hardcore” – a prefer­
ence which can easily be met by adjusting the difficulty level.
The player’s performance, i.e. his or her behavior in the game world is then used to
adapt the game. This can be achieved by spawning new enemies to increase challenge
or by spawning a small group instead of a large one to reduce challenge. As the game
allows to assess the results while the player is responding to the adapted challenge it
is possible to measure the effectiveness of the adaptation and potentially come to a
new model of the player. As an example a player who usually follows an aggressive
play style (killer) might suddenly start to gather items (explorer), so the game should
adapt to the new play style and spawn more collectables.
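The cycle just described can be condensed into a few lines. The following sketch reduces the framework’s loop to a scalar model of skill and challenge; the smoothing factors and thresholds are illustrative assumptions, not values from Charles et al.:

    // Sketch of the on-line adaptation cycle: monitor performance, re-model the
    // player, adapt the game and measure the effectiveness of the adaptation.
    class AdaptiveGameLoop {
        private double modeledSkill = 0.5; // current player model (0..1)
        private double challenge = 0.5;    // current challenge level (0..1)

        // Called once per evaluation interval with the monitored performance (0..1).
        void iterate(double observedPerformance) {
            // Measure the effectiveness of the previous adaptation: a performance
            // far below the model suggests the player was over-challenged.
            if (observedPerformance + 0.2 < modeledSkill) {
                challenge = Math.max(0, challenge - 0.05); // ease off before re-modeling
            }
            // Re-model the player from the monitored performance.
            modeledSkill = 0.8 * modeledSkill + 0.2 * observedPerformance;
            // Adapt the game to the individual, keeping the challenge slightly
            // above the modeled skill (e.g. by spawning more or fewer enemies).
            challenge = Math.min(1, 0.9 * challenge + 0.1 * (modeledSkill + 0.1));
        }
    }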
Clearly this model cannot directly address the situation of CAAS in production envi­
ronments. Like in the classic HAAT-model the monitoring of the performance (in
Cook & Hussey’s terms the environmental interface) in this framework is principally
expected to be realized explicitly by mouse and keyboard (or joystick or gamepad).
However, this difference is not relevant because games are designed to interact constantly with the user29 – and since motion tracking ideally provides 30 or more frames per second of “performance” in the form of movements, there is no fundamental difference in the frequency of interaction. If a system receives real-time or almost real-time feedback, the process of adapting or scaling the output can be almost instantaneous as well. In this respect NI can be seen as an instrument that allows game-like interaction in non-gaming contexts.
Especially in manual production, where the main target of interaction is not software
but the tools and the product, implicit interaction is the pre-requisite for high-fre­
quency interaction30.
An important aspect of this model is the way it describes the adaptation to the player:
an iterative process that leads to a permanent re-assessment of what the user might
want and what he or she currently is capable of. This high degree of flexibility and
responsiveness is what makes the “Framework for an Adaptive Game System” a suit­
able reference point for CAAS.
29 Even turn-based games adapt their graphic output to every mouse movement of the player and might even pre-calculate AI responses to the player’s current actions.
30 However, the motion recognition systems available at the time of the implementa­
tion did not provide the spatial and temporal resolution necessary to interpret intricate
finger movements as common in manual production, so the model’s implementation
had to be simplified.
5.3 Flow Curves
A high frequency of interaction combined with an iterative user re-assessment potentially allows creating a “perfect fit” for the user’s competence level. However, in achieving this, a major aspect would be neglected: to achieve flow, the user has to reach an area between “arousal” and “control” (see chapter 2.4.3 Flow) – but this area is neither a point nor is it stable. To be permanently motivating, an activity has to be designed in phases that partly arouse the user and partly give him or her the feeling of control, so that flow comes in waves or curves. The same applies to the gamification of assistive processes: to maintain flow, the challenge level has to hover above and below the perfect fit, which would be the user’s current performance level.
The modeling of the user type in production environments is simplified because there is only one activity related to assembly: building or, in Bartle’s terminology, “achieving”. Thus the player type or the “worker type” can be characterized to a large extent by the frequency and amplitude of the flow curve hovering between the two poles of arousal and control. Figure 33 illustrates this concept:
Figure 33: Flow curves (two exemplary curves for users A and B).
The two exemplary curves represent two users with different characteristics: User A
(blue) needs frequent longer phases of lower challenge level (control or even relaxa­
tion) whereas User B (red) needs frequent arousal to maintain flow.
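A minimal way to express such curves is a periodic oscillation around the user’s current competence, with frequency and amplitude encoding the “worker type”. The sinusoidal form below is an illustrative assumption, not the curve model of the actual implementation:

    // Sketch of a flow curve: the target challenge oscillates around the user's
    // competence so that phases of arousal and control alternate.
    class FlowCurve {
        private final double amplitude; // how far the challenge leaves the perfect fit
        private final double period;    // seconds per arousal/control cycle

        FlowCurve(double amplitude, double periodSeconds) {
            this.amplitude = amplitude;
            this.period = periodSeconds;
        }

        // Target challenge at time t for a user with the given competence level.
        double targetChallenge(double competence, double tSeconds) {
            return competence + amplitude * Math.sin(2 * Math.PI * tSeconds / period);
        }
    }

    // User A (long, calm phases):  new FlowCurve(0.10, 300.0)
    // User B (frequent arousal):   new FlowCurve(0.15, 90.0)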
A major challenge of the flow-oriented approach is interpreting the user’s actions correctly: a decrease in performance can be the result of boredom as well as of resignation due to overextension. Thus an ideal implementation of CAAS would not only support finger detection but also detect emotions.
5.4 Model for Context-Aware Assistive Systems
The CAAS-model combines the adapted HAAT-model with the framework for an adaptive game system and the concept of flow curves explained above. It is realized in two levels of detail: a more abstract version (Figure 34: CAAS-Model – abstract) where all sensory inputs and outputs are combined and a detailed version used to explain the implementation details (Figure 35: CAAS-model – detailed).
Figure 34: CAAS-Model – abstract (the human side – a cognitive generator determining the activity, a cognitive interpreter and an environmental interface with input and output – mirrors the CAAS side: an environmental interface with motion and emotion recognition as input and visual, audio and haptic channels as output; an interpreter modeling the human state (physical, emotional, mental), determining the position on the flow curve and adjusting the operation mode; and a generator producing adapted interventions in the form of instructions, feedback and game).
On the highest level the model separates the human (green area) and the context-aware assistive system (CAAS, blue). The model aims to show the parallels in processing information. Both the human and the CAAS share an environmental interface consisting of sensors on the input side and various actuators on the output side. The overall aim is that the CAAS input side receives enough data for the interpreter to create a fitting model of the current state of the user.
While the physical input can be measured with motion technology, the direct deriva­
tion of the emotional state from this data only works in obvious cases like trembling
or a stiff pose. However, changes in speed within similar tasks can be used to detect
a tendency towards arousal or control and boredom. Additional data sources increase
the model’s accuracy, e.g. the heart rate or the facial expressions (which both can be
extracted from a high resolution video).
Figure 35: CAAS-model – detailed (refines Figure 34 by resolving the sensory channels: motion recognition of body, hands and fingers; emotion recognition of heart, face and skin; output reaches the human via monitor and projection (eyes), speakers (ears) and wearables (skin)).
The model’s structural analogies continue on the processing side. The human and the
CAAS share an interpreter and a generator. The CAAS interpreter uses the data from
the environmental interface to model the human state. As an example a detected se­
quence of errors results in a lower mental score, a detected stress symptom shifts the
flow state towards arousal. This model of the user’s current state is then used to de­
termine his or her position on the flow curve, i.e. to analyze if the current trend moves
towards arousal or towards control.
This analysis eventually results in an adjustment of the operational mode. This could affect the speed of production, the number of steps to be assembled or even the product. Since a typical phase of a flow curve lasts several minutes, determining the suitable turning point is of the essence: reducing challenge and thus shifting towards control too early, for example, will reduce the momentum.
If for example the interpreter needs to determine if a worker reduces work speed be­
cause of boredom or because of exhaustion, specific data reflecting the emotional state
(e. g. nervous hand movements, sweat or a fixed gaze) increase the accuracy of the
modeled stress level. The behavior after an adaptation of the operation mode will in­
dicate if the human state was modeled correctly – in the above example increased
speed would indicate that the state was interpreted correctly as under-challenge while
reactions showing stress symptoms would indicate that the state was misinterpreted
and the person was in fact already above the upper challenge limit and outside of the
flow channel. Thus the iterative interpretation of behavior changes resulting from the adaptations can be used to correct errors in user modeling.
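The decision logic of the interpreter can be sketched as follows; the enum values, the normalization to a 0..1 range and all thresholds are illustrative assumptions:

    // Sketch of the interpreter cycle: model the human state from multimodal
    // observations, locate it relative to the flow channel and adjust the
    // operation mode accordingly.
    enum OperationMode { EASE_OFF, KEEP, CHALLENGE }

    class Interpreter {
        // All inputs are assumed normalized to 0..1 by the environmental interface.
        OperationMode adjust(double workSpeed, double errorRate, double stressLevel) {
            double mental = 1.0 - errorRate; // a sequence of errors lowers the mental score
            double arousal = stressLevel;    // stress symptoms shift the state towards arousal

            if (arousal > 0.7 || mental < 0.4) {
                return OperationMode.EASE_OFF;  // above the flow channel: reduce challenge
            }
            if (workSpeed < 0.4 && arousal < 0.3) {
                return OperationMode.CHALLENGE; // slow but calm: probably under-challenged
            }
            return OperationMode.KEEP;          // inside the flow channel
        }
    }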
The CAAS generator adapts the interventions: the gamification component (e.g. the
speed of visual elements or their size and positioning), the instructions (e.g. by in­
creasing the level of detail in situations of stress or after multiple error occurrences)
and the feedback (tone, length and modality). The adapted interventions are then dis­
tributed over various output channels: monitor or projection and also speakers if au­
ditory feedback is needed. Potentially the feedback and the gamification can even
influence wearables: e.g. a smart watch or a wristband could vibrate when errors are
detected.
The implementation of the environmental interface and the interpreter is described in
chapters 6.3 Motion Recognition and 6.4 Instruction and Performance Analysis, the
implementation of gamification is described in chap. 6.5 Gamification Component.
6 IMPLEMENTATION
This chapter is based on the following publications:
Korn, O., Schmidt, A., Hörz, T., & Kaupp, D. (2012). Assistive system experiment
designer ASED: A Toolkit for the Quantitative Evaluation of Enhanced Assistive Sys­
tems for Impaired Persons in Production. In ASSETS ’12 Proceedings of the 14th in­
ternational ACM SIGACCESS conference on Computers and Accessibility (pp. 259–
260). Presented at the ASSETS ’12, Boulder, Colorado, USA: ACM Press.
doi:10.1145/2384916.2384982
Korn, O. (2012). Industrial playgrounds: how gamification helps to enrich work for
elderly or impaired persons in production. In EICS ’12 Proceedings of the 4th ACM
SIGCHI symposium on Engineering Interactive Computing Systems (pp. 313–316).
New York, NY, USA: ACM. doi:10.1145/2305484.2305539
Korn, O., Brach, M., Schmidt, A., Hörz, T., & Konrad, R. (2012). Context-Sensitive
User-Centered Scalability: An Introduction Focusing on Exergames and Assistive
Systems in Work Contexts. In S. Göbel, W. Müller, B. Urban, & J. Wiemeyer (eds.),
E-Learning and Games for Training, Education, Health and Sports (Vol. 7516, pp.
164–176). Berlin, Heidelberg: Springer Berlin Heidelberg. Retrieved from
http://www.springerlink.com/index/10.1007/978-3-642-33466-5_19
Korn, O., Schmidt, A., & Hörz, T. (2012). Assistive systems in production environ­
ments: exploring motion recognition and gamification. In PETRA ’12 Proceedings of
the 5th International Conference on PErvasive Technologies Related to Assistive En­
vironments (pp. 9:1–9:5). New York, NY, USA: ACM. doi:10.1145/2413097.2413109
The requirements for an ideal realization of a context-aware assistive system (CAAS)
have been described (see chap. 4). A model of an ideal system meeting all the require­
ments has been provided in the previous chapter.
Due to the pioneering character of CAAS (at the time of the implementation in 2011, to our knowledge, assistive systems based on motion detection had never before been used in assembly contexts) it was evident that not all of the requirements could be addressed in the first implementation: finger detection, object detection (and thus real-time error detection) and the detection of mood31 or excitement were out of scope due to the limitations of the available sensors (see chapters 3.1 Motion Recognition, 6.3.1 Technical Restrictions and Solution). The implementation of the system and the technical challenges faced during that process are described in the following sub-chapters.
31 While the Kinect camera could be used to apply the Facial Action Coding System (FACS), the required top-mounted position did not allow capturing facial expressions.
6.1 System Overview
As Figure 36 illustrates, the CAAS prototype is based on physical components (the
experimental assembly table, the motion detection system and the projection system)
as well as software components handling the assistance, the gamification and the re­
cording of user data. These components are described in the following sub-chapters.
Figure 36: System overview (physical components forming the environmental interface: experimental assembly table, projector and monitor, motion recognition system; software components forming interpreter and generator: the Assistive System Experiment Designer (ASED) with its Designer and Runner modes plus the gamification component).
6.2 Physical Integration in the Work Environment
The design of the experimental assembly workplace is based on the design of regular
assembly tables as currently used in the industry (see chap. 2.6.2 Assembly Work­
places).
In order to test the effects of the two augmentations in an empirical study, the work
environment’s complexity was deliberately lowered. The aims were reducing distrac­
tions and allowing easy transportability while still being able to adjust the work sur­
face in height to provide equal conditions for wheelchair-bound persons. The resulting
experimental assembly table is described in Figure 37:
Figure 37: Experimental assembly table with motion recognition and projection (callouts: top cage holding the motion recognition sensor with video recording and the projector; adjustable monitor base able to support a touch screen to map state-of-the-art assistive systems; work area with boxes for assembly parts).
Like its industrial counterparts, the system has been constructed to meet the require­
ments of the authoritative VDI 2860 on Technology for Assembly and Handling (VDI
Verein Deutscher Ingenieure, 1990) described in chapter 2.6.2. It was made from alu­
minum profiles and weighs about 70 kilograms.
The base to the left can support a monitor as required to map state-of-the-art assistive
systems. The boxes are arranged horizontally to avoid occlusion by other boxes
mounted above. The table also had to be able to support a projector for the in-situ
projection of instructions and a motion detection system for the recording of the user’s
movements. Both are installed in the top cage mounted 1.4 meters above the table’s
plate to minimize distraction and provide sufficient lens coverage for the working
area.
6.3 Motion Recognition
6.3.1 Technical Restrictions and Solution
To track the assembly processes, motion sensors are used – and since spatial resolution is of the essence in tracking, the sensors needed to provide depth information. To decide which sensor was best suited for the prototype we tested the Asus Xtion (released September 2011), the Microsoft Kinect (released in November 2010) and the Softkinetic DepthSense 311 (released in December 2011).
While the first two sensors draw their capabilities from a chipset developed by the Israeli company PrimeSense and use structured light, the DepthSense 311 uses time-of-flight technology. We decided to use the Kinect because it combined a high resolution of the depth image (identical to the Asus Xtion and 16 times the resolution of the DepthSense 311) with a video camera (see chap. 3.1 Motion Recognition).
As described in the model, an ideal context-aware assistive system would recognize intricate movements including finger movements, detect objects and recognize human emotions. At the time of the implementation (mid 2012) none of the available low-priced motion recognition systems provided the spatial and temporal resolution necessary for the robust implementation of these features. Thus the prototype had to be simplified. The most striking consequence was that no real-time error detection could be realized.
A further technical restriction was the impossibility of using the skeletal joint detection provided by the existing middleware NITE and Microsoft SDK (see chap. 3.1 Motion Recognition). Neither API allowed rotating the Kinect or the Xtion because both assume that the sensor scans horizontal rather than vertical areas. This restriction also applies to a vertical mount, i.e. sensors facing downwards on the workspace. However, for assisting assembly processes a horizontal orientation is impossible32 because the sensor’s opening angle could not cover all relevant areas (depositing racks, working area and the worker’s hands).
This led to an architecture where the depth images generated by the sensor software
were used as the input while the skeletal joints extracted by the middleware were not
utilized. Instead of using the joints we created a new system of reference by analyzing
the changes of z-values. We called this the “adjoined spheres approach”.
32 Even the combination of several sensors in a joined 3D space would not have solved
the problem, because the worker’s picks – a basic requirement of assistive systems in
production solved by light grids in the state-of-the-art – could only be monitored ro­
bustly from a top view.
6.3.2 The Adjoined Spheres Approach
In the adjoined spheres approach movements are defined as passages through adjoined
spheres (see Figure 38). By adjusting the spheres’ radius the movement corridors or
trigger areas can be designed to fit both the production scenario and the competence
of the worker. For example a highly competent worker can be granted wider move­
ment corridors to avoid early interventions which would otherwise reduce the sys­
tem’s acceptance. In the following we briefly describe the technical implementation
of the adjoined spheres approach33.
Figure 38: Illustration of Adjoined Spheres Approach.
Although the approach was originally developed using spheres, the algorithm can be
adapted for other target areas or geometric bodies like cuboids or cylinders. These
target areas are not necessarily empty when the system is initialized – they typically
contain parts of objects the worker has to interact with, e.g. the boxes. Therefore it is
not sufficient to detect points inside the defined areas – the CAAS has to detect
changes with respect to the initial condition inside the area.
This can be implemented efficiently by calculating a representative number for the depth values or z-values inside of each sphere – the sum of all contained depth values. For a sphere with radius $r$ around the central point $P_{target}$ the calculation is:
33 Patent pending: Method for guidance and/or control of assembly and commissioning processes at workplaces [Verfahren zur Anleitung und/oder Kontrolle von an einem Arbeitsplatz auszuführenden Montage- und Kommissionierungsprozessen].
Equation 1: Z-values in sphere.
$$Z_{Sphere} = \sum Z_{measured} \;\Big|\; \sqrt{\left(x_{measured} - x_{target}\right)^2 + \left(y_{measured} - y_{target}\right)^2 + \left(z_{measured} - z_{target}\right)^2} \leq r$$
Using the reference value $Z_{Ref}$ of each target area, the sums are permanently recalculated and compared to the reference values:
Equation 2: Comparison with reference values.
$$Z_{Rel} = \frac{Z_{Sphere} - Z_{Ref}}{Z_{Ref}}$$
Each target area can have one of three states: occluded, neutral and occupied. The system infers a collision when $Z_{Rel} > 0.05$. This value proved stable enough in experiments to filter out measurement inaccuracies in the depth images. In an industrial application it will have to be adapted to the products and lighting conditions. A value of $Z_{Rel} < -0.05$ means that depth values inside the target area are missing – in this situation the system infers that the sphere is occluded. Figure 39 shows the tool developed for testing and optimizing the adjoined spheres approach:
Figure 39: Target area states: occluded (red), neutral (blue), occupied (green).
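In code, Equations 1 and 2 together with the thresholds given above amount to only a few lines. The following sketch iterates over a full point cloud for clarity (a real implementation would restrict the loop to the sphere’s bounding region); all names are illustrative:

    // Sketch of the adjoined spheres approach: sum the depth values inside a
    // spherical target area (Equation 1), relate the sum to a reference value
    // (Equation 2) and classify the area with the +/-0.05 thresholds.
    enum AreaState { OCCLUDED, NEUTRAL, OCCUPIED }

    class SphereArea {
        private final double cx, cy, cz, r; // target point and radius (camera coordinates)
        private double zRef;                // reference sum captured at initialization

        SphereArea(double cx, double cy, double cz, double r) {
            this.cx = cx; this.cy = cy; this.cz = cz; this.r = r;
        }

        // Equation 1: sum of the z-values of all points inside the sphere.
        double zSphere(double[][] points) { // each row: {x, y, z}
            double sum = 0;
            for (double[] p : points) {
                double dx = p[0] - cx, dy = p[1] - cy, dz = p[2] - cz;
                if (Math.sqrt(dx * dx + dy * dy + dz * dz) <= r) sum += p[2];
            }
            return sum;
        }

        void reference(double[][] points) { zRef = zSphere(points); }

        // Equation 2 plus the state thresholds.
        AreaState classify(double[][] points) {
            double zRel = (zSphere(points) - zRef) / zRef;
            if (zRel > 0.05)  return AreaState.OCCUPIED; // collision: something entered
            if (zRel < -0.05) return AreaState.OCCLUDED; // depth values are missing
            return AreaState.NEUTRAL;
        }
    }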
Since the interactive areas work independently of the skeletal joints used by the traditional middleware (see chap. 3.1 Motion Recognition), they can also be used for a basic object analysis. Figure 40 illustrates how the depth image allows inferring the three-dimensional orientation of an object:
Figure 40: Detecting three-dimensional object orientation with interactive areas (depth image, top view).
A restriction of the approach resulting from the limited resolution of sensors available
at the time of the implementation is that small-scale movements cannot be detected,
e.g. the number of revolutions when a screw is tightened manually. This would require
either a much more refined object interpretation or real-time interpretation of finger
movements. These advances – as well as a more detailed object recognition – are sub­
jects of future research (see chap. 8.3).
6.4 Instruction and Performance Analysis
Our basic research on motion recognition using depth data in combination with the
adjoined spheres approach (see chap. 6.3.2) allowed us to implement a system that
analyzes passages through interactive areas in real-time. Thus work processes can be
analyzed quickly by the interpreter – a prerequisite for context-awareness resulting in
adequate feedback or visualization.
Since the prototype was primarily used for testing the effects of the augmentations,
the software for performance analysis and instructions was called “Assistive Systems
Experiment Designer” (ASED). Although it has been developed with production en­
vironments in mind, ASED can be applied to measure human activities in a wide range
of work processes and potentially also in more generic scenarios where coordinated
movement is required.
ASED implements a designer mode and a runner mode. While the first is used to
design the experiments (mapping of work steps, positioning of interface elements) the
second documents a running experiment on video and by generating time stamps.
Both modes are discussed in greater detail in the following sub-chapters. With respect
to the model (see chap. 5) ASED implements the following elements:
- Environmental interface: input: motion recognition
- Environmental interface: output: in-situ projection and monitor
- Interpreter: model human state: physical
- Generator: instructions
The gamification component described later in this chapter handles three additional elements:
- Interpreter: model human state: emotional
- Generator: game
- Generator: feedback
Figure 41 illustrates the architecture of ASED, which was implemented in Java. Additionally, the open libraries OpenCV (Open Source Computer Vision Library) and OpenNI (Open Natural Interaction) have been used.
Figure 41: Architecture of the ASED software.
6.4.1 Visual Output and Projection
Some basic technical functions underlying the system handle the visual outputs in­
cluding the projection. To avoid discrepancies in the performance of users related to
minor interface issues, two elements were projected in all three test scenarios: the
controls for “next” and “previous” instruction and the indicators for picks (numbers
in front of the boxes emulating the established pick-by-light systems). To display
these interface elements the projector had to be active in all three scenarios. In conse­
quence ASED needed to be able to handle three visual output devices simultaneously:
- surveillance monitor: providing all relevant information to an observer
- instruction monitor: displaying instructions in the state-of-the-art scenario; displaying instructions and the gamification component in the gamification scenario
- projector: displaying the controls and the pick indicators in all scenarios; displaying the instructions in the projection scenario
Accordingly, the first challenge for the implementation of ASED was finding a technical solution that allowed connecting all three output devices simultaneously to the high-performance computer required for the real-time motion detection without producing glitches.
Due to size restrictions USB 3.0 had to be used as a graphic data channel. The only
device with the required DisplayLink-chipset available at that time (end of 2011) was
the “USB 3.0 Superspeed Dual Video Adapter” by Targus which was on sale only in
the USA and had to be imported.
Figure 42: Instructions on a monitor (left) and as a projection (right).
Once this technical issue was solved, the screen coordinates of all devices had to be
synchronized. This was especially demanding with the projector because the tilt re­
quired by the workplace setup resulted in a keystone distortion which had to be cor­
rected in real-time.
Since the distortion is not affine, the native components of the Swing toolkit could not apply the required transformation. Instead the OpenCV library, or rather its Java implementation JavaCV, was used. Unlike OpenCV, JavaCV does not offer a function to recognize a chessboard pattern, so the generic edge detection was used. This resulted in a semi-automatic three-step calibration process:
- an undistorted rectangle is projected on the workspace
- a picture of the recognized edges is shown on a monitor
- the user identifies the rectangle’s vertices (by clicking or touching)
When the points and their projections are identified, the projection matrix is calculated. The resulting CMAT (Concordance-Based Medial Axis Transform algorithm) is then used to deskew the image (see Equation 3).
Equation 3: Transformation of distorted projection using CMAT.
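Since Equation 3 is not reproduced here, the following sketch only illustrates the deskewing step under the assumption that the non-affine distortion is modeled as a standard perspective homography. For brevity it uses the official OpenCV Java bindings (the actual implementation used JavaCV); the four point pairs come from the calibration process described above:

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Point;
    import org.opencv.imgproc.Imgproc;

    // Sketch of the keystone correction: a perspective transformation matrix is
    // computed from four calibration point pairs and applied to every output frame.
    class KeystoneCorrection {
        private final Mat homography;

        // distorted: vertices identified by the user; ideal: the undistorted rectangle.
        KeystoneCorrection(Point[] distorted, Point[] ideal) {
            homography = Imgproc.getPerspectiveTransform(
                    new MatOfPoint2f(distorted), new MatOfPoint2f(ideal));
        }

        // Deskews one frame before it is sent to the projector.
        Mat deskew(Mat frame) {
            Mat corrected = new Mat();
            Imgproc.warpPerspective(frame, corrected, homography, frame.size());
            return corrected;
        }
    }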
6.4.2 Designer Component
The ASED designer component is used when specific production sequences are mapped for the CAAS. It allows mapping the steps of a production sequence and assigning causes and effects. The visual output is also administered by this component.
Each production step can hold an instruction, e.g. a picture of the product assembled
up to that point and any number of interactive 3D trigger areas. These “designer ele­
ments” are assigned as causes and effects. A cause can be the passage of a user’s hand
through a trigger area or just the exceeding of a time limit. An effect can be a sound
or a navigation to the next production step. The following class hierarchy shows the
designer elements implemented in ASED:
Figure 43: Class hierarchy of designer elements in ASED.
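The principle behind these designer elements can be illustrated with a reduced cause/effect sketch. The class names below are illustrative; the actual hierarchy in Figure 43 contains further causes and effects such as shockwave files, buttons and various trigger forms:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the cause/effect principle of the ASED designer elements.
    interface Cause { boolean isFired(); }
    interface Effect { void apply(); }

    // Cause: the time limit assigned to a step is exceeded.
    class TimeLimitExceeded implements Cause {
        private final long deadlineMillis;
        TimeLimitExceeded(long deadlineMillis) { this.deadlineMillis = deadlineMillis; }
        public boolean isFired() { return System.currentTimeMillis() > deadlineMillis; }
    }

    // Effect: navigate to the next production step.
    class NextStep implements Effect {
        private final Runnable navigate;
        NextStep(Runnable navigate) { this.navigate = navigate; }
        public void apply() { navigate.run(); }
    }

    // A production step holds an instruction and any number of cause/effect pairs.
    class ProductionStep {
        private final String instructionImage;
        private final List<Cause> causes = new ArrayList<>();
        private final List<Effect> effects = new ArrayList<>();

        ProductionStep(String instructionImage) { this.instructionImage = instructionImage; }

        void assign(Cause cause, Effect effect) { causes.add(cause); effects.add(effect); }

        // Polled each frame: fires the effect belonging to each triggered cause.
        void evaluate() {
            for (int i = 0; i < causes.size(); i++) {
                if (causes.get(i).isFired()) effects.get(i).apply();
            }
        }
    }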
Although the number of causes and effects including shockwave-files, buttons and
various trigger forms seems high, their implementation has been straightforward. The
implementation’s main challenge was integrating all augmentation scenarios (SotA,
Projection and Gamification) into one easy-to-use application.
As described in the previous sub-chapter, the system had to be able to administer three
visual outputs (instruction monitor, surveillance monitor and projection) because the
replication of state-of-the-art assistive systems required the use of a conventional
monitor. All views had to be integrated in both the designer and the runner component
since all scenarios had to be configured and tested in advance. To avoid re-implementing the panels in the Designer and the Runner component we used a SuperPanel class from which all other panels inherit. The relations between the panels and views are shown in the following class diagram:
Figure 44: Class hierarchy of panels used in ASED.
On a higher level of abstraction the ASED Runner and the ASED Designer are essen­
tially two different graphical user interfaces (GUI) steered by the OptionPanel as in­
dicated in Figure 41: Architecture of the ASED software. They share the same classes
and functions. Nevertheless the system requirements concerning visual outputs re­
sulted in a rather complex dashboard structure:
Figure 45: Screenshot of the ASED designer view.
The designer component’s dashboard is segmented into six major panels labeled clockwise in Figure 45. In the center there are four large panels.
1. The top left panel is called ‘Augmented Camera Preview’ (1) and shows an image of the work area which can be switched between video and depth mode as in Figure 45.
2. The top right panel is called ‘Projector Preview’ (2) and shows the projected image (if projection is part of the scenario). In Figure 45 the preview only shows the navigation elements “next” and “previous” which are projected in all scenarios (see previous sub-chapter).
3. The steps are displayed in the panel to the right (3).
4. The panel to the bottom right is called ‘Instruction Monitor Preview’ (4) and shows the instructions assigned to the current work step. In Figure 45 the area is empty because currently the navigation element “next” is selected which is part of the global layer.
5. The details of the selected step or the selected element within a step can be edited in the configuration area to the bottom left (5).
6. Finally to the left (6) there is a tool with different types of causes and effects to select from.
The bottom left panel (5) displays information on the selected designer element
and allows to edit it. This configuration area (Figure 46; 5 in Figure 45) allows
to name or delete elements, and select where they are to be displayed (projection,
instruction monitor, camera preview).
Figure 46: Screenshot: Configuring a 3D trigger area.
The sensitivity of the 3D trigger areas can be adjusted with a slider (‘Set Threshold when Box should trigger’). While the dimensions of objects in areas can be referenced (‘Reference Trigger Now’), their z-range (see chap. 6.3.2 The Adjoined Spheres Approach) can additionally be adjusted with a slider using a blue handle for the lower range and a red handle for the upper range. This is necessary for elements like the boxes: since the hand might cover the box while picking from it, the actual penetration of the trigger area by the fingers might be occluded.
For fine-tuning the calibration of trigger areas, the 'Augmented Camera Preview' panel can be maximized to cover all four central panels (Figure 45: 1, 2, 4 and 5):
Figure 47: ASED Designer with maximized “Augmented Camera Preview”.
Figure 47 shows how two obstacles activate the trigger areas placed above and slightly in front of the containers. The coverage to the left is 1.304, the coverage to the right is 0.966. Although the other trigger areas also indicate coverage (0.16 and 0.164 to the left and 0.34 to the right) they remain inactive due to the threshold. The screenshot shows step 7 of the scenario Projection. The instructions in Figure 47 are not assigned correctly (instructions 3 and 4 in step 7).
This shows how the Designer was used in practice: after the fine-tuning of the trigger areas, work steps were copied with their calibration values; related effects like the image of the instructions were then adapted.
6.4.3 Runner Component
The runner component of ASED is activated whenever the CAAS is used. It handles
the following tasks:
 Monitoring of trigger boxes by using the depth sensor.
 Generating an extensive automatic log file containing the times when trigger boxes are activated.
 Sending trigger events to the gamification component.
 Generating a video using the motion sensor's RGB camera.
As the list of tasks shows, the trigger box algorithm is central – it is used for interacting
with the main application, for communication with the gamification component and
for the logging. It strongly draws from the “adjoined spheres approach” (see chap.
6.3.2).
To implement these triggers, the PrimeSense OpenNI data (see chap. 3.1 Motion
Recognition) were accessed using the class VideoFrameGrabber partly developed
by the author ‘init1045a’ (see chap. 9.1 Implementation Details). Based on the video
and depth data the implementation of trigger boxes in Java was possible. In the runner
component trigger boxes are the technical realization of the adjoined spheres model.
The implementation is briefly described in the following.
First, the central point of a trigger box and its distance to the corners is calculated. The closer a covered point is to the center, the greater its value. The sum of these values results in a "filling level". A trigger box is maximally filled if a plane parallel to the z-axis through the center of the box is covered:
Equation 4: Computation of the distances to the center of a trigger box.
Additional details on setting up and checking trigger boxes are provided in the supplement section (see chap. 9.1 Implementation Details).
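As a minimal sketch of the filling-level idea described above (this is not the original ASED code; the normalization and the class layout are assumptions), the weighting and thresholding could look like this in Java:

```java
// Illustrative sketch of the filling-level computation: every depth point
// covering the trigger box contributes a weight that grows the closer the
// point lies to the box center; the sum is compared against a threshold.
class TriggerBox {
    final double cx, cy, cz;    // center of the box
    final double halfDiagonal;  // distance from the center to a corner
    final double threshold;     // normalized filling level needed to fire

    TriggerBox(double cx, double cy, double cz, double halfDiagonal, double threshold) {
        this.cx = cx; this.cy = cy; this.cz = cz;
        this.halfDiagonal = halfDiagonal; this.threshold = threshold;
    }

    /** Weight of a single covered point: maximal at the center, zero at the corners. */
    double weight(double x, double y, double z) {
        double d = Math.sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy) + (z - cz) * (z - cz));
        return Math.max(0.0, 1.0 - d / halfDiagonal);
    }

    /** Sums the weights of all covered points and compares against the threshold. */
    boolean isTriggered(double[][] coveredPoints, double maxFill) {
        double fill = 0.0;
        for (double[] p : coveredPoints) fill += weight(p[0], p[1], p[2]);
        return fill / maxFill >= threshold; // normalized filling level
    }
}
```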
The activities of the probands, the switching of instructions and the state of all trigger
boxes can be observed in the ASED runner’s dashboard: the surveillance monitor.
Figure 48: Screenshot of the ASED Runner surveillance monitor.
The screenshot of the ASED Runner surveillance monitor shows the 'projector preview' (1, top left), the 'instruction monitor preview' (2, mid left – empty because in the scenario instructions are projected into the workspace) and the 'augmented camera preview' (3, right). To the bottom right (4) there is a time counter. The button below allows counting up processes and sequences manually – an additional fallback system (next to the video) that proved helpful when participants frequently skipped processes. The area at the bottom (5) is used as a console and displays system information like starting times or detected actions.
Whenever a trigger box is activated, a log entry is generated. As the example log entry below documents, the screenshot above was taken after the worker had successfully assembled the third working step ('M3') and his hand had passed the trigger box of the 'resume' element in the navigation.
Equation 5: Log entry generated by ASED Runner.
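The original entry is not reproduced here; purely as a hypothetical illustration (all element and attribute names below are invented and do not reproduce ASED's actual log schema), an XML entry of this kind might look as follows:

```xml
<!-- Hypothetical illustration only: element and attribute names are invented,
     not ASED's actual log format. -->
<entry time="00:04:37.120" step="M3" trigger="resume" coverage="1.12" />
```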
While the principle of automated logging using the triggers sounds practical in theory, an undecided worker's hand hovering over a box can generate dozens of entries – in spite of a short latency which was implemented to prevent multiple trigger activations. To clean up the resulting long log files an analyzer software was developed (see chap. 7.3.1 Time Measurement).
In some cases where assembly parts were put back in the boxes, a clear log analysis was impossible, so the video documentation had to be used. Technically it was implemented with the help of the Kinect's video camera. The video was recorded using Xuggle, a Java library for handling video data. In ASED it was only used to time-code the buffered images created by the class VideoFrameGrabber and to encode them into an MPEG-4 video.
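The following Java sketch shows the typical Xuggle (Xuggler) encoding pattern with its IMediaWriter tool; the integration with VideoFrameGrabber's frame buffer and timestamps is an assumption for illustration:

```java
import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

// Sketch of the common Xuggler encoding pattern; how ASED obtained the
// buffered frames and their capture times is assumed here.
public class VideoEncoderSketch {
    public static void encode(BufferedImage[] frames, long[] timestampsMs,
                              int width, int height, String outFile) {
        IMediaWriter writer = ToolFactory.makeWriter(outFile);
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, width, height);
        for (int i = 0; i < frames.length; i++) {
            // Each frame is written with its capture timestamp (time-coding).
            writer.encodeVideo(0, frames[i], timestampsMs[i], TimeUnit.MILLISECONDS);
        }
        writer.close();
    }
}
```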
6.5 Gamification Component
While the integration of motion recognition was primarily a technical task, developing the gamification component was for the most part a design task, as is common in games engineering.
6.5.1 Designing Flow
The predominant game design question was: how can the concept Jane McGonigal calls "epic meaning" (McGonigal, 2011) at least partly be integrated into the rather repetitive work tasks in production environments? When it comes to games, most users talk about the "graphics" and the "sound" – so obviously these expectations had to be addressed. Less obvious, but probably more important, are the game mechanics, i.e. the way a game reacts to a user's inputs. In our case, the principal mode of interaction between the user and the CAAS has already been described in the model (see chap. 5).
However, when it came to designing the model's concrete implementation, there were no previous attempts at combining real production tasks with games we could have drawn from. Based on our knowledge of the impaired users (see chap. 2.2), the industrial scenario (see chap. 2.6 and chap. 3.4) as well as the requirements (see chap. 4), the design goal was to stay as close to the established tasks as possible. This limitation is mainly due to three reasons:
 The cognitive capabilities of the users are limited (see chap. 2.4.2 Cognitive Workloads) and complex stories or action elements might draw too much attention and cognitive resources away from the work process.
 Game-like reward structures are new to production environments, so obvious gaming elements like cartoon characters would reduce the acceptance rate among the decision makers in the companies (see chap. 4.3 Requirements Study) as well as among the users.
 Pagulayan et al. observe that "when comparing games to productivity software, there are principles and methods that can be successfully applied to both" (Pagulayan et al., 2012, p. 797). To take this argument further, there actually are several analogies between the real processes in manual production and gaming, e.g. building things up to become more complex and "powerful", time limits and quality controls. Exploiting these analogies is easier and more intuitive than establishing a new frame of reference.
The basis of the CAAS gamification component is the data generated by the Assistive Systems Experiment Designer (ASED). It allows measuring the time required for the assembly of each production component, defined as the time between two uses of the "forward" button. As mentioned before (see chap. 6.1 System Overview) this data is not supplemented by additional emotional data, as the prototype described in this work does not allow assessing the emotional state directly, e.g. by facial detection or heart rate sensors.
The basic design approach was to visually represent each work process. This was achieved by drawing from the classic tile-matching puzzle game Tetris developed by Alexey Pajitnov in 1984. Figure 49 shows a design study of the concept (left) where the work processes are color-coded and match the bricks. In the final version (right) the text element was omitted and the forms have been combined in a Tetris-like manner to show the product's progression. Also, the colors are derived from process duration rather than from process category.
Figure 49: Design study of the gamification element (left) and final version (right).
An essential requirement for gamification is that some kind of "fun" or motivation is accomplished. In the context of this work we consider this requirement achieved if a state of flow (see chap. 2.4.3 Flow) is reached and maintained.
The following table shows how the design approach addresses the conditions for achieving the flow state. It was first proposed in the article "Industrial Playgrounds" (Korn, 2012):
Table 6: Conditions for flow and corresponding design approach.

| condition | design approach |
| --- | --- |
| being involved in an activity with a clear set of goals | (1) macro level: complete a (flawless) assembly sequence; (2) micro level: complete the active process (=brick) as quickly (=green) as possible |
| good balance between perceived challenges and perceived skills | adaptation of difficulty level based on performance |
| task at hand must have clear and immediate feedback | (1) color changes and shadowing provide dual-code visual feedback; (2) sound integrates another sensory channel |
| the activity is intrinsically rewarding | (1) on the micro level: "getting a stone down in time" is immediately pleasing; (2) the final dissolution of the sequence of stones appeals to the basic human desire for order and completion |
In the following sub-chapter the implementation of the flow-oriented design approach
is described.
6.5.2 Implementing Flow
During a work process, the active brick’s color slowly changes from green to red. The
duration of this color change is derived from the user’s previous process times: if for
example a user completes a process in 8 seconds instead of the personal mean time of
10 seconds, this good performance will result in a dark green brick while a duration
of 14 seconds would result in a yellow brick. During the first assembly sequence when
no comparisons can be made, the system uses reduced process times based on methods-time measurement (see chap. 2.6.1 Measuring Work Processes with MTM).
From the second sequence onwards the duration of corresponding processes is compared. When a process is completed, the time difference to the last corresponding process is displayed in numbers. A percentage value is generated by dividing the current duration by the mean of the recent processes. The resulting ratio is used to color-code the feedback and to select a short audio comment. The following table lists the feedbacks used:
Table 7: Feedbacks of the gamification component

| Percentage of mean duration | Color | Audio feedback |
| --- | --- | --- |
| > 200% | dark red | snail pace |
| 175% - 200% | light red | very slow |
| 150% - 175% | orange | slow |
| 125% - 150% | yellow | below average |
| 100% - 125% | yellow green | quite good |
| 75% - 100% | light green | good |
| 50% - 75% | green | very good |
| < 50% | dark green | excellent |
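As a minimal sketch of this mapping (the original component was written in ActionScript 3; the Java rendering below and the method names are illustrative assumptions), the thresholds of Table 7 translate into a simple cascade:

```java
// Illustrative sketch of the feedback mapping from Table 7: the ratio of
// current duration to mean duration selects a color and an audio comment.
class FeedbackMapper {
    static String[] map(double currentSeconds, double meanSeconds) {
        double ratio = currentSeconds / meanSeconds; // 1.0 = exactly average
        if (ratio > 2.00) return new String[] {"dark red",     "snail pace"};
        if (ratio > 1.75) return new String[] {"light red",    "very slow"};
        if (ratio > 1.50) return new String[] {"orange",       "slow"};
        if (ratio > 1.25) return new String[] {"yellow",       "below average"};
        if (ratio > 1.00) return new String[] {"yellow green", "quite good"};
        if (ratio > 0.75) return new String[] {"light green",  "good"};
        if (ratio > 0.50) return new String[] {"green",        "very good"};
        return new String[] {"dark green", "excellent"};
    }
}
```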
For un-impaired production workers this scale would of course be stretched too far; for impaired workers with a very high degree of performance variation between different processes, the scaling proved adequate.
Figure 50: Screenshot of the instruction (left) and the gamification component (right) with shadow brick.
Figure 50 shows how the worker's mean process speed is also represented by a "shadow brick" which allows checking at any time how the current work is turning out compared to the personal average. The shadow brick's speed can be adjusted to fit the user's emotional state, e.g. it can be lowered if the user is in a state of negative arousal or increased if the user is underchallenged. Thus it serves as a guide balancing challenge and performance. The shadow brick was inspired by racing games, where players can compete against their own best rounds or those of friends or famous drivers34.
The component was implemented in ActionScript 3 (AS3) and deployed as an Adobe Integrated Runtime (AIR) application. The communication with the Assistive Systems Experiment Designer (ASED) was realized over a datagram socket using User Datagram Protocol (UDP) messages. The following code shows the implementation of the time evaluation of a process or working step:
Equation 6: Process Time assessment in the gamification component.
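Since the original ActionScript 3 listing is not reproduced here, the following Java sketch merely illustrates the evaluation logic described above; class and method names are assumptions, not the original code:

```java
import java.util.List;

// Illustrative re-sketch in Java (the original was ActionScript 3): a
// completed process is rated by dividing its duration by the mean duration
// of the corresponding processes in the recent sequences.
class ProcessTimeAssessment {
    /** Returns the ratio used for color-coding, e.g. 0.8 = 20% faster than average. */
    static double assess(double currentDuration, List<Double> recentDurations) {
        if (recentDurations.isEmpty()) {
            return 1.0; // first sequence: MTM-based default times are used instead
        }
        double mean = recentDurations.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(currentDuration);
        return currentDuration / mean;
    }
}
```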
After a sequence is completed, the built-up brick rows disintegrate in an animation. Like the single processes, the sequence as a whole is compared to its predecessors. The result is a longer spoken feedback (e.g. "Congratulations – this was an excellent sequence!") accompanied by a corresponding visual – a smiley in the adequate color and "mood" (see Figure 51).
34 A challenge between the workers would not be a technical problem, but the high variance between impaired persons gives it little practical value. Like in a single-player game the user experience is essential – and multi-user games always add a sense of failure to underperforming players.
Figure 51: The gamification component’s visual feedback makes use of smileys.
The visual feedback already gives the user the important feeling of self-efficacy. However, in long-term operation gamification can also be used to actively influence and support the flow curve. To achieve this while giving room for the rapid performance changes common for impaired persons and while still maintaining the desired flow state as described in the model (see chap. 5), the interpreter only takes the last three assembly sequences into account. This number is an approximation based on the pre-study and potentially needs further data to be established as a reliable constant (probably the ideal number will also vary between users).
As described above, the durations of the corresponding processes of the last three sequences are compared to model the user's emotional state and approximate the current position on the flow curve. To illustrate this method, the following example shows three typical sequences (taken from the study data) with eight processes each:
Figure 52: Example: Process durations (in seconds) of one user in three sequences of eight processes each.
The total duration of all three sequences in this example is 5.8 minutes. In the second sequence (red) the user is faster than in the first (blue). However, this changes in the last two processes 7 and 8. In sequence 3 (green) the processes continue to become slower. In this example the shift on the flow curve occurs in process 7 of sequence 2, when the user starts to become slower.
The change can also be observed in the trends: while processes usually tend to be completed faster within a sequence (sequence 1, blue trend line), the trend lines become almost flat in sequences 2 and 3.
As described in the model (see chap. 5) the user's hovering between flow phases is natural and necessary to sustain flow. However, an intervention can be initiated if the interpreter detects that the phase of arousal or control lasts too long or the trend indicates a negative transition (e.g. from arousal to anxiety).
This can be achieved by adapting the production cycle and the speed of the shadow brick. As an example, an excited user can be calmed by slowing down the shadow brick. This leads to more "successful" assembly processes unmistakably mirrored by green bricks and positive feedback. Thus the user's self-confidence and motivation can be actively supported.
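A hypothetical sketch of such an intervention (the names and the 10% adjustment factor are assumptions, not the implemented values) could look like this:

```java
// Illustrative sketch of the intervention logic: if the mean process
// durations of the last three sequences keep rising (a trend towards
// anxiety), the shadow brick is slowed down to produce more "successful",
// green-coded processes.
class FlowInterpreter {
    static double adjustShadowSpeed(double[] meanDurationsLastThreeSequences,
                                    double currentShadowSpeed) {
        double[] d = meanDurationsLastThreeSequences;
        boolean slowingDown = d.length == 3 && d[1] > d[0] && d[2] > d[1];
        // Calm an aroused user by lowering the reference speed by 10%.
        return slowingDown ? currentShadowSpeed * 0.9 : currentShadowSpeed;
    }
}
```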
This procedure shows that the gamification component has not been designed as a tool to increase transparency for the management or even the user – motivation and a feeling of self-efficacy and mental stability have been given precedence over exact documentation. This preference and its ethical dimension are discussed in chapters 8.2 Ethical Implications: Towards Humane Work and 8.3.2 Ethical Perspectives.
7 STUDIES AND EVALUATION
This chapter is based on the following publications:
Korn, O., Funk, M., Abele, S., Hörz, T. & Schmidt, A. (2014). Context-aware Assistive Systems at the Workplace. Analyzing the Effects of Projection and Gamification. In PETRA '14 Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments. New York, NY, USA: ACM.
Korn, O., Schmidt, A., & Hörz, T. (2013). The Potentials of In-Situ-Projection for Augmented Workplaces in Production. A Study with Impaired Persons. In CHI '13 Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 979–984). New York, NY, USA: ACM. doi:10.1145/2468356.2468531
Korn, O., Schmidt, A., & Hörz, T. (2013). Augmented Manufacturing: A Study with Impaired Persons on Assistive Systems Using In-Situ Projection. In PETRA '13 Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 21:1–21:8). New York, NY, USA: ACM. doi:10.1145/2504335.2504356
The study analyzes the effects of two augmentations (in-situ projection and gamification) on the work of impaired persons in production environments.
7.1 Participants and Preparatory Study
The field study was conducted at the Beschützende Werkstätte Heilbronn (BWH), a German sheltered work organization. The persons working there suffer from various kinds of cognitive and motor impairments – ranging from stroke-related problems and epilepsies to Down syndrome. To ensure ethical compliance, we aligned the study design to the "PD-Net Approach to Supporting Ethics Compliance" (Langheinrich, Schmidt, Davies, & José, 2013) which was primarily designed for research involving public displays.
Although it was not possible to follow the proposed method to the letter and establish an independent ethical advisory board, the BWH is already split into a profit and a non-profit unit. The governing board of both is chaired by a priest. It is organized as a registered society and has incorporated the Corporate Governance Codex of the Diakonisches Werk, a huge Protestant social welfare association with over 450,000 employees and about 700,000 volunteers. All materials used in the study as well as the documented procedures have been acknowledged by the governing board.
While all persons at the BWH are able to work, their competence levels vary strongly according to the kind of impairment. Although (almost) all of them are able to eat on their own or express thoughts in simple words, some would not be able to count to ten while others could perform algorithmic operations like multiplying. A similar variation occurs in assembly skills. While the augmented context-aware assistive systems (CAAS) are designed to be "universal" and assist as many persons as possible, the competence level of many impaired persons at the BWH would not suffice to perform assembly tasks at all. So the first task was finding a population of test subjects suited for simple assembly tasks in a preparatory study.
To avoid a pre-assessment of all impaired persons, the participants were pre-selected
based on their wage-level number. This number is generated by the sheltered work
organization based on an extensive analysis of an impaired person’s skills. It includes
a physical and a cognitive rating as well as psychological factors like work motivation
and stress resistance. To find a suitable range to start with, we interviewed the work
instructors and broadened the recommended wage level range to ensure no potential
subjects were lost. A range between 55 and 125 resulted in 135 potential test subjects.
To uphold the established ethical standards of data gathering (Langheinrich et al., 2013; Runeson & Höst, 2008) the test subjects or their legal guardians were contacted by the sheltered work organization's board and asked to sign a written consent to participate in the experiment. This form guaranteed anonymity and explained the benefits arising from augmented or enhanced assistive systems in simple language. Due to this direct access and the beneficial aim of the study, the return quota was very high: altogether 100 of the 135 impaired persons contacted (74%) or their respective guardians consented to participate in the experiments.
The pre-study was conducted to match the complexity of the assembly task and the
cognitive skills of the impaired workers. The participants were asked to assemble
Lego objects of varying complexity reaching from a simple asymmetrical bridge to a
rather complex vehicle (see Figure 53).
Figure 53: Lego objects with various degrees of complexity in the pre-study.
Technically speaking, the pre-study's goal was finding the upper and the lower bound of the wage level range. The boundary conditions were defined as follows:
 lower bound: the inability to assemble an object consisting of four parts
 upper bound: the ability to assemble an object consisting of four parts from memory at the fourth repetition
The pre-study showed that the wage level range required to perform simple assembly work usually lies between 70 and 125. This requirement narrowed down the number of potential test subjects from 100 to 81. The resulting test population was divided into three groups or scenarios35:
(1) State-of-the-Art: no augmentation
(2) Projection: augmentation by in-situ projection of instructions
(3) Gamification: augmentation by gamification
The participants were distributed over the three groups; to sustain a comparable average wage-level index among the groups, the selection process was designed as follows:
 sort participants based on their wage-level number
 sequentially add a participant to the groups 1, 2 and 3
 if one group's mean wage-level index differs by more than 2 points, take the participant that is best suited to equalize the wage-level scores
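The following Java sketch illustrates the first two steps of this procedure; the rebalancing by more than 2 points was a judgment call during the study and is therefore only noted in a comment (all names are assumptions):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the group assignment: sort by wage level, then
// distribute the participants round-robin over the three scenario groups.
class GroupAssignment {
    static List<List<Integer>> assign(List<Integer> wageLevels) {
        List<Integer> sorted = new ArrayList<>(wageLevels);
        sorted.sort(Comparator.naturalOrder());
        List<List<Integer>> groups = new ArrayList<>();
        for (int g = 0; g < 3; g++) groups.add(new ArrayList<>());
        for (int i = 0; i < sorted.size(); i++) {
            groups.get(i % 3).add(sorted.get(i)); // round-robin over the 3 groups
        }
        // If the group means diverge by more than 2 points, the study swapped in
        // the participant best suited to equalize the scores (done manually).
        return groups;
    }

    static double mean(List<Integer> group) {
        return group.stream().mapToInt(Integer::intValue).average().orElse(0);
    }
}
```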
Figure 54 shows a histogram of the wage level distribution in the three groups. Under ideal conditions the bars would be of equal length in all classes; in practice the limited number of test subjects (60 out of 81) and the "re-randomization" in the course of the experiments due to illnesses and dropouts resulted in more varied distributions.
Figure 54: Histogram of the three test populations.
35 The terms SotA, Projection and Gamification written with a capital letter refer to the corresponding scenarios or test populations.
The mean wage level (Figure 55) for the SotA test population is x̄ = 100.7 with a standard deviation (SD) of 13.4, for the scenario Projection it is x̄ = 100.2 (SD = 19.0) and for the scenario Gamification it is x̄ = 101.3 (SD = 17.3) – so the variance was greater in the two groups working with the augmented system.
However, the differences between the three test populations altogether are minimal and not significant (p>0.978 in an ANOVA over all scenarios).
Figure 55: Test populations’ mean wage-level (left) and age (right) with SD.
With regard to age, the mean for SotA subjects is x̄ = 42.5 (SD = 10.8), with the Gamification subjects being slightly older with x̄ = 43.5 (SD = 10.8) and the Projection subjects being slightly younger with x̄ = 39.8 (SD = 11.5). Again the differences are very small and not significant (p>0.553 in an ANOVA over all scenarios).
Summing up, the analysis shows that the differences between the three test populations in the independent variables wage level and age are insignificant, so the observed effects can be attributed to the augmentations.
7.2 Procedure
Each experiment consisted of four phases:
1. initial interview and questionnaire part 1
2. short introduction to the system used in the specific scenario
3. assembly phase (eight sequences)
4. questionnaire part 2 and final interview
For phases 1, 3 and 4 the test persons' attitude towards the specific system used and the resulting work experience was measured with a questionnaire. During the assembly phase the process times were measured using the CAAS prototype and the ASED software (see chap. 6 Implementation). Assembly errors were documented on photos.
The procedure is described in more detail in the following sub-chapters. Qualitative findings from the interviews are described in the results section (see chap. 0).
7.2.1 Questionnaire Design
The questionnaire is an adapted version of the established system usability scale (SUS) (Lewis & Sauro, 2009). We also drew from the well-described AttrakDiff (Hassenzahl, Burmester, & Koller, 2008). Based on our experiences with impaired users we reduced the number of items on the Likert scale from seven to five.
The questionnaire consists of a generic part which is identical in all scenarios and a scenario-specific part. As a result, the SotA version of the questionnaire contains 19 questions, the Projection version contains 22 questions and the Gamification version contains 24 questions. Furthermore the questionnaire is divided into a pre- and a post-experiment section.
The generic questions aim to determine the physical and emotional state of the test subjects before the experiment. A second aim was determining if the proband is experienced with computers or computer games, since measuring performance in a computer-based system always implies the risk of measuring computer affinity and thus creating reliable but non-valid results. The following five generic questions were posed before the assembly phase:
1. How are you today?
2. Are you nervous?
3. Do you look forward to the experiment?
4. How many hours per week do you work at the computer?
5. How many hours per week do you play computer or console games?
The next 11 generic assertions were presented after the assembly phase:
6. The help device36 was easy.
7. I would use the help device again.
8. I would recommend the help device to my colleagues.
9. I need somebody's help to use the help device.
10. The help device was unnecessarily complicated.
11. The indicator for picks was easy.
12. I found it tiring to confirm each step with a button.
13. Working with the help device can be learned quickly.
14. I found the work with the help device exhausting.
15. I could use the help device proficiently.
16. I had to learn a lot before I could work with the help device.
The assertions in this list are deliberately redundant. Also, contradictory items are used (e.g. items 6 and 10 or items 15 and 16). The aim was to detect a tendency towards "yes", i.e. probands always answering in the affirmative. The order of the items in the questionnaires was different from the list presented here – the items were scrambled to make the redundancies and contradictions less obvious. The original questionnaires are provided in the supplement section (see chap. 9.2 Questionnaires in Detail).
Item 11 refers to the number projected in front of the box to pick from in all scenarios
which emulates an industry standard pick-by-light system (see chap. 3.4.2 Assistive
Systems Using Projection).
The following table shows the specific questions for each scenario posed after the
assembly phase.
36 We tried to use "simple language" whenever possible and avoided the term "assistive system". However, all questions were posed in a dialogue and could be rephrased if the test subject had difficulties understanding them.
Table 8: Scenario-specific questions posed after the assembly phase.

| SotA | Projection | Gamification |
| --- | --- | --- |
| I found the instruction on the monitor complicated. | I found the instruction left of me on the table complicated. | I found the instruction on the monitor complicated. |
| I found the instruction on the monitor easy. | I found the instruction left of me on the table easy. | I found the instruction on the monitor easy. |
| I found it tiring to always check the instruction on the monitor above me. | I liked having the instruction directly to my left on the table. | I found it tiring to always check the instruction on the monitor above me. |
| - | The instruction left of me on the table disturbed my work. | - |
| - | The model in the center of the workspace helped me. | - |
| - | The model in the center of the workspace bothered me. | - |
| - | - | I liked the game on the monitor. |
| - | - | The game on the monitor bothered me. |
| - | - | The comments on my work motivated me. |
| - | - | The comments on my work disturbed me. |
| - | - | I would like to always have the game during my work. |
7.2.2 Assembly Task
The test subjects were asked to assemble eight identical car undercarriages (no variations) using Lego bricks. Each of these eight assembly sequences consists of eight assembly processes (short: "steps"), so a perfect experiment consisted of 64 successful assembly steps.
During each step a corresponding instruction was shown (see Figure 56), either on the monitor or directly in the workspace. After completing a step the test subjects had to use the permanently projected navigation element (see chap. 6.4.1 Visual Output and Projection) to go to the next step by touching the green "resume" button or to go back to the previous step by touching the red "back" button.
Figure 56: Instructions in the scenarios Gamification (left) and Projection (right).
While the process of picking the right parts for the assembly process at hand was identical in all three scenarios (the boxes to pick from were marked by a number), the difference between the SotA and the Projection scenario was the way instructions are presented to test subjects: while the scenarios SotA and Gamification use a monitor, in the Projection scenario the work-relevant information is projected directly into the user's workspace. In addition to the instructions, a 1:1 model of the correct result of the assembly is projected directly into the center of the workspace; also the active step is marked by a green arrow (see Figure 56).
The instructions themselves remained identical in all three experiments: for each of
the eight assembly steps a product image before and after that specific assembly step
is shown. The instructions used are displayed in the table below.
Table 9: Instructions for the eight assembly steps (images of the instructions for processes 1-4 and processes 5-8).
The red steps (M1 and M4) were the ones with deliberately designed challenges like
asymmetries (see chap. 7.3.2 Quality measurement).
7.3 Apparatus
The physical features of the prototypical CAAS, i.e. the design of the assembly workplace and the integration of the sensors and the projector, have already been described (see chap. 6.2). In the scenario State-of-the-Art (SotA) and the scenario Gamification just the monitor is used. In the latter both the instruction and the gamification element were shown on the monitor. While it would have been technically possible to project them into the workspace and thus combine projection and gamification, this combined scenario would not have allowed analyzing the effects of the two augmentations separately. However, the navigation and the indicator for the picks were always projected to avoid discrepancies in the users' performance related to minor interface issues (see chap. 6.4.1 Visual Output and Projection).
7.3.1 Time Measurement
All three scenarios were documented using the ASED software (see chap. 6.4 Instruction and Performance Analysis). Each passage through a trigger area generated an entry into an XML file as illustrated by the example in Equation 7:
Equation 7: Log entry generated by ASED Runner.
During the field study several test subjects forgot to confirm processes or mixed up process steps, so log files were incomplete or out of sync. Some participants also undecidedly hovered their hands over the component boxes, generating dozens of entries within a few seconds, while others used the "next" and "previous" buttons excessively. As a result some log files contained several hundred entries where potentially (in a perfect experiment) 128 entries would suffice.
Since we had already encountered these problems in the pre-study, we addressed them in three ways:
 the software captured a video of the workspace
 the software allowed the observer to manually log process times in addition to the automatic logs
 as a software-independent fallback we used a camcorder to record each experiment
Although the video documentations allowed the manual supplementing of flawed logs, this was a last resort. To avoid checking all log files manually, the "Logfile Analyzer" was developed for the automatic parsing and cleanup of extensive XML logs. It was implemented in PHP and MySQL using the Zend Framework and jQuery.
The analyzer parses both the automatic and the manual log file using the extended parser class and stores the data in a parser object as associative arrays. Once the data is parsed it is shown in a list view with the automatic data on the left and the manual data on the right. Discrepancies are highlighted in red.
In this view functions are used to optimize the log files: e.g. 'del' (deletes a working step and adds its duration to the previous step), 'moveRight' (inserts an automatic log step into the manual log and recalculates the aggregation) or 'use' (uses the duration stored in the manual log instead of the automatic log duration). Figure 57 and Figure 58 show a logfile before and after optimization.
Figure 57: Logfile Analyzer list view before optimization.
Figure 58: Logfile Analyzer list view after optimization.
In this case the resolution required the following deductions: firstly, in step 28 the process declaration P2 follows a P3 process and precedes a P4 process; secondly, step 28 has a duration of 0.0 s. The obvious solution is removing step 28.
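As a minimal sketch of the 'del' operation under these assumptions (the actual analyzer was written in PHP; the Java types below are illustrative only):

```java
import java.util.List;

// Illustrative sketch of the 'del' optimization: a spurious step (such as
// the 0.0 s step 28 above) is removed and its duration is added to the
// preceding step so the aggregated times stay consistent.
class LogStep {
    String process;     // e.g. "P2"
    double durationSec; // duration of this step in seconds

    LogStep(String process, double durationSec) {
        this.process = process;
        this.durationSec = durationSec;
    }
}

class LogOptimizer {
    static void del(List<LogStep> steps, int index) {
        if (index <= 0 || index >= steps.size()) return;
        steps.get(index - 1).durationSec += steps.get(index).durationSec;
        steps.remove(index);
    }
}
```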
7.3.2 Quality measurement
The eight steps required for the assembly sequence used in the experiment vary in complexity. Based on our experience from the pre-study we deliberately included two major potential sources of error, highlighted red in Table 9: Instructions for the eight assembly steps:
 The first step requires abandoning the desire for symmetry because the first Lego cube is not aligned with the base plate.
 The fourth step requires attaching the 4x1 Lego plate at the correct side of the middle plate.
Unlike time measurement the analysis of errors had to be performed manually. Each
of the 480 assembled products or 3840 steps in the three experiments was analyzed
individually.
In this analysis the outcome was looked at favorably – so a mistake in step one (e.g. aligning the cube with the plate) that was neutralized in step two (assembling the cube as required in step one) was not counted as two but as zero mistakes. From a production management view such a change in sequence would be interpreted as two mistakes, even if the result was correct.
Our more favorable approach is due to the fact that even by interpreting videos with
facial expressions it is impossible to ascertain if a correct assembly is the result of
chance or the test subject’s deliberate correction of a previous mistake.
Figure 59: Undercarriage assembled correctly (left) and with 100% error rate (right).
When looking at the error rates it is important to consider that a 100% error rate (see Figure 59) is very rare. Although it did appear a few times, an assembly with an error rate above 80% requires a violation of several "common sense concepts" – e.g. wheels have to be assembled on top of the undercarriage or in a way that the vehicle would drive in circles or sideways.
7.4 Data
In this chapter we will portray the data "as it is" using only descriptive statistics (means). However, even this data is aggregated – for example we will look at sequence completion times rather than process completion times (there are eight processes in each sequence, i.e. 64 processes to analyze for each user). The complete data is provided in the supplement section (see chap. 9.3 Study Result Details).
In the ensuing sub-chapters we will look deeper and use inferential statistics like the Student's t-test and ANOVA (analysis of variance).
In the following, the subsequent data graphs are provided:
 SotA population
o Sequence completion time for all participants
o Error rate for all participants
 Projection population
o Sequence completion time for all participants
o Error rate for all participants
 Gamification population
o Sequence completion time for all participants
o Error rate for all participants
 Complete population
o Histogram of average assembly time
o Histogram of average error rate
Figure 60: Sequence completion time of the SotA population.
Figure 61: Error rate per sequence of the SotA population.
Figure 62: Sequence completion time of the Projection population.
Figure 63: Error rate per sequence of the Projection population.
Figure 64: Sequence completion time of the Gamification population.
Figure 65: Error rate per sequence of the Gamification population.
Figure 66: Histogram of average assembly time (eight sequences).
Figure 67: Histogram of average error rate (eight sequences).
Already by looking at the data as presented here, several observations can be made:
 In all scenarios there is a high variance between the test subjects.
 In all scenarios there is one subject completely divergent when it comes to time measurements (SotA: 9, Projection: 1, Gamification: 19).
 In all scenarios there is some kind of learning effect concerning speed: subsequent sequences tend to be completed faster.
 In all scenarios speed is roughly normally distributed among the subjects.
 The learning effect does not seem to extend to errors: they either remain roughly constant (SotA) or even go up (Projection and Gamification).
 In all scenarios errors are not normally distributed among the subjects.
 More errors occur in the augmented scenarios.
7.5 Experiment Results and Discussion
The following analysis is based on our previous work on the effectiveness of augmentations for workspaces (Korn, Funk, Abele, Hörz, & Schmidt, 2014; Korn, Schmidt, & Hörz, 2013).
7.5.1 Overall Results
While we looked at the whole test population in the analysis of the independent variables wage level and age (see chap. 7.1 Participants and Preparatory Study), the dependent variables in the experiments are process time and error rate. We used both analysis of variance (ANOVA) and Student's t-test to evaluate the results. Because of the high variance between individual participants, p-values below 0.1 (marginally significant) are already considered significant results.
Although the prototype of the CAAS used for this study could not provide product-related feedback on errors (see chapter 4.2.2 Total Quality and chapter 6.3.1 Technical Restrictions and Solution), the effects of the augmentations on quality are an important subject of the study. Mostly two p-values will be discussed because the difference between the state-of-the-art (SotA) scenario and one of the two augmented scenarios is relevant – i.e. the difference between SotA and Projection or between SotA and Gamification. The following table shows the results on a macro level and also lists the standard deviations (SD). The delta (∆) refers to the SotA values.

Table 10: Means and standard deviations of sequence durations and errors.

| Variable | SotA | Projection | Gamification |
| --- | --- | --- | --- |
| Mean production time | x̄ = 25.6 min | x̄ = 23.6 min (∆ = -7.8%) | x̄ = 22.4 min (∆ = -12.5%) |
| | SD = 9.0 min | SD = 10.4 min (∆ = +15.5%) | SD = 6.9 min (∆ = -23.3%) |
| Mean error rate | x̄ = 22.6% | x̄ = 29.1% (∆ = +28.7%) | x̄ = 33.1% (∆ = +46.5%) |
| | SD = 17.5% | SD = 27.3% (∆ = +56.0%) | SD = 22.1% (∆ = +26.3%) |
The mean duration is reduced by both augmentations, with Gamification having a greater impact than Projection. On the other hand the error rates rise as a result of the augmentations, so potentially a speed-accuracy trade-off has occurred. The high variance among the participants results in high standard deviations in all scenarios.
7.5.2 Analysis of Time
As Figure 68 shows, the differences in time needed for the assembly sequences were
comparatively small with large variations:
Figure 68: Mean sequence durations with SD of the test populations.
If all 40 users are taken into account, the h1-hypothesis that CAAS augmented by projection will make the users faster (with the h0-hypothesis assuming that there is no significant difference) cannot be confirmed. A t-test shows that the average time reduction of 7.8% is not enough to be statistically significant (p>0.259 one-sided), thus h0 is maintained.
The analogous h1-hypothesis for the augmentation by gamification also cannot be confirmed – the average time reduction of 12.5% is not enough to make the difference between the two scenarios statistically significant (p>0.110 one-sided). However, the comparatively low p-values indicate that research with larger or more homogeneous groups or an analysis of aggregated data will probably show that gamification does improve production speed.
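As a sketch of the kind of test used here (the thesis does not state which statistics tool was employed; Apache Commons Math and the method below are assumptions for illustration), a one-sided two-sample t-test can be obtained by halving the two-sided p-value when the means differ in the hypothesized direction:

```java
import org.apache.commons.math3.stat.inference.TTest;

// Illustrative sketch: Welch's two-sample t-test via Apache Commons Math.
// tTest(...) returns the two-sided p-value; halving it yields the one-sided
// p-value, valid only if the sample means differ in the expected direction.
class TimeComparison {
    static double oneSidedP(double[] sotaTimes, double[] augmentedTimes) {
        double twoSided = new TTest().tTest(sotaTimes, augmentedTimes);
        return twoSided / 2.0;
    }
}
```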
When looking at the graphs in Figure 69, which show how the group means developed over the eight production sequences, there clearly are relevant differences between the test populations:

| sequence completion time (s) | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SotA | 348 | 227 | 197 | 179 | 170 | 150 | 131 | 133 |
| Projection | 313 | 221 | 177 | 175 | 146 | 133 | 135 | 115 |
| Gamification | 290 | 202 | 169 | 152 | 137 | 137 | 133 | 125 |

Figure 69: Development of mean sequence completion times over eight sequences.
As expected the sequence times in all three test populations drop – very probably due to learning effects and the users' adjustment to the setup and the system used. In this aggregated perspective there are no significant differences between SotA and Projection: the h1-hypothesis that a CAAS augmented by projection will significantly reduce the mean sequence completion times is falsified (p>0.200 one-sided).
However, in the aggregated perspective the h1-hypothesis that CAAS augmented by gamification will significantly reduce the mean sequence completion times is confirmed (marginally significant with p<0.064 one-sided).
Since the processes within the eight sequences do not change (i.e. there are no variations) it is possible to compare the development of process times: the mean times of all processes of one specific assembly. In contrast to the horizontal perspective of the sequence durations, the process durations add a "vertical perspective".

| process completion time (s) | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SotA | 258 | 162 | 185 | 185 | 185 | 154 | 182 | 225 |
| Projection | 255 | 145 | 183 | 189 | 159 | 146 | 177 | 162 |
| Gamification | 223 | 151 | 162 | 161 | 163 | 153 | 173 | 158 |

Figure 70: Development of mean process completion times.
This form of data visualization shows that the lines representing the process times are mostly parallel (see Figure 70), so no process seemed to benefit especially.
An interesting observation in this vertical perspective is that the subjects in the SotA scenario needed significantly more time for the eighth process. The h0-hypothesis that the eighth process will be completed faster than the previous process can be rejected (p<0.008 one-sided when compared with Projection; p<0.005 one-sided when compared with Gamification).
While the low p-values indicate that this observation is no random effect, a reliable
explanation would require more data. We can only speculate that the stimulative effect
of both augmented scenarios countered a natural speed reduction when the product is
“almost finished”.
7.5.3 Analysis of Errors
With respect to the errors made during the eight assembly sequences, the differences between the three scenarios were much more obvious than with production speed (see Figure 71). Again there were strong variations.
Figure 71: Mean error rates with SD of the test populations.
When looking at the data of the 40 users, the h1-hypothesis is that CAAS augmented by projection which do not provide feedback on product-specific errors will increase the number of errors. The corresponding h0-hypothesis assumes that there is no significant difference. The results show that the h1-hypothesis cannot be confirmed and h0 is maintained: the mean increase in errors (6.5% absolute and 28.7% relative) is not enough to be statistically significant (p>0.186 one-sided). While this result is "good" for CAAS, the high variance shows that further research with larger or more homogeneous groups is required.
The parallel h1-hypothesis that CAAS augmented by gamification not providing feedback on product-specific errors will increase the error rate can be confirmed. In spite of the high variance the average increase in errors (10.5% absolute and 46.5% relative) is statistically significant (p<0.052 one-sided), so h0 is falsified. Obviously the users' motivational gain from gamification was transformed into speed. And since no feedback on quality was given, a speed-accuracy trade-off occurred.
The analysis of mean error rates follows the schema introduced with the analysis of
time: the horizontal perspective focusing on the errors made successively over eight
sequences is complemented by the vertical perspective focusing on the errors made in
each one of the eight processes.
| sequence error rates | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SotA | 24% | 24% | 21% | 23% | 23% | 23% | 22% | 20% |
| Projection | 39% | 28% | 24% | 24% | 31% | 31% | 29% | 27% |
| Gamification | 29% | 29% | 31% | 34% | 36% | 37% | 35% | 34% |

Figure 72: Development of mean sequence error rates.
The SotA line in Figure 72 shows that the error rate almost stays constant. This means
that there was no learning effect. Although this can partially be attributed to the lack
of quality-related feedback it is important to note that eight repeated assemblies of an
identical product did not result in decreased error rates.
In this aggregated perspective with reduced variance (which does not compare the error rates of all 40 users but the users' average error rates for the eight sequences) the differences between the scenarios are more accentuated. The h0-hypothesis assuming that there is no significant difference in mean sequence error rates is falsified for Projection (p<0.003 one-sided), so as expected from the previous findings this augmentation produces significantly more errors.
However, the development of the graph with a quick reduction of errors until the fourth sequence and a sudden rise in the fifth cannot be explained by quality-related versus speed-related feedback and demands a more thorough analysis (see chap. 7.5.4 Sub-Populations).
The h0-hypothesis is also (again) falsified for Gamification, although in the aggregated view the difference becomes much more evident (p<0.00001 one-sided). The upwards trend of the error line in Gamification can be explained by the implementation specifics: since the users only received feedback on the speed of production, fast assembly became their focus while quality was neglected. Thus the rising error line is very probably the result of a speed-accuracy trade-off induced by the positive feedback and potentially a flow state.
| process error rates | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SotA | 56% | 3% | 35% | 44% | 27% | 14% | 0% | 1% |
| Projection | 67% | 15% | 24% | 39% | 32% | 29% | 14% | 13% |
| Gamification | 56% | 29% | 39% | 51% | 34% | 46% | 7% | 5% |

Figure 73: Development of mean process error rates.
Figure 73 shows that all participants had problems at the start of a new sequence (process 1). While processes 2, 7 and 8 usually were acceptable (error rates below 15%), the development within the processes 3, 4, 5 and 6 is interesting. Over all scenarios the error trend consistently goes down from process 5 to 8, the graphs run in parallel. However, there is one exception in Gamification where process 6 even produces a substantial 12% more errors than process 5. This process is looked at in more detail:
| process 6 error rates | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SotA p6 | 20% | 5% | 25% | 5% | 20% | 10% | 15% | 15% |
| Projection p6 | 50% | 25% | 20% | 25% | 25% | 30% | 30% | 25% |
| Gamification p6 | 45% | 35% | 40% | 40% | 50% | 55% | 55% | 55% |

Figure 74: Development of mean error rates of process 6 in eight sequences.
The h0-hypothesis in this case is that process 6 in Projection and Gamification does not differ significantly from process 6 in SotA. While in Projection the difference to SotA is barely significant (p<0.005 two-sided), with Gamification the statistical significance reaches astronomical heights (p<0.000001 two-sided). However, it remains unclear why the sixth process in Gamification consistently differs from the seven other processes, which develop congruently to the other scenarios.
7.5.4 Sub-Populations
The development of the projection data with the quick reduction of errors until the fourth sequence and the sudden rise of errors in the fifth sequence demanded a more thorough analysis. The basic idea was to compare two sub-populations:
 the faster users completing the sequences faster than the mean time
 and the slower users requiring more than the mean time.
The analysis is based on previous work on the effectiveness of in-situ projection as an augmentation for workspaces (Korn et al., 2013).
Figure 75: Production times (left) and errors (right) in the complete population (left column) and the faster group (right column).
Figure 75 shows that in the Gamification scenario the faster users (left bar) did not reduce errors, so this scenario is not part of the analysis. In SotA and Projection there are twelve test subjects performing faster than average (referred to with '+').
The mean sequence in SotA+ took x̄ = 151 s (SD = 36.6 s). In Projection+ it took x̄ = 129 s (SD = 31.9 s): a time reduction of 22 seconds or 14.5%. The hypothesis h1 that projection increases the speed of above-average workers can be accepted (p<0.061 one-sided). Obviously the over-performers benefited more from the in-situ projection of instructions than the group as a whole.
The development of the product quality in the faster subgroup as illustrated in Figure
76 is even more interesting:
Figure 76: Temporal development of mean error rates in the subgroups.
The SotA+ group’s mean error rate is ‫ = ̅ݔ‬27.1% (SD = 20.5%). Thus the error rate of
the fast workers is 4.5% higher than the complete population’s average which indi­
cates that the speed gain partly is the result of more careless assembly: a speed-accu­
racy trade-off. Also there is no quality improvement in the SotA+ population as the
trend moves sideward.
The Projection+ group differs strongly: here the mean error rate is only x̄ = 14.7%
with a comparatively low standard deviation (SD = 15.5%). Given that the complete
Projection population’s mean error rate is 29.1%, the faster users made only about a
third of the mistakes of the slower users. Compared to the SotA+ population there is
a 45.8% error reduction (relative value based on the 27.1% error rate in SotA+ and
the absolute reduction of 12.4%). Thus the hypothesis h1 that projection decreases the
error rate of above-average workers can be accepted (p<0.053 one-sided). This is
especially important because these workers – like all the others in the experiments – did
not receive feedback on product-specific errors. Probably they made good use of the
1:1 model projected in the center of the workspace (see chap. 7.2.2 Assembly Task).
Finally, there is a clear trend towards making fewer errors in this group, so augmenting
a workplace with projection also seems to trigger learning and thus continually
improves quality.
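For clarity, the distinction between the absolute and the relative reduction used throughout this chapter can be reproduced directly from the figures reported above:

```python
# Worked example: absolute vs. relative error reduction (values from the text).
sota_plus, proj_plus = 0.271, 0.147        # mean error rates of the subgroups

absolute = sota_plus - proj_plus           # 0.124 -> an absolute 12.4%
relative = absolute / sota_plus            # 0.458 -> a relative 45.8%
print(f"absolute: {absolute:.1%}, relative: {relative:.1%}")
```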
7.6 Questionnaire Results and Discussion
The questionnaire was designed to collect quantitative data on how context-aware
assistive systems (CAAS) in production environments are experienced by their
users (see chap. 7.2.1 Questionnaire Design). Each group or scenario comprised
20 participants testing a specific version of the CAAS: SotA (no augmentation),
Projection and Gamification. To analyze the data we used analysis of variance
(ANOVA) and Student’s t-test.
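The analysis pipeline can be illustrated with a short sketch. The Likert ratings below are hypothetical stand-ins for the real questionnaire data, which is not reproduced here; only the procedure mirrors the analysis described above.

```python
# Minimal sketch of the analysis: one-way ANOVA across the three scenario
# groups, followed by a pairwise t-test (hypothetical Likert ratings, n = 20).
from scipy.stats import f_oneway, ttest_ind

sota  = [2, 2, 3, 1, 2, 3, 2, 2, 1, 3, 2, 2, 3, 2, 2, 1, 2, 3, 2, 2]
proj  = [2, 2, 2, 1, 2, 2, 3, 2, 2, 2, 1, 2, 2, 3, 2, 2, 2, 1, 2, 2]
gamif = [2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2]

f_stat, p_omnibus = f_oneway(sota, proj, gamif)   # omnibus variance analysis
t_stat, p_pair = ttest_ind(sota, proj)            # e.g. SotA vs. Projection
print(f"ANOVA p = {p_omnibus:.3f}, pairwise p = {p_pair:.3f}")
```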
7.6.1 Pre-Experiment Results
In the first questions the participants’ general condition and their experience with
computers were assessed.
Table 11: Questionnaire item 1: General condition
1. How are you today?
          SotA        Proj.       Gamif.
x̄ (SD)    2.2 (0.8)   2.0 (0.5)   2.1 (0.2)
1: very good, 2: good, 3: normal, 4: not good, 5: bad
Although there is some difference in variance, there was no significant difference in
the condition (p>0.47) before the participants started the experiment.
Table 12: Questionnaire item 2: Nervousness
2. Are you nervous?
          SotA        Proj.       Gamif.
x̄ (SD)    2.2 (1.1)   2.7 (0.7)   2.6 (0.6)
1: not at all, 2: very little, 3: normally, 4: distinctly, 5: very much
Although there is some difference in variance and the participants in Projection and
Gamification were more nervous than in SotA, there was no significant difference in
nervousness (p>0.206) before the participants started the experiment.
138
Studies and Evaluation: Questionnaire Results and Discussion
Table 13: Questionnaire item 3: Anticipation
3. Do you look forward to the experiment?
          SotA        Proj.       Gamif.
x̄ (SD)    3.7 (0.8)   3.4 (0.7)   3.5 (0.5)
1: not at all, 2: very little, 3: normally, 4: distinctly, 5: very much
There is little difference in variance; the participants are in positive anticipation of the
experiment. Of sixty participants only one selected “very little”. There also was no
significant difference in anticipation (p>0.328) before the participants started the
experiment.
Table 14: Questionnaire item 4: Computer experience
4. How many hours per week do you work at the computer?
          SotA        Proj.       Gamif.
x̄ (SD)    1.9 (1.1)   1.7 (0.9)   1.7 (1.0)
1: not at all, 2: < 2 hours, 3: 2-5 hours, 4: 5-10 hours, 5: > 10 hours
With little difference in variance, most participants have little to do with computers –
mostly they encounter them at educational gaming sessions with their psychological
advisors. There was no significant difference in computer experience (p>0.806).
Table 15: Questionnaire item 5: Experience with computer or console games
5. How many hours per week do you play computer or console games?
          SotA        Proj.       Gamif.
x̄ (SD)    1.6 (0.9)   1.3 (0.7)   1.8 (1.0)
1: not at all, 2: < 2 hours, 3: 2-5 hours, 4: 5-10 hours, 5: > 10 hours
Most participants rarely play at the computer or the console. If they play, they mostly
do so during educational gaming sessions with their psychological advisors. There
was no significant difference in gaming experience (p>0.206).
The five pre-experiment questions show that, with little variance, there are no
significant differences between the three test populations. This is important for the
validity of the scenario-specific questions, because major differences would have
implied non-randomized test populations.
7.6.2 Generic Post-Experiment Results
While we looked at variances concerning the whole test population in the pre-experiment
questions, for the post-experiment questions the difference between the state-of-the-art
(SotA) scenario and one of the two augmented scenarios is relevant – i.e. the
difference between SotA and Projection or between SotA and Gamification.
For this reason two p-values will be discussed after each results table. In several cases
the analysis of two questions will be combined: either because they are deliberately
contradictory in order to detect the tendency to affirmation (see chap. 7.2.1 Questionnaire
Design) or because they are thematically related.
Table 16: Questionnaire items 6 and 10: Difficulty
6. The help device was easy.
          SotA        Proj.       Gamif.
x̄ (SD)    1.9 (0.4)   2.4 (0.7)   1.9 (0.3)
10. The help device was unnecessarily complicated.
x̄ (SD)    4.3 (0.9)   4.0 (0.5)   4.2 (0.5)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
First, SotA and Projection are compared. Initial results comparing these scenarios have
already been discussed in previous work (Korn et al., 2013).
The manifest h1-hypothesis is that projection makes the instructions more accessible
and will thus be perceived as easier, while the h0-hypothesis assumes that there is
no significant difference. A t-test shows that h0 is maintained (p>0.027 two-sided).
Also h1 has to be revised: although the difference is not significant, there is evidence
that CAAS augmented by projection are perceived as “less easy” than both SotA and
Gamification (∆x̄ = 0.5). The antithetic assertion (item 10) is consistently rejected. Again
there is no significant difference (p>0.282 two-sided) and the delta to SotA is reduced
(∆x̄ = 0.2). However, this relativization mainly seems to be due to the impaired
persons’ reluctance to “speak badly” about things (see chap. 7.7 Qualitative Findings).
With the gamification scenario the result is clear: the h1-hypothesis that gamification
makes the device easier is rejected (p=0.500 one-sided in item 6 and p>0.334 one-sided
in item 10), so the h0-hypothesis stays intact: there is no significant difference
in difficulty. This is nevertheless an important result: since gamification potentially
increases the cognitive workload, it is an unexpected finding that CAAS augmented by
gamification are not perceived as more difficult than the SotA systems.
Table 17: Questionnaire items 7 and 8: Acceptance
7. I would use the help device again.
          SotA        Proj.       Gamif.
x̄ (SD)    1.9 (0.7)   2.0 (0.6)   2.3 (0.7)
8. I would recommend the help device to my colleagues.
x̄ (SD)    1.8 (0.5)   2.4 (0.7)   2.0 (0.5)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
These questions verify whether the augmentation is accepted among the impaired users.
Asking whether a system would be recommended to colleagues is a good indirect way of
checking the validity of the answer to the direct question, because a good solution
would usually be recommendable.
The results show that the SotA system is preferred. Since it diverges least from the
regular workplace, this indicates that impaired persons value constancy. However,
both augmentations were rated only slightly lower, so the h1-hypothesis that projection
increases acceptance must be rejected (p>0.322 one-sided) and the h0-hypothesis
that projection has no significant influence on acceptance is maintained.
Given how strongly both augmentations diverge from the familiar SotA system, this is a
positive finding: innovation in the workplace is accepted if the benefits are obvious.
Gamification does not increase acceptance, but neither can h0 (there is no significant
effect) be upheld without reservation – indeed there is a marginally significant
indication that gamification reduces acceptance (p<0.092 two-sided).
In the recommendation question there is a stronger tendency not to recommend the
projection system (p<0.107 two-sided), whereas the rejection of gamification is much
weaker (p>0.364 two-sided), so the weak tendency against the gamification
augmentation is relativized.
Table 18: Questionnaire items 9 and 15: Usability: Ease of Handling
9. I need somebody’s help to use the help device.
          SotA        Proj.       Gamif.
x̄ (SD)    3.6 (1.1)   3.6 (0.8)   3.7 (0.9)
15. I could use the help device proficiently.
x̄ (SD)    2.0 (0.5)   2.2 (0.4)   2.1 (0.6)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
CAAS are meant to improve the work experience of their users and to reduce the
workload of supervisors. Thus their usability and especially their perceived ease of
handling is essential for their success. Again two contradictory questions were used to
reveal affirmative tendencies.
Most participants clearly felt that they would not need help to use the assistive system.
The differences regarding this issue are not significant at all (p>0.871 two-sided in
the comparison between SotA and Projection and p>0.877 two-sided in the comparison
between SotA and Gamification). As the variance shows, rejecting help (item 9)
was an additional psychological barrier for some of the impaired participants, while
expressing competence was not. So with very little variance the proficient use was
confirmed by all participants. Differences between the scenarios are minimal and not
significant (p>0.260 two-sided in the comparison between SotA and Projection, p>0.537
two-sided in the comparison between SotA and Gamification).
Table 19: Questionnaire items 13 and 16: Usability: Learning
13. Working with the help device can be learned quickly.
          SotA        Proj.       Gamif.
x̄ (SD)    1.8 (0.6)   2.2 (0.6)   2.4 (0.6)
16. I had to learn a lot before I could work with the help device.
x̄ (SD)    3.6 (0.9)   3.6 (0.9)   3.4 (0.8)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
We already pointed out the importance of a perceived ease of use in the items
regarding ease of handling. Since this subject is central to the acceptance of CAAS by the
users and their supervisors, two additional contradictory questions directly focus on
learning.
Similar to the questions focusing on acceptance (see Table 17: Questionnaire items 7
and 8: Acceptance), the impaired users are aware of the status quo and believe that the
least divergent system can be learned most quickly. Accordingly, the mean results of
item 7 (“I would use the help device again”) and item 13 are very close in all three
scenarios.
Both the CAAS augmented by projection (p<0.032 two-sided) and the system
augmented by gamification (p<0.001 two-sided) are rated significantly lower when
compared to the SotA system. So in this case the h0-hypothesis claiming that
augmented CAAS have no effect on learning is falsified.
The results on the contradictory questions differ. While most users confirm their basic
notion and tend to answer that they did not have “to learn a lot” before they could use the
CAAS, there is no significant difference in the degree of denial. The difference
between SotA and Projection is insignificant (p>0.504 two-sided) and so is the
difference between SotA and Gamification (p>0.595 two-sided).
The almost identical values indicate that this question was answered automatically –
the “confession” that their own mental capacities were at their limits (“I had to learn a
lot…”) seems to have been much harder for the impaired persons than evaluating
the positive framing of the question.
Table 20: Questionnaire items 11 and 12: Generic User Interface
11. The indicator for picks was easy.
          SotA        Proj.       Gamif.
x̄ (SD)    1.9 (0.7)   1.8 (0.4)   2.0 (0.0)
12. I found it tiring to confirm each step with a button.
x̄ (SD)    3.5 (1.2)   3.2 (1.0)   3.4 (0.9)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
As described in the implementation (see chap. 6.4.1 Visual Output and Projection),
two elements were projected in all three scenarios: the “next” and “previous”
navigation and the indicators for picks. While there is no technical difference in how these
“generic user interfaces” were projected and interacted with, there might be an
influence from the other interfaces – e.g. in Projection additional elements are projected
into the workspace.
However, in spite of the additional projections in the work area, there is no significant
difference in the perception of the indicator between SotA and Projection (p>0.432
two-sided) or between SotA and Gamification (p>0.540 two-sided). The pick
indicator was considered “easy” by the complete test population and does not have
to be considered in the further analysis.
A look at the second permanently projected element – the step navigation – shows a
similar picture: there is neither a significant difference in the perception of the
navigation between SotA and Projection (p>0.477 two-sided) nor between SotA and
Gamification (p>0.886 two-sided). The tendency to “neutral”, in this case a mild
rejection of the element, shows that the need to confirm single work steps was indeed a
minor disturbance – but since the whole test population faced it, it will not be
considered further. We can conclude that both generic user interface elements did not have
a significant impact on the perception of CAAS.
Table 21: Questionnaire item 14: Self-Perception and Strain
14. I found the work with the help device exhausting.
          SotA        Proj.       Gamif.
x̄ (SD)    4.0 (0.7)   4.0 (0.6)   3.8 (0.8)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
This important question was to check whether the augmented systems increase the
perceived strain. The almost identical values in SotA and Projection show that this does
not apply in their case (p>0.814 two-sided).
The mean strain in the gamification scenario is slightly lower – nevertheless the
difference is not great enough to be statistically significant. The h1-hypothesis that
gamification decreases strain must be rejected (p>0.201 one-sided), so the h0-hypothesis
that gamification has no significant influence on strain is maintained.
7.6.3 Specific Post-Experiment Results
While the 11 items discussed in the last chapter were identical across all scenarios, the
following questions differed between the scenarios. Still, the first three questions only
differ with respect to the method used to display the instructions (monitor versus
projection) and thus are comparable, whereas the subsequent questions (three in the
projection scenario and five in the gamification scenario) were completely scenario-specific.
Table 22: Questionnaire items 17, 18 and 19: Instructions
17. I found the instruction on the monitor [Projection: left of me on the table] complicated.
          SotA        Proj.       Gamif.
x̄ (SD)    4.0 (1.0)   3.8 (0.7)   3.8 (0.6)
18. I found the instruction on the monitor [Projection: left of me on the table] easy.
x̄ (SD)    1.8 (0.4)   2.2 (0.4)   2.1 (0.4)
19. I found it tiring to always check the instruction on the monitor above me.
[Projection: I liked having the instruction directly to my left on the table.]
x̄ (SD)    3.9 (0.9)   2.0 (0.3)   3.5 (0.8)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
Since both the instructions and the task were identical in all three scenarios,
differences in the perception of the instructions are largely attributable to the method of
display. However, in Projection an additional 1:1 model was projected into the center
of the workspace (see below), so the instructions in this scenario were redundant.
Regarding the perception of the instructions’ complexity there are no significant
differences between the scenarios (SotA/Projection: p>0.471 two-sided;
SotA/Gamification: p>0.571 two-sided).
The control question about the instructions being easy (item 18) shows that SotA was
considered significantly easier than Projection (p<0.0072 two-sided), although
there is no significant difference between SotA and Gamification (p>0.033). At first
these results are surprising. However, we already found that negative utterances are
avoided by the test population (see Table 16: Questionnaire items 6 and 10: Difficulty:
“The help device was easy” and “The help device was unnecessarily complicated”).
This explains why there are no significant differences in item 17. When feeling “on
the safe side” as in item 18, the answers show that the display of instructions in
Projection is considered “less easy”. Considering the strong evidence that the subjects
value constancy (see Table 17: Questionnaire items 7 and 8: Acceptance, and Table 19:
Questionnaire items 13 and 16: Usability: Learning), the new way of displaying the
instructions probably made them “less easy” or rather: harder to learn. However, the
time measurements and the subsequent questions show that most subjects were well
capable of adapting to projected instructions.
In item 19 the advantages of projection fully unfold. We supposed that frequently
having to look up to the monitor with the instructions and losing the cognitive focus
(SotA and Gamification) would bother the workers. However, they seem to be used
to this annoyance from paper-based instructions and tend to “not find it tiring”. As
can be expected with largely identical systems, the differences between SotA and
Gamification are not significant (p>0.201 two-sided), although there is a tendency to
perceive gamification as more tiring (∆x̄ = 0.4). This is attributable to the animated
gamification component, which demands additional attention and cognitive focus each
time the participant looks at the instructions.
With very little variation the participants agreed that they liked having the instructions
projected directly onto the table. At first this seems contradictory to the perception of
SotA as significantly easier than Projection (see above: item 18). However, it has been
shown that the participants value constancy but acknowledge improvements. So while
they had to learn more, they almost unanimously confirmed the advantages of
projection they experienced during the assembly phase.
Table 23: Questionnaire item 20 (Projection): Instruction
20. The instruction left of me on the table disturbed my work.
          Proj.
x̄ (SD)    4.1 (0.8)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
It was an important aim of the survey to be certain about the perceived benefits of
projection. For this reason a question contradictory to item 19 (“I liked having the
instruction directly to my left on the table”) was integrated. The workers strongly rejected
this assertion: in the complete survey, apart from one occurrence of 4.3 and two
occurrences of 4.2, the value 4.1 (three occurrences) marks the strongest negation.
This result supports the finding from item 19: projected instructions are considered
very helpful.
Table 24: Questionnaire items 21 and 22 (Projection): Additional Product Model
21. The model in the center of the workspace helped me.
          Proj.
x̄ (SD)    2.0 (0.2)
22. The model in the center of the workspace bothered me.
x̄ (SD)    4.1 (0.4)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
With the second-lowest variance in the survey, the subjects agree that the additional
1:1 model of the current state of the product component projected into the center of the
workspace was helpful (see chap. 7.2.2 Assembly Task).
When asked the opposing question, they responded accordingly and rejected the
assertion. These findings strongly support the finding of the previous projection-specific
items 17 to 20: projection – in spite of the workers’ belief that the system is more
difficult to learn – is highly beneficial for impaired workers.
Table 25: Questionnaire items 20, 21 and 24 (Gamification): Game
20. I liked the game on the monitor.
          Gamif.
x̄ (SD)    1.9 (0.4)
21. The game on the monitor bothered me.
x̄ (SD)    4.2 (0.5)
24. I would like to always have the game at work.
x̄ (SD)    2.4 (0.9)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
As in several other cases two oppositional items are used. The results are consistent:
most participants agree that they liked the game and accordingly reject the assertion
that it bothered them. A mean approval of 1.9 marks the second-highest agreement
within the post-experiment part of the questionnaire (there are three occurrences of
1.8 and four occurrences of 1.9). Thus the approval level of Gamification is similar to
the approval of Projection.
In the third question (item 24) we wanted the test subjects to step back from the
experience in the experiment and imagine a workplace permanently augmented by
gamification. The variance is greater with that question – and although the result is still
positive, it is significantly different from the acceptance of the game itself in item 20
(p<0.041 two-sided). Here the tendency to look favorably at innovation while retaining
mild skepticism again becomes obvious.
Table 26: Questionnaire items 22 and 23 (Gamification): Comments
22. The comments on my work motivated me.
          Gamif.
x̄ (SD)    1.9 (0.4)
23. The comments on my work disturbed me.
x̄ (SD)    4.1 (0.7)
1: exactly right, 2: right, 3: neutral, 4: not right, 5: not right at all
Comments combine two elements (see chap. 6.5 Gamification Component):

•  a visual element, starting with the dissolving of the built-up brick rows,
   followed by a smiley indicating the performance after each sequence,
•  audio feedback, which is given briefly after each process and in an extended
   form after the completion of a sequence.

The acceptance of the comments resembles the acceptance of the game itself: the same
high approval rate with the same low variance. Also the negative framing of the
question is consistently negated. Thus the approval rate of comments in Gamification is
similar to the approval of Projection.
7.7 Qualitative Findings
In an experiment with 60 impaired persons, behavior can be measured extensively –
but still some experiences are hard to describe in numbers. A first qualitative finding
was that all participants were very co-operative. Not a single participant asked for
money – although they all gladly accepted food and sweets prepared on a trolley table
at the room’s entrance. While some were afraid of the new situation at the beginning
and others were shy in the personal communication, everybody who was able to use
the system left the room in a positive mood – and this applies to all three test scenarios.
This positive disposition may partly be attributable to the fact that “they made it” –
but for the system designers it was important to see that a novel approach is not flatly
rejected or results in a negative mood swing.
As described in the design of the questionnaire (see chap. 7.2.1) we deliberately added
contradictory assertions (e.g. item 6: “The help device was easy” and item 10: “The
help device was unnecessarily complicated”) to detect affirmative tendencies. We
found that there was no clear affirmative tendency – however, we experienced a
“polite” tendency, i.e. the impaired persons’ reluctance to “speak badly” about things:
while advantages are clearly pointed out, emphasizing disadvantages is avoided.
This becomes very clear with the questions regarding learning (Table 19: Questionnaire
items 13 and 16: Usability: Learning): the positive framing of the question (“…can be
learned quickly”) results in significant and highly significant findings, whereas the
negative framing (“I had to learn a lot…”) leads to an almost identical pattern of rejection.
In the discussions the participants were also reluctant to talk about their own
shortcomings. Although the participants are clearly impaired, in this respect they react
just like everybody else: they want to talk about good things and success rather than
problems and failures. Hierarchy and status are important: manufacturing parts for
Audi is perceived as “better” than manufacturing parts for an unknown company.
Often a proud impaired worker would say something like “Me – Audi”, meaning that he
had a top job within the sheltered work organization.
The projection was very well appreciated by the subjects. It seemed to be natural,
whereas having the instructions on a monitor looked antiquated in direct comparison.
Also the problem of shadows was smaller than we anticipated since the participants
soon learned to avoid them. The small model in the center of the workspace was an
essential improvement. Several participants refrained completely from looking at the
full instructions projected to their left and instead simply focused on mirroring the 1:1
model.
The participants seemed to react more to the gamification component’s auditory
feedback than to the visual feedback: the Tetris-like production game on the monitor was
rarely looked at during the work processes.
However, the evaluation at the end of each sequence – the smiley and the longer
spoken feedback – was explicitly looked at and commented on. The smileys after each
sequence were obviously noticed because the cognitive focus had already shifted from the
micro to the macro level when the assembled product was placed behind the boxes.
Figure 77: User checking the instruction and the visual gamification element.
Still the participants really seemed to value and enjoy the feedback. Many started
talking back, especially when the feedback implied room for improvement, e.g. the
feedback “you did that better before” frequently resulted in denials. Obviously the
idea of developing CAAS as companions would be highly appreciated by the impaired
users.
While the current gamification setup drew the focus away from the workspace (see
Figure 77) future approaches need to bring the visual gamification elements closer to
the workspace as the center of the user’s attention. The combination of projection and
gamification can easily achieve that (see chap. 8.3.1 Technical Perspectives).
7.8 Discussion
In this discussion we summarize the most important quantitative and qualitative findings
and analyze how they are related.
7.8.1 Similarities between the Scenarios
When looking at the survey results at a meta-level, it is striking how many central
HCI issues in the three scenarios were seen as equivalent by the impaired production
workers: there are no significant differences between the SotA and the augmented
scenarios in overall acceptance or in handling. Most participants state that they can
use these systems proficiently and without additional help. This is supported by the
qualitative findings: the impaired persons accepted the systems as they were. With
respect to usability there is a clear finding that neither Projection nor Gamification is
perceived as significantly more difficult.
One could argue that acceptance is a function of the users’ speed and error rate, so
low error rates (e) and low sequence times (t) imply high acceptance. Indeed there
is a correlation, but it is not strong: SotA rt=0.18 and re=0.20, Gamification rt=0.26
and re=0.13, and Projection rt=0.03 and re=0.35 (example values for item 6: “The help
device was easy”). However, it is interesting that the strongest correlation is linked to
errors in Projection (see below).
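This correlation check can be sketched as follows. The per-user values below are hypothetical; only the procedure (Pearson's r between a performance measure and a Likert rating) mirrors the analysis described above.

```python
# Sketch of the acceptance/performance correlation with hypothetical data.
from scipy.stats import pearsonr

times   = [130, 150, 170, 145, 160, 120, 155, 140, 165, 135]           # seconds
errors  = [0.10, 0.30, 0.40, 0.25, 0.35, 0.05, 0.30, 0.20, 0.45, 0.15]
ratings = [2, 2, 3, 2, 2, 1, 3, 2, 3, 2]    # item 6: "The help device was easy"

r_t, _ = pearsonr(times, ratings)    # corresponds to rt in the text
r_e, _ = pearsonr(errors, ratings)   # corresponds to re in the text
print(f"rt = {r_t:.2f}, re = {r_e:.2f}")
```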
The hypothesis that gamification decreases strain could not be statistically confirmed.
Although there is a recognizable tendency in that direction and the gamification
scenario is rated the least exhausting, the differences are not accentuated enough to be
significant. Nevertheless both the adapted feedback and the production game are rated
with the second-highest agreement level and show the lowest standard deviation
within the post-experiment questions (x̄ = 1.9, SD = 0.4). On the qualitative level the
adapted visual and auditory feedback resulted in the CAAS being perceived as a
companion – a pleasant finding that shows how CAAS can be integrated to also generate
ethical benefits.
When comparing the individual users, there was no significant effect of the
augmentations on production time. While there are clear trends towards shorter sequence
durations, the high variance between the users makes the effects of augmentation by
projection (mean time reduction of 7.8%, p>0.259 one-sided) and gamification (mean
time reduction of 12.5%, p>0.110 one-sided) statistically insignificant.
With regard to the error rate, CAAS augmented by projection which do not provide
feedback on product-specific errors increase the error rate by an absolute 6.1% and a
relative 28.7%, which again is not statistically significant (p>0.186 one-sided).
To summarize, the overall findings are positive: the impaired production workers are open
to innovation and look at new approaches without prejudice – and obviously the
augmentations in the CAAS prototype worked well enough to be acceptable. In fact,
especially projection already seemed so well integrated and “natural” that having the
instructions on a monitor looked antiquated in direct comparison.
7.8.2 Differences between the Scenarios
Generally, the augmentations’ comparatively low quantitative impact on speed and
partly also on errors was unexpected. Methodically this is due to the high variance
between individual impaired users. To balance this effect, we also looked at how the
groups’ means developed and checked whether there are deviating sub-groups.
When it comes to error rates the results are more pointed: CAAS augmented by
gamification that do not provide product-specific feedback increase the error rate by an
absolute 10.5% and a relative 46.5%, which is significant in spite of the high user
variance (p<0.052 one-sided).
All the results become more pointed when focusing on the temporal development of mean
production speed and mean error rate over the eight sequences instead of on the individual
users. While even in this aggregated perspective CAAS with projection do not
significantly reduce the mean sequence completion times (p>0.200 one-sided), CAAS
augmented by gamification do (p<0.064 one-sided). When analyzing
the development of mean sequence error rates (see Figure 72), there clearly are
significantly more errors with both Projection (p<0.003 one-sided) and Gamification
(p<0.00001 one-sided).
The Projection data demanded a more thorough analysis. While projected instructions
were generally looked at favorably, they were also perceived as significantly “less
easy” than instructions on the monitor. While the additional 1:1 model was a highly
appreciated augmentation of the production process, this deviation from the known
standard noticeably increased the perceived overall complexity.
This finding contradicts the intuitive assumption that the projection of information
directly into the workspace will be perceived as an improvement. However, the high
correlation between error rate and perceived system complexity mentioned above
shows that projection had a strong impact on the population. And indeed we found
that projection has a catalytic effect: it significantly increases the speed of the faster
workers (p<0.061 one-sided) and even significantly reduces their error rate (by 45.8%
compared to the faster SotA workers, p<0.053 one-sided). At the same time the slower
workers perform worse with CAAS augmented by projection than with the SotA
system. This aptitude for the successful use of CAAS with projection is not just a function
of the users’ wage level, the performance indicator index used by sheltered work
organizations (p>0.147 one-sided) – it relates to a different feature.
Compared to the SotA system, both augmented CAAS were seen as significantly more
difficult to learn. So while the augmented systems are perceived as easy to use once
they are mastered (as there are no significant differences in overall acceptance), the
perceived learning curve is steeper with Projection and even steeper with Gamification,
where the cognitive focus is frequently drawn to the gamification element. However,
we observed a clear trend towards making fewer errors in the faster subgroup
(Figure 76), so augmenting workplaces with projection also seems to trigger learning.
To summarize, CAAS are well accepted and appreciated by the users. Projection makes
monitors look old-fashioned, and the gamification of work is both appreciated and
enjoyed. While these augmentations appear to speed up users (which can be shown in
the aggregated view on the development of sequence time), in their current form they
also lead to increased error rates. The only exception are the faster users of the projection
augmentation, who perform significantly better in all areas – probably because they
are also faster learners.
However, for CAAS to be successful in sheltered work organizations and in production
companies, they will need to be able to detect product-related errors. Currently
the augmentations draw cognitive processing power away from the assembly, so that
the speed gains can be attributed to a speed-accuracy trade-off. This effect has to be
counter-balanced by a real-time analysis of the product state (see chap. 8.3.1
Technical Perspectives).
8 CONCLUSION
Context-aware assistive systems (CAAS) will change the way we work. Just as route
guidance systems have changed the way we drive in unknown areas (and the amount
of time we spend in preparation), CAAS will permanently change work. In the case
of manual production, errors will be addressed “in the making” and persons
with cognitive or motoric impairments will be able to remain in active production jobs
longer. At the same time CAAS allow detecting changes in performance and adjusting the
level of challenge and the feedback. Ideally this results in a worker who remains in
the flow channel, i.e. an area where skill and challenge converge and a high level of
concentration is accompanied by a feeling of satisfaction and accomplishment (see
chap. 2.4.3 Flow).
However, research on CAAS in production environments is just beginning. Nevertheless,
the pioneering work presented here sheds light on several important aspects
introduced in the research questions (see chap. 1.3).
8.1 Summary of Research Contributions
In the following, important results regarding the research questions (RQ) are
summarized. For a more detailed discussion the relevant chapters are indicated.
Table 27: RQ 1: Requirements of CAAS
No.    Research Question (RQ)                                          Chapter
RQ1    Which requirements should CAAS in production                    4
       environments address?
Although common standards provide a proven and tested framework for HCI, the use
in production with impaired persons results in several constraints and adaptations
regarding the focus of interaction, the demand for total quality and the ideal of universal
design.
From the starting requirement of implicit interaction, the requirements of automatic
detection of movements as well as the detection of the current work state and speed
were derived. The easy-to-use motion recognition systems used by CAAS address
these requirements and allow an unprecedented real-time analysis of work actions.
While the requirement to move work-relevant information closer to the worker could
be addressed by applying state-of-the-art projection technology, the requirement to
detect errors (resulting from the demand for total quality in production) could not be
addressed in the described CAAS prototype – it presupposes either finger or object
recognition and remains an area for future research.
The requirement of adaptation to the user’s competence level results from general HCI
standards and the paradigms of universal design and universal access. While the
adaptation can be realized with the motion data alone, it would benefit from additional
sensors recognizing excitement (e.g. facial emotion detection, heart rate detection).
The motion data also allows new approaches like the integration of motivating
elements into the comparatively monotonous work in manual production. Gamification
can be used to keep the user’s awareness up and thus to prevent errors. Furthermore,
motivation and fun during work are ethically desirable. While this requirement could
be addressed, it is strongly linked to adaptation. Thus it, too, would strongly benefit
from additional sensors recognizing emotions.
The final requirement is an ethical necessity: the protection of the user’s personal data.
Since CAAS “come very close” to the user, analyzing movements and potentially
even emotions, these data have to remain strictly separated from the regular
work-related data. Meeting this requirement is a future task during the large-scale
integration of CAAS in industry contexts.
Table 28: RQ 2: Generic model for CAAS
No.    Research Question (RQ)                                          Chapter
RQ2    How can the requirements be prescinded                          5
       to a generic model for CAAS?
The CAAS model mainly draws from three concepts: the established HAAT model
for assistive technology, the framework for an adaptive game system and the flow
concept. The HAAT model is adapted so that the human-technology interface can be
realized by projection, the environmental interface can be realized by natural interaction
(NI) based on motion recognition, and the activity output can be enriched by
gamification. The adaptive game system is re-interpreted so that NI can be used as an
instrument that allows game-like interaction in non-gaming contexts. The concept of
flow is expanded by “flow curves”, which imply that an activity has to be designed in
phases to be permanently motivating.
The model aims to show the parallels in processing information. Both the human and
the CAAS share an environmental interface consisting of sensors on the input side
and various actors on the output side. The CAAS input receives data which is prepared
for the interpreter to create a fitting model of the current state of the user, e.g. a
detected sequence of errors results in a lower mental score, a detected stress symptom
shifts the flow curve towards arousal. The modeled user state eventually results in an
adjustment of the operational mode, e.g. the speed of production.
The behavior after an adaptation will indicate whether the human state was modeled
correctly, so the iterative interpretation of behavior changes can be used to correct errors
in modeling. The CAAS generator adapts the interventions: the gamification
component, the instructions and the feedback. The adapted interventions are distributed over
various output channels.
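This interpret-adapt cycle can be summarized in a schematic sketch. All names, scores and thresholds below are hypothetical; they only illustrate the flow of data from interpreted events to adapted interventions, not the thesis implementation.

```python
# Schematic sketch of the CAAS interpret-adapt cycle (hypothetical values).
from dataclasses import dataclass

@dataclass
class UserState:
    mental_score: float   # lowered by detected sequences of errors
    flow_shift: float     # positive values shift the flow curve towards arousal

def interpret(errors_in_row: int, stress_detected: bool, state: UserState) -> UserState:
    """Interpreter: update the modeled user state from sensor-derived events."""
    if errors_in_row >= 2:
        state.mental_score -= 0.1 * errors_in_row
    if stress_detected:
        state.flow_shift += 0.2
    return state

def generate(state: UserState) -> dict:
    """Generator: derive adapted interventions from the modeled user state."""
    return {
        "production_speed": 0.9 if state.flow_shift > 0.5 else 1.0,  # calm an aroused user
        "feedback": "encouraging" if state.mental_score < 0.5 else "neutral",
    }

state = interpret(3, True, UserState(mental_score=0.8, flow_shift=0.4))
print(generate(state))   # distributed over the various output channels
```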
Table 29: RQ 3: Implementation of motion recognition
No.    Research Question (RQ)                                          Chapter
RQ3    How can motion recognition be implemented in CAAS?              6.3, 6.4
Motion recognition allows implicit interaction. An ideal CAAS as described in the
model recognizes intricate finger movements. However, at the time of the implementation
none of the low-priced motion recognition systems provided the spatial and
temporal resolution required for this level of detail. Thus errors related to picking
components could be detected, while product-related errors like a side-inverted
assembly remained undetectable. A further technical restriction was the impossibility to
use skeletal joint detection, because the corresponding middleware requires the sensors
to scan horizontal rather than vertical areas.
Instead of using the joints, the CAAS prototype analyzes interconnected areas by
using the changes of z-values (adjoined-spheres approach). These 3D trigger areas can
be referenced, and thus the system’s sensitivity can be adjusted to detect movements
in areas already partially filled. This approach is generic and can be implemented in
other scenarios with similar research or implementation interests.
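A minimal sketch of the underlying idea follows, under stated assumptions: the depth sensor delivers frames as 2D arrays of z-values in millimetres, a trigger area is a rectangular region, and the sensitivity threshold is adjustable. All names and values are hypothetical, not taken from the prototype.

```python
# Sketch of a z-value trigger area: a region fires when its depth deviates
# from a reference frame of the empty workspace by more than a threshold.
import numpy as np

def area_triggered(frame: np.ndarray, reference: np.ndarray,
                   region: tuple, threshold_mm: float = 30.0) -> bool:
    """Return True if the region's mean depth change exceeds the threshold."""
    y0, y1, x0, x1 = region
    delta = np.abs(frame[y0:y1, x0:x1].astype(float)
                   - reference[y0:y1, x0:x1].astype(float))
    # Using the mean change lets the threshold be tuned for areas that are
    # already partially filled, as described for the adjoined-spheres approach.
    return float(delta.mean()) > threshold_mm

reference = np.full((480, 640), 900.0)        # empty workspace, 90 cm away
frame = reference.copy()
frame[200:240, 300:360] = 850.0               # a hand entering a picking box
print(area_triggered(frame, reference, (200, 240, 300, 360)))  # -> True
```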
The approach was implemented in the software “Assistive Systems Experiment
Designer”. In its designer component, the steps of a production sequence can be mapped
by assigning causes and effects. The corresponding runner component monitors the
trigger areas during work and creates a video of the workspace for documentation
purposes. Whenever an interactive area is triggered, a log entry is generated. Like the
adjoined-spheres approach, the system is generic and allows using motion recognition
to analyze any kind of interaction or work activity in confined spaces.
Table 30: RQ 4: Implementation of projection
No.    Research Question (RQ)                                          Chapter
RQ4    How can projection be implemented in CAAS?                      6.4.1, 7.2.2
Projection allows moving work-relevant information from the user’s periphery closer
to the center of activity. This could be realized by applying state-of-the-art technology:
the visual elements (instructions, controls for navigation and indicators for picks)
were projected directly into the workspace.
A challenge was that the prototype CAAS had to be able to administer three visual
output devices simultaneously, so hardware issues had to be addressed. Furthermore,
the screen coordinates of all graphic devices had to be synchronized. Finally, the
projector’s tilt required by the physical setup implied a keystone distortion which had to
be corrected by a non-affine transformation in real time.
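Such a correction can be sketched with a standard perspective transformation. The corner coordinates below are hypothetical calibration values, and OpenCV's homography utilities stand in for whatever routine the prototype actually used.

```python
# Sketch of keystone correction: a 3x3 homography (a non-affine transformation)
# warps the output image so the tilted projector yields an undistorted rectangle.
import cv2
import numpy as np

# Hypothetical calibration: where the four image corners land on the table...
observed = np.float32([[40, 20], [600, 60], [620, 440], [20, 460]])
# ...and where they should land in an undistorted 640 x 480 projection.
target = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

H = cv2.getPerspectiveTransform(observed, target)

def pre_distort(frame: np.ndarray) -> np.ndarray:
    """Warp each frame in real time before sending it to the projector."""
    return cv2.warpPerspective(frame, H, (640, 480))
```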
Figure 78: User comparing the assembled product with the 1:1 model.
On the practical level a simple idea proved very successful: projecting a 1:1 model of
the correct result of the assembly, with the current step marked by a green arrow, into
the center of the workspace (see Figure 78). Several users relied completely on that
simplification – and we got the impression that the excellent results of a part of the
Projection population were strongly furthered by this visual aid.
Table 31: RQ 5: Implementation of gamification
No.    Research Question (RQ)                                          Chapter
RQ5    How can gamification be implemented in CAAS?                    6.5
Gamification allows keeping a user’s awareness up during repetitive and monotonous
work – and this increased awareness helps to prevent errors; also, increasing motivation
and fun during work is ethically desirable.
The basis of the gamification implementation is the motion data, which allows measuring
the assembly times. Additional sensors allowing facial emotion detection or heart rate
detection were not implemented and are a subject of future research.
The basic design approach was to address the conditions for flow. This was achieved
by creating a “production game” where each work process is visually represented as
a brick. During the corresponding work process the active brick moves downwards
while its color changes from green to red. This time assessment is based on the user’s
personal mean of a limited number of previous assemblies of the identical product.
This hypothetical production speed is also visualized by a “shadow brick” moving
down at exactly this speed. When a product is completed, all bricks disintegrate. After
each process and after each completed product the user receives multimodal feedback
(visual and audio).
The multimodal feedback already gives the user the essential feeling of self-efficacy.
However, in long-term operation gamification can also be used to actively influence
and support the flow curve. The assembly times of the recent sequences are then used
by the interpreter to determine the user’s current position on the flow curve and to detect
trends and phase changes, e.g. from flow to arousal. While such changes are perfectly
normal, an intervention can be created if the phase of arousal or control lasts too long
or the trend indicates that the user is leaving the flow channel (e.g. when moving from
arousal to anxiety). This can be achieved by adapting the production cycle and the
speed of the shadow brick. As an example, an excited user can be calmed by slowing
down the shadow bricks. This leads to more “successful” assembly processes, unmistakably
mirrored by green bricks and positive feedback. Thus a user’s self-confidence and
motivation can be actively supported.
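The timing logic of the production game can be sketched as follows. The linear color blend, the drop height and the 20% slow-down are hypothetical choices that merely illustrate the described mechanics, not the prototype's actual parameters.

```python
# Sketch of the production game's timing: the brick color blends from green
# to red as the process time approaches the user's personal mean, and the
# shadow brick descends at exactly that mean speed.
def brick_color(elapsed_s: float, personal_mean_s: float) -> tuple:
    """Linear green-to-red blend; fully red once the personal mean is exceeded."""
    ratio = min(elapsed_s / personal_mean_s, 1.0)
    return (int(255 * ratio), int(255 * (1.0 - ratio)), 0)   # (R, G, B)

def shadow_speed(personal_mean_s: float, drop_height_px: int = 400) -> float:
    """Pixels per second so the shadow brick lands exactly at the mean time."""
    return drop_height_px / personal_mean_s

# Calming intervention: slow the shadow brick by stretching the reference mean,
# producing more "successful" (green) processes for an excited user.
calmed_mean = 1.2 * 60.0
print(brick_color(30.0, 60.0), shadow_speed(calmed_mean))
```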
Table 32: RQ 6: Quantitative impact of augmentations on speed and quality
No.    Research Question (RQ)                                          Chapter
RQ6    What is the quantitative impact on work speed and quality       7.5
       when work is augmented by projection and gamification?
A main issue for the analysis of effects when working with impaired persons is the
high variance between the users. If all individual user times are taken into account,
neither projection (mean time reduction of 7.8%) nor gamification (mean time
reduction of 12.5%) has a statistically significant impact on time.
However, when looking at the data from an aggregated perspective and analyzing how
the groups’ mean speed developed over the eight production sequences, there are
significant differences: while CAAS with projection cannot be shown to reduce the mean
sequence times (p>0.200 one-sided), CAAS using gamification significantly reduce
them (p<0.064 one-sided). With the error rates the effects are generally
more accentuated: CAAS augmented by projection not providing feedback on
product-specific errors increase the error rate by an absolute 6.1% and a relative
28.7%. If all individual users are taken into account, this is not enough to be
statistically significant (p>0.186 one-sided). CAAS augmented by gamification not
providing product-specific feedback increase the error rate by an absolute 10.5% and a
relative 46.5%, which is statistically significant in spite of the high user variance
(p<0.052 one-sided).
When looking at the data from the aggregated perspective to analyze how the groups’
mean error rate developed over the eight production sequences, both augmented
systems resulted in significantly higher error rates (p<0.003 one-sided with projection
and p<0.00001 one-sided with gamification).
In summary, the currently observed effects of gamification and projection on work
speed and quality are negative, because the increased speed is compensated by an
increased error rate, so the total quality paradigm is violated. Obviously the users’
benefits from projection and the motivational gain from gamification were transformed
into speed, and since no feedback on quality was given, a speed-accuracy trade-off
occurred. Potentially the positive effects on speed can be preserved if future CAAS
manage to create interventions when product-specific errors happen (see chap. 8.3
Future Research).
When looking at sub-groups, there is an important finding regarding the augmentation
by projection. The faster part of this group differs from all other sub-groups: the mean
error rate is only 14.7% – given that the complete population’s mean error rate is
29.1%, these faster users made only about a third of the mistakes. Compared to the
faster users in the state-of-the-art population this is a relative error reduction of 45.8%.
The hypothesis that projection decreases the error rate of above-average workers can
be accepted (p<0.053 one-sided). This is especially important because these workers
did not receive feedback on product-specific errors. Probably they made good use of
the 1:1 model projected in the center of the workspace. Finally, there is a clear trend
towards making fewer errors in this group (see Figure 76), so augmenting a workplace
with projection also seems to trigger learning and thus continually improves quality.
Table 33: RQ 7: Quantitative impact on users
No.    Research Question (RQ)                                          Chapter
RQ7    What is the quantitative impact on users when work is           7.5
       augmented by projection and gamification?
When looking at the results of the survey at a meta-level, it is astounding how many
important HCI issues in the different scenarios were seen as equivalent by the
impaired production workers: there are no significant differences in overall acceptance,
in usability and in handling. Most participants feel that they can use these systems
proficiently and without additional help. These are positive findings: the fact that there
are no significant differences in these decisive aspects shows that the production
workers are open to innovation and look at new approaches without prejudice.
Both the feedback and the production game are rated with the second-highest
agreement level and the lowest standard deviation within the post-experiment questions
(x̄ = 1.9, SD = 0.4). However, the hypothesis that gamification decreases strain could
not be confirmed. Although there is a recognizable tendency in that direction and the
gamification scenario is rated the least exhausting, the differences are not significant.
While Projection was generally looked at favorably, projected instructions were
consistently perceived as significantly “less easy” than the instructions on the monitor.
While the additional 1:1 model was an augmentation highly appreciated by all users,
it was also recognized as a deviation from the known standard that increased
complexity. This is interesting in the light of the catalytic effect projection had on
performance: while some workers learned to use the additional aid and improved
significantly, others were confused and performed worse than with the SotA system.
So while CAAS are perceived as easy to use once they are mastered, both augmented
systems were perceived as significantly more difficult to learn. The ability to learn or
simply adapt to a new system obviously is crucial for success.
Table 34: RQ 8: Qualitative impact on users
No.    Research Question (RQ)                                          Chapter
RQ8    What is the qualitative impact on users when work is            7.7
       augmented by projection and gamification?
The survey’s positive evaluation of the CAAS acceptance, usability and handling was
confirmed by our qualitative findings: everybody who was able to use the system left
the room in a positive mood. This positive disposition may partly be attributable to
the fact that “they made it” – but for the system designers it was important to see that
a novel approach is not flatly rejected and does not result in a negative mood swing.
The projection was very well appreciated by the subjects. It appeared natural, whereas
looking at the instructions on a monitor seemed antiquated in direct comparison. The
small model in the center of the workspace was an essential improvement. Several
participants refrained completely from looking at the full instructions projected to
their left and focused entirely on this 1:1 model.
The participants seemed to react more to the gamification component’s auditory
feedback than to the visual: the production game on the monitor was rarely looked at
during the work processes. However, the participants valued and enjoyed the feedback:
the evaluation at the end of each sequence with the smiley and the longer spoken
feedback was explicitly looked at and commented on. The adapted visual and auditory
feedback resulted in the CAAS being perceived as a companion – a pleasant finding
that shows how CAAS can be integrated to generate ethical benefits.
The current gamification setup drew the user’s focus away from the workspace; future
approaches will need to integrate such elements into the workspace, closer to the center
of the user’s attention. This can be achieved by combining projection and gamification.
Table 35: RQ 9: Ethical dimension of CAAS
No.    Research Question (RQ)                                          Chapter
RQ9    What is the ethical dimension of CAAS                           2.3, 8.2
       and how can they be designed to be humane?
The ethical dimension of CAAS is discussed in the following sub-chapter.
8.2 Ethical Implications: Towards Humane Work
The introduction stated that advances in technology must not increase the gap between
people but can open radically new perspectives and even lead to the “full inclusion of
[these] individuals […] in the mainstream of society” (Cook, 2010). While new
technologies are readily accepted by many users, critics have put forward the argument
that the recent advances in AI and automated systems will in due time allow robots
(or autonomous machines) to take over all forms of routine human work – and that
this development would finally result in an unemployment crisis (Ford, 2013). This
future scenario describes the ancient human fear that one day the potentials of
technology will be used to make humans obsolete.
While this potential future scenario cannot be denied, there is an alternative: using
the mentioned advances in the field of assistive technology to enhance human work
to the point where it becomes economically viable, meaningful and enjoyable. Even
more importantly, this “positive” scenario is not a distant vision – it can be realized
with today’s technology. Context-aware assistive systems (CAAS) with augmentations
like projection and gamification can help to shape work activities according to
the user’s abilities while increasing productivity and preserving the sense of completeness
previous forms of work had (see chap. 2.3.2 Value of Work). They have the
potential to empower impaired and elderly workers to do more complex work or to
remain in active jobs for a longer time, thus addressing the demographic change.
However, CAAS also come at a cost. In order to adapt to a user and support her or
him as well as possible, these systems need to “come close”: they need to know as
much about the person as possible. As discussed, one of the requirements for robust
product-specific error detection is high-resolution real-time motion analysis, but even
heart rate and emotion detection are technological possibilities being explored right
now. Like a good psychiatrist, a future CAAS would be more “aware” of a user’s
physical, mental and emotional state than we typically are ourselves.
Should continuous data management – the ideal of the virtual factory and of
cyber-physical systems (see chap. 2.6.3) – really integrate “human resources”? Is it ethical
to observe a user’s emotions and create a model of the mindset? Does “full inclusion”
justify using this data to deliberately manipulate a worker with gamification
elements? Is it ethically tolerable that such gamification elements replace intrinsic
rewards with explicit ones?
Such questions have already been addressed in the MEESTAR model (see chap. 3.5
Ethical Standards for Assistive Technology) with respect to the areas of care and health.
In work contexts, assistive systems like the CAAS described in this work have never
been used before – and neither have their ethical implications been discussed or
evaluated. The ultimate question is the value we attribute to work: is it an integral part of
a good life and is there a “right to work”, as the UN Convention on the Rights of
Persons with Disabilities suggests (see chap. 2.3.3)? If we follow this line of argument,
using all available technological means to support persons to be able to work
might become a moral obligation, similar to the medical domain where the preservation
of life outweighs many other aspects. If our societies take this new “right to work”
seriously, humane work will in the long run become not only an ethical recommendation
but a legal necessity.
Once we take the elemental importance of work for granted, the next question is how
technology like CAAS, which highly impacts the way we work, can be integrated in
the most beneficial way: how can a balance between the technological possibilities of
a quantified self and the desire for freedom, privacy and autonomy be achieved? These
questions require much interdisciplinary research and long-term studies.
While the integration of CAAS into a company’s product-related data systems is
perceived as essential (see chap. 4.3 Requirements Study), the movement analysis
(physical state) should be treated differently. This applies even more to the modeled mental
and emotional states. Based on our quantitative and qualitative findings, we propose
the following guideline of ethical standards for future CAAS:
• Product-related data generated by the CAAS environmental interface during the work process, which could also be gained by subsequent camera inspection of products, may be transferred to enterprise data management systems (like ERP or PLM).
• Data reflecting a person’s physical disposition (e.g. movement, stance) may only be used to improve the product (e.g. detecting errors) or to help the worker (e.g. indicating a malposition). It remains black-boxed within the CAAS and is deleted as soon as possible.
• Data reflecting a person’s mental or emotional state may only be used to adapt the CAAS operation mode or to generate adapted interventions (gamification, instructions and feedback) suited to increase the worker’s motivation and well-being. It remains black-boxed within the CAAS and is deleted as soon as possible.
The terminology used is based on the model of CAAS presented in chapter 5.
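To make the guideline more tangible, the following minimal sketch shows how such a policy could be enforced in a CAAS data layer. All types and names are hypothetical illustrations and not part of the implementation described in this work.

#include <algorithm>
#include <vector>

// Hypothetical data taxonomy mirroring the three guideline classes.
enum class DataClass { ProductRelated, Physical, MentalEmotional };

struct Record {
    DataClass cls;
    long long timestampMs;  // acquisition time (payload omitted)
};

// Only product-related data may be transferred to ERP/PLM systems.
bool MayExportToEnterpriseSystems(const Record& r) {
    return r.cls == DataClass::ProductRelated;
}

// Person-related data stays black-boxed and is deleted "as soon as
// possible", here interpreted as: once the current work process ends.
void PurgePersonRelatedData(std::vector<Record>& store, long long processEndMs) {
    store.erase(std::remove_if(store.begin(), store.end(),
                               [&](const Record& r) {
                                   return r.cls != DataClass::ProductRelated
                                       && r.timestampMs <= processEndMs;
                               }),
                store.end());
}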
8.3 Future Research
This sub-chapter is based on the following publications:
Funk, M., Korn, O., & Schmidt, A. (2014). An Augmented Workplace for Enabling User-Defined Tangibles. In CHI ’14 Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM.
Funk, M., Korn, O., & Schmidt, A. (2014). Assistive Augmentation at the Manual Assembly Workplace using In-Situ Projection. In CHI ’14 Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM.
CAAS represent a new line of research – especially in the industrial domain. While
this work describes first steps into this area, several questions could not be answered
with certainty, and additional questions came up. This applies both to the technical
side and to the ethical implications of CAAS in workplaces. Both areas are portrayed
in the following.
8.3.1 Technical Perspectives
At the time of the implementation (mid-2012), none of the low-priced motion recognition
systems provided the spatial and temporal resolution necessary for a robust
detection of the intricate finger movements common in manual production. Thus the
implementation had to be simplified in that respect, and no real-time product-specific
error detection with corresponding interventions was integrated (see chap. 6.3.1
Technical Restrictions and Solution).
A major technical issue for future research is therefore implementing real-time
product-specific error detection. This can be achieved by supplementing motion recognition,
which essentially focuses on human body tracking, with object recognition: the result
of a work step is compared with a representation of the intended product at that time.
Another possibility is the integration of more powerful low-cost motion recognition
sensors, e.g. the improved Kinect used in the Xbox One, which uses a true time-of-flight
camera (see chap. 3.1 Motion Recognition). To address these issues, in the follow-up
project motionEAP the 3D spaces of the Kinect and the Leap Motion sensor
are already being combined (Funk et al., 2014a, 2014b) to allow robust error detection
for CAAS.
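As an illustration of the object recognition approach sketched above, the following minimal example compares the depth image of a completed work step against a stored reference of the intended state; a large average deviation indicates a missing or misplaced part. The function name and thresholds are assumptions for illustration, not taken from the implementation.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Returns true if the observed depth map of the product region matches
// the reference snapshot of the intended assembly state.
bool StepMatchesReference(const std::vector<uint16_t>& observedMm,
                          const std::vector<uint16_t>& referenceMm,
                          double maxMeanDeviationMm = 8.0)
{
    if (observedMm.size() != referenceMm.size() || observedMm.empty())
        return false;

    double sum = 0.0;
    std::size_t valid = 0;
    for (std::size_t i = 0; i < observedMm.size(); ++i) {
        // A value of 0 means "no depth reading" and is skipped.
        if (observedMm[i] == 0 || referenceMm[i] == 0)
            continue;
        sum += std::abs(static_cast<int>(observedMm[i]) -
                        static_cast<int>(referenceMm[i]));
        ++valid;
    }
    // A missing or misplaced part raises the mean depth deviation
    // inside the product region above the tolerance.
    return valid > 0 && (sum / valid) <= maxMeanDeviationMm;
}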
Figure 79: A future setup with multiple sensors combining motion and object
recognition.
The future implementation of excitement and emotion detection will significantly improve
CAAS when modelling a user’s flow curve (see chap. 5 Model and Architecture).
Here again the ability of next-generation low-price sensors to detect the heart
rate and interpret facial expressions will make a difference. This is also clearly the
path game hardware developers are taking: the Kinect of Microsoft’s Xbox One already
interprets users’ facial expressions. While the technical implementation of emotion
detection has already been solved in the context of affective computing (Geller, 2014),
the design of adequate interventions and the development of suitable gamification
elements as well as their seamless integration into the workspace will be a huge area
for research and development.
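The following sketch illustrates how detected excitement could feed into the flow model from chapter 5: the system counteracts anxiety (challenge exceeding skill) by simplifying, and boredom (skill exceeding challenge) by adding gamification stimuli. The names, scales and thresholds are purely illustrative assumptions.

// Possible interventions derived from the modelled flow state.
enum class Intervention { None, SimplifyAndCalm, AddGameChallenge };

// skill and challenge are normalized to [0, 1] by the performance
// model; excitement could stem from heart rate or facial expression.
Intervention ChooseIntervention(double skill, double challenge,
                                double excitement)
{
    const double flowMargin = 0.15;  // tolerated skill/challenge imbalance

    if (challenge > skill + flowMargin || excitement > 0.8)
        return Intervention::SimplifyAndCalm;    // anxiety region
    if (challenge < skill - flowMargin)
        return Intervention::AddGameChallenge;   // boredom region
    return Intervention::None;                   // inside the flow channel
}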
With respect to the study results, several questions for future research have opened
up: a long-term study is needed to determine whether the effects on motivation, acceptance,
product quality and speed will change over time in an everyday usage situation.
Another issue that came up in the study (see chap. 7.5 Experiment Results and
Discussion) is the high variance between the impaired users. This variance can occur
within short periods of time. Thus a future research focus is the development of a
game-based assessment tool for impaired users that allows measuring their motoric
and cognitive skills quickly and efficiently. Such an assessment tool will help future
researchers to “calibrate” the performance index and ensure that changes in an individual
user’s performance can safely be attributed to the augmentations of CAAS.
This research has already been commenced in the follow-up project.
Finally, we wonder to what extent the augmented assistive systems portrayed here and
developed in the near future can be used with unimpaired persons. It would be interesting
to adapt CAAS accordingly and quantify the effects on the motivation, speed and
quality of the work of unimpaired workers.
8.3.2 Ethical Perspectives
The ethical implications of CAAS need to be discussed and evaluated. An essential
issue is the security of personal data. Especially when sensors are used to detect a
worker’s heart rate or emotions, it will be necessary to provide more than technical
solutions. While the ethical guideline (see chap. 8.2) may be a first step, new ethical
rules and standards need to be established and evaluated in pilot projects.
This cautious attitude regarding personal data also has to be applied to gamification;
while its implementation benefits strongly from the advances mentioned before, the
question remains whether it is legitimate to replace or supplement intrinsic motivation
with extrinsic methods. Here, studies of larger groups using CAAS with and without
gamification under the same conditions (skills, workplace, product, payment etc.) are
needed. Only if gamification elements in work environments increase the motivation
and life satisfaction of a worker in the long run (or if they have no significant influence
but increase productivity) is it legitimate to use them as a common feature of CAAS.
Generally, a model for the early integration of ethical experts and users should be established.
Based on the growing societal importance of assistive systems due to demographic
change (see chap. 2.2 Demography and Targeted Users), the integration of
stakeholders from the economic sphere (unions, works councils) should also be
formalized: as the World Health Organization has stated, design is no longer limited
to the technological sphere and the user but involves the environment. Within the
strongly regulated and confined industrial domain, the shift towards “design for all”
can ideally be tested and optimized.
9 SUPPLEMENT
This chapter contains additional information that was not relevant for the main line of
argumentation in the text but might be of interest for subsequent studies and
implementation efforts based on this work.
9.1 Implementation Details
To implement the triggers, the PrimeSense OpenNI data (see chap. 3.1 Motion Recognition)
had to be accessed using the class VideoFrameGrabber, partly developed by
the author:
Equation 8: Video frame grabber function for RGB data.
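The original listing is a minimal frame grabber; the following sketch shows such a routine against the OpenNI 1.x C++ API. The function name, signature and error handling are assumptions, not the original VideoFrameGrabber code.

#include <vector>
#include <XnCppWrapper.h>

// Grabs the latest RGB frame from an initialized image generator and
// copies it into a caller-provided buffer.
bool GrabRgbFrame(xn::Context& context, xn::ImageGenerator& image,
                  std::vector<XnRGB24Pixel>& outFrame)
{
    // Block until a new frame is available, then update the generator.
    if (context.WaitOneUpdateAll(image) != XN_STATUS_OK)
        return false;

    xn::ImageMetaData md;
    image.GetMetaData(md);  // metadata wraps the current pixel buffer

    const XnRGB24Pixel* pixels = md.RGB24Data();
    outFrame.assign(pixels, pixels + md.XRes() * md.YRes());
    return true;
}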
The following code snippet (Equation 9) shows how OpenNI and NITE are accessed
and how the infrared (IR) and depth information is integrated.
Equation 9: OpenNI and NITE configurator function for depth data.
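Again as a sketch only: a configurator that creates the depth and IR generators via OpenNI and registers the depth map to the RGB viewpoint; NITE builds its scene and hand analysis on top of these streams. Resolution, frame rate and the simultaneous availability of IR and RGB depend on the sensor and driver and are assumptions here.

// Creates depth, IR and image generators and aligns the depth map
// with the RGB image so that trigger boxes defined in image
// coordinates match the depth data. Types from XnCppWrapper.h.
XnStatus ConfigureDepthAndIr(xn::Context& context,
                             xn::DepthGenerator& depth,
                             xn::IRGenerator& ir,
                             xn::ImageGenerator& image)
{
    XnStatus rc = context.Init();
    if (rc != XN_STATUS_OK) return rc;

    if ((rc = depth.Create(context)) != XN_STATUS_OK) return rc;
    if ((rc = ir.Create(context)) != XN_STATUS_OK) return rc;
    if ((rc = image.Create(context)) != XN_STATUS_OK) return rc;

    XnMapOutputMode mode;
    mode.nXRes = 640;
    mode.nYRes = 480;
    mode.nFPS = 30;
    depth.SetMapOutputMode(mode);
    ir.SetMapOutputMode(mode);

    // Align depth with the RGB image if the driver supports it.
    if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
        depth.GetAlternativeViewPointCap().SetViewPoint(image);

    return context.StartGeneratingAll();
}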
The following code snippets (Equation 10, Equation 11) show how a trigger box is set
up and how it is checked.
Equation 10: Setup of the trigger box.
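The trigger box itself can be modelled as an axis-aligned region in the depth image: a pixel rectangle combined with a depth interval. The structure and the debounce threshold below are illustrative assumptions, not the original code.

// A trigger box: a rectangle in depth-image coordinates combined with
// a depth interval in millimetres (types from XnCppWrapper.h).
struct TriggerBox {
    XnUInt32 xMin, xMax;      // horizontal pixel range
    XnUInt32 yMin, yMax;      // vertical pixel range
    XnDepthPixel zMin, zMax;  // depth range in mm
    XnUInt32 minHits;         // pixels inside needed to fire
};

TriggerBox MakeTriggerBox(XnUInt32 x, XnUInt32 y, XnUInt32 w, XnUInt32 h,
                          XnDepthPixel zNear, XnDepthPixel zFar)
{
    TriggerBox box;
    box.xMin = x;     box.xMax = x + w;
    box.yMin = y;     box.yMax = y + h;
    box.zMin = zNear; box.zMax = zFar;
    box.minHits = 50; // debounce: ignore isolated noise pixels
    return box;
}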
Equation 11: Check of the trigger box.
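Checking the trigger box then amounts to counting the depth pixels that fall into the volume, e.g. a hand reaching into a bin; once enough pixels lie inside, the trigger fires and a pick can be registered. This, too, is a sketch under the assumptions above.

// Returns true if enough depth pixels lie inside the box volume.
bool CheckTriggerBox(const xn::DepthMetaData& md, const TriggerBox& box)
{
    const XnDepthPixel* data = md.Data();
    XnUInt32 hits = 0;
    for (XnUInt32 y = box.yMin; y < box.yMax && y < md.YRes(); ++y) {
        for (XnUInt32 x = box.xMin; x < box.xMax && x < md.XRes(); ++x) {
            // Depth in mm; a value of 0 means "no reading".
            const XnDepthPixel z = data[y * md.XRes() + x];
            if (z >= box.zMin && z <= box.zMax)
                ++hits;
        }
    }
    return hits >= box.minHits;
}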
9.2 Questionnaires in Detail
As described in the studies and evaluation chapter (see chap. 7.2.1 Questionnaire Design),
the items in the questionnaires are deliberately redundant and include contradictory
items to detect affirmative tendencies. While the items were thematically sorted
in the main text, in the following we provide the original questionnaires, where the
items are scrambled to make the redundancies and contradictions less obvious.
We also provide the corresponding answering data.
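As a minimal sketch of how such contradictory pairs can be exploited (e.g. item 6 “The help device was easy.” versus item 10 “The help device was unnecessarily complicated.” below): assuming the common coding 1 = fully agree to 5 = do not agree at all, a consistent pair of answers should roughly sum to 6. The pairing and threshold are illustrative assumptions.

#include <cstdlib>

// Flags an affirmative tendency for one contradictory item pair,
// each answer coded 1 (fully agree) .. 5 (do not agree at all).
bool ShowsAcquiescence(int itemA, int itemB)
{
    // Consistent contradictory answers sum to about 6 (e.g. 2 + 4);
    // larger deviations suggest the subject agreed with both items.
    return std::abs((itemA + itemB) - 6) > 2;
}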
Table 36: Pre-experiment questions – identical in all scenarios.
1 How are you today? (very good / good / normal / not good / bad)
2 Are you nervous? (not at all / hardly / normal / distinctly / very much)
3 Do you look forward to the experiment? (not at all / hardly / normal / distinctly / very much)
4 How many hours per week do you work at the computer? (not at all / < 2 hours / 2 - 5 hours / 5 - 10 hours / > 10 hours)
5 How many hours per week do you play computer or console games? (not at all / < 2 hours / 2 - 5 hours / 5 - 10 hours / > 10 hours)
Table 37: Post-experiment questions: SotA.
All items use the five-point scale: fully agree / agree / neutral / do not agree / do not agree at all.
6 The help device was easy.
7 I would use the help device again.
8 I would recommend the help device to my colleagues.
9 I need somebody’s help to use the help device.
10 The help device was unnecessarily complicated.
11 The indicator for picks was easy.
12 I found it tiring to confirm each step with a button.
13 Working with the help device can be learned quickly.
14 I found the work with the help device exhausting.
15 I could use the help device proficiently.
16 I had to learn a lot before I could work with the help device.
17 I found the instruction on the monitor complicated.
18 I found the instruction on the monitor easy.
19 I found it tiring to always check the instruction on the monitor above me.
Table 38: Post-experiment answers: SotA.
(Data table: subject, age, sex and wage level for the 20 subjects, followed by their answers to question items 1-19 on the five-point scale.)
Table 39: Post-experiment questions: Projection.
All items use the five-point scale: fully agree / agree / neutral / do not agree / do not agree at all.
6 The help device was easy.
7 I would use the help device again.
8 I would recommend the help device to my colleagues.
9 I need somebody’s help to use the help device.
10 The help device was unnecessarily complicated.
11 The instruction left of me on the table disturbed my work.
12 The indicator for picks was easy.
13 I found the instruction left of me on the table complicated.
14 I found it tiring to confirm each step with a button.
15 Working with the help device can be learned quickly.
16 The model in the center of the workspace helped me.
17 I found the instruction left of me on the table easy.
18 I found the work with the help device exhausting.
19 I could use the help device proficiently.
20 I had to learn a lot before I could work with the help device.
21 I liked having the instruction directly to my left on the table.
22 The model in the center of the workspace bothered me.
Table 40: Post-experiment answers: Projection.
(Data table: subject, age, sex and wage level for the 20 subjects, followed by their answers to question items 1-22 on the five-point scale.)
Table 41: Post-experiment questions: Gamification.
All items use the five-point scale: fully agree / agree / neutral / do not agree / do not agree at all.
6 The help device was easy.
7 I would use the help device again.
8 I would recommend the help device to my colleagues.
9 I need somebody’s help to use the help device.
10 The help device was unnecessarily complicated.
11 The indicator for picks was easy.
12 I found the instruction on the monitor complicated.
13 I liked the game on the monitor.
14 I found it tiring to confirm each step with a button.
15 Working with the help device can be learned quickly.
16 The comments on my work motivated me.
17 I found the instruction on the monitor easy.
18 The game on the monitor bothered me.
19 I found the work with the help device exhausting.
20 I could use the help device proficiently.
21 I would like to always have the game at work.
22 I had to learn a lot before I could work with the help device.
23 The comments on my work disturbed me.
24 I found it tiring to always check the instruction on the monitor above me.
Table 42: Post-experiment answers: Gamification.
(Data table: subject, age, sex and wage level for the 20 subjects, followed by their answers to question items 1-24 on the five-point scale.)
9.3 Study Result Details
As indicated in chapter 7.4 Data, even at the most detailed view the data provided is
already aggregated in some form – for example, sequence completion times are given
instead of the process completion times (there are eight processes in each sequence,
i.e. 64 processes to analyze for each user). The complete data is provided in the following.
The abbreviation “A” is used for an assembly process, the abbreviation “S” for an assembly
sequence, i.e. for eight assembly processes. Since there are very few strong outliers,
the graphs are capped at 150 seconds per process; the complete data including
the outliers can be assessed in the corresponding tables. In the original tables, process
times printed in red indicate that an error occurred in this process.
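A minimal sketch of the aggregation described above, with illustrative names: eight assembly processes (A) form one sequence (S), the sequence time is their sum, and the error rate is the share of erroneous processes, which is why the tables report multiples of one eighth (13%, 25%, 38%, ...).

#include <array>

struct Process { double seconds; bool error; };
struct SequenceStats { double seconds; double errorRate; };

// Aggregates eight assembly processes into one sequence entry.
SequenceStats AggregateSequence(const std::array<Process, 8>& processes)
{
    SequenceStats s = { 0.0, 0.0 };
    int errors = 0;
    for (const Process& p : processes) {
        s.seconds += p.seconds;
        if (p.error)
            ++errors;
    }
    s.errorRate = errors / 8.0;  // e.g. 1/8 is reported rounded as 13%
    return s;
}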
Figure 80: Detailed process times of the SotA population: graph. (Per-subject process times in seconds for processes A1.1 to A8.8, plotted on a scale from 0 to 150 seconds.)
Table 43: Detailed process times of the SotA population: values.
(Data table: subject, age, sex and wage level for the 20 subjects; individual process times in seconds for the 64 processes A1.1 to A8.8; per-sequence totals S1 to S8 with the corresponding error rates.)
Figure 81: Detailed process times of the Projection population: graph. (Per-subject process times in seconds for processes A1.1 to A8.8, plotted on a scale from 0 to 150 seconds.)
Table 44: Detailed process times of the Projection population: values.
(Data table: subject, age, sex and wage level for the 20 subjects; individual process times in seconds for the 64 processes A1.1 to A8.8; per-sequence totals S1 to S8 with the corresponding error rates.)
Figure 82: Detailed process times of the Gamification population: graph. (Per-subject process times in seconds for processes A1.1 to A8.8, plotted on a scale from 0 to 150 seconds.)
Table 45: Detailed process times of the Gamification population: values.
(Data table: subject, age, sex and wage level for the 20 subjects; individual process times in seconds for the 64 processes A1.1 to A8.8; per-sequence totals S1 to S8 with the corresponding error rates.)
10 REFERENCES
English translations of German titles in [square brackets].
AAL Contents Working Group, Task Force. (2013). ICT-based Solutions for Support­
ing Occupation in Life of Older Adults. Retrieved from http://www.aal-eu­
rope.eu/wp-content/uploads/2013/03/AAL-2013-6-call-text-20130326.pdf
Anders, T. R., Fozard, J. L., & Lillyquist, T. D. (1972). Effects of age upon retrieval
from short-term memory. Developmental Psychology, 6(2), 214–217.
doi:10.1037/h0032103
Andresen, E. M., Fitch, C. A., McLendon, P. M., & Meyers, A. R. (2000). Reliability
and validity of disability questions for US Census 2000. American Journal
of
Public
Health,
90(8),
1297.
Retrieved
from
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1446332/
Australian Institute of Health and Welfare. (2004). Disability and its relationship to
health conditions and other factors. Canberra: Australian Institute of Health
and Welfare.
Bailey, R. W. (1989). Human performance engineering : using human factors/ergo­
nomics to achieve computer system usability. Englewood Cliffs, N.J.: Pren­
tice Hall.
Ballard, C. (2011, June 7). Developing for Kinect using open source APIs. Tribal
Labs. Retrieved from http://www.triballabs.net/2011/06/kinectapis
Bartle, R. (1996). Hearts, Clubs, Diamonds, Spades: Players Who suit MUDs. Re­
trieved October 10, 2013, from http://www.mud.co.uk/richard/hcds.htm
Bierkandt, J., Preissner, M., Hermann, F., & Hipp, C. (2011). Usability und humanmachine interfaces in der Produktion. Studie Qualitätsmerkmale für Ent­
wicklungswerkzeuge. (D. Spath & A. Weisbecker, Eds.). Stuttgart: Fraun­
hofer-Verl.
Biundo, S., Bidot, J., & Schattenberg, B. (2011). Planning in the Real World. Informatik-Spektrum, 34(5), 443–454. doi:10.1007/s00287-011-0562-7
Brach, M., & Korn, O. (2012). Assistive technologies at home and in the workplace—
a field of research for exercise science and human movement science. Eu­
ropean Review of Aging and Physical Activity, 9(1), 1–4.
doi:10.1007/s11556-012-0099-z
Brand, S., & Holsboer-Trachsler, E. (2010). Das Burnout Syndrom – eine Übersicht.
Therapeutische Umschau, 67(11), 561–565. doi:10.1024/0040­
5930/a000095
Brumels, K. A., Blasius, T., Cortright, T., Oumedian, D., & Solberg, B. (2008). Com­
parison of efficacy between traditional and video game based balance pro­
grams.
Clin
Kinesiol,
62(4),
26–31.
Retrieved
from
http://clinicalkinesiology.org/content/journals/2008/win­
ter/Brumels_et_al_62_4_26-32/index_files/Brumels_et_al_62_4_26­
32b.pdf
182
References
Buckland, M. (2005). Programming game AI by example. Plano, Texas, USA: Wordware Pub.
Burckhardt, J. (1999). The Greeks and Greek civilization. (O. Murray & S. Stern,
Eds.). New York: St. Martin’s Press.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer
interaction. Hillsdale, N.J.: L. Erlbaum Associates.
Cardoso Machado, N. M. (2011). Karl Polanyi and the New Economic Sociology:
Notes on the Concept of (Dis)embeddedness. MPRA Paper. Retrieved Sep­
tember 17, 2013, from http://mpra.ub.uni-muenchen.de/48957/
Castellani, S., Hanrahan, B., & Colombino, T. (2013). Game Mechanics in Support
of Production Environments. In CHI ’13 Proceedings of the ACM SIGCHI
Conference on Human Factors in Computing Systems. Retrieved from
http://gamification-research.org/wp-content/uploads/2013/03/Castel­
lani_etal.pdf
Charles, D., Kerr, A., McNeill, M., McAlister, M., Black, M., Kcklich, J., … Stringer,
K. (2005). Player-centred game design: Player modelling and adaptive dig­
ital games. In Proceedings of the Digital Games Research Conference (Vol.
285).
Retrieved
from
http://www.researchgate.net/publica­
tion/228636408_Player-centred_game_design_Player_model­
ling_and_adaptive_digital_games/file/9fcfd514853cb27e1f.pdf
Cook, A. M. (2010). The future of assistive technologies. In ASSETS ’10 Proceedings
of the 12th international ACM SIGACCESS conference on Computers and
accessibility (p. 1). ACM Press. doi:10.1145/1878803.1878805
Cook, A. M., & Hussey, S. M. (1995). Assistive technologies : principles and practice.
St. Louis: Mosby.
Cowie, P. (2013, September 23). The Phenomena of Gamification – The Next Big
Thing for Employers? Retrieved November 5, 2013, from http://www.en­
terprise-gamification.com/index.php?option=com_content&view=arti­
cle&id=167:the-phenomena-of-gamification-the-next-big-thing-for­
employers&catid=4:blog&Itemid=251&lang=de
Csíkszentmihályi, M. (1975). Beyond boredom and anxiety. San Francisco, USA:
Jossey-Bass Publishers.
Csíkszentmihályi, M., Abuhamdeh, S., & Nakamura, J. (2005). Flow. In Handbook of
Competence and Motivation (pp. 598–608). New York, NY, USA: Guilford
Press.
Csíkszentmihályi, M., & Nakamura, J. (2002). The concept of flow. In The Handbook
of Positive Psychology (pp. 89–92). Oxford: Oxford University Press.
Csíkszentmihályi, M., & Rathunde, K. (1992). The measurement of flow in everyday
life: toward a theory of emergent motivation. In Nebraska symposium on
motivation (Vol. 40, pp. 57–97). Lincoln, NE, USA: University of Nebraska
Press. Retrieved from http://psycnet.apa.org/psycinfo/1993-98639-002
DeLeire, T. (2000). The Wage and Employment Effects of the Americans with Disa­
bilities
Act,
35,
No.
4,
693–715.
Retrieved
from
http://www.jstor.org/pss/146368
References
183
Deterding, S., Sicart, M., Nacke, L., O’Hara, K., & Dixon, D. (2011). Gamification.
using game-design elements in non-gaming contexts. In Proceedings of the
2011 annual conference extended abstracts on Human factors in computing
systems (Vol. 2, pp. 2425–2428). New York, NY, USA: ACM.
doi:10.1145/1979482.1979575
Deuse, J. (2010). MTM – die Prozesssprache für ein modernes Industrial Engineering.
In B. Britzke (Ed.), MTM in einer globalisierten Wirtschaft: Arbeitspro­
zesse systematisch gestalten und optimieren (pp. 65–79). München: Finanz­
buch Verlag.
Dieterich, M. (1983). Geschicklichkeitserprobung mit MTM: e. Beitr. zur Diagnostik
u. Förderung behinderter Mitarb. in Werkstätten für Behinderte u. Indust­
riebetrieben. Esslingen: Verlag Rehabilitations-Technikum.
DIN EN 13199-1:2000-10. (2000). Verpackung - Kleinladungsträgersysteme - Teil 1:
Allgemeine Anforderungen und Prüfverfahren [Packaging - Small Load
Carrier Systems - Part 1: Common requirements and test methods]. Beuth.
Retrieved from http://www.beuth.de/cmd%3Bjsessionid=RN08MOT­
FMB2PBOGEEGVJMFOO.4?workflowname=infoInstantdownload&doc­
name=9022824&contextid=beuth&servicerefname=beuth&ixos=toc
Dömők, T., Szűcs, V., László, E., & Sík Lányi, C. (2012). “Break the bricks” serious
game for stroke patients. In Proceedings of the 13th international confer­
ence on Computers Helping People with Special Needs - Volume Part I (pp.
673–680). Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-642­
31522-0_101
Dul, J., Bruder, R., Buckle, P., Carayon, P., Falzon, P., Marras, W. S., … van der
Doelen, B. (2012). A strategy for human factors/ergonomics: developing
the discipline and profession. Ergonomics, 55(4), 377–395.
doi:10.1080/00140139.2012.661087
Feiner, S., Macintyre, B., & Seligmann, D. (1993). Knowledge-based Augmented Re­
ality. Commun. ACM, 36(7), 53–62. doi:10.1145/159544.159587
Finley, K. (2012, November 14). How “Gamification” Can Make Your Customer Ser­
vice Worse | Wired Enterprise | Wired.com. Wired Enterprise. Retrieved
November
5,
2013,
from
http://www.wired.com/wiredenter­
prise/2012/11/gamification-customer-service/
Ford, M. (2013). Could artificial intelligence create an unemployment crisis? Commun. ACM, 56(7), 37–39. doi:10.1145/2483852.2483865
Funk, M., Korn, O., & Schmidt, A. (2014a). An Augmented Workplace for Enabling
User-Defined Tangibles. In CHI ’14 Proceedings of the ACM SIGCHI Con­
ference on Human Factors in Computing Systems. New York, NY, USA:
ACM.
Funk, M., Korn, O., & Schmidt, A. (2014b). Assisitive Augmentation at the Manual
Assembly Workplace using In-Situ Projection. In CHI ’14 Proceedings of
the ACM SIGCHI Conference on Human Factors in Computing Systems.
New York, NY, USA: ACM.
184
References
Gee, J. P. (2007). What video games have to teach us about learning and literacy
(Rev. and updated ed.). New York, NY, USA: Palgrave Macmillan.
Geller, T. (2014). How Do You Feel?: Your Computer Knows. Commun. ACM, 57(1),
24–26. doi:10.1145/2555809
Gerling, K. M., Schulte, F. P., & Masuch, M. (2011). Designing and evaluating digital
games for frail elderly persons. In Proceedings of the 8th International Con­
ference on Advances in Computer Entertainment Technology (pp. 62:1–
62:8). New York, NY, USA: ACM. doi:10.1145/2071423.2071501
Hardy, J., & Alexander, J. (2012). Toolkit support for interactive projected displays.
In Proceedings of the 11th International Conference on Mobile and Ubiq­
uitous Multimedia (pp. 42:1–42:10). New York, NY, USA: ACM.
doi:10.1145/2406367.2406419
Hassenzahl, M., Burmester, M., & Koller, F. (2008). Der User Experience (UX) auf
der Spur. Zum Einsatz von www.attrakdiff.de [Tracing User Experience
(UX). On using www.attrakdiff.de]. In H. Brau, S. Diefenbach, M. Hassen­
zahl, F. Koller, M. Peissner, & K. Röse (Eds.), Usability Professionals 2008
(pp. 78–82). Stuttgart: Fraunhofer IRB Verlag. Retrieved from http://attrak­
diff.de/files/up08_ux_auf_der_spur.pdf
Heinz, E., Kunze, K., Gruber, M., Bannach, D., & Lukowicz, P. (2006). Using Wear­
able Sensors for Real-Time Recognition Tasks in Games of Martial Arts An
Initial Experiment. In IEEE Symposium on Computational Intelligence and
Games (pp. 98–102). IEEE. doi:10.1109/CIG.2006.311687
Huber, J., Steimle, J., Liao, C., Liu, Q., & Mühlhäuser, M. (2012). LightBeam: inter­
acting with augmented real-world objects in pico projections. In Proceed­
ings of the 11th International Conference on Mobile and Ubiquitous
Multimedia (pp. 16:1–16:10). New York, NY, USA: ACM.
doi:10.1145/2406367.2406388
Ikeda, N., Murray, C. J. L., & Salomon, J. A. (2009). Tracking population health based
on self-reported impairments: Trends in the prevalence of hearing loss in
US adults, 1976-2006. American Journal of Epidemiology, 170(1), 80–87.
doi:10.1093/aje/kwp097
ISO/TC 159/SC 4. (2006). Ergonomics of human-system interaction. International Or­
ganization for Standardization.
ISO/TC 159/SC 4. (2009). Ergonomics of human-system interaction - Part 920: Guid­
ance on tactile and haptic interactions. International Organization for
Standardization.
Kahneman, D., Krueger, A. B., Schkade, D., Schwarz, N., & Stone, A. (2004). Toward
national well-being accounts. The American Economic Review, 94(2), 429–
434. Retrieved from http://www.jstor.org/stable/10.2307/3592923
Kane, S. K., Avrahami, D., Wobbrock, J. O., Harrison, B., Rea, A. D., Philipose, M.,
& LaMarca, A. (2009). Bonfire: a nomadic system for hybrid laptop-tab­
letop interaction. In Proceedings of the 22nd annual ACM symposium on
User interface software and technology (pp. 129–138). New York, NY,
USA: ACM. doi:10.1145/1622176.1622202
References
185
Kato, P. M., Cole, S. W., Bradlyn, A. S., & Pollock, B. H. (2008). A Video Game
Improves Behavioral Outcomes in Adolescents and Young Adults With
Cancer: A Randomized Trial. Pediatrics, 122(2), e305–e317.
doi:10.1542/peds.2007-3134
Kern, D., & Schmidt, A. (2009). Design Space for Driver-based Automotive User
Interfaces. In Proceedings of the 1st International Conference on Automo­
tive User Interfaces and Interactive Vehicular Applications (pp. 3–10). New
York, NY, USA: ACM. doi:10.1145/1620509.1620511
Kieras, D. (1993). Using the keystroke-level model to estimate execution times. Uni­
versity of Michigan. Retrieved from ftp://ai.eecs.umich.edu/peo­
ple/kieras/GOMS/KLM.pdf
Kluge, S. (2011, November 21). Methodik zur fähigkeitsbasierten Planung modularer
Montagesysteme [Methodology for capability-based planning of modular
assembly systems]. University of Stuttgart. Retrieved from http://elib.uni­
stuttgart.de/opus/volltexte/2011/6834/
Knabe, A., Rätzel, S., Schöb, R., & Weimann, J. (2010). Dissatisfied with Life but
Having a Good Day: Time-use and Well-being of the Unemployed*. The
Economic
Journal,
120(547),
867–889.
doi:10.1111/j.1468­
0297.2009.02347.x
Knoop, S., Vacek, S., & Dillmann, R. (2006). Sensor fusion for 3D human body track­
ing with an articulated 3D body model. In ICRA ’06 Proceedings of the
2006 IEEE International Conference on Robotics and Automation (pp. 1686
–1691). doi:10.1109/ROBOT.2006.1641949
Korn, O. (2012). Industrial playgrounds: how gamification helps to enrich work for
elderly or impaired persons in production. In EICS ’12 Proceedings of the
4th ACM SIGCHI symposium on Engineering interactive computing sys­
tems
(pp.
313–316).
New
York,
NY,
USA:
ACM.
doi:10.1145/2305484.2305539
Korn, O., Brach, M., Schmidt, A., Hörz, T., & Konrad, R. (2012). Context-Sensitive
User-Centered Scalability: An Introduction Focusing on Exergames and
Assistive Systems in Work Contexts. In S. Göbel, W. Müller, B. Urban, &
J. Wiemeyer (Eds.), E-Learning and Games for Training, Education,
Health and Sports (Vol. 7516, pp. 164–176). Berlin, Heidelberg: Springer
Berlin Heidelberg. Retrieved from http://www.springerlink.com/in­
dex/10.1007/978-3-642-33466-5_19
Korn, O., Funk, M., Abele, S., Hörz, T., & Schmidt, A. (2014). Context-aware Assis­
tive Systems at the Workplace. Analyzing the Effects of Projection and
Gamification. In PETRA ’14 Proceedings of the 7th International Confer­
ence on PErvasive Technologies Related to Assistive Environments.
Korn, O., Schmidt, A., & Hörz, T. (2013). The Potentials of In-Situ-Projection for
Augmented Workplaces in Production. A Study with Impaired Persons. In
CHI ’13 Proceedings of the ACM SIGCHI Conference on Human Factors
in Computing Systems (pp. 979–984). New York, NY, USA: ACM.
doi:10.1145/2468356.2468531
186
References
Kregel, J., & Dean, D. H. (2002). Sheltered vs. Supported Employment: A Direct
Comparison of Long-Term Earnings Outcomes for Individuals with Cogni­
tive Disabilities. In J. Kregel, D. H. Dean, & P. Wehman (Eds.), Achieve­
ments and Challenges in Employment Services for People with Disabilities:
The Longitudinal Impact of Workplace Supports Monograph. Retrieved
from http://www.szentlazar.hu/media/pdf/2010/04/sheltered-vs-supported­
article.pdf
Kronberg, A. (2013). Zwischen Pädagogik und Produktion. Qualitätsmanagement­
systeme in Werkstätten für behinderte Menschen [Between and Production.
Quality Management in Sheltered Work Organizations]. Lützelsdorf, Ger­
many: Rossol. Retrieved from http://www.verlag-rossol.de/titel/kronberg­
qm-in-wfbm/
Kühn, W. (2006). Digital factory: simulation enhancing the product and production
engineering process. In Proceedings of the 38th conference on Winter sim­
ulation (pp. 1899–1906). Retrieved from http://dl.acm.org/cita­
tion.cfm?id=1218458
Langheinrich, M., Schmidt, A., Davies, N., & José, R. (2013). A Practical Framework
for Ethics: The PD-net Approach to Supporting Ethics Compliance in Pub­
lic Display Studies. In Proceedings of the 2Nd ACM International Sympo­
sium on Pervasive Displays (pp. 139–143). New York, NY, USA: ACM.
doi:10.1145/2491568.2491598
Lazar, J., & Hochheiser, H. (2013). Legal Aspects of Interface Accessibility in the
U.S. Commun. ACM, 56(12), 74–80. doi:10.1145/2500498
Lee, E. A. (2008). Cyber physical systems: Design challenges. In 11th IEEE Interna­
tional Symposium on Object Oriented Real-Time Distributed Computing
(ISORC)
(pp.
363–369).
Retrieved
from
http://ieeex­
plore.ieee.org/xpls/abs_all.jsp?arnumber=4519604
Letessier, J., & Bérard, F. (2004). Visual tracking of bare fingers for interactive sur­
faces. In Proceedings of the 17th annual ACM symposium on User interface
software and technology (pp. 119–122). New York, NY, USA: ACM.
doi:10.1145/1029632.1029652
Lewis, J. R., & Sauro, J. (2009). The Factor Structure of the System Usability Scale.
In Proceedings of the 1st International Conference on Human Centered De­
sign: Held as Part of HCI International 2009 (pp. 94–103). Berlin, Heidel­
berg: Springer-Verlag. doi:10.1007/978-3-642-02806-9_12
Lopez, A. D., Mathers, C. D., Ezzati, M., Jamison, D. T., & Murray, C. J. L. (2006).
Measuring the Global Burden of Disease and Risk Factors, 1990–2001. In
A. D. Lopez, C. D. Mathers, M. Ezzati, D. T. Jamison, & C. J. Murray
(Eds.), Global Burden of Disease and Risk Factors. Washington (DC),
USA:
World
Bank.
Retrieved
from
http://www.ncbi.nlm.nih.gov/books/NBK11817/
Luger, G. F. (2009). Artificial intelligence: structures and strategies for complex
problem solving (6th ed.). Boston: Pearson Addison-Wesley.
References
187
Makedon, F., Le, Z., Huang, H., Becker, E., & Kosmopoulos, D. (2009). An Event
Driven Framework for Assistive CPS Environments. SIGBED Rev., 6(2),
3:1–3:9. doi:10.1145/1859823.1859826
Manzeschke, A., Weber, K., Rother, E., & Fangerau, H. (2013). Ethische Fragen im
Bereich altersgerechter Assistenzsysteme: Ergebnisse der Studie [Ethical
questions in the area of assistive systems for the elderly - results of a study].
Berlin: VDI/VDE Innovation + Technik.
Maynard, H. B., Stegemerten, G. J., & Schwab, J. L. (1948). Methods-time measure­
ment. New York, NY, USA: McGraw-Hill.
McGonigal, J. (2011). Reality is broken: Why games make us better and how they can
change the world. Penguin books.
Mink, J. A. (1975). MTM and the disabled. The MTM Journal of Methods Time Meas­
urement, 2(2), 23–28.
Mitcham, C. (1994). Thinking through technology: the path between engineering and
philosophy. Chicago: University of Chicago Press.
Morelli, T., Foley, J., & Folmer, E. (2010). Vi-bowling: a tactile spatial exergame for
individuals with visual impairments. In Proceedings of the 12th interna­
tional ACM SIGACCESS conference on Computers and accessibility (pp.
179–186). New York, NY, USA: ACM. doi:10.1145/1878803.1878836
Murphy, S. T., & Rogan, P. M. (1995). Closing the shop: conversion from sheltered
to integrated work. Paul H. Brookes Publishing Company.
Nevins, A. (1954). Ford: The Times, the Man, the Company. New York, NY, USA:
Scribner.
Nielsen, J. (1993). Usability Engineering. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Nilsson, S., & Johansson, B. (2008). Acceptance of augmented reality instructions in
a real work setting. In CHI ’08 Extended Abstracts on Human Factors in
Computing Systems (pp. 2025–2032). New York, NY, USA: ACM.
doi:10.1145/1358628.1358633
Nunes, F., Silva, P. A., & Abrantes, F. (2010). Human-computer interaction and the older adult. An Example Using User Research and Personas. In PETRA ’10 Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments (p. 1). ACM Press. doi:10.1145/1839294.1839353
Pagulayan, R. J., Keeker, K., Wixon, D., Romero, R. L., & Fuller, T. (2012). User-Centered Design in Games. In J. A. Jacko & A. Sears (Eds.), The human-computer interaction handbook: fundamentals, evolving technologies, and emerging applications (pp. 795–821). Boca Raton, FL, USA: CRC Press. Retrieved from http://dl.acm.org/citation.cfm?id=772072.772128
Pinhanez, C. S. (2001). The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces. In Proceedings of the 3rd international conference on Ubiquitous Computing (pp. 315–331). London, UK: Springer-Verlag. Retrieved from http://dl.acm.org/citation.cfm?id=647987.741324
Polanyi, K. (1957). Aristotle Discovers the Economy. In K. Polanyi, C. M. Arensberg, & H. W. Pearson (Eds.), Trade and market in the early empires: economies in history and theory. New York, NY, USA: Free Press.
Prensky, M. (2007). Digital game-based learning. St. Paul, Minn., USA: Paragon
House.
Putnam, C., & Cheng, J. (2013). Helping therapists make evidence-based decisions
about commercial motion gaming. SIGACCESS Access. Comput., (107), 3–
10. doi:10.1145/2535803.2535804
Quick, J. H., Duncan, J. H., & Malcom, J. A. (1962). Work-Factor Time Standards. Measurement of manual and mental work. New York, NY, USA: McGraw-Hill Book Co.
Reeves, B., & Read, J. L. (2009). Total Engagement: Using Games and Virtual
Worlds to Change the Way People Work and Businesses Compete. Harvard
Business Press.
Robine, J.-M., & Michel, J.-P. (2004). Looking forward to a general theory on population aging. The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 59(6), M590–597.
Rogers, Y., & Marsden, G. (2013). Does he take sugar? Moving beyond the rhetoric
of compassion. Interactions, 20(4), 48–57. Retrieved from
http://dl.acm.org/citation.cfm?id=2486238
Runeson, P., & Höst, M. (2008). Guidelines for conducting and reporting case study
research in software engineering. Empirical Software Engineering, 14(2),
131–164. doi:10.1007/s10664-008-9102-8
Rüther, S., Hermann, T., Mracek, M., Kopp, S., & Steil, J. (2013). An assistance system for guiding workers in central sterilization supply departments. In Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 3:1–3:8). New York, NY, USA: ACM. doi:10.1145/2504335.2504338
Salthouse, T. A. (1990). Working memory as a processing resource in cognitive aging. Developmental Review, 10(1), 101–124. doi:10.1016/0273-2297(90)90006-P
Sardis, E., Voulodimos, A., Anagnostopoulos, V., Lalos, C., Doulamis, A., & Kosmopoulos, D. (2010). An industrial video surveillance system for quality assurance of a manufactory assembly. In PETRA ’10 Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments (p. 66).
Satre, D. D., Knight, B. G., & David, S. (2006). Cognitive-behavioral interventions with older adults: Integrating clinical and gerontological research. Professional Psychology: Research and Practice, 37(5), 489–498. doi:10.1037/0735-7028.37.5.489
Schlaffer, H. (2005). Poesie und Wissen: die Entstehung des ästhetischen Bewusstseins und der philologischen Erkenntnis [Poetry and Knowledge: The Emergence of Aesthetic Consciousness and Philological Insight]. Frankfurt am Main, Germany: Suhrkamp.
Schmidt, A. (2000). Implicit human computer interaction through context. Personal Technologies, 4(2-3), 191–199. Retrieved from http://link.springer.com/article/10.1007/BF01324126
Schmidt, A., Kranz, M., & Holleis, P. (2005). Interacting with the Ubiquitous Computer: Towards Embedding Interaction. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-aware Services: Usages and Technologies (pp. 147–152). New York, NY, USA: ACM. doi:10.1145/1107548.1107588
SGB IX. Sozialgesetzbuch (SGB) Neuntes Buch (IX) – Rehabilitation und Teilhabe behinderter Menschen – (SGB IX) [Social Code, Book Nine – Rehabilitation and Participation of Persons with Disabilities] (2001).
Shneiderman, B. (2010). Designing the user interface: strategies for effective human-computer interaction (5th ed.). Boston: Addison-Wesley.
Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., … Blake, A. (2011). Real-time human pose recognition in parts from single depth images. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (Vol. 2, p. 7).
Siddiqui, M., & Medioni, G. (2010). Human pose estimation from a single view point, real-time range sensor. In CVPRW ’10 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 2010 (pp. 1–8). doi:10.1109/CVPRW.2010.5543618
Suits, B. (2005). The grasshopper: games, life and utopia. Peterborough, Ont., Canada: Broadview Press.
Sykes, J., & Brown, S. (2003). Affective gaming: measuring emotion through the gamepad. In CHI ’03 extended abstracts on Human factors in computing systems (pp. 732–733). Retrieved from http://dl.acm.org/citation.cfm?id=765957
Ten, S. (2010). How Kinect depth sensor works - stereo triangulation? Mirror Image. Retrieved November 28, 2013, from http://mirror2image.wordpress.com/2010/11/30/how-kinect-works-stereo-triangulation/
United Nations. Convention on the Rights of Persons with Disabilities (2008). Retrieved from http://hpod.pmhclients.com/pdf/ConventionImplications.pdf
United Nations, Department of Economic and Social Affairs, Population Division.
(2013). World Population Prospects: The 2012 Revision (No. Volume I:
Comprehensive Tables).
Vanderheiden, G. C. (2008). Ubiquitous Accessibility, Common Technology Core, and Micro Assistive Technology: Commentary on “Computers and People with Disabilities”. ACM Trans. Access. Comput., 1(2), 10:1–10:7. doi:10.1145/1408760.1408764
VDI Verein Deutscher Ingenieure. (1990). VDI 2860 Montage- und Handhabungstechnik [Technology for Assembly and Handling]. Beuth. Retrieved from http://www.vdi.de/uploads/tx_vdirili/pdf/2372581.pdf
VDI Verein Deutscher Ingenieure. (2008). VDI 4499 Grundlagen der Digitalen Fabrik [Digital Factory Fundamentals]. Beuth. Retrieved from http://www.vdi.de/uploads/tx_vdirili/pdf/9856297.pdf
Von Ahn, L., & Dabbish, L. (2008). Designing Games with a Purpose. Commun.
ACM, 51(8), 58–67. doi:10.1145/1378704.1378719
Weaver, K. A., Baumann, H., Starner, T., Iben, H., & Lawo, M. (2010). An empirical
task analysis of warehouse order picking using head-mounted displays. In
Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (pp. 1695–1704). New York, NY, USA: ACM.
doi:10.1145/1753326.1753580
Weber, M. (2009). The Protestant ethic and the spirit of capitalism: the Talcott Parsons translation interpretations (1st ed.). New York: W.W. Norton & Co.
Weiser, M. (1999). The computer for the 21st century. SIGMOBILE Mob. Comput.
Commun. Rev., 3(3), 3–11. doi:10.1145/329124.329126
Wellner, P. (1991). The DigitalDesk calculator: tangible manipulation on a desk top display. In Proceedings of the 4th annual ACM symposium on User interface software and technology (pp. 27–33). New York, NY, USA: ACM. doi:10.1145/120782.120785
Westkämper, E. (2006). Digitale Produktion [Digital Production]. In H.-J. Bullinger (Ed.), Technologieführer: Grundlagen - Anwendungen - Trends [Technology Guide: Principles - Applications - Trends] (pp. 435–439). Heidelberg, Germany: Springer.
Wilson, A. D. (2010). Using a Depth Camera As a Touch Sensor. In ACM International Conference on Interactive Tabletops and Surfaces (pp. 69–72). New York, NY, USA: ACM. doi:10.1145/1936652.1936665
World Health Organization. (2001). The International Classification of Functioning, Disability and Health (ICF). Retrieved October 29, 2013, from http://www.who.int/classifications/icf/en/
World Health Organization, & World Bank. (2011). World report on disability. Geneva; Washington, DC: World Health Organization; World Bank.
World Health Organization. (2004). World Health Survey. Retrieved from
http://www.who.int/healthinfo/survey/en/
Zäh, M. F., Wiesbeck, M., Engstler, F., Friesdorf, F., Schubö, A., Stork, S., … Wallhoff, F. (2007). Kognitive Assistenzsysteme in der manuellen Montage – Adaptive Montageführung mittels zustandsbasierter, umgebungsabhängiger Anweisungsgenerierung [Cognitive assistance in manual assembly – Adaptive assembly guidance through a context-sensitive generation of instructions]. wt-online, (9), 644–650.
Zhou, J., Lee, I., Thomas, B., Menassa, R., Farrant, A., & Sansome, A. (2011). Applying spatial augmented reality to facilitate in-situ support for automotive spot welding inspection. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry (pp. 195–200). New York, NY, USA: ACM. doi:10.1145/2087756.2087784
11 INDEX
AAL 16
ACM 36
ActionScript 107
Adjoined spheres approach 90
Affective computing 164
AIR 107
Ambient assisted living 16
Applied games 53
Artificial intelligence 2, 38, 53, 161
ASED 93, 96, 100, 107
ASLM 8
ASSETS 36
Augmented reality 37, 51, 60
Bacon, Francis 24
Burnout 25
BWH 110
CAD 44, 57
CNC 44, 69
Companions 39
Cyber-physical systems 37, 46, 161
Demographic change 4, 15, 161
Digital factory 44
Digital natives 53
Disabled persons 16, 26
Elderly persons 14, 20, 35, 53, 71, 161
ERP 45
Ethical standards 4, 111, 162
EU 16
Exergames 55
Facial Action Coding System (FACS) 86
Flow 30, 104
Flow curve 82
Games engineering 79, 103
Games with a purpose 53
Gamification 32, 53, 82, 85, 103, 147
HCI 33, 36, 67
HMI 44, 57, 69
ICF 1, 16
IEEE 36, 51
Impaired persons 16, 21, 26, 28, 35, 53, 71, 103, 148, 161
Implicit interaction 33, 69
Inclusion 22, 26, 161
ISO 9241 58, 67
Java 93, 102
Kinect 34, 48, 89, 102
Leap Motion 163
Lee, Edward 46
Luther, Martin 24
McGonigal, Jane 103
MEESTAR 14, 65
Minsky, Marvin 27, 38
Mixed reality 37
MTM 33, 40, 47
Natural interaction 34, 48, 51, 70
Nielsen, Jakob 68
NITE 49, 167
Old-age support ratio 16, 21
OpenCV 93
OpenNI 49, 167
PETRA 36
PHP 120
Playful design 53
PLC 58, 60
PLM 45
Poka-Yoke 57
Polanyi, Karl 25
PPS 44
Prensky, Marc 53
PrimeSense 49
Project Natal 34
Quantified self 162
REFA 40, 41
RFID 46
Serious games 53
Sheltered work 3, 21
Shneiderman, Ben 67
Smart watch 85
Socrates 23
Structured light 49, 50, 89
Supported work 21
Tetris 104
Time-of-flight 49, 89, 163
ToF 49, 89, 163
Total quality 57, 70
Turing, Alan 38
Ubiquitous computing 33, 36
UN Convention on the rights of persons with disabilities 26, 162
Universal access 36
Universal accessibility 71
Universal design 36, 71
UX 12, 27, 39, 63
Virtual factory 36, 45
Virtual reality 37
Weber, Max 24
Weiser, Mark 33, 51
Westkämper, Engelbert 45
WHO 16, 20
Wii 34
Wizards 1, 39
Work-life balance 25
Xbox 55, 163
XBox 34
Xtion 48
Z-values 89, 155
While context-aware assistive systems (CAAS) have
become ubiquitous in cars or smartphones, most workers
in production environments still rely on their skills and
expertise to make the right choices and movements.
The quality gates currently established in the industry
successfully remove failed work results; however, they
usually operate at a spatial and temporal distance from
the workplace and the worker. Thus workers lack the
opportunity to learn from problems on the fly and to
improve their work habits.
Today’s motion recognition systems and micro projectors make it possible to display work-relevant information directly in a worker’s field of vision. Thus the technical prerequisites for CAAS with instant feedback at the workplace (in situ) have essentially been established. Although every worker
would benefit from CAAS, persons with impairments
and elderly persons with reduced physical and mental
capabilities require such systems the most. CAAS have
the potential to empower these persons to do more
complex work or to remain in regular jobs longer. Thus
they combine economic benefits with inclusion and
address the demographic change.
After an overview of the relevant backgrounds from
ethics, psychology, computer science and engineering
as well as the relevant state-of-the-art, we establish
requirements which result in a model for ideal CAAS.
As the framework aims to improve not only the work
results but also the workers’ motivation, the model
incorporates elements from game design.
An exemplary implementation covering essential
aspects of the model is described. The effects of both
the augmentation by projection and by gamification are
evaluated in a study with impaired persons. An additional
focus lies on the ethical implications of assistive systems
which supervise and model their users in real time.
About the author:
Oliver Korn is a computer scientist and CEO of the software company Korion. This is his PhD thesis on interactive assistive systems and gamification. It was written at the Institute for Visualization and Interactive Systems (VIS) of the University of Stuttgart.
He studied Computational Linguistics as well as English and German Language at the universities of Stuttgart and Glasgow. Since 2001 he has worked in projects focusing on human computer interaction, assistive systems, gaming and simulations. In 2003 he co-founded the Fraunhofer spin-off Korion.
He worked as an associate lecturer for the Karlsruhe Institute of Technology (KIT) and lectures at the Steinbeis Academy, the Stuttgart Media University (HdM) and the University of Stuttgart. He is also a certified Project Manager (IHK) and a Professional Member of the Association for Computing Machinery (ACM).
EUR 59,95
ISBN 978-1-291-86486-1