PlayMancer– FP7 215839
A European Serious Gaming 3D Environment
Deliverable (revised)
D2.1b: State of the Art on Serious
games, Games for Health, and
Multimodal Game Technologies
Part B of User requirements, game scenarios, system specification and architecture (1)
Date of delivery: July 31st, 2008
Revised version: February 20th, 2009
Authors: NU, TUW, UOP, UNIGE, SYSTEMA, IDIBELL
Date: 2009-02-18
Version: v4.3
(1) Due to its length, Deliverable 2.1 User requirements, game scenarios, system specification and architecture has been divided into 4 sections: D2.1a: User Requirements, D2.1b: State of the Art, D2.1c: Game Scenarios, D2.1d: Specifications and Architecture.
Document Control
Title: State of the Art on Serious games, games for health, and Multimodal Game Technologies
Project: Playmancer (FP7 215839)
Type: Deliverable (revised)
Authors:
NU: Tony Lam, Thierry Raguin
TUW: Christian Schönauer, Hannes Kaufmann, Thomas Pintaric
UOP: Theodoros Kostoulas, Alexandros Lazaridis, Nikos Katsaounos, Iosif Mporas, Otilia Kocsis, Todor Ganchev
UNIGE: Dimitri Konstantas, Hikari Watanabe
SYSTEMA: Elias Kalapanidas
IDIBELL: Fernando Fernández-Aranda, Susana Jiménez-Murcia, Juan Jose Santamaría
Status*: Restricted
Origin: NetUnion
Doc ID: <D2.1b_StateOfTheArt-Revision2-v4.3.doc>
* Public, Restricted, Confidential
Amendment History
Version | Date | Author | Description/Comments
v1.0 | 2008-03-06 | NU | Initial version, base structure and partner assignments
v2.0 | 2008-05-15 | All | 2nd draft
v3.0 | 2008-07-15 | NU | Almost final version
v3.1 | 2008-07-31 | All | Final version
v3.2 | 2008-09-25 | UNIGE | Slight updates in introduction and section 3.1 after internal peer review
v4.0 | 2008-12-29 | NU | Initiating revisions based on 1st year review
v4.1 | 2009-02-06 | NU, IDIBELL | First draft revision based on 1st year review
v4.2 | 2009-02-16 | NU, IDIBELL, UOP, TUW, UNIGE | Additional revisions and restructuring
v4.3 | 2009-02-18 | NU, ST | Final revisions
The information contained in this report is subject to change without notice and should not be construed as a commitment by
any members of the PLAYMANCER Consortium. The PLAYMANCER Consortium assumes no responsibility for the use or
inability to use any software or algorithms, which might be described in this report. The information is provided without any
warranty of any kind and the PLAYMANCER Consortium expressly disclaims all implied warranties, including but not limited to
the implied warranties of merchantability and fitness for a particular use.
Table of contents
1 INTRODUCTION
2 EXECUTIVE SUMMARY
3 STATE OF THE ART: GAMES FOR HEALTH RESEARCH
3.1 INTRODUCTION
3.2 BEHAVIOUR AND MENTAL DISORDERS
3.3 REHABILITATION
3.4 LIMITATIONS OF THE LITERATURE AND CLINICAL IMPACT
3.4.1 Behaviour and Mental Disorders
3.4.2 Rehabilitation
4 REVIEW OF GAMES FOR HEALTH AND SERIOUS GAMES
4.1 RELEVANT TRENDS IN GAMES FOR HEALTH
4.2 STORYTELLING / ADVENTURE GAMES FOR PROBLEM SOLVING
4.2.1 Personal Investigator
4.2.2 Earthquake in Zipland
4.2.3 Treasure Hunt
4.2.4 Conclusion
4.3 NARRATIVE THERAPY / STORY-BUILDING GAMES
4.3.1 GIRLS: Girls Involved in Real Life Sharing
4.4 RE-MISSION: PSYCHOEDUCATIONAL GAME ABOUT CANCER
4.5 SELF-ESTEEM GAMES
4.5.1 EyeSpy: The Matrix
4.5.2 Wham! Self-Esteem Conditioning
4.5.3 Grow your Chi!
4.5.4 Further research: EyeZoom, Click’n Smile, Word Search
4.5.5 Conclusion
4.6 AFFECTIVE AND BIOFEEDBACK GAMES
4.6.1 The Journey to Wild Divine: The Passage
4.6.2 Relax-to-Win
4.6.3 Conclusion
4.7 GAMES PROMOTING PHYSICAL ACTIVITIES
4.7.1 NEAT-o-Games
4.7.2 Nike+ Sports Kit
4.7.3 MyHeart: Sneaks!
4.7.4 Conclusion
4.8 GAMES FOR REHABILITATION
4.8.1 Video capture based games
4.8.2 Games using customized input devices
4.8.3 Virtual reality games
4.8.4 Conclusion
4.9 UNIVERSALLY ACCESSIBLE GAMES
4.10 CASUAL SOCIAL GAMES
4.11 GUESS AND MATCH GAMES
4.11.1 The ESP Game
4.11.2 Peek-a-Boom
4.11.3 Conclusion
4.12 CONCLUSION GAMES STATE OF THE ART
5 STATE OF THE ART: MULTIMODAL GAME TECHNOLOGIES
5.1 BIOFEEDBACK AND BIOFEEDBACK DEVICES
5.1.1 Introduction
5.1.2 Common biosignals
5.1.3 Wearable sensors
5.1.4 Linking the sensors
5.1.5 Heart Rate / GSR and Games
5.1.6 Conclusion
5.2 MULTIMODAL I/O
5.2.1 Introduction
5.2.2 Input/Output Devices
5.2.3 Multimodal I/O issues
5.2.4 State of the art
5.2.5 Conclusion
5.3 SPEECH / DIALOGUE
5.3.1 Dialogue Systems
5.3.2 Speech Recognition and Understanding
5.3.3 Speech Synthesis and Natural Language Generation
5.3.4 Dialog Management
5.3.5 Speech Interfaces and Mobile Technologies
5.3.6 Speech Interfaces and Users with Special Needs
5.3.7 Speaker Emotion Recognition
5.3.8 Speech Interfaces and Dialogue Management in Games
5.3.9 Conclusions
5.4 GAME ENGINES / TOOLS OVERVIEW AND COMPARISON
5.4.1 Game Engine Basics
5.4.2 The game engine roadmap: Past, current and future trends
5.4.3 Serious features of Game Engines
5.4.4 Research projects making use of Game engines
5.4.5 Tools for game development
5.4.6 Overview and comparison of available game engines and tools
5.4.7 Conclusion
6 APPENDIX 1: LIST OF AVAILABLE GAME ENGINES
7 APPENDIX 2: RESEARCH ON GAMES FOR HEALTH
8 REFERENCES
9 LIST OF ABBREVIATIONS
PLAYMANCER
FP7 215839
D2.1b: State of the Art
5/130
Table of Figures
Fig. 1 Personal Investigator
Fig. 2 Games using PlayWrite (November 2007)
Fig. 3 Earthquake in Zipland
Fig. 4 GIRLS – Pictorial Narrative
Fig. 5 EyeSpy: The Matrix
Fig. 6 Wham! Self-Esteem Conditioning
Fig. 7 Grow your Chi!
Fig. 8 The Journey to Wild Divine: The Passage
Fig. 9 Relax-to-Win
Fig. 10 Relax-to-Win Sensor
Fig. 11 NEAT-o-Games components
Fig. 12 Architecture of the NEAT-o-Race game
Fig. 13 Nike+iPod Sports Kit
Fig. 14 Nike+ Application
Fig. 15 IREX platform showing the “Birds & Balls” game
Fig. 16 Biodex balance system with a maze game
Fig. 17 “Whack a mouse” game
Fig. 18 CyberGrasp and its virtual representation
Fig. 19 The ESP Game
Fig. 20 Peek-a-Boom
Fig. 21 g.MOBIlab EEG module (g.tec medical engineering GmbH)
Fig. 22 Single channel GSR monitor GSR2 (Thought Technology Ltd.)
Fig. 23 a) g.MOBIlab ECG/EMG module (g.tec medical engineering GmbH)
Fig. 23 b) MobiHealth Mobile™ module
Fig. 24 Xpod oximeter (Nonin) and AABACO medical, Inc ear probe
Fig. 25 CXL04LP3 3-axis accelerometer module (Crossbow)
Fig. 26 LifeShirt (VivoMetrics)
Fig. 27 LifeVest (ZOLL Lifecor)
Fig. 28 Sensium (Toumaz)
Fig. 29 Example of a gaming setup using multimodal I/O
Fig. 30 Wireless pen developed at IMS
Fig. 31 Illustration of the iotracker setup (Source: www.iotracker.com)
Fig. 32 Basic principle of infrared-optical tracking (source: www.iotracker.com)
Fig. 33 CAVE (Computer Assisted Virtual Environment)
Fig. 34 RAVE (Reconfigurable Assisted Virtual Environment)
Fig. 35 Example of a workbench
Fig. 36 The Sony Glasstron, a HMD using LCD displays
Fig. 37 Middleware separating application from devices and network
Fig. 38 Visualizations of examples for data flow graphs
Fig. 39 Wii balance board
Fig. 40 Squeezable input device
Fig. 41 A block diagram of the Natural Language Generation component
Fig. 42 Detailed block diagram of the NLG component
Fig. 43 Schematic representation of speech-centric multimodal interface architecture
Fig. 44 Architectural model of the RavenClaw framework
Fig. 45 A typical 3D game engine architecture
1 Introduction
This version of D2.1b has been revised to provide a better focus on Games for Health and to follow up on the research questions and challenges identified in D2.1a User Requirements.
In brief, this document seeks to:
a. Provide a brief overview of e-health trends and of the theoretical and technological approaches relevant to the PlayMancer user requirements.
b. Review specific examples of therapeutic and affective games, with a critical analysis of the strengths, weaknesses and lessons learned of the games as well as of their supporting technology and platforms.
c. Review the state of the art in multimodal input/output devices, game engines, speech and dialogue recognition and other vital components, and examine how these could be applied to the PlayMancer scenarios and platform.
Ultimately, D2.1b seeks to provide a framework to examine and integrate innovative
approaches for advancing European research in games for health.
The “serious games” label has gained wide circulation since the Serious Games Initiative was launched in 2002. Serious games encompass diverse sub-categories such as simulations, training games, games for health, etc. [1].
Games for Health or “Health eGames” are video games that deliver measurable health benefits. The segment is growing rapidly, fuelled by the introduction of Nintendo's Wii console. In fact, the market for games for health, estimated at more than 7 billion USD over the next 12 months, is much larger than the estimated 1.7 billion USD market for serious games (source: iConecto e-games marketing report). The iConecto report cites more than 300 Health eGames developed for consumers and patients, and more than 35 identified for professionals in the health and medical industry.
Given the rapidly growing list of games and publications, this review focuses on games for therapeutic support and physical rehabilitation, the segment most relevant to the PlayMancer scenarios. This does not, however, limit our survey to games in this area alone.
2 Executive summary
Following up on D2.1a User and Stakeholder Requirements, the project conducted a clinical review of games for health and a technical review of the underlying technologies supporting the PlayMancer platform and games.
• Research on games for health is still very young, and the results obtained so far are promising. The PlayMancer video-game prototype, to be adapted for the treatment of chronic mental disorders (mainly eating disorders and behavioural addictions), introduces the player to an interactive scenario whose final goal is to improve the player's general problem-solving strategies, self-control skills and control over impulsive behaviours.
• Except for a few relaxation games using biofeedback, no games use speech emotion recognition, facial emotion recognition or multimodal emotion recognition within a therapeutic context. In addition to developing a therapeutic game using these innovative technologies, PlayMancer will provide the necessary tools for developing games for health that integrate multimodal emotion recognition, spoken dialogue interfaces, motion tracking, etc.
• While a plethora of independent tools is available, there are no integrated platforms for developing games for health. Except for the PlayWrite system developed for Personal Investigator, no platform allows the rapid development of therapeutic 3D games.
• In addition, PlayMancer will apply the Universally Accessible Games design guidelines for the first time in the area of games for health.
Results of this research have been used as input to develop the game scenarios
presented in D2.1c and to select the best components for the PlayMancer platform
as described in D2.1d.
3 State of the Art: Games for Health Research
3.1 Introduction
Given the increasing interest of many national health care systems in extending the accessibility of services and treatment programs, telemedicine has begun to be applied to many illnesses [Mitchell et al., 2000; Hicks et al., 2001]. To date, new technologies have been applied to a range of mental illnesses, including obsessive-compulsive disorders [Baer et al., 1995], schizophrenia [Zarate et al., 1997], eating disorders (EDs) [Myers et al., 2004] and anxiety disorders [Botella et al., 2004]. Furthermore, virtual reality approaches have already been applied successfully to mental disorders such as posttraumatic stress disorder [Wood et al., 2008], anxiety disorders [Difede et al., 2007] and addictive behaviours [Lee et al., 2007].
Until recently, the majority of the work on the use of video games has focused on the negative aspects they may generate, such as their addictive potential, their relationship with aggressive behaviour, and their medical and psychological consequences (Griffiths & Hunt, 1998). In recent years, however, some publications have appeared reflecting the possible benefits of some videogames (Schott 2006, Griffiths 2004).
Similarly, the book “Serious Games: Mechanisms and Effects” (Ritterfeld 2008) emphasises the desirable outcomes of playing videogames, examining how video game play can provide learning that transfers to the real world. It focuses on five goals:
1. To define the areas of serious games.
2. To elaborate and discuss the underlying theories that explain the psychological mechanisms suggested for serious game play, addressing cognitive, affective and social processes.
3. To summarize and discuss the evidence on the effectiveness of serious games.
4. To introduce research methods that respond to specific methodological challenges: measurement of exposure, multitasking, deeper learning, and transfer from the virtual into the real.
5. To discuss the advantages and disadvantages for educational purposes.
Various virtual reality technologies have also been used for rehabilitation after stroke and other brain injuries (Holden 2005, Weiss 2004), as well as for orthopaedic rehabilitation following musculoskeletal injuries (Deutsch 2001).
3.2 Behaviour and Mental Disorders
Despite the fact that the scientific literature on this theme is growing significantly, it remains largely speculative (Griffiths, 2004). Nonetheless, it is increasingly evident that videogames can be used as therapeutic tools, in physical disorders and somatic diseases as well as in mental diseases. However, there is still no study that centres on the treatment of pathological gambling through videogames, or that considers including videogames in treatment programs as an additional therapeutic tool or strategy. In spite of that, there are studies demonstrating their efficacy in work with children with emotional and behavioural problems, especially children diagnosed with ADD (Attention Deficit Disorder), on traits such as hyperactivity and impulsivity (Griffiths, 2004). Likewise, there is a generalized view that videogames can be beneficial as treatment tools, especially when they have been specifically designed for a concrete objective.
Previous literature reviews suggest that computer games can serve as an alternative form of treatment or as an additional intervention in disorders such as schizophrenia [Bellack et al., 2005], asthma [Bussey-Smith et al., 2007] and motor rehabilitation [Broeren et al., 2007]. Moreover, several naturalistic studies have shown the usefulness of serious videogames for enhancing positive attitudes [Beale et al., 2007; Rassin et al., 2004], improving problem-solving strategies [Coyle et al., 2005] and treating phobias [Walshe et al., 2003]. Video games have also been used in physiotherapy and occupational therapy with different groups of people to focus attention away from potential discomfort; they have proven to be a powerful tool for distracting patients from the pain associated with some treatments.
Even though the literature lacks controlled studies of video games as an additional therapeutic tool for mental disorders, some authors have published positive results using videogames as psychological support in various psychological and psychosomatic illnesses. Research conducted by Coyle on a game called Personal Investigator, described in section 4.2.1 (Coyle 2004, Coyle 2005), has shown that the game helped adolescents engage in therapy and helped therapists develop their therapeutic relationship with adolescents. Adolescents improved their self-esteem and problem-solving skills. Furthermore, patients benefited from the 3D environment, in which adolescents had a sense of control and empowerment. Another potential advantage of PI is that peer narratives can be shared, i.e. adolescents can record and submit multimedia narratives of their game to be used as examples for other players.
Re-Mission, a game developed by Kato and colleagues and described in section 4.4, has been shown in the group's published articles (Kato 2006, Beale 2007, Kato 2008) to be an acceptable psychotherapeutic tool for a high percentage of adolescents and young adults suffering from cancer. The information and experience gained from playing Re-Mission significantly improved patients' self-esteem and knowledge of cancer. Furthermore, patients who played Re-Mission adhered significantly more frequently to their treatment regimen than control groups.
Brezinka (2008) analyzes the trial version of the video game “Treasure Hunt”, described in section 4.2.3, which was designed for therapeutic purposes with children aged eight to twelve who are in cognitive-behavioural treatment for various disorders. The results show that the game is valuable in helping less experienced therapists structure sessions and explain important cognitive-behavioural concepts.
In chronic mental disorders, such as eating disorders and behavioural addictions, some specific traits are difficult to modify and resistant to change (e.g. specific personality traits, attitudinal and emotional aspects, and uncontrolled behaviours), even after standard, well-established, evidence-based psychological therapies. Hence, as suggested by some preliminary studies, the potential of videogames to change underlying cognitive processes will be tested within this project (see Appendix 2: Research on Games for Health).
Eating Disorders
In eating disorders, several controlled studies have shown that CBT and IPT are the two most effective approaches in the treatment of bulimia nervosa (Fairburn, 1993; Openshaw, Waller, & Sperlinger, 2004; Wilfley et al., 1993), with CBT leading to more rapid symptomatic change (Fairburn, 1997). However, 30-40% of cases show only partial recovery or unsuccessful outcomes [Fairburn & Harriot, 2003; Fairburn et al., 1993; Fernández et al., 2004]. Among the predictors associated with poor prognosis are personality traits (e.g. impulsivity and rigidity) as well as certain attitudinal processes.
Effective use of new technology for eating disorders, especially BN, has recently been described in studies using telemedicine, CD-ROM, Internet-based programs, virtual reality, Personal Digital Assistants (PDAs), e-mail support and additional mobile text messages (SMS) (Fernandez-Aranda et al., 2008; Carrard et al., 2006; Rouget, Carrard, & Archinard, 2005). Another type of psycho-educational intervention uses online guided self-help; this approach appears to be a valid treatment option for BN when compared to a waiting-list control group, especially for people with less severe eating disorder (ED) symptomatology and certain personality traits [119]. However, up to now, the literature lacks analyses of the usefulness of videogames as a successful therapy tool for eating disorders.
Despite that, specific neuropsychological techniques based on games (gambling tasks) have shown some efficacy in addressing aspects less susceptible to change, such as rigidity [Tchanturia et al., 2007a; Tchanturia et al., 2007b].
The majority of studies identify impulsivity and low self-control as triggering and maintaining factors of problematic behaviours in ED, such as episodes of overeating [Vanderlinden et al., 2004; Alvarez-Moya et al., 2007; Fernández-Aranda et al., 2007], which persist even after treatment has been completed.
Pathological Gambling
Several studies have shown that CBT is the most effective approach in the treatment of pathological gambling (Jiménez-Murcia et al., 2006). Within the impulse control disorders, besides pathological gambling, the so-called technological addictions are also included. A psychological treatment program with a cognitive-behavioural approach has been evaluated for Internet addiction; notably, this program is itself delivered through the Internet (Center for Online Addiction, www.netaddiction.com). The sample consisted of 114 participants with addictions to certain applications on the Internet (sexual chat 40%, general chat 4%, pornography 30%, gambling 10%, gaming 10%, auction houses 4%, shopping 2%). The objectives of the therapy were improved motivation, online time management, improved social relationships, improved sexual functioning, engagement in offline activities and the ability to abstain from problematic applications. The treatment lasted 12 sessions, with a follow-up after 6 months. The results indicated that the majority of patients had recovered after this type of intervention (Young, 2007).
Another study, with 188 subjects, showed the beneficial effects of Internet use in terms of self-confidence, social abilities and social support (Campbell et al., 2006).
Finally, the efficacy of new technologies, especially videogames, as therapeutic tools for certain mental disorders is a field still largely unexplored, although the few studies published so far suggest very positive results.
3.3 Rehabilitation
During the last ten years there has been a substantial number of publications on the use of virtual reality and virtual environments in rehabilitation. However, the term “game” has been carefully avoided in almost all of them, presumably to distance the work from pure entertainment setups. Nevertheless, many of these publications clearly describe game setups, as discussed in section 4.8. Serious games are evidently not yet firmly established within the scientific community engaged in rehabilitation. However, there has been great interest in using virtual environment or virtual reality (VR) applications for the treatment of various types of motor and cognitive deficits. Reviews show that the use of VR in brain damage rehabilitation is expanding dramatically and has become an integral part of the assessment toolkit [217]. Research teams aim to develop its use as a flexible, controllable, non-invasive method of directly manipulating cortical activity in order to reduce the impact of brain injury.
The majority of literature concentrates on supporting one specific field of
rehabilitation, thus providing functional tasks for occupational therapy (e.g. [157] or
PLAYMANCER
FP7 215839
D2.1b: State of the Art
13/130
[215]) or simple videogames for physiotherapy [219]. For every area there is a
multitude of training exercises performed during therapy sessions, chosen by the
therapist depending on the patient’s case. Video games have been used as
physiotherapy for arm injuries [110], in training movements or to increase hand
strength in different pathologies [111]. Therapeutic benefits have also been reported
in pain management, for wheelchair users with spinal cord injuries [112], for severe
burns [113] and for muscular dystrophy [114]. Thus many games and applications
have been developed to serve a certain niche and have in many cases extended the
understanding of movement and therapy [219]. Another reason for the specialization
of these games is that specific stimulation is generally considered the more effective
approach in terms of patient benefit, compared to unspecific stimulation aimed at
improving overall performance. Another scientific consensus is that videogames and
virtual reality applications are generally not considered a replacement for
conventional treatment, but are seen as a new technological tool that can be
exploited to enhance motor retraining (e.g. [215]).
3.4 Limitations of the Literature and Clinical Impact
3.4.1 Behaviour and Mental Disorders
All of the review studies indicate that computer interventions can be helpful in
various treatments, and the naturalistic studies strengthen this claim by showing
differences between pre- and post-treatment measures. However, when control
groups are used, significant differences are lacking in almost 50% of cases.
As shown in Appendix 2: Research on Games for Health, there are currently few
experimental studies using control groups, and the sample sizes in the existing ones
are generally small. There therefore seems to be a need for more and larger
controlled studies before any firm conclusions can be drawn.
The most common types of measures are psychometric tests, which may introduce
subjective biases. Other types of measures, such as biosensor, behavioural and
physiological data, could therefore serve as more objective tools.
Almost 40% of the studies focus on psycho-education for children and adolescents.
So far, no computer game designed for mental health problems has aimed to treat
concrete aspects of a given disease. The majority of computer treatments are
directed towards children and adolescents, and it can be questioned whether
computer games are the most adequate form of treatment for adults. On the other
hand, studies of other pathologies, such as cognitive remediation and language
impairments, are not limited to psycho-education: they also aim to improve the
symptomatology of the disease.
Considering the research conducted and articles published to date, it seems clear
that work on videogames as a treatment in health is still, to an extent, in its initial
phase and is continuously progressing. The results obtained so far are promising for
future investigation.
The PlayMancer video-game prototype, to be adapted for the treatment of chronic
mental disorders (mainly eating disorders and behavioural addictions), introduces
the player to an interactive scenario whose final goal is to improve general problemsolving strategies, self-control skills and control over impulsive behaviours. After
using the game, specific targeted attitudinal, emotional and behavioural changes are
expected in the subject. The game will encourage the player to learn and develop
new coping strategies.
3.4.2 Rehabilitation
Some studies have been conducted on the topic of video games in rehabilitation.
However, in most cases the proposed product was not developed beyond the
prototype stage, or was merely an adoption of commercial entertainment products
with no adaptations to the special requirements of the patients.
There is also often a significant lack of evaluation methodology and objective
measurements that would allow the effectiveness of a given system to be
demonstrated.
As with videogames for behaviour and mental disorders, there are few if any
controlled studies available. Most systems have only been tested on a very small
sample of patients or healthy subjects as a proof of concept. In many cases the
studies argue that the subjects increased their occupational performance, i.e.
improvement reflected in activities of daily living. This, of course, should be the goal
of any therapy, but it is very hard to measure objectively.
Four publications by Broeren et al. in 2007 have shown that VR applications can be
used to provide quantitative analysis of hand movements. This was tested on a
medium-sized sample of patients in the chronic phase after stroke. They also
showed that the benefits apply not only to younger people but also to the elderly.
The evaluation part of the PlayMancer rehabilitation games system, however, is
planned to go well beyond these promising results. It will not only be used to detect
specific deficits or produce isolated measurements, but will provide the therapist with
a whole set of information throughout the therapy and during each session. This
includes angular measurements, EMG data and the patient’s range of motion, all
obtained during the game sessions.
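As a minimal illustration of how such a summary measurement could be derived, the sketch below computes a range-of-motion figure from a sequence of joint-angle samples. The function and example values are hypothetical and are not part of the PlayMancer specification, which is defined elsewhere in this deliverable series.

```python
# Illustrative sketch only: summarising joint-angle samples from a session.
# Function name and sample values are invented for this example.

def range_of_motion(angles_deg):
    """Return (min, max, range) of a sequence of joint angles in degrees."""
    lo, hi = min(angles_deg), max(angles_deg)
    return lo, hi, hi - lo

# Example: elbow flexion angles sampled during one game session
samples = [12.0, 35.5, 61.0, 78.2, 74.9, 50.3, 22.1]
lo, hi, rom = range_of_motion(samples)
print(f"min={lo:.1f} deg, max={hi:.1f} deg, range of motion={rom:.1f} deg")
```

In practice such a summary would be computed per joint and per session, so the therapist can follow progress across the whole therapy rather than from a single measurement.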
Using games for motor rehabilitation has the advantage of adaptable parameters,
which allows customization to the patient’s performance. In other words, VR games
offer the capacity to individualize treatment, while providing increased
standardization of assessment and re-training protocols. Nevertheless, no study has
yet exploited these findings using a serious game.
4 Review of Games for Health and Serious Games
4.1 Relevant Trends in Games for Health
This section presents a review of games for health, either commercial or researchoriented, that can serve as inspiration for developing game scenarios in PlayMancer.
This is a non-exhaustive review of available games and gaming technology,
conducted with the user requirements of PlayMancer in mind. The presentation of
the surveyed games is organised accordingly.
Games reviewed (including games that were surveyed but not reviewed in more
detail in this deliverable) include:
• Games related to core features of impulse control and eating disorders,
focusing on relaxation games and planning games designed to improve
learning about self-control and increase tolerance to frustration
• Games linked to self-esteem, developing problem solving and pro-social skills
• Games dealing with emotion regulation and expression, narrative therapy,
emotional self-awareness and empathy, emotional disclosure
• Games promoting physical activities
• Psycho-education games
• Games with a purpose
• Etc.
In summary, we reviewed more than 30 games and surveyed many others that are
not included in this document. We are currently tracking emerging trends, such as
online and casual games, and their potential for providing social support.
We also tracked the following topics that are relevant to the PlayMancer platform
and stakeholder requirements:
• Use of biosensors: very few games currently use biosensors, but this area is
growing and more devices are starting to appear.
• Use of emotion recognition: except for a few relaxation games that use
biofeedback, no games use speech emotion recognition, facial emotion
recognition or multimodal emotion recognition within a therapeutic context.
Activities and results on Affective Computing at the MIT Media Lab, the
outcomes of the CALLAS, PASION, PRESENCCIA and INDIGO projects and
the HUMAINE network were also considered and will continue to be reviewed
as input throughout the project.
• Game design paradigms: the human computation paradigm for gathering
research data or engaging people in fun while learning or performing valuable
tasks.
• Platform and scalability: the PlayWrite system developed for Personal
Investigator is the only tool allowing rapid development of therapeutic 3D
games. Apart from this platform, very few integrated tools or platforms are
available for developing therapeutic games.
• User-generated content: user-generated content is used quite often in CBT
games, but sharing of user-generated content is still lacking in most cases.
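To make the human computation paradigm mentioned above concrete: games such as the ESP Game (reviewed below) accept an image label only when two players, guessing independently, produce the same word. A minimal sketch of this agreement mechanic, with invented example data:

```python
# Minimal sketch of the ESP Game's label-agreement mechanic: a label for an
# image is accepted only when two independently guessing players both type it.
# Example guesses and the 'taboo' list are invented for illustration.

def agreed_labels(guesses_a, guesses_b, taboo=()):
    """Return labels both players typed, excluding 'taboo' words
    already confirmed for this image in earlier rounds."""
    return (set(guesses_a) & set(guesses_b)) - set(taboo)

# Example round for one image:
player_a = ["dog", "grass", "running", "park"]
player_b = ["park", "dog", "ball"]
print(agreed_labels(player_a, player_b, taboo=["dog"]))  # {'park'}
```

Taboo words are what drives the game towards progressively richer annotations: once a label is confirmed, players must agree on something new.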
Further information about these games is provided in the table below (games in bold
have been reviewed in more detail and are described in the following sub-sections).
GAME | TOPIC | GOAL | TECHNIQUES
ACLS Trainer | Learn about cardiac conditions in emergency care | Psycho-education; training skills | Giving information
Amazing Food Detective Game | Learn about healthy food and get more active | Psycho-education; change attitudes | Giving information; exercise
Ben’s Game | Understanding what cancer really is and how to battle it | Psycho-education; change attitudes | Visualization; modelling
Bizzarro Olympics | Educate players about the lifestyle choices required to maintain good health | Psycho-education; physical activity | Giving information; exercise; motivational techniques; fun
Dance Dance Revolution | Dancing | Physical activities; changing attitudes; emotional disclosure | Reproducing moves on a dance pad; self-observation
Earthquake in Zipland | Teaching children to deal with situations where the parents divorced | Learning | Psycho-education; problem solving
ESP Game | Annotate images through guess and match with an unknown partner | Human computation | Guess and match
EyeSpy: The Matrix | Identify the approving/smiling face in a crowd of frowning faces (face-in-the-crowd paradigm) | Improve self-esteem | Attentional training; self-esteem conditioning
Finding Zoe | Promote healthy relations between girls and boys, without violence | Psycho-education; change attitudes | Giving information; problem solving; emotional modulation; self-observation
Food Finder | Showing children how to make healthy eating choices | Psycho-education; change attitudes | Giving information; problem solving
Free Dive | Distract children who undergo frequent and often painful medical procedures and bring joy to chronically ill children | Relaxation; pain management | Relaxation techniques; distraction techniques
Freedom HIV/AIDS | Campaign to increase HIV/AIDS awareness and support infected people | Psycho-education; change attitudes | Giving information
GIRLS | Build and share personal experience | Emotional self-awareness and empathy | Story building and sharing; narrative therapy; constructionist theory of education
Grow your Chi! | Shoot’em up combining EyeSpy and Wham! | Improve self-esteem | Attentional training; self-esteem conditioning
Immune Attack | Knowledge of the human immune system and its functions | Psycho-education | Giving information
Journey to Wild Divine | Solving puzzles through relaxation | Relaxation | Relaxation techniques using a biofeedback device
Kids Wisdom | Develop healthy food habits | Psycho-education; change attitudes | Giving information
MyHeart: Sneaks! | Collect cards around your home | Improve daily activities by collecting items outside of your home | GPS + mobile
NEAT-o-Games | Race opponents to get hints for other games | Improve Non-Exercise Activity Thermogenesis; promote daily activities | Accelerometer + mobile
NeuroMatrix | Learning about the brain and motivating for neurosciences | Psycho-education; motivational change | Giving information; short movies and games
Nike+ Sports Kit | Running and competing with peers with self-made goals | Walk or run more with friends; promote daily activities; game as motivator | Accelerometer + iPod
Peek-a-Boom | Identify different regions of an image | Human computation | Identify regions of an image through guess and match with an unknown partner
Personal Investigator | Helping adolescents with mental health problems such as depression, anxiety and social skills problems | Learn social skills; overcome problems; facilitate interaction between therapists and adolescents | Problem solving; solution focused therapy
Reach Out! Central | Helping young people understand issues like depression, anger and anxiety | Psycho-education; change attitudes; emotional modulation | Giving information; problem solving; self-observation
Real Lives | Learn about other people’s culture | Psycho-education | Semi-random scenario-based RPG; giving information; real-life simulation
Relax to Win | Multiplayer racing using relaxation | Relaxation | Relaxation through a biofeedback device
Re-Mission | Designed for young persons with cancer | Psycho-education; change attitudes; improve self-esteem; behavioural change | Giving information; visualization; group feedback
Treasure Hunt | Puzzles for children | Treatment support; psycho-education | Cognitive Behavioural Therapy; cognitive behaviour modification
Wham! | Pair self-relevant information with an approving/smiling face | Improve self-esteem | Attentional training; self-esteem conditioning
Wii Fit | Fitness game | Improve your BMI through fitness and sports exercises; fitness; sports | Wii + Balance board
4.2 Storytelling / Adventure Games for Problem Solving
4.2.1 Personal Investigator
Personal Investigator (PI) [2] is a therapeutic 3D game for adolescent psychotherapy
initiated at the Media Lab Europe and Trinity College Dublin. The game is based on
Solution Focused Therapy (SFT) [64], a goal-oriented, strengths-based model of
psychotherapy, and is inspired by play therapy and therapeutic storytelling. PI
targets adolescents with mental health problems such as depression, anxiety, and
social skills problems, and can be used either as a support tool to facilitate the
interaction between therapists and adolescents or for self-directed use online.
The game uses a detective narrative where adolescents play the role of a “personal
investigator” investigating personal problems. PI therefore uses Role Playing Game
(RPG) style gameplay, where players talk to Non-Player Characters (NPCs) to
advance in their personal investigation. In PI, adolescents create their own goals
and objectives; the game then rewards them for engaging in dialogues and tasks in
order to achieve these.
At the beginning of the game, players are given a detective notebook, where they
are asked to record their thoughts and ideas. This notebook will also serve as a
record of the therapy at the end of the game. The five SFT conversational strategies
are mapped to five distinct areas in the game. Each area contains a master detective
who talks to players and asks them to answer questions in their notebook. Some of
the dialogues also include videos of adolescents giving examples of strategies they
used to solve their own problems. Players graduate from the detective academy by
completing the tasks given by each of the master detectives.
Fig. 1 Personal Investigator
Research conducted on PI [3] showed that the game helped increase adolescent
engagement in therapy and helped therapists develop their therapeutic relationship
with adolescents. Adolescents improved their self-esteem and problem-solving skills
with the game, as hypothesised, but an unexpected benefit of the 3D environment
was that it gave adolescents a sense of control and empowerment: the open
structure of the game (players can go to any part of the game in the order they
choose) allowed
the main advantages of PI is that it is a generic game based on SFT: unlike games
that target a specific behavioural disorder, the approach used in PI allows therapists
to use the game for different types of disorders.
The initial research on PI also identified some potential improvements to the game
such as sharing of peer narratives, i.e. allowing adolescents to record and submit
multimedia narratives of their game to be used as examples for other players.
The underlying platform: the PlayWrite system
PI has been developed as a proof-of-concept to evaluate the potential of 3D
computer games in adolescent mental health care. Based on this initial research, the
PlayWrite system was developed to allow therapists to easily create and adapt 3D
therapeutic games for adolescents [4]. The table below lists different games
developed or being developed using PlayWrite [5]:
Fig. 2 Games using PlayWrite (November 2007)
With this system, therapists can develop their own content and games for other
disorders. As shown in Fig. 2 above, PlayWrite can be used with different
therapeutic approaches (e.g. Cognitive Behavioural Therapy, Narrative Therapy,
Social Constructionist Therapy, etc.) and can address different issues, whether
specific (e.g. anxiety, anger management, self-esteem, etc.) or generic. Mental
health interventions can therefore be developed much faster than with closed
systems requiring development by game developers.
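The general idea behind such authoring systems is that therapeutic content (NPC dialogues, tasks, rewards) is plain data a therapist can edit, while the game engine stays unchanged. The sketch below illustrates this separation; it is NOT PlayWrite's actual format, which is not documented in this deliverable, and all names and values are invented.

```python
# Hypothetical sketch of data-driven therapeutic content, illustrating the
# idea behind systems like PlayWrite. All names and values are invented;
# this is not PlayWrite's real authoring format.

dialogue = {
    "npc": "Master Detective",
    "question": "What would be different if your problem was solved?",
    "task": "Write your answer in your detective notebook.",
    "reward_points": 10,
}

def run_dialogue(node, answer):
    """Engine-side logic: present a question, record the answer in the
    notebook, and grant the reward defined by the content author."""
    notebook_entry = {"question": node["question"], "answer": answer}
    return notebook_entry, node["reward_points"]

entry, points = run_dialogue(dialogue, "I would spend more time with friends.")
print(points)  # 10
```

Because the dialogue is data rather than code, a therapist can adapt it to a different disorder or therapeutic approach without involving game developers.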
4.2.2 Earthquake in Zipland
“Earthquake in Zipland” is a computer game designed by Chaya Harash (family
therapist and president and CEO of Zipland Interactive), aimed at helping children
aged 9 to 12 of separated or divorced parents deal with their new situation. In the
adventure, the child helps the main character, Moose, deal with different scenarios
inspired by the real life of separated or divorced families, which give the child tools
and skills such as problem solving and psycho-education. These situations include:
guilt about the divorce, blame and responsibility for the loss of the old family
structure, being torn between two households, exploring the fantasy of bringing the
parents back together again, and other psychological effects of divorce on children.
“Earthquake in Zipland” is also recommended for therapists and school counsellors
as an innovative form of interactive play therapy, as well as a tool for support groups
dealing with children of divorce. The story includes all the proven benefits of
bibliotherapy brought to life on screen. The game is available in two versions: for
Parents and Children, and for Therapists and Helping Professionals [108].
Fig. 3 Earthquake in Zipland
While playing, children coping with divorce have the opportunity to expand their point
of view, look for alternatives, listen to advice they can use in their daily reality and
desensitize the intensity of their feelings. The computer game not only offers the
opportunity to deal with feelings that include fright, rage, shame and helplessness,
but also becomes an interesting tool to help therapists and parents. The
psychological theories that guided the developers were short-term strategic
Ericksonian family therapy, bibliotherapy and cognitive-behavioural concepts (CBT)
[109].
4.2.3 Treasure Hunt
Another serious game, based on 2.5D Flash, is Treasure Hunt [163]. Taking
advantage of the potential of video games in child psychotherapy, the game takes
place on the ship of Captain Jones, a treasure hunter who needs the help of a child
to solve the mystery of an old map he has found. Based on cognitive behaviour
modification, the child must help Captain Jones solve different puzzles in different
parts of the ship, such as the deck, the galley, the dining room or the shipmates’
bunks. Each of the tasks corresponds to a step in cognitive behavioural treatment.
At the end of the adventure the child receives a sailor’s certificate that summarizes
what he or she has learned from the game.
The game is programmed in ActionScript and XML, and no installation is needed. It
was developed by the Department of Child and Adolescent Psychiatry of Zurich
University and is based on Cognitive Behaviour Therapy, one of the best-researched
and most empirically supported treatment methods for adults and children. The
intention of Treasure Hunt is not to substitute for the therapist but to offer electronic
support to the treatment, as a way to rehearse and repeat the psychoeducational
concepts learned during therapy sessions.
In the paper [164], the researchers used a trial version of the video game Treasure
Hunt for therapeutic purposes with children aged eight to twelve in cognitivebehavioural treatment for various disorders. Children were found to appreciate the
game and its diverse tasks. Furthermore, the game is also valuable in helping less
experienced therapists structure sessions and explain important cognitivebehavioural concepts.
4.2.4 Conclusion
The three games presented in this section are good examples of therapeutic games
based on different therapeutic processes: Personal Investigator uses solution
focused therapy; Earthquake in Zipland uses Ericksonian family therapy,
bibliotherapy and cognitive behavioural therapy; and Treasure Hunt is based on
cognitive behavioural therapy.
The PlayWrite system used to develop PI is of particular interest, as it provides a
simple tool allowing therapists to develop 3D therapeutic games that can be applied
to different therapeutic processes, as shown in Fig. 2. It is a very good example of a
specialised rapid application development environment for therapeutic games,
designed specifically for therapists.
4.3 Narrative Therapy / Story-building Games
4.3.1 GIRLS: Girls Involved in Real Life Sharing
GIRLS (Girls Involved in Real Life Sharing) is a game developed at MIT as part of a
long-term research plan for understanding the role that digital technology can play in
helping people reflect, make meaning, and test the assumptions they have about the
world and the values they possess [59]. GIRLS uses an interactive narrative based
on the principles of narrative therapy [62] and the constructionist theory of education
[63] to support emotional self-awareness and empathy in teenage girls. Its goals are
to help teenage girls recognise and manage their emotions, establish pro-social
goals, enhance their interpersonal skills, support emotional self-awareness and
empathy, and develop other pertinent skills [57][58].
Fig. 4 GIRLS – Pictorial Narrative
Within the game, girls can construct and share a story based on personal
experience:
1. They start by writing their personal experience in the “Memory Closet” as a
first step toward organising thoughts related to an event.
2. Then, they transform this into a story to focus their thoughts on the most
important people, places, etc. through:
a. The “Character Selection” window that allows them to select and
name the different characters in the story using cartoon-like characters
b. The “Pictorial Narrative” that provides tools for them to make a
storyboard of their narrative: they can create scenes by adding a
background image (from a selection of available images), placing the
different characters involved in the scene and selecting the emotions of
each character
3. The “Personal Reflection” step uses a system developed at MIT called
ConceptNet. This is a common-sense knowledge base that uses a natural
language processing toolkit to label written text with an appropriate emotion
[60][61]. For each scene, a girl can set up everything except the character
representing her in the story. To manipulate this character, she must submit
the scene’s caption to ConceptNet for analysis. The system then tries to
empathetically suggest emotions that relate to this event. The girl can accept
or reject the suggestion; since the system has already prompted her to think
about her emotions related to the scene, it is not critical that the suggestion
be 100% accurate.
4. To complement this first reflection on emotions, girls are then asked to weigh
their emotions in the “Emotional Bank”.
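ConceptNet's actual affect reasoning draws on a large common-sense knowledge base; as a greatly simplified illustration of the underlying idea (suggest an emotion for free text, which the user may accept or reject), a keyword-based sketch follows. The cue table is invented for this example and bears no relation to ConceptNet's real data.

```python
# Greatly simplified illustration of affect labeling of free text. ConceptNet
# itself uses a large common-sense knowledge base, not a keyword lookup; the
# cue table below is invented purely for illustration.

EMOTION_CUES = {
    "happy": ["party", "won", "friend", "laugh"],
    "sad": ["lost", "cry", "alone", "miss"],
    "angry": ["unfair", "shout", "hate"],
    "afraid": ["dark", "scared", "worry"],
}

def suggest_emotion(caption):
    """Suggest the emotion whose cue words best match the caption,
    or None when no cue matches (the user decides either way)."""
    words = caption.lower().split()
    scores = {emo: sum(w in cues for w in words)
              for emo, cues in EMOTION_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(suggest_emotion("I had to walk home alone and started to cry"))  # sad
```

This also makes the limitation discussed below concrete: slang, abbreviations or misspellings ("cryy", "aloooone") would fail to match, which is exactly the kind of weakness reported for ConceptNet's reasoning.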
GIRLS offers a safe and supportive environment for girls to write and create around
events happening in their lives. The research proved quite successful, and a second
version of the program is being developed as a free Web tool. Some of the main
limitations of GIRLS come from the affect reasoning of ConceptNet: the system
cannot be 100% accurate and, more importantly, its reasoning is affected by the use
of slang, abbreviations, misspellings and grammatical mistakes. Even so, it still
reaches its main goal of making girls think about their emotions, although
improvements to the system would be worthwhile. One possibility that could be
further developed is to use the human computation concepts presented in section
4.11 to improve the system.
4.4 Re-Mission: Psychoeducational Game about Cancer
Re-Mission by HopeLab is a PC game in which the player takes the role of Roxxi, a
nano-robot battling a variety of cancer cells inside the bodies of various human
patients. The game is filled with important information and lessons about cancer,
and as such helps cancer patients understand their condition, changes their
attitudes towards the illness and promotes adherence to prescribed treatment
regimens, especially self-administered treatments such as oral chemotherapy.
During the game, the player can destroy cancer cells and manage treatment-related
adverse effects such as nausea, constipation or bacterial infections. To win, players
have to destroy cancer cells using Roxxi’s abilities, fighting infections with
antibiotics. In addition, players need to reduce stress with relaxation techniques and
eat correctly to gain energy. This game was not initially designed to complement
treatment, but to change attitudes towards cancer and to improve the perception of
one’s ability to influence health outcomes.
This game was first described in the literature by Kato and colleagues in 2006 [165]
and has been found to be an acceptable psychotherapeutic tool for a high
percentage of adolescents and young adults suffering from cancer. The information
and experience gained from playing Re-Mission has been shown to significantly
improve patients’ self-esteem and knowledge of cancer. Furthermore, patients who
played Re-Mission adhered significantly more frequently to their treatment regimen
than control groups [166].
4.5 Self-esteem Games
A series of small and simple self-esteem games [6] has been developed at McGill
University in Montreal to help people feel more secure and confident about
themselves. The underlying research showed that, with enough practice, even
people with low self-esteem could develop beneficial thought processes that might
allow them to gradually feel more secure and self-confident in the long term. The
development of these mini-games started with the idea that playing a speciallydesigned computer game might help people improve their thoughts and feelings
about themselves. All the following games target “implicit self-esteem”, the automatic
and non-conscious aspect of self-esteem (as opposed to “explicit self-esteem”, a
person’s conscious sense of self-esteem) [7].
These mini-games are particularly interesting since research has shown that low
self-esteem is very common in patients with eating disorders [22][23].
4.5.1 EyeSpy: The Matrix
EyeSpy: The Matrix [9] is the first self-esteem mini-game developed at McGill
University. It is based on the face-in-the-crowd paradigm [8] used for social phobia.
The original paradigm presents users with a matrix of 12 faces, all of the same
person, but one of the faces has a different expression. Persons with social phobia
show an attentional bias toward threatening faces.
EyeSpy reuses this paradigm and adds the concepts of attentional training and selfesteem conditioning. It teaches people to look for the smiling or approving face in a
crowd of frowning faces. By doing this repeatedly and as quickly as possible, people
learn to look for acceptance and ignore rejection: in order to successfully and
accurately identify the smiling/approving face, one must adopt the mindset “look for
acceptance, and ignore rejection because it slows me down”.
Fig. 5 EyeSpy: The Matrix
Research on EyeSpy [10] has shown that the game reduces the attentional bias for
rejection in people with low implicit self-esteem. This habit of ignoring rejection could
help people with low self-esteem when in difficult social situations. It has also shown
the possibility of developing computer games to measure cognitive responses to
rejection and acceptance.
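The trial structure described above (one approving face among frowning distractors in a 12-face matrix, with fast accurate responses rewarded) can be sketched as follows. The grid size comes from the text; the scoring rule and all names are invented for illustration.

```python
import random

# Illustrative sketch of an EyeSpy-style trial: a matrix of 12 faces,
# all frowning except one smiling target the player must find quickly.
# The scoring rule is invented; only the 12-face grid comes from the text.

def make_trial(n_faces=12, rng=random):
    """Return a list of face labels and the index of the smiling target."""
    target = rng.randrange(n_faces)
    faces = ["frown"] * n_faces
    faces[target] = "smile"
    return faces, target

def score_click(target, clicked, reaction_ms):
    """Reward fast, accurate identification of the approving face."""
    if clicked != target:
        return 0
    return max(0, 1000 - reaction_ms)  # faster correct responses score higher

faces, target = make_trial()
assert faces.count("smile") == 1
print(score_click(target, target, reaction_ms=350))  # 650
```

Rewarding speed is what trains the "look for acceptance, ignore rejection" mindset: attending to the frowning distractors only slows the player down.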
4.5.2 Wham! Self-Esteem Conditioning
The Wham! Self-Esteem Conditioning mini-game [11] reuses the principles of
EyeSpy with a greater focus on self-esteem conditioning. The first step is for players
to enter some “self-relevant” information (their first name and birthday) that
contributes to their sense of identity. The game then presents players with four
empty boxes. A word appears in one of the boxes and players have to click on it as
fast as possible. This causes a picture of a face to appear in the corresponding box
for half a second, and the game then continues with other words. The positive
conditioning involves pairing self-relevant information with smiling or approving
faces: if the displayed word was the player’s first name or birthday, the appearing
face is smiling; otherwise a frowning face is displayed.
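The pairing rule described above (self-relevant words reveal a smiling face, all other words a frowning one) reduces to a one-line decision; a sketch with invented example data:

```python
# Sketch of Wham!'s conditioning rule: clicking a self-relevant word (the
# player's first name or birthday) reveals a smiling face, any other word
# reveals a frowning face. The example data is invented.

def face_for_word(word, self_relevant):
    """Return which face to flash for half a second after a click."""
    return "smiling" if word in self_relevant else "frowning"

self_info = {"Anna", "14 July"}
for word in ["Anna", "table", "14 July", "cloud"]:
    print(word, "->", face_for_word(word, self_info))
```

Repeated over many trials, this pairing is what builds the association between the self and positive social feedback that the research on the game reports.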
Fig. 6 Wham! Self-Esteem Conditioning
Research on this game [12] showed that by playing the game, users create a pairing
between the self and positive social feedback, thus leading to automatic thoughts of
secure acceptance in relation to the self.
4.5.3 Grow your Chi!
Grow your Chi! [13] combines the concepts of EyeSpy and Wham! in a shoot’em up
game. Within the game, clouds containing faces or text move horizontally around the
screen, and players have to click on clouds with a smiling/approving face or selfrelevant information to gain points. If they click on clouds with a frowning face or
non-self-relevant information, they lose points.
Fig. 7 Grow your Chi!
The main addition brought by this game, compared to EyeSpy and Wham!, is its
gameplay, which is designed to be more entertaining and therefore more pleasant to
play for longer periods of time.
4.5.4 Further research: EyeZoom, Click’n Smile, Word Search
In addition to Grow your Chi!, which is currently being researched, researchers at
McGill University are already developing new self-esteem games based on the same
principles [14].
• EyeZoom presents users with frowning faces; when users click on a face, a
smiling face of the same person zooms in for a few seconds.
• Click’n Smile shows a neutral face that morphs into a smiling face when the
user clicks on it.
• Conditioning Word Search is a simple word-search grid where the words to
find are self-relevant information.
4.5.5 Conclusion
Researchers at McGill University in Montreal have developed a series of mini-games
aimed at increasing users’ implicit self-esteem. These simple games, developed in
Flash, are based on the face-in-the-crowd paradigm, attentional training and selfesteem conditioning, and have been shown to be effective in raising the self-esteem
of people with low implicit self-esteem.
These self-esteem games are a good example of how a simple set of mini-games
can be used efficiently to deal with behavioural disorders.
4.6 Affective and Biofeedback Games
Biofeedback games use techniques that allow players to view the otherwise invisible
physiological processes occurring within their body, through the use of biofeedback
devices or sensors. Bionic Breakthrough, a videogame developed by Atari in 1983,
used the electrical activity in a player’s forehead muscles, captured through the
MindLink device, as input to replace the conventional joystick.
Affective gaming is a branch of video gaming slightly different from biofeedback gaming, even though it can use biofeedback as well [41]. In affective
games, the computer is an active intelligent participant in the biofeedback loop and
the intention is to capture the normal affective reactions of the players [42]. In
biofeedback games, players explicitly participate in controlling their physiological
responses in order to control the game, whereas in affective games players might
not even be aware that their physiological state is being monitored.
Different types of physiological data that can be measured by biofeedback devices
are presented in section 5.1 below.
4.6.1 The Journey to Wild Divine: The Passage
The Journey to Wild Divine: The Passage (hereafter Wild Divine) [38] is probably the most advanced biofeedback game commercially available. Its gameplay is similar to the Myst graphic adventure series, using calm graphics, music and speech, except that the puzzles are solved through relaxation exercises using the biofeedback device bundled with the game, the Light Stone.
Fig. 8 The Journey to Wild Divine: The Passage
The Light Stone is a USB biofeedback device that measures the player’s Skin
Conductance Level (SCL) and Heart Rate Variability (HRV) through three finger
sensors called the Magic Rings: the two rings worn on the index and ring fingers
measure the SCL and the middle finger ring measures the HRV. SCL measures
sweat gland activity and can be correlated to excitement and nervousness; HRV is
calculated from the differences in heart rate from one heartbeat to another [36]. Research on HRV links low HRV to high anxiety levels [39] and high HRV to better brain-heart synchronisation and immune function [40].
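As an illustration of how HRV is derived from beat-to-beat differences, the sketch below computes RMSSD, a common time-domain HRV measure. This is an illustrative assumption for exposition; the source does not state which HRV measure the Light Stone actually uses.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD): a standard
    time-domain HRV measure over beat-to-beat (RR) intervals in ms."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

A perfectly regular heartbeat yields an RMSSD of zero; more beat-to-beat variation yields a higher value.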
While Wild Divine works well in teaching players to relax through the different
breathing and relaxation tasks using the Light Stone, player’s relaxation is broken by
the navigation through the game environment (players have to click on different
areas of the screen to move) [37]. Wild Divine is still the most successful game to
integrate biofeedback device in the gameplay but some improvements could still be
made to provide a more immersive experience and avoid breaking the Flow Theory
from Mihály Csíkszentmihályi.
4.6.2 Relax-to-Win
Relax-to-Win is a competitive two-player racing game using biofeedback to control
the game. It was initially developed for research on treating children with anxiety
problems [44]. In the game, players compete against each other by controlling a
dragon in a 3D virtual race using their stress level measured by their Galvanic Skin
Response (GSR): if the player relaxes, their skin resistance increases and their dragon goes faster; conversely, if the player becomes stressed, the dragon slows down. The winner is therefore the player who manages to relax the most [42]. The game, as distributed by Orange in the UK, uses a small biofeedback sensor held between the fingers (as shown in Fig. 10) to measure the GSR.
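The control mapping described above can be sketched as follows. The baseline, thresholds and speed values are invented for illustration and are not taken from the actual game; real systems would calibrate the baseline per player.

```python
# Minimal sketch of the Relax-to-Win control mapping, assuming the sensor
# reports skin resistance in kilo-ohms: higher resistance (more relaxed)
# drives the dragon faster. All constants are illustrative assumptions.

BASELINE_KOHM = 100.0   # hypothetical per-player calibration value
MAX_SPEED = 10.0

def dragon_speed(resistance_kohm):
    """Map skin resistance to a speed in [0, MAX_SPEED].

    At or below half the baseline (stressed) the dragon stalls;
    at twice the baseline (very relaxed) it reaches full speed.
    """
    lo, hi = 0.5 * BASELINE_KOHM, 2.0 * BASELINE_KOHM
    fraction = (resistance_kohm - lo) / (hi - lo)
    return MAX_SPEED * min(1.0, max(0.0, fraction))
```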
Fig. 9 Relax-to-Win
Fig. 10 Relax-to-Win Sensor
Relax-to-Win is a good example of a multiplayer biofeedback game. The gameplay is very simple and inverts the logic of usual racing games, where alertness and stress are often the keys to victory. In this game, stress makes the player lose, and players have to learn to relax to perform better. The game exists both as a PC desktop game and as a mobile phone game.
The whole concept of Relax-to-Win has also been extended through Vyro Games, a
company that provides different stress-relief games using the Personal Input Pod
(PIP) biosensor [43]. Available games include a new version of Relax-to-Win called Relax & Race, and Stormchaser, where the player can control the weather depending on his/her stress level. The market of biofeedback/affective games is starting to grow
out of initial academic research and is definitely gaining more and more consumer visibility and traction.
4.6.3 Conclusion
This section described two innovative and successful games using biofeedback to
measure relaxation.
Biofeedback is being researched more and more as an additional input mechanism for games, and a growing number of devices are appearing on the market (e.g. Emotiv [144], OCZ’s Neural Impulse Actuator [145], NeuroSky’s MindSet [146]). This new modality increases the sense of immersion in games and provides obvious benefits for games for health, as biofeedback devices usually originate from the medical sector.
4.7 Games Promoting Physical Activities
Exergames (a combination of “exercise” and “games”) are games that also provide
physical exercise [29]. One of the most popular exergames is Dance Dance Revolution (DDR), released by Konami in 1998 [30]. This dancing game uses a
dance pad on which players have to reproduce (with their feet) sequences of
movements displayed on the screen. Even though the original purpose of the game is entertainment, it has been used in many different research studies and also for fitness or even physical education at school [19][20][33].
The release of the Nintendo Wii in 2006 made gamers more active while playing through the use of the Wiimote, and research has been conducted to investigate
whether playing Wii Sports, the initial game bundled with the Wii, would provide
sufficient physical activity for children. Although Wii players used 2% more energy than regular players, one hour of Wii Sports did not increase energy expenditure enough to cover children’s recommended daily exercise level [31]. The
recently released Wii Fit game goes further into exergaming and provides real fitness exercises in a fun environment through the use of a new input device, the Wii Balance Board [32]. The game is a great success: it sold over a quarter of a million copies in its first week, and almost 2 million copies have been sold in Japan.
Based on these successes, students from Carnegie Mellon University developed
The Winds of Orbis: An Active Adventure [141]. This game uses the Wiimote and a
DDR dance pad to control the game “actively” within a platform-action game. A
complete description of how the game was developed can be found in [142].
Exergaming is a growing market and promotes a more active way of playing games, but players are still playing indoors in front of a screen. The following sections focus on mobile games promoting daily outdoor activities.
4.7.1 NEAT-o-Games
NEAT-o-Games [15] is a research project initiated by the Computational Physiology
Lab of the University of Houston, Texas. It is based on Non-Exercise Activity
Thermogenesis (NEAT), which according to research conducted at the Mayo Clinic
is the energy expenditure of all physical activities other than volitional physical
exercise (e.g. sports). NEAT is the most variable portion of energy expenditure, as it includes daily activities such as working, playing, or dancing.
Recent work suggests that obesity is driven by a reduction in energy expenditure,
rather than a rise in energy intake: in the UK, where obesity has doubled since the 1980s, energy intake appears to have decreased on average. The hypothesis used
in this research is therefore that NEAT is the culprit behind obesity.
Fig. 11 NEAT-o-Games components
A lot of recent research has already been conducted on ubiquitous HCI for obesity
and weight management using mobile phone applications to help motivate teenage
girls to exercise through a social network [16], mobile phone applications for
encouraging activity by sharing step counts with friends [17], mobile phone
applications for weight management by monitoring caloric balance [18], impact of
Dance Dance Revolution on social life and physical activities [19], or immersive
fitness computer games [20][21]. However, the integration of technology is still weak, and monitoring relies heavily on user input (intrusive, users can “cheat”, etc.). Another weakness is that these approaches are based on warnings and encouraging messages, which have an uncertain effect on people with behavioural problems.
The goal of this research is to increase NEAT in users’ lifestyle through a collection
of mobile games where “activity points” can be earned and used across the game
space. The starting point of the project is to provide games as motivators and to use
a game design that does not require user attention at all times, using wearable devices (unobtrusive, objective data). NEAT-o-Games currently comprises two games: NEAT-o-Race and NEAT-o-Sudoku.
NEAT-o-Race
This is the main component of NEAT-o-Games. It uses a tri-axial accelerometer
communicating with a Palm Treo phone through Bluetooth (see Fig. 12). Users wear
the accelerometer on their waist and when they move, the accelerometer transmits
the level of activity of the user to the server. Moving grants activity points and is translated in the game into the user’s avatar running along a track. The more the user moves, the faster the avatar runs.
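The loop described above can be sketched as follows: an accelerometer sample is reduced to an activity level, accumulated points drive the avatar's speed. All constants and function names are assumptions for illustration, not taken from the actual NEAT-o-Race system.

```python
import math

# Illustrative sketch of the NEAT-o-Race mapping: a tri-axial accelerometer
# sample becomes an activity level, and accumulated activity points become
# the avatar's running speed. Scale and cap values are assumptions.

GRAVITY = 9.81  # m/s^2; subtracted so that standing still scores ~0

def activity_level(ax, ay, az):
    """Deviation of the acceleration magnitude from 1 g."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY)

def avatar_speed(points_per_minute, scale=0.5, max_speed=12.0):
    """The more the user moves, the faster the avatar runs (capped)."""
    return min(max_speed, scale * points_per_minute)
```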
Fig. 12 Architecture of the NEAT-o-Race game
Users can play the game against other users or against a computer that is calibrated to run slightly slower than the pace corresponding to the recommended amount of daily physical activity. So if the user beats the computer, he/she is achieving the recommended amount of daily physical activity.
The game usually runs in the background and provides occasional messages to inform players about major events in the race: motivational messages pop up if the player falls too far behind and, conversely, congratulatory messages pop up if the player is far ahead.
NEAT-o-Sudoku
This is a regular sudoku game that users can play on the mobile phone, in which they can spend the activity points gained in NEAT-o-Race to get hints. It can be considered the reward component of the system and acts as a motivator.
Conclusion
NEAT-o-Games is experimenting with a new game paradigm to target the behavioural aspect of a sedentary lifestyle with a bundle of single- and multiplayer games using a PDA or smartphone coupled with an accelerometer. This project is still at an early stage of research: a proof-of-concept trial was conducted in 2007 and larger
evaluations are currently under way with additional reward games being added as
well.
The main strengths of NEAT-o-Games are its simple and unobtrusive design, the use of games as motivators, and the concept of coupling a main game promoting daily physical activities (as opposed to physical exercise) with puzzle games acting as rewards.
One caveat of the system is that it could be costly in terms of equipment and communication costs, especially for multiplayer games. However, communication costs seem to have been anticipated: the system can save data locally (i.e. on the phone) when no connection is available and send it to the server once the network becomes available. This could allow players to transmit data only once a day to reduce communication costs, even though it would reduce the interactivity with other players and could reduce motivation as well.
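The store-and-forward behaviour described above can be sketched as follows. The class and its interface are assumptions for illustration; in the real system the transport would be a network call rather than the injected stub used here.

```python
# Sketch of the store-and-forward behaviour: activity samples are queued
# locally on the phone while offline and flushed to the server once a
# connection is available. `send` is an injected callable standing in for
# the real network transport; it returns True on successful upload.

class ActivityUploader:
    def __init__(self, send):
        self.send = send      # callable: send(sample) -> bool
        self.pending = []     # samples stored locally on the device

    def record(self, sample):
        """Store a sample locally; nothing is transmitted yet."""
        self.pending.append(sample)

    def flush(self):
        """Try to transmit everything; keep whatever fails for later."""
        self.pending = [s for s in self.pending if not self.send(s)]
        return len(self.pending) == 0
```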
4.7.2 Nike+ Sports Kit
In 2006, Nike and Apple worked together to release the Nike+iPod Sports Kit [55].
The Sports Kit is composed of a small accelerometer attached to or embedded in a
special shoe that communicates with a receiver plugged into an iPod to transmit the
distance and pace of the user. Audio feedback is also provided through the iPod to
give notifications and encouragements to the players. The iTunes software can be
used to view the history of the walk or run. The Sports Kit is now also available with
a USB watch to replace the iPod.
Fig. 13 Nike+iPod Sports Kit
Nike also provides a complete game-like Web application that provides charts,
rankings, forums, and the possibility to set goals and run contests with friends or strangers from the Nike+ community [56]. This game layer is a real motivator for users: they get points for running and can challenge their friends in simple contests such as being the first to reach 100 miles [54]. In addition to the Nike+ website, a public API is
available to make custom applications based on the data collected by the Sports Kit.
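The "first to reach 100 miles" challenge mechanic mentioned above can be sketched as follows. This is a hypothetical illustration of the behaviour described in the text; it is not based on the actual Nike+ service or its API.

```python
# Hypothetical sketch of a "first to reach a mileage goal" challenge.
# Class and field names are invented; the winner is whoever first crosses
# the goal, recorded at the moment their run log pushes them past it.

class Challenge:
    def __init__(self, goal_miles=100.0):
        self.goal = goal_miles
        self.totals = {}      # runner name -> accumulated miles
        self.winner = None    # set once, for the first runner past goal

    def log_run(self, runner, miles):
        """Add a run and lock in the winner if the goal is reached."""
        self.totals[runner] = self.totals.get(runner, 0.0) + miles
        if self.winner is None and self.totals[runner] >= self.goal:
            self.winner = runner
```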
Fig. 14 Nike+ Application
The Nike+ Sports Kit is quite similar in concept to NEAT-o-Race described in section 4.7.1 and is already commercially available. The main difference is that it is more focused on running than on softer daily activities, and it rewards players through social interactions with friends (challenges) instead of points that facilitate other games.
4.7.3 MyHeart: Sneakers!
Within the MyHeart FP6 project [52], Nokia developed Sneakers!, a mobile game to
promote physical activities in young children [53]. The game is a location-based
collectible and trading card game where players have to complete a collection of
virtual cards placed within a certain radius of their home. Cards are collected when
the player walks through GPS hotspots and the distance to each card increases as
the player progresses: the more the player moves, the more cards are collected. The
game also includes community or social aspects by allowing players to form groups,
to view the progress of others and to chat. The game was implemented using MUPE
(Multi User Publishing Environment [65]) on Symbian Series 60 mobile GPRS
phones with a GPS receiver.
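The hotspot check described above can be sketched as follows: a card is collected when the player's GPS position comes within a pickup radius of the card's hotspot. The haversine distance is standard; the radius value and data shapes are assumptions for illustration, not taken from Sneakers!.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def collect_cards(player_pos, hotspots, radius_m=25.0):
    """Return the cards whose hotspot lies within the pickup radius.

    `hotspots` maps card name -> (lat, lon); the radius is an assumption.
    """
    lat, lon = player_pos
    return [card for card, (clat, clon) in hotspots.items()
            if distance_m(lat, lon, clat, clon) <= radius_m]
```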
One of the main lessons we can learn from this development is that placing the
virtual cards requires careful supervision to avoid placing them in the middle of a
highway or any dangerous or unreachable location. This game is an example of how
mobile games can provide “mixed reality gaming” by combining real-world activities
with game activities.
4.7.4 Conclusion
In this section, we have presented a brief overview of exergames and described in more detail some examples of novel mobile games promoting activities outside of the home environment. These games use different strategies to get players to go outside and perform daily or physical activities.
The most important aspect of these games is motivation and, as a recent poll from the Japanese website IT Media suggests, it is an aspect that is lacking in
Wii Fit: according to their survey, 64% of Wii Fit owners stopped using the game
after purchase [143].
4.8 Games for Rehabilitation
Display and tracking technologies allow patients to be immersed in different virtual scenarios. There, clinically relevant exercises for kinetic training, as well as cognitive exercises, can be offered in the form of games. In such a virtual environment, training characteristics can be recorded and analyzed, and the game options can be adapted to the patient’s abilities. Contemporary therapy systems are starting to support the special requirements of the broad field of rehabilitation. For therapists as well as for patients, these new possibilities for more efficient therapy are enhancing the success of rehabilitation.
A wide variety of technologies is used for games in rehabilitation. Most systems, however, have only been used with small groups of patients to provide data for studies. This is especially true for virtual reality games, due to their costly setup.
Based on a classification by technology, a short overview of games in rehabilitation
is presented in the following.
4.8.1 Video capture based games
Video capture games use a video camera and software to track movement in a single plane. The user’s image is then embedded within a virtual environment, where he can interact with artificial objects in a natural manner. GestureTek’s IREX [147] and Sony’s EyeToy [148] platforms are currently the most widely used for rehabilitation purposes. These systems have been used in several studies for encouraging patients to use their impaired upper extremities, but also for balance training with patients who had suffered a stroke or other brain injuries [149].
Because these systems or their predecessors were developed for entertainment and gaming purposes, they use relatively simple setups (usually one webcam attached to a PC/console with a TV set or projector for display). This imposes certain characteristics on such a system, which in some cases are advantageous while in other cases limit utility.
The big advantage of video capture games is that the user does not have to wear additional devices and the user’s posture is not essential; thus most of the games can be played sitting or standing. The main drawback of video capture platforms in rehabilitation is the limitation to a single plane due to the use of only one camera. Nevertheless, a certain sense of presence and level of enjoyment can be achieved [149]. Additionally, some platforms offer the possibility of identifying certain body parts (e.g. using coloured gloves, see Fig. 15). More sophisticated systems like the IREX can also generate outcome reports and can be customized using a software development kit.
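The single-plane interaction these platforms rely on can be sketched as follows: the camera image is reduced to a 2D silhouette mask of the player, and a virtual object is "touched" when its bounding box overlaps the mask. The mask here is a plain list of 0/1 rows standing in for a background-subtracted camera frame; this is a conceptual illustration, not the IREX or EyeToy implementation.

```python
# Minimal sketch of single-plane video-capture interaction: `mask` is a 2D
# grid of 0/1 values (the player's silhouette); a virtual object occupies
# an axis-aligned box and is "touched" when any silhouette pixel falls
# inside it.

def touches(mask, box):
    """True if any silhouette pixel falls inside box = (x0, y0, x1, y1),
    with x1/y1 exclusive."""
    x0, y0, x1, y1 = box
    return any(mask[y][x]
               for y in range(y0, y1)
               for x in range(x0, x1))
```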
Nevertheless, as mentioned before, video capture platforms suffer from technology-imposed tracking limitations and therefore a lack of input modalities and control options.
Fig. 15 IREX platform showing the “Birds & Balls” game
4.8.2 Games using customized input devices
For motor and neurorehabilitation it is often necessary to train a specific limb or ability. Therefore, devices have been constructed that serve just such a single training purpose. In some cases they are embedded in a system which enables them to serve as an input device for games.
Many balance platforms currently used in rehabilitation, for example, offer a small display which can be used for primitive games (e.g. the maze game shown in Fig. 16). The goal of these games is to encourage proprioception and motor control.
Another example is the Rutgers ankle interface [150], a haptic interface used for rehabilitation after ankle injuries; it can also be used to provide input to a flight simulator. This device, like others used in orthopaedic rehabilitation, is used to improve the functionality of limbs/joints after a severe injury.
The properties of such systems are very different and thus cannot be generalized
here. They are, however, all limited to very specific interactions and therefore only
useful for a small group of games each.
Fig. 16 Biodex balance system with a maze game
4.8.3 Virtual reality games
“A virtual environment (or virtual reality) is a simulation of a real world environment
that is generated through computer software and is experienced by the user through
a human–machine interface. A wide variety of hardware and software devices can
be utilized to create VR simulations of varying degrees of complexity” [151]. An
overview of some devices and software frameworks used for VR is given in section 5
of this document. The use of VR in rehabilitation is a very active research topic, and many systems/games have been developed in recent years. These systems, however, have mostly been used only for research purposes. Broader distribution has so far been hindered by the lack of market-ready products and the price of VR setups. The following paragraphs therefore contain short descriptions of the more sophisticated VR game systems used in various studies.
Holden et al. developed a system where a patient had to repeat the actions of a virtual teacher in order to score points. The actions, which had to be performed in a virtual environment, included moving a ball through a ring, hitting a nail with a hammer, etc. [151]. These exercises required shoulder flexion, elbow extension and forearm supination with grasp. Furthermore, some of the actions, like putting an envelope into a mailbox, could be directly transferred to the real world.
Ma et al. developed a VR-based therapeutic training system with a series of games to encourage stroke patients to practice physical exercise. A magnetic tracker was used together with a data glove to provide input to the games, while the patient wore an HMD for visual feedback. The games also relied on physics simulation
techniques for more realism. Two of the games have been described in [157]. The
first game involved a catching task in which falling objects (oranges) had to be
caught using a tracked basket. This could either be done with one hand (impaired or
intact) or with both hands for bilateral upper limb training.
The second game developed was “whack a mouse”, shown in Fig. 17. It was intended as post-stroke therapeutic exercise to improve the accuracy and speed of the user’s upper-extremity movement. During the game, the patient has to hit the stationary mouse with a hammer, which he controls via a sensor on his hand. The difficulty and the area where the mouse appears can be configured according to the patient’s clinical picture.
Fig. 17 “Whack a mouse” game
All sensor data from the system was recorded, and trajectories etc. were analyzed.
Burdea and colleagues developed games for hand motor rehabilitation. They used a CyberGlove/CyberGrasp (Fig. 18) and other devices to capture the flexion of the fingers and the pressure of the hand. This data was then used as input to a couple of mini-games in which a patient could “catch” a virtual butterfly or reveal a picture on a screen [152]. The goals of these games were to increase range of motion, speed, fractionation and strength of the fingers/grip.
Fig. 18 CyberGrasp and its virtual representation
A Swedish research group has used haptic force feedback and stereoscopic vision
for a virtual ball game that was used for rehabilitation. In this game the patient had to
strike a ball in order to knock over bricks. According to [153], this game increased hand-eye coordination, grip strength and range of motion.
Connor et al. used a force-feedback joystick for perceptual-motor training [154]. A series of letters and numbers was randomly distributed on a computer screen, and the patients had to connect the sequential items by moving a cursor over one object after the other and assigning it to the sequence by pressing a button.
Children with severe motor disabilities often suffer from a lack of spatial awareness [155], because their mobility impairment limits their possibilities to explore the environment. Wilson and colleagues used combined VR/real-world training to transfer spatial knowledge acquired in VR to the real world [156]. In a game-like setup, the children were first asked to explore a building in a VR representation and find certain items (fire extinguishers, fire exits). Then they had to guide the experimenter to the objects in a wheelchair tour through the real building.
4.8.4 Conclusion
The VR games/systems presented in this section are examples of how video game and virtual reality technologies have been used in rehabilitation. The setups and techniques for these games vary widely, as do the treated impairments. Most of the games are rather minimalist and were only used during studies. Nevertheless, most showed promising therapy results.
4.9 Universally Accessible Games
Game accessibility deals with the accessibility of video games for disabled users.
According to the Game Accessibility Special Interest Group of the International
Game Developers Association (IGDA), between 10% and 20% of the people in a
country can be considered disabled [137]. However, accessibility is rarely applied to games, as game developers are usually not aware of it. Accessible
games are still mainly developed as proofs of concept and are still far from mainstream games in terms of content. Some examples of accessible games are given in [138], and AbleData also provides a list of accessible learning games [139].
In addition, a pre-conference workshop on game accessibility was organised at the latest Games for Health conference in May 2008 in Baltimore, USA, showing growing interest in this research topic.
The concept of Universally Accessible Games (UAG) is a research activity aiming at overcoming the limitations of the main approaches to game accessibility and at providing an effective technical approach for creating games that can be played concurrently by people with diverse abilities [135].
As defined by the HCI Lab of ICS-FORTH, UAG are interactive computer games
that:
•	Follow the principles of Design for All, being proactively designed to optimally fit and dynamically adapt to different individual gamer characteristics without the need for further adjustments via additional developments.
•	Can be concurrently played among people with different abilities, ideally also while sharing the same computer.
•	May be played on various hardware and software platforms, and within alternative environments of use, utilising the currently available devices, while appropriately interoperating with assistive technology add-ons.
Within the scope of their research, the HCI Lab of ICS-FORTH has developed:
•	A design method, Unified Design for UAG, that will be followed in PlayMancer for developing UAG [136]
•	Proof-of-concept and case-study UAG: UA-Chess, Access Invaders, Game Over!, and Terrestrial Invaders
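The "dynamically adapt to individual gamer characteristics" idea can be sketched as follows: a single game derives its configuration from a player profile instead of shipping separate accessible versions. The profile fields and settings are invented for illustration; this is not the Unified Design method itself.

```python
# Hypothetical sketch of profile-driven adaptation in the spirit of
# Design for All: one game, per-player settings derived at start-up.
# All field names and values are illustrative assumptions.

def adapt_settings(profile):
    """Derive per-player settings from an ability profile (dict of bools)."""
    settings = {"input": "mouse", "captions": False, "speed": 1.0}
    if profile.get("low_vision"):
        settings["captions"] = True
        settings["audio_cues"] = True     # add non-visual feedback
    if profile.get("motor_impairment"):
        settings["input"] = "switch_scanning"   # single-switch scanning input
        settings["speed"] = 0.5                 # slow the game down
    return settings
```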
4.10 Casual Social Games
Online casual social gaming is a growing trend in online games that has its roots in Facebook and combines social networking and games, quite often reusing classical games such as Battleship, Scrabble, etc. or taking inspiration from popular console or PC games.
Multiplayer online casual games are quite common on the Internet and are often grouped on large sites such as King.com. Such sites allow people to compete against each other in puzzle games, word games, etc. and attract millions of users every day: one study conducted by Parks Associates estimates that 34% of American adult Internet users play online games weekly. However, playing such games with unknown users can be frustrating, since people quite often disconnect from the game when they start to lose.
This concern is not present in social games, since people play with people they know. That is, however, not the only advantage social gaming brings to the table. In addition to reinforcing social links, social games also introduce new types of games and of interaction between players. Some social games allow players to buy virtual drinks for their friends; others, like Friends for Sale, go even further by allowing users to buy, sell and own their friends.
One of Facebook’s most popular social games, Scrabulous, has nearly 700,000 daily active users on Facebook (and hundreds of thousands more on its own website). Hasbro, the owner of the original Scrabble game, eventually heard about the game and decided to sue the developers and Facebook for trademark infringement. The two parties are now in discussions and could come to an agreement.
The introduction of social networking in video games is still at an early stage, but new ideas are already emerging and the market is growing every minute. Social gaming is mainly attracting non-gamers into the gaming market and, as Shervin Pishevar, CEO of Social Gaming Network, said: “We’re in the Pong stages of social gaming. In terms of building new ideas, you should expect to see innovation for what it means to be a game and tap into the social graph, the people you enjoy playing games with” [35].
4.11 Guess and Match Games
Luis von Ahn is an assistant professor in the Computer Science Department of Carnegie Mellon University and one of the inventors of CAPTCHAs. As part of his research on human computation [28], he created a series of “games with a purpose” based on simple guess-and-match principles to help make the Web more accessible [24].
4.11.1 The ESP Game
The ESP Game [27] is a two-player online game where players have to type the same word based on an image displayed on the screen. The two players do not know each other and cannot communicate. If the two players enter the same word, they get points and a new image is displayed. Matches are then used to label images, and existing labels are given as taboo words for the image so players have to find additional labels.
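The round logic described above can be sketched as follows: both players type guesses for the same image, and the first word both have typed that is not on the taboo list becomes a new label. The function and data shapes are assumptions based on the description, not the actual ESP Game implementation.

```python
# Sketch of ESP Game symmetric verification: a label is agreed when a
# guess from player A also appears among player B's guesses and is not a
# taboo word for the image. Matching is case-insensitive here (assumption).

def match_label(guesses_a, guesses_b, taboo=()):
    """Return the agreed label, or None if the players never matched."""
    seen_b = set(g.lower() for g in guesses_b) - set(t.lower() for t in taboo)
    for guess in guesses_a:
        if guess.lower() in seen_b:
            return guess.lower()
    return None
```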
Fig. 19 The ESP Game
This very simple game can help label images on the Web using human computation. Many people have played the game for over 40 hours a week and, since 2003, the game has collected more than 34 million labels for images from the Web. In fact, if the game were played as much as other popular games (e.g. 9 billion human hours of Solitaire were played in 2003 worldwide), it would be possible to label all images on the Web in just a few weeks [25].
The benefits of the ESP Game are multiple: it can help improve image search engines such as Google Image Search; labelling images increases the accessibility of the Web for people with visual impairments; it could also help browsers block inappropriate content from websites based on keywords (e.g. to block adult content); etc. Moreover, this simple yet powerful concept of symmetric verification could be used to label other types of media: sound or video clips, text, etc.
4.11.2 Peek-a-Boom
While the ESP Game helps label images through symmetric verification, Peek-a-Boom [34] reuses the same concepts of human computation with asymmetric verification to identify which areas of an image correspond to a label. Instead of showing the same image to two players and having them guess the same word, one player (“Boom”) gets an image with a word related to it (the word being a label attached through the ESP Game) and must progressively reveal parts of the image for the second player (“Peek”) to guess the correct word. Boom can also provide hints to Peek to help him guess the word, e.g. Boom can point to a specific area in the uncovered part of the image or can indicate whether Peek’s guess is hot or cold.
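The asymmetric reveal mechanic can be sketched as follows: Boom uncovers circular areas, and the union of revealed pixels is exactly the image region the game learns is relevant to the label. Coordinates are abstract pixel tuples; this mirrors the described behaviour, not the real Peek-a-Boom code.

```python
# Sketch of progressive reveal in Peek-a-Boom: each of Boom's clicks adds
# a circular patch of pixels to the revealed set, which at the end of a
# round approximates the region of the image relevant to the label.

def reveal(revealed, centre, radius):
    """Add the pixels within `radius` of `centre` to the revealed set."""
    cx, cy = centre
    for x in range(cx - radius, cx + radius + 1):
        for y in range(cy - radius, cy + radius + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                revealed.add((x, y))
    return revealed
```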
Fig. 20 Peek-a-Boom
This game helps locate the pixels corresponding to a label in an image and complements the labelling from the ESP Game. It is another example of how games can harness the power of human computation to fulfil tasks that computers cannot do correctly. Like the ESP Game, Peek-a-Boom is very popular, and some players have spent more than 12 hours a day playing. The game’s application to Web accessibility and image recognition is also very valuable, and complements the input from the ESP Game [25].
4.11.3 Conclusion
In this section, we examined some very simple guess-and-match games using the concepts of human computation to fulfil tasks that computers cannot do efficiently (image annotation in this case). These two-player games show how simple game concepts can be used very efficiently for a greater purpose and can attract a very large community of players.
Such concepts could be reused to annotate other media such as video clips or speech files, for instance to annotate emotions. Another interesting aspect is that the results could be injected as training data for computer systems such as emotion recognition software.
4.12 Conclusion of the games state of the art
The serious game landscape is developing fast as traditional gaming giants look for
new market niches. This review of games for health and serious games has
provided a good overview of past and current developments in games related to
the PlayMancer project and shows current trends in the (serious) gaming world:
• Use of biofeedback in games is growing, especially related to relaxation, and
more and more solutions are becoming available on the market (e.g. Emotiv,
OCZ's Neural Impulse Actuator, NeuroSky's MindSet);
• Extensive research has already been done on exergaming, i.e. games to
promote physical activity. However, games promoting physical activity outside
of the home environment are still rare or at the research stage;
• Simple mini-games can be very efficient and easier to manage than full-blown
games. Given the very high success rate of casual and social games,
integrating therapeutic components in such games could prove very
beneficial;
• A lot could be learned from e-learning game platforms: serious games for
learning have been around for a long time and could provide valuable input in
terms of standalone and multiuser approaches.
This review provides interesting input on gameplay and game concepts that will be
reused, adapted or enhanced in order to develop the PlayMancer game scenarios
presented in D2.1c.
Several serious games do not require a large amount of cognitive training. There is
a risk that players will not transfer their change in cognitive behaviour and attitudes
to real life, and that the change will not extend beyond the game itself. Moreover,
information given in a game may be harder for the player to apply in real life than
information received during a therapeutic session in a real-life environment. People
with less knowledge and experience of videogames, and of how to play, may have
a harder time advancing and focusing on the cognitive aspects of the game.
These important points will be kept in mind while developing the evaluation
methodologies for the PlayMancer games, which will be presented in D2.2
Evaluation Methodologies in month 11.
5 State of the Art: Multimodal Game Technologies
5.1 Biofeedback and Biofeedback Devices
5.1.1 Introduction
The physical and emotional state of a human is reflected by a multitude of changes
in his or her physiological state. For example, the heart of a person who is scared
will beat faster while at the same time his or her skin will transpire. In the
PlayMancer project the target is to provide a serious game that functions and reacts
according to the emotional and physical state of its player. It is thus important to be
able to automatically recognise the state of the user. This can be done, up to a
certain point, by measuring and analysing the changes in different physiological
signals and parameters of the patient, such as heart rate, transpiration (via skin
conductivity), oxygen saturation in the blood, muscular activity, etc. A multitude of
devices and tools are available on the market today for measuring biosignals. In this
section we present some examples of the most characteristic types of devices
available, a complete survey being practically impossible and out of the scope of the
project.
Bio-signs or biosignals are the biological signals emitted by the human body. They
constitute a non-classical user interface that is continually updated following the
state of the subject. These signals are monitored through special medical
equipment and reflect the subject's actual condition, excitement, body temperature,
etc. The feedback generated by these sensors is commonly known as biofeedback
and is useful in treating disorders or in rehabilitation programs. The non-invasive
nature of biofeedback sensors allows them to be widely used in different medical
sectors; they also allow a more effective analysis of the state of the body, and in a
much shorter time, than a doctor could achieve with classical instruments. Finally,
they have the advantage of producing no side effects that could disturb the
measurements or degrade the subject's physical state.
Today (mid 2008) there is a multitude of wearable biosignal measuring devices on
the market, with prices ranging from a few euros to thousands of euros, and new
ones are announced every month. Each device is characterised by its own accuracy
and configurability. In this section we thus provide some examples of biosignal
devices, a complete description of all available devices being practically impossible.
5.1.2 Common biosignals
The following biosignals are considered common in the medical sector; not all of
them will be used in the PlayMancer project, due to high cost or size. The sensors
must be as small as possible to be used in a mobile context.
• Electroencephalogram (EEG)
• Galvanic skin response (GSR)
• Electrocardiogram (ECG)
• Electromyogram (EMG)
• Pulse oximeter
• Motion sensors, accelerometers
• Skin temperature
Electroencephalogram (EEG)
An electroencephalograph monitors the activity of brain waves, which correspond to
different mental states. This biosignal will not be monitored within the PlayMancer
platform due to the high cost of the equipment and the fact that it is intrusive.
Fig. 21 g.MOBIlab EEG module (g.tec medical engineering GmbH)
Galvanic skin response (GSR)
Galvanic skin response sensors measure the activity of a patient's sweat glands and
the amount of perspiration on the skin. GSR is used in the video game Wild Divine
(see section 4.6.1) as an emotional indicator.
Fig. 22 Single channel GSR monitor GSR2 (Thought Technology Ltd.)
Electrocardiogram (ECG)
An electrocardiogram is a recording, produced by an electrocardiograph, of the
electrical activity of the heart.
Fig. 23 a) g.MOBIlab ECG/EMG module (g.tec medical engineering GmbH)
Fig. 23 b) MobiHealth Mobile™ module
Electromyogram (EMG)
An electromyogram uses electrodes to measure muscle tension. It can be used in
muscle rehabilitation, for example in cases of paralysis resulting from
cerebrovascular accident or heart attack. The MobiHealth Mobile, being
configurable, is able to measure EMG.
Pulse oximeter
Pulse oximetry is a simple non-invasive method of monitoring the percentage of
haemoglobin (Hb) that is saturated with oxygen. The pulse oximeter consists of a
probe, attached to the patient's finger or ear lobe, which is linked to a computerised
unit. It is based on optical spectroscopy of the light absorbed by oxygen-carrying
haemoglobin in the blood. This sensor is connected to the preprocessor. The
NONIN Xpod oximeter and the AABACO ear probe are examples of this kind of
device.
Fig. 24 Xpod oximeter (Nonin) and AABACO medical, Inc ear probe
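The ratio-of-ratios principle behind pulse oximetry can be illustrated with a short, hedged sketch. The sensor measures pulsatile (AC) and baseline (DC) light absorption at red and infrared wavelengths; a commonly cited first-order empirical calibration is SpO2 ≈ 110 − 25·R, where R is the ratio of the normalised absorbances. The constants are a textbook approximation, not the calibration curve of any specific device such as the Xpod.

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Rough SpO2 estimate from red/infrared photoplethysmography signals.

    Uses the classic linear approximation SpO2 = 110 - 25 * R, where
    R = (AC_red / DC_red) / (AC_ir / DC_ir). Real oximeters use
    device-specific calibration tables instead of this formula.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

# Example: R = 0.5 corresponds to roughly 97.5% saturation.
spo2 = spo2_estimate(ac_red=0.01, dc_red=1.0, ac_ir=0.02, dc_ir=1.0)
```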
Motion sensors, accelerometers
Motion sensors consume little power (2 mW) and can sense linear acceleration. The
ADXL311EB from Analog Devices uses 3 accelerometers to measure motion along
three axes and reproduce 3D movements. The sensitivity is about 167 mV/g.
Fig. 25 CXL04LP3 3-axis accelerometer module (Crossbow)
Skin temperature
The temperature sensors are usually attached to the subject's fingers. Skin
temperature is indicating the stress or relaxation level of the subject.
Conclusion
The biosignals market is large, and many sensors are simply not usable in a game
context. Based on the architecture requirements, the g.tec g.MOBIlab module and
the MobiHealth system cover most of the needs for biofeedback information in the
PlayMancer game platform. The wireless capability of these devices is also an
advantage from a non-invasive perspective.
5.1.3 Wearable sensors
A recent sensor type has appeared: so-called "smart clothes", i.e. wearable system
implementations. These smart clothes can measure heart rate, skin temperature,
ECG, GSR and other biosigns. The main problem is reusability: in a medical
environment, patients are never the same size, so the sensors have to be
independent of the patient's physiology and comply with the sterility assurance
levels for medical products.
Wearable sensors can be woven into special clothes that a patient simply wears.
However, several issues can make them inefficient, such as the need to use the
correct size for the patient (so that the sensors are in the right place) and the use of
dry sensors that provide a lower-quality signal, as in the case of ECG. The following
two models exemplify wearable sensor technology.
• VivoMetrics LifeShirt
• ZOLL Lifecor LifeVest
In the same area as wearable clothes, single-use, very low-cost, self-adhesive
sensors are appearing on the market. These are fully integrated and miniaturised
sensors, with their own power and communication capabilities, attached to a plaster
and able to measure different types of signals. The Toumaz chip is the most notable
example.
VivoMetrics LifeShirt
The LifeShirt by VivoMetrics is based on a miniaturized, ambulatory version of
inductive plethysmography. The signals are recorded by a PDA running the
VivoLogic analysis and reporting software. The LifeShirt is equipped with a single-channel ECG sensor, a respiratory sensor and a three-axis accelerometer. Optional
peripheral devices measure blood pressure, blood oxygen saturation, EEG, EOG,
periodic leg movement, core body temperature, skin temperature, end-tidal CO2 and
cough.
Fig. 26 LifeShirt (VivoMetrics)
ZOLL Lifecor LifeVest
The LifeVest by ZOLL Lifecor is advertised as the first wearable cardioverter
defibrillator, providing a new treatment option for sudden cardiac arrest and offering
patients protection and monitoring as well as improved quality of life. The LifeVest
incorporates both a sensor and an actuator (the defibrillator), providing an example
of active automatic response to the measured signals.
Fig. 27 LifeVest (ZOLL Lifecor)
Toumaz Sensium
The Sensium™ is an ultra-low-power sensor interface and transceiver platform for a
wide range of applications in healthcare and lifestyle management. The device
includes a reconfigurable sensor interface, a digital block with an 8051 processor,
and an RF transceiver block. On-chip program and data memory permit local
processing of signals, a capability that can significantly reduce the transmitted data
payload.
Together with an appropriate standard external sensor, the Sensium provides ultra-low-power monitoring of ECG, temperature, blood glucose and oxygen levels. It can
also interface to 3-axis accelerometers and pressure sensors, and includes an
on-chip temperature sensor.
One or more Sensium-enabled digital plasters continuously monitor key
physiological parameters on the body and report to a Sensium basestation plugged
into a PDA or smartphone. The data can then be further filtered and processed by
application software.
Fig. 28 Sensium (Toumaz)
Conclusion
Life vests or shirts are, as mentioned, tailored to fit the patient's physical anatomy.
This leads to high costs and makes them hard to deploy to a large number of users.
Self-adhesive, disposable wearable sensors, on the other hand, provide an
interesting alternative for non-intrusive biosignal monitoring. However, these
sensors are only just appearing on the market for clinical testing and are not yet
available for commercial use.
5.1.4 Linking the sensors
In order to analyze multiple biosignals and to research which sensor modalities can
best be utilized within the project, the mobile biosignal acquisition device g.MOBIlab
(from g.tec) or the MobiHealth system will be acquired.
The g.MOBIlab comes with 4 EEG/EOG channels, 2 ECG/EMG channels, 4 digital
channels and 2 analog inputs which can be used for other sensors. The module
includes a 16-bit microcontroller running at 7 MHz with low power consumption
(100 mW). It has 8 A/D inputs and a 27 Hz sampling rate for 3 channels, and is
based on an RTOS that supports a standard network interface (Zworld LP3500).
The MobiHealth Mobile™ integrates with compact third-party sensor systems
through industry-standard interfaces (e.g. Bluetooth), supporting monitoring of a
wide range of physiological parameters 'out of the box'. It can also quickly integrate
custom sensor systems to take advantage of new cutting-edge measurement
technology and improve wearability for patients with even smaller systems.
Supported physiological monitoring functions of MobiHealth Mobile include: multi-lead ECG, multi-channel EMG, plethysmogram, pulse rate, oxygen saturation
(SpO2), respiration and core/skin temperature.
The MobiHealth Mobile module has been successfully used within the MobiHealth
[94] project, which was supported by the Commission of the European Union in the
5th research Framework. The module was used together with a smartphone
connected through Bluetooth to remotely monitor the state of a patient.
The g.MOBIlab biosignals can be recorded on a notebook equipped with the
g.MOBIlab g.HIsys (SIMULINK) or g.DAQsys (MATLAB) software; an optional
device driver/API enables users to build their own applications.
The wireless capability of the MobiHealth Mobile device will allow mobile users to
play a multi-player game outside the monitored playing room. This will not be
possible for patients in the rehabilitation program. Within the PlayMancer game,
biosignals will give important indications of a patient's medical condition, motivation,
excitement and engagement. The game itself will respond to these signals and
provide feedback accordingly.
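As an illustration of how a game could respond to such signals, the hedged sketch below smooths incoming heart-rate samples with a moving average and maps them to a single pacing parameter. The class name and the 60–100 BPM comfort band are illustrative assumptions, not part of the PlayMancer specification.

```python
from collections import deque

class HeartRatePacing:
    """Smooths raw HR samples and derives a pacing factor in [0, 1].

    0 means the player is at the relaxed end of the band; 1 means the game
    should apply maximally calming feedback (slower pace, softer audio).
    """
    def __init__(self, window=5, hr_low=60.0, hr_high=100.0):
        self.samples = deque(maxlen=window)   # sliding window of recent BPM
        self.hr_low, self.hr_high = hr_low, hr_high

    def update(self, bpm):
        self.samples.append(bpm)
        mean = sum(self.samples) / len(self.samples)
        # Normalised position of the smoothed HR inside the band, clamped.
        t = (mean - self.hr_low) / (self.hr_high - self.hr_low)
        return min(1.0, max(0.0, t))

pacing = HeartRatePacing()
for bpm in (70, 72, 68, 71, 69):
    level = pacing.update(bpm)   # smoothed mean is 70 BPM -> level 0.25
```

The moving average is a deliberate choice here: raw HR readings from wireless sensors are noisy, and reacting to every sample would make the game feedback jitter.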
Conclusion
The g.MOBIlab and MobiHealth Mobile biosignal acquisition systems are the natural
choice for the PlayMancer biofeedback requirements. They are based on a modular
design that allows future development, can accept 12 channels and transmit the
data wirelessly. Recording and analyzing multimodal biosignal data in a single
module is the right approach to minimize the presence of these sensors on the
user. They can manage brain, heart and muscle activity, eye movement, respiration,
galvanic skin response, and other body signals that can be connected to the two
analog inputs. Depending on market availability and support conditions, one of the
two systems will be acquired for the project.
5.1.5 Heart Rate / GSR and Games
Heart Rate
According to the previous literature on videogames, computer games can be used
as acute laboratory physiological stressors and can increase a person's acute
psychological stress [158]. Furthermore, heart rate (HR) can be reduced if HR
feedback is given while playing the videogame: players who received HR feedback
(biofeedback) while playing, as opposed to players who did not, reduced their HR
significantly more [159]. [160] also showed that subjects receiving feedback
exhibited greater reductions in HR in response to a videogame; however, no
differences in videogame performance were found between the groups.
Relaxation has proved useful as a tool to reduce HR. Knowlton and Larkin [122]
showed that progressive relaxation training (PRT) reduced HR, self-reported anxiety
and self-report measures of tension (SRT).
After reviewing previous studies in more depth, we have come to the conclusion
that it would be better to measure the patient's HR at the beginning of each session.
For example, at the start of the game, during loading time, a message could appear
on the screen instructing the patient to relax while listening to relaxing music.
Walshe et al. [120] found that they could generate an increase in HR through a
videogame, with a mean increase of 15 BPM (beats per minute). Wang and Arlette
[121] conducted a study in young boys who had to play a PlayStation game, and the
following changes in HR were found:
• An increase of 18.8% (86.9 to 103.2 BPM) during or after playing the game
• An increase of 33.5% (86.9 to 116.0 BPM) during the most active 3 minutes of
game play
Therefore, we believe that an appropriate HR increase would range between 15%
and 35%. However, HR can vary between different pathologies, which is why it
would be useful for the therapist to be able to customize the range of HR increase
for each patient and session.
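Translating these percentages into per-patient thresholds is straightforward; the sketch below computes a customizable target band from a measured baseline. The default 15–35% band comes from the figures above, while the function itself is only an illustration of how a therapist-adjustable range might be applied.

```python
def hr_target_band(baseline_bpm, low_pct=15.0, high_pct=35.0):
    """Return (lower, upper) target heart rates for a session.

    baseline_bpm: resting HR measured at the start of the session.
    low_pct / high_pct: therapist-customizable percentage increase band.
    """
    lower = baseline_bpm * (1.0 + low_pct / 100.0)
    upper = baseline_bpm * (1.0 + high_pct / 100.0)
    return lower, upper

# With the 86.9 BPM baseline reported by Wang and Arlette:
low, high = hr_target_band(86.9)   # ≈ (99.9, 117.3) BPM
```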
Najström and Jansson [123] used skin conductance reactivity to assess
psychological distress following emotionally stressful life events. They found that
enhanced skin conductance was a strong predictor of emotional responding to
stressful life events.
Galvanic Skin Response
Galvanic skin response (GSR), also known as electrodermal response (EDR),
psychogalvanic reflex (PGR), or skin conductance response (SCR), is a method of
measuring the electrical resistance of the skin. There has been a long history of
electrodermal activity research, most of it dealing with spontaneous fluctuations.
Most investigators accept the phenomenon without understanding exactly what it
means. There is a relationship between sympathetic activity and emotional arousal,
although one cannot identify the specific emotion being elicited. The GSR is highly
sensitive to emotions in some people. Fear, anger, startle response, orienting
response and sexual feelings are all among the emotions which may produce similar
GSR responses.
By measuring heart rate and galvanic skin response, the person playing the game
can observe his or her current state (feedback). An example of how this could be
done is through the use of a ring, worn on the finger, which is connected to a
computer [161][162].
5.1.6 Conclusion
Although the devices presented above, and the general trend for new devices,
concern biosignal measurement for purely medically controlled use, developers of
user applications and services are identifying the advantages these devices offer for
the personalisation of services based on the physical and psychological state of the
user. In the future, these devices will become simpler to use (applied like a standard
plaster and disposable) while their cost will drop, allowing them to become a
standard accessory for service and application users.
The PlayMancer project aims to make use of these future, readily available biosignal
measurements in order to provide personalised game adaptation based on the
emotional and physical state of the players. We will thus experiment with, and
integrate in our trials, a set of simple but accurate, easy-to-wear and robust
biosignal measuring devices that will allow identifying the state of the users. The
choice of devices is driven by the need for easy-to-wear, relatively low-cost and
minimally intrusive equipment. In deliverable D2.1d we provide in more detail the
user requirements that drive our choice of biosignal measurement devices.
5.2 Multimodal I/O
5.2.1 Introduction
Input and output devices are a fundamental part of every computer game. Not only
do they influence the game design (and vice versa), they also determine which
users are able to participate in it. One of the major goals of the PLAYMANCER
project is to develop a multi-modal framework for serious games in health, which
should furthermore be universally accessible. Therefore, the system should be
designed to allow users with different abilities and impairments to enjoy and benefit
from the games. Focussing on only a single device could limit accessibility, which is
why the fusion of multiple input and output modalities is an important objective.
Fig. 29 Example of a gaming setup using multimodal I/O
5.2.2 Input/Output Devices
This subsection gives an overview of some major input and output devices (I/O
devices), with an emphasis on those used at the Interactive Media Systems Group
(IMS) at TUW. These devices are primarily intended for virtual reality (VR) or
Augmented Reality (AR) applications but would also be suitable for games. Besides
the application domain, we will focus on their capabilities as well as their limitations.
Input devices in VR are devices to mediate the user’s input into the VR simulation
(e.g. 3D Tracker, trackballs, sensing gloves, cubic mouse). VR output devices
provide feedback from the simulation in response to the input; relevant sensory
channels are sight (visual/graphics displays), sound (3D sound/auditory display) as
well as touch (haptic displays).
Input Devices
There are different types of input devices depending on the application domain (e.g.
immersive or desktop applications), so an important part of 3D user interface design
is choosing the appropriate set of input devices that allow the user to communicate
with the application [49]. One of the most important characteristics used to describe
input devices is the degrees of freedom (DOF). A degree of freedom is a particular,
independent way in which a body moves in space. A device's DOF indicates how
complex the device is and how well it can accommodate various interaction
techniques. Commonly used are:
• 2 DOF -> 2D, e.g. mouse
• 3 DOF -> position
• 3 DOF -> orientation (rotation relative to coordinate system axes): roll / pitch /
yaw
• 6 DOF -> position and orientation; this degree of freedom is usually desired
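A 6 DOF pose is simply the combination of a 3D position and a 3D orientation; the minimal sketch below models it with roll/pitch/yaw angles. The type and method names are illustrative, not taken from any tracking API.

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """Position (metres) plus orientation (degrees) = 6 degrees of freedom."""
    x: float
    y: float
    z: float        # 3 DOF: position
    roll: float
    pitch: float
    yaw: float      # 3 DOF: orientation

    def translated(self, dx, dy, dz):
        """Move the pose without changing its orientation (3 of the 6 DOF)."""
        return Pose6DOF(self.x + dx, self.y + dy, self.z + dz,
                        self.roll, self.pitch, self.yaw)

# A tracked pen held 1.2 m up, tilted 45° and turned 90°:
pen = Pose6DOF(0.0, 1.2, 0.5, 0.0, 45.0, 90.0)
moved = pen.translated(0.1, 0.0, 0.0)
```

A 2 DOF mouse, by contrast, would need only the `x`/`y` fields; the remaining four axes are what richer 3D interaction techniques exploit.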
Input devices can also be described by how much physical interaction is required to
use them. A purely active input device requires the user to perform some physical
action before data is generated (e.g. a button). Purely passive input devices, also
called monitoring input devices, do not require any physical action to function: these
devices continue to generate data even when untouched, but users can manipulate
them like active input devices as well (e.g. a tracker).
All available input devices can be broken up into the following categories:
• Desktop input devices (e.g. keyboard, 2D mouse, trackballs, pen-based
tablet, joysticks)
• Tracking devices (e.g. motion tracker, eye tracker, data gloves)
• Special purpose or hybrid input devices (e.g. 3D mice, interaction slippers)
• Direct human input (e.g. speech input, bioelectric input, brain input)
In general, the usual desktop devices mentioned above cannot be used in 3D,
because users are standing or physically moving and there is no surface on which
to place a keyboard. In addition, it is difficult or even impossible to see keys in
low-light environments or when wearing a head-mounted display. However,
symbolic input (characters, numbers) is usually less frequent in 3D applications than
in 2D.
Motion Tracking
One of the most important aspects of virtual worlds is providing a correspondence
between the physical and the virtual environment. A tracker is hardware used in VR
to measure, in real time, changes in the 3D position and orientation of an object.
Most virtual environment applications track head and hand to ensure a correct
viewing perspective and interaction. A common interaction device at the IMS is a
wireless pen, shown in the following figure. This pen enables manipulation of virtual
objects by pointing at and selecting them, which is accomplished by tracking the
pen's position and orientation and by a button on the pen for selecting objects.
Fig. 30 Wireless pen developed at IMS
Other common interaction devices are gloves, hybrid devices which provide
continuous and discrete input at the same time, haptic devices, and locomotion
devices like treadmills or stationary bicycles.
Accurate tracking is a crucial part of making interaction usable within virtual
environment applications. The critical characteristics of motion trackers include:
• Range: distance at which the tracker accuracy is acceptable
• Latency: time delay between the user's action and the reported result
• Jitter: the change in tracker output when the tracked object is stationary
• Accuracy: of the tracked position data
• Update rate: number of position/orientation updates per second
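Jitter, for instance, can be quantified as the spread of reported positions while the target is held perfectly still. The snippet below estimates it as the RMS deviation from the mean of a batch of stationary samples; this is one common convention, not a definition mandated by any particular tracker.

```python
import math

def position_jitter(samples):
    """RMS deviation (same units as the input) of 1D position samples
    recorded while the tracked object is stationary."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

# Reported x-positions (mm) of a motionless marker:
jitter = position_jitter([10.0, 10.2, 9.8, 10.1, 9.9])
```

Latency and update rate are measured similarly against timestamps rather than positions; together these figures tell a designer whether a given tracker is usable for a given interaction technique.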
Currently there are a number of different motion-tracking technologies in use,
including:
• Magnetic tracking: a non-contact position measurement technique that uses a
magnetic field produced by a stationary transmitter to determine the real-time
position of a moving receiver element.
• Mechanical tracking: mechanical trackers have a rigid structure with a number
of interconnected mechanical linkages. One end is fixed in place while the
other is attached to the object to be tracked.
• Acoustic tracking: a non-contact position measurement technique that uses an
ultrasonic signal produced by a stationary transmitter to determine the
real-time position of a moving receiver element.
• Inertial tracking: self-contained sensors that measure the rate of change of an
object's orientation, and may also measure the rate of change of an object's
translation velocity.
• Hybrid tracking: a combination of two or more measurement technologies to
optimize the accuracy of the motion tracking.
• Optical tracking: a non-contact position measurement technique that uses
optical sensing to determine the real-time position of a moving receiver
element. To determine an object's position and orientation in 3D space,
so-called "markers" are used which reflect infrared light emitted by the optical
tracker.
Two approaches are used for wide-area tracking:
• Outside-Looking-In: The optical tracker (e.g. camera) is fixed and the markers
are placed on the user. The position measurements are made directly and the
orientation is inferred from the position data. This is the traditional way of
optical tracking.
• Inside-Looking-Out: The optical tracker (e.g. camera) is attached to the
tracked object and the markers are fixed. This way of tracking provides
maximum sensitivity to changes in orientation.
Optical tracking is mostly used at the IMS because it offers some advantages
compared to the other measurement techniques: it provides high accuracy, tracking
is done wirelessly, and it has a high update rate (60 Hz). Problems with optical
tracking occur when objects are occluded; the position and orientation of such
objects cannot be determined correctly because they cannot be "seen" by the
optical tracker.
The optical tracking system iotracker (www.iotracker.com) was developed at TUW
and will be used in this project. An installation can be provided for the duration of the
project.
Fig. 31 Illustration of the iotracker setup (Source: www.iotracker.com)
The typical setup of an iotracker system is shown in the figure above. The cameras
track the position and orientation of the user's HMD, to provide a correct viewing
perspective of the virtual world, as well as the position and orientation of a wireless
pen with which the user can interact with the virtual environment. The tracking data
of all cameras in the setup is synchronised and processed by the workstation, and
the resulting output is displayed on the user's HMD.
The tracking of the HMD and the wireless pen is accomplished by constellations of
optical markers, so-called rigid-body targets, attached to each device. The
interaction between the optical tracker's cameras and these targets is illustrated in
the following figure. With this method, any kind of object can be tracked by simply
attaching a rigid-body target to the desired object.
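The pose of such a rigid-body target can be recovered from the reconstructed 3D positions of its markers; a standard way to do this is least-squares alignment (the Kabsch algorithm) of the measured marker positions against the target's known reference geometry. The NumPy sketch below illustrates this general technique; it is not the algorithm actually implemented inside iotracker.

```python
import numpy as np

def rigid_body_pose(ref, obs):
    """Least-squares rotation R and translation t with obs_i ≈ R @ ref_i + t.

    ref: (N, 3) marker positions in the target's own coordinate frame.
    obs: (N, 3) the same markers as reconstructed by the cameras.
    Classic Kabsch algorithm via SVD of the 3x3 covariance matrix.
    """
    ref_c = ref - ref.mean(axis=0)
    obs_c = obs - obs.mean(axis=0)
    H = ref_c.T @ obs_c                      # covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs.mean(axis=0) - R @ ref.mean(axis=0)
    return R, t

# Example: a 4-marker target rotated 90° about z and shifted.
ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
obs = ref @ Rz.T + np.array([0.5, 0.2, 0.1])
R, t = rigid_body_pose(ref, obs)
```

Because the marker constellation is asymmetric (each target has a unique geometry), the same fit also identifies which target is which.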
Fig. 32 Basic principle of infrared-optical tracking (source: www.iotracker.com)
Tracking without markers
Tracking an object in space without using markers is an active research area. Two
possible technologies are available:
• Natural Feature Tracking: a large area of research with various methods
available. The basic idea is to detect reliable features in the image (e.g. lines,
corners) and compare them with a reference image.
• Z-Cameras: a z-camera outputs, besides the RGB values of the image, a
depth image with which an object can be tracked in space.
Motion capture
Motion capture or motion tracking is a technique for digitally recording movements
for entertainment, sports, and medical applications [50][51]. The technique can be
used for tracking objects in real time in virtual environments. The most common
application of motion capture hardware is the capture of data for use in 3D
animation software or motion analysis.
Motion capture is typically accomplished by one of three technologies: optical,
magnetic and electro-mechanical. While each technology has its strengths, no
single motion capture technology is perfect for every possible use. Common
systems include:
• Exoskeleton (mechanical)
• Wireless magnetic sensors
• Wireless inertial sensors
• Marker-based (optical)
• Pure vision-based (no markers are used; under development)
Output devices
Output devices, also called display devices, are the hardware that presents
information from the virtual environment to the user in one or more perceptual ways;
the majority of displays focus on stimulating one of the following senses:
• sight/vision
• hearing/audition
• touch/haptics (force and touch)
The degree of the user's immersion in the virtual world depends on the number of
human senses addressed by the simulation and on how convincing the simulation
of reality is. Immersion can be classified as follows:
• Desktop Virtual Reality = "Window on World" system
o Conventional screen + 3D graphics
• "Fishtank" Virtual Reality
o Tracking
o Stereo (shutter glasses)
• Semi-immersive
o CAVE, Workbench, large stereo screens
• Full Immersion
o E.g. head-mounted display
o Options: audio, haptic interface
Visual displays present information to the user through the human visual system
and are by far the most common display devices used in 3D user interfaces. Sound
displays provide synthetic sound feedback in response to user interaction with the
virtual world. Touch feedback conveys real-time information on contact surface
geometry, virtual object surface roughness, slippage and temperature; it does not
actively resist the user's contact motion and cannot stop the user from moving
through virtual surfaces. Force feedback provides real-time information on virtual
object surface compliance, object weight and inertia; it actively resists the user's
contact motion and can stop it (for large feedback forces).
Display devices need to be considered when designing interaction techniques,
because some interaction techniques are more appropriate than others for certain
displays.
Visual Displays
Below is a list of common visual display devices; some are explained further in the
following sections:
• Monitors
• Surround-screen displays
• Workbenches
• Hemispherical displays
• Head-mounted displays
• Arm-mounted displays
• Virtual retina displays
• Auto-stereoscopic displays
Surround-screen displays
A surround-screen display is a visual output device that has three or more large
projection-based display screens that surround the human participant. Typically, the
screens are rear-projected so users do not cast shadows on the display surface.
Surround-screen displays allow several users located in close proximity to
simultaneously view an image of the virtual world.
The first surround-screen VR system was called the CAVE (Cave Automatic Virtual
Environment) and consisted of four screens (three walls and one floor). The RAVE
(Reconfigurable Assisted Virtual Environment) is a further development of the
CAVE configuration. It is designed to be a flexible display device that can be used
as a flat wall, a variable-angle immersive theatre, a CAVE-like four-screen
immersive environment, an L-shaped cove with a separate wall, or three separate
review walls.
Fig. 33 CAVE (Computer Assisted Virtual
Environment)
Fig. 34 RAVE (Reconfigurable Assisted
Virtual Environment)
Surround-screen displays provide high spatial resolution and a large field of regard (FOR – the amount of the physical space surrounding the user in which visual images are displayed). Furthermore, they have a large field of view (FOV – the maximum number of degrees of visual angle that can be seen instantaneously on a display), allowing the user to utilize his peripheral vision. Their biggest disadvantage is that they are expensive and often require a large amount of physical space. Even though surround-screen displays are designed for multiple users, all images are rendered from the tracked user's perspective. If an untracked user moves, there will be no visual response from the virtual environment. As the tracked user moves, all non-tracked users see the environment through the tracked user's
perspective, which can cause cue conflicts and lead to cybersickness. This problem is a fundamental limitation of surround-screen displays.
Workbenches
Workbenches are stationary, projection-based displays which have been designed to model and augment interaction that takes place on tables and desks. Workbenches provide high spatial resolution and make for an intuitive display for certain types of applications. In many device designs, the display can be rotated so that the screen's orientation relative to the user ranges from completely horizontal to fully vertical, making the device quite flexible.
The device accommodates multiple users, but all users must share the same viewpoint. Users have limited mobility when interacting with a workbench because the display is not head-coupled like a head-mounted display.
Fig. 35 Example of a workbench
Head Mounted Displays
Head-mounted displays (HMDs), frequently used at IMS, are, in contrast to monitors, surround-screen displays and workbenches, not stationary but move with the user. They are one of the most common head-coupled display devices used for virtual environment applications. The HMD's main goal is to place images directly in front of the user's eyes using one (for monoscopic viewing) or two (for stereoscopic viewing) small screens. HMDs are classified as personal graphic displays because they output a virtual scene destined to be viewed by a single user.
Fig. 36 The Sony Glasstron, an HMD using LCD displays
One of the biggest advantages of HMDs is that the user can have complete physical
visual immersion because the user always sees the virtual world regardless of head
position and orientation. Although, in general, HMDs block out the real world, a
camera is sometimes mounted on the HMD to display both real-world video and
graphic objects. Sometimes HMDs offer see-through options. This type of technology
is used in augmented and mixed reality systems. Because the real world may be
completely blocked out from the user’s view, interaction while wearing an HMD
requires some type of graphical representation of either one or both hands, or the
input device used.
Even though HMDs have a 360° field of regard (FOR), which refers to the amount of physical space surrounding the user in which visual images are displayed, many of them have a small field of view (FOV, between 30° and 60° horizontally), which refers to the maximum number of degrees of visual angle that can be seen instantaneously on a display. This small field of view can cause perception and performance problems. HMDs have the advantage of being more portable, brighter and less expensive than projection-based displays. However, projection-based displays and monitors generally have higher spatial resolutions than HMDs, because HMD display screens have to be small and lightweight to keep the overall weight low.
When the devices described above are used together in a more complex system, some problems can arise that have to be taken into consideration.
5.2.3 Multimodal I/O issues
The combination of data from different devices is usually a non-trivial problem. Different measurement modalities, update rates and error properties make it difficult to synchronize the data. Furthermore, with each device added, the technical complexity increases, which usually reduces application stability. Additionally, many devices make a system harder to maintain. Finally, the most obvious issue
with multimodal I/O is that all devices have to be supplied with power and the measurements have to be transferred by either cable or wireless transmission. If not planned carefully, this can result in tangled cables or wireless interference.
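One of the synchronization issues mentioned above, different update rates, is commonly handled by resampling: samples of a slower device are interpolated at the timestamps of a faster one before fusion. A minimal sketch of this idea (illustrative only; real middleware also handles latency compensation and prediction):

```python
def interpolate(stream, t):
    """Linearly interpolate a stream of (timestamp, value) samples at time t."""
    for (t0, v0), (t1, v1) in zip(stream, stream[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)      # fractional position between samples
            return v0 + w * (v1 - v0)
    raise ValueError("t outside the recorded stream")

# A low-rate device delivered two samples; a faster device samples in between.
slow = [(0.0, 0.0), (1.0, 4.0)]
fast_times = [0.0, 0.25, 0.5, 0.75, 1.0]
print([interpolate(slow, t) for t in fast_times])  # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```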
5.2.4 State of the art
Overview
Most applications that use multimodal I/O have been developed in the fields of virtual reality and augmented reality. There, incorporating tracking data involves multiple tasks such as operating devices, reading specialized network protocols, performing calculations to fuse data from different sources, and interpreting the result to provide multimodal interaction [45]. Handling these tasks at the application level results in complex and inflexible software which lacks extensibility.
Therefore, the superior approach is to use a middleware layer which encapsulates the hardware devices and provides interfaces to the application. This offers the opportunity to change hardware components without having to rewrite large portions of the application software. Use in networked UA games poses certain requirements on such a middleware; the most important are briefly listed in the following subsection.
Fig. 37 Middleware separating application from devices and network
Requirements for multimodal I/O middleware in a game framework
1. Device abstraction: The different input devices should be hidden behind
interfaces to provide reusability and portability of the application.
2. Network transparency: Networked UA games require network transport of
input data. However, applications should be independent of an actual
configuration/setup. Therefore, the middleware should hide how the devices
are distributed over the network.
3. Support for complex configurations: Calibration and configuration of the
devices should be hidden from the application. Furthermore, the non-trivial
task (see 5.2.3) of combining the data of multiple devices into a single virtual
device should be handled by middleware.
4. Low overhead and latency: Real-time response is crucial for most computer
games. Therefore only minimal processing time should be added to the
overall process.
5. Extensibility: Hardware evolves and new devices need to be supported by integration of new device drivers. Additionally, modules that process input data (transformations, filters, etc.) should be straightforward to add and exchange. Extensibility is especially important to increase the life cycle of the PLAYMANCER game platform.
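As a rough illustration of requirements 1–3 above, the following Python sketch shows how device abstraction, network transparency and the combination of devices can look behind a single interface. All names and sample values are invented for the illustration; this is not the PLAYMANCER or Opentracker API:

```python
from abc import ABC, abstractmethod

class InputDevice(ABC):
    """Requirement 1 (device abstraction): the game only sees this interface."""
    @abstractmethod
    def poll(self) -> dict:
        """Return the device's current sample as named channels."""

class LocalJoystick(InputDevice):
    def poll(self):
        return {"stick_x": 0.2, "stick_y": -0.1}   # stub for a real driver read

class RemoteTracker(InputDevice):
    """Requirement 2 (network transparency): a proxy hides the remote host."""
    def __init__(self, host, port):
        self.addr = (host, port)                   # hypothetical endpoint
    def poll(self):
        return {"head_x": 0.0, "head_y": 0.5}      # stub for a network read

class MergedDevice(InputDevice):
    """Requirement 3: several devices combined into one virtual device."""
    def __init__(self, *devices):
        self.devices = devices
    def poll(self):
        sample = {}
        for d in self.devices:
            sample.update(d.poll())
        return sample

# The game code is identical whichever concrete devices are configured:
device = MergedDevice(LocalJoystick(), RemoteTracker("tracker-host", 12345))
print(device.poll())
```

Swapping `RemoteTracker` for another `InputDevice` subclass requires no change in the game code, which is the point of the middleware layer.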
Software frameworks for multimodal I/O
For the task of processing data from multiple input devices, a handful of frameworks exist. The most prominent that are still supported are Opentracker [45], VRPN [46], VRCO trackd [47] and Gadgeteer [48]. These four more or less match the above criteria.
All frameworks provide device abstraction and network transparency. Supported input devices include different types of trackers (magnetic, optical, etc.) as well as joysticks, space mice, cyber gloves, haptic devices, etc. Opentracker and Gadgeteer additionally support the VRPN interface and thus effectively include all the devices of the VRPN library. Supported output hardware ranges from stereoscopic monitors to projection setups and head-mounted displays. Furthermore, Opentracker, Gadgeteer and VRPN can easily be extended by adding modules for new devices. VRCO trackd, on the other hand, is proprietary software and lacks this advantage. All frameworks listed here have relatively low latency and are thus applicable for games. However, only Opentracker supports advanced operations and simple configuration thereof, which is necessary for the support of complex configurations, as described in the previous subsection. We therefore consider Opentracker the best option for multimodal I/O in a game platform and will thus focus on this framework.
Opentracker
Opentracker is a generic data flow network library, which was primarily designed to
deal with tracking data. It provides an abstraction of operations on input data that
separates the applications from the devices and setups. Due to its flexible design,
Opentracker can be used to handle not only tracking devices but also other input
hardware.
In a typical VR application or game, input data passes through a series of steps: it is generated by hardware devices, read by drivers, transformed to fit the requirements of the application, and sent over network connections to other hosts. Different setups and applications may require different subsets and combinations of these steps, but the individual steps are common among a wide range of applications. Examples of such invariant steps are geometric transformations, filters and data fusion of two or more data sources. The main idea, besides forming a layer between hardware and application, is to break up the whole data manipulation into these individual steps. A data flow network is then built from the operations according to the concepts described in the next subsection.
Concepts of Opentracker
Each unit of operation in OpenTracker is represented by a node in a data flow graph.
Nodes are connected by directed edges to describe the direction of flow. The
originating node of a directed edge is called the child whereas the receiving node is
called the parent. To allow more than simple flow graphs, nodes, ports and edges are used as follows.
• Port: part of a node, used to receive (input port) or send (output port) data events. A node can have multiple input ports (to allow e.g. merge operations) but only one output port. Furthermore, an input port can be connected to multiple output ports and vice versa.
• Edge: connects input and output ports.
• Node:
1. Source node: Most source nodes encapsulate a device driver that directly accesses a particular device. Other node objects form bridges to more complex self-contained systems (e.g. VRPN, as mentioned before) or generate data for test purposes.
2. Filter node: Computes its own state based upon the data events received via the input ports and outputs the result over the corresponding output port. Filter operations include geometric transformations, boolean and merge operations as well as actual filter functions (e.g. confidence filter, prediction filter, average filter...).
3. Sink node: Propagates data to external output.
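To make the node concepts concrete, the following Python sketch builds a tiny data flow graph with a source, an average filter and a sink. It illustrates the concepts only and is not the actual Opentracker (C++) API; note that, as described above, events flow from child nodes to their parents:

```python
class Node:
    def __init__(self):
        self.parents = []            # receiving nodes (events flow child -> parent)

    def connect(self, parent):
        self.parents.append(parent)

    def push(self, event):
        for p in self.parents:
            p.receive(event)

class SourceNode(Node):
    """Stands in for a device driver: generates data events."""
    def generate(self, value):
        self.push(value)

class AverageFilter(Node):
    """Filter node: outputs the running average of all received events."""
    def __init__(self):
        super().__init__()
        self.values = []
    def receive(self, event):
        self.values.append(event)
        self.push(sum(self.values) / len(self.values))

class SinkNode(Node):
    """Sink node: propagates data to external output (here: stores it)."""
    def __init__(self):
        super().__init__()
        self.received = []
    def receive(self, event):
        self.received.append(event)

# Build the graph: source -> average filter -> sink
source, avg, sink = SourceNode(), AverageFilter(), SinkNode()
source.connect(avg)
avg.connect(sink)
for v in [1.0, 2.0, 3.0]:
    source.generate(v)
print(sink.received)   # running averages: [1.0, 1.5, 2.0]
```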
The structure of the data flow graph (the configuration) is loaded from an XML file, which makes it convenient to alter and provides some consistency checking using a DTD. A recent extension of Opentracker also allows reconfiguration at runtime. Examples of Opentracker data flow graphs can be seen in the following figures.
Fig. 38 Visualizations of examples for data flow graphs
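In such a configuration file, child nodes are nested inside their parents, so the XML tree mirrors the data flow graph. The sketch below follows Opentracker's general style; the exact element names and attributes should be taken from the DTD shipped with the library:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Schematic Opentracker-style configuration: a test source feeds a
     geometric transformation, whose output is printed by a console sink. -->
<OpenTracker>
  <configuration/>
  <ConsoleSink comment="tracked position">
    <EventTransform translation="0 0.1 0">
      <TestSource frequency="10"/>
    </EventTransform>
  </ConsoleSink>
</OpenTracker>
```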
Hardware and Opentracker
As already introduced, Opentracker supports a variety of devices. The ones most important for use in games are described in the following. This list includes not only the input devices needed for the games described in part 4.8 but also devices that might be interesting for game developers using the PLAYMANCER platform; it is therefore necessarily incomplete.
• Optical trackers like the iotracker or A.R.T. tracker: These can be used to provide input to the motion capture modules that are going to be integrated in the platform.
• Biofeedback sensors: Opentracker can integrate e.g. gTec-gMobilab, which can be used to retrieve EMG data that is interesting for the application in motor rehabilitation.
• Haptic devices: Support for hardware like the Phantom Omni could be useful for games in physical rehabilitation as well.
• Mouse, keyboard and joysticks: Might be useful to provide “conventional” input to games.
For the games described in part 4.8 source nodes for the following devices will be
integrated:
• Wii balance board: The balance board or a similar device will be used to move around in the virtual environment.
Fig. 39 Wii balance board
Fig. 40 Squeezable input device
• Grabbing device: A device is going to be designed to allow users to grab objects in the game (part 4.8).
Conclusion
Opentracker provides a framework suitable for the use of multimodal I/O in
serious/UA games. It can be easily integrated in the PLAYMANCER platform and
provides extensibility and flexibility. This is important for the further use of the game
platform in the development of serious games.
5.2.5 Conclusion
This section has presented the state of the art in hardware devices and software frameworks which could be, or are being, used for multimodal I/O. Although these devices and the middleware are primarily intended for VR/AR applications, they can also very well be used for serious games. Their properties and suitability have been discussed briefly, and issues arising from merging input signals from different devices have been presented. Finally, a middleware framework well suited for integrating different devices within the project has been introduced in more detail.
5.3 Speech / Dialogue
Although speech-based interaction is the dominant interaction modality between humans, in human-computer interaction (HCI) it has gained some ground but is still not the dominant interaction mode. Research efforts in the HCI area are focused on the integration of natural human interaction modes, such that no major effort is required when using technology, especially technology integrated into ambient and entertainment environments. In games, the use of input/output based on speech recognition and understanding together with speech synthesis can be compared to 3D rendering twenty years ago. Most efforts to integrate speech-based interaction in games have been made in research projects, and mostly for story-telling games. Use of speech-based interaction in games is limited by the high degree of difficulty encountered when trying to integrate the separate components into a flexible and configurable system. In PlayMancer, the integration of speech-based interaction into therapeutic games is desired. Although recognition of emotional speech is a high priority, the target also envisages the development of a flexible system, easy to integrate into various serious games.
5.3.1 Dialogue Systems
A dialogue system is a computer system intended to converse with a human user with a coherent structure. The conversation can employ text, speech, graphics, 2D or 3D gesture, touch and other input or output communication modes. Depending on the type of system and the target to be achieved during the interaction, various components can be included in a dialogue system. The most important component of a dialogue system is the dialogue manager, which handles the state of the dialogue and the dialogue strategy. Other possible components are: input recognizers (speech, graphics, gesture), an input interpreter, an output generator (natural language generator), output renderers (text-to-speech synthesizer, avatar), multi-modal fusion, a multi-modal interpreter, etc.
Types of dialogue systems can be categorized by style/initiative (command-based, menu-driven, mixed-initiative, natural language), by application (information service, entertainment, edutainment, healthcare, assistive systems), by interface (text-based, speech-based, graphical user interface, multi-modal), or by dialogue strategy (finite-state, frame-based, plan-based, linguistic-interpretation based).
Although currently used in various domains, speech-based interfaces are the most difficult to integrate in the game domain, especially when they are used in a multi-modal environment. The major problems arise in the integration of natural language understanding and generation, due to the high resource requirements for processing speech input, the confidence in interpreting the input and, in the case of game applications, time restrictions.
In the following sections, the state of the art for different speech-based system components and their use in particular domains is presented.
5.3.2 Speech Recognition and Understanding
Being the dominant modality for human communication, speech can convey levels of
abstraction inaccessible to other input modalities, and it is generally considered self-sufficient, carrying most of the informational content in a multimodal interface.
Several real-time Hidden Markov Model (HMM)-based speech recognition systems,
including development toolkits with programming APIs, are available for Windows
and Unix platforms, such as the Open Speech Recognizer [87], Loquendo ASR [88],
etc. These speech recognizers can achieve high recognition rates (over 95%) and
are easy to configure for various application-dependent command languages.
Unfortunately, speech input typically uses a limited vocabulary, forcing the user to
recall a particular command syntax. Ambiguities can appear as a result of
recognition errors as well as erroneous language constructs. These limitations in
current language understanding technology prevent natural expression of user
control actions through speech.
In [73] SYNFACE is presented, a telephone aid for hearing-impaired people which shows the lip movements of the speaker at the other end of the telephone line, synchronized with the speech. The SYNFACE system consists of a speech recognizer that recognizes the incoming speech and a synthetic talking head. The output from the recognizer is used to control the articulatory movements of the synthetic head. The prototype has been evaluated by hard-of-hearing users in the UK, the Netherlands and Sweden, providing quite satisfactory results.
5.3.3 Speech Synthesis and Natural Language Generation
Although Text-to-Speech (TTS) synthesis technology has improved considerably during the last decade, most computer games still use recorded speech, or even just text without any audio component. Recorded speech is expensive to produce and requires enormous amounts of storage when used in a computer role-playing game. No contemporary role-playing game uses TTS, mostly because contemporary TTS engines target telephony and handheld devices, such as Real Speak TTS [87] and Loquendo TTS [88], resulting in some features that are drawbacks for games. Recorded speech vs. TTS in games, in 2007, is analogous to sprites vs. 3D rendering in 1980. Some of the negative features of TTS engines can be addressed by voice conversion engines. Voice conversion is the procedure of modifying the speech of one speaker (source) so that it sounds as if it were uttered by another speaker (target). This technique can modify speech characteristics using conversion rules statistically extracted from a small amount of training data [90]. Many different methods have been proposed, of which the most common approach is a separate conversion of the spectral envelope (vocal tract) and of the spectral detail (excitation residual signal). Research in the field of speaker identification has shown that
the spectral envelope alone contains enough information to identify a speaker. Additionally, the use of spectral detail in voice conversion systems provides much more natural-sounding speech. Today, voice conversion systems use continuous transformation methods to achieve the mapping between the source and the target features. These methods are based mainly on artificial neural networks [91][92] and Gaussian mixture models [90][93].
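As a toy illustration of such a continuous transformation (deliberately reduced to one dimension and an affine map; real systems map full spectral feature vectors using GMM- or ANN-based regressions [90][93]), the mapping from source- to target-speaker features can be estimated from a small amount of paired training data by least squares:

```python
def fit_affine(src, tgt):
    """Least-squares fit of tgt ~ a*src + b from paired training frames."""
    n = len(src)
    mx, my = sum(src) / n, sum(tgt) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(src, tgt))
         / sum((x - mx) ** 2 for x in src))
    return a, my - a * mx

# Paired frames of one (hypothetical) spectral coefficient: source vs. target.
src = [1.0, 2.0, 3.0, 4.0]
tgt = [2.5, 4.5, 6.5, 8.5]          # here exactly: target = 2*source + 0.5
a, b = fit_affine(src, tgt)
print(a, b)                         # -> 2.0 0.5
```

At conversion time, every source frame is passed through the learned map before resynthesis; GMM-based methods generalize this by using a different local regression per mixture component.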
Natural Language Generation (NLG) components are required by complex systems, where dialogue data are provided in a descriptive way and not as running text. NLG is used to generate valid output sentences to be presented to the user. The process of generating text can be as simple as keeping a list of canned text that is copied and pasted, possibly linked with some glue text. The results are tied to specific domains.
In a natural language generation module, we often distinguish two components: a
planning and a syntactic component. The task of deciding what should be said is
delegated to a planning component. Such a component might produce an
expression representing the content of the proposed utterance. On the basis of this
representation the syntactic generation component produces the actual output
sentence(s). Although the distinction between planning and syntactic generation is not uncontroversial, such an architecture is considered here (see Fig. 41) in order to explain some of the issues that arise in syntactic generation.
Fig. 41 A block diagram of the Natural Language Generation component
A (natural language) grammar is a formal device that defines a relation between
(natural language) utterances and their corresponding meanings. In practice this
usually means that a grammar defines a relation between strings and logical forms.
During natural language understanding, the task is to arrive at a logical form that
corresponds to the input string. Syntactic generation can be described as the problem of finding the corresponding string for an input logical form.
We are thus making a distinction between the grammar, which defines this relation,
and the procedure that computes the relation on the basis of such a grammar.
Fig. 42 Detailed block diagram of the NLG component
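Seen this way, a grammar is a relation that can be traversed in either direction. A toy sketch with a hypothetical two-utterance "grammar" (a lookup table standing in for real grammar rules) shows the symmetry between understanding and generation:

```python
# The "grammar" as an explicit relation between strings and logical forms.
RELATION = [
    ("open the door",  ("open", "door")),
    ("close the door", ("close", "door")),
]

def understand(string):
    """Understanding: string -> logical form."""
    return next(lf for s, lf in RELATION if s == string)

def generate(logical_form):
    """Syntactic generation: logical form -> string."""
    return next(s for s, lf in RELATION if lf == logical_form)

print(understand("open the door"))    # -> ('open', 'door')
print(generate(("close", "door")))    # -> close the door
```

A real grammar defines this relation intensionally via rules rather than by enumeration, and separate procedures (parser, generator) compute it in each direction.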
The different types of generation techniques can be classified into four main
categories:
• Canned text systems constitute the simplest approach for single-sentence and multi-sentence text generation. They are trivial to create, but very inflexible.
• Template systems, the next level of sophistication, rely on the application of pre-defined templates or schemas and are able to support flexible alterations. The template approach is used mainly for multi-sentence generation, particularly in applications whose texts are fairly regular in structure.
• Phrase-based systems employ what can be seen as generalized templates. In such systems, a phrasal pattern is first selected to match the top level of the input, and then each part of the pattern is recursively expanded into a more specific phrasal pattern that matches some sub-portion of the input. At the sentence level, the phrases resemble phrase structure grammar rules, and at the discourse level they play the role of text plans.
• Feature-based systems, which are as yet restricted to single-sentence generation, represent each possible minimal alternative of expression by a single feature. Accordingly, each sentence is specified by a unique set of features. In this framework, generation consists in the incremental collection of features appropriate for each portion of the input. Feature collection itself can either be based on unification or on the traversal of a feature selection network. The expressive power of the approach is very high, since any distinction in language can be added to the system as a feature. Sophisticated feature-based generators, however, require very complex input and make it difficult to maintain feature interrelationships and control feature selection.
Many natural language generation systems follow a hybrid approach by combining
components that utilize different techniques.
A sophisticated NLG system needs to include stages of planning and merging of
information to enable the generation of text that looks natural and does not become
repetitive. Typical stages are:
(1) Content determination: Determination of the salient features that are worth being
said. Methods used in this stage are related to data mining.
(2) Discourse planning: Overall organisation of the information to convey.
(3) Sentence aggregation: Merging of similar sentences to improve readability and
naturalness.
(4) Lexicalisation: Putting words to the concepts.
(5) Referring expression generation: Linking words in the sentences by introducing
pronouns and other types of means of reference.
(6) Syntactic and morphological realisation: This stage is the inverse of parsing:
given all the information collected above, syntactic and morphological rules are
applied to produce the surface string.
(7) Orthographic realisation: Matters like casing, punctuation, and formatting are
resolved.
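A drastically simplified template-based generator (the second category above) can illustrate the pipeline; here content determination is a key filter and stages 4–7 collapse into filling a template and fixing the orthography. All data and names are invented for the sketch:

```python
# Template with slots for the salient attributes of a (hypothetical) shop record.
TEMPLATE = "{name} is located at {address} and can be reached at {phone}."

def content_determination(record):
    """Stage 1: keep only the attributes worth saying."""
    return {k: record[k] for k in ("name", "address", "phone")}

def realise(facts):
    """Stages 4-7 collapsed: fill the template, then orthographic realisation."""
    sentence = TEMPLATE.format(**facts)
    return sentence[0].upper() + sentence[1:]      # capitalise first letter

record = {"name": "a noodle shop", "address": "12 Example St",
          "phone": "555-0100", "opened": 1987}     # "opened" is not salient here
print(realise(content_determination(record)))
```

Discourse planning, aggregation and referring-expression generation only become necessary once several such sentences are combined into a coherent text.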
5.3.4 Dialog Management
Although human-computer interaction has a long history, the widely deployed systems, independently of the interaction interface used, are command- or menu-based, which in most cases implies a finite-state or frame-based strategy. In such system-initiated dialogue, questions are grouped together at design time and dialogue management often coincides with the application itself. Even in systems allowing some mixed initiative, with enough intelligence not to ask again for information which has already been provided, this rarely affects the ordering of subsequent questions. More natural dialogues can be achieved by clustering questions dynamically following changes by the user. The communication in more complex systems (e.g. interaction with robot companions) cannot be limited to speech interfaces only, but has to take into account all modalities used in human-human dialogues, such as gestures or the expression of emotions. Furthermore, such systems are not only dependent on the communication with the user, but also on other complex interactions with other platforms and the environment. As a result, the dialogue management cannot be the central control unit of the system, but remains the central
interfacing component between human users and the application. The scheme in Fig. 43 has been implemented as a framework [89]; in other words, it provides a core of common functionality that every multimodal dialogue system needs, while the task-specific parts of the system can be plugged in by a developer to produce a custom application. In this, the framework follows the object-oriented philosophy of inversion of control. The framework core calls the plugged-in components and ensures proper communication between them. This allows for easy and rapid application development, as the developer does not need to have knowledge of the framework internals, but only needs to implement the interfaces to the framework. Configuration of the implemented framework is largely declarative: the user specifies structures, the “what” knowledge, not procedures, the “how” knowledge.
Fig. 43 Schematic representation of speech-centric multimodal interface architecture.
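The finite-state strategy mentioned above, in which the question ordering is fixed at design time, can be sketched in a few lines of Python (states, prompts and answers are invented for the illustration):

```python
# Each state maps to (system prompt, next state); the graph is the strategy.
STATES = {
    "ask_name": ("What is your name?",         "ask_goal"),
    "ask_goal": ("What would you like to do?", "confirm"),
    "confirm":  ("Shall I proceed?",           None),
}

class FiniteStateDM:
    def __init__(self, start="ask_name"):
        self.state = start
        self.frame = {}                 # slots filled so far

    def next_prompt(self):
        return STATES[self.state][0] if self.state else None

    def observe(self, user_input):
        """Store the answer and advance along the fixed ordering."""
        self.frame[self.state] = user_input
        self.state = STATES[self.state][1]

dm = FiniteStateDM()
prompts = []
for answer in ["Ada", "start a game", "yes"]:
    prompts.append(dm.next_prompt())
    dm.observe(answer)
print(dm.frame)
```

Mixed-initiative and plan-based managers differ precisely in that the next question is chosen at runtime from the frame contents rather than read off a fixed graph.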
In [66] Furui and Yamaguchi introduced a paradigm for designing multimodal dialogue systems by designing and evaluating a variety of dialogue strategies. As an example, a system was built for retrieving particular information about different shops in the Tokyo metropolitan area, such as their names, addresses and phone numbers. The system accepted speech and screen touching as input, and presented retrieved information on a screen display.
In the Agenda architecture [85], the problem of managing the dialogue is characterised as a complex problem-solving task. In order to do this, a complex data structure is used to guide the system. This structure consists of a tree of handlers, while the agenda contains all topics relevant to the current task. Both the user and the system have the ability to change the order of the items in the agenda according to their needs. The dialogue management framework RavenClaw [86] is the successor to the Agenda architecture. RavenClaw separates the domain-dependent and domain-independent components of the dialogue manager, focusing on defining a hierarchical decomposition of the underlying task (see Fig. 44).
One of the main goals behind the development of the RavenClaw dialog management framework was to provide a solid test-bed for exploring error handling and grounding issues in spoken language interfaces. Although investigated at some point, some of the most important aspects of dialog management are still under research:
• Techniques that can be used to learn the optimal system behaviour on-line, from detected error segments, and to make these systems adapt and improve their performance over time;
• Timing and turn-taking behaviour – most spoken language interfaces assume a rigid (you speak – I speak) turn-taking behaviour, which can lead to turn-overtaking issues, slow down the dialog, and sometimes lead to complete communication breakdowns;
• Multi-participant dialog handling;
• Dynamic dialog task construction, which has been explored in several spoken dialog systems, although no generic framework for dynamic task construction has been investigated;
• Automatic knowledge extraction to construct language resources, such as dictionaries, language models, grammars and language generation templates, required by a spoken language interface. The development of these resources requires significant amounts of expert knowledge and time. For some domains, this knowledge exists in a different form, not suitable for direct use in a spoken language interface, and the automatic extraction of knowledge to build language resources would speed up the design of spoken dialog interfaces.
Fig. 44 Architectural model of the RavenClaw framework (multiple parallel SPHINX decoders feeding a recognition server, PHOENIX parsing for several domains, HELIOS confidence annotation, the RavenClaw dialog manager, ROSETTA language generation, THETA synthesis, a TTY server for text I/O and a Perl back-end, connected through Galaxy stubs and a process monitor).
The RavenClaw framework provides the highest level of integration between the
various components of a speech-based interaction system, without being limited to it.
The modularity of the framework promises easy integration of other interaction
modalities, and adaptation to any domain.
5.3.5 Speech Interfaces and Mobile Technologies
In [68] a multimodal application architecture is described which targets older or disabled people. This architecture combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation and user-tailored text planning, so as to enable rapid prototyping of multimodal interfaces with flexible input and adaptive output. The application provides a mobile multimodal speech-pen interface to restaurant and subway information.
In [71] MiPad is presented, a wireless personal digital assistant which fully integrates continuous speech recognition and spoken language understanding. In this way, users can accomplish many common tasks using a multimodal interface and wireless technology. This prototype is based on plan-based dialogue management, where the system interacts with the user to gather facts, which consequently trigger rules and generate more facts as the interaction progresses [72].
5.3.6 Speech Interfaces and Users with Special Needs
The design of services and applications to be usable by anyone, without the need for specialized adaptation (design for all, or inclusive design), has led to the use of speech in the context of multimodal interfaces for people with sensory impairments, as this user target group has limited access to certain services, such as web-based information services.
Kvale and Warakagoda [67] developed a speech-centric composite multimodal
interface to a map-based information service on a mobile terminal, especially useful
for dyslectic and aphasic users. Severely dyslectic and aphasic individuals could use
the public service neither by speaking nor by writing. Through this interface,
however, they could easily point at a map while uttering simple commands to the
system. This solution has therefore proven very useful for them in accessing web
information.
Kopecek in [69] dealt with the problem of generating a picture of a scene from a
description of it in terms of a natural-language ontology. That work describes and
discusses the idea of developing graphics through dialogue, in a way that is fully
accessible to blind users. The described scenario enables visually impaired people
to create their own pictures, which can be used in web presentations, emails,
publications, etc. In the same direction, a system called iGraph has been developed
[70]. Its aim is to provide people with visual impairments with short verbal
descriptions of the information illustrated in graphs, offering immediate benefits
whenever they need to interact with graphs.
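The graph-to-speech idea behind such systems can be sketched with a toy generator that turns a data series into a short verbal description. This is purely illustrative and is not the actual iGraph implementation; the function and phrasing are invented.

```python
# Toy verbal-description generator for a data series, in the spirit of
# graph-to-speech aids for visually impaired users (NOT the real iGraph).
def describe(label, values):
    lo, hi = min(values), max(values)
    trend = ("rises" if values[-1] > values[0]
             else "falls" if values[-1] < values[0] else "stays flat")
    return (f"{label} {trend} overall, ranging from {lo} to {hi}, "
            f"ending at {values[-1]}.")

sentence = describe("Monthly sales", [10, 12, 15, 14, 18])
```

A real system would of course detect richer patterns (peaks, outliers, periodicity) before verbalising them.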
5.3.7 Speaker Emotion Recognition
Recognition of emotions in either unimodal or multimodal dialogue systems can
enhance the user-friendliness of the provided application. Especially when dealing
with dialogue systems involved in game playing, detecting the user's emotions
provides the means for modifying the dialogue flow accordingly, thus leading to more
successful interaction experiences. Moreover, in the case of serious games for
health, monitoring the emotional state of the player will play a decisive role in the
adaptation of game content and targets towards improving the therapeutic outcome.
In [76], the emotional side of games from both the game side and the human-player
side, considering speech style and content, facial expressions and gestures, is
reviewed. In [77], an empirical study was presented addressing the use of machine-learning
techniques for automatically recognizing student emotional states in two
corpora of spoken tutoring dialogues, one with a human tutor and one with a
computer tutor. The results show significant improvements in prediction accuracy
over relevant baselines, and provide a first step towards enhancing the intelligent
tutoring spoken dialogue system to automatically recognize
and adapt to students' states. In [78], an approach for detecting emotions in spoken
dialogue systems is presented. The corpus used was obtained from a commercially
deployed human-machine spoken dialogue application, a call centre. Their
experiments proved that the combination of acoustic and language information gives
the best results for their data when trying to distinguish negative from non-negative
emotional states. In [79], the use of prosody for the detection of frustration and
annoyance in natural human-computer dialog is presented. The data were collected
under the DARPA Communicator project [80], where users of the dialogue system
made air-travel arrangements over the telephone. When discriminating only
frustration from the remaining emotional states, the accuracy of the emotion
recognition system increases, especially when the majority of the annotators
labelling the corpus agreed. In [81], an attempt to utilize dialogue history, using
contextual information to improve emotion detection, is presented. The experiments
were run on speech data resulting from the interaction of individuals with the "How
May I Help YouSM" dialogue system [82]. In [83], Batliner et al. exemplify the
difficulties of recognising users' emotions on the basis of the SympaFly corpus, a
database of dialogues between users and a fully automatic speech dialogue
telephone system for flight reservation and booking, and discuss possible remedies.
A taxonomy of applications that utilize emotional awareness is discussed in [84]; the
discussion is confined to speech-based applications. As detailed in this work,
emotion recognition in general still suffers from the prevalence of acted laboratory
speech as the object of investigation. The high recognition rates of up to 100%
reported for acted-speech data cannot be transferred onto realistic, spontaneous
data. For the latter databases, performance for a two-class problem is typically
below 80%, and below 60% for a four-class problem.
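The two-class setting discussed above (e.g. negative versus non-negative states) can be illustrated with a minimal nearest-centroid classifier over prosodic features. The features and numbers here are invented for illustration and do not correspond to any of the cited systems, which use far richer feature sets and classifiers.

```python
import math

# Nearest-centroid sketch for a two-class emotion task (negative vs.
# non-negative) on invented prosodic features [pitch mean (Hz), energy].
def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, cents):
    # Assign x to the class whose centroid is nearest in feature space.
    return min(cents, key=lambda lab: math.dist(x, cents[lab]))

train = {
    "negative":     [[250.0, 0.9], [240.0, 0.8], [260.0, 0.95]],
    "non-negative": [[180.0, 0.4], [170.0, 0.5], [190.0, 0.45]],
}
cents = {lab: centroid(xs) for lab, xs in train.items()}
label = classify([245.0, 0.85], cents)
```

Real systems combine dozens of acoustic and lexical features, which is precisely why spontaneous data remains hard: class distributions overlap far more than in this toy.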
5.3.8 Speech Interfaces and Dialogue Management in Games
A robust method for the interpretation of spoken input to a conversational computer
game has been presented and evaluated in [74]. The scenario of the game is that of
a player interacting with embodied fairy-tale characters in a 3D world via spoken
dialogue (supplemented by graphical pointing actions) to solve various problems.
The player himself cannot directly perform actions in the world, but interacts with the
fairy-tale characters to have them perform various tasks, and to get information
about the world and the problems to solve [74]. Their system produces a semantic
representation that constitutes a trade-off between the simple structures typically
generated by pattern-matching parsers and the complex structures generated by
general-purpose, linguistically-based parsers. A term paper on dialogue systems is
described in [75]. This paper discusses how game characters may be equipped with
conversational skills, what types of dialogue and dialogue features a game dialogue
manager must be capable of handling, and some considerations about what
technology to use. This work formulates some tentative steps towards a game
dialogue system capable of handling unrestricted natural language.
5.3.9 Conclusions
Although spoken dialog interfaces are currently used in various domains, their
integration in games implies investigating (i) the flexibility and degree of
configurability of system components (e.g. easy generation/integration of new TTS
voices), (ii) timing and turn-taking behaviour, (iii) multi-participant dialog handling,
(iv) dynamic dialog task construction, and (v) automatic knowledge extraction for the
construction of dialog language resources. In addition, the systems should be
generic, addressing a large heterogeneous group of users (e.g. female versus male)
and a wide variety of tasks; they should be easy to configure and integrate; and they
should provide acceptable domain-, user- and task-independent performance.
Recognition of emotional speech, and its accuracy and reliability, play an important
role in the integration and usability of speech-based interfaces in serious games,
especially for cognitive and behavioural exercises.
5.4 Game Engines / Tools Overview and Comparison
5.4.1 Game Engine Basics
A game engine is software that implements the core of a computer game. It is
responsible for rendering the game graphics (2-dimensional or 3-dimensional), the
AI of the world and of the game characters not controlled by the player, the physics
(e.g. collisions between two objects) and the physics rules applied to the game-world
objects, advancing the game state, and handling the sound and the interaction with
the player.
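The responsibilities listed above are typically orchestrated by a central update-and-render loop. The following is a minimal sketch of that idea; the subsystem names are illustrative and do not come from any particular engine.

```python
# Skeletal game loop: each frame the engine polls input, advances
# physics, AI and game state, then renders. Subsystem names are invented.
class Engine:
    def __init__(self):
        self.frame = 0
        self.log = []                      # records what ran, for inspection

    def poll_input(self):
        self.log.append("input")

    def step_physics(self, dt):
        self.log.append(f"physics dt={dt}")

    def step_ai(self):
        self.log.append("ai")

    def render(self):
        self.log.append("render")

    def run(self, frames, dt=1 / 60):
        for _ in range(frames):
            self.poll_input()
            self.step_physics(dt)
            self.step_ai()
            self.render()
            self.frame += 1

engine = Engine()
engine.run(frames=3)
```

Real engines decouple rendering from fixed-timestep simulation, but the ordering of concerns per frame is the same.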
When a game is running, a collection of objects and processes are executed behind
the scenes, which constitute the virtual world that the player experiences. The
interaction between the game and the player is realised by a set of devices for
receiving the player's commands and a similar set of devices for feeding the game's
response back to the player. For the former, a keyboard, a joystick and a mouse are
the standard tactile input devices. For the latter, visual feedback (screen), sound
(speakers) and haptics (game controllers) are the modes most commonly used.
The game play is based on a virtual game world and a mission that the player is
called to accomplish. Usually the game play is structured around game levels that
are pre-constructed by the game level editors and artists of the development team
and plugged into the game engine software. Users, players or third-party teams can
also create game levels and easily import them into most of the games on the
market today.
[Diagram: the game application (game content and game logic) sits on top of the game engine, which comprises plug-ins and a plug-in manager, game entity managers (world manager, effect manager), an animation system, a scene graph, controller/message/interface managers, graphics and network managers, a rendering core, resource and file managers, a world editor, sound and movie players, system managers (sound manager, input manager, etc.), base services, and system utilities (memory manager, time manager, math); the engine in turn runs on the graphics drivers (OpenGL/DirectX/Java3D/other) and the OS (Windows/Linux/MacOS/other).]
Fig. 45 A typical 3D game engine architecture
The game engine handles the objects of the world scenes, the animations of the
animated characters and the effects, the sound, the physics interactions among the
objects, the low-level rendering of the screen, and the interpretation of user
commands arriving through the various input devices that the player uses to control
the game play. In multiplayer games, the networking part of game management is
also included in the engine's bouquet of services, controlling the TCP or UDP
connection, compensating for packet loss with fault-tolerant mechanisms, etc.
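One common fault-tolerance mechanism over an unreliable, UDP-like channel is to tag packets with sequence numbers and retransmit whatever was never acknowledged. The sketch below simulates this in pure Python (no real sockets); function names and the loss model are invented for illustration.

```python
# Toy illustration of sequence numbers + retransmission over a lossy
# (UDP-like) channel. Purely simulated; no real networking involved.
def deliver(packets, drop):
    """Simulate a lossy channel: packets whose seq is in `drop` are lost."""
    return {seq: data for seq, data in packets if seq not in drop}

def send_reliably(messages, drop_first_try):
    outstanding = list(enumerate(messages))          # (seq, data) pairs
    received = {}
    attempt = 0
    while outstanding:
        drop = drop_first_try if attempt == 0 else set()
        received.update(deliver(outstanding, drop))
        # Retransmit only what was never acknowledged by the receiver.
        outstanding = [(s, d) for s, d in outstanding if s not in received]
        attempt += 1
    return [received[s] for s in sorted(received)]

result = send_reliably(["pos", "fire", "jump"], drop_first_try={1})
```

Game engines typically refine this further (e.g. dropping stale state updates instead of retransmitting them), but the sequencing idea is the same.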
Most game engines use scripting languages to program the behaviour of game
objects in the game world at a higher level. Scripting can also be used by advanced
users to alter the game play or the game contents.
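The scripting idea can be sketched as follows: the engine exposes a hook on each game object, and a designer-written script fills it in. The hook name (`on_update`) and the patrol behaviour are invented for illustration and are not from any real engine's API.

```python
# Sketch of engine-side scripting: game objects carry a small "script"
# (here a plain Python function) called once per update tick.
class GameObject:
    def __init__(self, name, x=0.0):
        self.name, self.x = name, x
        self.on_update = None           # scriptable behaviour slot

    def update(self, dt):
        if self.on_update:
            self.on_update(self, dt)

# "Script" written by a level designer: patrol rightwards at 2 units/s.
def patrol(obj, dt):
    obj.x += 2.0 * dt

guard = GameObject("guard")
guard.on_update = patrol
for _ in range(60):                     # one simulated second at 60 fps
    guard.update(1 / 60)
```

In practice the script lives in an embedded language (Lua, UnrealScript, etc.) rather than the engine's own language, which is what lets non-programmers alter game play safely.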
During runtime, the game engine offers services to the game content and game
mechanics programmed by the game developers, while at the same time using
resources of the operating system of the machine the game is executed on, and of
the graphics driver that the rendering engine uses to exploit the 2D and 3D
acceleration capabilities of the system's graphics hardware. Most of the games
available today are based on Microsoft's proprietary DirectX driver or on the
OpenGL standard graphics library.
Outside the game development community there is often confusion among the
terms "game engine", "graphics rendering engine", "game development middleware"
and Software Development Kit (SDK). The first term has been explained throughout
this section. The second is a subset of the services and utilities of a game engine,
aimed at handling the rendering of the graphics alone. A graphics rendering engine
cannot by itself be a very versatile tool in today's game development, because it
lacks support for the other features found in contemporary game engines that
greatly facilitate the development process. The third term refers to a generic piece of
software assisting in specific tasks. Game engines can be seen as such middleware,
situated between OSs/graphics drivers and game applications, thus hiding the
complexity of rendering the graphics and audio streams, and even the complexity of
handling network communication and other specialised tasks. The game developer
only has to program the game application using the services offered by the game
engine. Similarly to middleware, the term SDK denotes a library of functions
complemented by all the tools, documentation and examples a starting programmer
needs to begin working and exploiting its power. SDKs are much more flexible but
have a narrower focus. For example, Gamebryo [95] is a very flexible proprietary
renderer but has no collision or physics capabilities, unlike Havok [96], which is
solely a physics engine. Similar middleware includes Criterion's Renderware [97]
and Speedtree [98].
5.4.2 The game engine roadmap: Past, current and future trends
During the first era of computer gaming (1970 to about 1985), the term game engine
did not exist. Games of this period were programmed with little or no reusable code,
essentially from scratch. As computer games grew in popularity up to the middle of
the nineties, game developers switched to another business model: licensing parts
of their code libraries to other game developers. This paradigm shift progressively
resulted in a clear divide between game content and the game services (in the form
of game engines) that coordinate this content at runtime.
Computer games reflect advances in computer graphics. These advances can be
divided into six time periods:
1. Early engines (1974-1990)
The technology of early games supported simple wireframe representations (using
vector graphics) and single-player top-down or side views. Progressively, the game
world objects started to appear as flat-shaded raster graphics objects. Asteroids by
Atari [100] and Space Invaders by Taito [99] are representatives of vector and raster
games, respectively.
2. The rise of the 3D representations (1990-1994)
From the simple wireframe game worlds, the first pseudo-3D engines, based on 2D
vector level maps and 3D models, moved rendering ahead by applying the first
texture-mapping to the game's surfaces. The player's viewpoint remained static or
moved with the main character in a certain plane of the game world. Early ID [107]
titles such as Wolfenstein 3D and Doom (first released in 1993) are classic
examples of this era. In contrast to the static levels of Wolfenstein 3D, those in Doom
are highly interactive: platforms can lower and rise, floors can rise sequentially to
form staircases, and bridges can rise and descend. The life-like feeling of the
environment was further enhanced by the stereo sound system, which made it
possible to roughly tell the direction and distance of a sound's origin. Still, the
enemies of the titles of this period were rendered as 2D sprites.
3. True 3D worlds (1995-1999)
By 1995, games started to appear that used true 3D objects and true 3D game-world
geometry. Descent by Parallax Software [124] was the first title to use real 3D
models of enemies. Apogee's Duke Nukem [125] and ID's Quake followed,
coinciding with the introduction of more powerful CPUs (such as the Intel Pentium)
and the first hardware 3D acceleration cards. Companies started providing level
editors and allowing third parties to develop game levels for their titles. Effects like
sliding doors between level rooms, see-through grating and simple dynamic lighting
were implemented for the first time.
4. Advanced realism and effects (2000-2003)
The advancement of graphics hardware brought increasing realism. The engines of
this period were able to cope with extremely large view distances and massive
numbers of models. Game developers exploited this rush to develop innovative and
exciting new effects such as particles (smoke, fire, fog), dynamic lighting and
shading. Shaders are described and rendered as several layers, each containing
one texture, one "blend mode" that determines how to superimpose it over the
previous layer, and texture orientation modes such as environment mapping,
scrolling, and rotating. The shader system goes beyond visual appearance, also
defining the contents of volumes (e.g. a water volume is defined as such by applying
a water shader to its surfaces), light emission, and which sound to play when a
volume is trodden upon. In Quake III, the game engine implements a virtual machine
for easy and safe description and handling of game objects, which was a great
facility for the external teams working on game modifications (MODs).
User-manipulated objects and rigid-body dynamics also dominated the engines of
that era, with successful examples such as Unreal II [126] and Rockstar Games'
Grand Theft Auto 3 [127].
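The layered shader description above can be captured as a small data structure: a list of layers, each with a texture, a blend mode and a texture orientation mode, plus volume and sound properties attached to the shader as a whole. The field names and values below are invented for illustration; real engines use their own shader script formats.

```python
# Data-only sketch of a layered shader definition matching the
# description above. All names and values are hypothetical.
water_shader = {
    "name": "water_example",
    "volume": "water",                # shader defines the volume contents
    "footstep_sound": "water_splash", # sound played when trodden upon
    "layers": [
        {"texture": "water_base.tga",  "blend": "replace",  "tcmod": "scroll"},
        {"texture": "water_glint.tga", "blend": "additive", "tcmod": "rotate"},
    ],
}

def describe_layers(shader):
    # Summarise each layer as "texture (blend, orientation)".
    return [f"{l['texture']} ({l['blend']}, {l['tcmod']})"
            for l in shader["layers"]]

layer_summary = describe_layers(water_shader)
```

A renderer would draw the layers in order, combining each with the previous one according to its blend mode.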
5. Graphics Galore (2004-2008)
In a closed loop, a kind of chain reaction, 3D graphics accelerator hardware and 3D
game engine features evolved rapidly in a quest for visual realism. As
visual photo-realism is the number-one objective for the game industry, contributing
to player immersion, many refinements to graphics, physics and AI have been
developed, going down in the granularity of the math calculations and thus
improving the state of the art of game engines. Shading is now done at the pixel and
vertex level, as is lighting; bump mapping has been introduced to give surfaces the
sense of depth lacking from 2D displays; shadow systems have been further
exploited; and High Dynamic Range Imaging has been used by the Source engine of
Valve Software (used in Half-Life 2: Lost Coast [101]) and by the Unreal Engine 3
[102] of Epic Games [103]. Remarkable open-source efforts implementing HDRI
include the OGRE 3D engine [104] and the Nexuiz game [105]. Ragdoll physics is
widely used by engines, mainly to realistically animate character bodies when they
drop dead; hardware physics accelerators have started to become popular, following
the paradigm of 3D graphics hardware accelerators; and procedural texturing has
been used to a small extent. The appearance and success of the Nintendo Wii
meant a turn to more natural input modes, like motion control. Even if the realisation
of hand motion tracking was not very accurate (due to cost constraints), the smart
exploitation of this feature by game developers was embraced by game players of
all ages and showed the way to the games of the future.
6. Tracing the future in game engines (2009-)
Predicting the future of game engines, given the current growth of the gaming
sector, is probably a risky task. Current trends show that both physics and graphics
have come a long way. Graphics will approach photo-realism, given that experts in
the field estimate that by 2010 a game would be able to render, in real time, a
realistic-looking video of an environment [106]. Physics, thanks to the predicted
success of dedicated hardware accelerators, will advance further from rigid bodies
to deeper levels of finer armatures and inner-body assemblies. This advance will
inevitably be met with innovative motion-tracking input devices, force feedback and
perhaps tactile interfaces. It is reasonable to expect that tactile interfaces will be
coupled naturally with advanced physics, bringing the sense of touch to future
games and thus boosting gaming immersion and realism even further.
Moreover, AI will provide methods for real-time clustering, classification and
prediction, and, coupled with graphics, will unleash the potential for advanced
procedural content generation using genetic programming or genetic algorithms.
The long-awaited game Spore is an example of such progress.
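The genetic-algorithm route to procedural content can be sketched minimally: evolve a candidate piece of content (here a terrain-like height profile) toward a designer-specified fitness. The target profile, parameters and operators are invented for illustration; this is not how Spore or any cited system works internally.

```python
import random

# Minimal genetic-algorithm sketch for procedural content generation:
# evolve a height profile toward a hypothetical designer target.
random.seed(42)                       # deterministic for reproducibility
TARGET = [3, 5, 7, 5, 3]              # invented "designer intent"

def fitness(profile):
    # Negative total deviation from the target; 0 is a perfect match.
    return -sum(abs(a - b) for a, b in zip(profile, TARGET))

def mutate(profile):
    # Nudge one randomly chosen height up or down (never below 0).
    p = profile[:]
    i = random.randrange(len(p))
    p[i] = max(0, p[i] + random.choice((-1, 1)))
    return p

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]          # elitism: keep the best 5
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
```

Real procedural generators replace the toy fitness with playability or aesthetic measures, which is the hard part of the approach.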
5.4.3 Serious features of Game Engines
Game engines are created to facilitate game development. However, many
applications have exploited their power and features to render realistic 3D worlds for
a serious cause. While more accurate simulation environments already exist for
scientific and industrial applications (e.g. immersive Virtual Reality setups, the
CAVE(TM), etc.), game development software offers some key advantages:
1. Low cost
Many game engine and game middleware software development companies market
their products with special licenses for academia (universities, research institutions)
and for non-commercial efforts. Some waive those fees entirely for non-commercial
use, trying to promote their products and enlarge their user base. At the same time,
there are free game engines, either open or closed source, some of which compete
with their large-company-backed rivals.
2. Simple programming –scripting
Most engines allow scripting of the game logic and of the behaviour of the game
engine's components during game play. Physics, animation, game rules, and AI can
be scripted, either by selecting from pre-made scripts or by writing new ones in the
packaged script editor. Thus, novice developers with little or no previous experience
can start making 3D applications using many contemporary game engines.
3. Lighting, weather and terrain manipulation
Advanced lighting effects emulating static and dynamic lights, as well as
environmental light simulation, are all within the capabilities of today's game
engines. Weather simulation, including effects such as fog, shadows, rain, and
snow, can also be exploited by serious applications. Extremely large view distances
over extensive landscapes, coupled with accurate photographic textures and foliage,
can be used to offer realistic terrain navigation.
4. Networking and multi-party collaboration
Game engines used in Massively Multiplayer Online Games (MMOGs) support
networking mechanisms for reliable and robust data communication between
players (clients) and game servers. In addition, several services are available for
in-game communication, along with tools to coordinate cooperative or competitive
game playing. These tools and services can be exploited for serious collaborative
tasks in the 3D game world, such as distributed exploration or coordinated task
execution.
5. Robustness
Game engines are built to be reusable and extendable. Commercial game engines
sold as software products have been tested extensively, within the development
team and externally by the game developers that build games on top of them. Even
game players report back development issues pertaining to the engine itself. As a
result, game engines are seen as robust pieces of software that can be reused in
other contexts without problems.
6. Documentation and user base
Apart from the robustness that modern game engines exhibit, the developing
companies or parties usually equip them with extensive documentation, tutorials,
examples and developer community fora. The user base is usually an essential
factor contributing to the success of a game engine, and an equally essential factor
in flattening the learning curve for a developer who starts building 3D applications
on it.
5.4.4 Research projects making use of Game engines
Game engines, as tools that facilitate the development of applications based on
graphical 3D representations of virtual worlds, have been exploited by researchers
in the past, and their use for serious applications and serious games is constantly
expanding.
A vast and relatively unexplored area for scientific research is serious games, that
is, applications that are fun and exciting while serving a purpose other than pure
entertainment. Being software games, serious games rely on game engines as their
central building blocks as much as any other game. An extensive citation of
research being conducted on serious games is presented in Chapter 4 of this
document.
The promotion of historical and cultural places has been demonstrated by the Notre
Dame Cathedral project, funded by UNESCO and produced by DeLeon [128]. The
Unreal engine was used in this project for a publicly available demonstration of a
virtual 3D reconstruction of an architectural and historical monument of great value.
Game engines have equally been used as a design, visualisation and presentation
medium for promoting commercial real estate. The Unrealty project described by
Miliano [129] empowers the user to create realistic virtual real estate as an efficient
prototyping method, making the result more attractive by inserting into the virtual
world animated objects from the real context of each specific area (e.g.
area-specific birds, surrounding nature, etc.).
Archaeology is another domain that has capitalised on the progress and usefulness
of game engines. As Sifniotis describes [130], the AERIA project [131] attempted to
create archaeological reconstructions without the use of expensive CAD software.
The authors used the Quake 2, Half-Life and Morrowind [132] engines to
reconstruct the palace of Nestor in Pylos and the throne of Apollo. They recognised
that game engines have come 'of age' and offer a low-cost but powerful tool for
heritage visualisation.
Anderson [133] followed a similar approach, using a Quake 3 engine to recreate a
Pompeian house from archaeological plans. Limitations that were encountered
include the inability to access the game engine's source code, Quake's strict CSG
(Constructive Solid Geometry) modelling approach, and the inaccuracy of the game
units. The Quake engine was also adapted by researchers from the University of
Aizu, Japan [134], to produce a virtual 3D interactive scene of a Japanese temple.
They enhanced the engine's features with support for simple procedural building
generation, landscape generation, human skeletal animation, cloth simulation, etc.
5.4.5 Tools for game development
Game development is a long and tedious process, requiring many different kinds of
expertise and roles in the development team. During this process, the game engine
is just the core of the software used in the game's creation, as it is also the software
core running at game runtime. Apart from the game engine, a set of software
libraries and tools must be exploited, both for designing the content of the game (2D
and 3D graphics and models) and for programming the behaviour and interaction of
this content in the game world.
Content / Media Creation Tools
Numerous and diverse tools exist and can be used for content creation. The
following table categorises some of the best-known and most broadly used (OS
stands here for Open Source)2:
| Category | Subcategory | Maker | Tool name | License | Comments |
|----------|-------------|-------|-----------|---------|----------|
| Graphics | Raster drawings | Adobe | Photoshop | Commercial | Standard of all professional graphic designers |
| Graphics | Raster drawings | Alias | Sketchbook Pro | Commercial | Designed specifically for use with Tablet PCs or digitizing pen tablets |
| Graphics | Raster drawings | | Gimp | OS/GPL | Many advanced plugins; writing new ones is rather easy |
| Graphics | Raster drawings | | Paint.net | Free/MIT | Versatile image editor |
| Graphics | Raster drawings | Corel | Painter IX | Commercial | |
| Graphics | Raster drawings | Corel | PaintShop Pro | Commercial | Photo editing |
| Graphics | Vector drawings | Adobe | Illustrator | Commercial | |
| Graphics | Vector drawings | Corel | Draw | Commercial | |
| Graphics | Vector drawings | Microsoft | Expression Design | Commercial | |
| Graphics | Vector drawings | | Synfig | OS/GPL | 2D animation suite |
| Graphics | Vector drawings | | Xara Extreme | OS/GPL | |
| Graphics | 3D modelling | | Blender | OS/GPL | |
| Graphics | 3D modelling | | OpenFX | OS/GPL | |
| Graphics | 3D modelling | | BRL-CAD | OS/GPL | Powerful cross-platform combinatorial Constructive Solid Geometry (CSG) solid modelling system |
| Graphics | 3D modelling | | k-3D | OS/GPL | Generates motion-picture-quality animation using RenderMan-compliant render engines |
| Graphics | 3D modelling | Caligari | TrueSpace | Free | Specially suited for web 3D (VRML) authoring |
| Graphics | 3D modelling | | Wings3D | OS/GPL | |
| Graphics | 3D modelling | Autodesk | 3D Studio Max | Commercial | Modelling, animation and rendering |
| Graphics | 3D modelling | | Cinema 4D | Commercial | Modelling, animation and rendering |
| Graphics | 3D modelling | | ClayWorks | | Procedural modelling program |
| Graphics | 3D modelling | | Lightwave | Commercial | |
| Graphics | 3D modelling | | Maya | Commercial | Professional 3D suite used for games, but more commonly for movie special fx |
| Graphics | 3D modelling | | SoftImage XSI | Commercial | |
| Graphics | 3D modelling | | Carrara | Commercial | |
| Graphics | 3D modelling | Daz | Daz Studio | Commercial | |
| Graphics | 3D modelling | Daz | Bryce | Commercial | |
| Graphics | Font tools | | Bitmap Font Generator | Free | Converts fonts to images/textures |

2 Source: http://wiki.gamedev.net/index.php/Tools:Content
Programming Tools and libraries
Programming tools and libraries are pieces of software that extend the functionality
of game engine code. They work together and are usually integrated (through
plug-in interfaces). Some are generic (supporting several loosely related utilities),
while others are dedicated to a specific task. In terms of licensing, the open-source
projects are free, while the commercial tools may offer several licensing schemes for
non-commercial use or for limited game production.
Category
Maker
AI
AI
AI.impla
nt
Tool
name
Language
Platform
License
Comments
ABKit
C++
MacOSX,
Linux,
Win32
OS/GPL
Alpha-Beta
algorithm
for
board games
Dynamic
pathfinding,
Used in Unreal3
engine
AI.implant
AI
FEAR
AI
GAlib
Louder
than a
Bomb
AI
Memetic
AI
PLAYMANCER
Spark!
Fuzzy
Logic
Editor
Memetic
Toolkit
Commercial
C++.
Python
Win32,
POSIX
C++
Special
Commercial
Cross-Platf.
Genetic
Algorithms
Fuzzy logic API
Artificial Life for
NPCs,
integrated with
Neverwinter’s
FP7 215839
D2.1b: State of the Art
Category
Tool
name
Language
Platform
License
AI
OpenAI
C++, Java
Cross-Platf.
BSD
AI
C++, C#
Cross-Platf.
OS/GPL,
LGPL
AI
OpenSkyN
et
SPADES
AI
GAUL
C/C++
Linux/Posix
, Win
OS/GPL
AI
AINN
C++
Cross-Platf.
Sound
Audiere
Sound
Maker
88/130
Un4see
n
OS/LGPL
Win32,
OSX
Commercial
Cross-Platf.
Freeware/GP
L/Own
license
Commercial
Sound
Dumb
C/C++,
Delphi,
Visual
Basic, and
MASM APIs
C
Sound
FMOD
C
Cross-Platf.
Sound
irrKlang
C++/VB/C#
Cross-Platf.
Free/
irrKlang Pro
is
commercial
Sound
OpenAL
C
Cross-Platf.
OS/LGPL
PLAYMANCER
BASS
Windows,
Linux-i386,
Cygwin,
and IRIX
Comments
Nights
Agents, genetic
algorithms and
bayesian
networks
FSMs,
pathfinding, ML
Agent-based
simulation
Genetic
Algorithms,
D2.1b: State of the Art — FP7 215839

(Comments carried over from table rows on the preceding page: evolutionary programming, neural networks; Ogg Vorbis, MP3, FLAC, uncompressed WAV, AIFF, MOD, S3M, XM and IT file support; IT, XM, S3M and MOD player library; audio playback library that supports MOD, XM, MP3, MIDI and many other audio formats; high-level 2D and 3D cross-platform sound engine and audio library which plays WAV, MP3, OGG, MOD, XM, IT, S3M and more file formats; management of audio sources moving in a 3D space that are heard by a single listener; somewhere in that space, used in AAA game engines.)

Category | Maker | Tool name | Language | Platform | License | Comments
Sound | | SDL | C | Cross-Platf. | OS/LGPL | Low-level access to audio, keyboard, mouse, joystick, 3D hardware via OpenGL, and the 2D video framebuffer
Sound | | SDL_mixer | C | Cross-Platf. | OS/LGPL | Extension to SDL for playing audio files
Animation | | EMotionFX | C, C++ | PC, Mac, Linux, Xbox 360 | Commercial | Real-time character animation system
Image | | Corona | C++ | Cross-Platf. | OS/zlib | Load, save, display and transform images
Image | | CxImage | C++ | Cross-Platf. | OS/zlib |
Image | | FreeImage | C++ | Cross-Platf. | OS/GPL-FIPL |
Image | | DevIL | C | Cross-Platf. | OS/LGPL |
Image | | ImageMagick | Bindings to many languages | Cross-Platf. | OS/GPL-compatible | Suite to create, edit and compose bitmap images
Image | | libpng | | Cross-Platf. | OS | PNG reference library
Image | | SDL_image | | Cross-Platf. | OS/LGPL | Image file loading library
Networking | | Boost.ASIO | C++ | Cross-Platf. | OS/Boost | Interface for async. IO through sockets
Networking | | libpkg | C | Cross-Platf. | OS/LGPL | BRL-CAD library, simple sync. & async. interface
Networking | HawkSoft | HawkNL | C | Cross-Platf. | OS/LGPL | Wrapper API over Berkeley/Unix sockets and Winsock
Networking | Quazal | Net-Z | | | Commercial | Authoring and integration platform, allowing rapid integration of networking for 2-32 players
Networking | | OpenTNL | C/C++ | Cross-Platf. | OS/GPL-compatible | C++ programming framework, not for MMOs
Networking | | SDL_net | | | OS/GPL | TCP, UDP; part of SDL
Networking | | Zoidcom | C++ | | Free for non-commercial use | UDP-based networking library, efficient bitstream transmission
Networking | jenkinssoftware | RakNet | C++ | | | Reliable UDP and high-level networking
Networking | DemonWare | Matchmaking+/State Engine | | | Commercial | State synchronization
Networking | | eNet | | | OS/Own | Wrapper for reliable and unreliable streams over UDP
Networking | | Twisted | Python | | | Async. networking
Networking | | Zig | C/C++ | Cross-Platf. | | Based on HawkNL; client-server game networking engine
Physics | | BRL-CAD raytracer | C | Windows, Linux, and Unix | OS | High-performance collision detection
Physics | | Havok | C/C++ | Windows, Nintendo GameCube, PlayStation 2, Xbox | Commercial |
Physics | | Bullet Physics Library | | Cross-Platf. | OS/ZLib |
Physics | | ODE | | Cross-Platf. | OS/BSD-like | Simulate vehicles, objects in virtual reality environments and virtual creatures
Physics | Newton Dynamics | Newton Game Dynamics | | PC | Free/Own SDK | Scene management, collision detection, dynamic behavior
Physics | | Tokamak | C/C++ | | Open/BSD | Joints, friction, stacking, coll. det., rigid particle, breakage
Physics | True Axis | True Axis | C++ | | Free, Budget, Commercial |
Physics | True Axis | ThePhysicsEngine | | | |
Physics | nV Physics | Novodex | | | Commercial | Collision, contact force, joints-ragdoll, vehicles
Physics | | OPAL | C++ | Cross-Platf. | OS/LGPL or BSD | Joints, motors, sensors, event handlers
Physics | | Chipmunk Game Dynamics | C | Cross-Platf. | Free | For 2D games
Physics | | Box2D | C++ | Cross-Platf. | | 2D engine
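To make the 3D-sound entries above concrete (audio sources moving in a 3D space, heard by a single listener), the sketch below implements a standalone inverse-distance attenuation model. The formula and parameter names (`ref_dist`, `rolloff`) follow the convention popularized by 3D audio libraries such as OpenAL, but this is an illustrative approximation written for this document, not a call into any of the listed libraries.

```python
import math

def inverse_distance_gain(source, listener, ref_dist=1.0, rolloff=1.0):
    """Approximate inverse-distance attenuation:
    gain = ref_dist / (ref_dist + rolloff * (distance - ref_dist))."""
    distance = math.dist(source, listener)
    distance = max(distance, ref_dist)  # clamp inside the reference radius
    return ref_dist / (ref_dist + rolloff * (distance - ref_dist))

# A source at the reference distance plays at full gain;
# a source three units away is attenuated to one third.
print(round(inverse_distance_gain((1, 0, 0), (0, 0, 0)), 3))  # 1.0
print(round(inverse_distance_gain((3, 0, 0), (0, 0, 0)), 3))  # 0.333
```

In a game loop the gain would be recomputed each frame as source and listener move, then applied to the sample stream by whichever audio library from the table is in use.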
Game Engines
With the advancement of graphics and physics hardware acceleration cards, more and more game engines are being developed in an effort to exploit their characteristics optimally and stand out from the competition. While proprietary engines such as Unreal Engine III (by Epic Games), Gamebryo (by Numerical Design) and Source (by Valve) hold the sceptre in engine features and processing power, there are some open-source efforts worth mentioning: robust enough for 3D application development by external development teams, and embraced by many 3D application and game developers due to their low overall cost and enthusiastic user communities.
Table 2 on page 97 overviews the list of game engines current as of May 2008, maintained by the DevMaster web site [220].
An explanation of some of the features supported by contemporary game engines is provided below:
Culling System: The algorithm for removing from the rendering pipeline (culling) those objects that are not within the viewing frustum (the volume of space that the camera can see).
Mipmap: A set of copies of one texture, each mipmap half the size of the one before. Used to vary the texture of an object with different versions of the same texture, according to the distance of the object from the camera.
LOD: Level of Detail. Determines which version of an object to use, based on its distance from the camera. Usually 3 or more versions are produced for each viewable object (from low-polygon, roughly textured versions to high-polygon, detailed-textured ones).
Environment Map: A technique for creating a texture on a surface that reflects the surrounding environment.
Lightmaps: Light data structures that contain the brightness of surfaces. Lightmaps are pre-computed and used for static objects, usually during level loading time.
Dynamic Shadows: In contrast to shadow mapping, which is a technique for static pre-rendered shadows, shadow volumes are a better technique for real-time shadow rendering.
Mesh interpolation: A technique for automatically adding intermediate key-frames in mesh object animation, thus refining movement or transformation while rendering.
Terrain: The capability of the engine to render a terrain, and optionally to apply continuous Level of Detail (CLOD) to the terrain mesh and texture.
Particle system: A group of particles - usually small sprites - used to model phenomena like clouds, vapor, fire, splashing water, sparks, explosions, and other volumetric effects.
Mirrors: The ability of the game engine to easily render reflective planes.
Curved Surfaces: The ability of the game engine to represent curves and curved surfaces. Popular curve systems are splines and Bezier surface patches.
Shaders: A program that runs on the video card. Shaders can be run for every vertex or pixel drawn. Vertex shaders are executed whenever a vertex is transformed; the most common transformations done with vertex shaders are surface deformations (e.g. a moving-water surface effect).
Bone Animation: A system that supports rigged-body animation. An extension of such a system is ragdoll dynamics, i.e. the simulation of a bone-rigged body during free fall.
Multiplayer: Support for multi-player game play.
Multisession: An inherent requirement of massively multi-player games; multi-session guarantees that multiple sessions can be handled by different multi-player parties at the same time.
Physics Engine: A system to calculate forces on objects and the results of collisions. Can include collision detection, rigid body dynamics and vehicle physics.
Scripting: The (high-level) scripting language (if any) that the developer can use to program the game engine features.
Price: When not free, this field represents the price to develop and distribute a commercial game (without obtaining rights to the full source code of the game engine).
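The Mipmap and LOD entries above can be sketched in a few lines: an engine picks a mipmap level from the log2 of the camera distance (each level is half the size of the previous one) and a discrete LOD version from distance bands. The thresholds and function names below are illustrative only, not taken from any particular engine.

```python
import math

def mip_level(distance, base_distance=1.0, max_level=10):
    """Each mipmap is half the size of the previous one, so the level
    grows with log2 of the object's distance from the camera."""
    if distance <= base_distance:
        return 0
    return min(max_level, int(math.log2(distance / base_distance)))

def lod_version(distance, thresholds=(10.0, 30.0)):
    """Pick one of 3 object versions (0 = high-polygon, detailed)
    based on distance, as described for the LOD feature."""
    for version, limit in enumerate(thresholds):
        if distance < limit:
            return version
    return len(thresholds)

print(mip_level(8.0))     # log2(8) = 3
print(lod_version(5.0))   # nearest band -> version 0 (high detail)
print(lod_version(50.0))  # beyond all thresholds -> version 2 (low detail)
```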
5.4.6 Overview and comparison of available game engines and tools
Table 1 overviews proprietary and free, powerful and popular, contemporary game
engines, based on their supported features. The legend of feature acronyms per
feature type is given below:
Feature Category: Acronym = Feature Description
Graphics API: DX = Microsoft DirectX; Sft = Software (no hardware graphics acceleration); OGL = OpenGL
OS: XB = Microsoft Xbox; PS = Sony PlayStation; Lnx = Linux; MC = MacOS; GC = Nintendo GameCube
General: OO = Object Oriented design; S/L = Save and Load system
Physics: B = Basic physics; CD = Collision Detection; RB = Rigid Body physics; VP = Vehicle Physics
Lighting: PV = Per Vertex; PP = Per Pixel; HL = High Level; LM = Light-Mapping; Ani = Anisotropic
Shadows: SV = Shadow Volume; PP = Projected Planar; SM = Shadow-Mapping
Texturing: B = Basic; M-T = Multi-texturing; Vl = Volumetric; BM = Bump-mapping; MM = Mipmapping
Animation: IK = Inverse Kinematics; SkA = Skeletal Animation; AB = Animation Blending; FA = Face Animation; Mph = Morphing
Meshes: ML = Mesh Loading; Sk = Skinning; Df = Deformation; Pr = Progressive
Special FX: EM = Environmental Mapping; LF = Lens Flares; BB = Billboarding; PS = Particle System; MB = Motion Blur; S = Sky; W = Water; Fr = Fire; Dc = Decals; Fg = Fog; Mr = Mirror
Terrain: Rdr = Rendering; CLOD = Continuous Level of Detail
Networking: C-S = Client-Server; M-S = Master Server; P2P = Peer-to-Peer
Sound: SS = Streaming Sound
AI: Pf = Path-finding; DM = Decision Making; Scr = Scripted; FSM = Finite State Machines
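The legend above is in effect a lookup table, and the feature cells of Table 1 are comma-separated acronym lists. The sketch below (written for this document; the dictionary covers only the Physics subset as an example) shows how such a cell can be expanded back into full feature names.

```python
# Physics subset of the acronym legend, as a lookup dict.
PHYSICS_LEGEND = {
    "B": "Basic physics",
    "CD": "Collision Detection",
    "RB": "Rigid Body physics",
    "VP": "Vehicle Physics",
}

def expand(cell, legend):
    """Expand a comma-separated feature cell from Table 1 into full
    feature names; acronyms missing from the legend are kept as-is."""
    return [legend.get(token.strip(), token.strip())
            for token in cell.split(",")]

print(expand("B, CD, RB, VP", PHYSICS_LEGEND))
# ['Basic physics', 'Collision Detection', 'Rigid Body physics', 'Vehicle Physics']
```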
The abovementioned list comprises some free solutions, some low-cost ones, and some with parametric licenses.
The quality of the available game engines does not depend on the number of features each supports, but rather on the efficiency of the code that implements each of those features. It is because of this efficiency that some game engines are very expensive while implementing just a subset of the features supported by other low-cost or open-source solutions. One such critical factor is the Frames Per Second (FPS) rate at which a given 3D game engine can render a specific scene under a specific configuration of enabled features (e.g. the lighting algorithm, the chosen special effects, the shadow algorithm).
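Since the FPS rate is singled out as the critical comparison factor, a minimal sketch of how such a figure is typically obtained: time a fixed number of frames and divide. The function name and the simulated frame workload are illustrative, not part of any engine's API.

```python
import time

def measure_fps(render_frame, frames=30):
    """Average frames-per-second over a fixed number of frames,
    the usual way engine benchmarks report a scene's FPS rate."""
    start = time.perf_counter()
    for _ in range(frames):
        render_frame()
    elapsed = time.perf_counter() - start
    return frames / elapsed

# Stand-in for a real engine's frame: ~5 ms of work, so at most ~200 FPS.
fps = measure_fps(lambda: time.sleep(0.005))
print(f"{fps:.0f} FPS")
```

A real benchmark would swap the lambda for the engine's per-frame update-and-render call, with a fixed scene and feature configuration so that engines are compared on equal terms.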
5.4.7 Conclusion
Even though game engines comprise a very efficient middleware for game development, that fact alone does not mean a game can be built without additional effort in understanding the engine. The learning curve of exploiting an engine's features is a cost that developers should take into account before acquiring a game engine. That level of support, together with the efficiency with which game logic, physics, sound, AI, networking and rendering operations are carried out, should determine the right choice among the multitude of solutions available in the market today, whether free or commercial.
From the solutions presented in Table 1, PlayMancer should exclude the expensive commercial ones such as Source (Valve), CryEngine, Gamebryo and Unreal3. Torque, despite its low license cost and support for developers, is somewhat outdated in terms of rendering features and is best fitted for first-person shooters. Crystal Space and Ogre3D are open-source projects with wide and lively user bases. Unity, on the other hand, is a true cross-platform development suite, able to compile and build game solutions for Windows, Apple, Nintendo Wii and iPhone, with a low independent license price. Unity, NeoAxis, C4 and 3DGameStudio also feature useful level editors or other game development editors (e.g. a character editor).
Overall, Unity, C4 and 3DGameStudio, as well as Ogre3D from the open-source domain, might best fit the PlayMancer goals in terms of features and tools for rapid game development.
Table 1: Overview of game engines in terms of supported features
Game Engines
Features
Maker
Source
Valve
NeoAxis
NeoAxis
CryEngine
C4
TV3D
Crytek
Terathon
TrueVision
3D
OGL, DX
OGL
DX
Irrlicht
Graphics API
DX
Programming
Language
C/C++
C/C++, C#
C/C++
C/C++
C/C++, C#,
VB, Delphi
Win
Win
Win, MC, PS
Win
Yes
OO, plug-in,
S/L
Yes
B, CD, RB
Yes
OO, plug-in,
S/L
Yes
B, CD, RB
Yes
OO
Win, Lnx,
MC
Yes
OO
Editors
Physics
Yes
OO, plug-in,
S/L
Yes
B, CD, RB
CD, RB, VP
GUI, Lmap
CD
Lighting
PV, PP, LM
PV
Win, XB,
PS, GC
Yes
OO, plug-in,
S/L
Yes
B, CD, RB,
VP
, HL
PV,PP, Ani
PV, PP, LM
PV, PP, LM
Shadows
Texturing
SM, PP, SV
B, M-T, BM,
MM, Vl
SkA, AB
SV
B, M-T, BM,
MM
IK, SkA, AB
SM, PP, SV
B, M-T, BM,
MM
SkA, AB
SV
B, M-T, BM
SV
B, M-T, BM,
MM
SkA, Mph
ML, Sk
ML, Sk, Pr
ML, Pr
Special FX
SM
B, M-T, BM,
MM
SkA, AB,
Mph, FA
ML, Sk, Pr,
Df
EM, BB, PS
EM, BB, PS,
MB, S,W,
Dc, Fg, Mr
EM, LF, BB,
PS, S, W,
Dc, Fg, Mr
Terrain
Rdr, CLOD
EM, LF, BB,
PS, MB, S,
W, Fr, Dc,
Fg, Mr
Rnd
OS
Documentation
General
Animation
Meshes
Networking
Sound
AI
Price
C-S
2D, 3D
Pf, DM,
FSM, Scr
Rdr, CLOD
2D, 3D, SS
Pf
C-S
2D, 3D
Pf, DM, Scr
C-S
2D, 3D, SS
Variant
$200
KA, SkA,
Mph, AB
ML, Sk
OGL, DX,
Sft
C/C++, C#,
VB.net
ML
Game
Studio
Conitec
Datasystem
s
DX
Crystal
Space
Torque
Unity
Garage
Games
OGL, Sft
OGL, DX
OGL, DX
DX
C/C++,
Delphi
C/C++
C/C++
C/C++
C/C++
Win
Win, Lnx,
MC
Yes
OO
Win, PS,
XB, GC
Yes
OO, plug-in,
S/L
Win
Yes
OO
Yes
OO
B, CD, RB
CD
PV, LM
PV, PP
Yes
BP, CD, RB,
VP
PV, PP, LM
Yes
BP, CD, RB,
VP
PV, PP, LM
PP, SV
B, M-T, BM,
MM
KA, SkA,
Mph, AB
ML, Sk, Df
PP, SV
B, MT, MM
SM
B, M-T, BM,
Pr
IK, FK,
KA,SkA, AB
ML, Sk, Pr
Win, Lnx,
MC, XB, PS
Yes
OO, plug-in,
S/L
Yes
B, CD, RB,
VP
PV, PP, V,
LM, Ani
SM, PP, SV
B, M-T, BM,
MM, Vl, Pr
IK, KA, SkA,
FA, AB
ML, Sk, Pr,
Df
EM, BB, PS,
MB, S,W,
Dc, Fg, Mr
.Net C#,
C++,
JavaScript,
Boo
Win, MC
PP, SV
B, M-T, BM,
MM
IK, SkA, AB
PP
B, BM
KA
SM, SV
B, M-T, BM,
MM, Vl, Pr
IK, SkA, AB
ML, Sk, Pr
ML, Sk
ML, Sk, Pr
EM, LF, BB,
PS, S, W,
Dc, Fg, Mr
PS, MB, S,
W, Mr
EM, LF, BB,
PS, MB, S,
W, Fg
Rdr, CLOD
Rdr
C-S
2D, 3D, SS
Pf, Scr
C-S
2D, 3D, SS
Scr
Var. Max
$1495
$200 $2000
Yes
OO, plug-in,
S/L
Yes
BP, CD, RB,
VP
PV, PP, LM
KA, SkA
ML, Pr
EM, LF, BB,
PS, S, W,
Fg, Mr
EM, LF, BB,
PS, S, Mr
Rdr, CLOD,
Sp
Rdr
Rdr
Rdr, CLOD
Free
C-S, M-S
3D
Pf, DM,
FSM, Scr
Max $800
2D, 3D
Free
EM, LF, BB,
PS, S, Fr,
Dc, Fg
Rdr, CLOD,
Sp
C-S, P2P
2D, 3D, SS
Pf, DM,
FSM, Scr
Unity
Technologi
es
OGL, DX
Ogre3D
Epic Games
Em, BB, PS,
S, W, Fg
Variant
Unreal3
Numerical
Design
EM, BB, PS,
MB, S, W,
Fg
2D, 3D, SS
GameBryo
OGL, DX
C/C++
Win, Lnx,
MC
Yes
OO, plug-in,
S/L
B, CD, RB
PV, PP, LM
Free
6 Appendix 1: List of available Game Engines
Table 2: Updated list of available game engines (May 2008) [220]
Name
Language
3DCakeWalk
Python
Platform
License
Graphics
2D/3D via
Windows/Li Commercia
DirectX and
nux
l
OpenGL
C-script like
Commercia
3D via DirectX
language/C+ Windows
l
+/Delphi
2D via
Windows /
AgateLib
.NET
Free
Direct3D or
Mono
OpenGL
Hardware
accelerated
Indie/Com
AGen
C++
Windows
2D via
mercial
Direct3D or
OpenGL
2D via
Commercia DirectDraw,
AGL Engine
C++
Windows
l
Direct3D or
OpenGL
DOS, Unix,
Windows,
Free (Open
Allegro
C
BeOS,
2D and 3D
Source)
QNX,
MacOS
Artificial
.NET
Windows
Free 3D via DirectX
Engines
A6 Game
Engine
Asphyre
Axiom website
Delphi /
Windows
Delphi .NET
.NET
Windows /
Linux /
MacOS
Free
LGPL
2D/3D via
DirectX
3D via
OpenGL/Direc
tX/XNA
Sound
Networking
Yes
No
Yes
Yes
Yes
No
Yes
No
Scripting
Yes - Python
scripting with
3DCW helpers
Yes - Custom CScript scripting
language
Other features
Plus
Minus
Compiler not required. High-level
framework. Plugin-based
architecture. Automatic memory
management.
In alpha development
stage
Many
Physics
No
Yes - Lua
Yes
No
No
Yes
No
No
Yes
Yes
No
No
Yes
No
No
No
No
Runs entirely from Lua scripts
Easy to start, several layers of
abstraction, automatic resources
High-level game states framework
management, custom filesystems
support
GUI Editor
Based on the very popular OGRE
rendering engine.
Versions later than v3.1
are only for BDS and
Turbo Delphi
Baja Engine
C++/Lua
Windows,
Mac Os X
Free
3D via
OpenGL
Yes
Yes
Yes - Lua
Professional Results, Includes all
tools
Shaders, Shipped a Commercial
game, Easy to use, Flexible
Blitz3D
Basic
Windows
Commercia
l
2d/3D via
DirectX7
Yes
Yes
Big community, a lot games
BlitzMax
Object Basic
2d via
OpenGL
Yes via
BlitzBasic
Yes
Yes
Yes via
BlitzMAX Script
Has many modules (GUI, 3D,
Sound, Physics, etc ). Easy to
start
Blox Game
Engine
C++
Windows
2D via
Direct3D
Yes
No
No
Color Blending, Alpha Blending,
and many more.
BlurredEngine
C++
Windows
Commercia
3D via DirectX
l
Yes
Yes
Yes via Lua
Brume Game
Engine
.NET 2.0
(C#)
Windows
(XP/Vista)
3D via DirectX
9
Yes
No
No
C4 Engine
C++
3D
Yes
Yes
Cipher Game
Engine
Windows, Commercia
MacOS
l
Yes - Visual
Scripting
C/C++
Windows
3D
Yes
Yes
No
ClanLib
C++
Windows,
Commercia
Linux,
l
MacOS
Free
Free
Commercia
l
Windows,
Free (Open Accelerated
Linux,
Source)
2D
MacOSX
Yes
Yes
No
Easy to start, support BSP, 3DS,
better for shareware games
Fast 2D engine, better for casual
games(Arcanoids, Puzzles etc.),
OOP, LUA Bind
Free. Easy to use. Fully objectoriented. Includes basic collision
detection. Choose between 2
different rendering
systems(Software, Hardware).
Minus
Site does not indicate
source is included with
the download (or for
that matter, is
available). Hard to use
art pipeline. Small
community.
No OOP. Basic syntax
Has no 3D module
Includes level editor and 3D gui
components
Have more modules (GUI, Sound,
Easy to use, object oriented,
Physics, Collisions, Animations,
animation system, integrated
Effects, Terrains, etc ). Easy to
physics
start
Shader support. Dynamic lighting.
Portals. Script editor. Support for Active development. Good support.
many models. More.
Collision Detection, AI
Open Source. Lightweight
networking.
Object-Oriented, simple, clean, easy
to use, mature.
There seems to be a
slight lack of
documentation (and
most of it is somewhat
Name
Clockwork
Language
C++
Crystal Space
C/C++
DaBooda
Turbo Engine
VB/FB
Delta3D
License
Windows Indie/Com
None needed
2000-Vista mercial
CRM32Pro
Daimonin
Platform
Graphics
3D via OGRE
Yes via
(OpenGL
OpenAL
render
system)
Networking
No
Closed
2D via
Yes - API
Yes - API built
Source;
Windows,
SDL/glSDL built on top
on top of
LGPL
Linux
and optimized
of
announced
SDL_Net
MMX blitters SDL_mixer
on site
Linux,
Windows,
MacOS X
Windows
Free
(LGPL)
3D via
OpenGL
2D via
DirectX8
C (server),
Linux,
2d/3d via SDL
GPL
C++ (client), Windows,
and OGRE3D
java (editor) MacOSX
3D via
Linux,
C++
Windows, Free(LGPL) OpenSceneGr
MacOSX
aph (OpenGL)
Sound
Scripting
Yes via Lua
No
Other features
Plus
Self-contained system (one multipurpose application for nearly all
tasks). Only requires 3rd party
Will include physics engine, GLSL applications for game resource
shaders (and editor), OGRE creation (levels, models, audio, etc).
material editor.
Uses OpenGL 2.x.x, and GLSL for
shaders. Will have physics engine,
using PhysX. Editor written in
C#(.NET 2), engine written in C++.
XML parser, Log, proprietary file
Full documentation (English and
system to package your
resources with full protection and Spanish). Cross-platform. Heavily
useful EditorDPF to manage
optimized for each current CPU
them, graphics primitives, cursors, (MMX and SSE). Available as a DLL
tiles, sprites, fonts, several FX
or static library (only Win32).
EditorDPF, a resources manager.
effects, GUI system, accurate
timer, MPEG-I video, full support SetupProyect, a customizable config
of OGG,modules,WAV and VOC,
system. Free.
useful network API and more...
Yes
No
Yes - Python,
Perl or Java
Yes
No
No
Yes
Yes
Yes - Lua
Complete MMORPG engine
Yes
Yes Client/Server
and HLA
Yes - Python
ODE Physics, STAGE Game
Editor, Much More
A well-supported open source
project. Built upon other open
source projects (OSG, ODE,
Minus
out-of-date). Also, there
are no scripting
facilities.
In beta.
DarkbasicPro
Basic
Windows Shareware
DizzyAGE
C++
Windows
DXGame
Engine
dx_lib32 2.0
VB6
Windows
VB 6.0,
VB.NET
E76 game
engine
100% lua
scriptdriven
EasyWay
Game Engine
Java
Epee Engine
Platform
C++
Windows
License
Graphics
Sound
Networking
Scripting
Other features
2D/3D via
DirectX9
Yes
Yes
Yes - Darkbasic
Big community, a lot games
Free
2D via DirectX
Yes
No
Free
2D+ via
Direct3D
Free
Yes
Yes 2D hardware
DirectAudio
via
8 and
DirectGraphic
DirectShow
s (D3D8)
8
No
No
Yes 2D/3D via
DirectAudio Yes - UDP
Windows shareware OpenGL/Direc
and
and TCP
tX
software
Windows,
opensource 2D/3D via
Linux, Mac
GPL
OpenGL
OS
Windows,Li zlib/libpng 2D SDL but
Yes OpenAL
Yes
Yes - GS9
scripting
language
No
Tool used to create Dizzy games,
in the classic adventure style
Automated Sprites. 2D Tile Map
(Unlimited Layers). Collision
Checking. Basic Particle Engine.
High Level.
No
Movie playback. Easy input
handling. PAK. Timers.
Yes
Newtonian physics. 3D sound.
Cryptography. World
management. GUI controls and
skins. Event management. Keymapping. Dynamic lighting. 3D
animation. World editor. Multilanguage support.
No
No
No
No
Easily to expand. Perfect pixel
collision. Pathfinding.
See web site
Plus
Minus
OpenAL, etc.). Great for games,
simulations, or other graphical
applications. Supports massive
terrains. Used by many large scale
companies (e.g., Boeing, Lockheed
Martin, etc.), educational institutions,
and small developers.
Easy to start. Support for BSP and
No OOP. Basic syntax.
3DS.
Single light DLL (VB6 Runtime and
No full OOP interface.
DirectX API only dependency).
ActiveX DLL. All
Simple interface. Easy to start.
Several layers of abstraction.
documentation and web
Automatic resources management. site are in Spanish.
Full documentation.
100% scriptable (no compiling
required). Completely extensible
and flexible.
Easy to learn. Rapid development.
Very easy to use and fast rendering
Incomplete
documentation
Engine is still in the
Name
Entropia
Engine
ephLib
Language
VB6/C
Platform License
nux,Mac,ho
mebrew
console
planned
Windows,
and works
perfectly
with Wine
LGPL
on Linux
(tested
version
0.9.44)
C++/Io
OS X
(Others in GPL/Other
progress)
Windows,
Linux, Mac
language
OS,
Fenix Project
hibrid
Solaris,
(beta)
between
BeOs,
Pascal and C
DreamCast
, GP32X
FIFE - the
Flexible
Isometric
Fallout-like
Engine
FlatRedBall
Open
Source
Graphics
3D planned
using
OpenGL
Yes (Via
DirectSoun
d or FMod,
2D using
or
DirectX 8.1 DirectShow
for
music/video
)
Windows
Free
Networking
Scripting
No
No
2D/3D via
OpenGL
No
No
Yes IoLanguage
2D via SDL
Yes MikMod
Yes SDL_Net
No
Other features
Plus
See Web Site (or the SDK)
Lots of utilities for a rapid game
development. Particle engine. Sprite
engine. Map engine. Dynamic lights
engine. Tiler. Console. PAK file
format.
Minus
early stages
Web site in Spanish
while Engine mostly
English
Constrained particle and rigid
Easily modifiable. Suitable for
body physics. Scalable polygonal
Under development.
prototypical development.
and continuous collision
detection.
Perfect Pixel collision, path finding
routine, music modules and Ogg
Vorbis Support, cross plataform, Very easy syntax, documentation
No official IDE
very similitudes with Div Game and web site in english and spanish,
(alternatives exist). 2D
Studio: compatibility with more of a complete game of functions, easy
via software. No OPP
for newbies, the evolution of Div
file formats (FPG, MAP, PAL,
language. Buggy.
etc...) and a few compatibility with
Game Studio!
the syntax and other functions of
the Div language
Yes: Python
support out of
Yes
the box, Lua and
(OpenAL No (Might be
Planned support for complex
a couple of other
audio
added later)
rulesets
languages
backend)
supported via
SWIG.
3D via DirectX
Yes
No
No
Template, Collision Detection,
2D software
Win32,
renderer via
Linux,
Free (GPL SDL, hwC++, Python
MacOS X,
accelerated
2.0)
mode via
BSD
OpenGL
.NET
Sound
One of the few open source 2D
isometric game engines available
Work in progress, not
fully usable yet
Name
2.5D
Language
G3D
C++
G3DRuby
Ruby
GameBrix
Platform
License
Linux,
Windows, Free (BSD)
MacOS X
Windows,
Free
Linux
None needed Web-Based
Free
Free and
Windows Commercia
l
Game Maker
Delphi
Genesis3D
C++
GhostEngine
C++
Windows
(Mac and
Linux
support is
on the
works)
Goblin 2D+
C/C++
Windows
Golden T
Game Engine
Java
Windows,
Linux,
MacOS X
Gosu
C++, Ruby
Windows
Windows,
Mac, Linux
Graphics
Sound
Networking
Scripting
3D via
OpenGL
No
No
No
3D via
OpenGL
No
No
No
2D
Yes
Yes, some
Yes, script editor
for ActionScript
2.0
2D/3D
Yes
Yes (limited)
Yes - GML
Yes, UDP
No
No
No
Yes
No
No
Yes
No
Yes
Free/Comm
3D via DirectX
ercial
3D via
Engine OpenGL/Direc
code is
tX, with
No
Zlib/libPNG
DirectX
-licensed support in the
works
Freeware, Mainly 2D via
Shareware D3D but has
and
support for .X Yes - Own
Commercia and .MD2 3D
l
models
Free
Free
2D via
OpenGL
2D via
OpenGL/Direc
tX
Other features
Physics, Skeletons
Plus
Minus
No programming required for
making quick 2D web-based
Games and Animations
Terrific for making quick 2d tilebased games with easy scripting
interface. Slow 3D support (via
DirectX).
Still under heavy
development. Not ready
for use yet.
Small footprint. Able to make
standalone executables (no DLL).
Active development - stable
Name
Language
HGE (Haaf's
Game Engine)
C++
HGE at
SourceForge
HGE
Platform
License
Graphics
Sound
Open
Yes via
Source
Windows
2D via DirectX
BASS
(Zlib/libpng
license)
Horde3D
C++, C DLL
interface
Windows
Free
(LGPL)
Irmo
C
Linux
Free
Irrlicht
C++/.NET
ika
C++
3D via
OpenGL
3D via
DirectX(8,9),
Windows,
Free
OpenGL or
Linux, Mac
(zlib/pnglib)
various
OSX
software
renderers
2D via
Windows,
Free (GPL)
OpenGL
Linux
Jad Engine
C#
Windows
LGPL
3D via
Managed
DirectX
Jamagic 1.2
Jamascript
Windows
Commercia
l
(withdrawn
3D
Networking
Scripting
Other features
Plus
No
No
Authoring tools, lightweight
Easy to start, good engine structure
Shader based design, skeletal
animation, animation blending,
morph targets, post processing
effects like HDR or DOF,
COLLADA support
Lightweight next-generation engine
with clean design
Big Community. Good
documentation.
No
No
Yes - Lua
No
Yes
Yes - Ruby
No
No
Yes - Lua script
Collision Detection, HDR,
PARALLAX
Yes
No
Yes - Python
Very low overhead
Yes MDSound
and
Vorbis.NET
No
No
Yes
Yes
Yes
Minus
Active development.
Stable.
Focused to graphics cards that
support shaders 2.0 or better.
Uses Newton Physics Engine for
movement and collision. HDR.
FirstPerson and SelfDriven
(exported from 3D Studio Max) Very more easy. Intuitive interface. No full documentation
camera support. Skeletal
animation using channels.
Integrated postproduction system.
AI Engine. Genetic Programming
Framework.
Inbuilt editors
Easy to learn. Can build online
games like flash.
No longer supported
Name
Language
JEngine SSE
C++
Jet3D
C/C++
jMonkey
Engine
Platform
License
from sale)
Windows,
Free (GPL)
Linux{Yes}
Graphics
Sound
Networking
Scripting
Other features
2D via
OpenGL
Yes
Yes
Yes - Lua
Yes
Yes - JGN
and jmenetworking
Collision detection. Cg and GLS
effects. GUI. Full 2D open source
framework with editor.
Yes - jMonkey
Scripting
Framework
A Java scene graph based 3D
game engine. See the latest
release notes
No
No
No
No
Windows
Free 3D via DirectX
Windows,
Yes 3D via
Linux, Free (BSD)
OpenAL
LWJGL
MacOS X
Sound
Free
Windows,
2D via
(Creative
Linux,
Yes
LWJGL
Commons
MacOS X
License)
Windows,
Linux, Free (BSD)
2D
MacOS X
Java
Joge
Java
JOGRE
Engine
Java
Lightfeather
3d engine
C++
Windows,
Free
Linux,
(zlib/libpng)
MacOS X
3D via
OpenGL
No
Yes
No
LÖVE
Lua
Windows /
zlib/libpng
Linux
2D via
OpenGL
Yes
No
Yes
2D
Yes
Yes
Lua
ActiveX, Dll, many plug-ins,
movement extensions
No
Yes
Yes
Yes
Yes
Comes with the full source code,
allows to add/edit modules.
Multimedia
Fusion 2
Custom none
scripting
neabEngine
PHP
NeL
C/C++
Windows
Commercia
l
Free /
Windows,
Commercia 2D (AJAX)
Linux
l
Windows, Free/Comm 3D via DirectX
Plus
GLSL and Cg shaders. HDR.
MRT. Portals. Occlusion culling.
PVS. Skeletal and morphing
animation. Exporter for Blender to
LFM format. Post-processing
framework. Paging terrain with
splatting. Built-in GUI. Many
editors. More..
CEGUI Integration
Easy to Learn, a favourite with
younger developers, online games
like flash
Minus
Name
Language
Platform
Linux
License
ercial
Graphics
or OpenGL
Sound
VB/Delphi/.N
Windows
Free 3D via DirectX DirectX
ET
Windows,
3D via DirectX
NeoEngine
C++
Free (MPL)
Yes
Linux
or OpenGL
Novashell
Windows,
ClanLib
OpenAL
Game
Lua
Linux, OS zlib/libpng
(OpenGL)
Creation
X
Windows
Free
3D (OGRE
OGE - Open
(mingw,
(LGPL) /
C++
hence DX + OpenAL
Game Engine
VC), Linux Commercia
OpenGL)
(gcc)
l
Free
Windows,
(LGPL) / 3D via DirectX
Linux,
OGRE
C++
No
Commercia or OpenGL
MacOS X
l
2d via
ORE
VB6
Windows
Free
DirectX7 /
Yes
DirectX8
Windows /
Yes Linux /
Plugins
2D (plugins
ORX metaFree
MacOS X /
C/C++
based on
based on
engine
portable via (LGPL)
SDL, SFML) FMod &
plugins(DS,
SFML
PSP, ...)
Ovorp Engine
.NET
Windows
Free 2D via DirectX
PAB game
VB
Windows
engine
Windows,
Yes (FMod
Free
3D
Panda3D C++, Python
Linux
or OpenAL)
Yes,
Linux,
2D via
Photon
C++
zlib
OpenAL
Windows
OpenGL
NemoX 3D
Engine
Networking
Scripting
Other features
Plus
Minus
Yes
Yes
Lua
No
Lua
Fast game creation with Lua
Sector based partitioning. Easy
editing files. Level editor.
Beta
RakNet
Squirrel
GUI (CEGUI). Physics (ODE).
Unicode. OGEd - Game Editor
Multithreading. Clean OO.
Early stage of
development.
Large Community. Good
documentation. Used in severals
Supports all high-end 3D
technologies. Plug-in structure. large games and simulations. Open
Source.
No
No
Yes
Yes
No
No
Platform-independent design.
Animation graph. Resource
management. Physics.
Yes
Python, C++
No
No
Free models. Documentation.
Simple installation.
Easily portable on new platforms.
Small dev team. No
Plug-in architecture. Object oriented. editor. Scripting support
Customizable.
in progress.
Yes
Resource management
Easy to learn. Stable. Used in
Disney's ToonTown.
Good documentation
Early in development.
Uncertain future.
Name
Language Platform
Visual Basic
PlayerRealms
Windows
6
Linux,
UNIX,
PLib
C++
Windows,
MacOSX,
MacOS9
License
Free
Graphics
2D via DirectX
7
No
Yes
Yes
Yes
Free
2D
Yes
No
No
Yes
No
Yes - Custom
C++ scripting
language
Yes
No
Windows
PowerRender
C++
Windows, Commercia
3D via DirectX
XBox
l
PTK Engine
C++
PPTactical
Engine For
RTS games
C++
PureBasic
Basic
PySoy
Python
Quake Engine
C
Quake II
Engine
C
Free and
Windows,
Commercia
Mac
l
Yes
Scripting
2D and 3D via
OpenGL
C++
Mac OS X,
Linux,
Windows
Linux, Mac
OS X,
Windows
DOS,
Windows,
Linux, Mac
OS X
Windows,
Linux, Mac
OS X
Yes
Networking
Free
(LGPL)
Popcap
Framework
Windows,
Linux
Sound
Free
(LGPL)
2D
In-game editors
Super Game Engine for
developing super games like as
Zuma
Plus
Minus
Works on Windows 2000/XP/Vista No scripting capabilities
A bit hard to use. PW
seems a bit immature,
Used in numerous projects. Up-toand the alternative is to
date documentation.
use the lower-level
GLUT.
have great game ZUMA
Physics. Collision Detection.
HDR.
Easy to learn. Flexible engine.
No
Font, TTF, Spline, Tar files
Lightweight
Integrated Physics
No proprietary dependencies
Poor shadow support.
2D
Commercia
l
Free
(GPLv3)
3D via
OpenGL
Yes
Yes
GPL,
Commercia
l
Yes, with
OpenAL
and Ogg
Software,
OpenGL
Yes
Yes
QuakeC
OpenGL
Yes
Yes
GPL,
Commercia
l
Other features
Still in Beta (lacks
features)
Name
Quake III
Arena Engine
Raydium 3D
Language
C
C
Platform License
Windows,
GPL,
Linux, Mac Commercia
OS
l
Windows,
Free (GPL)
Linux
Ray Game
None needed Windows
Designer 2
The RealFeel
Engine
Reality
Factory
RealmForge
Windows
XP/Vista
VB6
Free
Free
(Closed
Source)
None needed Windows
.NET
Realmcore
.NET / C#
and Shardcore
.NET / CLI
Closed
source.
Graphics
Sound
Networking
OpenGL
Yes
Yes
3D via
OpenGL
Yes via
OpenAL
3D via
OpenGL or
Direct3D
2D
3D via
Genesis3D
(DirectX)
3D via Axiom
(OpenGL)
Other features
Yes
embedded PHP,
Python bindings
Physics via ODE
Yes
No
Yes
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
Yes
Plus
Minus
Very limited gameplay
Requires no programming. Very
Collision Detection. Translucency.
options. Outdated
easy to use. Includes most needed
Lighting.
graphics engine. Very
tools.
small community.
Designed for MMORPGs.
Yes Complete
server
2D and 3D via Yes via framework via
DirectX or DirectX or Shardcore.
OpenGL
OpenAL Peer-to-peer
and client-toserver within
Realmcore.
Yes via Lua
Genre-agnostic. Fully extensible
and modular game engine. APIagnostic implementations. Allows
custom renderers.
Yes
No
No
User Interface
Yes
No
No
Completelty
.NET 2.0
3D via
RetinaX
(C#). No
Free (BSD) Managed
DirectX
wrapped C++
Libraries.
Revolution3D VB/C++/.NET Windows
Free 3D via DirectX
Incomplete - still in
production.
Easy to use. Well structured
framework.
Name
RPG Maker
2003
RPG Maker
XP
Language
PTK Engine
C++
Saq2D
C#
Platform
License
C/Delphi
Windows Shareware
C/Delphi
Windows Shareware
Free and
Windows,
Commercia
Mac
l
Windows
Free
Graphics
Sound
Yes via
DirectX
Yes via
DirectX
Networking
Scripting
Other features
Level Editor
Easy to use
Yes
Ruby
Level Editor
Easy to use
2D
Yes
No
No
Font. TTF. Spline. Tar files
Lightweight
2D engine via
XNA
Soon
Maybe
No
Yes
Yes
No
Yes
Yes
Yes
No
Yes
Yes
Yes
No
No
2D
2D
VB/C++/Delp
Windows
Free
2D
hi
Windows,
Linux Commercia
Source Engine
C++
Direct3D
(serverl
side)
The Nebula
C++
Windows
Free 3D via DirectX
Device 2
Windows,
Thousand
Python, C++,
2D/3D
Parsec
Linux, Free (GPL)
others
Framework
MacOS X
TNT Basic
Basic
MacOSX Free (GPL)
2D
Sprite Craft
Windows,
Commercia
Linux,
l
MacOS X
Yes
Ruby
VBScript/JavaSc
ript
3D via
OpenGL
Yes OpenAL
Yes
Yes - Custom
Torque Script
2D
Yes OpenAL
Yes
Yes - Custom
Torque Script
Free for
VB/Delphi/C+
Windows learning/Co 3D via DirectX DirectX
Truevision3D
+/.NET
mmercial
Yes
Torque
C++
Torque2D
C++
Windows,
Commercia
Linux,
l
MacOS X
Plus
Havok Physics, Valve Faceposer
Technology, VGUI, HDR
Framework for online turn based
space strategy games
Mission Editor. Terrain Editor.
WYSIWYG GUI editor. Particle
engine. Theora video. Multiple
language support.
Physics. Plugins for popular
Yes - VBscript,
modeling packages. Active user
Python, Java
base. Normal Mapping. Relief
Script
Mapping. Complex shaders.
FP7 215839
Open source. Large community.
Many 3D modeling exporters.
Minus
D2.1b: State of the Art
Name
Language
UnrealEngine2
/ 2X / 3
C++
UnrealEngine2
Runtime
C++
Unigine
C++
Unity
C++
XtremeWorlds
VB6
vbGORE
VB6
Visual3D.NET
.NET 2.0
(C#)
Platform License
Windows,
Linux,
MacOS X, Commercia
PS2, Xbox,
l
PS3, XBOX
360
Windows,
NonLinux,
MacOS X, Commercia
l/
PS2, Xbox,
PS3, XBOX Educational
360
Windows, Commercia
Linux
l
109/130
Graphics
Sound
Networking
Scripting
Other features
3D
Yes
Yes
Yes via
UnrealScript
Physics. HDR (UE3).
3D
Yes
Yes
Yes via
UnrealScript
3D
Yes
Yes
Yes UnigineScript
Mac
(developme
nt),
Commercia 3D via DirectX
Windows,
or OpenGL
l
web,
Nintendo
Wii
Free
Windows (Closed
2D
Source)
Free (Open
Windows
2D via 3D
Source)
Commercia
l, Free
Windows, Student 3D via DirectX
Xbox 360 Commercia
or XNA
l & Noncommercial
PLAYMANCER
Yes
Yes
Yes
No
Yes
Yes
No
Yes
Yes - .NET
languages,
IronPython
Yes
Minus
Expensive.
Expensive.
Physics. HDR. PRT. Pixel and
vetex shaders (3.0). Soft
shadows.
Yes - .NET
Ageia PhysX. Terrain engine.
based
Extensible shaders. JIT compiled
scripts. Soft shadows.
JavaScript, C#,
Boo, or C++
Collaboration tools. Realtime
DLLs
networking.
Yes
Plus
Designed towards ORPG and
MMORPG design
Designed towards ORPG and
MMORPG design
Visual development and
prototyping. Ragdoll Physics.
Normal-mapping. Shaders (3.0).
HDR. Integrated runtime design
toolset. Skinnable GUI.
FP7 215839
Low cost. Tools. GUI.
Many tools. Complete
documentation.
Source code is a
separate license
D2.1b: State of the Art
Platform
110/130
Name
Language
YAKE Engine
C++
Yage
D
Windows,
Linux
Free
(LGPL)
Zak Engine
C++
Windows
Free
ZFX
Community
Engine
2D via DirectX
8.1 and 9.0
C++
Windows,
Linux, BSD
Free
(LGPL)
3D via DirectX
and OpenGL
Edge2d
Engine
C++
Windows,
Linux
Open
Source
Phoenix
Engine
C#
Windows,
Mono
Beta
Windows,
Linux
PLAYMANCER
License
Free
Graphics
Sound
3D via OGRE
Yes (OpenGL),
OpenAL
Direct3D9
3D via
Yes OpenGL
OpenAL
Library
independent
(both DirectX
and OpenGL)
SDL.NET
Networking
Scripting
Other features
Yes
Yes - Lua
GUI via CEGUI, physics via ODE
No
No
Yes
Yes
Yes AngelScript
Yes
Yes
Yes - Lua
Yes
No
No
Object-oriented. Plug-in based.
Yes
Yes
No
IronPython
Tiles Maps (AnaConda Map
Editor). Sprites. Particle system.
Bitmap fonts.
FP7 215839
Plus
Stable. Easy to use. Fast games
development.
Map Editor, Sprite, Plugin system,
e.t.c
Minus
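One practical way to use a comparison like the one above is to treat each engine row as a record and filter it against project requirements. The sketch below is illustrative only: the four sample entries are condensed from rows of the table, and the selection criteria (free license, 3D graphics, built-in scripting) are example requirements, not PlayMancer's actual ones.

```python
# Illustrative shortlisting over a few engine records condensed from the
# comparison table above. Field values are simplified assumptions.
ENGINES = [
    {"name": "Torque",      "license": "Commercial",  "graphics": "3D", "scripting": True},
    {"name": "YAKE Engine", "license": "Free",        "graphics": "3D", "scripting": True},
    {"name": "Yage",        "license": "Free (LGPL)", "graphics": "3D", "scripting": False},
    {"name": "Zak Engine",  "license": "Free",        "graphics": "2D", "scripting": True},
]

def shortlist(engines, need_3d=True, free_only=True, need_scripting=True):
    """Return the names of engines matching the given (illustrative) criteria."""
    result = []
    for e in engines:
        if need_3d and e["graphics"] != "3D":
            continue  # drop 2D-only engines when 3D is required
        if free_only and not e["license"].startswith("Free"):
            continue  # drop commercial licenses when a free engine is required
        if need_scripting and not e["scripting"]:
            continue  # drop engines without a built-in scripting language
        result.append(e["name"])
    return result

print(shortlist(ENGINES))  # only YAKE Engine is free, 3D, and scriptable here
```

Relaxing a criterion widens the shortlist, e.g. `shortlist(ENGINES, free_only=False)` also admits Torque.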
7 Appendix 2: Research on Games for Health

AUTHORS: Broeren, Rydmark, Björkdahl and Sunnerhagen
YEAR: 2007
METHOD: All the subjects began with a baseline phase, then started the treatment phase, and finally a follow-up assessment was made 12 weeks later.
SAMPLE: 5 hemiparetic, post-stroke subjects. Mean age: 59 years.
INSTRUMENTS/BIOSENSORS: Computer. Haptic device. Stereoscopic glasses.
MEASURES: Collected from the movement of the haptic device.
RESULTS: Aspects of motor performance improved after training; the gains were maintained at follow-up. This technology can detect small variations not detectable with the naked eye.

AUTHORS: Merians, Poizner, Boian, Burdea and Adamovich
YEAR: 2006
METHOD: All the subjects began with a baseline phase, then started the treatment phase.
SAMPLE: 6 males, 2 females with hemiparesis. Mean age: 64.
INSTRUMENTS/BIOSENSORS: 18-sensor CyberGlove. Rutgers Master II-ND force-feedback prototype glove.
MEASURES: Collected by 4 electromagnetic sensors in the gloves. Jebsen Test of Hand Function.
RESULTS: The subjects improved and retained gains made in range of motion, speed, and isolated use of the fingers.

AUTHORS: Coyle, Matthews, Sharry, Nisbet and Doherty
YEAR: 2005
METHOD: The "Personal Investigator" 3D therapeutic game was completed in three sessions over a 3-week period. Feedback in the form of questionnaires and discussion with the supervising therapist.
SAMPLE: 4 adolescents (2 boys and 2 girls) aged 13-16.
INSTRUMENTS/BIOSENSORS: Computer. 3D computer game. Questionnaires.
MEASURES: Subjective assessment. Feedback in the form of questionnaires for therapist and adolescent. Therapist 5-point Likert scale: 'very helpful' to 'very unhelpful'. Young person 5-point Likert scale: 'very easy' to 'very difficult'.
RESULTS: Literature review and presentation of the Personal Investigator (PI), a 3D computer game specifically designed to help adolescents overcome mental health problems and to engage more easily with mental health professionals.

AUTHORS: Walshe, Lewis, Kim, O'Sullivan and Wiederhold
YEAR: 2003
METHOD: Subjects with driving phobia after a motor vehicle accident were exposed to a VR driving environment using VR and GR simulation with computer games.
SAMPLE: 14 subjects with simple phobia/accident phobia.
INSTRUMENTS/BIOSENSORS: VR and GR computer game. Questionnaires. Heart rate monitoring.
MEASURES: MINI Interview for major Axis I psychiatric disorders in DSM-IV and ICD-10. Fear of Driving Inventory: travel distress, avoidance, maladaptive driving strategies, travel anxiety. Heart rate monitoring.
RESULTS: Post-treatment reduction on all measures, which suggests that VR and GR have a useful role in the treatment of driving phobia post-accident, even when depression and post-traumatic stress are present.
AUTHORS: Bellack, Dickinson, Morris and Tenhula
YEAR: 2005
METHOD: The clients pass through each exercise with the help of a therapist.
SAMPLE: --
INSTRUMENTS/BIOSENSORS: Computer-Assisted Cognitive Remediation program with different modules.
MEASURES: No measures collected.
RESULTS: CACR shows promise as a cognitive remediation intervention because it incorporates several of the features and techniques whose effectiveness has been demonstrated in previous studies.

AUTHORS: Rezaiyan, Mohammadi and Fallah
YEAR: 2007
METHOD: Both groups took the TPS as a pretest. The experimental group experienced the computer game and the control group received no treatment. Both groups were assessed through the TPS after 5 weeks.
SAMPLE: 60 mentally retarded subjects. IQ level: 50-70.
INSTRUMENTS/BIOSENSORS: Toulouse-Piéron Scale. Computer games.
MEASURES: Collected by the Toulouse-Piéron Scale.
RESULTS: Computer games increased the attention of subjects, but the results do not have a suitable consistency in the follow-up assessments.

AUTHORS: Bouchard, Côté, St-Jacques, Robillard and Renaud
YEAR: 2005
METHOD: Patients were met in 5 weekly sessions: 2 sessions to explain their phobia and to introduce them to the VR, and 3 sessions of VR treatment. After the last exposure, patients filled in the questionnaires.
SAMPLE: 1 male, 10 females with arachnophobia. Mean age: 30.73.
INSTRUMENTS/BIOSENSORS: IBM computer. 3D editor of Half-Life. Head-mounted display. Joystick.
MEASURES: Collected by various questionnaires.
RESULTS: Significant improvement between pre and post results on the behavioral avoidance test, Spider Beliefs Questionnaire, and perceived self-efficacy.

AUTHORS: Kristin Bussey-Smith and Roger D. Rosen
YEAR: 2007
METHOD: Review article. State-of-the-art critical analysis.
SAMPLE: No subjects.
INSTRUMENTS/BIOSENSORS: Educational computer games.
MEASURES: In all reviewed articles, collected by various questionnaires.
RESULTS: Computerized asthma patient education programs may improve asthma knowledge and symptoms, but their effect on objective clinical outcomes is less consistent.

AUTHORS: Robillard, Bouchard, Fournier and Renaud
YEAR: 2003
METHOD: Three sessions of virtual exposure therapy were given to the phobic group. Phobogenic cues were immersed in periods of 20 minutes. After every 5 minutes the subjects gave verbal reports. The non-phobic group went through the same procedure, but in fewer sessions and less time.
SAMPLE: 13 specific phobic patients (9 women, 4 men; mean age 33.7) and 13 non-phobic patients (9 women, 4 men; mean age 33.9).
INSTRUMENTS/BIOSENSORS: Computer. TVEDGs. Head-mounted display. PC computer games.
MEASURES: Subjects' verbal reports of levels of anxiety, sense of presence, and simulator sickness. Questionnaires measuring sense of presence and symptoms of simulator sickness.
RESULTS: Anxiety could be induced in phobic patients by exposing them to phobogenic stimuli in therapeutic virtual environments derived from computer games (TVEDG).
AUTHORS: Rassin, Gutman and Silner
YEAR: 2004
METHOD: Interviews with the surgery and the control group to determine their favorite computer game types and their view on computer games as a form of education, and moreover, in the case of the surgery group, their main concerns and fears before the surgery. Semi-structured questionnaire with some questions directed to the accompanying parent.
SAMPLE: A study carried out to develop a game: 20 children aged 7-12, of whom 10 were awaiting surgery and 10 were healthy.
INSTRUMENTS/BIOSENSORS: Interviews and semi-structured questionnaires.
MEASURES: Evaluation of interview recordings and questionnaire answers.
RESULTS: Research shows that children of the computer age have a preference for computer-assisted learning. Interviews and questionnaires revealed children's greatest fears before surgery and how they prefer cartoon-like games.

AUTHORS: Cook, Meng, Gu and Howery
YEAR: 2002
METHOD: Pilot study using a robotic arm system to complete play-related tasks and compare the results to interventions using normal toys and computer games.
SAMPLE: 4 children with severe cerebral palsy, aged 6-7.
INSTRUMENTS/BIOSENSORS: Software in the form of a CRS robot manipulator.
MEASURES: Software control for a switch controlling playback of movements by the individual using an interface switch. Measures movement capability and cognitive understanding through speed and movement.
RESULTS: Children using the CRS robot manipulator were highly responsive to the robotic tasks but not to interventions using toys and computer games.

AUTHORS: Cohen, Hodson, O'Hare, Boyle, Durrani, McCartney, Mattey, Naftalin and Watson
YEAR: 2005
METHOD: Three groups: A) home-based intervention with FFW (N=23); B) computer software (N=27); C) control group (N=27). Outcomes: 9 weeks baseline and 6 months follow-up by various language questionnaires.
SAMPLE: 55 boys, 22 girls. Age range 6-10 years. With speech-language pathologists.
INSTRUMENTS/BIOSENSORS: Fast ForWord-Language packages.
MEASURES: Progress monitored; collected by various language questionnaires.
RESULTS: Each group made significant gains in language scores, but there was no additional effect for either computer intervention.

AUTHORS: Beale, Kato, Marin-Bowling, Guthrie and Cole
YEAR: 2007
METHOD: Two groups completed questionnaires at the beginning (baseline), 1 month after baseline, and 3 months after. The experimental group played Re-Mission and Indiana Jones; the control group played only Indiana Jones. Both were asked to play at least an hour a week during the 3 months.
SAMPLE: 375 patients with cancer. Age range 19-23.
INSTRUMENTS/BIOSENSORS: Re-Mission (serious game).
MEASURES: Knowledge test. Collected by knowledge test.
RESULTS: Video games can be an effective vehicle for health education in adolescents and young adults with chronic illnesses.
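Several of the studies above record heart rate as a physiological measure, and biofeedback work commonly derives heart-rate variability indices from it, for instance RMSSD over successive inter-beat (RR) intervals (cf. refs [39] and [40]). The following sketch is a generic illustration of that computation, not code from any of the cited studies; the sample interval values are invented.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (in ms),
    a standard time-domain heart-rate-variability index."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    # Differences between each pair of consecutive inter-beat intervals.
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: four consecutive (invented) RR intervals in milliseconds.
print(round(rmssd([800, 810, 790, 805]), 2))  # prints 15.55
```

Lower RMSSD over a session is typically read as reduced parasympathetic activity, which is why such indices are attractive as relaxation-biofeedback signals in games.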
8 References
[1]
Ten Myths About Serious games by Ben Sawyer, 30 Oct 2007,
http://www.escapistmagazine.com/articles/view/issues/issue_121/2575-Ten-Myths-About-Serious-Games (last visit: 05.05.2008)
[2]
Personal Investigator by David Coyle, Computer Science Dept. Trinity College Dublin.
https://www.cs.tcd.ie/David.Coyle/personalInvestigator.htm (last visit: 11.04.2008)
[3]
Coyle, D., Matthews, M., Sharry, J., Nisbet, A., & Doherty, G. (2005). Personal
Investigator: A Therapeutic 3D Game for Adolescent Psychotherapy. International
Journal of Interactive Technology and Smart Education, 2, 73-88.
[4]
PlayWrite by David Coyle, Computer Science Dept., Trinity College Dublin.
https://www.cs.tcd.ie/David.Coyle/playWrite.htm (last visit: 11.04.2008)
[5]
PlayWrite Games: https://www.cs.tcd.ie/~coyledt/playWriteGames.htm (last visit:
11.04.2008)
[6]
Self-Esteem Games, McGill University, Montreal.
http://selfesteemgames.mcgill.ca/index.htm (last visit: 11.04.2008)
[7]
Background theory from academic research on self-esteem, McGill University,
Montreal. http://selfesteemgames.mcgill.ca/resources/index.htm (last visit: 11.04.2008)
[8]
Hansen, P., & Hansen, R. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917-924.
[9]
EyeSpy: The Matrix Demo, http://selfesteemgames.mcgill.ca/games/sematrix.htm (last
visit: 11.04.2008)
[10] The Inhibition of Socially Rejecting Information Among People with High versus Low
Self-Esteem: The Role of Attentional Bias and the Effects of Bias Reduction Training in
the Journal of Social and Clinical Psychology, 2004, Volume 23, pp. 584-602.
[11] Wham! Self-Esteem Conditioning Demo,
http://selfesteemgames.mcgill.ca/games/wam.htm (last visit: 11.04.2008)
[12] Increasing Implicit Self-Esteem through Classical Conditioning in Psychological
Science, 2004, Volume 15, pp. 498-502.
[13] Grow your Chi! Demo, http://selfesteemgames.mcgill.ca/games/chigame.htm (last visit:
11.04.2008)
[14] Future developments: EyeZoom, Click’n’Smile, Word Search, McGill University,
Montreal. http://selfesteemgames.mcgill.ca/research/futureprojects.htm (last visit:
11.04.2008)
[15] NEAT-o-Games, Computational Physiology Lab, University of Houston, Texas.
http://www.cpl.uh.edu/html/Localuser/neat-o-games/ (last visit: 10.04.2008)
[16] Toscos T., Faber A., An S., and Gandhi M.P. Chick Clique: Pervasive technology to
motivate teenage girls to exercise. In Proc. CHI 2006, ACM press (2006), 1873-1878.
[17] Consolvo, S., Everitt, K., Smith, I., and Landay, J.A. Design requirements for technologies that encourage physical activity. In CHI 2006, ACM Press (2006), 457-466.
[18] Lee, G., Raab, F., Tsai, C., Patrick, K., and Griswold W.G. Work-in-progress: PmEB:
A mobile phone application for monitoring caloric balance. Ext. Abstracts CHI 2006,
(2006), 1013-1018.
[19] Hoysniemi J. International survey on the dance-dance revolution game. ACM
Computers in Entertainment 4, 2 (2006).
[20] Mokka, S., Vaatanen, A., Heinila, J., and Valkkynen, P. Fitness computer game with a
bodily user interface. In Proc. Of the Second International Conference on
Entertainment Computing 38, (2003), 1-3.
[21] Mueller, F. and Agamanolis S. Pervasive gaming: Sports over a distance. ACM
Computers in Entertainment 3, 3 (2006).
[22] Steinhausen, Hans-Christoph, and Margarete Vollrath. The self-image of adolescent patients with eating disorders. International Journal of Eating Disorders. Vol. 13, (2), 221-227. 1993.
[23] Button, Eric J., Philippa Loan, Jo Davies, and Edmund Sonuga-Barke. Self-esteem, eating problems, and psychological well-being in a cohort of schoolgirls aged 15-16: A questionnaire and interview study. International Journal of Eating Disorders. Vol. 21, (1), 39-47. 1997.
[24] Luis von Ahn, Games With a Purpose (http://www.cs.cmu.edu/~biglou/ieee-gwap.pdf),
IEEE Computer Magazine, June 2006.
[25] Luis von Ahn and Laura Dabbish, Labeling Images with a Computer Game
(http://www.cs.cmu.edu/~biglou/ESP.pdf), ACM CHI 2004.
[26] Luis von Ahn, Ruoran Liu and Manuel Blum, Peekaboom: A Game for Locating
Objects in Images (http://www.cs.cmu.edu/~biglou/Peekaboom.pdf), ACM CHI 2006.
[27] The ESP Game by Luis von Ahn: http://www.espgame.org/ (last visit: 16.04.2008)
[28] Google Talk on Human Computation, Luis von Ahn, July 26, 2006.
http://video.google.com/videoplay?docid=-8246463980976635143 (last visit: 16.04.2008)
[29] Exergaming on Wikipedia: http://en.wikipedia.org/wiki/Exergaming (last visit:
16.04.2008)
[30] Dance Dance Revolution on Wikipedia:
http://en.wikipedia.org/wiki/Dance_Dance_Revolution (last visit: 17.04.2008)
[31] Wii players need to exercise too, BBC News Online, 21 December 2007.
http://news.bbc.co.uk/2/hi/health/7155342.stm (last visit: 17.04.2008)
[32] Wii Fit on Wikipedia: http://en.wikipedia.org/wiki/Wii_Fit (last visit: 17.04.2008)
[33] Exergaming fad or “fit” for purpose:
http://drilly.wordpress.com/2007/07/07/exergaming-fad-or-fit-for-purpouse/ (last visit:
17.04.2008)
[34] Peek-a-Boom: http://peekaboom.org/ (last visit: 17.04.2008)
[35] Social gaming picks up momentum. Ellen Lee, Chronicle Staff Writer, San Francisco Chronicle, March 31st, 2008. http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/03/31/BU5GVSA3F.DTL (last visit: 18.04.2008)
[36] The Journey to Wild Divine: The Passage – User’s Manual, section V: Biofeedback
Technology in The Journey to Wild Divine, pp.21-24
[37] An Analysis of the Potential of Affective Gaming, A Critique of Journey to Wild Divine.
Ella Romanos, Games Critique, Design for Entertainment Systems, Digital Art &
Technology BSc (Hons), University of Plymouth, November 2007.
[38] The Journey to Wild Divine: The Passage:
http://www.wilddivine.com/JourneytoWildDivine/ (last visit: 18.04.2008)
[39] Heart Rate Variability, summary prepared by Ichiro Kawachi in collaboration with the
Allostatic Load Working Group, 1997.
http://www.macses.ucsf.edu/Research/Allostatic/notebook/heart.rate.html (last visit:
18.04.2008)
[40] Heart Rate Variability: An Indicator of Autonomic Function and Physiological Coherence. Institute of HeartMath. http://www.heartmath.org/research/science-of-the-heart/soh_13.html (last visit: 18.04.2008)
[41] Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me,
Emote Me. K. M. Gilleade, A. Dix (Computing Department, Lancaster University), J.
Allanson (Allanson Consulting), 2005.
[42] Intelligent Biofeedback using an Immersive Competitive Environment. D. Bersak, G.
McDarby, N. Augenblick, P. McDarby, D. McDonnell, B. McDonald, R. Karkun, Media
Lab Europe, Dublin.
[43] Vyro Games: http://www.vyro-games.com/ (last visit: 25.04.2008)
[44] ‘Relax to Win’ – Treating children with anxiety problems with a biofeedback video
game. J. Sharry, M. McDermott, J. Condron. Media Lab Europe and Department of
Child and Family Psychiatry, Mater Hospital.
[45] Reitmayr, G. & Schmalstieg, D. (2005), 'OpenTracker: A flexible software design for
three-dimensional interaction', Virtual Real. 9(1), 79--92.
[46] Virtual Reality Peripheral Network: http://www.cs.unc.edu/Research/vrpn/index.html
(last visit: 22.04.2008)
[47] VRCO trackd:
http://www.inition.com/inition/product.php?URL_=product_software_vrco_trackd&SubC
atID_=69 (last visit: 22.04.2008)
[48] Gadgeteer: http://www.vrjuggler.org/gadgeteer/ (last visit: 22.04.2008)
[49] 3D User Interfaces - Theory and Practice, D. A. Bowman, E. Kruijff, J. J. LaViola Jr., I. Poupyrev, Addison-Wesley, 2005.
[50] Motion capture on Wikipedia: http://en.wikipedia.org/wiki/Motion_capture (last visit:
07.01.2008)
[51] Motion Capture - What is it? http://www.metamotion.com/motion-capture/motion-capture.htm (last visit: 07.01.2008)
[52] MyHeart (IST-2002-507816): http://www.hitech-projects.com/euprojects/myheart/ (last
visit: 28.04.2008)
[53] Mobile Phones as Interfaces to Serious games, J. Arrasvuori, Nokia Research Center. Serious games Summit, Games Developer Conference 2006, San José, USA, 2006-03. http://www.hitech-projects.com/euprojects/myheart/public/public_results_documents/documents/MyHeart-serious-games-san-jose-2006-Nokia.pdf (last visit: 28.04.2008)
[54] Multiplayer Game of the Year. August 2006.
http://www.cabel.name/2006/08/multiplayer-game-of-year.html (last visit: 28.04.2008)
[55] Nike+iPod on Wikipedia: http://en.wikipedia.org/wiki/Nike%2BiPod (last visit:
28.04.2008)
[56] Nike+ Website: http://nikeplus.nike.com/nikeplus/ (last visit: 28.04.2008)
[57] Girls Involved in Real Life Sharing: Utilizing Technology to Support the Emotional
Development of Teenaged Girls. Shaundra B. Daily and Rosalind W. Picard,
Massachusetts Institute of Technology.
[58] Digital story explication as it relates to emotional needs and learning. Shaundra B.
Daily, Massachusetts Institute of Technology.
[59] Girls Involved in Real Life Sharing: http://affect.media.mit.edu/projectpages/girls/ (last
visit: 24.08.2008)
[60] Liu, H. and Singh, P. (2004). ConceptNet: a practical commonsense reasoning toolkit.
BT Technology Journal 22(4), 211-226.
[61] Liu, H., Selker, T. and Lieberman, H. (2003). Visualizing the Affective Structure of a
Text Document. Conference on Human Factors in Computing Systems, CHI 2003, Ft.
Lauderdale, FL.
[62] White, M., & Epston, D. (1990). Narrative means to therapeutic ends (1st ed.). New
York: Norton.
[63] Papert, S. (1980). Mindstorms : Children, computers, and powerful ideas. New York:
Basic Books.
[64] Solution Focused Brief Therapy on Wikipedia:
http://en.wikipedia.org/wiki/Solution_focused_brief_therapy (last visit: 29.04.2008)
[65] MUPE: Multi User Publishing Environment: http://www.mupe.net/ (last visit:
29.04.2008)
[66] S. Furui, K. Yamaguchi, “Designing a multimodal dialogue system for information retrieval,” in ICSLP-1998, paper 0036
[67] K. Kvale and N. Warakagoda, “A Speech Centric Mobile Multimodal Service useful for Dyslectics and Aphasics,” Proceedings of Interspeech 2005, pp. 461-464
[68] M. Johnston, S. Bangalore, G. Vasireddy, A. Stent, P. Ehlen, M. Walker, S. Whittaker, and P. Maloor, “MATCH: An architecture for multimodal dialogue systems,” Proc. Annu. Meeting of the Association for Computational Linguistics, 2002
[69] Ivan Kopecek and Radek Oslejsek “Creating Pictures by Dialogue,” Computers
Helping People with Special Needs, pp.61-68, 2006.
[70] Leo Ferres, Avi Parush, Shelley Roberts, and Gitte Lindgaard, “Helping People with
Visual Impairments Gain Access to Graphical Information Through Natural Language
The iGraph System,” Computers Helping People with Special Needs, pp. 1122-1130,
2006
[71] X. Huang, A. Acero, C. Chelba, L. Deng, D. Duchene, J. Goodman, H.-W. Hon, D.
Jacoby, L. Jiang, R. Loynd, M. Mahajan, P. Mau, S. Meredith, S. Mughal, S. Neto, M.
Plumpe, K. Wand, and Y. Wang, “MIPAD: A next generation PDA prototype,” in Proc.
ICSLP 2000, Beijing, China, 2000
[72] K. Wang, “A Plan-Based Dialog System with Probabilistic Inferences” Proc ICSLP
2000. Beijing, China, 2000
[73] Beskow J, Karlsson I, Kewley J and Salvi G (2004). “SYNFACE - A Talking Head
Telephone for the Hearing-impaired,” In K Miesenberger, J Klaus, W Zagler, D Burger
eds Computers helping people with special needs 1178-1186
[74] Johan Boye, Joakim Gustafson and Mats Wirén, Robust spoken language understanding in a computer game, Speech Communication, Volume 48, Issues 3-4, March-April 2006, Pages 335-353
[75] Jenny Brusk, Game Dialogue Management, Term Paper in Dialogue Systems, GSLT2, spring 2006
[76] Rudra, T., Bossomaier, T., Cognitive Emotion in Speech Interactive Games, TENCON 2005, pp. 1-6
[77] Diane J. Litman and Kate Forbes-Riley, Recognizing student emotions and attitudes
on the basis of utterances in spoken tutoring dialogues with both human and computer
tutors, Speech Communication, Volume 48, Issue 5, May 2006, pp. 559-590
[78] C. M. Lee and S.S. Narayanan, “Towards detecting emotions in spoken dialogs,” IEEE
Transactions on Speech and Audio Processing, Vol 13, No. 2, pp. 293-303, 2005
[79] J. Ang, R. Dhillon, A. Krupski, E. Shriberg, A. Stolcke, “Prosody based automatic
detection of annoyance and frustration in human computer dialog,” Proc. Of ICSLP,
Denver, 2002, pp. 2037-2040
[80] M. Walker, J. Aberdeen, J. Boland, E. Bratt, J. Garafolo, L. Hirschman, A. Le, S. Lee,
S. Narayanan, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, P. Prabhu, A.
Rudnicky, G. Sanders, S. Seneff, D. Stallard, and S.Whittaker, “DARPA Communicator
dialog travel planning systems: The June 2000 data collection”, in P. Dalsgaard, B.
Lindberg, H. Benner, and Z. Tan, editors, Proc. EUROSPEECH, pp. 1371–1374,
Aalborg, Denmark, Sep. 2001.
[81] J. Liscombe, G. Riccardi, D. Hakkani-Tür, “Using context to improve emotion detection
in spoken dialog systems,” Proc. Interspeech 2005, pp. 1845-1848, 2005
[82] A. L. Gorin, G. Riccardi and J. H. Wright, “How may I help you?,” Speech Communication, vol. 23, pp. 113-127, 1997
[83] Batliner, A., Hacker, C., Steidl, S., Nöth, E., & Haas, J. From emotion to interaction: Lessons from real human-machine dialogues. In E. André, L. Dybkjaer, & W. Minker, Affective Dialogue Systems: Tutorial and Research Workshop, ADS 2004, Kloster Irsee, Germany, June 14-16, 2004. Lecture Notes in Computer Science 3068/2004 (pp. 1-12). Heidelberg: Springer.
[84] Batliner, A., Burkhardt, F., van Ballegooy, M., & Nöth, E. (2006). A taxonomy of
applications that utilize emotional awareness. In Erjavec, T. and Gros, J. (Ed.),
Language Technologies, IS-LTC 2006 (pp. 246-250). Ljubljana, Slovenia: Infornacijska
Druzba
[85] W. Xu, A. Rudnicky, “Task-based dialogue management using an agenda,” ANLP/NAACL 2000 Workshop on Conversational Systems, May 2000
[86] D. Bohus and A. Rudnicky, “RavenClaw: Dialog Management Using Hierarchical Task Decomposition and an Expectation Agenda”, in Eurospeech-2003, Geneva, Switzerland
[87] Open Speech Recognizer. http://www.nuance.com (last visit: 05.05.2008)
[88] Loquendo Technologies. http://www.loquendo.com (last visit: 05.05.2008)
[89] F. Flippo, A natural human-computer interface for controlling wheeled robotic vehicles,
Technical Report DKS04-02, Delft University of Technology, 2004.
[90] A. Kain and M. Macon, "Spectral voice conversion for text-to-speech synthesis"
Proceedings of ICASSP, pp. 285-288, May 1998.
[91] Narendranath, M., Murthy, H., Rajendran, S., Yegnanarayana, B. (1995).
Transformation of formants for voice conversion using artificial neural networks.
Speech Communication vol.16, pp. 207-216.
[92] Watanabe, T., Murakami, T., Namba, M., Hoya, T., Ishida, Y. (2002). Transformation of
spectral envelope for voice conversion based on radial basis function networks. In
Proceedings of Interspeech 2002 ICSLP International Conference on Spoken
Language Processing, Denver, USA, pp. 285-288.
[93] Y. Stylianou, O. Cappé, and E. Moulines. Continuous probabilistic transform for voice conversion. IEEE Trans. Speech and Audio Processing, Vol. 6, No. 2, pp. 131-142, 1998.
[94] MobiHealth, European project number IST-2001-36006, http://www.mobihealth.com
(last visit: 08.05.2008)
[95] Gamebryo: http://www.emergent.net/ (last visit: 13.05.2008)
[96] Havok: http://www.havok.com/ (last visit: 13.05.2008)
[97] Renderware: http://www.renderware.com/ (last visit: 13.05.2008)
[98] Speedtree: http://www.speedtree.com/ (last visit: 13.05.2008)
[99] Taito: http://www.taito.co.jp/eng/ (last visit: 13.05.2008)
[100] Atari: http://www.atari.com/ (last visit: 13.05.2008)
[101] Half-Life 2: Lost Coast: http://en.wikipedia.org/wiki/Half-Life_2:_Lost_Coast (last visit:
13.05.2008)
[102] Unreal Engine:
http://en.wikipedia.org/wiki/Unreal_Engine_technology#Unreal_Engine_3.0 (last visit:
13.05.2008)
[103] Epic Games: http://en.wikipedia.org/wiki/Epic_Megagames (last visit: 13.05.2008)
[104] OGRE 3D: http://en.wikipedia.org/wiki/OGRE_3D (last visit: 13.05.2008)
[105] Nexuiz: http://en.wikipedia.org/wiki/Nexuiz (last visit: 13.05.2008)
[106] J. Carmack. Talk on Game Development at GDC2004.
http://pc.ign.com/articles/502/502167p1.html (last visit: 13.05.2008)
[107] ID Software: http://www.idsoftware.com/ (last visit: 17.05.2008)
[108] “Earthquake in Zipland” on Wikipedia:
http://en.wikipedia.org/wiki/Earthquake_in_Zipland (last visit: 15.05.2008)
[109] ZipLand Interactive: http://www.ziplandinteractive.com/ (last visit: 15.05.2008)
[110] Szer J. Video games as physiotherapy. Medical Journal of Australia, 1983; 1: 401-402.
[111] King TI. Hand strengthening with a computer for purposeful activity. American Journal
of Occupational Therapy 1993; 47: 635-637.
[112] O’Connor TJ, Cooper RA, Fitzgerald SG, Dvorznak MJ, Boninger ML, VanSickle DP, et al. Evaluation of a manual wheelchair interface to computer games. Neurorehabil Neural Repair 2000;14:21-31.
[113] Adriaenssens EE, Eggermont E, Pyck K, Boeckx W, Gilles B. The video invasion of
rehabilitation. Burns 1988;14:417-9.
[114] Vilozni D, Bar-Yishay E, Shapira Y, Meyer S, Godfrey, S. Computerized respiratory
muscle training in children with Duchenne Muscular Dystrophy. Neuromuscular
Disorders 1994; 4: 249-255.
[115] Gaylord-Ross RJ, Haring TG, Breen C, Pitts-Conway V. The training and
generalization of social interaction skills with autistic youth. Journal of Applied
Behaviour Analysis, 1984; 17: 229.
[116] Kappes BM, Thompson DL. Biofeedback vs. video games: Effects on impulsivity, locus
of control and self-concept with incarcerated individuals. Journal of Clinical Psychology
1985; 41: 698-706.
[117] Beale IL, Kato PM, Marin-Bowling VM, Guthrie N, Cole SW. Improvement in Cancer-Related Knowledge Following Use of a Psychoeducational Video Game for Adolescents and Young Adults with Cancer, Journal of Adolescent Health, 41 (3), p. 263-270, Sep 2007.
[118] Fawzy, I., Fawzy, N. W., & Canada, A. L. Psychoeducational intervention programs for
patients with cancer. In B. Andersen (Ed.), Psychosocial interventions for cancer (pp.
235-268) 2001. Washington, DC: American Psychological Association.
[119] Fernández-Aranda F, Núñez A, Martínez C, Krug I, Cappozzo M, Carrard I, Rouget P,
Jiménez-Murcia S, Granero R, Penelo E, Santamaría J, Lam T. Internet-Based
Cognitive-behavioral Therapy for Bulimia nervosa: A controlled study, 2007 (In Press).
[120] Walshe, D., Lewis, E.J., Kim, S.I., O’Sullivan, K. and Wiederhold, B.K. (2003). Exploring the Use of Computer Games and Virtual Reality in Exposure Therapy for Fear of Driving Following a Motor Vehicle Accident. Cyberpsychology & Behavior, 6(3), 329-334.
[121] Wang, X. and Perry, A.C (2006). Metabolic and Physiologic Responses to Video
Game Play in 7- to 10-Year-Old Boys. Arch Pediatr Adolesc Med., 160, 411-415
[122] Knowlton, G.E and Larkin, K.T (2006). The Influence of Voice Volume, Pitch, and
Speech Rate on Progressive Relaxation Training: Application of Methods from Speech
Pathology and Audiology. Applied Psychophysiology and Biofeedback, 31(2), 173-184.
[123] Najström, M. and Jansson, B. (2006). Skin conductance responses as predictor of
emotional responses to stressful life events. Behaviour Research and Therapy, 45,
2456–2463.
[124] Parallax Software: http://en.wikipedia.org/wiki/Parallax_Software (last visit:
17.05.2008)
[125] Duke Nukem I at 3DRealms (ex Apogee): http://www.3drealms.com/duke1/ (last visit:
17.05.2008)
[126] Legend Entertainment: http://en.wikipedia.org/wiki/Legend_Entertainment (last visit:
17.05.2008)
[127] Rockstar Games: http://www.rockstargames.com/ (last visit: 17.05.2008)
[128] DeLeon, V. Notre Dame Tour Guide. Demonstration designed by the Digitalo Studios.
Available on WWW at http://www.digitalo.com (1st of July, 2000)
[129] Miliano, V. Unrealty: Application of a 3D Game Engine to Enhance the Design,
Visualization and Presentation of Commercial Real Estate. 5th International
Conference on Virtual Systems and Multimedia, September 1-3, Dundee, Scotland,
1999.
[130] Maria Sifniotis, “Featured 3d Method: 3d Visualisation Using Game Platforms” in
3DVisA Bulletin, Issue 3, September 2007:
http://3dvisa.cch.kcl.ac.uk/paper_sifniotis1.html (last visit: 19.05.2008)
[131] Meister, M. and Boss, M. , 'On Using State of the Art Computer Games Engines to
Visualize Archaeological Structures in Interactive Teaching and Research',
Proceedings of CAA Computer Applications in Archaeology, 2003.
[132] Morrowind homepage: http://www.elderscrolls.com/games/morrowind_overview.htm
(last visit: 18.05.2008)
[133] Anderson, M. 'Computer Games and Archaeological Reconstruction: The low cost VR',
Proceedings of CAA Computer Applications in Archaeology, 2003
[134] Calef, C., Vilbrandt, C., Goodwin, J., 'Making it Realtime: Exploring the Use of Optimized Realtime Environments for Historical Simulation and Education',
Proceedings of the International Conference on Museums and the Web, Bearman, D.
(ed.), 2002.
[135] Universally Accessible Games: http://www.ics.forth.gr/hci/ua-games/. Human-Computer Interaction Laboratory, ICS-FORTH. (last visit: 30.06.2008)
[136] Unified Design for UAG:
http://www.gamasutra.com/features/20061207/grammenos_01.shtml. Dimitris
Grammenos, Anthony Savidis, HCI Lab, ICS-FORTH. (last visit: 30.06.2008)
[137] Accessibility in Games: Motivation and Approaches. Game Accessibility Special
Interest Group, International Game Developers Association, 2004.
http://www.igda.org/accessibility/IGDA_Accessibility_WhitePaper.pdf (last visit:
30.06.2008)
[138] Game Accessibility: Project Gallery http://www.gameaccessibility.com/index.php?pagefile=project_gallery (last visit: 02.07.2008)
[139] AbleData: Perceptual Training Program: http://www.abledata.com/abledata.cfm?pageid=19327&top=11224&trail=22,11114,11218 (last visit: 02.07.2008)
[140] Games for Health: Pre-Conference Workshop Schedule: Games Accessibility Day:
http://www.gamesforhealth.org/archives/000221.html (last visit: 02.07.2008)
[141] The Winds of Orbis: An Active Adventure: http://www.etc.cmu.edu/projects/wiixercise/. Entertainment Technology Center, Carnegie Mellon University (last visit: 07.07.2008)
[142] Student Postmortem: ETC’s The Winds of Orbis. Seth Sivak, Game Career Guide,
2008.
http://www.gamecareerguide.com/features/562/student_postmortem_etcs_the_.php?p
age=1 (last visit: 07.07.2008)
[143] 64 percent of those polled gave up Wii Fit. Kotaku, July 4 2008.
http://kotaku.com/5022132/64-percent-of-those-polled-gave-up-wii-fit (last visit:
07.07.2008)
[144] Emotiv: http://www.emotiv.com/ (last visit: 15.07.2008)
[145] Neural Impulse Actuator, OCZ Technology:
http://www.ocztechnology.com/products/ocz_peripherals/nia-neural_impulse_actuator
(last visit: 15.07.2008)
[146] NeuroSky: http://www.neurosky.com/menu/main/technology/product_summary/ (last
visit: 15.07.2008)
[147] Gesturetek Health, IREX platform
http://www.gesturetekhealth.com/rehabsolutions/index.php (last visit: 15.07.2008)
[148] Sony eye toy: http://www.eyetoy.com (last visit: 15.07.2008)
[149] Patrice L Weiss, Debbie Rand, Noomi Katz and Rachel Kizony, Video capture virtual
reality as a flexible and effective rehabilitation tool, Journal of NeuroEngineering and
Rehabilitation 2004, 1:12
[150] http://www.caip.rutgers.edu/vrlab/projects/ankle/ankle.html (last visit: 15.07.2008)
[151] Maureen K. Holden, Virtual Environments for Motor Rehabilitation: Review,
CyberPsychology & Behaviour, Volume 8, Number 3, 2005
[152] Boian, R.F., Sharma, A., Han, C., et al. (2002). Virtual reality–based post-stroke hand
rehabilitation: medicine meets virtual reality. Newport Beach, CA. IOS Press.
[153] Broeren, J., Georgsson, M., Rydmark, M., et al. (2002). Virtual reality in stroke
rehabilitation with the assistance of haptics and telemedicine. Presented at the 4th
International Conference on Disability, Virtual Reality & Associated Technologies,
Veszprem, Hungary.
[154] Connor, B.B., Wing, A.M., Humphreys, G.W., et al. (2002). Errorless learning using haptic guidance: research in cognitive rehabilitation following stroke. Presented at the 4th International Conference on Disability, Virtual Reality & Associated Technologies, Veszprem, Hungary.
[155] Foreman, N., Orencas, C., Nicholas, E., et al. (1989). Spatial awareness in seven to 11-year-old physically handicapped children in mainstream schools. European Journal of Special Needs Education 4:171–178.
[156] Wilson, P.N., Foreman, N., & Tlauka, M. (1996). Transfer of spatial information from a virtual to a real environment in physically disabled children. Disability and Rehabilitation 18:633–637.
[157] Minhua Ma, Michael McNeill, Darryl Charles, Suzanne McDonough, Jacqui Crosbie,
Louise Oliver, and Clare McGoldrick, Adaptive Virtual Reality Games for Rehabilitation
of Motor Disorders, Universal Access in HCI, Part II, HCII 2007, LNCS 4555, pp. 681–
690, 2007.
[158] Sharma R, Khera S, Mohan A, Gupta N, Ray RB. Assessment of computer game as a
psychological stressor. Indian J Physiol Pharmacol. 2006 Oct-Dec;50(4):367-74.
[159] Larkin KT, Zayfert C, Abel JL, Veltum LG. Reducing heart rate reactivity to stress with feedback. Generalization across task and time. Behav Modif. 1992 Jan;16(1):118-31.
[160] Larkin KT, Manuck SB, Kasprowicz AL. The effect of feedback-assisted reduction in heart rate reactivity on videogame performance. Biofeedback Self Regul. 1990 Dec;15(4):285-303.
[161] Katsis, C.D., Ganiatsas, G., Fotiadis, D.I. An integrated telemedicine platform for the assessment of affective physiological states. Diagnostic Pathology, 2006. 1(1): 16. doi:10.1186/1746-1596-1-16
[162] Najström, M., Jansson, B. Skin conductance responses as predictor of emotional responses to stressful life events. Behaviour Research and Therapy, 2007. 45(10): 2456-2463.
[163] Treasure Hunt: http://www.treasurehunt.uzh.ch/index_en.html (last visit: 26.01.2009)
[164] Brezinka V. Treasure Hunt – a serious game to support psychotherapeutic treatment
of children. Stud Health Technol Inform., 2008. 136, p.71-76.
[165] Kato P., Beale I., Factors affecting acceptability to young cancer patients of a
psychoeducational video game about cancer. Journal of Pediatric Oncology Nursing,
2006; 23; 269
[166] Kato P., Cole S., Bradlyn A. and Pollock B. A video game improves behavioral
outcomes in adolescents and young adults with cancer: a randomized trial. Pediatrics,
2008; 122; e305-e317.
[167] Mitchell, J. G. and P. J. Robinson. A multi-purpose telehealth network for mental health
professionals in rural and remote areas. Telemed Today, 2000. 8(3): 4-5, 28.
[168] Hicks, L. L., K. E. Boles, et al. Using telemedicine to avoid transfer of rural emergency
department patients. J Rural Health, 2001. 17(3): 220-8.
[169] Baer, L., P. Cukor, et al. Pilot studies of telemedicine for patients with obsessive-compulsive disorder. Am J Psychiatry, 1995. 152(9): 1383-5.
[170] Zarate, C. A., Jr., L. Weinstock, et al. Applicability of telemedicine for assessing
patients with schizophrenia: acceptance and reliability. J Clin Psychiatry, 1997. 58(1):
22-5.
[171] Myers, T. C., L. Swan-Kremeier, et al. The use of alternative delivery systems and new
technologies in the treatment of patients with eating disorders. Int J Eat Disord, 2004.
36(2): 123-43.
[172] Botella, C., et al., The use of VR in the treatment of panic disorders and agoraphobia.
Stud Health Technol Inform, 2004. 99: p. 73-90.
[173] Wood, D.P., et al., Combat related post traumatic stress disorder: a multiple case
report using virtual reality graded exposure therapy with physiological monitoring. Stud
Health Technol Inform, 2008. 132: p. 556-61.
[174] Difede, J., et al., Virtual reality exposure therapy for the treatment of posttraumatic
stress disorder following September 11, 2001. J Clin Psychiatry, 2007. 68(11): p.
1639-47.
[175] Lee, J.H., et al., Cue-exposure therapy to decrease alcohol craving in virtual
environment. Cyberpsychol Behav, 2007. 10(5): p. 617-23
[176] Griffiths, M. D., Hunt, N. Computer game playing in adolescence: prevalence and demographic indicators. Journal of Community and Applied Social Psychology, 1995. 5, 189–193.
[177] Griffiths, M. D., Hunt, N. Dependence on computer games by adolescents.
Psychological reports, 1998. 82 (2), p.475-480.
[178] Schott G., Hodgetts D. Health and Digital Gaming: The Benefits of a Community of Practice. J Health Psychol, 2006. 11(2): 309–316.
[179] Griffiths, M. Can videogames be good for your health? J Health Psychol, 2004. 9(3): 339-44.
[180] Ritterfeld U., Cody M., and Vorderer P. Serious Games: Mechanisms and Effects.
2008, Ed. Routledge, Taylor and Francis.
[181] Holden Maureen K. Virtual Environments for Motor Rehabilitation: Review,
CyberPsychology & Behaviour, Volume 8, Number 3, 2005
[182] Deutsch Judith E., Jason Latonio, Grigore Burdea, Rares Boian. Rehabilitation of Musculoskeletal Injuries Using the Rutgers Ankle Haptic Interface: Three Case Reports. In Proceedings of the EuroHaptics 2001 Conference.
[183] Bellack, A. S., Dickinson, D., et al. The development of a computer-assisted cognitive
remediation program for patients with schizophrenia. Isr J Psychiatry Relat Sci, 2005.
42(1): 5-14.
[184] Bussey-Smith, K. L. and R. D. Rossen. A systematic review of randomized control
trials evaluating the effectiveness of interactive computerized asthma patient education
programs. Ann Allergy Asthma Immunol, 2007. 98(6): 507-16; quiz 516, 566
[185] Broeren, J., M. Rydmark, et al. Assessment and training in a 3-dimensional virtual
environment with haptics: a report on 5 cases of motor rehabilitation in the chronic
stage after stroke. Neurorehabil Neural Repair, 2007. 21(2): 180-9.
[186] Broeren, J., K. S. Sunnerhagen and M. Rydmark. "A kinematic analysis of a haptic
handheld stylus in a virtual environment: a study in healthy subjects." J
Neuroengineering Rehabil, 2007, 4: 13.
[187] Broeren J, L. Claesson, D. Goude, M. Rydmark, K. Stibrant Sunnerhagen. "Virtual Rehabilitation in an activity centre for community dwelling persons with stroke; the possibilities of 3D computer games." (Submitted)
[188] Broeren, J., H. Samuelsson, K. Stibrant-Sunnerhagen, C. Blomstrand and M. Rydmark. "Neglect assessment as an application of virtual reality." Acta Neurologica Scandinavica (in press)
[189] Beale, I. L., P. M. Kato, et al. Improvement in cancer-related knowledge following use
of a psychoeducational video game for adolescents and young adults with cancer. J
Adolesc Health, 2007. 41(3): 263-70.
[190] Rassin, M., Y. Gutman, et al. Developing a computer game to prepare children for
surgery. Aorn J, 2004. 80(6): 1095-6, 1099-102.
[191] Coyle, D., Matthews. M. Personal Investigator: a Therapeutic 3D Game for Teenagers.
CHI2004 Vienna 25-29 April 2004. Presented at the Social Learning Through Gaming
Workshop.
[192] Coyle, D., Matthews M., et al. Personal Investigator: A therapeutic 3D game for
adolescent psychotherapy. Journal of Interactive Technology & Smart Education,
2005. 2(2): 73-88
[193] Walshe, D. G., E. J. Lewis, et al. Exploring the use of computer games and virtual
reality in exposure therapy for fear of driving following a motor vehicle accident.
Cyberpsychol Behav, 2003. 6(3): 329-34
[194] Fairburn, C.G., et al., Psychotherapy and bulimia nervosa. Longer-term effects of
interpersonal psychotherapy, behavior therapy, and cognitive behavior therapy. Arch
Gen Psychiatry, 1993. 50(6): p. 419-28.
[195] Openshaw, C., Waller, G., Sperlinger, D. Group cognitive-behavior therapy for bulimia
nervosa: statistical versus clinical significance of changes in symptoms across
treatment. The International Journal of Eating Disorders, 2004. 36 (4), p.363-375
[196] Wilfley, D. E., Agras, W. S., et al. Group cognitive-behavioral therapy and group
interpersonal psychotherapy for the nonpurging bulimic individual: a controlled
comparison. Journal of consulting and clinical psychology, 1993. 61 (2), p.296-305
[197] Fairburn, C. G., Welch, S. L., et al. Risk factors for bulimia nervosa. A community-based case-control study. Archives of general psychiatry, 1997. 54 (6), p.509-517
[198] Fairburn, C.G. and P.J. Harrison, Eating disorders. Lancet, 2003. 361(9355): p. 407-16.
[199] Fernández-Aranda, F., Casanovas, C.,et al. Eficacia del tratamiento ambulatorio en
bulímia nervosa. Revista Psicologia Conductual, 2004. 12(3), 501-518.
[200] Fernandez-Aranda, F., et al., Impulse control disorders in women with eating
disorders. Psychiatry Res, 2008. 157(1-3): p. 147-57.
[201] Fernández-Aranda, F., et al., Nuevas tecnologías en el tratamiento de los trastornos de la alimentación. Cuadernos de Medicina Psicosomática y Psiquiatría de enlace, 2007. 82: p. 7-16.
[202] Fernandez-Aranda F, & Turon, V.: Trastornos alimentarios. Guia basica de tratamiento
en anorexia y bulimia. Barcelona, Masson, 1998
[203] Carrard, I., P. Rouget, et al. Evaluation and deployment of evidence based patient self-management support program for Bulimia Nervosa. Int J Med Inform, 2006. 75(1): 10-19.
[204] Rouget, P., I. Carrard, et al. [Self-treatment for bulimia on the Internet: first results in Switzerland]. Rev Med Suisse, 2005. 1(5): 359-61.
[205] Tchanturia, K., et al., An investigation of decision making in anorexia nervosa using
the Iowa Gambling Task and skin conductance measurements. J Int Neuropsychol
Soc, 2007. 13(4): p. 635-41.
[206] Tchanturia, K., H. Davies, and I.C. Campbell, Cognitive remediation therapy for
patients with anorexia nervosa: preliminary findings. Ann Gen Psychiatry, 2007. 6: p.
14.
[207] Vanderlinden, J., et al., Which factors do provoke binge eating? An exploratory study
in eating disorder patients. Eat Weight Disord, 2004. 9(4): p. 300-5.
[208] Alvarez-Moya, E.M., et al. Comparison of personality risk factors in bulimia nervosa
and pathological gambling. Compr Psychiatry, 2007. 48(5): p. 452-7.
[209] Jiménez Murcia, S., Alvarez Moya, E., et al. Cognitive-Behavioral Group Treatment for Pathological Gambling: Analysis of efficacy and predictors of therapy outcome. Psychotherapy Research, 2006. 17 (5), 544-552.
[210] Jiménez Murcia, S.; Aymamí Sanromà, N. et al. Tractament cognitiu-conductual per al joc patològic i d’altres addiccions no tòxiques. Hospital Universitari de Bellvitge, L’Hospitalet del Llobregat (ISBN 84-89505-78-0), 2006.
[211] Jimenez-Murcia, S. Stinchfield, R. et al. Reliability, Validity, and Classification
Accuracy of a Spanish Translation of a Measure of DSM-IV Diagnostic Criteria for
Pathological Gambling. J Gambl Stud, 2008. Jul 1
[212] Young, K. S., Cognitive behavior therapy with Internet addicts: treatment outcomes
and implications. Cyberpsychology & behavior : the impact of the Internet, multimedia
and virtual reality on behavior and society, 2007. 10 (5), p.671-679.
[213] Campbell, A. J., Cumming, S. R. and Hughes, I., Internet use by the socially fearful:
addiction or therapy?, Cyberpsychology & behavior : the impact of the Internet,
multimedia and virtual reality on behavior and society, 2006. 9 (1), p.69-81.
[214] Kolster, B.C., & Ebelt-Paprotny, G. Leitfaden Physiotherapie – 4. Auflage. München: Urban und Fischer, 2002.
[215] Holden Maureen K. Virtual Environments for Motor Rehabilitation: Review,
CyberPsychology & Behaviour, Volume 8, Number 3, 2005
[216] Kasten, E., Wust, S., Behrens-Baumann, W. & Sabel, B. A. Computer-based training for the treatment of partial blindness. Nat. Med., 1998. 4, 1083-1087.
[217] Rose FD, Brooks BM, Rizzo AA. Virtual reality in brain damage rehabilitation: review.
Cyberpsychol Behav 2005;8:241-62; discussion 263-71.
[218] Heidi Sveistrup, Motor rehabilitation using virtual reality. Journal of NeuroEngineering and Rehabilitation 2004, 1:10
[219] Weiss Patrice L, Debbie Rand, Noomi Katz and Rachel Kizony, Video capture virtual
reality as a flexible and effective rehabilitation tool, Journal of NeuroEngineering and
Rehabilitation 2004, 1:12
[220] 3D Engines database, a directory listing by DevMaster.net, at
http://www.devmaster.net/engines/ (last visit: 17/02/2009)
9 List of Abbreviations
A.R.T.	Advanced Realtime Tracking GmbH
AI	Artificial Intelligence
API	Application Programming Interface
AR	Augmented Reality
ASR	Automatic Speech Recognition
BN	Bulimia Nervosa
BPM	Beats per Minute
BSD	Berkeley Software Distribution
CAD	Computer-Aided Design
CAPTCHA	Completely Automated Public Turing test to tell Computers and Humans Apart
CAVE	Computer Assisted Virtual Environment
CBT	Cognitive Behavioural Therapy
CEO	Chief Executive Officer
CLOD	Continuous Level of Detail
CPU	Central Processing Unit
CSG	Constructive Solid Geometry
DDR	Dance Dance Revolution
DLL	Dynamic Link Library
DOF	Degree Of Freedom
DTD	Document Type Definition
ECG	Electrocardiogram
ED	Eating Disorders
EDR	Electrodermal Response
EEG	Electroencephalogram
EMG	Electromyogram
EOG	Electro-Oculogram
ESP	Extra-sensory perception
FOR	Field Of Regard
FOV	Field Of View
FPS	Frames Per Second
FSM	Finite State Machine
GIRLS	Girls Involved in Real Life Sharing
GL	Graphics Library/Language
GLSL	GL Shading Language
GLUT	OpenGL Utility Toolkit
GML	Game Maker Language
GPL	General Public License
GPRS	General Packet Radio Service
GPS	Global Positioning System
GSR	Galvanic Skin Response
GUI	Graphical User Interface
HCI	Human-Computer Interaction
HDR	High Dynamic Range Rendering
HDRI	High Dynamic Range Imaging
HLA	High Level Architecture
HMD	Head Mounted Display
HMM	Hidden Markov Model
HR	Heart Rate
HRV	Heart Rate Variability
I/O	Input / Output
IDE	Integrated Development Environment
IMS	Interactive Media Systems Group
IREX	Interactive Rehabilitation and Exercise System
JIT	Just In Time
LCD	Liquid Crystal Display
LGPL	Lesser General Public License
LOD	Level of Detail
MASM	Microsoft Macro Assembler
MIT	Massachusetts Institute of Technology
ML	Machine Language
MMOG	Massive Multi-player Online Game
MMORPG	Massive Multi-Player Online Role Playing Game
MUPE	Multi User Publishing Environment
NEAT	Non-Exercise Activity Thermogenesis
NLG	Natural Language Generation
NPC	Non-Playable Character
OOP	Object Oriented Programming
ORPG	Online Role Playing Game
OS	Operating System
PC	Personal Computer
PDA	Personal Digital Assistant
PI	Personal Investigator
PIP	Personal Input Pod
PGR	Psychogalvanic Reflex
PRT	Progressive Relaxation Training
RAVE	Reconfigurable Assisted Virtual Environment
RGB	Red Green Blue
RPG	Role Playing Game
RTOS	Real-Time Operating System
SCL	Skin Conductance Level
SCR	Skin Conductance Response
SDK	Software Development Kit
SDL	Specification and Design Language
SFT	Solution Focused Therapy
SRT	Self-Report measures of Tension
TCP	Transmission Control Protocol
TTS	Text to Speech
TUW	Technische Universität Wien
UA	Universally Accessible
UAG	Universally Accessible Games
UDP	User Datagram Protocol
UE	Unreal Engine
UK	United Kingdom
USB	Universal Serial Bus
VB	Visual Basic
VR	Virtual Reality
VRPN	Virtual Reality Peripheral Network
WYSIWYG	What You See Is What You Get
XML	eXtensible Markup Language