UNIVERSITY OF CASTILLA-LA MANCHA
DEPARTMENT OF COMPUTING SYSTEMS
A VIBROTACTILE PROTOTYPING TOOLKIT FOR
VIRTUAL REALITY AND VIDEOGAMES
PH.D. DISSERTATION
D. JONATAN MARTÍNEZ MUÑOZ
ADVISORS
DR. D. PASCUAL GONZÁLEZ LÓPEZ
DR. D. JOSÉ PASCUAL MOLINA MASSÓ
ALBACETE, DECEMBER 2013
Jonatan Martínez Muñoz: A Vibrotactile Prototyping Toolkit for Virtual Reality and Videogames, A dissertation for the degree of Doctor of Philosophy in Computer Science to be presented with due
permission of the Computing Systems Department, for public
examination and debate, © December 2013
Dedicated to my parents
and to my sister.
ACKNOWLEDGEMENTS
A long time has passed since I began the doctoral courses, from where one could barely see the beginning of the long road that lay ahead. However, after several years of work, the finish line has arrived, and the support of certain people has been essential in reaching it.
First of all I would like to thank my advisors, Pascual and José Pascual, my co-pilots on this journey and the ones who have put the most effort into making it possible. To Pascual, for giving me the opportunity to work on different projects and later to begin this work, for guiding me with his experience, and for setting short-term goals so that I would know where to take the next step. Thanks also to José Pascual, for helping me in my beginnings as a researcher, for passing on his passion for Virtual Reality, and for his sound advice. Thanks as well for accompanying me to the conference in Canada, which became one of the best experiences of the doctorate.
Many thanks to my labmates, who have saved me from losing my mind on several occasions using varying combinations of technical knowledge and good humor. Special thanks to Arturo, whose support with the statistical analyses has been essential; and to Diego, capable of turning any kind of technical problem into a challenge. I must also thank all my colleagues in the group for volunteering to take part in the experiments. Thanks, among others, to Antonio, Cristina, Elena, Félix, Gabi, Juanen, Miguel, Miguel Ángel and Pedro.
Lunches during these years have been not only a pit stop to refuel, but also the best way to rest the mind, even if it meant solving puzzles. My regular pit crew has included José Pascual, Arturo and Anto. Thank you!
I will always be enormously grateful to Diego and Alicia, for sharing their home and making me part of their family during my stay in Bristol, which was without a doubt a very enriching experience both personally and professionally. This would not have been possible without the help of Sri, who accepted me into the group and supervised my work, and who also contributed to the group's internal socialization through informal gatherings outside the university. There it was a pleasure to meet and work with people like Anne, Diana, Tom, Sue Ann, Matt, Ben and Abe.
My friends have also been a great support during these years. Many thanks especially to Marian, with whom I have shared both good and bad moments, and who has made my day-to-day life at the I3A much more bearable. Of course I cannot fail to thank my friends Anaré, Anto, Arturo, Cris, Edu and Juan for always being there and for their unconditional support.
Last, and very specially, I would like to thank my family: my sister, for always being willing to help in any way possible, and my parents, who with their upbringing and affection have made me who I am.
ABSTRACT
The emergence of new interaction devices, which go beyond the traditional desktop, is bringing the field of Virtual Reality into our homes, allowing us to interact with computers in a more natural fashion. However, despite today's high-definition 3D images and high-fidelity sound systems, the sense of touch is still largely absent, which complicates the interaction, lessens the realism, and in general reduces the user's immersion. Some professional equipment allows the simulation of certain aspects, such as the generation of forces, and other devices, like game controllers, usually have some kind of vibrotactile feedback. The problem is that these devices are oriented to very specific fields, and therefore they are not suitable for more general environments, where the large extent of the sense of touch across the body could be useful.
Thus, this work tries to bring the use of tactile feedback to places where it has little or no presence. To this end, a vibrotactile feedback platform is proposed that is low cost, so that it is accessible; versatile, in order to enable the creation of a wide variety of tactile devices; and that provides the highest possible performance. The platform is composed of an electronic controller, which is able to extract the best performance from vibrotactile actuators, and a tool that makes use of the hardware features, allowing the design of complex vibrotactile stimuli.
To assess the performance of this solution, several experiments with users have been carried out, covering some of the essential aspects of the sense of touch. First, an evaluation was conducted on the recognition of one-dimensional textures and two-dimensional shapes, comparing the platform against a commercial force feedback device and the use of a bare finger with paper patterns. Next, we focused on the identification of 3D geometric figures without visual guidance. To this end, an evaluation was performed with a state-of-the-art multi-point force feedback device, and it was compared with previous experiments carried out with single-point devices. In a new experiment, the evaluation was carried out using the created vibrotactile platform, designing a glove-like device with multiple actuators. Last, the platform has been used to discriminate object weights and sizes in a Virtual Environment, achieving a high success rate.
The experiments have allowed not only the development of algorithms and haptic rendering techniques optimized for this technology, but also the confirmation of the platform's potential to complement interaction through the haptic channel.
RESUMEN
The emergence of new interaction devices, which move away from the traditional desktop, is bringing the field of Virtual Reality into our homes, allowing us to interact with computers in a more natural way. However, despite today's 3D images and high-fidelity sound, the sense of touch remains the great absentee, which hinders interaction, detracts from realism, and in general reduces the user's immersion. Some professional devices can simulate certain aspects, such as force feedback, and other peripherals, such as game controllers, usually include some kind of vibrotactile feedback. The problem is that these devices are aimed at very specific fields, and they are not suitable for more general environments, where the large extent of the sense of touch across the whole body could be exploited.
Thus, this work attempts to bring tactile feedback to areas where it has little presence. To this end, a vibrotactile feedback platform has been proposed that is low cost, so that it is accessible; versatile, so that it can give rise to a wide variety of tactile devices; and that offers the highest possible performance. The platform consists of a scalable electronic controller capable of extracting the maximum performance from vibrotactile actuators, and a software tool that exploits the hardware's features and allows complex vibrotactile stimuli to be designed.
To evaluate the performance of this solution, several experiments with users have been carried out, trying to cover some of the essential aspects of the sense of touch. On the one hand, its performance in recognizing textures and 2D shapes was evaluated, comparing it with a commercial force feedback device and with direct finger exploration of paper patterns. Regarding the blind recognition of 3D geometric shapes, an experiment was first conducted with a commercial multi-point force feedback device, comparing the results with those obtained with single-point force feedback devices. In a new experiment, the evaluation was repeated using the vibrotactile platform created, designing a glove-like device with multiple actuators. Finally, its use for identifying object weights and sizes in a virtual environment was evaluated, achieving a high success rate.
The results of the experiments have made it possible not only to confirm the validity of the platform, even for carrying out complex blind shape identification tasks, but also to develop haptic rendering algorithms and techniques suited to this kind of technology.
PUBLICATIONS
Some ideas and figures have appeared previously in different
publications.
The hardware platform and the authoring tool described in chapters 3 and 4 have been submitted to the journal IJHCI [111] and accepted with major changes. The hardware device has also been integrated with several platforms of the research group, resulting in the co-authoring of multiple papers [43, 44, 103, 104, 105].
The experiment evaluating shape and texture recognition in
Chapter 6 has been published in [106, 109, 107, 108, 110].
The experiment on 3D object discrimination using vibrotactile feedback in Chapter 7 has been submitted to the journal CG&A [112] and accepted with major changes.
This thesis only exploits the parts of these papers that are directly attributable to the author. All other referenced material
has been given full acknowledgment in the text.
CONTENTS
i background
1 introduction
1.1 Motivation
1.2 Objectives
1.3 Thesis Structure
2 physiology of touch and tactile feedback technologies
2.1 Physiology of Touch
2.1.1 Mechanoreceptors
2.1.2 Distribution
2.1.3 Detection Threshold
2.1.4 Perceived Intensity and Annoyance Level
2.2 Tactile Feedback Technologies
2.2.1 Pneumatic Actuators
2.2.2 Piezoelectric Actuators
2.2.3 Thermal Actuators
2.2.4 Shape Memory Alloys (SMA)
2.2.5 Microelectromechanical Actuators (MEMS)
2.2.6 Electrotactile and Neuromuscular Interfaces
2.2.7 Electrostatic and Electrostrictive Materials
2.2.8 Electrorheological and Magnetorheological Fluids
2.2.9 Non-Contact Technologies
2.2.10 Electromagnetic Actuators
2.2.11 Discussion
2.3 Vibrotactile Stimulation with ERM
2.3.1 Posture Correction
2.3.2 Sensory Substitution
2.3.3 Alerts
2.3.4 Navigation and Spatial Perception
2.3.5 Sense of Presence
2.3.6 Videogames
2.3.7 Telemanipulation / Virtual Reality
2.3.8 Other
ii vibrotactile prototyping toolkit
3 vibrotactile display
3.1 Related Work . . . . . . . . . . . . . . . . . . . .
3.2 Actuators . . . . . . . . . . . . . . . . . . . . . .
3.2.1 Description . . . . . . . . . . . . . . . . .
3.2.2 Vibration strength . . . . . . . . . . . . .
3.2.3 Latency . . . . . . . . . . . . . . . . . . .
3.2.4 Pulse Overdrive . . . . . . . . . . . . . .
3.2.5 Active Braking . . . . . . . . . . . . . . .
3.3 Vibrotactile Controller . . . . . . . . . . . . . . .
3.3.1 Design goals . . . . . . . . . . . . . . . .
3.3.2 System Architecture . . . . . . . . . . . .
3.3.3 Electronic Architecture . . . . . . . . . .
3.3.4 Controller . . . . . . . . . . . . . . . . . .
3.3.5 Scalability . . . . . . . . . . . . . . . . . .
3.3.6 Voltage Considerations . . . . . . . . . .
3.3.7 Prototyping Features . . . . . . . . . . .
3.3.8 Haptic Driver . . . . . . . . . . . . . . . .
3.4 Performance Evaluation . . . . . . . . . . . . . .
3.5 Conclusions . . . . . . . . . . . . . . . . . . . . .
4 vibrotactile authoring tool
4.1 Related Work . . . . . . . . . . . . . . . . . . . .
4.2 Vitaki Authoring Tool . . . . . . . . . . . . . . .
4.2.1 Architecture . . . . . . . . . . . . . . . . .
4.2.2 Implementation . . . . . . . . . . . . . . .
4.3 Examples of use . . . . . . . . . . . . . . . . . .
4.3.1 Vibrotactile Morse Encoder . . . . . . . .
4.3.2 Object Fall Detection . . . . . . . . . . . .
4.4 Evaluation of the Platform . . . . . . . . . . . .
4.4.1 Quality assessment using Olsen’s criteria
4.4.2 Comparison with state-of-the-art tools .
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . .
iii case studies
5 experiments for the evaluation of the platform
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . .
5.2 System Architecture . . . . . . . . . . . . . . . . . .
5.3 Tracking System . . . . . . . . . . . . . . . . . . . .
5.3.1 2D Configuration . . . . . . . . . . . . . . . .
5.3.2 3D Configuration . . . . . . . . . . . . . . . .
5.4 Actuator arrangement . . . . . . . . . . . . . . . .
5.4.1 2D Configuration
5.4.2 3D Configuration
5.5 Collision Detection
5.5.1 2D Collisions
5.5.2 3D Collisions
5.6 Haptic rendering
5.6.1 Vibrotactile Rendering of Textures and 2D Shapes
5.6.2 Vibrotactile Rendering of 3D Objects
6 shape and texture recognition
6.1 Related Work
6.2 Description of the Haptic Feedback Methods
6.2.1 Stimuli
6.2.2 Force Feedback
6.2.3 Vibrotactile Feedback
6.2.4 Direct Stimulation
6.3 Experiment Design
6.4 Results And Discussion
6.4.1 Time And Success Rate
6.4.2 Learning Curves
6.4.3 Comparison with Kyung et al.
6.4.4 Questionnaires
6.4.5 Gender Differences
6.4.6 Identification Strategies
6.4.7 Error Analysis
6.5 Conclusion
7 identification of 3d virtual geometric forms
7.1 Related Work
7.2 Stimuli
7.3 Experiment 1: Force Feedback
7.3.1 Haptic Display
7.3.2 Haptic Rendering
7.3.3 Participants
7.3.4 Procedure
7.4 Experiment 2: Vibrotactile Feedback
7.4.1 Haptic Display
7.4.2 Participants
7.4.3 Procedure
7.5 Results
7.6 Conclusion
8 weight and size discrimination with vibrotactile feedback
8.1 Related Work
8.1.1 Weight
8.1.2 Size
8.2 Haptic Display
8.3 Haptic Rendering Methods
8.3.1 Weight
8.3.2 Size
8.4 Description of the Experiment
8.4.1 Stimuli
8.4.2 Participants
8.4.3 Method
8.5 Results
8.6 Conclusion
9 conclusions
9.1 Contributions
9.2 Future work
9.3 Scientific Contributions
9.3.1 Participation in R&D projects
9.3.2 Collaboration with other research centers
9.3.3 Publications related with the thesis
9.3.4 Other Publications
bibliography
LIST OF FIGURES
Figure 2.1  Mechanoreceptors of the skin
Figure 2.2  Distribution of the sensory receptors
Figure 2.3  Detection threshold of the skin
Figure 2.4  Teletact pneumatic actuators [158]
Figure 2.5  Piezoelectric devices
Figure 2.6  SMA-based haptic devices
Figure 2.7  Electrotactile devices
Figure 2.8  Electrostatic devices
Figure 2.9  Solenoid-based devices
Figure 2.10  Motor-based devices
Figure 2.11  Devices with ERM actuators
Figure 2.12  Vibrotactile devices for VR
Figure 2.13  Closed control loop
Figure 3.1  ERM actuators
Figure 3.2  Start-up curves of ERM actuators
Figure 3.3  Braking curves of an ERM
Figure 3.4  Architecture of the system
Figure 3.5  Hardware architecture
Figure 3.6  ERM controller circuit prototype
Figure 3.7  ERM controller box
Figure 3.8  Scalability of the controller
Figure 3.9  Prototyping features of Vitaki
Figure 3.10  Driver architecture
Figure 3.11  Function used to map waveforms to voltages
Figure 3.12  Acceleration response to a 300 ms pulse
Figure 4.1  Vitaki GUI architecture
Figure 4.2  Vitaki GUI
Figure 4.3  Vitaki GUI configuration dialog
Figure 4.4  Vitaki GUI waveform editor
Figure 4.5  Vitaki toolkit used to code Morse signals
Figure 4.6  ERM response to the Morse-coded letter “d”
Figure 4.7  Creation of an object fall stimulus
Figure 4.8  Application to grasp objects using Vitaki
Figure 5.1  Exploratory procedures
Figure 5.2  Architecture of the system
Figure 5.3  Location of the markers for 2D interaction
Figure 5.4  Location of the markers for 3D interaction
Figure 5.5  Configuration of the actuators for 3D
Figure 5.6  Wall region of the virtual objects
Figure 6.1  Shapes and textures used in the experiment
Figure 6.2  Force feedback device used
Figure 6.3  Direct stimulation test environment
Figure 6.4  Textures on a transparency paper
Figure 6.5  Average duration of each trial
Figure 6.6  Average percentage of correct answers
Figure 6.7  Learning curve along trials
Figure 6.8  Learning curves along time
Figure 6.9  Identification strategies used by the users
Figure 6.10  Error in identification strategy
Figure 6.11  Correct answers per each group of textures
Figure 7.1  Geometric forms used in the experiment
Figure 7.2  Force feedback experiment setup
Figure 7.3  Vibrotactile feedback experiment setup
Figure 7.4  Results of the Cybergrasp experiment
Figure 7.5  Results of the 3D shape experiment
Figure 7.6  Results obtained by Jansson
Figure 8.1  Sizes and weight average results
LIST OF TABLES
Table 2.1  Types of mechanoreceptors
Table 2.2  Commercial actuator technologies
Table 4.1  Tool comparison table
Table 6.1  Results of the experiment for each gender
ACRONYMS
API  Application Programming Interface
DC  Direct Current
DOF  Degrees Of Freedom
ERM  Eccentric Rotating Mass
EMF  ElectroMotive Force
GUI  Graphical User Interface
LRA  Linear Resonant Actuator
PH  Pseudo-Haptic
PWM  Pulse Width Modulation
SDK  Software Development Kit
STU  Situations, Tasks, and Users
VR  Virtual Reality
VE  Virtual Environment
Part I
BACKGROUND
1 INTRODUCTION
This chapter introduces the main motivation of this thesis, identifying the primary objective pursued and its decomposition into partial sub-objectives. Next, the organization of the document is described, presenting its different parts and chapters.
1.1 Motivation
Haptics, which in general refers to the sense of touch, plays an essential role not only in our perceptual construction of the spatial layout of the environment, but also in our ability to manipulate objects with our hands [138]. As Lederman and Klatzky stated [95], vision and audition are recognized for providing highly precise spatial and temporal information, respectively, whereas the haptic system is especially effective at processing the material characteristics of surfaces and objects. Moreover, this sense reinforces other channels and, as Sallnäs et al. [146] indicated, increases perceived virtual presence.
The change from traditional user interfaces, such as the mouse and keyboard, to more modern ones, like the touch screens present in mobile phones and tablets, has resulted in the loss of the perceptual cues that allowed their efficient use without visual guidance. On a keyboard, not only are the shape and layout of the keys easily perceived by the user's sense of touch, but so is the confirmation of their activation. On a touch screen, however, the visual channel needs to actively support the user interaction. The industry has partially compensated for this absence with vibrotactile feedback, which is present in almost every mobile device. Game controllers also benefit from this feedback, although with the different objective of enriching the user experience.
Recently, interaction has jumped beyond the screen, with depth sensors like the Kinect1, the Xtion2 and the Leap Motion3. In this
1 http://www.xbox.com/en-US/Kinect
2 http://www.asus.com/Multimedia/Xtion_PRO
3 https://www.leapmotion.com
case, the lack of haptic feedback is even more noticeable, since
the gestures occur in the air. This problem is not new. In fact, it
has been present in Virtual Reality (VR) for many years, where
the challenge is to provide realistic sensations to the users. Many
VR environments have stunning visual displays and high-fidelity
sound equipment, whereas haptic technology is clearly behind.
However, being able to touch, feel, and manipulate objects, in
addition to seeing and hearing them, is essential to fulfil the
objective of VR.
Developing haptic interaction schemes inherently requires a physical device to transmit this kind of information to the user's senses. Many researchers have studied different ways to provide realistic sensations, and several companies have created complex devices, including the Phantom and the CyberGrasp. However, the problem with these systems is twofold. On the one hand, they have serious limitations, such as a reduced workspace or a high cost, making them suitable only for a narrow field of application. On the other hand, commercial vibrotactile devices like the CyberTouch have a fixed distribution of actuators (in this case, on the top of the fingers of a glove) and cannot be adapted to suit other applications or interaction metaphors. Moreover, solutions from the game industry like the Nintendo Wiimote or the Logitech Rumblepad provide very limited haptic sensations, and they are not adequate for general haptic feedback.
This lack of general-purpose tactile solutions has led many haptic researchers to build their own devices [53, 101], which is time consuming and far from optimal. As the sense of touch is distributed all over the skin, these devices are not restricted to a single part of the body. It is possible to find haptic devices for the hands [45, 42, 47, 119, 129], forearm [152, 149, 134, 12], shoulders [168], torso [99], waist [169], feet [50], and even integrated into objects like seats [118, 57].
Therefore, there is a need for a system that allows easy connection and placement of tactile actuators to form an adaptable haptic display, so that it can be used in different scenarios. Such a system would be useful not only for any general-purpose Virtual Environment (VE), but also for prototyping specialized haptic devices.
1.2 Objectives
This thesis aims to improve haptic feedback in VEs, including videogames and interaction beyond the screen. To this end, several objectives are proposed.
• Study the psychophysical aspects of the sense of touch.
• Analyse the different technologies available to provide haptic
feedback, as well as some of the most representative devices
built with them.
• Design and build a vibrotactile prototyping platform that allows easy development of different vibrotactile-enabled devices. This platform should be composed of an electronic controller and multiple actuators.
• Design and implement a tactile authoring tool. This tool should allow the creation of tactile patterns associated with one or more actuators.
• Develop different experiments that assess the capabilities
of the developed haptic platform to simulate different aspects of the sense of touch.
1.3 Thesis Structure
This thesis is organized in three parts. The first one comprises chapters 1 and 2, and introduces the background of the problem. The second part is formed by chapters 3 and 4, and details the proposed vibrotactile toolkit. The third part, composed of
chapters 5 to 8, describes a series of experiments to test different
simulated aspects of the sense of touch. A brief description of
each chapter follows:
• Chapter 1, Introduction, presents the problem, defines the
objectives and describes the structure of the thesis.
• Chapter 2, Physiology of Touch and Tactile Feedback Technologies, studies the physiological facts that are relevant to understanding the perception of the sense of touch and its limitations. A review of the available actuation technologies is given, as well as the main haptic devices built with each one. Finally, the technology chosen for this research is detailed and justified.
• Chapter 3, Vibrotactile Display, describes the electronic vibrotactile controller and its prototyping features. The actuators are described, and some features to improve their
response are detailed and evaluated.
• Chapter 4, Vibrotactile Authoring Tool, proposes a tactile authoring tool, which is used to design and test vibrotactile
patterns on multiple actuators. Two examples of use of the
tool are reported, and finally an evaluation assessment is
considered to discuss its utility.
• Chapter 5, Experiments for the Evaluation of the Platform, justifies the experiments which are described in the following
chapters to evaluate the platform, and details some of their
common characteristics.
• Chapter 6, Shape and Texture Recognition, reports an experiment that uses the developed platform to discriminate
shapes and textures in 2D, and compares it with the use
of force feedback and real tactile feedback.
• Chapter 7, Identification of 3D Virtual Geometric Forms, evaluates the identification of 3D shapes with two methods. The first one uses a multi-point force feedback system, while the second one uses the vibrotactile platform to create a glove-like device. Results are compared with previous experiments carried out with single-point force feedback equipment.
• Chapter 8, Weight and Size Discrimination with Vibrotactile
Feedback, reports the use of vibrotactile technology to transmit weight and size information, evaluating the proposed
techniques in an experiment with users.
• Chapter 9, Conclusions, summarises the work presented in
this document, discussing the main contributions. In addition, suggestions for future work are proposed, and finally
the scientific contributions are enumerated.
2 PHYSIOLOGY OF TOUCH AND TACTILE FEEDBACK TECHNOLOGIES
This chapter begins with an overview of the physiology of touch and some of the parameters that have been studied over time to design haptic devices. It then reviews the different technologies used by other researchers and justifies the choice of the selected actuators. The last section focuses on the work related to the technology that will be used.
2.1 Physiology of Touch
The sense of touch is the one that covers the largest area of our body, and it is formed by two main sensory systems: the kinesthetic system, which perceives the sensations produced in the muscles, tendons and joints, such as those caused by movement; and the cutaneous or tactile system, which responds to stimuli on the surface of the skin. These stimuli can be thermal, electrical, chemical, painful, or deformations of the skin, on which we will focus next.
2.1.1 Mechanoreceptors
A mechanoreceptor is a type of sensory receptor that responds to mechanical pressure or distortion. When the skin is subjected to pressure or vibration, the cutaneous surface is distorted, generating waves that travel through the skin until they reach the membranes of the mechanoreceptors. The membrane of these sensors is altered as well, causing ion channels to open and, in turn, a change in the electrical potential that is transmitted to the sensory cortex, producing a tactile sensation that depends on the type of receptor activated.
In humans, the glabrous (hairless) skin of the hand presents a high density of these elements, with approximately 17,000 nerve endings concentrated mainly in the fingertips [172].
Normalmente se dividen en cuatro tipos principales [70], cuya
representación se puede ver en la Figura 2.1.
ruffini endings. They are distributed in the deep dermis and have low spatial resolution. They are sensitive to sustained pressure and to lateral stretching of the skin. They are mainly involved in the perception of continuous stimuli, in detecting the direction of motion of lateral skin stimuli, and in the proprioception of finger position.
merkel discs. They are found in the epidermis at a high spatial density. They are sensitive to sustained pressure, to very low frequencies (below 5 Hz), and to spatial deformation. Their main function is the detection of low-intensity frequencies, coarse texture perception, and shape detection.
meissner corpuscles. They offer high spatial resolution and lie just below the epidermis, in a very superficial layer. They are sensitive to temporal changes in skin deformation (between 5 and 40 Hz) and to spatial deformation. Their function is the detection of low-frequency vibrations.
pacinian corpuscles. They are located in the deep dermis and, owing to the large size of their receptive field, provide low spatial resolution. They are sensitive to changes in skin deformation at high frequencies (40 to 500 Hz), so in addition to detecting vibration they are used for the fine perception of textures.
Table 2.1 summarizes these characteristics. Some authors name these receptors differently, based on their adaptation speed; thus we find Slow Adapting (SAI and SAII) and Fast Adapting (FAI and FAII) receptors, whose correspondence is also given in Table 2.1. It is important to note that a sensation of pressure may result from the simultaneous activation of several specialized mechanoreceptors rather than just one. All of them are therefore necessary for the stable and precise manipulation and grasping of objects.
Figure 2.1: Cross-section of the skin, showing its different layers and mechanoreceptors.
2.1.2
Distribution
As described above, the receptors of tactile stimuli are not encapsulated in a single organ but are distributed throughout the body. This concentration, however, is not uniform. Certain body parts, such as the hands and in particular the fingertips, have a greater sensitivity to external stimuli, as can be seen in Figure 2.2. This uneven distribution indicates the most suitable places for conveying tactile information.
2.1.3
Detection Threshold
The detection threshold is the minimum intensity a signal must have to be detectable by our sense of touch. For mechanical stimuli, the minimum detectable intensity is measured as the amplitude of movement at a given frequency. The lowest threshold occurs at frequencies around 250 Hz (threshold of ~0.0001 mm), where the mechanoreceptors reach their peak sensitivity [176]. Figure 2.3 shows the detection threshold curve for the skin of the hand.
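In a prototyping toolkit, a curve of this kind can be queried programmatically to check whether a planned stimulus will be perceptible. A minimal sketch follows, using log-log interpolation between a handful of illustrative frequency-threshold pairs; the tabulated values are rough placeholders inspired by the U-shape of the curve (minimum near 250 Hz), not measured data from this chapter.

```python
import math

# Illustrative detection thresholds (frequency in Hz -> peak amplitude in mm).
# Placeholder values sketching the U-shaped curve; not measured data.
THRESHOLD = [(10, 0.1), (50, 0.01), (100, 0.001), (250, 0.0001), (500, 0.001)]

def detection_threshold(freq_hz):
    """Log-log linear interpolation between the tabulated points."""
    pts = THRESHOLD
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, a0), (f1, a1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (math.log(freq_hz) - math.log(f0)) / (math.log(f1) - math.log(f0))
            return math.exp(math.log(a0) + t * (math.log(a1) - math.log(a0)))

def is_perceptible(freq_hz, amplitude_mm):
    """True if the stimulus amplitude is at or above the detection threshold."""
    return amplitude_mm >= detection_threshold(freq_hz)
```

For example, a 0.001 mm stimulus at 250 Hz is well above the ~0.0001 mm threshold and would be reported as perceptible.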
Nerve ending      Ruffini end.   Merkel discs  Meissner corp.  Pacinian corp.
Type              SAII           SAI           FAI             FAII
Adaptation        Slow           Slow          Fast            Fast
Field size        Large          Small         Small           Large
Position          Subcutaneous   Superficial   Superficial     Subcutaneous
Frequency         Static         0-100 Hz      1-300 Hz        10-1000 Hz
Peak sensitivity  0.5 Hz         5 Hz          50 Hz           200 Hz
Resolution        > 7 mm         0.5 mm        3 mm            > 10 mm
Table 2.1: Four-channel model of mechanoreception. Table adapted from [70].
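When designing vibrotactile stimuli, the four-channel model can be encoded directly as data to estimate which receptor population a given drive frequency will mainly target. A minimal sketch follows; the numbers are taken from Table 2.1, but the "dominant channel" heuristic (nearest peak sensitivity on a log scale) is our own simplification, not part of the model.

```python
import math

# Four-channel mechanoreception model, values adapted from Table 2.1.
CHANNELS = {
    "SAII": {"ending": "Ruffini",  "peak_hz": 0.5,   "resolution_mm": 7.0},
    "SAI":  {"ending": "Merkel",   "peak_hz": 5.0,   "resolution_mm": 0.5},
    "FAI":  {"ending": "Meissner", "peak_hz": 50.0,  "resolution_mm": 3.0},
    "FAII": {"ending": "Pacini",   "peak_hz": 200.0, "resolution_mm": 10.0},
}

def dominant_channel(freq_hz):
    """Channel whose peak sensitivity is closest to freq_hz on a log scale."""
    return min(
        CHANNELS,
        key=lambda c: abs(math.log(freq_hz) - math.log(CHANNELS[c]["peak_hz"])),
    )
```

Thus a 250 Hz vibration, typical of vibrotactile actuators, maps to the FAII (Pacinian) channel, consistent with the peak sensitivity discussed in Section 2.1.3.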
2.1.4
Perceived Intensity and Annoyance Level
Several researchers in the field of psychophysics have studied the subjective perception of vibration intensity [157, 177], showing that the relationship between vibration amplitude and perceived magnitude follows a power function. They also found that, for the same vibration level, women perceive it as subjectively more intense than men, and that as a person ages they gradually lose sensitivity. Knowing the subjective perception function of a given actuator is useful for delivering the desired intensity values to the user [120, 142].
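This power-law relationship is easy to invert in software: given a target perceived magnitude, one can compute the drive amplitude to command. A minimal sketch follows, where the scale constant and exponent are hypothetical placeholders that would have to be fitted psychophysically for each actuator, as the text suggests.

```python
# Power-law model of perceived intensity: psi = K * amplitude ** BETA.
# K and BETA are placeholder values, to be fitted per actuator.
K = 1.0
BETA = 0.8

def perceived_magnitude(amplitude):
    """Perceived magnitude for a given vibration amplitude."""
    return K * amplitude ** BETA

def amplitude_for(target_psi):
    """Invert the power law: amplitude needed for a target magnitude."""
    return (target_psi / K) ** (1.0 / BETA)
```

A toolkit would use `amplitude_for` when the application requests a perceptual level, so that doubling the requested level actually doubles the sensation rather than the raw drive signal.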
Finally, it is important to keep the intensity at an adequate level to avoid discomfort for the user, or problems such as the well-known Raynaud's syndrome, or white finger [14]. This occurs in extreme cases where a worker is exposed to strong machinery-induced vibrations for a prolonged time, which damage the circulatory system.
The magnitude of discomfort depends on many factors, such as frequency, vibration direction, or exposure time. Griffin [49] compiles several studies that establish semantic scales to define the vibration intensity perceived by users. These studies show that, in general, vibrations above 0.7 m·s⁻² are considered unpleasant.
Figure 2.2: Distribution of the sensory receptors in the hand.
2.2
tactile feedback technologies
A tactile stimulus is delivered through tactile displays, that is, devices that, in contact with the skin, can generate sensations of varying complexity. The most common displays consist of a single actuator producing one tactile sensation, such as pressure. The most complex, by contrast, are made up of a matrix of movable pins forming a surface; the pins can move independently so that they sit at different heights, representing textures or shapes. The main characteristics that distinguish tactile displays from one another are their spatial resolution, that is, how many individual elements make up the display per unit area; their temporal resolution, response time, or frequency response; and the force or intensity they can exert.
The most decisive factor in the variety of sensations a display can convey is the technology it uses. A wide variety of technologies have been employed, including pneumatic, electromagnetic, electrostatic, and piezoelectric actuation, shape memory alloys, and less common approaches such as electrotactile stimulation, functional neuromuscular stimulation (FNS), and electrorheological fluids. Each of them is detailed below, together with examples proposed in the literature.
Figure 2.3: Detection threshold curve of the skin of the hand. Figure adapted from [175].
2.2.1
Pneumatic Actuators
Pneumatic actuators use compressed air, either expelled directly onto the skin through micro-injectors or used to fill small pouches that press against the skin as they inflate. Pneumatic tactile displays can be thin, light and flexible, since the device that generates the air pressure can be mounted far from the actuator itself, although this makes them less portable. Traditionally such a system has consisted of a compressor, an accumulator, and solenoid valves that route the air through ducts to the actuators. An alternative is the use of pistons driven by electromagnetic solenoids, and more recently research has turned to metal hydride (MH) alloys. These alloys absorb hydrogen when cooled and release it when heated; coupling them to a Peltier device, which can change its temperature by means of an electric current, yields a portable pressure regulator for pneumatic devices [148].
This type of technology can create very intense tactile sensations. However, the main drawback of pneumatic displays lies in their operating frequency, which is around 10 Hz due to the compressibility of air. Some of the devices built with this technology are described below.
The research group led by Shuichi Ino [148] has carried out various studies on tactile interfaces based on pneumatic actuators. In particular, they have investigated the displacement and pressure sensations created by pneumatic actuators capable of moving about 3 mm laterally and exerting pressures of 5.9 N.
Teletact, created by the UK's National Advanced Robotics Research Centre (ARRC) and Airmuscle Limited [158], is a glove based on pneumatic technology that includes a pump, a tank and pressure control channels to inflate small air pouches that can be mounted on data gloves. The first prototype used 20 pneumatic pouches that could be inflated to 13 psi. The second revision of the glove, called Teletact II and shown in Figure 2.4a, had 30 air pouches with two different pressure ranges: one pouch, placed on the palm, withstands a maximum pressure of 30 psi, compared to 15 psi for the rest.
Using the same air-pouch technology, the ARRC team also created the Teletact Commander [158], a handheld multifunction controller with three pneumatic actuators embedded in its surface, which can be driven by a compressor or by a solenoid-actuated piston. This device is shown in Figure 2.4b.
Finally, King et al. [81] integrated multiple pneumatic elements into a 3x2 display. The device measures 10x18 mm, and each element is a small 3 mm silicone balloon. It has 4 bits of resolution, a maximum force per element of 0.34 N, and a frequency response of 7 Hz.
2.2.2
Piezoelectric Actuators
(a) Teletact II.
(b) Teletact Commander.
Figure 2.4: Teletact pneumatic actuators [158].
Piezoelectricity is a phenomenon exhibited by certain crystals, which deform when subjected to an electric field. Piezoelectric actuators are easy to find commercially, and are small, flexible and thin. They can generate large forces over a fairly wide frequency range, but the displacement they produce is small, around 0.2 %, so they are usually mounted with lever mechanisms, in multiple layers, or in bimorph form. A bimorph actuator consists of two long piezoelectric elements bonded together: when a voltage is applied, one contracts and the other stretches, making the bimorph bend. A general disadvantage of piezoelectric actuators is that mounting them requires a very elaborate setup as well as high voltages, which can also raise safety concerns [30].
Numerous tactile displays based on piezoelectric elements have been built. Debus et al. [27] presented the design and construction of a multichannel vibrotactile display consisting of a handle with four piezoelectric actuators to deliver stimuli in four directions. Summers et al. [160] built a device formed by a matrix of 100x100 piezoceramic tactile elements with a spatial resolution of 1 mm to study the precision of the perceived sensations. Their study found that spatial precision is higher at a frequency of 320 Hz than at 40 Hz.
STRESS, created by Pasquero and Hayward [132], is based on bimorph piezoelectric elements and can play back sequences of tactile images at about 700 Hz. The display uses a set of 100 tactors that move laterally across the surface of the skin. The density is one contactor per square millimeter, giving it high temporal and spatial resolution. A new version, presented in Wang and Hayward [181], has 6x10 actuators, a spatial resolution of 1.8x1.2 mm, a maximum deflection of 0.1 mm, and a frequency of about 250 Hz (Figure 2.5a).
Kyung et al. [91] designed and built a tactile display, also based on bimorph piezoelectric actuators, attached to a 2D mouse-shaped device that additionally provides kinesthetic force and a sensation of skin stretch. The tactile display consists of 8 elements, each moving 5 pins up to 1 mm, with a force of 1 N and a maximum frequency of 1 kHz (Figure 2.5d). The same group also created a haptic display with 30 actuators of the same type driving a 5x6 pin matrix [92]. Its spatial resolution is 1.8 mm, with a response rate of 500 Hz and a displacement of 0.7 mm.
Zimmerman and his group added piezoceramic actuators to a VPL DataGlove [189]. They used frequency modulation to vary the intensity of the tactile sensation and thereby minimize the feeling of numbness.
The Optacon [11] was a commercial device consisting of a 24x6 matrix of pins connected to bimorph piezoelectric actuators and a small handheld camera, which allowed visually impaired people to read printed text. The pins vibrate at a fixed frequency of 250 Hz (Figure 2.5c).
Recently, the company Mide [4] put on sale a piezoelectric kit composed of a bimorph actuator (Figure 2.5b) and a two-channel controller for 300 dollars.
2.2.3
Thermal Actuators
Thermal actuators are based on materials that can change their temperature in the presence of electric currents, and thereby convey tactile sensations. Their drawback is a slow response; moreover, they can be dangerous for the user if the temperature sensor or the control loop fails.
One commercially available device was the Displaced Temperature Sensing System, developed by C&M Research [17], which provides thermal stimulation to the fingers. Each actuator combines a thermoelectric heat pump, a temperature sensor and a heat sink, so that sensor feedback can be used to regulate the surface temperature to the desired value, which can range between 10 °C and 45 °C with a resolution of 0.1 °C. The X/10 model supports up to eight channels, and the actuators come as Velcro strips or as thimbles.
(a) Stress V2 [132].
(b) SP-21b bimorph actuator, from the company Mide [4].
(c) Optacon [11].
(d) Integrated Tactile Display [91].
Figure 2.5: Haptic devices based on piezoelectricity.
Ino et al. [61] developed a tactile interface composed of a Peltier module, which is the thermal actuator proper, and a thermocouple acting as the temperature sensor. With this device they aim to let the user distinguish, in a virtual environment, materials with different degrees of thermal conductivity, such as wood and aluminum.
2.2.4
Shape Memory Alloys (SMA)
Shape Memory Alloys (SMA) change shape at low temperatures and recover their initial state when heated. This is achieved by passing a large current through wires of the material, so they have high power consumption. They can exert large forces with a displacement of between 2 and 4.5 %. Because of the thermal capacity of the material, temperature changes take time, and SMA-based actuators do not usually exceed 10 Hz. Care must also be taken to insulate the skin properly to avoid burn injuries.
One of the main applications of this technology is the creation of pin-matrix displays. Johnson [69] proposed a 5x6 pin matrix with 3 mm spacing, a response time of 100 ms, 1 second of recovery (cooling of the alloy), and a maximum force of 0.196 N.
The display of Wellman et al. [184] uses liquid cooling to achieve shorter recovery times, reaching frequencies of up to 40 Hz. It consists of 10 pins spaced 2 mm apart and pushed by an SMA wire in a V-shaped configuration that exerts a maximum force of 1.5 N (Figure 2.6a). Other similar devices can be found in [51, 166, 87].
Scheibe et al. [150] presented a device formed by SMA wires of about 50 mm fitted around thimbles (Figure 2.6c), so that they contract by around 1.5-2.5 mm, exerting pressure on the fingertip. The wire used is 80 µm thick, which gives a response time of under 50 ms.
The Tactool System is a commercial product by Xtensory whose tactile displays are mounted on the fingertips with Velcro strips (Figure 2.6b). The tactors are based on SMA-actuated pins and can be used in impulsive mode (30 g) or vibration mode (20 Hz). Although not a widespread technology, it can still be found commercially today: the company Mide [4], for example, offers a kit of SMA sheets and wire for about 500 dollars.
2.2.5
Microelectromechanical Actuators (MEMS)
Microelectromechanical Systems (MEMS) are microscopic mechanical systems coupled to electric or electronic circuits [102]. MEMS have been widely used as accelerometers, gyroscopes, high-quality oscillators, microphones and amplifiers, among other things.
Some authors, such as Enikov et al. [34], have applied this technology to build tactile displays. In their case the device consists of a 4x5 matrix of actuators that vibrate by means of piezoelectric elements. The matrix uses MEMS technology to create micro-brakes, and each individual actuator is switched on and off by pairs of thermoelectric actuators. The display of Streque et al. [159] is also an actuator matrix, but driven by miniature electromagnetic coils and neodymium magnets, forming a 4x4 array with a resolution of 2 mm.
(a) V-shaped configuration of Wellman's display [184].
(b) Tactool System, by Xtensory.
(c) SMA thimbles of Scheibe et al. [150].
Figure 2.6: Haptic devices based on shape memory alloys.
2.2.6
Electrotactile and Neuromuscular Interfaces
Electrotactile interfaces use electrodes to pass a current through the skin. Neuromuscular interfaces place electrodes directly under the skin to produce muscle stimuli. They have not been used on a large scale because of their invasive and hazardous nature, since the pain threshold is very close for the user. Other drawbacks are the low spatial resolution that can be achieved and the instability of the relationship between electric current and perceived sensation.
In everyday life we can handle sharp objects such as blades or needles, even though they can cause pain, because the pain depends on the force with which they are touched: the tactile sensation is very weak if the contact force is also weak. This is not the case with electrotactile actuators, where the sensation of touching an electrode is independent of the contact force, which can cause some fear and rejection. To avoid this effect, Kajimoto et al. [73] coupled electrotactile interfaces to a pressure sensor, so that the intensity of the sensation was proportional to the force exerted. These researchers address the resolution problem by using anodic rather than cathodic current, so that the vertically oriented nerves are stimulated and the sensation is more localized, creating in the user the perception of a vibration. Their prototype consists of a 2x5 electrode matrix with a spatial resolution of 2.54 mm (Figure 2.7a). The pulse duration is set between 0 and 0.5 ms, with an amplitude of between 0 and 10 mA.
SmartTouch, also created by Kajimoto et al. [74], uses an optical sensor and an actuator formed by a 4x4 matrix of 1 mm diameter electrodes (Figure 2.7b). Its purpose is to capture a visual image and render it through electrical stimuli of 0.2 ms, 100 to 300 volts and 1 to 3 mA. In this way the user can feel, for example, Braille signs printed on paper.
(a) Electrode matrix of Kajimoto et al. [73].
(b) SmartTouch, by Kajimoto et al. [74].
Figure 2.7: Haptic devices based on electrotactile actuators.
2.2.7
Electrostatic and Electrostrictive Materials
Electrostatic actuators exploit the Coulomb forces generated by the electric field between two charged surfaces. They require very high voltages, and even so the force and displacement are very small.
Jungmann and Schlaak [72] created a miniaturized actuator from multiple layers of elastic dielectrics that compress when a voltage of between 100 and 1000 V is applied (Figure 2.8a). The device is flexible, light and low-cost, which makes it suitable for use, for example, in data gloves or gloves with force feedback.
Tang and Beebe [165] designed an electrostatic tactile display consisting of a set of metal electrodes covered by a thin insulating layer. As a voltage is applied between a metal electrode and a human finger touching the insulating layer, an electrostatic force attracts the skin of the finger. As the finger slides along the surface of the display, the frictional forces vary with the electric potential of the electrodes. This principle allows very thin tactile displays with a high actuator density. The problems of this system are that, on the one hand, the forces are extremely small and, on the other, relative motion between the skin and the display is required.
Electrostrictive materials are dielectrics that change shape under the effect of an electric field. One example is the so-called electroactive polymers used by Koo et al. [88] to create a tactile display whose membranes buckle when a high voltage is applied (Figure 2.8b). Their display is a 4x5 matrix with a maximum displacement of 0.9 mm at 3 kV, a force of 14 mN, and a weight of 2 g.
2.2.8
Electrorheological and Magnetorheological Fluids
(a) Structure of the electrostatic stimulator with an elastic dielectric [72].
(b) Wearable tactile display made of electroactive polymers [88].
Figure 2.8: Haptic devices based on electrostatic and electrostrictive materials.
Electrorheological fluids (ERF) and magnetorheological fluids (MRF) can change their viscosity markedly in the presence of an electric field (ERF) or a magnetic field (MRF). This change, which occurs with a response time of milliseconds, is reversible and can be applied, for example, to displays on which the user examines a tactile graphic by moving a finger across its surface. Since these devices cannot exert active forces, they are not suitable for tactile feedback displays in telemanipulation systems; they also require high voltage and cannot represent very rigid surfaces or well-defined edges.
Kenaley and Cutkosky [77] were the first to create an ERF-based tactile sensor, shaped as a thimble for robotic fingers. They also proposed an actuator based on the same technology, composed of 4x3 cells [178]. Monkman [117] proposed applying ERF to tactile displays. Taylor et al. [167] presented improvements over earlier ERF displays by inserting a layer of cloth within the fluid layer, thereby doubling the reactive forces with less current and improving safety, since it insulates the electrodes. Their display is composed of 5x5 tactile units, 11 mm on a side and separated by 2 mm. The applied voltage is quite high, about 3 kV. Liu et al. [100] used magnetorheological fluid instead to create a single-cell display, testing two different types of magnets and concluding that MRFs are also suitable for this kind of passive display.
Klein et al. [85] experimented with this type of fluid to build a prototype 3D tactile display made of microcells, for application in medicine. Voyles et al. [179] created both a sensor and a thimble-shaped actuator using ERF to observe human tasks that require contact. Finally, the MEMICA system [8], developed at Rutgers University, uses gloves with ERF-based haptic feedback. Kim et al. [79] used magnetorheological fluid to create a hand manipulator based on passive actuators.
2.2.9
Non-Contact Technologies
The advent of gesture-based user interfaces has prompted the appearance of technologies capable of stimulating the sense of touch through the air. Aireal [154], for example, is based on generating air vortices from a movable nozzle that can aim them at the user. Combined with depth cameras to locate the user's hand in space, it can create small impulses on the skin. The device has a range of about one meter and a latency of around 140 ms.
UltraHaptics [19], on the other hand, uses a 16x20 array of ultrasound transducers to create multiple feedback points in mid-air. It relies on the principle of acoustic radiation force, which arises when a set of actuators emit a frequency in phase. The system can produce tactile points in space, perceived by the user as a slight tingling on the surface of the skin.
This kind of technology, although potentially interesting in certain very specific settings, is quite limited. In the first case, the air vortices are discrete impulses launched at the user which, besides having high latency, are very diffuse stimuli that affect a large area of skin. In the second case, the workspace is very restricted, since it is determined by the size of the actuator array; moreover, the sensation is very subtle and could easily go unnoticed by the user.
2.2.10
Electromagnetic Actuators
2.2.10.1 Solenoids and moving coils
A solenoid consists of a copper coil that creates an electromagnetic field when an electric current passes through it. This phenomenon is exploited to attract a ferromagnetic core and thus produce a mechanical movement capable of exerting pressure or vibration. Moving coils follow a similar principle, but they have a permanent magnet, and it is the coil that moves under the electromagnetic effect. Unlike solenoids, they have a linear response and are bidirectional (they can exert force in both directions), whereas solenoids need a spring to return to the initial position. Other variants exist in which the coil is fixed and the magnet moves, with a similar response.
These devices are cheap and easy to control, but they tend to be relatively bulky. A major limitation of small solenoid-based actuators is that the force they can exert is very limited, so the most practical use is to convey vibration sensations.
This technology is present in many prototypes. One of them is the BubbleWrap haptic display [9], shown in Figure 2.9a. It consists of a set of cells, each composed of a flat spiral copper solenoid attached to a flat permanent magnet. The cells can individually contract and expand by about 10 mm to create both vibratory haptic feedback and passive haptic feedback, adopting different shapes and degrees of firmness. Kontarinis et al. [87] created a tactile display by modifying small 0.2 W loudspeakers attached to small two-finger manipulators. The resulting device has a range of motion of 3 mm and a maximum force of 0.25 N at 250 Hz. ComTouch [20] is a communication device that converts, in real time, the hand pressure exerted by one user into the vibration intensity received by another, remote user, thus enriching voice communication by complementing it with a tactile channel. It uses small loudspeakers (V1220, by AudioLogic Engineering).
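A mapping of this kind, from sensed pressure on one end to vibration intensity on the other, can be sketched in a few lines. The clamped linear mapping and saturation point below are our own illustrative choices, not ComTouch's actual transfer function.

```python
def pressure_to_vibration(pressure_n, p_max=10.0):
    """Map a hand-pressure reading (in newtons) to a normalized
    vibration intensity in [0, 1]. p_max is an illustrative
    saturation point, not a value from the ComTouch device."""
    return max(0.0, min(pressure_n, p_max)) / p_max
```

In a real device the output would then scale the drive signal of the remote user's actuator, ideally through a perceptual mapping such as the power-law inversion of Section 2.1.4 rather than a raw linear one.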
Some authors have also built solenoid-based matrices. Fukuda et al. [40] proposed a matrix of actuators composed of a micro coil and a permanent magnet that moves inside it. Each actuator is 2 mm in diameter and exerts a force of 7.6 mN/mm², which they use to produce vibrotactile stretch sensations on the skin in order to study tactile parameters. Petri and McMath [133] describe another such tactile device, created for telemanipulation and formed by 8x8 pins over a surface of 6.5 cm². Talbi et al. [163] used small solenoids to create a 4x4 matrix of vibrotactile pins with an oscillation frequency of 250 Hz, an amplitude of 0.2 mm, and a force of 1.2 mN; the actuator layout is shown in Figure 2.9d. A similar device, called VITAL and created by Benali-Khoudja et al. [10], is also based on microsolenoids and has a resolution of 4 mm, a force of 13 mN, a maximum pin deflection of 0.1 mm, and frequencies of up to 800 Hz. Tan and Pentland [164] created a tactile display formed by a 3x3 matrix of vibrotactile stimulators placed on the forearm and separated by 8 cm to present directional patterns.
Commercial actuators are also available, such as the Engineering Acoustics range [1], priced at about 200 dollars and comprising the C2, C3, CLF and EMS tactors. Two of these tactors are shown in Figure 2.9b. The C2 tactor is quite common in research and has been used, for example, by Gurari et al. [50] to simulate the hardness of a material in a virtual environment and compare the effect of stimulating the foot, the arm, or the fingertips. The company EXOS marketed the TouchMaster, which consists of moving-coil actuators for the tips of four fingers, fastened with Velcro strips (Figure 2.9c). The feedback obtained is vibrotactile, with a fixed response frequency around 210-240 Hz; optional electronics to vary the amplitude and frequency are also available. Audiological Engineering created several models of a transducer called Tactaid which, although no longer available, have been used by several researchers [54, 20, 142].
2.2.10.2 Linear Resonant Actuators (LRA)
Linear Resonant Actuators (LRA) consist of a fixed coil and a magnet attached to a mass. The magnet and mass, held by a spring, are attracted and repelled by the coil, producing an oscillating movement. Because of the added mass, the system has much more inertia than the systems described above, so it can only oscillate effectively at its resonant frequency. They are being included in more and more mobile devices thanks to their low power consumption and strong vibration intensity, so they are easy to find on the market at a low price. Precision Microdrives, for example, offers a model (the C10-100) with a vibration amplitude of 1.4 G, a resonant frequency of 175 Hz, and a consumption of 69 mA, for about 10 euros.
(a) BubbleWrap display [9].
(b) C3 and C2 tactors [1].
(c) EXOS TouchMaster.
(d) Pin matrix by Talbi et al. [163].
Figure 2.9: Haptic devices based on solenoids and moving coils.
They have also been used in research in recent years. For example, Seo and Choi [151] used them in a study to create vibrotactile linear-motion illusions with two LRAs mounted on a rigid support.
2.2.10.3 Motors and Servomotors
An electric motor is a device made up of coils and a rotor that spins when an electric current is applied. A servomotor is a motor that can move to any position within its operating range and hold that position stably thanks to a feedback loop. This rotary motion can be used to create tactile sensations through different kinds of mechanisms.
The device by Shinohara et al. [153] is a display that renders tactile graphics. It is composed of a 64x64 matrix of pins with 3 mm spacing. The stimulators are extended by shafts driven by stepper micro-motors. The maximum height of the raised stimulators is 10 mm with a resolution of 1 mm, but the forces generated are too small to extend the stimulators while they are being touched by the user. The refresh time is 15 s, too high for recognising virtual objects in real time, but appropriate for static objects.
Ottermo et al. [128] use small brushless motors to turn a screw that moves each pin longitudinally. Their display consists of 4x8 elements with a maximum travel of 3 mm, a maximum force of 1.7 N, a resolution of 2.7 mm and a frequency of 2 Hz.
Wagner et al. [180] improved on this response time, achieving a refresh rate of up to 7.5 Hz by using servomotors to build a 6x6 pin display with 2 mm spacing and 2 mm resolution (Figure 2.10a).
Since a set of servomotors tends to be bulky, several attempts have been made to separate them from the tactile area. For example, Sarakoglou et al. [147] used nylon threads and springs to connect small electric motors to a 4x4 pin display. These pins have a maximum displacement of 2.5 mm, a resolution of 2 mm, a maximum force of 1 N per pin and a maximum frequency of 15 Hz.
A completely different concept is the actuator by Inaba and Fujita [60], in which a motor winds up a fabric strip that presses against the fingers. Minamizawa and Fukamachi [114] extended this concept with a device called Gravity Grabber, which uses two motors so that the fabric strip can also produce shear sensations on the skin (Figure 2.10b). These shear sensations are very useful in fields such as telemanipulation, where they help determine the right amount of force the fingers must exert on an object so that it does not fall. For the same purpose, Chen and Marcus [21] used not a fabric strip in contact with the skin but the shaft of a motor directly, thus obtaining one degree of freedom. Webster et al. [182] used a small sphere in contact with the user's finger which, coupled to two perpendicular motors, can rotate with two degrees of freedom (Figure 2.10c). Finally, the display by Frati and Prattichizzo [39] is based on three motors that wind a thread, each pulling one corner of a rigid piece placed under the finger, thereby rendering contact with surfaces of different inclinations (Figure 2.10d).
(a) Display by Wagner et al. [180].
(b) Gravity Grabber by Minamizawa and Fukamachi [114].
(c) Two-dimensional display by Webster et al. [182].
(d) Motor-based interface by Frati and Prattichizzo [39].
Figure 2.10: Motor-based devices.
2.2.10.4 Vibration Motors (ERM)
These actuators are small electric motors with an off-centre mass attached to the shaft, so that the assembly vibrates when the mass spins. Thanks to their widespread use in mobile devices, vibration motors have benefited from great advances, becoming smaller, more efficient and cheaper. Two types are commercially available: cylindrical and flat. They are easy to use, since they only need a small DC voltage across their terminals, yet they provide considerable vibration force. Their low cost and effectiveness have led to their widespread use in all kinds of devices, from gaming systems such as joysticks, gamepads and steering wheels to mobile phones and PDAs.
They have also been very well received in the field of Virtual Reality, and can be found commercially, for example, in the CyberTouch glove from CyberGlove Systems. It is essentially a CyberGlove data glove, capable of measuring finger positions, with a vibrotactile stimulator added to each finger plus one more on the palm (Figure 2.11a). These are located on the dorsal side and are of the cylindrical type. Their frequency can be varied from 0 to 125 Hz, and the maximum amplitude is 1.2 N at 125 Hz.
These actuators have also been included in numerous wearable designs, that is, integrated into ordinary clothing. One example is the shoulder-pad device by Toney et al. [168], where the authors use flat vibrators to investigate the use of vibrotactile displays in fabrics (Figure 2.11d). Another design, presented by Tsukada and Yasumura [169] and named ActiveBelt, consists of a belt fitted with 8 vibrotactile actuators which, combined with a GPS, lets the user receive directional information through touch (Figure 2.11b). Tacticycle, created by Poppinga et al. [135], also conveys directional information, but in this case the system consists of two vibrators, one on each side of a bicycle's handlebars.
The tactile device created by Zelek et al. [187] is more complex, and attempts to represent visual information through a vibrotactile system. To do so, a stereo camera system builds a description of the scene and its certainty, which is mapped to a glove with 14 vibrators on the back of the hand (Figure 2.11c). The arrangement of these vibrators is limited by the size of the skin's vibration-receptive fields, so that each motor can be unambiguously identified by the user when several motors are activated simultaneously.
2.2.11
Discussion
The main consideration when selecting the tactile actuators for this research work is the ability to find devices light and small enough to fit the user's body and, for example, form part of a glove. It is also important that they be commercially available, ideally at a low price.
(a) CyberTouch vibrotactile glove.
(b) ActiveBelt vibrotactile belt by Tsukada and Yasumura [169].
(c) Vibrotactile glove by Zelek et al. [187].
(d) Vibrotactile shoulder pad by Toney et al. [168].
Figure 2.11: Devices with ERM actuators.
None of the displays described satisfies all the requirements of an ideal tactile display, such as cost, volume, complexity or resolution. Moreover, most have other disadvantages, such as high weight, large volume or structural rigidity, which make them unsuitable for wearables or for integration into data gloves, for example. Finally, only some of these technologies are commercially available; these are compared below. Table 2.2 summarises some of their main characteristics.
piezoelectric actuators. Although piezoelectric technology is present in a great number of consumer devices, such as loudspeakers and lighters, piezoelectric tactors have been used almost exclusively by researchers. Their greatest advantages are their small thickness and wide bandwidth, being able to oscillate at frequencies between 1-300 Hz. However, their scarce presence on the market means that the few models available are not suitable for every possible use. They are also expensive and fragile, and operate at high voltage (~100 V), so they must be encapsulated and electrically insulated to be safe for the user.
shape memory alloys (sma). Their use as tactile transducers is even rarer than that of piezoelectric actuators, and only a few prototypes can be found in the scientific literature. Although SMA wires are commercially available, they must be given a suitable shape to serve as tactile actuators. In addition, since their shape depends on temperature changes, their thermal inertia prevents them from being very fast, so they have a very poor frequency response.
moving coils. They have a good frequency response and, despite not being very common outside research, they are commercially available, mainly for military, medical or aviation use. Their main disadvantages are the high price per unit and their size, which makes them unsuitable for tight spaces such as the fingertips.
linear resonant actuators (lra). With a competitive unit price and a small size, their characteristics are similar to those of ERMs. Other advantages are low power consumption and high durability, which make them ideal for mobile devices. However, they can only operate at their resonant frequency (around 180 Hz), which means that only amplitude modulation is possible and that a specialised circuit is needed, capable of continually adjusting to find the resonant frequency, which changes with external factors such as the mounting.
vibration motors (erm). These are the actuators adopted by most of the mobile device and videogame industry, so they are very cheap and easy to find. They are robust, easy to drive, safe for the user and small, yet capable of delivering a strong vibration. Their disadvantages include a relatively high latency, as they are inertial devices, and the impossibility of modulating amplitude and frequency separately, since both depend on the applied voltage.
                 LRA      ERM      Piezo     SMA       Coils
Size             Small    Small    Variable  Variable  Large
Thickness        Small    Small    Minimal   Variable  Large
Latency (ms)     20-30    40-80    <1        High      <1
Frequency (Hz)   ~175     50-250   1-300     ~10       200-300
Voltage (V)      <5       <5       50-200    <5        <3
Price (€)        5-10     1-5      50-170    Variable  ~200
Table 2.2: Comparison of different commercial tactile actuator technologies.
In view of the characteristics of the available technologies, Eccentric Rotating Mass (ERM) actuators were chosen for the present research work. Another reason behind this choice is the work of Brown and Kaaresoja [16], who carried out an experiment with ERM vibrators and commercial C2 tactors (based on an electromagnetic coil) in which they varied the rhythm and intensity of the vibration. The results showed hardly any differences in recognising the vibrotactile signals with either system. These results suggest that in many cases ERM actuators may be a better choice than tactors with a wider bandwidth that are far more expensive and bulky. Furthermore, later chapters will describe the control hardware and algorithms developed to mitigate one of their main disadvantages, latency.
2.3
vibrotactile stimulation using erm
A great number of projects can be found in the literature that use vibrotactile technology to convey haptic information in a multitude of different fields. This section reviews some of the works that have used ERM actuators for this purpose.
2.3.1
Posture Correction
One advantage of the sense of touch over other senses, such as sight or hearing, is the large number of receptors distributed over the entire surface of the body. This characteristic allows the brain to associate a specific tactile sensation with a specific point in space. Many researchers have exploited this fact to develop applications for learning and correcting posture. Rotella et al. [140], for example, used five elastic bands with four vibrotactile motors each to guide the user into a static posture. Posture correction with vibrotactile information has also been applied to sports [155], to learning to play the violin [173], to aiding rehabilitation movements [76], and simply to orienting the forearm in general learning tasks (Sergi et al. [152]).
2.3.2
Sensory Substitution
Vibrotactile displays can be used to substitute or complement sight and hearing, which is especially important for people with visual or hearing impairments. For instance, information about the environment can be conveyed vibrotactilely to a blind person to avoid obstacles and indicate the path to follow. Zelek et al. [187] designed for this purpose a glove with 14 vibrators that represented the depth information gathered by a stereo camera system. One of the main challenges is the algorithm that maps the visual information to the tactile information, since the latter is more limited, not only by the characteristics of touch but also by those of the device used.
2.3.3
Alerts
Like hearing, and unlike sight, touch is a sense that is always alert to new stimuli. In addition, these stimuli can be private, which is very attractive for applications such as alerting the user to incoming calls in quiet environments, the most widespread use of vibrotactile technology in mobile devices. Normally, vibration in mobile phones signals an event, such as a call or a new message; however, there are applications such as Vybe or Contact Vibrate with which different patterns can be set to convey more information, for instance the caller's identity. Moreover, technologies such as Immersion's VibeTonz can accompany melodies with vibrations, enriching the user experience.
The automotive industry is also benefiting from vibration technology to create driver alert systems. Cadillac, for example, has a system of vibrotactile actuators in the seat that warns the driver of possible hazards on the road, detected by an array of radar, ultrasonic and vision sensors. Citroën also offers a lane departure warning system, which detects when the vehicle is crossing a lane marking and vibrates the corresponding side of the seat.
In the scientific literature there are works such as that of Ho et al. [56], who studied the use of vibrotactile alerts in cars through two experiments, concluding that participants responded faster to this type of stimulus than to visual or auditory ones.
2.3.4
Navigation and Spatial Perception
Lindeman et al. [99] conducted a study on the effectiveness of directional vibrotactile cues for improving the situational awareness of soldiers in a hypothetical building-clearing exercise simulated in Virtual Reality. This approach also compensates for the limitations of current technology. In this case, for example, directional vibration alerts help offset the narrow field of view of a typical virtual reality headset, informing users of their exposure to uncleared areas outside their sight.
The United States Naval Aerospace Medical Research Laboratory created the Tactile Situation Awareness System (TSAS) [141]. This device, which is integrated into a pilot's clothing, was conceived to give pilots spatial situation awareness through tactors. The idea is that pilots can properly judge the gravity vector and even other flight parameters such as altitude, speed or the location of a threat. One of the flight-tested versions incorporated four columns of 5 ERM actuators embedded in nylon capsules and placed around the torso. Cardin et al. [18] did similar work, focused on informing the pilot when the aircraft has drifted off course or lost the required orientation.
The torso region seems especially suitable for displaying spatial and orientation information through a belt [169, 37, 121]. Bloomfield and Badler [12] used the forearm region to alert the user to collisions in virtual environments. Seats can also be used to provide spatial information to their occupants, as proposed by Morrell and Wasilewski [118] with a 3x5 matrix of actuators.
2.3.5
Sense of Presence
The sense of presence, or immersion, is something that some researchers have also tried to improve in cinema by exploring the possibility of adding a tactile feedback channel. For example, Kim et al. [80] proposed a data glove that provides viewers with tactile sensations synchronised with audiovisual content. This glove comprises 20 vibration motors, whose layout can be seen in Figure 2.12a. According to the authors, in a film this kind of sensation can lead not only to greater immersion but also to empathy with the characters, helping viewers put themselves in their place. The tactile effects are represented by greyscale images, such that the grey levels correspond to the vibration intensity and the image resolution corresponds to the number of actuators.
Lemmens et al. [97], for their part, aim to achieve emotional immersion with a jacket composed of 64 tactile stimulators, as shown in Figure 2.12b. The system can generate vibration patterns configurable by software and synchronise them with different moments of a film. According to their hypothesis, tactile stimulation can help increase emotional immersion, given that emotions are usually accompanied by different bodily reactions.
(a) Control circuit and vibrators by Kim et al. [80].
(b) Jacket by Lemmens et al. [97].
Figure 2.12: Vibrotactile devices aimed at increasing the sense of presence.
2.3.6
Videogames
Vibrotactile feedback is nowadays present in most commercial game controllers, such as those of the Nintendo Wii, Microsoft Xbox and Sony PlayStation in their different versions, as well as PC game controllers such as joysticks and gamepads. Many games take advantage of this technology to create richer experiences, such as driving games, which can recreate complex sensations, not only tactile but also force feedback on the steering wheel. Companies like Immersion are betting heavily on vibrotactile feedback, creating libraries such as their Haptic SDK with dozens of effects to be used in games for mobile devices. Some authors go further and propose games with tactile-only content, such as Nordvall [122] and his vibrotactile Pong. For more information, see [127].
2.3.7
Telemanipulation / Virtual Reality
Telemanipulation can be defined as the ability of a person to feel and manipulate objects remotely, whereas telepresence is the ability to make operators feel, in a realistic way, that they are at the remote site. In the 1970s and 1980s, efforts to transmit haptic sensations focused mainly on the latter systems. From the nineties onwards the term haptic was introduced and began to be associated with digital environments. Virtual Reality systems are, in essence, telepresence systems in which the remote environment is a digital simulation, so they can benefit from the haptic technology developed for telemanipulation. Among the devices created for telemanipulation there are pin arrays based on capacitive [87] and piezoelectric [26] elements, moving coils [120, 86, 28] and ERM actuators [42].
Cheng et al. [23] evaluated the use of ERM actuators to replace force feedback in delicate manipulation operations in virtual environments. The task evaluated consisted of picking up a grape and placing it in a glass; according to their results, tactile feedback reduced the time taken compared with audio or visual feedback, but increased the pressure exerted, which they suggest could be due to the subjects' overconfidence. The device consisted of a two-finger manipulator with two vibrators for each finger, whose intensity is proportional to the force exerted by the user, while also signalling any collisions that might occur.
Commercially, it is possible to find data gloves for Virtual Reality with vibrotactile actuators, such as CyberTouch from CyberGlove Systems [6]. However, they are expensive and have a small number of actuators. That is why many laboratories decide to build their own devices adapted to their needs, such as gloves [45, 162, 42, 47, 22, 119, 129, 170] or forearm-mounted devices [149].
2.3.8
Others
ERM technology has also been used as tactile feedback in other very diverse fields, such as the multimedia enrichment of video scenes (Kim et al. [80]) or assistance for musicians, providing information such as the beat while they play (Hayes [52]).
There are other works related to these actuators. Israr and Poupyrev [62] proposed an algorithm that makes the user feel continuous strokes on a discrete matrix of actuators by means of psychophysical illusions. Cohen et al. [24] used a piezoelectric film to measure the vibrations of the actuators and thus control the vibration intensity with a closed-loop control system (Figure 2.13). The experiment did not yield the expected results: external conditions that reduce the vibrators' amplitude of movement (such as pressure) are offset by the fact that those same conditions make the user feel the actuators more directly, cancelling out the effect and producing a similar subjective intensity. Finally, Erp [36] compiles a series of recommendations for developing vibrotactile devices for human-computer interaction.
Figure 2.13: Closed-loop control scheme proposed by Cohen et al. [24].
Part II
VIBROTACTILE PROTOTYPING TOOLKIT
3
VIBROTACTILE DISPLAY
In this chapter, the vibrotactile controller named Vitaki is presented, with the objective of creating a generic ERM prototyping platform that researchers can use without having to deal with low-level implementation details. First, a general overview of the vibrotactile controllers found in the literature is given. Then, the actuators are described, as well as the driving techniques that improve their response. Next, the design objectives of the hardware are discussed, followed by the implementation details of the architecture. Finally, the last section evaluates the performance of the driving techniques.
3.1
related work
Hardware controllers used by researchers to drive ERM actuators usually have notable flaws. The controller used by Bloomfield and Badler [12], for example, is composed of several relays. These electromechanical devices can only switch the vibrators fully on or off, so their intensity cannot be adjusted. Another approach is the use of a digital-to-analog converter chip to modulate the vibration intensity [169], connecting the motors directly to the output. However, the circuit used provides a maximum of 3 mA of current, while this kind of actuator typically requires more than 50 mA. Some solutions, such as the one used by Riener and Ferscha [137], have several limitations. In this case, the intensity level can be adjusted by Pulse Width Modulation (PWM), but it applies globally to all the channels. Sziebig et al. [162] perform PWM in software, which places an unnecessary load on the system and achieves a lower frequency and resolution than a hardware implementation. Some more advanced controllers, like the Tactaboard [98], have 16 channels with independent PWM modulation; however, it supports neither of the two main driving improvements, overdrive and active braking. Furthermore, this controller is not supported by any of the main haptic authoring tools.
This situation confirms the need for a controller that provides the best performance when driving ERM actuators and that is available to the research community, so that researchers do not have to deal with the electronic design, driver and protocol.
3.2
actuators
This section describes ERM actuators and their driving techniques, which will be used to specify the requirements of the electronic controller. These aspects have been experimentally tested and evaluated with the help of an analog accelerometer, and will be detailed in Section 3.4.
3.2.1
Description
ERM actuators are based on miniature Direct Current (DC) motors with an offset mass attached to the shaft. The rotation of the mass causes a displacement of the motor due to the asymmetry and hence a vibration perpendicular to its rotation axis. As every turn of the motor's shaft produces a full oscillation, the frequency of the vibration is s/60, where s is the rotational speed of the motor in revolutions per minute (rpm). This speed is proportional to the potential difference applied between the terminals of the motor, and the direction can be reversed by inverting the voltage. One limitation of vibrator motors is that their frequency and amplitude cannot be changed independently. However, this is not an important drawback, since there are studies [120] which state that signals with correlated amplitude and frequency substantially improve the subjective intensity in a remote or virtual environment. The force of vibration is given by the following expression:
F_vibration = m · r · ω²
where
m   the mass of the eccentric weight
r   the mass offset distance
ω   the angular speed of the motor (rad s⁻¹), ω = 2πf
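As an illustrative sketch (plain Python; the mass, offset and speed figures below are made up for the example, not measured from the actuators used in this work), the two expressions above can be evaluated together:

```python
import math

def erm_vibration(mass_kg, offset_m, rpm):
    """Frequency (Hz) and vibration force (N) of an ERM actuator.

    One full oscillation per shaft turn gives f = rpm / 60, and the
    force follows F = m * r * w^2 with w = 2 * pi * f in rad/s.
    """
    f = rpm / 60.0                      # vibration frequency (Hz)
    w = 2.0 * math.pi * f               # angular speed (rad/s)
    return f, mass_kg * offset_m * w ** 2

# Hypothetical coin vibrator: 1 g eccentric mass, 1 mm offset, 10500 rpm
f, force = erm_vibration(0.001, 0.001, 10500)
print(f)                # 175.0 (Hz)
print(round(force, 2))  # 1.21 (N)
```

Note how the quadratic dependence on ω makes the force, and hence the perceived intensity, grow much faster than the frequency as the voltage rises.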
Two types of vibrator motors are commercially available: disk and cylindrical ones. The former are encapsulated and can be used directly, with the vibrations parallel to the skin surface.
Figure 3.1: Different form factors of the ERM actuators. A microSD memory card is included for scale.
The latter usually have to be encapsulated, resulting in a bigger footprint; in exchange, the vibrations produced are normal to the skin surface. Although the device described in this chapter is designed to support both types of vibrators, the tests have been conducted with a Samsung L760 disk-type vibrator. Figure 3.1 shows these two form factors as well as an encapsulated cylindrical vibrator (on the left).
3.2.2
Vibration strength
Varying the voltage changes the frequency and amplitude of ERM actuators in a coupled fashion. To this end, microcontrollers typically generate a PWM signal. This type of digital signal changes the average voltage by varying the pulse width (or duty cycle) of a periodic signal. The modulated signal is then used to feed a transistor-like device able to supply enough current for the motor windings. The inductive and resistive nature of the motor acts as an analogue low-pass filter, averaging the pulses of the modulation.
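A minimal sketch of this duty-cycle control follows (plain Python, not the controller's actual firmware; the 8-bit level range and the supply voltages are assumptions for the example):

```python
def pwm_average_voltage(v_supply, duty):
    """Average voltage seen by the motor once its R-L nature has
    low-pass filtered the PWM pulses. `duty` is the fraction of the
    period the output is high, in [0, 1]."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty must be in [0, 1]")
    return v_supply * duty

def duty_for_level(level, max_level=255):
    """Map an 8-bit intensity level to a PWM duty cycle."""
    return level / max_level

# Driving a 3 V vibrator from a 5 V supply: level 153 -> duty 0.6 -> 3 V
v = pwm_average_voltage(5.0, duty_for_level(153))
```

In the real controller the duty cycle is set per channel in hardware timers; this model only captures the voltage averaging that the motor itself performs.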
The intensity control is performed in an open-loop scheme. This means that there is no status feedback to the microcontroller, and hence the frequency and amplitude are controlled only by the PWM duty cycle parameter. This solution not only simplifies the design and reduces costs, but is actually not a limitation, since some studies have shown closed-loop control of ERMs to be counterproductive [24].
3.2.3
Latency
In haptic applications, there is a delay between the detection of a user movement and the production of the haptic feedback. If this time is long enough, users experience changes in the perceived sensation, and if a certain threshold is exceeded, users no longer attribute the sensation to their own actions. These thresholds were studied by Okamoto et al. [123], who estimated them to be approximately 40 and 60 ms respectively.
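These two thresholds can be folded into a simple classification (a sketch; the labels are our own reading of the figures reported by Okamoto et al. [123], not terminology from that study):

```python
def latency_effect(latency_ms):
    """Classify feedback latency against the ~40 ms threshold (perceived
    feeling starts to change) and the ~60 ms threshold (the sensation is
    no longer attributed to the user's own action)."""
    if latency_ms <= 40:
        return "unnoticed"
    if latency_ms <= 60:
        return "feeling altered"
    return "causality lost"
```

Keeping the whole pipeline, from motion detection to actuator spin-up, under the first threshold is therefore the design target for the techniques below.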
One of the drawbacks of using ERM actuators is their relatively high latency in reaching the desired output level. This latency has two sources: the time the motor takes to accelerate and the time it takes to decelerate, each of which has been addressed with a different technique. A typical disk-type vibrator takes approximately 200 ms to start and 250 ms to stop completely, depending on its specific characteristics.
3.2.4
Pulse Overdrive
The start-up time of an ERM actuator decreases as the applied voltage increases. Figure 3.2a shows the amplitude of the vibrations of a specific ERM actuator over time for different potential differences. Note that at higher voltages the duration of the tests was reduced to avoid damaging the actuator. It can be observed that not only is the amplitude reached proportional to the voltage, but the latency is also significantly lower at higher voltages. If we measure the time taken to reach half of the maximum amplitude, the start-up time can be plotted against voltage (Figure 3.2b). The time drops sharply at first, and less markedly from 7 volts onwards for this specific actuator. Although every ERM actuator has a nominal operating voltage, a short pulse of higher voltage can be applied to accelerate it without risk of overheating. This leads to a significantly shorter start-up time, and hence a lower latency. We refer to this technique as Overdrive.
The optimal pulse time is different for each actuator model, although there is a relatively wide margin of operation. The pulse duration also depends on the desired motor speed: the lower the desired speed, the shorter the pulse has to be. If
(a) Envelope curve of the acceleration over time for different voltages.
(b) Time required to reach an acceleration of 2 G depending on the voltage applied.
Figure 3.2: Start-up characterisation of the ERM actuators.
the pulse is too short, however, the motor may not have reached
the desired speed, and thus the start-up time will not be optimized. On the contrary, if the pulse is too long it may produce
overshooting, creating a vibration peak which may not be appropriated.
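The overdrive scheme described above can be sketched as a small function that turns a desired intensity into a pair of (voltage, duration) segments: a short pulse at the overdrive voltage, then the steady voltage for the target level. The figures used (8 V overdrive, 3 V nominal, 60 ms maximum pulse) match the actuator characterized later in this chapter; the linear scaling of the pulse length with the target level is an illustrative assumption, since the text only states that lower target speeds need shorter pulses.

```python
def overdrive_segments(target_level, total_ms,
                       nominal_v=3.0, overdrive_v=8.0, max_pulse_ms=60):
    """Return (voltage, duration_ms) segments for one vibration command.

    target_level: desired steady intensity in [0, 1].
    """
    if target_level <= 0.0:
        return [(0.0, total_ms)]
    # The lower the desired speed, the shorter the pulse has to be
    # (assumed linear here for illustration).
    pulse_ms = round(max_pulse_ms * target_level)
    steady_v = nominal_v * target_level
    if pulse_ms == 0 or pulse_ms >= total_ms:
        return [(steady_v, total_ms)]
    return [(overdrive_v, pulse_ms), (steady_v, total_ms - pulse_ms)]

# Full-power 300 ms command: 60 ms at 8 V, then 240 ms at 3 V.
print(overdrive_segments(1.0, 300))   # [(8.0, 60), (3.0, 240)]
```

In a real driver the calibration step mentioned in the text would replace the assumed linear rule with measured per-actuator values.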
This technique has another advantage: it allows the vibrator to start spinning from rest when a low vibration level is desired. The voltage at low levels may be enough to keep the motor rotating, but insufficient to start it, so a short pulse of higher voltage is needed.
3.2.5
Active Braking
The Active Braking technique is designed to improve the deceleration time, that is, the time required to reach a lower speed or to brake the motor completely.
ERM actuators are braked by generating a negative torque, which slows down their rotational speed. If no external brake is applied, this negative torque comes mainly from internal friction, which decelerates them slowly. One approach to reduce this time, called Passive Braking, consists of shorting the motor contacts. This exploits the back ElectroMotive Force (EMF) voltage induced in the coils of the motor, and it is in fact one of the modes of operation of an H-Bridge, as detailed in Section 3.3.
However, this time can be further improved with the Active Braking technique, which applies current to the motor coils with reverse polarity. This creates an effective brake for the momentum of the eccentric mass, reducing the stop time significantly.
To illustrate this reduction, the three ways to stop an actuator (no braking, passive braking and active braking) have been tested, and the results are depicted in Figure 3.3. For this specific motor, passive braking does not bring a noticeable improvement in terms of time, as more than 200 ms are required in both the No Braking and Passive Braking modes. Active Braking, however, decreases this time to 40 ms.
The time gained depends on how fast the actuators dissipate their mechanical energy when no brake is applied. If the actuators decelerate quickly, then actively braking them makes little difference. This is the case when the actuators are loosely mounted on the fabric, so they can vibrate almost freely. This condition increases the vibration amplitude, slows down the vibrator speed and therefore decreases the time required to stop it. Conversely, a tight contact of the actuator against the user's skin makes it behave more like a centered-mass motor, reaching a higher oscillation frequency and thus needing more time to stop completely. Another important factor is the actuator size: the bigger the actuator, the higher its mass-to-friction ratio tends to be, and so are the benefits of using active braking. Lastly, the time required to decelerate depends on the difference between the current and the desired motor speed.
Figure 3.3: Curves of deceleration of an ERM actuator under Active Braking, Passive Braking and free-spinning conditions.
Having all these variables implies that the negative voltage pulse should be variable, and ideally it would require speed feedback from the motor itself. In practice, this is not necessary, because some variables are fixed for a specific application, such as the way an actuator is mounted, its size, and its internal characteristics, so only an initial calibration is required. Dynamic variables, like the current speed, can be either estimated, by simulating the actuator response over time, or restricted, for example by applying the active braking technique only when the motor was previously driven at full power and the objective is to brake it completely. Despite all these variables, ERM actuators are quite flexible, and the braking period does not have to be very precise. If the time is shorter than the optimum, the motor will still be spinning, but it will eventually decelerate following the “No braking” curve. If the time is longer, there is still a margin of approximately 10 ms before the motor starts spinning backwards.
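The restricted policy described in the previous paragraph can be sketched as a simple rule: apply a fixed reverse-polarity pulse only when the motor was previously driven at full power and a complete stop is requested, and otherwise let it coast. The 25 ms pulse at 8 V matches the calibration used in the performance evaluation of Section 3.4; the function and its sign convention are assumptions for illustration, not the actual firmware interface.

```python
def brake_command(previous_level, target_level,
                  brake_v=8.0, brake_pulse_ms=25):
    """Return a (signed_voltage, duration_ms) brake pulse, or None.

    A negative voltage means reverse polarity applied through the H-Bridge.
    """
    # Restricted case: full power before, complete stop requested.
    if previous_level >= 1.0 and target_level == 0.0:
        return (-brake_v, brake_pulse_ms)
    # Otherwise let the motor coast along the no-braking curve.
    return None

print(brake_command(1.0, 0.0))   # (-8.0, 25)
print(brake_command(0.4, 0.0))   # None
```

A pulse a few milliseconds too long is tolerable here, since the text notes an approximately 10 ms margin before the motor would start spinning backwards.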
3.3
vibrotactile controller
3.3.1
Design goals
The haptic controller is designed with several objectives in mind. The platform has to be flexible and support a high number of actuators, so the design has to be based on a scalable architecture. It should support a wide range of vibrators, whose specifications can vary widely in voltage and current consumption. The overdrive technique requires a mechanism for modulating the voltage, such as PWM, and the braking technique requires the ability to invert the polarity. To make it affordable and easily replicable, it should use off-the-shelf components.
This device is conceived as a vibrotactile experimentation platform for Virtual Reality and videogames, which makes the ability to test multiple configurations important; therefore, the actuators should be easy to connect and to relocate along the haptic device.
Other desirable aspects are updatability, which may be accomplished via firmware updates of the device, and compactness, making it possible for the device to be carried by the user.
3.3.2
System Architecture
The basic architecture of the system is depicted in Figure 3.4, and it is based on the one proposed in [145]. The haptic rendering module resides in the computer; it calculates the haptic stimuli and sends them to the device through the haptic driver. The haptic device is composed of the actuators and the haptic controller that drives them. This electronic controller contains the microcontroller, which provides the logic and the communication with the PC, and two electronic modules to drive the actuators, which are detailed in Section 3.3.3.
Figure 3.4: Architecture of the system, based on [145].
3.3.3
Electronic Architecture
The proposed design is based on a scalable architecture. A general overview of the system is depicted in Figure 3.5. In the basic configuration, 16 vibrators can be controlled in both intensity and direction. To control the direction of rotation, and thus enable the device to perform active braking, each motor is fed by an H-Bridge able to provide up to 600 mA. This component can apply a voltage across a load in either direction, allowing it to drive a motor clockwise or counterclockwise. The voltage used to feed the motors is independent of the logic voltage, and can be up to 36 V. An H-Bridge has two digital inputs that select one of four modes of operation: two of these combinations drive the motor in either direction, and the other two passively brake it (shorting its terminals). This passive braking feature was empirically tested and discarded in Section 3.2.5, as no noticeable improvement was measured. Since only two of the four modes of operation are of interest, only one data line is needed, the second input being driven by the inverted value of the first to reduce the number of digital lines required from the microcontroller. This configuration can also be used to perform PWM, by using the Enable line of each H-Bridge, thus controlling the vibration intensity of each actuator. With this solution, two data lines are necessary to drive each motor (one for PWM, and one to control the rotation direction) and, consequently, this solution is still not scalable, because the number of vibrators depends on the number of digital outputs of the microcontroller, and specifically on the number of PWM-enabled ones.
Figure 3.5: System block interconnection diagram. The components
inside the square (ERM Controller) can be cloned and connected in series to scale the system.
This issue has been solved by using a PWM driver integrated circuit and a shift register in conjunction with the H-Bridges. The PWM driver is connected to the Enable inputs of the H-Bridges. It receives a serial communication and provides up to 16 PWM-modulated data lines with a resolution of 12 bits (4096 levels). Two in-series eight-bit shift registers have been used to feed the H-Bridge data lines that control the rotation direction.
Since both the PWM controller and the shift registers are based on serial communication, their “Serial Out” lines can be used to extend the number of actuators by connecting modules in series, where the serial output of one module becomes the serial input of the next. A theoretical maximum of 40 stages can be connected in series, with an increased latency that depends on the speed of the serial communication.
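The daisy-chained serialization described above can be sketched as two packing functions: 16 direction bits per stage for the shift registers, and 16 twelve-bit duty cycles per stage for the PWM driver. The exact bit order is chip-dependent and is an assumption here; the sketch only illustrates the chaining idea, where the data for the stage farthest from the microcontroller must be shifted out first.

```python
def pack_direction_bits(stages):
    """stages: list (one entry per stage) of 16 booleans (True = reverse)."""
    stream = []
    for stage in reversed(stages):          # farthest stage shifted first
        assert len(stage) == 16
        stream.extend(1 if reverse else 0 for reverse in stage)
    return stream

def pack_pwm_values(stages):
    """stages: list of 16 duty cycles per stage, each in 0..4095 (12 bits)."""
    stream = []
    for stage in reversed(stages):
        for duty in stage:
            assert 0 <= duty <= 0xFFF
            # Most significant bit first (an assumption for illustration).
            stream.extend((duty >> bit) & 1 for bit in range(11, -1, -1))
    return stream

# Two stages: 2 * 16 direction bits and 2 * 16 * 12 PWM bits per update.
dirs = pack_direction_bits([[False] * 16, [True] * 16])
pwm = pack_pwm_values([[0] * 16, [4095] * 16])
print(len(dirs), len(pwm))   # 32 384
```

Each additional stage simply appends another 16 + 192 bits to the shifted stream, which is why the per-stage latency grows linearly with the chain length.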
The microcontroller is an Atmel ATmega32U4 running at 16 MHz, connected to the PC through a USB 2.0 connection.
It should be noted that, due to the architecture of the hardware, supporting LRA actuators should also be possible, although this has not been tested. LRA actuators are based on solenoids, and they need an alternating current at a certain frequency to work, which can be generated with the H-Bridges of the controller.
3.3.4
Controller
The current implementation of the circuit is a desktop version, designed to rest on a table and connect through USB, although it could be attached to a belt around the user's waist. A smaller wireless version, designed to be worn by the user, is under development. The controller provides:
• Power supply port, through a standard barrel jack connector. This is the main power line for the actuators. A wide voltage range is supported, from 5 V up to 36 V, independent of the logic voltage. It must be equal to or higher than the overdrive voltage, which can be adjusted in the driver configuration.
• Micro-USB female connector for the connection to the PC.
• Main switch. It allows the haptic feedback to be easily disabled.
• Volume potentiometer. The user can turn this rotary knob to adjust the global intensity of the vibration in real time.
• Bi-color LED indicator. This LED indicates the state of the controller. It blinks when the power supply is not present or the main switch is turned off; it lights in one color when the device is in stand-by mode, and in the other while haptic data is being transmitted by the PC.
• Output port. The base configuration provides one port with 16 channels. Each actuator is connected with a standard two-pin, 0.1” male connector. Each stage plugged in adds its own output port connector.
• Link connectors. These internal connectors are used to scale the board by plugging in new stages.
Figure 3.6 shows a prototype of the implemented circuit, with the main components described in Section 3.3.3 highlighted. Figure 3.7 shows the external box used. The approximate cost of the materials, including 16 Samsung L760 vibrators, is under 150 €.
Figure 3.6: Circuit prototype of the Vitaki Controller. The main architecture components are highlighted in different colors.
Figure 3.7: External box of the Vitaki Controller. This case can accommodate up to 5 extra stages. The volume potentiometer, status LED, and power switch are attached to the top.
Figure 3.8: Scalability of the Vitaki controller. The basic configuration
supports 16 channels. New stages can be easily stacked on
the top, adding 16 channels each.
3.3.5
Scalability
The controller can be scaled up by stacking new stages on top of it, as depicted in Figure 3.8. The maximum number of stages that can be connected depends on the specific implementation of the architecture. In particular, there are two basic restrictions: the maximum current the power supply can provide, and the maximum overall latency allowed, which depends on the serial communication with the microcontroller and the number of stages. The theoretical maximum number of stages specified by the integrated circuits is 40, and hence 640 channels in total.
3.3.6
Voltage Considerations
The voltage used to perform the Overdrive and Active Braking techniques can be adjusted from an external source, and it is independent of the logic voltage. Furthermore, it can be configured as a parameter in the driver, so that when the microcontroller needs to perform an overdrive, it adjusts the effective voltage by PWM. The higher the overdrive voltage, the faster the response of the motors, but also the higher the temperature increase and the risk of damaging them. Considerations like the frequency at which the overdriving pulses are generated should be taken into account so that the mean voltage stays under the actuator's nominal specifications and the risk of overheating is avoided.
3.3.7
Prototyping Features
The Vitaki controller has been designed to support easy prototyping of new vibrotactile devices. The following features contribute to this objective.
• The output connector has a standard 0.1” pitch, and it allows hot-plugging new actuators on the fly.
• A wide range of actuators can be used: from miniature vibrators that can be seamlessly integrated in datagloves, to high-powered ones (up to 600 mA) suitable to be attached to a racing seat.
• A system to redistribute actuators has been developed. Most wearable devices consist of a garment which is used as a support to attach one or more actuators. However, their distribution is not trivial, and some experimentation may be required; thus, a system to quickly relocate the actuators is a strong advantage. This system consists of a small neodymium magnet attached to the vibrator, and another one placed on the other side of the fabric, as seen in Figure 3.9a.
• A set of extension wires has been built to suit the different configurations of the actuators. As many different configurations are possible, different cable lengths are necessary to connect the actuators to the controller. The extension wires avoid having to solder custom wires every time an actuator has to be relocated. An extension wire and the two types of actuators used are depicted in Figure 3.9b.
The capabilities of the platform are partially illustrated by the
implemented devices described in the following chapters.
3.3.8
Haptic Driver
The haptic driver is a piece of software that connects the application that generates the haptic stimuli with the electronic controller.
(a) Vibrator attached to a piece of
cloth with a magnet.
(b) Actuators with their connectors
and an extension wire.
Figure 3.9: Prototyping features of the Vitaki controller.
3.3.8.1
Communication Protocol
A custom communication protocol has been designed and optimized to provide efficient bi-directional communication. Instead of sending updates of the actuator states, the protocol is based on a streaming paradigm: every millisecond, a frame update is transmitted to the device with the vibration intensity of each actuator. This provides two major advantages. On the one hand, it avoids the buffering delay of the USB driver, which occurs when a small packet of data is transmitted through USB. On the other hand, it provides robustness, avoiding the problems caused by packet loss. This is imperative to support future wireless versions of the controller.
Every frame of the communication protocol consists of 2 bytes per actuator plus 2 bytes to indicate the end of the frame. The communication speed of the microcontroller is not a limiting factor for the scalability of the platform, since it would allow updating more than 500,000 actuators, as measured in the tests. The communication latency added by every stage is 31 microseconds.
3.3.8.2
Driver Architecture
The driver consists of three main modules, as depicted in Figure 3.10.
public interface. It provides a set of high-level functions that are exposed through a DLL library. These functions are used by the haptic applications to communicate with the driver, setting new values for the vibrators or requesting specific parameters, such as the number of actuators available.
Figure 3.10: Main architecture of the driver.
real-time module. This module contains a buffer where the haptic stimuli to be played by the controller are written. It also implements the real-time playback mechanism, running its own thread at 1 kHz. On every loop it reads one sample from the buffer (which has a resolution of 1 ms) and sends it to the Communication Module. It also maps the generic vibration values to specific voltages, as detailed in Section 3.3.8.3.
communication module. This module implements the communication protocol, using a serial port library.
3.3.8.3
Voltage Mapping
Vibration values specified to the driver must be generic, so that they do not depend on the specific vibrators used. Therefore, vibration values cannot be specified in terms of voltage. Instead, a unitless vibration value has been defined, ranging from -1.5 to 1.5. Values between 0 and 1 are mapped between the minimum and the nominal voltage of the actuators, while values between 1.0 and 1.5 are scaled up to the overdrive voltage specified in the configuration. Negative values are treated symmetrically, and they are usually used for braking purposes. This mapping function is designed to maximize the dynamic range of intensity a vibrator can provide, which depends on the configured minimum, nominal and overdrive voltages. A typical mapping function is shown in Figure 3.11. Note that values between 0 and the minimum step size, that is, 1/PWM_Resolution, are mapped to 0.
Figure 3.11: Function used to map waveform values to the desired actuator voltage. The main range is between 0 and 1. In this example, a PWM resolution of 4096 steps has been considered, with minimum, nominal and overdrive voltages of 1.2, 3.3 and 8 volts.
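The piecewise-linear mapping described above, using the example values from the caption of Figure 3.11 (minimum 1.2 V, nominal 3.3 V, overdrive 8 V, 12-bit PWM), can be sketched as follows. The function name and the convention of returning a signed voltage to encode polarity are illustrative assumptions.

```python
def map_to_voltage(value, v_min=1.2, v_nom=3.3, v_over=8.0, pwm_steps=4096):
    """Map a unitless vibration value in [-1.5, 1.5] to a signed voltage."""
    sign = 1.0 if value >= 0 else -1.0
    magnitude = abs(value)
    if magnitude < 1.0 / pwm_steps:        # below the minimum step size
        return 0.0
    if magnitude <= 1.0:                   # 0..1 -> minimum..nominal
        return sign * (v_min + (v_nom - v_min) * magnitude)
    magnitude = min(magnitude, 1.5)        # 1..1.5 -> nominal..overdrive
    return sign * (v_nom + (v_over - v_nom) * (magnitude - 1.0) / 0.5)

print(map_to_voltage(1.0))    # nominal voltage (about 3.3 V)
print(map_to_voltage(1.5))    # overdrive voltage (about 8 V)
print(map_to_voltage(-1.5))   # full reverse-polarity braking pulse
```

Because the waveform values are unitless, the same stimulus file works unchanged across actuators: only the three configured voltages change per device.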
3.4
performance evaluation
The response times and the acceleration waveform produced by the ERM actuators in response to a given input have been objectively measured with the help of an accelerometer and an oscilloscope. This is useful to compare the behavior under different conditions, such as driving the motors at their nominal voltage versus using the aforementioned overdrive and active braking techniques.
To this end, an ADXL335 accelerometer by Analog Devices has been attached to a Samsung L760 disk-type vibrator, which has a nominal voltage of 3 V. The freedom of movement, or the rigidity of the structure, affects the response of the actuator, so it should be as similar as possible to the rigidity the actuator has when mounted on the haptic device and worn by the final user.
In this experiment, the actuator under test is first driven with a 300 ms pulse at its nominal voltage. Then, the advanced driving techniques are applied in order to compare the results. The analysis of a single pulse gives important information regarding the start and stop times, frequency, and amplitude over time, and is used as part of a calibration process. The overdrive and active braking voltage has been set to 8 volts, with the overdrive and braking pulses being 60 and 25 ms long, respectively.
Figure 3.12 clearly shows two main differences. The first is the start curve (Figure 3.12a), which rises slowly to its maximum amplitude in the first case, and much more aggressively in the second. When the motor is fed with its nominal voltage, it takes 73 ms to start spinning (reach 10% of the maximum vibration) and 200 ms to reach 90% of its maximum amplitude. When an overdrive pulse is used, the motor requires 33 ms to start spinning and 67 ms to reach 90%, which means a reduction of 40 and 133 ms respectively. This is a very important improvement, since these times are now within the limits obtained by Okamoto et al. [123]. The second noticeable difference is found in the stop curve (Figure 3.12b). Although the time elapsed from 100% to 90% is very similar (10 ms and 2 ms respectively), the difference in the time elapsed to reach 10% is very noticeable (246 ms and 50 ms respectively).
These differences in behavior may affect the crispness perceived by the user, for example when a sequence of pulses is reproduced.
3.5
conclusions
In this chapter, a scalable ERM controller has been presented, implementing unique features that improve on previous proposals. The hardware is capable of driving a wide range of ERM actuators (up to 600 mA per channel), with overdrive and active braking support. Although untested, the hardware should also be capable of driving LRA actuators, given the nature of the electronics used. Each stage has 16 channels, and stages can be connected in series to increase this number. In addition, the hardware provides a rotary knob to adjust the overall vibration intensity in real time, so it can be adapted to each user and thus avoid fatigue. To help create new devices, small magnets have been added to each actuator; a second magnet located on the other side of the garment fabric allows each actuator to be securely placed and easily relocated.
Finally, an objective performance evaluation of the implemented techniques has been conducted, confirming a considerable reduction in the latency of the actuators.
(a) Standard driving technique.
(b) Advanced driving technique, where the input signal shows the start and
stop pulses.
Figure 3.12: Acceleration response to a 300 ms pulse.
4
VIBROTACTILE AUTHORING TOOL
This chapter presents a vibrotactile authoring tool for ERM actuators, which is part of the Vitaki toolkit. First, a detailed description of the software is provided. Then, two different examples are given to illustrate its use. Finally, a preliminary evaluation of this toolkit is presented.
4.1
related work
Although numerous papers that include the design of specific haptic devices can be found, most of these designs are made without any specific prototyping tool that offers support to the designer. As Panëels et al. [131] argue, it is still difficult to program and develop haptic applications and, consequently, there is a need to hide the complexity of programming haptic applications or interactions by providing prototyping tools.
Nowadays, several proposals offer different alternatives to facilitate the creation of haptic prototypes. Some of them aim to ease the design of force-feedback sensations using commercial devices like the Phantom from SensAble Technologies [113] or the Novint Falcon. Thus, Forrest and Wall [38] developed a haptic prototyping tool that enables non-programmers to build a haptic 3D model that can be explored using a Phantom device. Using the Novint Falcon device, Kurmos et al. [90] presented a proposal for integrating haptics support into an X3D-authored virtual world using an open-source haptics library via the Scene Authoring Interface (SAI). Eid et al. [31] proposed an authoring tool (HAMLAT) implemented by extending the Blender 3D modeling platform to support haptic interaction. This tool allows non-programmer developers or artists to design prototype haptic applications.
In the tactile domain, some tools have been developed to facilitate the prototyping of vibrotactile stimuli. Building on the earlier Hapticon Editor proposed by Enriquez and MacLean [35], Swindells et al. [161] described key affordances required by tools for developing haptic behaviors. They propose three main elements to create haptic icons: a waveform editor to represent the magnitude of a haptic signal; a tile palette that contains tiles representing basic haptic effects; and a tile pane that enables combining these basic haptic tiles into more complex ones. Taking this work into account, Ryu and Choi [143] presented posVibEditor, an authoring tool for designing vibrotactile patterns that supports multiple vibration motors. Other proposals include the use of a musical metaphor [144], simplified vibrotactile signals [130, 71], or are based on demonstration [59, 25]. On the commercial side, the company Immersion has created the Haptic Studio tool.
In the next section, the Vitaki GUI is presented, a software tool created to design vibrotactile patterns using the same metaphor as posVibEditor [143]. This application uses a multi-channel timeline to compose complex stimuli associated with one or more actuators. The tool goes further and adds the possibility of including the overdrive and braking techniques in the design, allowing the user to create waveforms not only in the positive range but also in the negative range, improving and increasing the range of sensations an ERM actuator can produce. Furthermore, the novel features added include an individual volume adjustment per actuator, a visual representation of the device, and a C++ Application Programming Interface (API) library.
4.2
vitaki authoring tool
The Vitaki authoring tool has been designed to help the user through the process of designing and testing tactile stimuli using the electronic controller developed. It can be used to compose, store and play complex vibrotactile patterns, either with a graphical interface or through an external application. For instance, a set of vibrotactile stimuli adapted to a new device can be created with the user interface. After a testing and refinement period inside the application, an external Virtual Reality application can access the stored set of stimuli and play them in response to predefined events.
This section first introduces its architecture and then describes the tool.
Figure 4.1: Vitaki GUI architecture. Vitaki API contains the logic and a
public interface which is used by Vitaki GUI and external
applications.
4.2.1
Architecture
Two main components have been developed to build the Vitaki authoring tool. A general overview of the whole system is depicted in Figure 4.1.
The Vitaki API module implements the logic of the system. It deals with the stimuli database and communicates with the haptic driver. In addition, it defines a public interface or API, allowing external applications to store and play vibrotactile patterns. It is, consequently, a high-level layer that can be used on top of the haptic driver to avoid dealing with low-level haptic details.
The Vitaki GUI module implements the user interface of the authoring tool. It provides the basic building blocks that the user can use to compose new stimuli.
4.2.2
Implementation
The Vitaki GUI software is a user interface to graphically design and test vibrotactile stimuli, as well as to configure different parameters. The software has been implemented with the Qt1 libraries and is hence multiplatform, with support for GNU/Linux and Windows. The GUI, shown in Figure 4.2, can be used not only to adjust different parameters of the controller, but also to design and test complex vibrotactile stimuli over different vibrator arrangements. Thorough stimuli can be composed just by dragging simple waveforms, which can also be edited with the Waveform Editor. These stimuli can later be used by external applications through the API.
1 http://qt-project.org/
Figure 4.2: Vitaki GUI. Application developed to test actuators and vibration patterns. In the image, the user is dragging actuator identifier 8 to its position on the device.
The configuration dialog, shown in Figure 4.3, allows the user to read information from the device, such as the number of actuators available, the PWM resolution, and the current voltage of the power supply, which is sensed by the microcontroller. Other information, like the overdrive, nominal and minimum voltages of the actuators, can be read and written back to the device, where it is stored in non-volatile memory. These values are generally specified in the vibrator datasheet provided by the manufacturer, but they can be fine-tuned experimentally for better performance.
Due to differences in the arrangement of the actuators on the skin, or the use of different types of actuators, not all vibrators are perceived with the same intensity. The Actuator levels section in Figure 4.3 allows every actuator to be adjusted individually to compensate for these factors. The Master control, which can be changed in real time by the user through a rotary knob on the electronic controller (see subsection 3.3.4), can be thought of as a volume control: its purpose is to quickly increase or reduce the overall intensity of the vibrotactile stimuli to suit the user's preferences.
Figure 4.3: Configuration dialog. Top left, read-only values. Bottom left, user-customizable values. Right, adjustable intensity levels.
When creating a haptic project with the Vitaki GUI, one of the first steps is choosing a picture that represents the device used, which in the case of Figure 4.2 is a glove-like device. Then, the user can set the number of actuators of the device, 11 in this case. After that, each actuator identifier can be dragged to its position on the image, so that it can later be used as a reference to help in the creation of the stimuli. Each stimulus is composed of one or more channels, each associated with one or more actuators. To compose a stimulus, the user drags basic waveforms from the right panel to the desired channel, where they can be resized to meet a specific duration. The software provides a default set of basic waveforms, but it also allows the user to create new sets.
The waveforms can be edited in the Waveform Editor, shown in Figure 4.4, by clicking on them. A waveform is defined by points which can be either dragged with the mouse or specified by writing their coordinates in a table. It is important to note that the waveform editor can make use of the overdrive and braking capabilities of the controller: values from 1 to 1.5 are associated with overdrive, while negative values are used for braking purposes.
Figure 4.4: Waveform editor, a tool to edit the basic building blocks of the vibrotactile stimuli. The waveforms can be defined with the mouse, or by writing their coordinates in the table.
In most existing authoring tools, waveform values are defined in voltage units. However, this is a serious limitation, since the waveforms are then tied to a specific actuator. On the contrary, Vitaki waveforms are device-independent, and defined within a range from -1.5 to 1.5. The mapping between waveform values and actuator voltages is performed by the driver in real time, as detailed in subsubsection 3.3.8.3.
The combination of all the generated vibrotactile stimuli and the parameters of the advanced configuration can be saved as a project to be reused later. This is also useful for generating a library of sensations which can be used through the API in a haptic environment.
4.3
examples of use
In Virtual Reality environments, a glove-like device is typically used to provide the user with natural interaction with the environment, which can be achieved by sensing the position of the user's hand in space. These devices are especially suitable for integration with the ERM controller, due to the relatively high number of actuators that can be placed on the fingers and the sensitivity of the skin in this area, which benefits from the overdrive and braking techniques. Both the vibrotactile controller and the software authoring tool have been evaluated in different scenarios, adjusting the number and position of the actuators to each one and designing different stimuli.
4.3.1
Vibrotactile Morse Encoder
One of the applications of haptic feedback is pattern codification to transmit information through the sense of touch. In order to test this concept, the Vitaki GUI has been used to encode Morse signals as vibration pulses. In this case, only one actuator is needed to transmit the stimulus to the user, and it is located on the index finger of the glove-like device created. Once the number of actuators is set to “1” in the interface, the actuator identifier (i.e. the port where the actuator is connected) is dragged to its position on the image, as seen in Figure 4.5.
Every number and letter of the alphabet can be coded in Morse
as a composition of dots and dashes, which are pulses of different durations (one and three time units respectively). Figure 4.4
Figure 4.5: Codification of the Morse signal of letter “d” using the
Vitaki GUI.
shows the creation of a dash with the Waveform Editor, including the overdrive and braking techniques. Morse code does not
define the value of the time unit, which depends on the experience of the operators. In this case, the unit of time used is 50
ms. This means that the short pulse lasts 50 ms (one time unit),
the long pulse (dash) has a duration of 150 ms (three time units)
and the space between pulses is 50 ms as well. Once this custom set of vibration waveforms is created, they just need to be
dragged to form the letter “d” (one dash followed by two dots).
This stimulus is named “Letter d overdrive” in Figure 4.5 (bottom). Once all the letters are stored in the stimuli database of
the project, an external application can make use of the Vitaki
API to play Morse code letter by letter.
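The timing scheme above can be sketched as follows. The Morse table entries shown are standard; the helper itself is illustrative rather than taken from the toolkit.

```python
# Timing from the text: dot = 1 unit (50 ms), dash = 3 units (150 ms),
# inter-element gap = 1 unit. Only a few Morse table entries are included.

UNIT_MS = 50
MORSE = {"d": "-..", "e": ".", "s": "..."}

def letter_to_pulses(letter):
    """Return a list of (on_ms, off_ms) pulse/gap pairs for one letter."""
    pulses = []
    for symbol in MORSE[letter.lower()]:
        on = UNIT_MS if symbol == "." else 3 * UNIT_MS
        pulses.append((on, UNIT_MS))
    return pulses

# letter "d": one dash followed by two dots
letter_to_pulses("d")  # → [(150, 50), (50, 50), (50, 50)]
```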
The vibration produced by the actuator has been measured
in the case of the aforementioned letter “d” both with regular
pulses and applying the advanced techniques, and can be seen
in Figure 4.6a and Figure 4.6b respectively. The overdrive pulse in this case is 60 ms for the long pulse, and is reduced to 40 ms for the short pulse. The brake pulse is 20 ms long.
As seen in Figure 4.6, the difference between two consecutive pulses is clearer if the motor is braked and then overdriven. On the contrary, not only is the initial dash significantly shortened in the first case, but during the period of time between pulses the motor continues vibrating due to its inertia and the pulses are softened.
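The pulse shaping just measured can be sketched as follows; the drive levels and the exact segment shapes are assumptions, but the durations (60/40 ms overdrive, 20 ms brake) match the text.

```python
def shaped_pulse(duration_ms, overdrive_ms, brake_ms=20,
                 drive=1.0, overdrive=1.5, brake=-1.5):
    """Build a pulse as (level, ms) segments: an initial overdrive burst at
    maximum amplitude, the remainder of the pulse at nominal drive level,
    then a short reverse (braking) burst to stop the motor quickly.
    Levels use the device-independent [-1.5, 1.5] range; the exact levels
    are illustrative assumptions.
    """
    overdrive_ms = min(overdrive_ms, duration_ms)
    segments = [(overdrive, overdrive_ms)]
    if duration_ms > overdrive_ms:
        segments.append((drive, duration_ms - overdrive_ms))
    segments.append((brake, brake_ms))
    return segments

dash = shaped_pulse(150, overdrive_ms=60)   # long pulse: 60 ms overdrive
dot = shaped_pulse(50, overdrive_ms=40)     # short pulse: 40 ms overdrive
```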
(a) Standard driving technique at 3 V.
(b) Advanced driving technique at 8 V.
Figure 4.6: Acceleration measured in response to the Morse-coded letter “d”.
Figure 4.7: Object fall stimulus defined using Vitaki GUI.
These results suggest that the implementation of the described techniques may be a significant advantage when the actuators have to display pulses which can come not only from a specific code sequence, but also as the result of the exploration of a haptic pattern. An investigation of the effects of these improvements on subjective tactile perception is left to future work.
4.3.2 Object Fall Detection
One common problem that arises due to the lack of the sense of touch in virtual environments is that users are not able to notice that they have accidentally dropped an object they are holding unless they are looking directly at it, something that can be difficult if the user is performing another task at the same time (for example, navigating or looking for another object). In this situation, a vibrotactile stimulus can be used to tell the user that the object is falling. This example is depicted in Figure 4.7.
This time, 11 actuators have been mounted on the glove to
transmit the sensation to the user. The sensation that was modeled
here sends a vibration signal first to the index and thumb fingers
and, when the vibration reaches its maximum, the middle finger
starts vibrating. The same pattern is modeled for the ring and
little fingers. The aim of the stimulus designed for this example
Figure 4.8: Example application using the Vitaki API. When the object
grasped by the user falls, the previously configured stimulus is played.
is to make the user feel that something is falling from the index to the little finger, and this is the reason why the vibration seems to move from one finger to another. A screenshot of the application can be seen in Figure 4.8.
This complex multi-channel stimulus is designed by connecting several basic units from the default set of waveforms located at the top right of Figure 4.7.
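The staggered finger pattern described above can be sketched as a simple schedule: each finger receives the same ramp, offset so that the next finger starts when the previous one peaks. The ramp duration and the grouping are illustrative assumptions, not values taken from the Vitaki project file.

```python
RAMP_MS = 200  # time for the vibration to reach its peak (assumed)

def finger_schedule(fingers):
    """Return {finger: start_ms}, staggering starts by one ramp time.
    Fingers listed in the same group (e.g. index and thumb) share a start
    time, matching the pattern described in the text.
    """
    schedule = {}
    for i, group in enumerate(fingers):
        for finger in group:
            schedule[finger] = i * RAMP_MS
    return schedule

schedule = finger_schedule([["index", "thumb"], ["middle"], ["ring"], ["little"]])
# index and thumb start at 0 ms, middle at 200 ms, ring at 400 ms, little at 600 ms
```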
4.4 evaluation of the platform
Different approaches have been followed in order to assess the strengths and weaknesses of the proposed hardware and software platform. Firstly, the toolkit has been analysed using the criteria introduced by Olsen [125], which are intended to demonstrate the effectiveness and utility of a new system when usability evaluations are not appropriate. Secondly, the toolkit has been compared against other proposals previously presented, resulting in a second analysis that underlines the advantages and disadvantages of our prototyping system with respect to them. Finally, different evaluations have been conducted with users in order to test the vibrotactile devices created with Vitaki when carrying out different tasks, such as 2D texture identification (chapter 6), 3D object recognition (chapter 7), and even
detection of the size and weight of virtual objects (chapter 8). In all these latter evaluations, the users could carry out the tasks properly and their experience was satisfactory. The following sections focus on the first two analyses; the user evaluations are addressed in depth in the following chapters.
4.4.1 Quality assessment using Olsen’s criteria
It has been claimed by several authors that usability evaluations are not always the best way to evaluate a new system [125], and that they can even be considered harmful under some circumstances [48]. The evaluation of complex systems such as architectures, User Interface toolkits or software systems for creating interactive applications involving new off-the-desktop devices is a difficult task that should not be carried out through simple usability testing, but only after a first evaluation of their utility and effectiveness. Therefore, given the innovative nature of the proposal presented in this chapter, we conduct a study of its utility following the ideas proposed by Olsen [125].
Olsen defines three key assumptions on which many usability experiments are built, and states that these kinds of complex systems rarely meet any of these premises. These key assumptions are: walk up and use, standardized task, and scale of the problem. As we argue in the following paragraphs, the system proposed in this chapter does not seem to meet them either.
Thus, walk up and use assumes that any user with the required
background should be able to use the system. As the potential
users of our toolkit are VR and videogame developers from studios and laboratories, and more precisely haptic designers and
researchers, a substantial specialized expertise is required, so it
would be difficult to find enough users to carry out a usability
evaluation.
The second key assumption, standardized task, means that in order to make comparisons between systems or subjects, the task to evaluate must be reasonably similar in both systems, or the differences between subjects should have the least possible impact on the task. As regards our toolkit, one of the main difficulties would be obtaining another system to compare against, since that would involve not only the software, but also the associated hardware. Also, the way vibrotactile patterns can be composed using the different vibrators makes the task of designing
a haptic stimulus a creative one, which can be accomplished in different ways. For example, individual differences in skill and inspiration may lead to downplaying the measurement of task completion time, a typical variable in usability evaluations.
The third and last assumption is scale of the problem, related to the economics of usability evaluation, which tries to keep the testing as short as possible. In order to test the haptic stimuli designed with the toolkit described in this chapter, a more complex evaluation should be carried out involving not only the haptic designers that use the software to tune the vibrotactile output, but also the end users of the haptic device in the target application, which would undoubtedly lead to a very long evaluation.
As a consequence, following Olsen’s approach, it does not seem appropriate to start the evaluation of this innovative proposal with a usability evaluation. As an alternative, Olsen defines a framework for evaluating the quality of a system innovation by attending to what he calls Situations, Tasks, and Users (STU). These concepts can be defined in the context of this chapter. Thus, Users are the developers that want to include some vibrotactile feedback in their applications but are not experts in the design of haptic stimuli; the Task is the design of haptic stimuli that will be transmitted using a vibrotactile device; and the two Situations that are initially considered are the creation of high-fidelity stimuli –stimuli that try to replicate the sense of touch of real objects– and the creation of low-fidelity stimuli –stimuli that are used to code signals that are not found in the real world–. These two situations correspond to the two examples of use of Vitaki previously described in section 4.3, the object fall detection (high-fidelity) and the vibrotactile Morse encoder (low-fidelity).
The following subsections analyse the claims of innovative systems identified by Olsen and evaluate our proposal based on them. These claims are Importance, Problem not previously solved, Generality, Reduce solution viscosity, Empowering new design participants, Power in combination, and Can it scale up?
4.4.1.1 Importance
In order to analyse this claim, the focus is set on studying the
importance of the population of potential users of the tool. In
this case, by incorporating not only a software tool for designing
haptic stimuli but also a hardware tool, the proposal is aimed at a wider audience than other similar proposals, because it can be used by less experienced people. In our case, the system designers just need to decide how many actuators they want to use and where to place them. The placement of the actuators is simple because a series of magnets allows the user to attach them easily. Moreover, the interface allows the user to load an image of the actual haptic device, which facilitates the design and the understanding of the effect it can produce.
4.4.1.2 Problem not previously solved
Although there are some similar proposals, the design of haptic signals can still be considered an unsolved problem. The STU of this chapter's proposal is wider, since it also incorporates a prototyping hardware tool and facilitates the creation of new haptic applications by people unfamiliar with this type of haptic technology. In turn, our proposal presents important advances in the design of haptic displays.
4.4.1.3 Generality
As stated in the previous section, the STU of this proposal is more complete than previous ones, since it is possible to provide the whole solution to a problem (hardware and software). At the same time, it is possible to address different kinds of problems, from the design of stimuli that feel natural or try to replicate real ones –like the object fall detection example– to the design of others, less natural, that make use of the haptic sense to transmit codes or events that the user has to interpret –like the vibrotactile Morse encoder example–. Another feature that supports the generality claim is that the toolkit has been designed to work with a wide variety of actuators, since the hardware supports high-power vibrotactile motors, and their voltage specifications can be adjusted using the software tool.
4.4.1.4 Reduce solution viscosity
This characteristic tries to analyse the effort required to iterate
on many possible solutions. Olsen defines three ways of reducing solution viscosity: flexibility, expressive leverage, and expressive match.
Flexibility means that it is possible to make rapid design changes
that can be evaluated by users. Therefore, in order to make the
design-and-test cycle as dynamic as possible, it is important that
the tool can facilitate the design of the complete solution, not
just the output signal, but also the hardware implementation.
Apart from the already described aids to the placement of the
actuators, the software tool allows both the design of a signal
and its correspondence to one or a set of actuators in a simple
way. It is also possible to test the stimuli on the device through the software tool, making it possible to feel the designed sensation before it is included in the VR application or videogame. This approach makes the design cycle in the early stages of the creation of a haptic device very agile, enabling the efficient exploration of alternatives.
Expressive leverage is achieved when a designer can accomplish more by expressing less. As Olsen states, it is best achieved by reusing previous design choices. As is common in similar approaches, the tool described in this chapter allows the designer to store and reuse previously designed haptic stimuli.
The last characteristic associated with viscosity is expressive match, defined as an estimation of how close the means for expressing design choices are to the problem being solved. The haptic signals that designers cope with can be too complex to be handled directly. Therefore, the interface includes some default patterns which define basic stimuli that can guide the haptic designer, making it easier to create new stimuli based on them.
4.4.1.5 Empowering new design participants
This claim addresses whether the tool makes it possible for other participants in the design process to benefit from its use, facilitating their involvement in the design tasks. A new population of participants that could benefit from the existence of this tool is ergonomics designers, who are not usually involved because the design of haptic systems is a complex task, and it is even difficult to find designers with enough technical knowledge. Therefore, the availability of tools that involve them in the design tasks is valuable, since it would lead to better and more useful devices. In any case, in order to assess whether these new users benefit from the use of this tool, it is advisable to carry out experiments that analyse their use of the tool.
This assessment is left for future work, in which we plan to study the tool in more detail in order to draw conclusions about its usability with different groups of users.
4.4.1.6 Power in combination
A way to demonstrate the effectiveness of a tool is by supporting combinations of basic building blocks, or in other words, allowing the combination of pieces of design to create more complex ones. Olsen proposes to study this claim focusing on three characteristics: inductive combination, simplifying interconnection, and ease of combination.
The idea behind the first one, inductive combination, is that there should be a set of design primitives and some mechanisms to combine them in order to create more complex designs. The tool presented in this chapter offers the designer the possibility of creating complex signals based on simpler ones. In the Morse code example, any letter can be coded using the basic vibration pulses (dot and dash). Olsen also states that it is valuable for the tool to allow the creation of new primitives. The tool proposed in this chapter facilitates the creation of new primitives that can be stored and later used by the designer in the creation of more complex ones. This is achieved by specifying point coordinates or by dragging the points with the mouse.
The second characteristic is simplifying interconnection, in other words, keeping the communication between the components that compose an integrated solution as simple as possible, or reducing the number of interconnections between pieces. The creation of new vibration patterns with the proposed tool is easy, since it does not depend on which application is going to use them. Similarly, a new application that needs haptic support only has to use the provided API to access all the patterns previously created with the tool.
Finally, it is proposed to analyse the ease of combination, which refers to the simplicity and robustness of the interconnections. Similar to the simplifying interconnection case, the API provides this ease of combination, since the creation of a new haptic application that wants to use the previously designed stimuli is straightforward: it just needs to use the API.
4.4.1.7 Can it scale up?
The last claim proposed by Olsen is scalability: the ability to apply the solution to complex and large problems. Here we describe the features of the proposal that allow it to face the problem of scalability. First, the hardware tool can be daisy-chained thanks to its scalable architecture, so it is possible to connect virtually any number of actuators, enabling its application not only to toy problems but also to highly demanding ones. Moreover, each stimulus attempts to produce a certain sensation, and thus a project is likely to incorporate many different sensations. For this reason, the tool structures them in libraries, facilitating the design of each stimulus and their subsequent integration into the final application.
In any case, it is still necessary to validate the use of the tool in large problems in order to test its scalability, which is one of the objectives that we intend to address in the future. It is also worth pointing out that, apart from the examples of use included in this chapter, the tool has been used to design more complex systems that have proven its usefulness in many different situations. Specifically, it has been used to analyse the use of vibrotactile technology to identify 2D textures (chapter 6), 3D objects (chapter 7), and even the size and weight of virtual objects (chapter 8). In all these cases, the tool facilitated the rapid creation of the different prototypes, and the users that took part confirmed the usefulness of the new devices. These users showed a high degree of satisfaction with the haptic devices and the stimuli designed with the tool.
4.4.2 Comparison with state-of-the-art tools
Different tools can be found to design vibrotactile stimuli. Some of them have been used with custom hardware. The Hapticon editor by Enriquez and MacLean [35], later improved as the Haptic Icon Prototyper by Swindells et al. [161], consists of a waveform editor that supports a motor-actuated knob to provide force feedback. The editor allows the direct modification of the patterns, as well as their concatenation to compose more complex signals. Another tool that includes its own hardware is Techtile [115], which is composed of a microphone to record sounds and tactile transducers based on solenoids. This solution, oriented to education, tries to take advantage of the similarities
between sounds and vibrations, converting the recorded sounds into tactile sensations. Although this is a good approach to introducing the concept of haptics, the adjustment of the vibrotactile signals requires a sound editor, and the mapping to a vibrotactile signal is not that obvious. The Tactile Editor [71] supports the Arduino and Make platforms; nevertheless, the general-purpose hardware described only supports 6 channels, without the advanced control techniques. Furthermore, its editor is based on the On/Off metaphor, turning the motors on and off for different durations and intensities. This naïve view prevents the use of complex vibrotactile patterns, and hence it lacks expressive power.
The use of metaphors to create tactile content is not new. VibScoreEditor [144] uses a musical metaphor, representing the haptic channels as a score. The notes represent the duration of the stimulus and their vertical location affects the vibration pitch, while the strength is displayed with an integer. This notation is not adequate for unaccustomed users, and it is difficult to read strength and duration simultaneously. Furthermore, it does not support the use of multiple actuators. TactiPEd [130] is based on the graphic metaphor of the device shape. File templates define the location of the actuators on the device, and this is used to adjust tactile features like the amplitude, frequency and sequence duration. Multiple actuators are supported through the use of multiple timeline channels. The limitations of this tool include the impossibility of editing new devices inside the application, of selecting more than one actuator for the same channel, or of freely editing the vibrotactile patterns. Therefore, it is only possible to represent vibrotactile patterns based on different frequencies or intensities. Demonstration is also a metaphor used in authoring tools. Hong et al. [59] propose the generation of the vibrotactile pattern based on the user's gesture on a touch screen. The duration of the pattern corresponds to the duration of the gesture, the strength is related to the pressure on the touch screen, and the frequency depends on the vertical position. One of the problems is the low precision of touch screens in detecting pressure. In addition, patterns cannot be edited, so the whole pattern needs to be registered again. Furthermore, the use of multiple actuators is not supported by the tool.
In order to support multiple actuator configurations with a
high spatial density, Cuartielles et al. [25] suggest the use of a
touch screen that shows an iconography of the device. Thus,
the user can design tactile gestures by drawing them on the screen. This approach, even though it is interesting for these specific configurations, has a limited range of editable patterns.
Other authoring tools are based on free waveform editing and composition on a timeline, following the same principles as our proposal. The tool posVibEditor [143] is based on this approach, supporting several actuators as well. Its main contribution is perceptually transparent rendering, using a previously calculated psychophysical function. On the commercial side, the company Immersion offers the tool Haptic Studio [3]. It supports LRA and ERM actuators, as well as the composition of patterns in a timeline, although it has some limitations. Complex patterns, defined as waveforms, can only be imported from an audio file, and their modification is not possible. Furthermore, each channel in the timeline only accepts one type of pattern, so it is necessary to add as many channels as different patterns and actuators are desired.
Table 4.1 summarizes the main functionalities of the most representative haptic authoring tools, with the addition of our proposal.
feature                               posvibeditor  tactiped  haptic studio  tactile editor  vitaki
Complex waveform support              yes           no        yes            no              yes
Built-in waveform editor              yes           no        no             no              yes
Visual representation of the device   no            yes       no             no              yes
Support for multiple actuators        yes           yes       yes            yes             yes
Multiple actuators per channel        no            no        no             no              yes
Actuator voltage limits adjustment    no            no        no             no              yes
Vol. adjustment p. channel/general    no/no         yes/no    no/no          no/no           yes/yes
Perceptually transparent vibration    yes           no        no             no              no
Overdrive / Brake support             no/no         no/no     no/no          no/no           yes/yes
API library                           no            no        yes            no              yes
Actuators supported                   ERM           ERM       ERM/LRA        ERM             ERM/LRA
Prototyping-oriented hardware         no            no        no             yes             yes
Scalable hardware                     no            no        no             no              yes

Table 4.1: Summary of the functionalities of the authoring tools PosVibEditor [143], TactiPEd [130], Immersion Haptic Studio [3], Tactile Editor [71] and Vitaki.
4.5 conclusion
In this chapter, a vibrotactile authoring tool for ERM actuators is presented. This tool facilitates the specification of complex multi-channel vibratory stimuli associated with specific events. To do that, a graphical interface can be used to define different stimuli related to each actuator of the device, making the creation of haptic stimuli easier for the designer. This software presents some novel features, like the ability to work with overdrive and negative ranges, thus extending the possibilities for driving a vibrator. Other relevant features are the capability to compensate for the perceived intensity of vibration between different actuators or users, an API that allows external applications to use the created haptic patterns, and the dynamic adjustment to the voltage limits of the actuators used.
Moreover, to demonstrate the capabilities of this toolkit, it has been used to design different applications. The first one is related to the use of vibrotactile feedback to transmit encoded information. This example shows how the designer can create basic vibrotactile waveforms and reuse them, composing elaborate stimuli which are used to play Morse code on a haptic device. The second one presents a stimulus designed to transmit to users the sensation that they are accidentally dropping the object they are holding, illustrating other relevant features of the application that allow the designer to create different stimuli for each actuator for a specific event. These are only two examples of the enormous possibilities of this toolkit.
In addition, two different approaches have been followed in order to show the validity of our prototyping toolkit. On the one hand, the toolkit has been evaluated following the approach proposed by Olsen [125], which aims to show the effectiveness and utility of a new system when usability evaluations are not appropriate. On the other hand, a comparison with the state of the art in software prototyping tools has been presented. The analysis of the seven claims identified by Olsen and the comparison with previous work show that important progress has been made with our proposal.
Part III
CASE STUDIES
EXPERIMENTS FOR THE EVALUATION OF THE PLATFORM
This chapter introduces the experiments conducted to evaluate
the vibrotactile platform. First, the different aspects of the sense
of touch to be covered by the experiments are discussed. Then,
common implementation details are described.
5.1 introduction
One of the main objectives of the Vitaki vibrotactile toolkit is to provide tactile feedback in a VE. To this end, different properties of the objects must be simulated through haptic rendering algorithms.
Lederman and Klatzky [94] defined a set of basic exploratory procedures performed during object recognition tasks, which are depicted in Figure 5.1. These exploratory procedures have been used to define a series of experiments which will be detailed in the following chapters. In particular, the simulated object properties include: texture, 2D shape, 3D shape, weight and size.
Figure 5.1: Exploratory procedures identified by Lederman and
Klatzky [94] used to define the experiments.
5.2 system architecture
The use of a haptic device within a VE requires the integration of multiple modules. A general overview of the architecture used can be seen in Figure 5.2, which is based on the one explained in Chapter 3.
The tracking system module captures the position of the optical markers attached to the user. If the application needs a 3D model of the user's hand, this information is used by the inverse kinematics algorithm to calculate a virtual skeleton. The graphic engine updates its internal structures with the information from the tracking system and, in parallel, a collision algorithm checks for intersections with the objects of the scene. As a result of the collisions detected, the haptic stimuli are calculated, and this information is finally transferred to the haptic driver, which delivers it to the device.
The visual rendering module is powered by the Ogre3D1 rendering engine, and it depicts the user's hand and the virtual objects on the screen. Visual and haptic rendering loops are decoupled due to the higher update requirements of the haptic channel, running at 60 and 480 Hz respectively.
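The decoupled loops can be sketched as follows. The thread-based structure, names, and placeholder loop bodies are assumptions; only the 60 and 480 Hz rates come from the text.

```python
import threading
import time

# Minimal sketch: visual and haptic loops run on separate threads at
# different rates, sharing the latest tracking state. The loop bodies are
# placeholders for the real rendering work.

latest_pose = {"hand": None}          # written by the tracking module
stop = threading.Event()

def render_loop(rate_hz, step, counter):
    period = 1.0 / rate_hz
    while not stop.is_set():
        step(latest_pose)             # visual draw or haptic update
        counter[0] += 1
        time.sleep(period)

visual_count, haptic_count = [0], [0]
threads = [
    threading.Thread(target=render_loop, args=(60, lambda p: None, visual_count)),
    threading.Thread(target=render_loop, args=(480, lambda p: None, haptic_count)),
]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
# the haptic loop completes several times more iterations than the visual one
```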
Figure 5.2: Architecture of the system used in the experiments.
1 http://www.ogre3d.org/
5.3 tracking system
The application needs to know the position and orientation of the user's hand in space in order to calculate the haptic feedback that has to be transmitted. To this end, a PhaseSpace Impulse2 optical tracking system has been used. This system runs at 480 Hz, and it is composed of a constellation of cameras and active LED markers, each of which encodes a unique identifier.
Although a marker-based constellation of cameras has been used, different options are emerging on the market which could be used in a domestic environment. For example, depth cameras can be used to extract hand gestures, as shown by Wen et al. [185]. Indeed, the 3GearSystems3 SDK uses one or two depth cameras to perform arbitrary tracking of 10 fingers. The Leap Motion4 controller is also a good alternative, although its workspace is smaller. A different approach, which we are currently evaluating, is the use of several Nintendo Wiimote peripherals to track infrared markers as a low-cost optical tracking system.
Two LED configurations have been used to carry out the experiments: one for two-dimensional tasks (texture and 2D shape recognition) and another for three-dimensional interaction (3D shape, weight, and size discrimination).
5.3.1 2D Configuration
For this configuration, just one position in space is needed in order to render a texture or a 2D shape on one actuator. One option is to use the traditional mouse; however, the use of an optical tracking system allows a more natural interaction. To this end, one LED attached to a glove on top of the index finger was used, as can be seen in Figure 5.3.
5.3.2 3D Configuration
The interaction in 3D space requires the computation of a virtual
hand model, not only for the visual representation, but for the
haptic rendering as well. A total of 9 LEDs have been attached to
a nylon glove (see Figure 5.4b). Three markers situated on top of
2 http://www.phasespace.com/
3 http://www.threegear.com/
4 https://www.leapmotion.com/
Figure 5.3: Configuration of the optical markers for 2D interaction.
the hand are used to calculate the global orientation of the hand. In addition, one marker is situated at the end of each finger, and one more on the thumb, due to its extra degree of freedom. As not every articulation of the hand has a marker, an inverse kinematics algorithm is used to calculate the virtual skeleton. The distribution of the LEDs and the calculated skeleton can be seen in Figure 5.4.
(a) Skeleton model calculated by inverse kinematics.
(b) LED markers attached to the
glove.
Figure 5.4: Configuration of the optical markers for 3D interaction.
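The role of the three back-of-hand markers can be illustrated with a small sketch (not the dissertation's code): two edge vectors between the markers span the hand plane, and their cross product gives a normal, from which an orientation frame follows.

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def hand_frame(m0, m1, m2):
    """Return (x, y, z) orthonormal axes of the hand from 3 marker positions."""
    x = normalize(tuple(b - a for a, b in zip(m0, m1)))
    edge = tuple(b - a for a, b in zip(m0, m2))
    z = normalize(cross(x, edge))        # normal to the back of the hand
    y = cross(z, x)                      # completes the right-handed frame
    return x, y, z

x, y, z = hand_frame((0, 0, 0), (1, 0, 0), (0, 1, 0))
# → x = (1.0, 0.0, 0.0), y = (0.0, 1.0, 0.0), z = (0.0, 0.0, 1.0)
```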
5.4 actuator arrangement
The location of the actuators is different for each scenario. The
two configurations used are described next.
5.4.1 2D Configuration
As the exploration of textures and 2D shapes is performed with
the index finger, only one actuator attached to the index fingertip
region of a glove is required, as seen in Figure 5.3.
5.4.2 3D Configuration
Recognition of 3D objects is a demanding task that requires multiple actuators distributed along the surface of the hand. The
density of the array is limited by the propagation of vibrations
along the surface of the fabric and the mobility of the fingers.
Different configurations were tested, and finally the arrangement described in Figure 5.5b was chosen. All the actuators
are attached to the outer part of the fabric except the one located
on the center of the palm, due to the concavity of this region of
the hand.
The calculation of the position of each actuator in space, which
is needed by the haptic rendering algorithm, is performed by
associating each actuator with a specific node of the virtual skeleton (see
Section 5.3.2). Thus, when the inverse kinematics algorithm updates the position of each joint of the skeleton, the positions of
the actuators are updated accordingly. In addition, each actuator
has one or more associated collision points, one of which corresponds to the location of the actuator.
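This per-node update can be sketched as follows; the function and argument names are illustrative, not Vitaki's actual API:

```python
import numpy as np

def actuator_world_position(joint_position, joint_rotation, relative_offset):
    """World-space position of an actuator attached to a skeleton node.

    joint_position: 3-vector of the parent joint, updated by the
    inverse kinematics step; joint_rotation: the joint's 3x3 world
    rotation matrix; relative_offset: actuator position relative to
    the joint, read from the configuration profile.
    """
    return joint_position + joint_rotation @ np.asarray(relative_offset)
```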
The configuration is defined in an XML file and loaded by the
application. It includes the number of actuators, their parent
node of the skeleton, their position relative to the node, and
the associated collision points. This allows an effective profile
management for each application.
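A profile of this kind could be loaded with a sketch like the following; the element and attribute names are invented for illustration, since the actual schema is not given in the text:

```python
import xml.etree.ElementTree as ET

def load_actuator_profile(xml_text):
    """Parse a (hypothetical) actuator profile into a list of dicts."""
    root = ET.fromstring(xml_text)
    actuators = []
    for node in root.findall("actuator"):
        actuators.append({
            "id": int(node.get("id")),
            "parent_node": node.get("parent"),      # skeleton node name
            "offset": [float(v) for v in node.get("offset").split()],
            "collision_points": [
                [float(v) for v in cp.get("offset").split()]
                for cp in node.findall("collision_point")
            ],
        })
    return actuators

# invented example profile (the real schema is not documented here)
example = """
<profile>
  <actuator id="0" parent="index_tip" offset="0 0.01 0">
    <collision_point offset="0 0.01 0"/>
    <collision_point offset="0.005 0.01 0"/>
  </actuator>
</profile>
"""
```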
5.5 collision detection
This section describes the collision detection and haptic rendering algorithms used to render textures, 2D shapes and 3D geometric figures.
(a) Virtual skeleton with optical
markers and actuators.
(b) Actuators attached to the glove.
Figure 5.5: Configuration of the actuators for 3D interaction.
Given the position, orientation and gesture of
the user's hand, the application needs to calculate the vibration
intensity of each actuator depending on the collision with the
virtual object. Furthermore, this task has to be executed ideally
up to a thousand times per second to give the user a realistic
sensation and avoid instabilities in force feedback devices. In
our case, the haptic loop runs at 480 Hz, limited by the tracking
system. This is in practice a good refresh rate for a vibrotactile
device, and little improvement can be obtained by increasing it,
due to the inertial nature of the actuators and the open-loop
algorithm used to control tactile feedback.
Collisions are calculated from one discrete point to the object
surface, and they measure the depth of penetration. Different
methods are used for 2D and 3D scenarios.
5.5.1 2D Collisions
Textures and 2D shapes are stored as a greyscale bitmap in memory. Given the position of the cursor (which is mapped to the location of the vibrator), a simple 2D raycasting algorithm is used
to return the value of the pixel at that position. Black is mapped
to value 255, while white is 0.
5.5.2 3D Collisions
When complex objects are used, the computation of collisions
needs to be accelerated using techniques like space subdivision
and/or voxelization so they can be performed in real time. However, the use of geometric objects has the advantage of performing this
task in constant time and with very high precision by using
mathematical expressions.
5.5.2.1 Mathematical Definitions
Given a point in space with coordinates (x, y, z) and an object
centered in the reference coordinate system, the following expressions calculate the depth of the point inside the object (positive value) or the distance to it (negative value).

D_sphere(x, y, z) = r − √(x² + y² + z²)

D_cube(x, y, z) = s − MAX(|x|, |y|, |z|)

D_cylinder(x, y, z) = MIN(r − √(x² + z²), h/2 − |y|)

D_cone(x, y, z) = MIN(r(0.5 − y/h) − √(x² + z²), h/2 − |y|)

where
r  radius of the sphere, cylinder and cone
s  side of the cube
h  height of the cylinder and cone
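These expressions transcribe directly into code; a minimal Python sketch:

```python
import math

def d_sphere(x, y, z, r):
    # positive inside the object, negative outside
    return r - math.sqrt(x * x + y * y + z * z)

def d_cube(x, y, z, s):
    return s - max(abs(x), abs(y), abs(z))

def d_cylinder(x, y, z, r, h):
    return min(r - math.hypot(x, z), h / 2 - abs(y))

def d_cone(x, y, z, r, h):
    # the radius shrinks linearly from r at the base (y = -h/2)
    # to 0 at the apex (y = +h/2)
    return min(r * (0.5 - y / h) - math.hypot(x, z), h / 2 - abs(y))
```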
5.6 haptic rendering
Haptic rendering is the process of computing and generating
forces or tactile sensations in response to user interactions with
virtual objects [145]. The following sections describe the methods used for haptic rendering of textures, 2D shapes and 3D
objects for the vibrotactile glove created with Vitaki.
5.6.1 Vibrotactile Rendering of Textures and 2D shapes
The 2D position of the finger on the plane is virtually represented by a 5x5 pixel cursor that moves in a 300x270 window,
which is the size of the textures and shapes used. The vibration at each moment is given by the average gray level of the
25 pixels that lie under the cursor, as obtained by the collision
detection algorithm. The maximum intensity of the vibration is
set experimentally to avoid annoying the user.
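A sketch of this rendering step in Python; the array layout, the clamping at the borders and the intensity cap are assumptions for illustration:

```python
import numpy as np

def vibration_intensity(texture, cx, cy, max_intensity=0.8):
    """Average gray level of the 5x5 pixel cursor centered at (cx, cy).

    `texture` is a 2D uint8 array where 255 is black (full vibration)
    and 0 is white (no vibration), as in the collision detection step.
    """
    h, w = texture.shape
    # clamp the 5x5 window to the texture borders
    x0, x1 = max(cx - 2, 0), min(cx + 3, w)
    y0, y1 = max(cy - 2, 0), min(cy + 3, h)
    mean_gray = texture[y0:y1, x0:x1].mean()
    # scale to [0, max_intensity]; the cap avoids annoying the user
    return (mean_gray / 255.0) * max_intensity
```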
5.6.2 Vibrotactile Rendering of 3D objects
Each vibrotactile actuator has several collision points mapped to
the virtual representation of the hand and distributed around it,
as described in Section 5.4.2. Once the collision algorithm calculates the penetration depth for each collision point, the maximum level of penetration of the associated actuator is calculated
as follows:
P = MAX_{0 ≤ i ≤ N} (depth_i · weight_i)
The weight value is associated with each collision point depending on its distance to the actuator, and ranges between 0 and 1.
The use of several collision points for each actuator allows a soft
transition of the activation of two consecutive actuators, creating
a feeling of continuity despite the low resolution of the array.
The calculation of the vibration intensity follows Hooke's
law, making it proportional to the level of penetration into
the virtual object:
I = k · P
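Combining the two formulas, the per-actuator intensity can be sketched as follows; the stiffness constant k and the clamping range are illustrative assumptions:

```python
def actuator_intensity(depths, weights, k=1.0, max_intensity=1.0):
    """Intensity of one actuator from its collision points.

    P = max_i(depth_i * weight_i); I = k * P (Hooke's law),
    clamped to zero when there is no contact and to the device range.
    """
    p = max(d * w for d, w in zip(depths, weights))
    return min(max(k * p, 0.0), max_intensity)
```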
Different approaches were tested to haptically render the
shape of the virtual forms. An improvement was detected in
the success rate when hollow objects were used, as this helped the
users to follow the contour. Therefore, we defined a region of
two centimeters as the object wall. The vibration is produced
when a collision point associated with a vibrator is within this region, as shown in Figure 5.6. Thus, when the hand of the user is
out of the defined region, the vibration stops.
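The wall-region rule reduces to a small predicate; a sketch, using the two-centimeter wall defined above and metres as units (an assumption):

```python
def in_wall_region(depth, wall=0.02):
    """True when a collision point lies inside the object wall.

    `depth` is the signed penetration returned by the collision test
    (positive inside the object). Vibration is produced only while the
    point is between the surface and `wall` metres inside it.
    """
    return 0.0 <= depth <= wall
```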
Finally, when the user touches the object surface, a vibration
pulse is generated to simulate the impact against the surface.
This pulse is proportional to the speed and the angle of impact.
Figure 5.6: Graphical representation of the wall region of one of the objects. Vibrations are only produced when a collision point
is inside this region.
The inclusion of this technique is important in the interaction
since it not only adds a degree of realism, but also makes it
easier for the user to differentiate which part of the hand collides
with the object. Thus, when a flat surface is touched with the
open hand, the user feels all the vibrators starting at the same
time. On the contrary, a curved surface would activate different
actuators in sequence.
6 SHAPE AND TEXTURE RECOGNITION
This chapter presents a user evaluation to test the vibrotactile
platform Vitaki in 2D shape and texture recognition tasks. Furthermore, vibrotactile feedback is compared with force and real
tactile feedback. After reviewing the related work in the first
section, the description of the three feedback methods follows.
The experiment design is then described, and finally results are
presented in the last section.
6.1 related work
As stated by Robles-De-La-Torre [138], haptic feedback is of vital
importance in manipulative and exploration tasks of daily
life. When stroking a surface with a finger, we experience a
sensation that gives us an idea of its properties. One of the
tasks that have been carried out to verify the effectiveness of this
technology is the identification of materials and textures.
Force feedback is a type of haptic feedback that conveys information to a user by the generation of forces on a mechanical
interface which the user can move. Minsky et al. [116] used a
joystick to experiment with this type of feedback by displaying textures, using a depth map texture where the high reliefs repelled
the handle and the grooves attracted it. Some other authors have
attempted to optimize the response of these systems when used
to distinguish different materials, either by refining the control
algorithm, like Kuchenbecker et al. [89] or by creating and reproducing parametric models of the resulting vibrations, as described by Okamura et al. [124]. Nevertheless, as explained by
these authors, the bandwidth of many haptic devices is limited
and it is hard to perfectly replicate the measured vibrations.
Vibrotactile feedback is a kind of tactile feedback that uses
vibrations to transmit sensations through the skin which, as described by Hollins et al. [58], play an essential role in the way
that different textures are detected. Kontarinis and Howe investigated the use of vibrotactile displays to transmit vibrations in
telemanipulation and virtual environments. They argued that
there are tasks in which the detection of vibrations may be the
main goal, while in others they can increase the performance, either reducing the response times or minimizing the forces used.
The device used in that study was composed of small modified
speakers, also used by Wellman and Howe [183] to perceive the
stiffness of a material by tapping it, using similar models as the
ones used by Okamura et al. [124].
In a similar experiment, but using telemanipulation, Gurari
et al. [50] used vibrotactile tactors to compare the feedback on
the fingertip, forearm and foot when trying to discriminate materials of different stiffness. Allerkamp et al. [7] used a rendering
strategy based on vibrations through vibrotactile arrays to simulate the sensations of touching textiles.
ERM actuators have also been present in the context of texture
identification. Kyung et al. [93] conducted an experiment comparing force, tactile and vibrotactile feedback technologies. For
this, they created a pen-shaped device joined to a force feedback
system. They integrated tactile feedback through an array of
pins, and vibrotactile feedback by means of a vibration motor. It
was considered of interest to follow the basis of this experiment
in order to have a reference against which to compare the results
of our platform. However, some changes have been introduced.
The ERM actuator has been located directly on the fingertip, one
of the most sensitive areas of the body (Johansson [68]), integrating it into a glove. The control algorithm of this vibrator has been
optimized to reduce its latency, as described in Chapter 3, and
one of the problems encountered by these investigators in the
use of a force feedback device, which may have affected their
results, has been addressed. Finally, some paper patterns have
been introduced as a method of tactile feedback, so that a real
model can be considered in the analysis of the results.
6.2 description of the haptic feedback methods
The development of the experiment followed the methodology
described by Gabbard et al. [41], which provides an effective
process to achieve usable VR systems. In a first stage, a heuristic
evaluation was performed, where the developer conducted tests to
adjust the parameters and check the system. A second stage
was used to conduct a formative user-centered evaluation with
only three participants and to fine tune each method according
to their initial assessments. Finally, a third stage was executed
with a comparative evaluation. The experimental design and
results discussed correspond to this last comparative evaluation.
6.2.1 Stimuli
The comparison of the three haptic feedback methods was performed using a discrimination task involving the identification
of 2D shapes and textures. These are referred to as Group 1 (G1)
and Group 2 (G2) of patterns, respectively. In this experiment,
textures are understood as varying grating patterns, while shapes
are composed of geometric figures. The former are characterized
by changes in one dimension, so the exploration movements are
rectilinear. The latter are bidimensional, so they require an exploration in the plane.
In order to compare them and extract conclusions, two of the
three groups of patterns used by Kyung et al. [93] were tested.
The third one was discarded to prevent the subjects from becoming
fatigued during the experiment.
Regarding the first group, which can be seen in Figure 6.1a,
each sample is composed of four times the same geometric shape.
In the second group, each sample is formed by horizontal lines
with different spacing between them, as depicted in Figure 6.1b.
The discarded group consisted of lines with different orientations.
The resolution of the pattern images is 300x270 pixels, the
same as in Kyung’s experiment. However, their real size was
increased from 60x54mm to 100x90mm due to the accuracy of
the tracking system used in the dataglove. A PhaseSpace optical tracking system was used in the vibrotactile feedback setup,
with a resolution of 0.5 mm, an order of magnitude smaller than
the one provided by the Phantom Omni system used by Kyung
et al.
The conversion of these images into tangible textures followed
this convention: the black areas are 1 mm deeper than the white
ones. In order to distinguish them, the user must perform scanning movements with each of the haptic approaches considered.
(a) Group 1, geometric forms.
(b) Group 2, unidimensional textures.
Figure 6.1: Shapes and textures used in the experiment.
6.2.2 Force Feedback
In the force feedback field, Sensable’s Phantom haptic devices
are likely to be the most widely used ones [5]. The different
devices of this product line differ mainly in the work space, the
force that they are able to apply and their accuracy. In this case,
a Phantom Premium 1.0A 3 Degrees Of Freedom (DOF) model
has been used, with a resolution of 0.03 mm, a maximum
force of 8.5 N and a workspace of 254x178x127 mm. Figure 6.2 shows a participant using this device.
With this method, textures were represented in a virtual box
that has the pattern embossed on its upper side (grey in the
picture). A square border has been added to delimit the exercise area. This correspondence was performed using a depth
map in which the grey level of the image determined the relative
displacement, with black areas about 1 mm deeper than white
areas. The relative stiffness was set to 0.70, which is equivalent
to approximately 2.4 kN/m, since higher values produced some
instability in the haptic rendering algorithm. Both dynamic and
static friction were set to zero in order to allow the user to explore the surface softly.
The implementation used the scene graph library H3DAPI
along with its haptic rendering library [2], which is open source,
cross platform and device independent. More specifically, an
x3d model was created for each pattern, adapting the DepthMapSurface example included in the library itself. The Ruspini algorithm was selected for the haptic rendering.
One of the problems identified by Kyung et al. in their force
feedback tests was the phenomenon whereby the cursor used to
get stuck in the deeper parts of the texture, making it difficult
to scan. In order to avoid this, the edges of the patterns were
smoothed by a gradient, so that changing from one area of the
texture to another did not involve crossing abrupt steps, but progressive ramps.
Figure 6.2: User performing one of the tests with the force feedback
device (Phantom Premium A).
6.2.3 Vibrotactile Feedback
The second method to be compared is a dataglove capable of
providing vibrotactile feedback, which was developed with the Vitaki toolkit. For this experiment, only one vibrator located on the
index finger was used, as detailed in Chapter 5. There were two
main reasons for that. First, the rest of the haptic feedback methods only make use of the index finger, so it seemed to be the
fairest way of comparing them. And second, the other vibrators,
located on the rest of the fingers, are too far away to assist in the detection of the small features that make the textures different. The
index finger tracking was performed by a PhaseSpace optical
tracking system, attaching one of the active LEDs to the glove,
just on top of the fingertip.
In order to identify a texture, the user must move his hand
on the surface of a table. The physical space occupied by the
virtual patterns was the same as in the previous case, 100x90
mm. The user was allowed to rest their hand on the table except
for the index finger, since its vibration could produce an audible
sound that could be used as audio feedback.
The 2D position of the finger on the plane was virtually represented by a 5x5 pixel cursor that moved in a 300x270 window,
as this is the size of the textures. The vibration at each moment
was given by the average gray level of the 25 pixels that lay under the cursor, with black being the maximum vibration level. The
maximum intensity of the vibration was set experimentally to
avoid annoying the user.
6.2.4 Direct Stimulation
A third method has been designed, called direct stimulation of
the finger, where the user moves his finger directly over the texture. The objective is to compare it with previous methods to
get an idea of the tracking system accuracy, and the quality of
tactile feedback.
The patterns are built using transparency paper (1 mm thick) with the same size as in previous cases, removing the black
areas to create zones of palpable depression. Each pattern is
pasted on a sheet of paper to ensure the stability of the thinner
areas. Figure 6.4 shows one of these textures in detail.
This method has a great advantage for the user compared to
the previous two approaches, as the tactile information is not
received in just one discrete point, but all over the surface of the
fingertip. It is also an ideal tactile feedback, because the latency
is zero, and the bandwidth and resolution are only limited by
the sensitivity of the skin.
To perform the experiment, the paper samples were placed on a
table beneath another, larger table that hid them from the user. The
user actively stroked the paper with the index fingertip of his
dominant hand. With his free hand, the user answered by pressing the corresponding key on a computer, while an operator
changed the patterns according to a pre-established random sequence, which will be detailed in the next section. Figure 6.3 shows
the test environment used.
Figure 6.3: Direct stimulation test environment used to carry out the
experiment. The patterns are placed under the table by an
operator while the user tries to identify them.
6.3 experiment design
This chapter describes an experiment divided into two stages. In
the first one [109], 12 different users participated, 4 women and
8 men, with a mean age of 26.7 years. It was centered on the study of
the user's behavior while using the different devices. After the
initial experiment, the results showed some tendencies worth
analyzing, which justified the development of the second
stage. Given that some gender related differences in the results
were detected, the sample was increased so that the same number of users of each gender participated in the evaluation. The
experiment was conducted again with 18 users, 9 women and 9
men. This second stage would provide new data to analyze and,
if relevant, corroborate the results of the first stage and advance
in the study of the newly detected behaviors, trying to determine
whether these deviations were statistically significant.
The users were requested to distinguish the patterns of each of
the two groups using the three aforementioned methods. After
each trial the users were informed about the correct answer. To
prevent the order in which the tests were performed from influencing the results, the sequence was
counterbalanced by the Latin square method [13].
Figure 6.4: Detail of one of the textures cut from transparency paper.
In addition, the number of participants was a
multiple of 6, so that each of the test sequences was performed
the same number of times.
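The text does not list the exact sequences; with three feedback methods, full counterbalancing uses all 3! = 6 orderings, which is consistent with the participant count being a multiple of 6. A sketch:

```python
from itertools import permutations

def counterbalanced_sequences(conditions):
    """Every ordering of the conditions; participants are assigned
    to sequences in rotation so each ordering is used equally often."""
    return list(permutations(conditions))

methods = ["force", "vibrotactile", "direct"]
sequences = counterbalanced_sequences(methods)
```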
For each method and group, the users had a brief period of
time to become familiar with the task, up to a maximum of 5
minutes. Then, each of the five patterns was shown in a random order a total of four times. The users were able to see the
cursor position on the screen, but no other hints that could reveal details about the current pattern.
After each test, the users were asked to rate how much time
they spent learning to detect the patterns, the difficulty of
distinguishing them once the test was more advanced, and
their opinions about the comfort of each device.
6.4 results and discussion
The performance of the different haptic methods is analyzed in
this section, using the values of the measures collected and the
information gathered from the questionnaires.
6.4.1 Time And Success Rate
Efficiency has been measured in terms of average duration of
each identification attempt, and is shown in Figure 6.5. Effectiveness is assessed taking into account the average percentage
of correct answers, represented in Figure 6.6.
Figure 6.5: Average duration in seconds of each trial. Bounded lines
represent the interval between the first and third quartile
of the samples.
For the first group of patterns, formed by geometric figures,
it appears that, as expected, the method of direct tactile stimulation
is the fastest and the one which provides the highest percentage of correct answers, due to the advantage of having the
entire surface of the fingertip to follow contours and to identify
shapes. In any case, the percentage of hits is very close to that
obtained by the force feedback method, because in this case the
device guides the user’s finger when the cursor passes over an
area of depression, helping him to follow the contour with little
cognitive effort. In the case of the dataglove, with one vibrator placed on the index fingertip, the stimulation is performed
in just one point and it does not allow accurate tracking of the
border. Therefore, in this case, the user is required to develop
a detection strategy different from the one naturally followed.
This new strategy requires the user to make a greater effort, resulting in a higher error rate and time consumption. This will be
studied in depth in Section 6.4.6. The Friedman
test (the non-parametric equivalent of repeated measures ANOVA) was used to examine whether the differences between the behavior of the three platforms were statistically significant. The test
results, χ²(2, N=18) = 29.939, p=0.000, show that the three haptic
methods are different. Post-hoc analysis with Wilcoxon Signed
Ranks test was conducted with a Bonferroni correction applied,
resulting in a significance level set at p<0.017. There were no
significant differences between direct tactile stimulation and force
feedback (Z=-1.563, p=0.118), but this was not the case between
vibrotactile stimulation and the rest (Z=-3.744, p=0.000).
Figure 6.6: Average percentage of correct answers for each group of
textures. Bounded lines represent the interval between the
first and third quartile of the samples.
This
corroborates the first impression that the extra effort required by
the vibrotactile method forces the user to make more mistakes
in the identification of shapes than using other methods.
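For reference, the Friedman statistic used above can be computed directly from per-subject ranks; a minimal sketch without tie correction (the actual analysis would use a statistics package):

```python
import numpy as np

def friedman_statistic(*samples):
    """Friedman chi-square for k related samples.

    Each subject's scores are ranked across the k conditions; then
    chi2 = 12 / (n k (k+1)) * sum_j(R_j^2) - 3 n (k+1),
    where R_j is the rank sum of condition j over the n subjects.
    Ties are broken arbitrarily (no tie correction).
    """
    data = np.column_stack(samples)       # shape (n subjects, k conditions)
    n, k = data.shape
    ranks = data.argsort(axis=1).argsort(axis=1) + 1
    r = ranks.sum(axis=0)                 # rank sum per condition
    return 12.0 / (n * k * (k + 1)) * float(np.sum(r ** 2)) - 3 * n * (k + 1)
```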
On the other hand, for the second set of textures (formed by
horizontal lines with different separations) the results are significantly different. In this case the most efficient method, in terms
of time and hit rate, is the vibrotactile feedback, even improving the method of direct stimulation, which can be thought to
obtain better results. To discriminate between different patterns,
the user typically scans the texture across the lines at constant
speed, trying to identify the timing or frequency of the marks.
That is why the vibration is appropriate, since the user perceives
clearly the necessary information. In contrast, when the user
swipes his finger across the paper textures, much more spatial
information is received that has to be discarded, so the effectiveness is not as good in terms of time and error rate. Finally, the
worst performance in this scenario is for the force feedback. In
this case, the separation between the lines is perceived mostly at
a kinaesthetic level, affecting the accuracy.
The same statistical analysis was conducted in this case. The
Friedman test reported a statistically significant difference χ²(2,
N=18) = 13.176, p=0.001. In this case, the Wilcoxon Signed Ranks
Test showed that, although vibrotactile feedback surpassed the
success rate of direct tactile stimulation, this was not statistically
significant (Z=-1.360, p=0.174). Furthermore, it can be said that
there is a difference between these two methods and the force
feedback one, which performs worse against the vibrotactile one
(Z=-2.702, p=0.007) and against the direct stimulation (Z=-2.970, p=0.003). These results showed that vibrotactile feedback can be as
good as direct stimulation to identify textures formed by varying
gratings.
6.4.2 Learning Curves
These results can also be reviewed from the learning curve
point of view. Figure 6.7a shows the average answer response
time and errors for each trial across the 20 trials in total. Figure 6.7b shows the average answer response time along the total
duration of the test, so the relative durations of each method
can be appreciated. In general terms, the time taken to identify each texture improves along each test, but this is more
noticeable for some groups of textures and methods. In group
1, both tactile and force feedback show a similar behavior, improving very fast in the first four or five answers. The case of
the vibrotactile feedback deserves a closer look.
Although the times are higher than in the aforementioned cases,
they also decrease considerably in the very first trials. However,
from trial 14 to 17 the time increases, breaking the normal learning curve. Once more, this may be due to various factors. One
possibility is the saturation of the sensibility of the fingertip, affected by the vibrations. However, the performance improves
afterward, so this does not seem to be the main reason. A different motive may be the fatigue of the user, as this method
requires an extra conscious effort and thus more time to be completed. Figure 6.8b shows that this happens when the other two
method tests have finished, so maybe if those tests were longer
they would produce a similar behavior. Finally, a third cause
could be related to the error rate, represented in Figure 6.8a as
dotted lines. An increase in the error rate, maybe due to one of
the two previous reasons, leads to a loss of self-confidence
in the user. When the error rate decreases again, the time curve
improves too.
With regard to group 2 of textures, the learning curves of tactile
and vibrotactile feedback improve significantly, but not the force
feedback one, which oscillates along the trials.
(a) Group 1 (geometric forms).
(b) Group 2 (textures).
Figure 6.7: Learning curves along the 20 trials.
6.4.3 Comparison with Kyung et al.
These results have been compared to those obtained by Kyung
et al. In the case of vibrotactile feedback, similar results are
obtained, in time and success rate, for both sets of patterns. The
only remarkable exception is the success rate in the detection
of the first group of textures, which has increased from 59% to
82%. This could be partly explained by the scaling factor of
1.6X applied to the textures, or more likely to the optimization
performed in the control algorithm, but it could also be due to
the location of the vibrator directly on the fingertip, instead of
inside a pen-shaped device, as used by Kyung et al.
(a) Group 1 (geometric forms).
(b) Group 2 (textures).
Figure 6.8: Learning curves along the total duration of the test.
Regarding force feedback, the times are comparable in the
detection of line patterns (Group 2), with the correct answers being
slightly higher (73.5% to 84.2%). However, the differences in the
group 1 of textures are more pronounced. The average response
time was reduced from 29.4 to 16.4 seconds, and the success rate
has increased very significantly from 59% to 94.2%. The explanation of this phenomenon should not only be found in the scale
factor, but also in how the problem pointed out by the original
authors was addressed in this experiment, whereby the haptic
cursors got “stuck” in the grooves of the texture while scanning
the texture, making the task difficult.
6.4.4 Questionnaires
User satisfaction is measured using subjective data obtained from
both the user comments and the questionnaires. These results
are consistent with those formerly described. The users found direct
tactile stimulation the easiest haptic method to identify shapes,
followed by force feedback. In the case of grating textures, vibrotactile feedback has the highest score. It can be noted, though,
that some users reported a sensation of "slight tingling" in the
index finger after the vibration tests, although this setup received
almost the same comfort rating as the Phantom one.
6.4.5 Gender Differences
In the first stage of the experiment, a slight deviation was observed in the results according to a gender-based classification
of the participants. As formerly described, a total of 4
women and 8 men took part in the first evaluation. This number was increased up to 9 subjects in each group. A statistical
analysis was conducted to confirm or reject the null hypothesis
H0 of equality between both groups of users (men and women),
that is, whether they obtained the same results for the different tasks. In the
cases where the normality and homogeneity tests were passed,
a parametric t-test was conducted, while a non-parametric
Mann-Whitney test was performed in the rest. Results of the tasks
are presented in Table 6.1.
From these data it can be concluded that there is a statistically
significant difference between the men's and women's median success rates in the tasks Phantom_G2 (U=15.5, p=0.024),
Glove_G1 (t(16)=6.021, p=0.000) and Finger_G2 (U=15.5, p=0.022).
For the tasks Phantom_G1 (U=29.0, p=0.340), Glove_G2 (U=25.5,
p=0.143) and Finger_G1 (U=33.0, p=0.546) the null hypothesis
could not be rejected, so their behavior is similar.
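For reference, the Mann-Whitney U statistic reported in these comparisons can be computed as follows; a sketch without tie handling, with significance taken from tables or a statistics package:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U for two independent samples (no tie correction).

    Ranks the pooled data, takes the rank sum of the first sample,
    and returns the smaller of the two U values.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1   # 1-based ranks
    r_x = ranks[: len(x)].sum()
    u_x = r_x - len(x) * (len(x) + 1) / 2
    return min(u_x, len(x) * len(y) - u_x)
```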
This means that the texture identification task (G2), using force
and tactile feedback, and the shape identification task (G1), using vibrotactile feedback, were performed significantly better
by the group of male subjects. Curiously, this coincides with
the groups of patterns which are less suitable for each haptic
method. However, this result is preliminary and more users are
needed to confirm it, but it may motivate further studies to check
these differences.
haptic method    mean(M)   mean(F)   sd(M)   sd(F)
Phantom (G1)     96.00     91.91     4.86    7.95
Phantom (G2)     92.83     73.07     9.68    17.50
Glove (G1)       85.45     67.40     4.64    7.55
Glove (G2)       97.72     91.14     3.63    10.00
Finger (G1)      96.64     97.13     2.50    4.41
Finger (G2)      96.55     87.28     5.00    9.72
Table 6.1: Results of the tasks, in terms of success rate, based on the
gender (M, F) for each haptic method and pattern (G1, G2).
6.4.6 Identification Strategies
During the force and vibrotactile feedback tests, a log file was
saved with the subject's finger position over time for its later
analysis. These strokes have been superimposed on the shape
or texture which was being identified, so that an accurate path,
and therefore the detection strategy, is obtained.
Results have confirmed that the detection technique used to
identify the patterns depends on the haptic method used. Thus,
in the case of the shapes and using force feedback, the users look
for a groove first and then they follow the contour guided by the
device. This, however, is not that easy with the vibrotactile dataglove, where the users try to follow a scan-based strategy along
the surface. It can be seen that, sometimes, once they find the
shape border, they try to follow it looking for the limits. Both
detection strategies can be seen clearly in Figure 6.9.
(a) Vibrotactile glove.
(b) Force Feedback.
Figure 6.9: Example of recorded strokes for the vibrotactile glove and
force feedback methods that reflect the identification techniques used.
shape and texture recognition
6.4.7
Error Analysis
An analysis of the most common errors made in each task has
been conducted. Depending on the haptic feedback method used,
and on the group of patterns being identified, users tend to
confuse certain pairs of samples. This is especially significant in
the vibrotactile feedback case during the shape identification task.
In this case, 27 of the 84 errors (32%) were due to confusions
between samples 1 and 2, that is, between the circle and the
square. As seen in the previous section, this case requires an
exploration method based on searching for contrast at the borders,
which differs from the one naturally followed. Thus, differentiating
between similar shapes becomes more complicated and can lead
to confusions. Figure 6.10 depicts an example of a square wrongly
identified as a circle. The errors analyzed for the group of textures
(G2) do not show any accumulation between any pair of samples.
This result indicates that the success rate for this task could be
improved if only sufficiently different shapes are included, that is,
shapes with recognizable topological features. To this end, the
success rate for the shape patterns (G1) has been recalculated
discarding the errors between the first two samples (circle and
square). To check whether this condition affects the aforementioned
results, where the vibrotactile feedback performed worse than the
rest, the statistical analysis has been repeated.
The Friedman test still reports significant differences between the
three haptic methods, χ2 (2, N=18) = 19.175, p=0.000. A post-hoc
Wilcoxon signed-rank test confirms that there are significant
differences between vibrotactile and force feedback (Z=-3.497,
p=0.000) or tactile feedback (Z=-3.397, p=0.001). This analysis
shows that, although the difference with the rest of the methods
is still statistically significant, the improvement in success rate is
notable (from 76 to 83.5%, Figure 6.11), and hence we can conclude
that vibrotactile technology is a valid method to identify shapes
when the topological differences are large enough.
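As an illustration, this kind of analysis can be sketched with SciPy. The per-subject success rates below are invented for the example only; they are not the experimental data.

```python
# Sketch of the statistical analysis described above: a Friedman test
# across the three haptic methods, followed by post-hoc Wilcoxon
# signed-rank tests. The success rates are made up for illustration.
from scipy.stats import friedmanchisquare, wilcoxon

# One success rate per subject (N=18) and haptic method (invented data).
phantom = [96, 92, 98, 95, 90, 94, 97, 93, 96, 95, 91, 98, 94, 96, 92, 95, 97, 93]
finger  = [97, 95, 96, 98, 94, 96, 97, 95, 96, 98, 93, 97, 96, 95, 97, 96, 94, 98]
glove   = [84, 80, 86, 82, 79, 85, 83, 81, 84, 86, 78, 85, 82, 83, 80, 84, 85, 81]

chi2, p = friedmanchisquare(phantom, finger, glove)
print(f"Friedman: chi2(2, N=18) = {chi2:.3f}, p = {p:.3f}")

if p < 0.05:  # post-hoc pairwise comparisons against the glove
    for name, sample in (("force feedback", phantom), ("tactile feedback", finger)):
        w, p_pair = wilcoxon(glove, sample)
        print(f"glove vs {name}: W = {w:.1f}, p = {p_pair:.4f}")
```

The non-parametric tests are appropriate here because the success rates are percentages from a small sample, where normality cannot be assumed.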
6.5
conclusion
In this chapter, a texture discrimination experiment has been
conducted. This experiment consisted of the comparison of force
and tactile feedback with the vibrotactile feedback created with
(a) Vibrotactile glove.
(b) Force Feedback.
Figure 6.10: Square identification by using vibrotactile and force feedback. In this case, the user wrongly identifies it as a circle
when he uses the glove.
Figure 6.11: Average percentage of correct answers for each group of
textures. For Group 1, the second value (lighter color) represents
the case of discarding confusions between circle and square.
the Vitaki toolkit. The differences between the exploration
strategies have been studied, as well as a characterization of the
errors made. It has been observed that shape identification with
vibrotactile feedback needs a non-natural strategy, which affects
the task performance. It may be interesting to check whether prior
training with an adequate technique improves time and success
rate.
Motivated by the results obtained in the first stage of the
experiment, a statistical analysis of the gender-related differences
has been conducted. Results show that in tasks where the patterns
are less suited to the haptic method used, the group of women
obtains lower scores. These results are preliminary and obtained
with a low number of users, but they open the possibility of
performing more studies in this direction.
Regarding the feedback methods, vibrotactile feedback seems to
be the most effective method to distinguish between certain texture
patterns, more specifically those that can be identified by the
frequency changes of their surface features while rubbing them
with the finger. However, in tasks where precise spatial recognition
is needed to identify shapes, it has not been as effective as the
other methods, yet it has proved useful, especially if the shapes to
identify are sufficiently different.
On the other hand, the tactile feedback method with the paper
patterns seemed in theory the most efficient for both types of
textures, but in the case of line patterns it was surpassed in time
and error rate by the vibrotactile method.
This study shows some interesting results, which should be
corroborated in future work by expanding both the number of
users and the variety of textures to detect.
7 IDENTIFICATION OF 3D VIRTUAL GEOMETRIC FORMS
In this chapter, two experiments to identify virtual 3D shapes
are described. The first one uses a multi-finger force feedback
device, while the second experiment makes use of the Vitaki platform with an array of vibrators distributed on a glove. Results
are compared with similar experiments found in the literature
with single-point force feedback devices.
7.1
related work
Identification of forms in 3D space has been the focus of psychophysical and neuroscientific research before haptic devices
existed [84]. Révész [136] studied the cognitive process followed
during tasks of haptic recognition of objects and how it is different from vision. Gibson [46] compared the performance of active
touch against passive touch in shape recognition tasks. Lederman and Klatzky [96] described the effects of constraining the
exploration with different impairments, like using a rigid probe
or wearing a finger sheath.
Natural exploratory procedures usually imply the use of several
fingers at the same time, with one or more exploration techniques
depending on the feature being sought [94]. During the exploration,
the brain extracts information from the kinaesthetic sensory system
as well as from extended skin surfaces [64]. However, the
information provided by haptic devices is limited, and hence
haptically rendering objects for their identification without visual
information results in a demanding task that can be used to
evaluate the limits and possibilities of a haptic system. This lack
of information was studied by Jansson [64], who discussed the
contribution of the perceptual filling-in effect when a haptic device
is used to explore virtual objects.
Geometric figures like the cube, sphere, cylinder and cone
have been used in several experiments to evaluate different factors.
Jansson [63] performed numerous experiments with a Phantom
device to identify geometric objects of different sizes (5 to 100
mm), and he found the larger sizes to be identified more accurately and to require shorter exploration times. Nevertheless,
although the effect of short-term practice improved the results
[66], they were still far from the results obtained with real objects
[65]. Stamm et al. [156] conducted an experiment to identify a
larger variety of geometric forms, including frustums and
combined geometric primitives, as well as the factor of rotating
them arbitrarily, obtaining a similar success rate but an increased
exploration time.
Geometric figures are not the only stimuli used to evaluate a
haptic interface. Kirkpatrick and Douglas [83] proposed the use of
a standard set of five shapes defined by Koenderink [75]. These
shapes were computed from the direction of curvature of two
parabolas, and they were tested with a Phantom device.
Furthermore, virtual objects with a complex form have also been
tested by Jansson and Larsson [67], who confirmed that the success
rate was reduced as the complexity level increased.
These studies only used one point of interaction (usually the
finger) and relied on the generation of forces. However, as Jansson
[64] stated, it would be interesting to use a multi-finger display to
evaluate the perception of 3D forms. This motivates the first
experiment of this chapter, which was carried out with the
CyberGrasp, a force feedback exoskeleton for the hand.
The second important perceptual key, as stated before, is tactile
feedback. No previous studies have been found that use a
vibrotactile glove to identify 3D shapes. A vibrotactile glove was
developed by Giannopoulos et al. [45], but it was only evaluated
for identifying two-dimensional forms using the position of the
whole hand. Thus, the objective of the second experiment was to
evaluate the perception of 3D forms with an array of vibrotactile
actuators using the Vitaki platform.
7.2
stimuli
The forms used in the experiments, which can be seen in Figure 7.1a, are common 3D geometric objects: cube, sphere, cone
and cylinder. These objects have been chosen because they have
been used before in experiments with a Phantom device [63, 156],
and they are familiar to most people, which enhances the
recognition capabilities [84]. The proportions between height and
width have been maintained across all the figures to add a level of
complexity to the discrimination task. This means that, for
instance, if the cylinder were thinner and longer, more like a stick,
it could be distinguished more easily from the rest.
The experiment with the force feedback device was conducted
with three different sizes, the maximum extent in all three
dimensions being 50, 100 and 250 mm. The biggest size (250 mm)
was close to the workspace limit of the force feedback device used,
and was the only one used for the experiment with the vibrotactile
glove, due to the difficulty of perceiving smaller objects.
Two of the chosen sizes (50 and 100 mm) also match previous
experiments [63, 156], facilitating a further comparison of the
results.
In addition, paper models with the same dimensions and proportions as the virtual ones were constructed to provide the
users a better understanding of their topological properties (see
Figure 7.1b).
(a) Virtual models.
(b) Paper models created to be used by the users as a reference.
Figure 7.1: The four geometric forms used in the experiment.
7.3 experiment 1: force feedback
7.3.1 Haptic Display
The haptic display used in this experiment is a CyberGrasp from
CyberGlove Systems1 , which provides multipoint haptic feedback
to the user. The CyberGrasp is a state-of-the-art exoskeleton
capable of producing grasp forces perpendicular to the fingertips
with half a degree of freedom (DOF), that is, it can only exert
forces that pull on the fingers through flexible tendons. This device
is combined with a CyberGlove, a dataglove made of elastic fabric
used to sense the angles of the fingers and obtain a hand gesture.
Finally, the grounded robotic arm CyberForce is attached to the
CyberGrasp and used to sense the position and rotation of the
hand in space and to apply forces to the whole hand in three DOF.
This system relies mainly on the force and proprioceptive feedback
channels of the user.
7.3.2
Haptic Rendering
Unlike more common devices like the Phantom, the haptic device
used is not supported by libraries like OpenHaptics2 or HAPI 3 .
Instead, its VirtualHand Software Development Kit (SDK) has
been used, which supports two modes of operation: force and
impedance. The impedance mode lets the user send contact patch
information for each finger to the system, theoretically allowing
the device to compute forces in an optimized haptic loop.
Nevertheless, the impedance mode had to be discarded due to
severe instabilities detected in the initial tests. The reason was the
low performance of the collision detection algorithm of the SDK,
which was running at 30 Hz instead of the 1 kHz required for
haptic rendering [78]. These problems were also reported by Zhou
et al. [188].
Instead, a custom force rendering algorithm for the CyberGrasp
has been implemented, supporting only the basic geometric forms:
cube, sphere, cone and cylinder. This algorithm is based on the
Haptic Interface Point interaction, where only the end effector
point interacts with objects. Due to the simple nature of these
forms, the collision algorithm can be expressed mathematically
and calculated in constant time, as detailed in chapter 5. Hooke's
law is then used to calculate the force that the CyberGrasp has to
exert on each finger, and the CyberForce on the entire hand. The
tests conducted were satisfactory, providing realistic and stable
force feedback, and the haptic loop was executed at 1 kHz.
1 http://www.cyberglovesystems.com/
2 http://geomagic.com/en/products/open-haptics/specifications/
3 http://www.h3dapi.org
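As an illustration, a minimal constant-time sketch of this scheme for the sphere case might look as follows. The stiffness value and function name are assumptions for the example, not the actual implementation of chapter 5.

```python
import numpy as np

K = 400.0  # spring stiffness in N/m; an illustrative value only

def sphere_force(point, center, radius, k=K):
    """Hooke's-law contact force for a Haptic Interface Point against a
    sphere: zero outside the surface, and proportional to the penetration
    depth along the outward normal when the point is inside."""
    offset = np.asarray(point, float) - np.asarray(center, float)
    dist = float(np.linalg.norm(offset))
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)           # no contact (or degenerate centre case)
    normal = offset / dist           # outward surface normal at the contact
    return k * penetration * normal  # F = k * x, pushing the point outwards

# A fingertip 1 cm inside a 5 cm-radius sphere feels an outward ~4 N force:
force = sphere_force([0.04, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
```

Because the penetration test is a closed-form expression per primitive, the whole loop fits comfortably within the 1 kHz haptic update budget.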
The surface of the forms was generated with no static or dynamic friction. In order for the 3D forms to be easily localised,
the users had a visual reference of their position in the real space.
Furthermore, a small force towards the centre of the object was
generated when the user was more than five centimetres away
from the object, to ease the task of finding the object in space.
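This homing behaviour can be sketched as follows; the gain value and the constant-magnitude choice are assumptions for illustration, not the parameters actually used.

```python
import numpy as np

def attraction_force(hand_pos, obj_center, threshold=0.05, gain=0.5):
    """Small constant-magnitude pull towards the object's centre when the
    hand is more than `threshold` metres (5 cm) away; zero inside that
    radius. The gain (in newtons) is an illustrative value."""
    offset = np.asarray(obj_center, float) - np.asarray(hand_pos, float)
    dist = float(np.linalg.norm(offset))
    if dist <= threshold:
        return np.zeros(3)       # close enough: no guidance needed
    return gain * offset / dist  # unit direction towards the object, scaled

# 20 cm away from the object: a gentle 0.5 N pull towards it.
f = attraction_force([0.2, 0.0, 0.0], [0.0, 0.0, 0.0])
```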
7.3.3
Participants
This experiment was conducted in the UK. Ten participants took
part (two women and eight men) with a mean age of 29.5 years
(Standard Deviation = 4.8 years). Most of them were members
of the university, and none reported previous experience with
the haptic device used.
7.3.4
Procedure
Before using the Haptic Workstation, participants were informed
about the device and its safety aspects.
A training period of 5 minutes was given to each participant
to acquaint themselves with the device and the forms. The 3D
geometric forms to be used were explained to the participants
with the help of illustrations, and they explored them in the
preliminary stage. The noise of the haptic device was masked by
the high ambient noise in the room where the experiment took
place.
All participants took part in all the experimental conditions.
They were told that it was important both to be accurate and to
answer without unnecessary delay. Figure 7.2 shows one of the
users wearing the force feedback device.
The four 3D virtual forms were presented three times and in
three different sizes, thus altogether 36 forms. All orders were
randomized. Both errors and time were measured. Time was
Figure 7.2: One of the users performing the experiment 1 with the
force feedback device.
measured after the detection of the first collision between the
user’s hand and the virtual form.
7.4 experiment 2: vibrotactile feedback
7.4.1 Haptic Display
The device used in this experiment was built with the Vitaki
prototyping toolkit, and its details can be found in chapter 5. It
consists of a nylon glove with an array of 12 actuators distributed
over the surface of the hand. In comparison with the
multi-thousand-euro cost of the force feedback system, this device
only costs around 100 euros, two to three orders of magnitude
less. Also, in this case the user does not feel a force that allows
him to follow the contour of the object. Instead, he has to sweep
the air and feel the surface through the activation of the vibrator
array.
7.4.2
Participants
This experiment was conducted in Spain. Eighteen participants
took part in the experiment (two women and sixteen men) with
a mean age of 29.1 years (Standard Deviation = 5.9 years).
Figure 7.3: One of the users performing the experiment 2. The tripod
with the LED marker was used as a reference to render the
virtual object.
7.4.3
Procedure
In order for the 3D forms to be easily localized, the users had a
visual reference of the object position in space. A small tripod
with an LED was situated on the right knee of the user. This
known position was used to center the virtual object and ease the
task of finding it, as shown in Figure 7.3.
Each participant had a training period of 5 minutes to acquaint
themselves with the forms and the device. The geometric forms
were explained, and paper models with the same dimensions
and proportions as the virtual ones were presented to the participants before the experiment to provide a better understanding
of their topological properties. The participants were presented
with the 3D virtual forms one by one and asked to identify their
form as fast and accurately as possible. Three blocks of all the
virtual objects were presented, thus 12 objects in all. The order
was randomized.
7.5
results
The results for experiment 1 with force feedback are presented in
Figure 7.4, while the results for experiment 2 with vibrotactile
feedback are depicted in Figure 7.5. Quartiles 1 and 3 have
been included to measure the dispersion of the data set.
(a) Average percentage of correct
responses.
(b) Average exploration time per
trial.
Figure 7.4: Results of the experiment with force feedback. Bars indicate quartiles 1 and 3.
(a) Average success rate.
(b) Average exploration time.
Figure 7.5: Results of the experiment with vibrotactile feedback.
Dashed line indicates chance level (25%). Error bars indicate quartiles 1 and 3.
In the first case, the results reflect a correlation between the size
of the forms and both the accuracy and the exploration time, with
the best results obtained for the bigger objects. Accuracy ranges
between 60 and 83%, while time per trial ranges between 48.9 and
25.7 seconds. Different exploration methods were followed by the
participants depending on the size of the explored form. With the
smallest size, only one point of interaction was used to explore
the surface of the object, typically the index finger. However, with
the 100 and 250 mm sizes the whole hand was used.
With respect to the experiment with vibrotactile feedback, the
results are promising given the simplicity of the device. Indeed,
the average success rate is 65.1%, with a time per trial of 38.8
seconds, which is better than that obtained in the first experiment
with the smaller objects. The analysis of the data indicates a
higher success rate in the identification of the cone and less
exploration time for it than for the rest of the forms. One possible
explanation is that the morphological differences between the
cone and the rest are more accentuated, and thus it can be
identified more easily by the users. In personal interviews after
the experiment, the users highlighted the relative ease of
identifying both the apex and the sloped wall of the cone.
The use of similar experimental conditions allows the comparison with previous experiments performed with a single point
haptic force feedback device, typically a Phantom. Jansson [63]
summarized a set of experiments with different sizes: 5, 7, 9, 10,
50 and 100 mm. These results are depicted in Figure 7.6.
(a) Average percentage of correct
responses.
(b) Average exploration time per
trial.
Figure 7.6: Correct responses and exploration time in experiments by
Jansson. Figure adapted from [63].
A similar tendency can be observed, where larger objects obtain a
higher percentage of correct judgements and lower exploration
times. Experiments conducted by Stamm et al. [156] also used a
Phantom device, although they tested a wider variety of forms.
The sizes used were 12x12x6 mm and, taking into account only
the four forms studied in this experiment, a success identification
rate of 88% (approximated from the figures) was obtained, which
is similar to that obtained by Jansson. The exploration time was
significantly higher (74 seconds), but it could be influenced by the
fact that the users had to choose between a wider range of forms.
Form sizes of 50 and 100 mm, which match the two smaller sizes
of the present experiment, are recognised clearly worse with the
CyberGrasp. Although both experiments make use of force
feedback, the use of only one point of interaction allows the use
of smaller objects. The generation of forces that can only pull on
the fingers in one direction is a serious limitation for this task.
For instance, if the user hits the object with one side of his index
finger, he will not feel any tactile sensation or force on it. Instead,
he will notice the impact on the whole hand, because this device
does not provide tactile feedback and is only able to exert forces
perpendicular to the fingers. This particular behaviour is
confusing for the users, who expect different sensations based on
their daily experience. This explanation is in line with the
comments of the users, who reported the lack of skin sensitivity
while touching or hitting the object with any part of the hand.
They could feel the force on the whole hand, and sometimes on a
specific finger (if the collision was against a finger and the force
was near perpendicular to it), but the absence of a localized
sensation on the skin disconcerted them.
In comparison with the vibrotactile glove, there are clear
differences in the exploratory procedures. The use of forces guides
the exploration, while the use of a tactile device forces the users to
sweep the air, paying attention to the tactile sensations produced
on their skin. The use of only one point of interaction can be
enough to identify regular forms, like the ones used in this
experiment, because the general shape of the object can be
deduced from the examination of a limited area. However, this
feature could become a serious limitation in other scenarios with
objects that require a more exhaustive exploration. In these
scenarios, the use of multiple points of tactile feedback, offered by
our proposal, may provide better results. Not only is the area
covered by the hand wider, but the use of multiple contact points
may also allow the user to find the particularities that can be used
to categorize each object. Moreover, the subjective sensations and
the immersion perceived by the users in a virtual environment
using only a single interaction point are, in general, worse than
those achieved using different haptic stimuli distributed around
the hand, letting the user feel the sensation of grasping an object.
Despite the good results of a force feedback device like the
Phantom under these conditions, we can say that its use is limited
to certain applications. Not only is the cost much higher, but in
general it is not appropriate for tasks that require a large
workspace, due to its reduced range and the use of linkages to
the ground.
7.6
conclusion
Two experiments have been conducted to test the discrimination
of virtual geometric objects with haptic technology. The first one
used a state-of-the-art multi-finger force feedback device, while a
vibrotactile glove built with Vitaki was used in the second
experiment. The results have been compared with previous
experiments conducted with a single-point force feedback device.
To this end, we have developed a haptic display based on a
vibrotactile glove that includes several considerations to control
the vibration and allow the user to feel gentle sensations. Other
interesting features implemented are the use of a hand skeleton
calculated with inverse kinematics, the use of multiple collision
points per actuator to mitigate the low resolution of the actuator
array, the use of hollow objects to help the user follow the
contour, and the simulation of the impact against the surface with
vibration pulses. Moreover, our proposal allows the user to have
an enhanced sensation of touching a virtual object by providing
several points of haptic stimuli without reducing the workspace,
allowing the manipulation of large objects.
Although the results obtained with the traditional force feedback
devices indicate a higher accuracy, they provide a small
workspace, and the use of only one finger is a limitation for
natural interaction in VEs. The use of force feedback for multiple
fingers, on the other hand, has multiple drawbacks as well. Not
only is it difficult to wear, but the half degree of freedom per
finger and the absence of tactile feedback do not provide the users
with the cues they expect to obtain the best performance. Finally,
the vibrotactile glove, despite being the simplest approach,
performed reasonably well for such a complex task, given the
similar proportions of the objects tested. These results allow us to
think that the vibrotactile haptic display is a good option not only
when a large workspace or a low cost is needed, but also in more
demanding scenarios which until now were reserved for force
feedback devices. Of course, tasks where the haptic cues are added
to visual feedback can obtain a high benefit from this kind of
device, greatly increasing the sensation of immersion and
improving the interaction. Moreover, it can certainly be used to
make users stop thinking they are moving their hands in the air
and create the illusion that they are interacting with objects
instead.
As future work, it would be interesting to combine vibrotactile
and force feedback in the recognition task, since this would
provide important information to the user when a collision
between the hand and the object occurs.
8 WEIGHT AND SIZE DISCRIMINATION WITH VIBROTACTILE FEEDBACK
In this chapter, the possibility of transmitting weight and size
information to the user through vibrations is analysed. To this
end, an experiment with two parts is described, each of which
evaluates one of these magnitudes. The first section introduces the
related work on weight and size discrimination. Next, the
methods used to map these physical magnitudes to vibrations are
described. Then the experiment design is detailed, and finally the
results of the experiments are analysed.
8.1 related work
8.1.1 Weight
Weight is a property of objects which, physically speaking,
depends on the gravitational force, the density of the object, and
its size. Ernst Weber (1795-1878) was the pioneer of
psychophysics, and in the early 1830s he observed that the
perception of weight is more precise if an object is lifted than if it
is just placed statically on top of the hand. Since then, multiple
experiments have been conducted to measure our ability to
discriminate weight [55], even in microgravity conditions [139].
The perception of this magnitude depends on several factors.
Some of them, like the size of the object, are well known due to
the famous weight-size illusion, which is produced, for example,
when a smaller object is perceived as heavier than a bigger one
when in fact they have the same mass. Ellis and Lederman [32]
studied this illusion, proving that determining the size haptically
is even more important in this illusion than doing it visually, and
that it is actually possible to recreate the effect without using the
visual channel. This gives an idea of how important it is to
provide the user with a size sensation haptically. Other factors
which influence weight perception, like the material an object is
made of, are less popular. The weight-material illusion [33] is
described as the perception
of heaviness of an object when its material is denser, for instance
a metallic material, compared with a porous one, like wood. The
shape of the object, or even its colour [15], also affects the
perception of weight. Moreover, Valenti and Costall [171]
described the capacity to infer an object's properties from the
observation of someone lifting it, depending on the dynamics of
these movements. Most of these psychological aspects can be
explained by the inference system of the brain, which responds to
the necessity of estimating the force needed to hold, or in general
to interact with, an object, and thus being able to exert an
adequate initial force.
Simulation through force feedback is the most realistic way to
simulate object properties like weight, friction or hardness.
However, the use of these devices is not always possible due to
their drawbacks, like cost, reduced workspace or complexity.
Some researchers have tried to compensate for this through
Pseudo-Haptic (PH) feedback, which uses visual feedback to
create an illusion and thus transmit to the user the haptic
properties of an object. Dominjon et al. [29] proved that it is
possible to use PH to affect the perceived weight of an object.
This was done by changing the ratio between the movement of
the object manipulated by the user and the one shown on the
screen. In addition, Ooka and Fujita [126] used the Vibrotactile
Phantom Sensation (VPS) on the fingertips through a four-pin
vibrotactile device to simulate tangential forces, allowing the
users to feel the weight and slip of a grasped object.
The objective of the first task of the experiment is to assess the
feasibility of rendering the weight of an object in a virtual
environment through vibrotactile actuators.
8.1.2
Size
The study of size perception, mainly by researchers in the
psychophysics field, has focused especially on confronting the
senses of touch and vision to determine which one dominates
when making an estimation. The sizes studied are primarily those
that allow a user to grasp or manipulate the object, and the
conclusion is that vision is usually dominant when an incoherence
is found [82]. Recent studies [174] suggest that the influence of
the visual channel is bigger when the haptic experience only
includes kinesthetic information and passive movements.
However, the addition of cutaneous information resulted in the
haptic channel becoming more important. In any case, it is interesting to know that the haptic information complements the
visual information to obtain a more precise appreciation of the
size [186].
Thus, given the difficulty of adding force feedback to a virtual
environment, the second task of the experiment will evaluate the
possibility of discriminating different sizes of an object located on
the user's hand from the provided vibrotactile cues.
8.2
haptic display
The device used in this experiment is a vibrotactile glove developed with the Vitaki toolkit for interaction in 3D environments. The glove is composed of an array of ERM actuators, and
was detailed in chapter 5.
8.3
haptic rendering methods
This section describes the methods used to transmit weight and
volume information through vibrotactile actuators, as well as the
interaction techniques that the user had to follow to perform
these tasks.
8.3.1
Weight
In real life, a subject is able to determine an object's mass by
overcoming the force of gravity while holding it, that is, the
object's weight. To transmit the weight information through a
vibrotactile interface, three different methods have been developed:
• In the first one, the force the user exerts to hold the object,
which is proportional to its mass, is mapped to the vibration
intensity of the actuators distributed over the user's hand. In
addition, this force is larger when the object is accelerated
upwards, and smaller or even null when it is decelerated. Thus,
the effect of feeling the weight, which happens naturally when
lifting small objects, can be simulated, with small vertical
movements improving the sensation. The interaction technique
consists of carefully lifting the object with vertical movements,
attending to the vibration intensity.
• With the second method the vibration intensity is constant and
does not depend on the object's mass. Instead, the time an object
takes to come down when it is launched upwards is artificially
changed, being inversely proportional to its weight. This
simulation gives the user a notion of lightness, associated for
example with a balloon or a falling feather. This technique is
inspired by the research of Valenti and Costall [171], where the
users could reliably discriminate different levels of lifted weight
just by observing video displays. To identify an object's weight,
the user should carefully launch the object upwards and wait for
the collision with his hand, attending to the time it takes to do so.
It is worth mentioning that the object is guided to the user's hand
when falling, so that he cannot miss it.
• The third method is a combination of the two previous ones: the vibration intensity is proportional to the object's weight, and the time it takes to fall down is inversely proportional to it. Thus, the user can use either of the two interaction techniques (or both) to guess the object's weight.
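As a rough illustration, the three weight-rendering mappings above can be sketched as follows. This is a minimal sketch, not the actual VITAKI implementation; the function names and scaling constants (G, max_force, base_time) are assumptions introduced only for the example:

```python
# Sketch of the three weight-rendering mappings described above.
# All names and constants are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def intensity_from_force(mass, vertical_accel, max_force):
    """Method 1: the force needed to hold the object, mass * (G + a),
    mapped to a 0..1 vibration intensity; upward acceleration raises
    the force, deceleration lowers it or nulls it."""
    force = mass * (G + vertical_accel)
    return max(0.0, min(1.0, force / max_force))

def fall_time(mass, base_time=2.0, min_time=0.2):
    """Method 2: the time a launched object takes to come back down,
    made inversely proportional to its weight, so light objects
    float down slowly, like a balloon or a feather."""
    return max(min_time, base_time / mass)

def combined_cues(mass, vertical_accel, max_force):
    """Method 3: both cues delivered at once."""
    return intensity_from_force(mass, vertical_accel, max_force), fall_time(mass)
```

Under this sketch, a hypothetical 1 kg object held still with `max_force = 19.62` would vibrate at half intensity, while decelerating it drives the intensity toward zero.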
8.3.2 Size
The user should identify the diameter of a tube-shaped object from among three possible options. As the test is blind, the tube is fixed to the palmar side of the hand at all times. The user starts each trial by completely opening the hand. From this position, they should slowly close it until they feel a collision against their fingers, which is transmitted through the vibration of actuators located on the first and third phalanges. At this moment, the user has to determine the size of the object from the aperture of the hand. In this procedure, the skin receptors alert the user of the contact with the object, so they know when to stop closing the hand, while the proprioceptors indicate the finger positions, which is what finally helps the user to identify the size of the object.
The vibration in this case is used to indicate a collision between the surface of the object and the fingers. The collisions are calculated independently for each finger, and specifically for each of the two actuators located on it. The vibration intensity is proportional to the collision depth, but in this case the margin is reduced to clearly define the object limits.
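A minimal sketch of this per-actuator mapping follows; the margin value and all names are assumptions for illustration, not the actual implementation:

```python
def collision_intensity(depth_mm, margin_mm=2.0):
    """Map the penetration depth of one actuator's collision point
    into the object surface to a 0..1 vibration intensity. A small
    margin makes the intensity saturate quickly, so the object
    boundary feels sharply defined."""
    if depth_mm <= 0.0:
        return 0.0  # no contact for this actuator, no vibration
    return min(1.0, depth_mm / margin_mm)

# Collisions are computed independently for each finger and for
# each of the two actuators on it (first and third phalanges).
depths = {("index", "phalanx_1"): 0.5, ("index", "phalanx_3"): 3.0}
intensities = {k: collision_intensity(d) for k, d in depths.items()}
```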
8.4 description of the experiment
The experiment is composed of two parts. The first one focuses on size discrimination, while the second deals with weight identification.
8.4.1 Stimuli
Three different weights are used in the weight-identification tasks. The maximum vibration intensity has been kept well below the annoyance level, while the minimum intensity has been determined by the capabilities of the actuators.
For the size-discrimination task, three tube diameters have been used: 35, 50, and 60 mm.
8.4.2 Participants
A total of 18 participants took part in the experiment (7 women
and 11 men), with a mean age of 29.6 years (standard deviation
6.8 years). All of them carried out the size and weight identification tests, in a random order.
8.4.3 Method
In total, four tasks were carried out, one to identify sizes and three to identify weights (one for each weight-rendering method).
Tasks were composed of 6 trials of each of the 3 possible sizes or weights, that is, 18 trials in total per task. First, each participant was informed about the objective of the task, the procedure and, in the case of the weight, which parameters would help them to discriminate the objects (see subsection 8.3.1). Next, each user had a maximum of 5 minutes of training before each task, so they could get used to the device and the specific conditions. The users could not see the virtual scene or their own hand, to avoid any visual reference.
8.5 results
Results of the experiment are summarized in Figure 8.1. In general, the success rate can be considered very high for all the tasks, ranging between 78% and 89%, well above the 33% chance level.
The size discrimination task allowed the users to reach a success rate of nearly 89% with low variability, as can be seen in the error bars of Figure 8.1a. The interquartile range, which gives a notion of the dispersion of the results, is only 5%. The time taken to complete this task is only 8.3 seconds on average.
As for the weight discrimination, different success rates are obtained depending on the rendering method chosen. The use of the vibration intensity (method 1) yields the lowest success rate (78.7%), with high variability across users, as shown by the large interquartile range (33%). The second method, where the time taken by the object to fall was changed, obtained a success rate of 81.7%. Overall, the method that obtained the best results is the one that combines both weight-rendering techniques (method 3), with 85.7%. The first method required the least time to complete the task, with 8.1 seconds per trial on average, compared to the 9.9 seconds obtained by the second technique, or the 9.6 seconds of the third method. The reason for this is the difference between the procedures used to identify the weight. In the first case, the users gently lift the objects while attending to the vibration intensity, whereas with the second technique they have to throw the object and estimate the weight from the time it takes to come down. In addition, users often throw the object several times when they are not sure of the answer. Finally, with the combined technique, the time spent falls between these two cases: although the vibration intensity could be enough to guess the answer, users sometimes try to make sure by launching the object upwards as well.
8.6 conclusion
In this chapter, different methods to transmit size and weight information through vibrotactile actuators have been described, which could be used in Virtual Reality environments. These methods have been evaluated with an experiment in which 18 users took part, consisting of two tasks, one to discriminate sizes and another to identify weights. The results of the experiment allow us to validate the introduced techniques, with a success rate higher than 88% in the case of the sizes, and between 78% and 86% in the case of the weights, depending on the method used. The average time taken per trial is also relatively low, between 8 and 10 seconds per answer.
Figure 8.1: Size and weight discrimination results. (a) Average success rate. (b) Average exploration time. Dashed line indicates chance level (33%). Error bars indicate quartiles 1 and 3.
Furthermore, these techniques could be combined to simultaneously represent both physical magnitudes, which could be evaluated as future work. It would also be interesting to check whether the size-weight illusion can still be reproduced with these methods, and how transmitting the information through vibrations affects it.
9 CONCLUSIONS
In this chapter, an overview of the research work is given, highlighting the main contributions. Next, some future research lines are suggested. Finally, the scientific contributions of the author are detailed, including the participation in research projects, the collaboration with other research centers, and the publications produced.
9.1 contributions
This thesis has investigated the use of vibrotactile technology,
and more specifically, ERM actuators, to provide tactile feedback
in VE. As a result, the main contributions of the research are:
1. An overview of the perception of the human sense of touch, and an in-depth review of the technologies available to provide tactile feedback.
2. A hardware prototyping platform for vibrotactile feedback has been developed. It includes novel features such as a scalable architecture, support for a wide range of ERM actuators, the inclusion of two advanced driving techniques (overdrive and active braking), and a volume adjustment knob to suit the user's preferences.
3. A software authoring tool that, together with the hardware platform, can be used to easily design and test complex tactile patterns distributed across several actuators. It can be used to define overdrive and active braking pulses to exploit the hardware capabilities, and it provides an API library to reuse the designed stimuli from an external application. Other interesting features are the inclusion of a graphic representation of the actuator layout, and the possibility to compensate for the different sensitivities of the skin by adjusting the gain of each channel.
4. A vibrotactile rendering method for rendering 2D shapes and textures through an ERM actuator, as well as an experiment to evaluate its performance against a force feedback device and the bare finger.
5. Two experiments have been conducted to identify regular 3D shapes without visual guidance. The first one was carried out with a multi-point force feedback device, while the second one used a vibrotactile glove built with the prototyping platform. To this end, an adapted haptic rendering algorithm has been developed, using a virtual hand model and multiple collision points per actuator. In addition, an optimized collision algorithm for regular 3D shapes has been implemented to work in constant time. The results of these two experiments were compared with previous results found in the literature for single-point force feedback devices.
6. An experiment to assess the feasibility of transmitting an
object’s weight and size information to the user in a VE
through vibrations. Three different methods for rendering
the weight properties of the object were tested with a user
evaluation.
These contributions fulfill the objectives raised in Chapter 1.
9.2 future work
The research work developed has opened up questions and ideas that were outside the scope of this thesis. These ideas are summarized in this section.
• Development of a wearable version of the controller. The current desktop version of the controller is appropriate for lab setups and, in general, environments with low mobility. However, it would be relatively easy to implement a wireless controller to make it truly wearable, by adding a battery and a radio communication module (such as Bluetooth). The wearable version of the controller may also be scalable using the same architecture, but instead of stacking new boards on top of the controller, daisy-chaining them with a cable would be more convenient.
• Design of vibrotactile cues oriented to manipulative tasks. In this thesis, the vibrotactile feedback has been focused on rendering physical aspects of virtual objects, such as texture, shape, size, and weight. The next step is to provide useful information when the user is interacting with the VE, for example rotating a dial or actuating a lever.
• Evaluation of the use of vibrotactile feedback in combination with the visual channel. All the experiments conducted in this thesis have been based on discrimination tasks without visual guidance, which is demanding for the user due to the relatively simple nature of the provided feedback. However, it remains to be tested how this augmented perception would help the user when used together with the visual channel.
• Inclusion of Pseudo-Haptic illusions to improve the interaction. In VE, the lack of force feedback means that one's hands can pass through objects, which can reduce the immersion of the user. The use of vibrotactile feedback in combination with Pseudo-Haptics (PH), which transmits the information through the visual channel, can compensate for this absence of force. For instance, Dominjon et al. [29] demonstrated how a change in the Control/Display ratio can influence the perceived weight of a virtual object, so it would be interesting to study this illusion to further improve the sensation of immersion when using vibrotactile feedback.
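The Control/Display-ratio effect reported by Dominjon et al. could be prototyped roughly as below. This is a hypothetical sketch, not the cited method: the function name and the inverse-mass mapping are assumptions chosen only to illustrate the idea of displayed motion lagging behind real motion.

```python
def displayed_motion(hand_dx, mass, reference_mass=1.0):
    """Pseudo-haptic weight: scale the displayed motion of a lifted
    object by a Control/Display ratio that decreases with mass, so a
    heavy object visually lags behind the real hand movement."""
    cd_ratio = min(1.0, reference_mass / mass)  # never amplify motion
    return hand_dx * cd_ratio
```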
9.3 scientific contributions
The PhD candidate has been a member of the LoUISE group at the University of Castilla-La Mancha. He has participated in several research projects and collaborated with other research centers.
9.3.1 Participation in R&D projects
• Entornos Virtuales Colaborativos aplicados a sistemas de
aprendizaje
Reference: PAI06-0093-8836
Financial Entity: Junta de Comunidades de Castilla-La Mancha, Fondo Social Europeo
Participants: University of Castilla-La Mancha, University
Miguel Hernández
Main Investigator: Pascual González
Period: 20/11/2007 to 14/03/2008
• DMAP Application
Financial Entity: Eurocopter Spain
Participants: University of Castilla-La Mancha
Main Investigator: Pascual González
Period: 22/04/2008 to 15/01/2009
• MyMobileWeb: Tecnologías avanzadas para el acceso móvil,
independiente de dispositivo e inteligente
Financial Entity: Telefónica
Participants: University of Castilla-La Mancha
Main Investigator: Pascual González
Period: 02/02/2009 to 28/02/2009
9.3.2 Collaboration with other research centers
A research stay was made at the Bristol Interaction and Graphics (BIG) group, University of Bristol, United Kingdom, from November 2012 to February 2013. The researcher worked under the supervision of Professor Sriram Subramanian, collaborating with other researchers of the BIG group. During the stay, the first experiment detailed in Chapter 7 was conducted. In addition, the collaboration with the BIG group produced research results that have not been published yet.
9.3.3 Publications related to the thesis
As a result of this research, several publications directly related to this thesis have been produced. Other publications where the proposed vibrotactile platform has been integrated into different systems are included as well.
9.3.3.1 Journals
1. Martínez, D., García, A.S., Martínez, J., Molina, J.P. & González, P. (2008). A Model of Interaction for CVEs Based on
the Model of Human Communication. Journal of Universal
Computer Science, 14(19), 3071–3084.
Abstract: This paper summarizes a model of interaction for CVEs inspired by the process followed in human communication in the real world, detailing both the main elements and the communication process itself. The model
proposed copies some properties of the real world communication but also allows the easy integration of Task
Analysis to the design of CVEs, helping the developer in
the design of the application. Furthermore, some of the
benefits that the usage of this model brings to the user
are also shown. Finally, some implementation details of a
prototype supporting the described model are given. This
prototype is used all along the paper to illustrate the explanation of some parts of the model.
2. Martínez, J., García, A.S., Martínez, D., Molina, J.P. & González, P. (2011). Comparación de Retorno de Fuerza, Vibrotáctil y Estimulación Directa para la Detección de Texturas. Novática, 214, 58–60.
Abstract: This article presents an analysis to evaluate different haptic feedback strategies for texture discrimination in virtual worlds. Specifically, both force feedback and vibrotactile feedback techniques have been evaluated, together with the real use of touch to detect the different textures. To this end, a Phantom force feedback device, a glove with vibration capability created in our laboratory, and palpable paper prototypes representing an ideal model of tactile feedback have been used. These three methods have been used to detect two types of patterns, one in which the shape of the geometric figures varies, and another in which the density of the lines forming the pattern varies. The results show that the glove with vibrotactile feedback performs very well in the detection of textures in which only the frequency of the tactile stimuli varies, and it is even useful for more complex textures.
3. Martínez, J., García, A.S., Molina, J.P., Martínez, D. & González, P. (2013). An empirical evaluation of different haptic feedback for shape and texture recognition. The Visual Computer,
29(2), 111–121.
Abstract: The scope of this research is to evaluate three different haptic feedback methods for texture discrimination
in virtual environments. In particular, a Phantom force
feedback device, a custom-made vibrotactile dataglove and
paper palpable prototypes have been used. This paper describes a new study which corroborates the results of an
initial experiment (Martínez et al. in 2011 International
Conference on Cyberworlds, pp. 62-68, 2011) and performs a more in-depth evaluation of some results of interest and, in particular, those based on gender. In the
experiment expansion, the number of users has been increased, so both genders are even, and the texture identification strategies have been analyzed. Finally, statistical
analyses have been conducted to assess the differences between both genders, showing a new path which could be
explored with new experiments. In addition, the vibrotactile dataglove has proved to have a notable behavior in the
detection of varying grating textures, and it is even useful
to identify shapes.
4. Martínez, J., García, A.S., Oliver, M., Molina, J.P. & González, P. (2014). VITAKI: A Vibrotactile Prototyping Toolkit for Virtual Reality and Videogames. International Journal of Human-Computer Interaction (IJHCI) (accepted with major changes).
Abstract: The use of haptics in videogames and virtual reality has received growing attention as a means of enhancing
the sensation of immersion in these environments. The
sensation of touching virtual objects not only augments
the impression of reality but it can improve the performance. However, the design of haptic interactions is not
an easy task, and it usually needs a great effort due to the
absence of powerful prototyping toolkits. Thus, this paper proposes a vibrotactile prototyping toolkit for Eccentric Rotating Mass (ERM) actuators, named VITAKI. The
main objective of this platform is to facilitate the prototyping and testing procedures of new vibrotactile interaction
techniques for Virtual Reality and videogames. A detailed
description of the design of the system is provided, presenting the hardware and software elements that make up
the VITAKI toolkit. In addition, its application to two different examples to illustrate its use is provided. Finally,
a preliminary evaluation of this toolkit is presented. This
evaluation is divided into two main stages. On one hand,
a study of Olsen’s criteria is performed to analyze its general capabilities. On the other hand, a comparison with
previously presented proposals is included too. These two
analyses, together with other experiments where the devices created with our toolkit were tested by end users,
highlight its main features and its advantages over other
proposals.
5. Martínez, J., García, A.S., Oliver, M., Molina, J.P. & González, P. (2014). Identification of 3D Geometric Forms with a Vibrotactile Glove. IEEE Computer Graphics and Applications (accepted with major changes).
Abstract: The emergence of new interaction devices, that allow the interaction beyond the screen, is bringing the field
of Virtual Reality to our homes. We can manipulate virtual
objects without the use of traditional peripherals. However,
in order to increase the sensation of reality and ease the interaction, the inclusion of other stimuli is needed. We propose the incorporation of haptic feedback to improve the
execution of manipulative tasks and the user experience
under these new environments. To this end, we have designed a new haptic display based on a vibrotactile glove
that includes several considerations to control the vibration
and allow the user to feel gentle sensations. We have performed an experiment with eighteen participants in order
to evaluate the capabilities of our proposal in the highly demanding task of identifying 3D objects without
visual feedback. Finally, the results allow us to demonstrate the capability of this technology.
9.3.3.2 Conferences
1. Martínez, J., García, A.S., Martínez, D. & González, P. (2009).
Desarrollo de un Guante de Datos con Retorno Háptico Vibrotáctil Basado en Arduino. Interacción 2009 - Jornadas de Realidad Virtual (pp. 1–10) Barcelona.
Abstract: This work presents the conception, design and development of a data glove with vibrotactile feedback based on the open-source Arduino microcontroller, with the aim of using it experimentally in different virtual reality applications, and of investigating interaction techniques that make use of this feedback. This work also details some of those applications, and the conclusions drawn from the use of this glove.
2. García, A.S., Molina, J.P., González, P., Martínez, D. &
Martínez, J. (2009). An experimental study of collaborative interaction tasks supported by awareness and multimodal feedback.
Proceedings of the 8th International Conference on Virtual
Reality Continuum and its Applications in Industry - VRCAI ’09 (pp. 77) Tokyo, Japan.
Abstract: Awareness and feedback have been identified by
many researchers as key concepts to achieve fluent collaboration when performing highly interactive collaborative
tasks. However, it is remarkable that few studies address
the effect that adding special kinds of feedback has on user
awareness and task performance. This work follows a preliminary experiment in which we already studied awareness in Collaborative Virtual Environments, evaluating the
effect of visual cues in collaborative task performance and
showing that users tend to make more mistakes when such
feedback is not provided, that is, they are less aware of the
object at hand and the task mate. These early results were
promising and encouraged us to continue investigating the
benefit of increasing the awareness support for tasks that
require close collaboration between users, but this time analyzing more types of awareness and experimenting with
visual, audio and vibrotactile feedback cues.
3. García, A.S., Molina, J.P., González, P., Martínez, D. &
Martínez, J. (2009). A study of multimodal feedback to support
collaborative manipulation tasks in virtual worlds. Proceedings
of the 16th ACM Symposium on Virtual Reality Software
and Technology - VRST ’09 (pp. 259–260) New York, New
York, USA.
Abstract: In the research community, developers of Collaborative Virtual Environments (CVEs) usually refer to
the terms awareness and feedback as something necessary
to maintain a fluent collaboration when highly interactive
tasks have to be performed. However, it is remarkable that
few studies address the effect that including special kinds
of feedback has on user awareness and task performance.
This work follows a preliminary experiment where we already studied awareness in CVEs, evaluating the effect of
visual cues in the performance of collaborative tasks and
showing that users tend to make more mistakes when such
feedback is not provided, that is, they are less aware. These
early results were promising and encouraged us to continue investigating the benefit of improving awareness in
tasks that require close collaboration between users, but
this time analyzing more types of awareness and experimenting with visual, audio and vibrotactile feedback cues.
4. Martínez, J., García, A.S. & Martínez, D. (2011). Texture
Recognition: Evaluating Force, Vibrotactile and Real Feedback.
Human-Computer Interaction - INTERACT 2011 (pp. 612–
615).
Abstract: A force-feedback Phantom device, a custom-built
vibrotactile dataglove, and embossed paper sheets are compared to detect different textures. Two types of patterns are
used, one formed by different geometrical shapes, and the
other with different grooves width. Evaluation shows that
the vibrotactile dataglove performs better in the detection
of textures where the frequency of tactile stimuli varies,
and it is even useful to detect more complex textures.
5. Martínez, J., García, A.S., Martínez, D., Molina, J.P. & González, P. (2011). Comparación de Retorno de Fuerza, Vibrotáctil y Estimulación Directa para la Detección de Texturas. Actas del XII Congreso Internacional de Interacción PersonaOrdenador Lisboa, Portugal.
6. Martínez, J., Martínez, D., Molina, J.P., González, P. & García, A.S. (2011). Comparison of Force and Vibrotactile Feedback with Direct Stimulation for Texture Recognition. 2011 International Conference on Cyberworlds (pp. 62–68) Banff,
Canada.
Abstract: In this paper a study is conducted in order to evaluate three different strategies of haptic feedback for texture discrimination in virtual environments. Specifically,
both force and vibrotactile feedback have been evaluated,
as well as the direct use of the sense of touch, to detect
different textures. To this end, a force feedback Phantom
device, a custom built vibrotactile dataglove and paper palpable prototypes, which represent an ideal model of tactile
feedback, have been compared. These three methods have
been used to detect two types of patterns, one formed by
different geometrical shapes, and the other with different
grooves width. Results show that the vibrotactile dataglove
has a notable behaviour in the detection of textures where
the frequency of tactile stimuli varies, and it is even useful
to detect more complex textures.
9.3.4 Other Publications
The collaboration with the research group has produced the following publications, which are not directly related to the contents of the thesis.
9.3.4.1 Journals
1. Martínez, D., Kieffer, S., Martínez, J., Molina, J.P., Macq, B.
& González, P. (2010). Usability evaluation of virtual reality
interaction techniques for positioning and manoeuvring in reduced, manipulation-oriented environments. The Visual Computer, 26(6-8), 619–628.
Abstract: This paper introduces some novel interaction techniques based on the concepts of composite positioning and composite manoeuvring (described in the paper). In
contrast with other previous proposals, these techniques
have been designed and evaluated in the context of a user
centred process. The results of this evaluation and some
relevant findings for the field of human computer interaction are also described.
2. García, A.S., Olivas, A., Molina, J.P., Martínez, J., González,
P., & Martínez, D. (2013). An Evaluation of Targeting Accuracy in Immersive First-Person Shooters Comparing Different
Tracking Approaches and Mapping Models. Journal of Universal Computer Science (JUCS), 19(8), 1086–1104.
Abstract: Immersive Virtual Environments typically rely on
a tracking system that captures the position and orientation of the head and hands of the cyber-user. Tracking devices, however, are usually quite expensive and require a
lot of free space around the user, preventing them from being used for gaming at home. In contrast with these expensive capture systems, the use of inertial sensors (accelerometers and gyroscopes) to register orientation is spreading
everywhere, finding them in different control devices at
affordable prices, such as the Nintendo Wiimote. With a
control like this, the player can aim at and shoot the enemies like holding a real weapon. However, the player
cannot turn the head to observe the world around because
the PC monitor or TV remains in its place. Head-mounted
displays, such as the Oculus Rift, with a head-tracker integrated in it, allows the player to look around the virtual
world. Even if the game does not support the head-tracker,
it can still be used if the sensor driver emulates the mouse,
so it can control the player’s view. However, the point of
view is typically coupled with the weapon in first-person
shooting (FPS) games, and the user gets rapidly tired of
using the neck muscles for aiming. In this paper, these two
components, view and weapon, are decoupled to study
the feasibility of an immersive FPS experience that avoids
position data, relying solely on inertial sensors and the
mapping models hereby introduced. Therefore, the aim
of this paper is to describe the mapping models proposed
and present the results of the experiment carried out that
proves that this approach leads to similar or even better targeting accuracy, while delivering an engaging experience
to the gamer.
9.3.4.2 Conferences
1. Martínez, D., Martínez, J., García, A.S., Molina, J.P. & González, P. (2008). Diseño e Implementación de un Modelo de Interacción para CVE’s basado en el Modelo de Comunicación Humana. Interacción 2008 (pp. 50).
Abstract: This article summarizes a model of interaction for CVEs based on the model of human communication, describing the main elements that take part and the process it defines. It also summarizes some implementation details that support this model on a prototype developed for this purpose. Likewise, several examples of the advantages of the proposed model are provided using the implemented prototype.
2. García, A.S., González, P., Molina, J.P., Martínez, D. &
Martínez, J. (2010). Propuesta de Modelo de Awareness para
Entornos Virtuales Colaborativos. Actas del XI Congreso Internacional de Interacción Persona-Ordenador Valencia, Spain.
Abstract: The awareness models developed for CSCW systems are too general to be applied to tasks as specific as those requiring concurrent manipulation of objects together with verbal and non-verbal communication (closely-coupled collaboration tasks). For this reason, this article presents a new approach to these models, trying to bridge the gap between awareness theory and implementation, offering a model that the designer should take into account when considering the development of a CVE.
3. Martínez, D., Molina, J.P., García, A.S., Martínez, J. & González, P. (2010). AFreeCA: Extending the Spatial Model of Interaction. 2010 International Conference on Cyberworlds
(pp. 17-24).
Abstract: This paper analyses the Spatial Model of Interaction, a model that rules the possible interactions among
two objects and that has been widely accepted and used
in many CVE systems. As a result of this analysis, some
deficiencies are revealed and a new model of interaction
is proposed. Additionally, a prototype illustrating some of
the best features of this model of interaction is detailed.
4. Olivas, A., Molina, J.P., Martínez, J., González, P., García,
A.S. & Martínez, D. (2012). Proposal and evaluation of models
with and without position for immersive FPS games. Proceedings of the 13th International Conference on Interacción
Persona-Ordenador (pp. 277-284) Elche, Spain.
Abstract: The video game industry is undergoing a time
of change and renewal, where new forms of interaction
pave the way. Nintendo Wii, PlayStation Move or Kinect
are examples of this trend and mark the way forward in
the coming years. One the most popular game form is the
First Person Shooter (FPS), which use unrealistic control
modes based on the controller of a console or the keyboard
and mouse of a computer. In this type of games, motion
capture systems could be used to achieve a highly realistic
experience. The disadvantages of these systems are their
high cost and the large space that often requires their installation. The aim of this paper is to study the possibility
of obtaining results with a similar level of realism using devices that capture only the orientation of the user’s head
and hand. This work has involved the complete development of an immersive FPS game. Together with it, new
models of control have been introduced, using both the position and orientation of the user as well as the orientation
only. To evaluate these models, an experiment has been
carried out with twenty-four users, who tested the system
and expressed their opinion on it.
5. Muñoz, M. A., Martínez, J., Molina, J. P., González, P., & Fernández-Caballero, A. (2013). Evaluation of a 3D Video Conference System Based on Multi-camera Motion Parallax. In 5th International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2013 (pp. 159–168). Mallorca, Spain.
Abstract: Video conference systems have evolved little regarding 3D vision. An exception is where it is necessary
to use special glasses for viewing 3D video. This work
is based primarily on the visual cue of motion parallax.
Motion parallax consists in harnessing the motion of the
observer, and offering a different view of the observed environment depending on his/her position to get some 3D
feeling. Based on this idea, a client-server system has been
developed to create a video conference system. On the
client side, a camera that sends images to the server is used.
The server processes the images to capture user movement
from detecting the position of the face. Depending on this
position, an image is composed using multiple cameras
available on the server side. Thanks to this image composition, and depending on the user's standpoint, a 3D feeling is
achieved. Importantly, the 3D effect is experienced without
the use of glasses or special screens. Further, various composition models or change modes between cameras have
been included to analyze which of them achieves a greater
improvement of the 3D effect.
BIBLIOGRAPHY
[1] Engineering Acoustics Inc. URL http://www.atactech.com. (Cited on pages 24 and 25.)
[2] H3D Open Source Haptics. URL www.h3dapi.org. (Cited
on page 98.)
[3] Immersion Corporation. URL http://www.immersion.com/. (Cited on pages 79 and 80.)
[4] Mide. URL http://www.mide.com. (Cited on pages 15, 16,
and 17.)
[5] Sensable. URL http://www.sensable.com. (Cited on page 98.)
[6] CyberGlove Systems, 2012. URL http://www.cyberglovesystems.com/. (Cited on page 36.)
[7] Dennis Allerkamp, Guido Böttcher, Franz-Erich Wolter,
Alan C. Brady, Jianguo Qu, and Ian R. Summers. A vibrotactile approach to tactile rendering. The Visual Computer, 23(2):97–108, July 2006. ISSN 0178-2789. doi:
10.1007/s00371-006-0031-5. (Cited on page 96.)
[8] Y. Bar-Cohen, C. Mavroidis, M. Bouzit, B. Dolgin, D.L.
Harm, G.E. Kopchok, and R. White. Virtual reality robotic
telesurgery simulations using MEMICA haptic system. In
Proc. SPIE 8th Annual International Symposium on Smart
Structures and Materials, number 4329, pages 5–8. Citeseer,
2001. doi: 10.1117/12.432667. (Cited on page 22.)
[9] Olivier Bau, Uros Petrevski, and Wendy Mackay. BubbleWrap: A Textile-Based Electromagnetic Haptic Display.
In Proceedings of the 27th international conference extended
abstracts on Human factors in computing systems - CHI EA
’09, page 3607, New York, New York, USA, 2009. ACM
Press. ISBN 9781605582474. doi: 10.1145/1520340.1520542.
(Cited on pages 23 and 25.)
[10] M. Benali-Khoudja, M. Hafez, and A. Kheddar. VITAL: An
electromagnetic integrated tactile display. Displays, 28(3):
133–144, 2007. ISSN 0141-9382. doi: 10.1016/j.displa.2007.
04.013. (Cited on page 24.)
[11] JC Bliss. A relatively high-resolution reading aid for the
blind. IEEE Transactions on Man-Machine Systems, 10(1):1–9,
1969. (Cited on pages 15 and 16.)
[12] Aaron Bloomfield and Norman I. Badler. Collision Awareness Using Vibrotactile Arrays. In 2007 IEEE Virtual Reality
Conference, pages 163–170. IEEE, 2007. ISBN 1-4244-0905-5. doi: 10.1109/VR.2007.352477. (Cited on pages 4, 34,
and 41.)
[13] S.D. Bolboaca, L. Jäntschi, and R.E. Sestras. Statistical Approaches in Analysis of Variance: from Random Arrangements to Latin Square Experimental Design. Leonardo Journal of Sciences, 8(15):71–82, 2009. (Cited on page 102.)
[14] M Bovenzi. Exposure-response relationship in the handarm vibration syndrome: an overview of current epidemiology research. International Archives of Occupational and
Environmental Health, 71:509–519, 1998. (Cited on page 10.)
[15] Jean-pierre Bresciani, Knut Drewing, and Marc O Ernst.
Human Haptic Perception and the Design of HapticEnhanced Virtual Environments. In Antonio Bicchi, Martin Buss, Marc O. Ernst, and Angelika Peer, editors, The
Sense of Touch and its Rendering, pages 61–106. Springer
Berlin Heidelberg, 2008. ISBN 978-3-540-79034-1. doi:
10.1007/978-3-540-79035-8\_5. (Cited on page 126.)
[16] Lorna M Brown and Topi Kaaresoja. Feel Who’s Talking:
Using Tactons for Mobile Phone Alerts. In CHI ’06 extended
abstracts on Human factors in computing systems - CHI EA ’06,
page 604, New York, New York, USA,
2006. ACM Press. ISBN 1595932984. doi: 10.1145/1125451.
1125577. (Cited on page 31.)
[17] G. Burdea and P. Coiffet. Virtual reality technology, volume 12. John Wiley & Sons, Inc., New York, USA, 2 edition, 2003. (Cited on page 15.)
[18] Sylvain Cardin, Frédéric Vexo, and Daniel Thalmann.
Vibro-tactile interface for enhancing piloting abilities during long term flight. Journal of Robotics and Mechatronics,
pages 1–12, 2006. (Cited on page 34.)
[19] Tom Carter, Sue Ann Seah, Benjamin Long, Bruce
Drinkwater, and Sriram Subramanian. UltraHaptics:
Multi-Point Mid-Air Haptic Feedback for Touch Surfaces.
In Proceedings of the 26th annual ACM symposium on User
interface software and technology - UIST ’13, pages 505–514,
New York, New York, USA, 2013. ACM Press. ISBN
9781450322683. doi: 10.1145/2501988.2502018. (Cited on
page 22.)
[20] A Chang, S. O’Modhrain, R Jacob, E Gunther, and H Ishii.
ComTouch: design of a vibrotactile communication device.
In Proceedings of the 4th conference on Designing interactive
systems: processes, practices, methods, and techniques, page
320. ACM, 2002. (Cited on pages 23 and 24.)
[21] E.Y. Chen and B.A. Marcus. Exos slip display research and
development. Dynamic Systems and Control, 55-1:265, 1994.
(Cited on page 26.)
[22] LT Cheng. Design of a vibrotactile feedback virtual testbed.
In CCECE ’97. Canadian Conference on Electrical and Computer Engineering. Engineering Innovation: Voyage of Discovery. Conference Proceedings, volume 1, pages 173–176. IEEE,
1997. ISBN 0-7803-3716-6. doi: 10.1109/CCECE.1997.
614818. (Cited on page 36.)
[23] LT Cheng, R Kazman, and John Robinson. Vibrotactile
feedback in delicate virtual reality operations. Proceedings
of the fourth ACM . . . , 1997. (Cited on page 36.)
[24] Justin Cohen, Masataka Niwa, Robert W. Lindeman,
Haruo Noma, Yasuyuki Yanagida, and Kenichi Hosaka.
A closed-loop tactor frequency control system for vibrotactile feedback. In CHI ’05 extended abstracts on Human
factors in computing systems, page 1296, New York, USA,
2005. ACM Press. ISBN 1595930027. doi: 10.1145/1056808.
1056900. (Cited on pages 36, 37, and 43.)
[25] D. Cuartielles, A. Göransson, T. Olsson, and S. Stenslie.
Developing Visual Editors for High-Resolution Haptic Patterns. In The Seventh International Workshop on Haptic and
Audio Interaction Design, pages 42–44, Lund, Sweden, 2012.
(Cited on pages 62 and 78.)
[26] T. Debus, T. Becker, P. Dupont, T.J. Jang, and R. Howe.
Multichannel vibrotactile display for sensory substitution
during teleoperation. Design, pages 28–31, 2001. (Cited on
page 36.)
[27] T. Debus, T.J. Jang, P. Dupont, and R.D. Howe. Multichannel vibrotactile display for teleoperated assembly. In
IEEE International Conference on Robotics and Automation,
2002. Proceedings. ICRA'02, volume 1, pages 592–597. IEEE,
2002. ISBN 0-7803-7272-7. doi: 10.1109/ROBOT.2002.
1013423. (Cited on page 14.)
[28] J.T. Dennerlein, P.A. Millman, and R.D. Howe. Vibrotactile feedback for industrial telemanipulators. In Sixth Annual Symposium on Haptic Interfaces for Virtual Environment
and Teleoperator Systems, ASME International Mechanical Engineering Congress and Exposition, volume 61, pages 189–
195, 1997. (Cited on page 36.)
[29] Lionel Dominjon, A. Lecuyer, J. Burkhardt, P. Richard, and
S. Richir. Influence of control/display ratio on the perception of mass of manipulated objects in virtual environments. In IEEE Proceedings. VR 2005. Virtual Reality,
2005., pages 19–25. IEEE, 2005. ISBN 0-7803-8929-8. doi:
10.1109/VR.2005.1492749. (Cited on pages 126 and 135.)
[30] J Edmison, M Jones, Z Nakad, and T Martin. Using piezoelectric materials for wearable electronic textiles. In ISWC,
volume 7, page 0041. Published by the IEEE Computer Society, 2002. (Cited on page 14.)
[31] Mohamad Eid, Sheldon Andrews, Atif Alamri, and Abdulmotaleb El Saddik. HAMLAT: A HAML-based Authoring
Tool for Haptic Application Development. In Eurohaptics,
volume 5024, pages 857–866, 2008. (Cited on page 61.)
[32] R R Ellis and S J Lederman. The role of haptic versus
visual volume cues in the size-weight illusion. Perception
& psychophysics, 53(3):315–24, March 1993. ISSN 0031-5117.
(Cited on page 125.)
[33] R R Ellis and S J Lederman. The material-weight illusion revisited. Perception & psychophysics, 61:1564–1576,
1999. ISSN 0031-5117. doi: 10.3758/BF03213118. (Cited
on page 125.)
[34] E.T. Enikov, K.V. Lazarov, and G.R. Gonzales. Microelectrical Mechanical Systems Actuator Array for Tactile Communication. Computers Helping People with Special Needs,
pages 245–259, 2002. (Cited on page 17.)
[35] M.J. Enriquez and K.E. MacLean. The hapticon editor:
a tool in support of haptic communication research. In
11th Symposium on Haptic Interfaces for Virtual Environment
and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings.,
pages 356–362. IEEE Comput. Soc, 2003. ISBN 0-7695-1890-7. doi: 10.1109/HAPTIC.2003.1191310. (Cited on pages 61
and 77.)
[36] JBF Van Erp. Guidelines for the use of vibro-tactile displays in human computer interaction. Proceedings of Eurohaptics, 2002. (Cited on page 37.)
[37] Alois Ferscha, Bernadette Emsenhuber, and Andreas
Riener. Vibro-tactile space-awareness. Human Factors, 2008.
(Cited on page 34.)
[38] Neil Forrest and Steven A Wall. ProtoHaptic : Facilitating
Rapid Interactive Prototyping of Haptic Environments. In
Proceedings of HAID’06, pages 18–21. Springer, 2006. (Cited
on page 61.)
[39] V. Frati and D. Prattichizzo. Using Kinect for hand
tracking and rendering in wearable haptics. 2011 IEEE
World Haptics Conference, pages 317–321, June 2011. doi:
10.1109/WHC.2011.5945505. (Cited on pages 26 and 27.)
[40] T. Fukuda, H. Morita, F. Arai, H. Ishihara, and H. Matsuura. Micro resonator using electromagnetic actuator
for tactile display. In Micromechatronics and Human Science, 1997. Proceedings of the 1997 International Symposium
on, pages 143–148, 1997. (Cited on page 23.)
[41] J.L. Gabbard, D. Hix, and J.E. Swan. User-centered design
and evaluation of virtual environments. IEEE Computer
Graphics and Applications, 19(6):51–59, 1999. ISSN 0272-1716.
doi: 10.1109/38.799740. (Cited on page 96.)
[42] Péter Galambos. Vibrotactile Feedback for Haptics and
Telemanipulation: Survey, Concept and Experiment. 9(1):41–65, 2012. (Cited on pages 4 and 36.)
[43] A.S. García, J.P. Molina, P. González, D. Martínez, and
J. Martínez. An experimental study of collaborative interaction tasks supported by awareness and multimodal
feedback. In Proceedings of the 8th International Conference
on Virtual Reality Continuum and its Applications in Industry VRCAI ’09, page 77, Tokyo, Japan, 2009. ACM Press. ISBN
9781605589121. doi: 10.1145/1670252.1670270. (Cited on
page xi.)
[44] A.S. García, J.P. Molina, P. González, D. Martínez, and
J. Martínez. A study of multimodal feedback to support
collaborative manipulation tasks in virtual worlds. In Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology - VRST ’09, pages 259–260, New York,
New York, USA, 2009. ACM Press. ISBN 9781605588698.
doi: 10.1145/1643928.1643994. (Cited on page xi.)
[45] Elias Giannopoulos, Ausias Pomes, and Mel Slater. Touching the Void: Exploring Virtual Objects through a Vibrotactile Glove. publicationslist.org, 11(3):19–24, 2012. (Cited
on pages 4, 36, and 114.)
[46] J J Gibson. Observations on active touch. Psychological review, 69(6):477–91, November 1962. ISSN 0033-295X.
(Cited on page 113.)
[47] S J Graham, W R Staines, A Nelson, D B Plewes, and W E
McIlroy. New devices to deliver somatosensory stimuli
during functional MRI. Magnetic resonance in medicine : official journal of the Society of Magnetic Resonance in Medicine /
Society of Magnetic Resonance in Medicine, 46:436–442, 2001.
ISSN 0740-3194. (Cited on pages 4 and 36.)
[48] Saul Greenberg and Bill Buxton. Usability evaluation considered harmful (some of the time). Proceedings of the twenty-sixth annual CHI conference on Human factors in computing
systems - CHI ’08, page 111, 2008. doi: 10.1145/1357054.
1357074. (Cited on page 72.)
[49] M. J. Griffin. Vibration Discomfort. In Handbook of Human
Vibration, chapter 3, page 988. 1996. ISBN 978-0-12-303040-5. (Cited on page 10.)
[50] Netta Gurari, Kathryn Smith, Manu Madhav, and Allison M. Okamura. Environment discrimination with vibration feedback to the foot, arm, and fingertip. In
2009 IEEE International Conference on Rehabilitation Robotics,
pages 343–348, Kyoto, June 2009. IEEE. ISBN 978-1-4244-3788-7. doi: 10.1109/ICORR.2009.5209508. (Cited on
pages 4, 24, and 96.)
[51] C. Hasser and J.M. Weisenberger. Preliminary Evaluation
of a Shape-Memory Alloy Tactile Feedback Display. In
Advances in robotics, mechatronics and haptic interfaces, 1993,
pages 73–80, New Orleans, Louisiana, 1993. American Society of Mechanical Engineers. (Cited on page 17.)
[52] Lauren Hayes. Vibrotactile Feedback-Assisted Performance. In Proceedings of the International Conference on New
Interfaces for Musical Expression, pages 72–75, Oslo, Norway,
2011. (Cited on page 36.)
[53] Vincent Hayward and Karon Maclean. Do it yourself haptics: part I. IEEE Robotics & Automation Magazine, 14(4):
88–104, December 2007. ISSN 1070-9932. doi: 10.1109/
M-RA.2007.907921. (Cited on page 4.)
[54] Andreas Hein and Melina Brell. conTACT-A Vibrotactile
Display for Computer Aided Surgery. In Proceedings of the
Second Joint EuroHaptics and Symposium on Haptic Interfaces
for Virtual Environment and Teleoperator Systems, pages 531–
536, 2007. ISBN 0769527388. (Cited on page 24.)
[55] A Hellström. Sensation weighting in comparison and discrimination of heaviness. Journal of experimental psychology.
Human perception and performance, 26:6–17, 2000. ISSN 0096-1523. doi: 10.1037/0096-1523.26.1.6. (Cited on page 125.)
[56] Cristy Ho, Hong Z. Tan, and Charles Spence. Using spatial vibrotactile cues to direct visual attention in driving
scenes. Transportation Research Part F: Traffic Psychology and
Behaviour, 8(6):397–412, November 2005. ISSN 13698478.
doi: 10.1016/j.trf.2005.05.002. (Cited on page 33.)
[57] JH Hogema and SC De Vries. A Tactile Seat for Direction
Coding in Car Driving: Field Evaluation. IEEE Transactions
on Haptics, 2(4):181–188, 2009. (Cited on page 4.)
[58] M. Hollins, S J Bensmaïa, and E A Roy. Vibrotaction and
texture perception. Behavioural brain research, 135(1-2):51–6,
September 2002. ISSN 0166-4328. (Cited on page 95.)
[59] Kyungpyo Hong, Jaebong Lee, and Seungmoon Choi.
Demonstration-based vibrotactile pattern authoring. In
Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction - TEI ’13, volume 1, page
219, New York, New York, USA, 2013. ACM Press. ISBN
9781450318983. doi: 10.1145/2460625.2460660. (Cited on
pages 62 and 78.)
[60] G. Inaba and K. Fujita. A Pseudo-Force-Feedback Device
by Fingertip Tightening for Multi-Finger Object Manipulation. In Proceedings of EuroHaptics, pages 275–278, Paris,
France, 2006. (Cited on page 26.)
[61] S. Ino, S. Shimizu, T. Odagawa, M. Sato, M. Takahashi,
T. Izumi, and T. Ifukube. A tactile display for presenting
quality of materials by changing the temperature of skin
surface. In Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication, pages 220–224.
IEEE, 1993. ISBN 0-7803-1407-7. doi: 10.1109/ROMAN.
1993.367718. (Cited on page 16.)
[62] Ali Israr and Ivan Poupyrev. Tactile Brush: Drawing on
Skin with a Tactile Grid Display. In Proceedings of the 2011
annual conference on Human factors in computing systems CHI ’11, page 2019, New York, New York, USA, 2011.
ACM Press. ISBN 9781450302289. doi: 10.1145/1978942.
1979235. (Cited on page 36.)
[63] G Jansson. Basic issues concerning visually impaired people’s use of haptic displays. In The 3rd International conference on disability, virtual reality and associated technologies,
pages 33–38, 2000. (Cited on pages 113, 114, 115, and 121.)
[64] G Jansson. The Potential Importance of Perceptual FillingIn for Haptic Perception of Virtual Object Form. Eurohaptics, Birmingham, UK, 2001. (Cited on pages 113 and 114.)
[65] Gunnar Jansson. The Importance of Available Exploration
Methods for the Efficiency of Haptic Displays. In Symposium on Multimodal Communication, Linköping, Sweden,
1999. (Cited on page 114.)
[66] Gunnar Jansson and A. Ivås. Can the efficiency of a haptic display be increased by short-time practice in exploration? In Haptic Human-Computer Interaction, pages 88–
97, Glasgow, UK, 2001. Springer Berlin Heidelberg. doi:
10.1007/3-540-44589-7\_10. (Cited on page 114.)
[67] Gunnar Jansson and K Larsson. Identification of haptic
virtual objects with different degrees of complexity. In
Proceedings of Eurohaptics 2002 (EH’02), pages 57–60, Edinburgh, UK, 2002. (Cited on page 114.)
[68] R. S. Johansson. Tactile sensibility in the human hand: receptive field characteristics of mechanoreceptive units in
the glabrous skin area. The Journal of physiology, 281:101–
25, August 1978. ISSN 0022-3751. (Cited on page 96.)
[69] A.D. Johnson. Shape-Memory Alloy: Tactical Feedback
Actuator. Technical Review AAMRL-TR-90-039, 1990. (Cited
on page 17.)
[70] K. O. Johnson. Neural Basis of Haptic Perception. In
H. Pashler and S. Yantis, editors, Stevens’ Handbook of Experimental Psychology, pages 537–583. John Wiley & Sons,
Inc., Hoboken, NJ, USA, 2002. (Cited on pages 8 and 10.)
[71] Markus Jonas. Tactile Editor: A Prototyping Tool to Design and Test Vibrotactile Patterns. Master’s thesis, RWTH
Aachen University, 2008. (Cited on pages 62, 78, and 80.)
[72] M Jungmann and H.F. Schlaak. Miniaturised electrostatic
tactile display with high structural compliance. In Proceedings of the Conference Eurohaptics. Citeseer, 2002. (Cited on
pages 20 and 21.)
[73] H. Kajimoto, N. Kawakami, T. Maeda, and S. Tachi.
Electro-tactile display with force feedback. Proc. World
Multiconference on Systemics, Cybernetics and Informatics
(SCI2001), 11:95–99, 2001. (Cited on page 19.)
[74] H. Kajimoto, N. Kawakami, S. Tachi, and I. Masahiko.
SmartTouch: Electric Skin to Touch the Untouchable. IEEE
computer graphics and applications, 24(1):36–43, 2004. ISSN
0272-1716. (Cited on page 19.)
[75] A. M. Kappers, J. J. Koenderink, and I. Lichtenegger. Haptic identification of curved surfaces. Perception & psychophysics, 56(1):53–61, July 1994. ISSN 0031-5117. (Cited
on page 114.)
[76] Pulkit Kapur, Mallory Jensen, Laurel J. Buxbaum, Steven A.
Jax, and Katherine J. Kuchenbecker. Spatially distributed
tactile feedback for kinesthetic motion guidance. In 2010
IEEE Haptics Symposium, pages 519–526. IEEE, March
2010. ISBN 978-1-4244-6821-8. doi: 10.1109/HAPTIC.2010.
5444606. (Cited on page 32.)
[77] G.L. Kenaley and M.R. Cutkosky. Electrorheological fluidbased robotic fingers with tactile sensing. In Proc. IEEE
International Conference on Robotics and Automation, pages
132–136, 1989. (Cited on page 21.)
[78] Thorsten A. Kern, editor. Engineering Haptic Devices.
Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. ISBN
978-3-540-88247-3. doi: 10.1007/978-3-540-88248-0. (Cited
on page 116.)
[79] K H Kim, Y J Nam, R Yamane, and M K Park.
Smart mouse: 5-DOF haptic hand master using magnetorheological fluid actuators. Journal of Physics: Conference
Series, 149:012062, 2009. ISSN 1742-6596. doi: 10.1088/
1742-6596/149/1/012062. (Cited on page 22.)
[80] Y. Kim, J. Cha, I. Oakley, and J. Ryu. Exploring tactile
movies: An initial tactile glove design and concept evaluation. IEEE MultiMedia, 17(3):33–44, 2010. (Cited on
pages 34, 35, and 36.)
[81] Chih-Hung King, Martin O Culjat, Miguel L Franco,
James W Bisley, Erik Dutson, and Warren S Grundfest. Optimization of a pneumatic balloon tactile display for robotassisted surgery based on human perception. IEEE transactions on bio-medical engineering, 55(11):2593–600, November
2008. ISSN 1558-2531. doi: 10.1109/TBME.2008.2001137.
(Cited on page 13.)
[82] Jo Ann S. Kinney and S. M. Luria. Conflicting visual
and tactual-kinesthetic stimulation, 1970. ISSN 0031-5117.
(Cited on page 126.)
[83] AE Kirkpatrick and SA Douglas. A shape recognition
benchmark for evaluating usability of a haptic environment. In Haptic Human-Computer Interaction, pages 151–
156, Glasgow, UK, 2000. doi: 10.1007/3-540-44589-7\_16.
(Cited on page 114.)
[84] R L Klatzky, S J Lederman, and V a Metzger. Identifying objects by touch: an expert system. Perception &
psychophysics, 37(4):299–302, April 1985. ISSN 0031-5117.
(Cited on pages 113 and 114.)
[85] D. Klein, H. Freimuth, G.J. Monkman, S. Egersdörfer,
A. Meier, H. Böse, M. Baumann, H. Ermert, and O.T.
Bruhns. Electrorheological tactel elements. Mechatronics,
2004. (Cited on page 21.)
[86] D.A. Kontarinis and R.D. Howe. Tactile display of vibratory information in teleoperation and virtual environments. Presence: Teleoperators and Virtual Environments, 4
(4):387–402, 1995. ISSN 1054-7460. (Cited on page 36.)
[87] D.A. Kontarinis, J.S. Son, W.J. Peine, and R.D. Howe. A
Tactile Shape Sensing and Display System for Teleoperated
Manipulation. In IEEE International Conference on Robotics
and Automation, pages 641–641. Institute of Electrical Engineers Inc (IEEE), 1995. (Cited on pages 17,
23, and 36.)
[88] I.M. Koo, K. Jung, J.C. Koo, J.D. Nam, Y.K. Lee, and H.R.
Choi. Development of soft-actuator-based wearable tactile
display. IEEE Transactions on Robotics, 24(3):549–558, 2008.
(Cited on pages 20 and 21.)
[89] Katherine J Kuchenbecker, Jonathan Fiene, and Günter
Niemeyer. Improving contact realism through event-based
haptic feedback. IEEE transactions on visualization and computer graphics, 12(2):219–30, 2006. ISSN 1077-2626. doi:
10.1109/TVCG.2006.32. (Cited on page 95.)
[90] Liam Kurmos, Nigel W John, and Jonathan C. Roberts. Integration of Haptics with Web3D using the SAI. In Web3D,
volume 1, pages 25–32. ACM, 2009. ISBN 9781605584324.
(Cited on page 61.)
[91] K.U. Kyung, S.W. Son, D.S. Kwon, and M.S. Kim. Design
of an integrated tactile display system. In 2004 IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04, volume 1, pages 776–781, 2004. (Cited
on pages 15 and 16.)
[92] K.U. Kyung, M. Ahn, D.S. Kwon, and M.A. Srinivasan.
A compact broadband tactile display and its effectiveness
in the display of tactile form. In Eurohaptics Conference,
2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005.
First Joint, pages 600–601. IEEE, 2005. ISBN 0-7695-2310-2.
doi: 10.1109/WHC.2005.4. (Cited on page 15.)
[93] K.U. Kyung, J.Y. Lee, and J.S. Park. Comparison of force,
tactile and vibrotactile feedback for texture representation
using a combined haptic feedback interface. In Ian Oakley and Stephen Brewster, editors, Haptic and Audio Interaction Design, volume 4813, pages 34–43. Springer Berlin
/ Heidelberg, Banff, 2007. ISBN 978-3-540-76701-5. doi:
10.1007/978-3-540-76702-2\_5. (Cited on pages 96 and 97.)
[94] S J Lederman and R L Klatzky. Hand movements: a window into haptic object recognition. Cognitive psychology, 19
(3):342–68, July 1987. ISSN 0010-0285. (Cited on pages 85
and 113.)
[95] SJ Lederman and RL Klatzky. Haptic perception: A tutorial. Attention, Perception, & Psychophysics, 71(7):1439–1459,
2009. doi: 10.3758/APP. (Cited on page 3.)
[96] Susan J Lederman and Roberta L Klatzky. Haptic identification of common objects: effects of constraining the
manual exploration process. Perception & psychophysics, 66
(4):618–28, May 2004. ISSN 0031-5117. (Cited on page 113.)
[97] Paul Lemmens, Floris Crompvoets, Dirk Brokken, Jack
van den Eerenbeemd, and Gert-Jan de Vries. A bodyconforming tactile jacket to enrich movie viewing. World
Haptics 2009 - Third Joint EuroHaptics conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 7–12, March 2009. doi: 10.1109/WHC.
2009.4810832. (Cited on pages 34 and 35.)
[98] RW Lindeman and JR Cutler. Controller design for a
wearable, near-field haptic display. In 11th Symposium
on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings., pages 397–
403. IEEE Comput. Soc, 2003. ISBN 0-7695-1890-7. doi:
10.1109/HAPTIC.2003.1191323. (Cited on page 41.)
[99] R.W. Lindeman, J.L. Sibert, E. Mendez-Mendez, S. Patil,
and D. Phifer. Effectiveness of directional vibrotactile
cuing on a building-clearing task. In Proceedings of the
SIGCHI conference on Human factors in computing systems,
pages 271–280. ACM, 2005. (Cited on pages 4 and 33.)
[100] Y. Liu, R.I. Davidson, P.M. Taylor, J.D. Ngu, and J.M.C.
Zarraga. Single cell magnetorheological fluid based tactile
display. Displays, 26(1):29–35, January 2005. ISSN 01419382.
doi: 10.1016/j.displa.2004.10.002. (Cited on page 21.)
[101] K.E. MacLean and Vincent Hayward. Do It Yourself Haptics: Part II Interaction design. IEEE Robotics and Automation Magazine, 15(1):104–119, 2008. (Cited on page 4.)
[102] N. Mahmoudian, M.R. Aagaah, G.N. Jazar, and M. Mahinfalah. Dynamics of a micro electro mechanical system (MEMS). Proceedings. 2004 International Conference on
MEMS, NANO and Smart Systems, 2004. ICMENS 2004.,
pages 614–619, 2004. doi: 10.1109/ICMENS.2004.1332392.
(Cited on page 17.)
[103] D. Martínez, A.S. García, J. Martínez, J.P. Molina, and
P. González. A Model of Interaction for CVEs Based on
the Model of Human Communication. Journal of Universal Computer Science, 14(19):3071–3084, 2008. (Cited on
page xi.)
[104] D. Martínez, J. Martínez, A.S. García, J.P. Molina, and
P. González. Diseño e Implementación de un Modelo de
Interacción para CVE’s basado en el Modelo de Comunicación Humana. In Interacción 2008, volume 341, page 50,
2008. (Cited on page xi.)
[105] D. Martínez, J.P. Molina, A.S. García, J. Martínez, and
P. González. AFreeCA: Extending the Spatial Model of
Interaction. In 2010 International Conference on Cyberworlds,
pages 17–24. IEEE, October 2010. ISBN 978-1-4244-8301-3.
doi: 10.1109/CW.2010.63. (Cited on page xi.)
[106] J. Martínez, A.S. García, D. Martínez, and P. González.
Desarrollo de un Guante de Datos con Retorno Háptico Vibro-táctil Basado en Arduino. In Interacción 2009
- Jornadas de Realidad Virtual, pages 1–10, Barcelona, 2009.
(Cited on page xi.)
[107] J. Martínez, A.S. García, and D. Martínez. Texture Recognition: Evaluating Force, Vibrotactile and Real Feedback. In
Human-Computer Interaction - INTERACT 2011, pages 612–
615, 2011. doi: 10.1007/978-3-642-23768-3\_95. (Cited on
page xi.)
[108] J. Martínez, A.S. García, D. Martínez, J.P. Molina, and
P. González. Comparación de Retorno de Fuerza, Vibrotáctil y Estimulación Directa para la Detección de Texturas.
In Actas del XII Congreso Internacional de Interacción PersonaOrdenador, Lisboa, Portugal, 2011. (Cited on page xi.)
[109] J. Martínez, D. Martínez, J.P. Molina, P. González, and A.S.
García. Comparison of Force and Vibrotactile Feedback
with Direct Stimulation for Texture Recognition. In 2011
International Conference on Cyberworlds, pages 62–68, Banff,
Canada, October 2011. IEEE. ISBN 978-1-4577-1453-5. doi:
10.1109/CW.2011.23. (Cited on pages xi and 101.)
[110] J. Martínez, A.S. García, J.P. Molina, D. Martínez, and
P. González. An empirical evaluation of different haptic
feedback for shape and texture recognition. The Visual
Computer, 29(2):111–121, May 2013. ISSN 0178-2789. doi:
10.1007/s00371-012-0716-x. (Cited on page xi.)
[111] J. Martínez, A.S. García, Miguel Oliver, J.P. Molina, and
P. González. VITAKI: A Vibrotactile Prototyping Toolkit
for Virtual Reality and Videogames. International Journal
of Human-Computer Interaction (IJHCI) (under review), 2014.
(Cited on page xi.)
[112] J. Martínez, A.S. García, Miguel Oliver, J.P. Molina, and
P. González. Identification of 3D Geometric Forms with a
Vibrotactile Glove. IEEE Computer Graphics and Applications
(under review), 2014. (Cited on page xi.)
[113] TH Massie and JK Salisbury. The phantom haptic interface: A device for probing virtual objects. In Symposium
on Haptic Interfaces for Virtual Environment and Teleoperator
Systems, volume 55, Chicago, IL, 1994. (Cited on page 61.)
[114] K. Minamizawa and S. Fukamachi. Gravity grabber: wearable haptic display to present virtual mass sensation. In
ACM SIGGRAPH 2007 emerging technologies, New York,
USA, 2007. ACM. doi: 10.1145/1278280.1278289. (Cited
on pages 26 and 27.)
[115] Kouta Minamizawa, Yasuaki Kakehi, Masashi Nakatani,
Soichiro Mihara, and Susumu Tachi. TECHTILE toolkit:
a prototyping tool for design and education of haptic
media. In Proceedings of the 2012 Virtual Reality International Conference on - VRIC ’12, page 1, New York, New
York, USA, 2012. ACM Press. ISBN 9781450312431. doi:
10.1145/2331714.2331745. (Cited on page 77.)
[116] Margaret Minsky, Ouh-young Ming, Oliver Steele, Frederick P. Brooks, and Max Behensky. Feeling and seeing:
issues in force display. In Proceedings of the 1990 symposium
on Interactive 3D graphics - SI3D ’90, pages 235–241, New
York, New York, USA, 1990. ACM Press. ISBN 0897913515.
doi: 10.1145/91385.91451. (Cited on page 95.)
[117] G.J. Monkman. An electrorheological tactile display. Presence: Teleoperators and Virtual Environments, 1(2):219–228,
1992. (Cited on page 21.)
[118] John Morrell and Kamil Wasilewski. Design and evaluation of a vibrotactile seat to improve spatial awareness
while driving. In 2010 IEEE Haptics Symposium, pages
281–288. IEEE, March 2010. ISBN 978-1-4244-6821-8. doi:
10.1109/HAPTIC.2010.5444642. (Cited on pages 4 and 34.)
[119] Yuichi Muramatsu, Mihoko Niitsuma, and Trygve Thomessen. Perception of tactile sensation using vibrotactile
glove interface. In 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), pages
621–626. IEEE, December 2012. ISBN 978-1-4673-5188-1.
doi: 10.1109/CogInfoCom.2012.6422054. (Cited on pages 4
and 36.)
[120] A.M. Murray, R.L. Klatzky, and P.K. Khosla. Psychophysical Characterization and Testbed Validation of a Wearable
Vibrotactile Glove for Telemanipulation. Presence: Teleoperators & Virtual Environments, 12(2):156–182, 2003. (Cited on
pages 10, 36, and 42.)
[121] Saskia K Nagel, Christine Carl, Tobias Kringe, Robert
Märtin, and Peter König. Beyond sensory substitution –
learning the sixth sense. Technical report, 2005. (Cited on
page 34.)
[122] Mathias Nordvall. SIGHTLENCE: Haptics for Computer
Games. PhD thesis, Linköping University, 2012. (Cited on
page 35.)
[123] Shogo Okamoto, Masashi Konyo, Satoshi Saga, and
Satoshi Tadokoro. Detectability and Perceptual Consequences of Delayed Feedback in a Vibrotactile Texture Display. IEEE Transactions on Haptics, 2(2):73–84, 2009. (Cited
on pages 44 and 58.)
[124] AM Okamura, JT Dennerlein, and R.D. Howe. Vibration
feedback models for virtual environments. In Proceedings.
1998 IEEE International Conference on Robotics and Automation, volume 1, pages 674–679, Leuven, 1998. IEEE. ISBN
0-7803-4300-X. doi: 10.1109/ROBOT.1998.677050. (Cited
on pages 95 and 96.)
[125] Dan R. Olsen. Evaluating user interface systems research. In Proceedings of the 20th annual ACM symposium
on User interface software and technology - UIST ’07, page
251, New York, New York, USA, 2007. ACM Press. ISBN
9781595936792. doi: 10.1145/1294211.1294256. (Cited on
pages 71, 72, and 81.)
[126] Tatsuya Ooka and Kinya Fujita. Virtual object manipulation system with substitutive display of tangential force
and slip by control of vibrotactile phantom sensation.
In 2010 IEEE Haptics Symposium, number c, pages 215–
218. IEEE, March 2010. ISBN 978-1-4244-6821-8. doi:
10.1109/HAPTIC.2010.5444652. (Cited on page 126.)
[127] Mauricio Orozco and Juan Silva. The Role of Haptics in
Games. In Abdulmotaleb El Saddik, editor, Haptics Rendering and Applications, chapter 11, pages 217–235. 2012. ISBN
978-953-307-897-7. doi: 10.5772/32809. (Cited on page 35.)
[128] M.V. Ottermo, O. Stavdahl, and T.A. Johansen. Electromechanical Design of a Miniature Tactile Shape Display for
Minimally Invasive Surgery. In World Haptics, pages 561–
562, 2005. (Cited on page 26.)
[129] S. Pabon, E. Sotgiu, R. Leonardi, C. Brancolini, O. Portillo-Rodriguez, A. Frisoli, and M. Bergamasco. A data-glove
with vibro-tactile stimulators for virtual social interaction
and rehabilitation. Presence 2007, pages 345–348, 2007.
(Cited on pages 4 and 36.)
[130] Sabrina Panëels, Margarita Anastassova, and Lucie Brunet.
TactiPEd: Easy Prototyping of Tactile Patterns. pages 228–
245, Cape Town, South Africa, 2013. Springer Berlin Heidelberg. doi: 10.1007/978-3-642-40480-1_15. (Cited on
pages 62, 78, and 80.)
[131] Sabrina A. Panëels, Jonathan C. Roberts, and Peter J.
Rodgers. HITPROTO: a Tool for the Rapid Prototyping
of Haptic Interactions for Haptic Data Visualization. In
Haptics Symposium 2010 IEEE, pages 261–268, 2010. ISBN
9781424468225. (Cited on page 61.)
[132] J. Pasquero and V. Hayward. STRESS: A practical tactile
display system with one millimeter spatial resolution and
700 Hz refresh rate. In Proc. Eurohaptics, volume 2003,
pages 94–110. Citeseer, 2003. (Cited on pages 14 and 16.)
[133] E.M. Petriu and W.S. McMath. Tactile Operator Interface for
Semi-Autonomous Robotic Applications. In Proc. Int. Symposium on Artificial Intell. Robotics Automat. in Space, pages
77–82, France, 1992. (Cited on page 23.)
[134] E Piateski and L Jones. Vibrotactile Pattern Recognition on
the Arm and Torso. In First Joint Eurohaptics Conference and
Symposium on Haptic Interfaces for Virtual Environment and
Teleoperator Systems, pages 90–95. IEEE, 2005. ISBN 0-7695-2310-2. doi: 10.1109/WHC.2005.143. (Cited on page 4.)
[135] B. Poppinga, M. Pielot, and S. Boll. Tacticycle: a tactile display for supporting tourists on a bicycle trip. In Proceedings
of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, page 41. ACM, 2009.
(Cited on page 28.)
[136] G Révész. Psychology and Art of the Blind. Longmans, Green
and Co, 1950. (Cited on page 113.)
[137] Andreas Riener and Alois Ferscha. Raising awareness
about space via vibro-tactile notifications. Smart Sensing
and Context, pages 235–245, 2008. (Cited on page 41.)
[138] G Robles-De-La-Torre. The Importance of the Sense of
Touch in Virtual and Real Environments. IEEE Multimedia, 13(3):24–30, July 2006. ISSN 1070-986X. doi: 10.1109/
MMUL.2006.69. (Cited on pages 3 and 95.)
[139] H E Ross and M F Reschke. Mass estimation and discrimination during brief periods of zero gravity. Perception
& psychophysics, 31:429–436, 1982. ISSN 0031-5117. doi:
10.3758/BF03204852. (Cited on page 125.)
[140] Michele F. Rotella, Kelleher Guerin, and Allison M. Okamura. HAPI Bands: A haptic augmented posture interface.
2012 IEEE Haptics Symposium (HAPTICS), pages 163–170,
March 2012. doi: 10.1109/HAPTIC.2012.6183785. (Cited
on page 32.)
[141] A.H. Rupert. An instrumentation solution for reducing spatial disorientation mishaps. IEEE Engineering in
Medicine and Biology Magazine, 19(2):71–80, 2000. ISSN 0739-5175. doi: 10.1109/51.827409. (Cited on page 33.)
[142] Jonghyun Ryu. Psychophysical Model for Vibrotactile Rendering in Mobile Devices. Presence: Teleoperators and Virtual Environments, 19(4):364–387, August 2010. ISSN 1054-7460. doi: 10.1162/PRES_a_00011. (Cited on pages 10
and 24.)
[143] Jonghyun Ryu and Seungmoon Choi. posVibEditor:
Graphical Authoring Tool of Vibrotactile Patterns. In Proceedings of the IEEE International Workshop on Haptic, Audio
and Visual Environments and Games, pages 120–125, October 2008. ISBN 9781424426690. (Cited on pages 62, 79,
and 80.)
[144] Jonghyun Ryu and Seungmoon Choi. Vibrotactile score:
A score metaphor for designing vibrotactile patterns. In
World Haptics 2009 - Third Joint EuroHaptics conference and
Symposium on Haptic Interfaces for Virtual Environment and
Teleoperator Systems, pages 302–307. IEEE, March 2009.
ISBN 978-1-4244-3858-7. doi: 10.1109/WHC.2009.4810816.
(Cited on pages 62 and 78.)
[145] Kenneth Salisbury, Francois Conti, and Federico Barbagli. Haptic Rendering: Introductory Concepts. IEEE Computer Graphics and Applications, (April):24–32, 2004. (Cited on pages 48, 49, and 91.)
[146] EL Sallnäs, K Rassmus-Gröhn, and C Sjöström. Supporting Presence in Collaborative Environments by Haptic
Force Feedback. ACM Transactions on Computer-Human Interaction (TOCHI), 7(4):461–476, 2000. (Cited on page 3.)
[147] I. Sarakoglou, N. Tsagarakis, and D.G. Caldwell. A
Portable Fingertip Tactile Feedback Array - Transmission
System Reliability and Modelling. In First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems, pages 547–548. IEEE,
2005. ISBN 0-7695-2310-2. doi: 10.1109/WHC.2005.17.
(Cited on page 26.)
[148] Mitsuru Sato, S. Ino, Naoki Yoshida, Takashi Izumi, and
Tohru Ifukube. Portable Pneumatic Actuator System Using MH Alloys, Employed as Assistive Devices. Journal of
Robotics and Mechatronics, 19(6), 2007. (Cited on pages 12
and 13.)
[149] S. Schätzle, T. Hulin, C. Preusche, and G. Hirzinger. Evaluation of vibrotactile feedback to the human arm. In Proceedings EuroHaptics, pages 577–560, 2006. (Cited on pages 4
and 36.)
[150] R. Scheibe, M. Moehring, and B. Froehlich. Tactile feedback at the finger tips for improved direct interaction in immersive environments. In Proc. IEEE Symposium on 3D User Interfaces 2007, pages 10–14, Charlotte, North Carolina, USA, 2007. (Cited on pages 17 and 18.)
[151] Jongman Seo and Seungmoon Choi. Initial Study for
Creating Linearly Moving Vibrotactile Sensation on Mobile Device. In 2010 IEEE Haptics Symposium, pages 67–
70. IEEE, March 2010. ISBN 978-1-4244-6821-8. doi:
10.1109/HAPTIC.2010.5444677. (Cited on page 25.)
[152] Fabrizio Sergi, Dino Accoto, Domenico Campolo, and Eugenio Guglielmelli. Forearm orientation guidance with a vibrotactile feedback bracelet: On the directionality of tactile motor communication. In 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, pages 433–438. IEEE, October 2008. ISBN 978-1-4244-2882-3. doi: 10.1109/BIOROB.2008.4762827. (Cited on pages 4 and 32.)
[153] M. Shinohara, Y. Shimizu, and A. Mochizuki. Three-Dimensional Tactile Display for the Blind. IEEE Transactions on Rehabilitation Engineering, 6(3):249–256, September
1998. ISSN 1063-6528. (Cited on page 25.)
[154] Rajinder Sodhi, I Poupyrev, M Glisson, and A Israr.
AIREAL: interactive tactile experiences in free air. In ACM
SIGGRAPH, volume 32, 2013. (Cited on page 22.)
[155] Daniel Spelmezan, Mareike Jacobs, Anke Hilgers, and Jan
Borchers. Tactile motion instructions for physical activities.
In Proceedings of the 27th international conference on Human
factors in computing systems - CHI 09, page 2243, New York,
New York, USA, 2009. ACM Press. ISBN 9781605582467.
doi: 10.1145/1518701.1519044. (Cited on page 32.)
[156] M Stamm, M E Altinsoy, and S Merchel. Identification
Accuracy and Efficiency of Haptic Virtual Objects Using
Force-feedback. In 3rd International Workshop on Perceptual Quality of Systems, Bautzen, Germany, 2010. (Cited
on pages 114, 115, and 121.)
[157] S. S. Stevens. Tactile vibration: Dynamics of sensory intensity. Journal of Experimental Psychology, 57(4):210–218,
1959. ISSN 0022-1015. doi: 10.1037/h0042828. (Cited on
page 10.)
[158] R.J. Stone. Haptic feedback: A potted history, from telepresence to virtual reality. In The First International Workshop
on Haptic Human-Computer Interaction, pages 1–7. Citeseer,
2000. (Cited on pages xvii, 13, and 14.)
[159] Jeremy Streque, A. Talbi, Philippe Pernod, and Vladimir Preobrazhensky. Electromagnetic actuation based on MEMS technology for tactile display. Haptics: Perception, Devices and Scenarios, pages 437–446, 2008. (Cited on page 18.)
[160] I.R. Summers, C.M. Chanter, A.L. Southall, and A.C. Brady. Results from a Tactile Array on the Fingertip. In Proceedings of Eurohaptics 2001, pages 26–28. Citeseer, 2001. (Cited on page 14.)
[161] Colin Swindells, Evgeny Maksakov, K.E. MacLean, and
Victor Chung. The Role of Prototyping Tools for Haptic
Behavior Design. In 2006 14th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages
161–168. IEEE, 2006. ISBN 1-4244-0226-3. doi: 10.1109/
HAPTIC.2006.1627084. (Cited on pages 61 and 77.)
[162] Gabor Sziebig, Bjorn Solvang, Csaba Kiss, and Peter Korondi. Vibro-tactile feedback for VR systems. In 2009
2nd Conference on Human System Interactions, pages 406–
410. IEEE, May 2009. ISBN 978-1-4244-3959-1. doi: 10.
1109/HSI.2009.5091014. (Cited on pages 36 and 41.)
[163] A. Talbi, O. Ducloux, N. Tiercelin, Y. Deblock, P. Pernod,
and V. Preobrazhensky. Vibrotactile using micromachined
electromagnetic actuators array. Journal of Physics: Conference Series, 34:637–642, 2006. ISSN 1742-6588. doi:
10.1088/1742-6596/34/1/105. (Cited on pages 23 and 25.)
[164] H.Z. Tan and A. Pentland. Tactual displays for wearable
computing. Personal Technologies, 1(4):225–230, December
1997. ISSN 0949-2054. doi: 10.1007/BF01682025. (Cited on
page 24.)
[165] H. Tang and D.J. Beebe. A microfabricated electrostatic
haptic display for persons with visual impairments. IEEE
Transactions on Rehabilitation Engineering, 6(3):241–248, 1998.
(Cited on page 20.)
[166] P.M. Taylor, A. Moser, and A. Creed. The design and control of a tactile display based on shape memory alloys. Proceedings of the IEEE International Conference on Robotics and
Automation, 2:1/1–1/4, 1997. (Cited on page 17.)
[167] P.M. Taylor, D.M. Pollet, A. Hosseini-Sianaki, and C.J. Varley. Advances in an electrorheological fluid based tactile
array. Displays, 18(3):135–141, 1998. (Cited on page 21.)
[168] Aaron Toney, Lucy Dunne, B.H. Thomas, and S.P. Ashdown. A shoulder pad insert vibrotactile display. In Seventh IEEE International Symposium on Wearable Computers, 2003. Proceedings., pages 35–44. IEEE, 2003. ISBN 0-7695-2034-0. doi: 10.1109/ISWC.2003.1241391. (Cited on pages 4, 28, and 29.)
[169] K. Tsukada and M. Yasumura. ActiveBelt: Belt-type wearable tactile display for directional navigation. In UbiComp 2004: Ubiquitous Computing, pages 384–399. Springer, 2004. doi: 10.1007/978-3-540-30119-6_23. (Cited on pages 4, 28, 29, 34, and 41.)
[170] H. Uchiyama, M. A. Covington, and W. D. Potter. Vibrotactile Glove guidance for semi-autonomous wheelchair operations. In Proceedings of the 46th Annual Southeast Regional
Conference on XX - ACM-SE 46, page 336, New York, New
York, USA, 2008. ACM Press. ISBN 9781605581057. doi:
10.1145/1593105.1593195. (Cited on page 36.)
[171] S S Valenti and A Costall. Visual perception of lifted
weight from kinematic and static (photographic) displays.
Journal of experimental psychology. Human perception and performance, 23:181–198, 1997. ISSN 0096-1523. doi: 10.1037/
0096-1523.23.1.181. (Cited on pages 126 and 128.)
[172] A. B. Vallbo and R. S. Johansson. Properties of cutaneous
mechanoreceptors in the human hand related to touch sensation. Human neurobiology, 3(1):3–14, January 1984. ISSN
0721-9075. (Cited on page 7.)
[173] Janet van der Linden, Erwin Schoonderwaldt, Jon Bird,
and Rose Johnson. MusicJacket–Combining Motion Capture and Vibrotactile Feedback to Teach Violin Bowing.
IEEE Transactions on Instrumentation and Measurement, 60
(1):104–113, January 2011. ISSN 0018-9456. doi: 10.1109/
TIM.2010.2065770. (Cited on page 32.)
[174] George H Van Doorn, Barry L Richardson, Dianne B Wuillemin, and Mark A Symmons. Visual and haptic influence on perception of stimulus size. Attention, Perception & Psychophysics, 72(3):813–822, April 2010. ISSN 1943-393X.
doi: 10.3758/APP.72.3.813. (Cited on page 126.)
[175] R.T. Verrillo. Cutaneous sensations. In Bertram Scharf and George Stanley Reynolds, editors, Experimental sensory psychology, page 286. Scott Foresman & Co, Glenview, 1975. ISBN 978-0673054289. (Cited on page 12.)
[176] Ronald Verrillo. Investigation of Some Parameters of the
Cutaneous Threshold for Vibration. Journal of the Acoustical
Society of America, 34:1768–1773, 1962. ISSN 0001-4966. doi:
10.1121/1.1909124. (Cited on page 9.)
[177] Ronald T. Verrillo, Anthony J. Fraioli, and Robert L. Smith.
Sensation magnitude of vibrotactile stimuli. Perception &
Psychophysics, 6(6):366–372, November 1969. ISSN 0031-5117. doi: 10.3758/BF03212793. (Cited on page 10.)
[178] R. Voyles, M. Stavnheim, and B. Yap. Electrorheological
Fluid-Based Fingertip With Tactile Sensing, 1989. (Cited
on page 21.)
[179] R.M. Voyles, G. Fedder, and P.K. Khosla. Design of a
modular tactile sensor and actuator based on an electrorheological gel. Proceedings of IEEE International Conference on Robotics and Automation, (April):13–17, 1996. doi:
10.1109/ROBOT.1996.503566. (Cited on page 21.)
[180] C.R. Wagner, S.J. Lederman, and R.D. Howe. Design And
Performance of a Tactile Shape Display Using RC Servomotors. Haptics-e, 3(4), 2004. (Cited on pages 26 and 27.)
[181] Q. Wang and V. Hayward. Compact, Portable, Modular,
High-performance, Distributed Tactile Transducer Device
Based on Lateral Skin Deformation. In 14th Symposium
on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 67–72, March 2006. (Cited on
page 14.)
[182] R.J. Webster, T.E. Murphy, L.N. Verner, and A.M. Okamura. A novel two-dimensional tactile slip display: design, kinematics and perceptual experiments. ACM Transactions on Applied Perception, 2(2):150–165, April 2005. ISSN 1544-3558. doi: 10.1145/1060581.1060588. (Cited on
pages 26 and 27.)
[183] P Wellman and RD Howe. Towards Realistic Vibrotactile
Display in Virtual Environments. In Proceedings of the
ASME Dynamic Systems and Control Division, pages 713–
718, 1995. ISBN 1581139071. (Cited on page 96.)
[184] P.S. Wellman, W.J. Peine, G.E. Favalora, and R.D. Howe.
Mechanical design and control of a high-bandwidth shape
memory alloy tactile display. Proceedings of the International Symposium of Experimental Robotics, (June):56–66,
1998. (Cited on pages 17 and 18.)
[185] Yan Wen, Chuanyan Hu, Guanghui Yu, and Changbo
Wang. A robust method of detecting hand gestures using
depth sensors. 2012 IEEE International Workshop on Haptic
Audio Visual Environments and Games (HAVE 2012) Proceedings, pages 72–77, October 2012. doi: 10.1109/HAVE.2012.
6374441. (Cited on page 87.)
[186] Wan-Chen Wu, Cagatay Basdogan, and Mandayam A
Srinivasan. Visual, Haptic, and Bimodal Perception of Size
and Stiffness in Virtual Environments. In Proceedings of the
American Society of Mechanical Engineers Dynamic Systems
and Control Division, volume 67, pages 19–26, 1999. (Cited
on page 127.)
[187] J.S. Zelek, S. Bromley, D. Asmar, and D. Thompson. A
Haptic Glove as a Tactile-Vision Sensory Substitution for
Wayfinding. Journal of Visual Impairment & Blindness, 97
(10), 2003. (Cited on pages 28, 29, and 32.)
[188] Zhihua Zhou, Huagen Wan, Shuming Gao, and Qunsheng
Peng. A Realistic Force Rendering Algorithm for CyberGrasp. Ninth International Conference on Computer Aided Design and Computer Graphics (CAD-CG’05), pages 409–414,
2005. doi: 10.1109/CAD-CG.2005.13. (Cited on page 116.)
[189] Thomas G. Zimmerman, Jaron Lanier, Chuck Blanchard,
Steve Bryson, and Young Harvill. A hand gesture interface device. In Proceedings of the SIGCHI/GI conference on
Human factors in computing systems and graphics interface CHI ’87, pages 189–192, New York, USA, 1987. ACM Press.
ISBN 0897912136. doi: 10.1145/29933.275628. (Cited on
page 15.)