icorr `99 - International Conference on Rehabilitation Robotics
PROCEEDINGS
ICORR ’99
SIXTH INTERNATIONAL CONFERENCE ON
REHABILITATION ROBOTICS
STANFORD, CALIFORNIA, U.S.A.
JULY 1-2, 1999
Proceedings of ICORR ’99,
Sixth International Conference on Rehabilitation Robotics
© Board of Trustees of Stanford University,
Stanford, California, U.S.A.
1999
- ii ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
Welcome to ICORR’99. With the conference theme of
“Communication and Learning”, our goal is to urge you to view
rehabilitation robots primarily as enablers of human social
functions rather than only as mechatronic devices. The devices
we are proud to be developing and building will be accepted and
embraced by people with disabilities and their caregivers only
when we can show value in the context of mainstream social
activities. The activities we find essential to our lives are indeed
to communicate with others and to pursue new knowledge.
Robots can help by centralizing the point of interaction for a person with a severe
physical disability and by providing control over computer and communication media
like paper, CD-ROMs, phones, the network and videotapes. Robots can also help to
provide educational learning experiences and physical therapy ‘relearning’ following
stroke and other neurological conditions. By designing the robot’s activities from
this perspective, the mechanical aspect becomes embedded in the social one: the
robot becomes part of the human-centered task at hand.
Advances in computing, real-time systems and integrated sensors are doing to
robotics what the microprocessor did to computing in the 1980s: it’s getting
personal. Personal Robotics and Service Robotics are slowly moving the field from
autonomous to shared-control, interactive system architectures. Rehabilitation
Robotics has already been there for twenty years, and has faced the additional
challenge of finding innovative ways for people with disabilities to control these
systems. We have a lot to offer the mainstream robotics R&D movement in terms of
insights into interactivity, and we will certainly continue to have a lot to gain from
advances in robot theory and practice.
So let’s use this conference opportunity to share our work, show each other the steps
we have been taking since the last ICORR two years ago in Bath, U.K., and spend
some time to discuss our goals for the coming years. We are the core community
shaping Rehabilitation Robotics, and this conference represents the best forum we’ll
have for another two years to carve our own future.
Wishing you a great experience here in the San Francisco Bay Area,
H.F. Machiel Van der Loos, Ph.D., Conference Chairman, ICORR’99
Rehabilitation R&D Center
Palo Alto VA Health Care System
3801 Miranda Ave. #153
Palo Alto, CA 94304-1200 U.S.A.
Phone: +1-650-493-5000 #65971; fax: +1-650-493-4919;
Email: [email protected]
Conference Web Site is at http://www.rehabrobotics.org
The organizers of ICORR’99 gratefully acknowledge the following people and
organizations for their contributions to this conference:
Sponsored by:
ΠVA Palo Alto Health Care System Rehabilitation R&D Center (RRDC)
ΠStanford University:
Stanford Learning Laboratory (SLL)
Center for Design Research (CDR)
Dept. Mechanical Engineering
Dept. Computer Science
Dept. Functional Restoration
With Financial Support from:
ΠParalyzed Veterans of America
Spinal Cord Research Foundation (SCRF)
ΠAdept Technology, Inc.
With Acknowledgments to:
ΠNiels Smaby for Cover Design
ΠJoe Wagner for Graphic Design, especially the Namaste logo
ΠBetty Troy, who modeled her hand for the logo
ΠDavid Jaffe for the use of the robot gripper Ralph for the logo
ΠHypertouch, Inc., for Internet services
ΠStanford Univ. Computer and Communication Services and Internet
Commerce Services, Corp. for supplying e-commerce capability.
ICORR’99 Committee Board Members
Chairman
H.F. Machiel Van der Loos, Ph.D.
Program Committee:
Peter Lum, Ph.D. (chair)
Larry Leifer, Ph.D.
Charles Burgar, M.D.
Vincent Hentz, M.D.
Oussama Khatib, Ph.D.
Review Board:
Francisco Valero-Cuevas, Ph.D. (chair)
Kyong-Sok Chang, M.S.C.S.
Hisato Kobayashi, Ph.D.
Vijay Kumar, Ph.D.
Richard Mahoney, Ph.D.
Jun Ota, Ph.D.
Tariq Rahman, Ph.D.
David Reinkensmeyer, Ph.D.
Richard Simpson, Ph.D.
Local Organizing Committee:
Niels Smaby, M.S.M.E. (chair)
David Jaffe, M.S.
Michelle Johnson, M.S.M.E.
Oscar Madrigal, M.S.M.E.
Peggy Shor, O.T.R.
Joe Wagner, M.S.M.E. (ICORR’99 web site administrator)
Michael Wickizer, O.T.R.
Administration and Treasury:
Sonia Fahey (SLL, conference coordinator)
Carolyn Ybarra (SLL, administrator)
Lisa Brown (SUSCC, liaison)
Mary Thornton (PAIRE, administrator)
Author Index
Abboudi, R.L. ...........255
Agrawal, S. ..........16,187
Aisen, M.L. ................16
Avizzano, C.A. .........261
Bajcsy, R. .................122
Barner, K. ...................16
Baxter, F. ....................99
Bejczy, A.K. .............283
Bekey, G. ......................1
Bergamasco, M. .......261
Bien, Z. .......................42
Bolmsjö, G. ..............129
Boschian, K. .............136
Bool, van de, E. ........106
Buckmann, O. ..........129
Burgar, C.G. ...227,235,250
Busnel, M. ................149
Campos, M.F.M. ......276
Chang, K.-S. .............250
Clarkson, J. ...............156
Connor, B.B. ..............79
Coulon-Lauture, F. ...149
Cowlin, D.A. ..............79
Craelius, W. ..............255
Croasdell, V. ............240
Didi, N. .......................92
Diels, C. ......................16
Dobkin, B.H. ............283
Driessen, B.J.F. ........129
Edelstein, L. ...............16
Edgerton, V.R. .........283
Eftring, H. ................136
Garfinkel, A. ............283
Gelin, R. ...................149
Hagan, K. ...................86
Hagan, S. ....................86
Harkema, S.J. ...........283
Harwin, W. ...............170
Henderson, J. ..............82
Hillman, M. ................86
Hogan, N. ...................16
Horiuchi, T. ..............183
Hunter, H. ...................60
Jau, B.M. ..................283
Jepson, J. ....................86
Johnson, M.J. ...........227
Jones, T. ...................201
Jung, J.-W. .................42
Karlsson, M. ...............60
Katevas, N. ..........60,142
Keates, S. .................156
Kim, J.-S. ...................42
Kimura, A. ...............183
Kishida, T. ................270
Kobayashi, T. ...........216
Krebs, H.I. .............27,34
Krovi, V. ..................122
Kumar, V. .................122
Kvasnica, M. ..............50
Kwee, H. ..................106
Lacey, G. .............60,163
Le Blanc, J.-M. .........149
Lee, H. ........................42
Leifer, L.J. ...............227
Lesigne, B. ...............149
Lilienthal, G.W. .......283
Lum, P.S. ..................235
MacNamara, S. ....60,163
Mahoney, R. .............122
Matsuoka, Y. ............177
McClenathan, K. ........67
McGuan, S.P. ...........283
Mokhtari, M. ..............92
Nagai, K. ..................270
Nakanishi, I. .............270
Newby, N.A. ............255
O'Connell, S. ............115
Okada, S. ..................183
Okajima, Y. ..............183
Orpwood, R. ...............86
Petrie, H. ....................60
Pinto, S.A. de P. .......276
Pledgie, S. ..................16
Poirot, D. ....................99
Quaedackers, J. ........106
Rahman, T. .................67
Rao, R. .....................187
Reinkensmeyer, D.J. ....9
Robinson, P. .............156
Roby-Brami, A. ..........92
Rundenschöld, J. ........60
Rymer, W.Z. ................9
Sakaki, T. .................183
Schmit, B.D...................9
Scholz, J.P. ...............187
Shor, P. .....................235
Siegel, J.A. ...............240
Simpson, R. ................99
Smaby, N. .................250
Smith, J. ...................244
Song, P. ....................122
Song, W.-K. ...............42
Speth, L. ...................106
Stefanov, D. .............207
Takahashi, Y. ...........216
Taki, M. ....................183
Tanaka, N. ................183
Tejima, N. ..................74
Theeuwen, L. ...........106
Tomita, Y. ................183
Topping, M. ......115,244
Uchida, S. .................183
Van der Loos, M. ...227,235,250
Volpe, B.T. .................16
Wagner, J.J. ..............250
Wall, S. .....................170
Weiss, J.R. ...............283
Wing, A.M. ................79
Woerden, van, J.A. ...129
Thursday, July 1: First Day
7:00-11:00
Registration
9:00-10:30
Session 1:
9:00 Welcome
H.F. Machiel Van der Loos
9:30 Keynote Speech
AUTONOMY AND LEARNING IN
MOBILE ROBOTS .........................................................1
George Bekey
10:30-11:00
Coffee Break
11:00-12:30
Session 2: Therapy 1
11:00 – 11:20 CAN ROBOTS IMPROVE ARM MOVEMENT
RECOVERY AFTER CHRONIC BRAIN INJURY? A
RATIONALE FOR THEIR USE BASED ON
EXPERIMENTALLY IDENTIFIED MOTOR
IMPAIRMENTS ..............................................................9
David J. Reinkensmeyer*, Brian D. Schmit,
W. Zev Rymer
11:20 – 11:40 TREMOR SUPPRESSION THROUGH FORCE
FEEDBACK .................................................................16
Stephen Pledgie*, Kenneth Barner,
Sunil Agrawal
11:40 – 12:00 PROCEDURAL MOTOR LEARNING IN
PARKINSON’S DISEASE: PRELIMINARY RESULTS ........27
H.I. Krebs*, N. Hogan, W. Hening,
S. Adamovich, H. Poizner
12:00 – 12:20 ROBOT-AIDED NEURO-REHABILITATION IN
STROKE: THREE-YEAR FOLLOW-UP ............................34
H.I. Krebs*, N. Hogan, B.T. Volpe, M.L. Aisen,
L. Edelstein, C. Diels
12:20 – 12:30 DISCUSSION
12:30-13:30
Lunch and Posters in the Gates Computer Science Bldg.
Outside Patio and in Robotics Lab.
POSTER 1:
A STUDY ON THE ENHANCEMENT OF
MANIPULATION PERFORMANCE OF A
WHEELCHAIR-MOUNTED REHABILITATION
SERVICE ROBOT .........................................................42
Jin-Woo Jung*, Won-Kyung Song, Heyoung
Lee, Jong-Sung Kim, Zeungnam Bien
POSTER 2:
A MODULAR FORCE-TORQUE TRANSDUCER FOR
REHABILITATION ROBOTICS ......................................50
Milan Kvasnica
POSTER 3:
ADAPTIVE CONTROL OF A MOBILE ROBOT FOR THE
FRAIL VISUALLY IMPAIRED ........................................60
Gerard Lacey, Shane MacNamara*, Helen
Petrie, Heather Hunter, Marianne Karlsson,
Nikos Katevas, Jan Rundenschöld
POSTER 4:
POWER AUGMENTATION IN REHABILITATION
ROBOTS......................................................................67
Kelly McClenathan*, Tariq Rahman
POSTER 5:
FORCE LIMITATION WITH AUTOMATIC RETURN
MECHANISM FOR RISK REDUCTION OF
REHABILITATION ROBOTS ..........................................74
Noriyuki Tejima
13:30-14:30
Demos in Robotics Lab, Room 100
DEMO 1:
COGNITIVE REHABILITATION USING
REHABILITATION ROBOTICS (CR3) ............................79
B.B. Connor*, A. M. Wing, D. A. Cowlin
DEMO 2:
GO-BOT CHILD’S MOBILITY DEVICE.........................82
J. Henderson
TOUR 1:
NOMADIC, INC. OMNIDIRECTIONAL MOBILE
ROBOT
Robert Holmberg
TOUR 2:
ROMEO AND JULIET OMNIDIRECTIONAL MOBILE
MANIPULATORS
Oussama Khatib, Kyong-Sok Chang
TOUR 3:
HAPTIC INTERFACE
Diego Ruspini
14:30-15:50
Session 3: Wheelchair and Mobile Robots
14:30 – 14:50 A WHEELCHAIR MOUNTED ASSISTIVE ROBOT.............86
Michael Hillman*, Karen Hagan, Sean
Hagan, Jill Jepson, Roger Orpwood
14:50 – 15:10 PREPROGRAMMED GESTURES FOR ROBOTIC
MANIPULATORS: AN ALTERNATIVE TO SPEED UP
TASK EXECUTION USING MANUS ............................92
N. Didi*, M.Mokhtari, A. Roby-Brami
15:10 – 15:30 EVALUATION OF THE HEPHAESTUS SMART
WHEELCHAIR SYSTEM ................................................99
Richard Simpson*, Daniel Poirot, Mary
Francis Baxter
15:30 – 15:50 POCUS PROJECT: ADAPTING THE
CONTROL OF THE MANUS MANIPULATOR
FOR PERSONS WITH CEREBRAL PALSY ......................106
Hok Kwee*, J. Quaedackers, E. van de
Bool, L. Theeuwen, L. Speth
15:50-16:10
Coffee Break
16:10-17:30
Session 4: Evaluation and Simulation
16:10 – 16:30 A USER’S PERSPECTIVE ON THE HANDY 1 SYSTEM ..115
Stephanie O’Connell, Mike Topping*
16:30 – 16:50 DESIGN OF HUMAN-WORN ASSISTIVE DEVICES
FOR PEOPLE WITH DISABILITIES................................122
Peng Song, Vijay Kumar, Ruzena Bajcsy,
Venkat Krovi, Richard Mahoney*
16:50 – 17:10 A RAPID PROTOTYPING ENVIRONMENT FOR
MOBILE REHABILITATION ROBOTICS ........................129
B.J.F. Driessen*, J.A. v. Woerden, G. Bolmsjö,
O. Buckmann
17:10 – 17:30 TECHNICAL RESULTS FROM MANUS USER TRIALS .136
Håkan Eftring*, Kerstin Boschian
17:30 – 17:40 DISCUSSION
17:40-18:30
Break
18:30-19:30
Bus to Reception Dinner
19:30-22:00
Dinner, Dessert and After-Dinner Remarks by Larry Leifer
22:00-23:00
Bus Returning to Hotels
Friday, July 2: Second Day
7:00-9:00
Registration
8:30-10:00
Session 5: Assistive Robots
8:30 – 8:50
MOBINET: THE EUROPEAN RESEARCH
NETWORK ON MOBILE ROBOTICS TECHNOLOGY
IN HEALTH CARE SERVICES ......................................142
Nikos I. Katevas
10:00-10:30
10:30-12:00
8:50 – 9:10
AFMASTER: AN INDUSTRIAL REHABILITATION
WORKSTATION .........................................................149
Rodolphe Gelin*, F. Coulon-Lauture,
B. Lesigne, J.-M. Le Blanc, M. Busnel
9:10 – 9:30
DESIGNING A USABLE INTERFACE FOR AN
INTERACTIVE ROBOT ................................................156
Simeon Keates*, John Clarkson, Peter Robinson
9:30 – 9:50
A ROBOTIC MOBILITY AID FOR FRAIL VISUALLY
IMPAIRED PEOPLE ....................................................163
Shane MacNamara*, Gerard Lacey
9:50 – 10:00
DISCUSSION
Coffee Break
Session 6: Therapy 2
10:30 – 10:50 MODELLING HUMAN DYNAMICS IN-SITU FOR
REHABILITATION AND THERAPY ROBOTS .................170
William Harwin*, Steven Wall
10:50 – 11:10 DOMESTIC REHABILITATION AND LEARNING OF
TASK-SPECIFIC MOVEMENTS ....................................177
Yoky Matsuoka
11:10 – 11:30 TEM: THERAPEUTIC EXERCISE MACHINE FOR
HIP AND KNEE JOINTS OF SPASTIC PATIENTS .............183
Taisuke Sakaki*, S. Okada, Y. Okajima,
N. Tanaka, A. Kimura, S. Uchida, M. Taki,
Y. Tomita, T. Horiuchi
11:30 – 11:50 A ROBOT TEST-BED FOR ASSISTANCE AND
ASSESSMENT IN PHYSICAL THERAPY ........................187
Rahul Rao*, Sunil K. Agrawal, John P. Scholz
11:50 – 12:00 DISCUSSION
12:00-12:30
Bus to VA Palo Alto Rehabilitation R&D Center
12:30-14:00
Lunch and Posters, VA Palo Alto Rehabilitation
R&D Center
13:00-15:00
POSTER 1:
RAID – TOWARD GREATER INDEPENDENCE IN
THE OFFICE & HOME ENVIRONMENT ........................201
Tim Jones
POSTER 2:
INTEGRATED CONTROL OF DESKTOP MOUNTED
MANIPULATOR AND A WHEELCHAIR .........................207
Dimiter Stefanov
POSTER 3:
UPPER LIMB MOTION ASSIST ROBOT.........................216
Yoshihiko Takahashi*, Takeshi Kobayashi
Demos and Tours of the VA Palo Alto
Rehabilitation R&D Center
DEMO 1:
DRIVER'S SEAT: SIMULATION ENVIRONMENT
FOR ARM THERAPY .................................................227
Michelle J. Johnson*, H.F. Machiel Van der
Loos, Charles G. Burgar, Larry J. Leifer
DEMO 2:
A ROBOTIC SYSTEM FOR UPPER-LIMB EXERCISES
TO PROMOTE RECOVERY OF MOTOR FUNCTION
FOLLOWING STROKE ................................................235
Peter S. Lum*, H.F. Machiel Van der Loos,
Peggy Shor, Charles G. Burgar
DEMO 3:
INTERFACING ARTIFICIAL AUTONOMICS, TOUCH
TRANSDUCERS AND INSTINCT INTO
REHABILITATION ROBOTICS .....................................240
John Adrian Siegel*, Victoria Croasdell
DEMO 4:
THE DEVELOPMENT OF HANDY 1, A ROBOTIC
SYSTEM TO ASSIST THE SEVERELY DISABLED ...........244
Mike Topping, Jane Smith*
DEMO 5:
PROVAR ASSISTIVE ROBOT INTERFACE ...........................250
Joseph Wagner*, Niels Smaby, Kyong-Sok
Chang, H.F.M. Van der Loos, Charles Burgar
TOUR 1:
TILT-I PEDALING ERGOMETER
Michael Slavin, Julie Harvey
TOUR 2:
DIFFERENTIAL PRESSURE WALKING ASSIST SYSTEM
Douglas Schwandt, Ellie Buckley, Yang Cao
15:00-15:30
Bus to Stanford; Coffee Break
15:30-17:00
Session 7: Prosthetics and Orthotics
15:30 – 15:50 CONTROL OF A MULTI-FINGER PROSTHETIC
HAND .......................................................................255
William Craelius*, Ricki L. Abboudi, Nicki
Ann Newby
15:50 – 16:10 TECHNOLOGICAL AIDS FOR THE TREATMENT OF
TREMOR ...................................................................261
C.A. Avizzano*, M. Bergamasco
16:10 – 16:30 DESIGN OF A ROBOTIC ORTHOSIS ASSISTING
HUMAN MOTION IN PRODUCTION ENGINEERING
AND HUMAN CARE ...................................................270
Kiyoshi Nagai*, Isao Nakanishi, Taizo Kishida
16:30 – 16:50 A SIMPLE ONE DEGREE-OF-FREEDOM
FUNCTIONAL ROBOTIC HAND ORTHOSIS ...................276
Mário F.M. Campos, Saulo A. de P. Pinto*
17:00-18:00
Session 8: Moderated Discussion on the Future of
Rehabilitation Robotics and ICORR
17:00 – 17:15 ANALYSIS AND CONTROL OF HUMAN
LOCOMOTION USING NEWTONIAN MODELING
AND NASA ROBOTICS .............................................283
James R. Weiss*, V.R. Edgerton, A.K. Bejczy,
B.H. Dobkin, A. Garfinkel, S.J. Harkema,
G.W. Lilienthal, S.P. McGuan, B.M. Jau
17:15 – 17:50 DISCUSSION
17:50 – 18:00 CLOSE OF CONFERENCE
H.F. Machiel Van der Loos
AUTONOMY AND LEARNING IN MOBILE ROBOTS
George A. Bekey
Computer Science Department
University of Southern California
Los Angeles, CA 90089-0781
[email protected]
http://www-robotics.usc.edu/
Abstract
Recent trends in autonomous mobile
robots are presented, with an emphasis
on machines capable of some degree of
learning and adaptation. Following a
historical review, the paper discusses
developments in humanoids,
entertainment robots, service robots,
and group robotics. Some of the
applications are illustrated with
examples from the author’s laboratory.
Introduction
A robot is a machine that senses,
thinks and acts. Such systems are
frequently called intelligent agents, or
simply agents. In this sense,
autonomous robots, i.e., robots capable
of some degree of independent, self-sufficient behavior, are intelligent
agents par excellence. They are
distinguished from software agents in
that robots are embodied agents,
situated in the real world. As such,
they are subject to both the joys and
sorrows of the world. They can be
touched and seen and heard
(sometimes even smelled!), they have
physical dimensions, and they can
exert forces on other objects. These
objects can be a ball in robot soccer
games, parts to be assembled, airplanes
to be washed, carpets to be vacuumed,
terrain to be traversed or cameras to be
aimed.
More relevant to this conference, these
objects can be tools for assisting
persons with disabilities.
Since robots are agents in the world,
they are also subject to its physical
laws: they have mass and inertia, their
moving parts encounter friction and
hence generate heat, no two parts are
precisely alike, measurements are
corrupted by noise, and, alas, parts
break. Of
course, robots also contain computers,
and hence they are also subject to the
slings and arrows of computer
misfortunes, both in hardware and
software. Finally, the world into which
we place these robots keeps changing,
it is non-stationary and unstructured,
so that we cannot predict its features
accurately in advance.
In order to adapt to the world, and
learn from experience, autonomous
robots require sensors to perceive
various aspects of their environment
and computers to implement various
approaches to machine learning. They
are an imitation of life, and we are
drawn to watching them as they
perform their tasks. It is not only the
fact that they move, since many things
move in the world (sometimes by
gravity or sometimes by motor power),
but that they move with apparent
intelligence and purpose. For those of
us who design and build them, this is
precisely our goal.
A short history of robot intelligence
During the 19th century there was a
great deal of fascination with automata,
machines that moved automatically in
imitation of living creatures. A
number of animated dogs and human
figures were built. Churches and
public buildings were equipped with
moving figures controlled by complex
mechanical clockwork. While these
machines were not robots in that they
did not have sensors to ascertain the
state of the world, one may consider
their clocks as primitive computers,
which controlled the actuators and
produced movement. Robots, in the
sense of programmable mechanical
systems, arose relatively recently.
Robot manipulators were proposed by
Devol in the United States in 1954; a
company started by Devol and
Engelberger produced the first
commercial versions of these machines
in 1962. Industrial robots rapidly
assumed an important role in
manufacturing (particularly in the
automobile industry, where they are
used extensively for painting, welding
and assembly). In the following 20
years the manufacture of robots
gradually shifted from the US to
Europe and Japan. Japan currently has
the largest number of manufacturing
robots of any country in the world.
While the early manipulators were
strictly pre-programmed mechanical
arms, capable only of specific
movements in highly structured
environments, in recent years they
have been equipped with increasing
numbers of sensors (such as vision and
force) which have given them some
ability to adapt to changes in the
environment.
However, manipulators used for
manufacturing are not autonomous
agents, even if they have some degree
of adaptability. Another line of
development led to the development of
mobile robots, which could interact
with the world and perform some
cognitive functions. In Japan the
pioneer in this line of work was Ichiro
Kato from Waseda University. The
Waseda biped robot that walked many
km and the WABOT-2 piano-playing
humanoid were the stars of the show
during Japan’s Expo ’85 World’s Fair.
The piano-playing robot was a
mechanical marvel. It could read sheet
music with a video camera and use
these inputs to control its arms and ten
fingers as it sat on a piano bench. The
Japanese fascination with these
machines and robots in general is well
known [1]. Numerous other walking
machines were built in the US, Japan
and Europe, with two or four or six or
eight legs. Raibert’s Pogo stick was a
one-legged robot, which maintained its
balance as it hopped in a circle at the
end of a boom. It and other remarkable
machines are described in his book [2].
The foundation of behavior-based
control of mobile robots was provided
by Brooks [3], in whose laboratory
many autonomous robots were
designed and built. Perhaps the largest
and most varied collection of mobile
autonomous robots was designed and
constructed by Hirose and his
collaborators, e.g. [4,5].
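The behavior-based idea can be illustrated with a minimal sketch: independent behaviors are tried in priority order, and a higher-priority behavior subsumes the layers below it when it fires. The behavior names, sensor fields, and thresholds below are illustrative assumptions for this sketch, not details of any robot described here.

```python
# A hedged sketch of priority-based ("subsumption-style") behavior
# arbitration. All names and thresholds are illustrative assumptions.

def avoid_obstacle(sensors):
    """Highest priority: turn away if an obstacle is close."""
    if sensors["range_m"] < 0.3:
        return {"turn": 1.0, "forward": 0.0}
    return None  # not triggered; defer to lower layers

def seek_goal(sensors):
    """Middle priority: steer toward the goal bearing, if known."""
    if sensors.get("goal_bearing") is not None:
        return {"turn": 0.5 * sensors["goal_bearing"], "forward": 0.5}
    return None

def wander(sensors):
    """Lowest priority: default forward motion."""
    return {"turn": 0.0, "forward": 0.3}

# Higher layers subsume (override) lower ones when they fire.
BEHAVIORS = [avoid_obstacle, seek_goal, wander]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

# Example: an obstacle at 0.2 m triggers the avoidance layer,
# regardless of the goal bearing.
cmd = arbitrate({"range_m": 0.2, "goal_bearing": 0.4})
```

Each layer is a complete, independently useful competence; intelligence emerges from their layered interaction rather than from a central planner.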
The degree of “intelligence” with
which mobile robots are endowed is
highly variable. The Waseda piano
playing robot was a simple translator
from printed notes to finger movement.
One may term this intelligent behavior,
in the same sense that it requires
intelligence to read out loud, i.e., to
translate from the printed word to
movement of the vocal folds.
However, the Waseda piano player had
no ability to learn. About 30 years
ago, at the nearby Stanford Research
Institute (SRI), the robot Shakey
was used for experiments in planning
and learning. Shakey would take
pictures of its surroundings and then
plan a path to the next room that
avoided obstacles, move a little, take
new pictures, re-plan, etc.
Sojourner, the small NASA robot
which moved about on the surface of
Mars, displayed limited autonomy, but
not much intelligence nor the ability to
learn.
We discuss other recent “intelligent”
robots in later sections of this paper.
Recent developments in robot
hardware and software
In recent years there have been
dramatic improvements in the
subsystems available to build robots.
To sense the world, a robot needs
sensors, such as cameras to see,
ultrasonic and infrared proximity
sensors to avoid hitting obstacles,
microphones to hear, touch sensors,
pressure sensors, an electronic nose for
smelling, and so on. Flying robots may
be equipped with GPS, thus facilitating
localization. All these sensors and
many more are now available.
Further, since all sensors are noisy and
imperfect, the information they
transmit to the robot may be
inconsistent, and some form of sensor
fusion is often required. To think, the
robot needs a computer and appropriate
algorithms based on artificial
intelligence research. In the past this
was difficult because computers were
too large and too slow and too
expensive. All that has changed, and
we can put an enormous amount of
computation into a few chips. The
improvements in computers have been
dramatic, and they have made an
enormous difference in our ability to
build robots with some intelligence.
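The sensor-fusion step mentioned above can, in its simplest form, be an inverse-variance weighted average: readings from several noisy sensors are combined so that the less noisy sensor counts for more. The sensor values and variances below are illustrative assumptions, not figures from the paper.

```python
# A minimal sketch of inverse-variance sensor fusion for two scalar
# measurements of the same quantity. Values below are assumptions.

def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted average of two measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # smaller than either input variance
    return estimate, fused_var

# Example: a noisy ultrasonic range reading fused with a more
# precise infrared reading; the result leans toward the infrared.
est, var = fuse(2.4, 0.09, 2.1, 0.01)
```

The fused variance is always lower than either input variance, which is why fusing even inconsistent sensors can pay off.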
Robot learning
Many of the standard approaches to
machine learning have been applied to
learning in robotics, including
reinforcement learning, supervised
learning, neural networks, evolutionary
algorithms, learning by imitation, and
several probabilistic approaches. A
number of these methods are discussed
in a recent publication [6]. In
particular, mobile robots are now able
to navigate in surprisingly complex
environments and learn their properties
so their performance improves on
successive trials. In our own
laboratory, we use mobile robots to
explore and map the basic topological
features of hallways in buildings.
Others, like Thrun and his colleagues
[6] use probabilistic approaches and a
grid map of an area to obtain accurate
metric maps. We are also attempting
to use learning by imitation to develop
a control strategy for a robot
helicopter. Specifically, we use a
method called "learning by showing",
in which the robot tries to imitate the
control signals produced by a human
pilot who flies the vehicle by radio.
The learning method produces fuzzy
rules for coarse control and neural
networks for fine control [7].
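The probabilistic grid maps mentioned above can be sketched in a few lines: each cell accumulates log-odds evidence of being occupied as sensor readings arrive, so repeated consistent readings sharpen the map. The grid size and evidence probabilities below are illustrative assumptions, not parameters of any cited system.

```python
import math

# A hedged sketch of log-odds occupancy-grid mapping. The evidence
# probabilities and grid dimensions are illustrative assumptions.

P_HIT = 0.7    # assumed P(occupied) implied by a "hit" reading
P_MISS = 0.3   # assumed P(occupied) implied by a "miss" reading

def logit(p):
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, width, height):
        # log-odds 0.0 corresponds to the uninformed prior P = 0.5
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, x, y, hit):
        """Add one measurement's evidence to cell (x, y)."""
        self.logodds[y][x] += logit(P_HIT if hit else P_MISS)

    def probability(self, x, y):
        """Recover occupancy probability from accumulated log-odds."""
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

# Example: three consistent "hit" readings push one cell well
# past the 0.5 prior toward "occupied".
grid = OccupancyGrid(10, 10)
for _ in range(3):
    grid.update(4, 2, hit=True)
```

Because evidence simply adds in log-odds space, the map improves on successive trials, which is the sense in which such a robot "learns" its environment.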
Humanoids
Both in Japan and the US there is
renewed interest in building machines
that resemble humans, both in structure
and behavior, that display some degree
of autonomy. One of the most
remarkable is a walking robot designed
and built by the Honda company since
1996; it is about the size of a large person.
It wears a helmet, which contains the
vision system. It carries a backpack
that contains power supplies,
computers and communication
equipment. This is truly a remarkable
robot, capable of walking without
falling, not only on a level surface, but
also up and down stairs. It has an
excellent balance reflex. It can adapt
to changes in load, and pressure on
its “chest” will cause it to start walking
backwards rather than fall. The
applications for this robot are not yet
clear; it simply demonstrates that a
human-like two-legged robot can be
built.
Cog, Brooks’ current robot under
construction at MIT, is a
humanoid torso, with head, eye and
arm movements, and some ability to
hear, learn and speak [8]. Cog learns
from interaction with humans. This
represents one of the current trends in
autonomous robots, i.e., the
incorporation of learning. Thus, the
development of autonomous robots has
moved from emphasis on movement to
emphasis on cognition and learning.
Entertainment robotics
The entertainment industry has used
“robots” for many years. However,
many “robots” in the movies (e.g., the
robot in “Short Circuit”) are teleoperated
devices, controlled by hidden human
operators. They are not true
autonomous robots. The same can be
said of the robots used at Disney parks
or at Universal Studios. The latter
location features a Tyrannosaurus Rex
“robot”, which is pre-programmed to
move as a boat carrying frightened
passengers moves by. By contrast with
such devices, Sony Corporation has
developed a four-legged “pet robot”
which was announced commercially in
May of this year [9]. In contrast with
industrial manipulators or
rehabilitation robots, this device is
designed entirely for amusement, with
no other practical use. I believe that
many entertainment robots will be
introduced in the next few years. In
the US there is a furry toy named
Furby which appeared in 1998. Furby
can move his head and eyes, recognize
some words and learn how to speak
perhaps as many as 50 words. The
Sony dog does not speak, but it is
capable of a number of amazing
behaviors. The robot will chase a ball,
push it with its paw and follow it
around. Of course, it has vision. It
also has touch sensors built into its
head; a pat on the head will result in a
different behavior, such as lying down,
or sitting and waving. One of the
remarkable things about these robots is
that when they fall, they are capable of
getting up and continuing to walk. The
behavior control computer is
implemented on an insertable card,
similar to a PCMCIA card.
In the near future Omron Corporation,
also from Japan, is expected to
introduce a robot “cat”, designed as a
companion robot for the elderly. The
behaviors included with this robot
include recognition of the owner’s
voice, purring when stroked, and
following the owner with its head and
eye movements.
Devices like the Sony “dog” or the
Omron “cat” are true robots, since they
sense, think, and act upon the world.
They are frequently programmed on
the basis of behaviors and they display
some limited learning ability. Also,
such robots are designed for close
contact with humans. This means that
they should be perceived as "friendly"
rather than potentially dangerous. I
believe that the issue of perceived
friendliness in these agents will be
increasingly important in the future.
The Sony robot "pets" were frequently
described by such terms as "charming",
"lovable", "cute" or "friendly". This is
quite a compliment for an inanimate
agent.
Human service and cooperation
In addition to the medical and
rehabilitation applications discussed at
this conference, I believe that more and
more robots will be used to assist
people in a variety of tasks. Such jobs
may include street cleaning, gasoline
pumping, vacuuming large carpets,
washing aircraft, or inspecting
pipelines from the inside. Prototype
robots for such tasks have already been
built. Some degree of autonomy will
be completely essential for such robots.
One of the distinguishing features of
human service robotics (as with
rehabilitation and personal
entertainment robots) is the fact that
the machines will be working in close
proximity to and in cooperation with
humans. This is drastically different
from the early days of industrial
robotics, where great care was taken to
ensure that humans and robots were
well separated to minimize the risk of
injury. Such human-robot interaction
will require that the agents relate to
humans in novel ways, in order to be
able to respond to commands,
motivations and goals. The agents may
be required not only to understand
spoken commands, but also to "read"
the tone of voice, facial expressions,
and gestures of their human coworkers.
The National Robotics Engineering
Center at Carnegie Mellon University has been
developing an autonomous robot
tractor (named Demeter). The machine
has already demonstrated the ability to
operate in large fields and to perform
harvesting operations. Autonomous
road building machinery is being tested
in such applications as excavation,
pipe laying, and paving. Construction
robots in Japan are being used to
assemble steel beam structures and to
spray asbestos for fireproofing. In the
area of transportation, projects at CMU
and in Germany have demonstrated the
ability of autonomous passenger
automobiles to travel on highways for
long distances at normal traffic speeds.
Cooperative groups of robots
The above examples have featured
applications of individual robots to
specific tasks. Another major trend is
the increasing development of
computational models and tools to
create behavior-based colonies of
agents. The work of Mataric, Arkin, and
Fukuda (see, for example, [10]) provides
only a few examples of a major and
growing trend. Our own laboratory at
USC is working on a colony of agents
(involving both ground-based and
flying vehicles) to perform
reconnaissance and other tasks, with a
minimum of inter-agent
communication and outside
supervision. Such tasks typically
involve the ability of a colony to reach
global goals when each agent has only
local information.
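The local-information principle described above can be sketched with a minimal consensus example. This is an illustration only, not the USC system; the field size, agent count, sensing radius, and gain are all invented:

```python
import numpy as np

def rendezvous_step(positions, radius, gain=0.5):
    """One decentralized update: each agent moves toward the mean position
    of the neighbors it can sense (local information only)."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[dists < radius]  # each agent always senses itself
        new_positions[i] = p + gain * (neighbors.mean(axis=0) - p)
    return new_positions

rng = np.random.default_rng(0)
agents = rng.uniform(0, 10, size=(12, 2))  # 12 agents scattered in a 10 m field

# A global goal -- gathering at a common point -- emerges from purely local
# updates, provided the sensing graph stays connected. The radius here is
# chosen large enough to guarantee connectivity in this toy field.
for _ in range(100):
    agents = rendezvous_step(agents, radius=15.0)

spread = np.linalg.norm(agents - agents.mean(axis=0), axis=1).max()
```

With a smaller sensing radius the same update still gathers each connected subgroup, which is the sense in which the colony reaches a global goal from local information.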
Conclusion
This paper has presented an overview
of some of the current trends in
robotics. The survey is not complete
by any means, but is intended to
indicate some of the current directions
in the field. We see robots in the
future incorporating the following
features:
1. Many robots will include some
form of machine learning,
applicable to behavior in the real
world.
2. Robots will become available in
very small sizes. Very tiny robots
will be able to swim through the
bloodstream and identify possible
diseases. Very small robots may be
able to assemble electronic circuits
and microprocessors.
3. There will be more human-robot
cooperation. Rather than being
afraid of robots, people will learn to
treat them as partners in many
activities. Among such activities
will be robot caretakers for elderly
people and persons with disabilities,
particularly in countries where
families tend to separate and not
live together. Human-robot
interaction will include the ability
of the agents to respond to a large
variety of commands and cues from
humans.
4. There will be more intelligent
robots for entertainment and more
humanoid robots, which resemble
humans in physical appearance,
behavior and some aspects of
cognition. Emotional components
will be included in entertainment
robots.
5. There will be more emphasis on
group robotics, involving
cooperative actions and cooperative
problem solving among many
robots.
In summary, we can expect that the
robots of the future will become more
intelligent, have greater ability to learn
from experience, and to interact with
each other and with us in new and
unexpected ways.
References
1. Schodt, F.L. (1988). Inside the Robot Kingdom. Kodansha International.
2. Raibert, M.H. (1986). Legged Robots that Balance. Cambridge, MA: MIT Press.
3. Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE J. of Robotics and Automation, 2:14-23.
4. Hirose, S. (1984). A study of design and control of a quadruped walking vehicle. Int. J. Robotics Research, 3:113-133.
5. Hirose, S. (1993). Biologically Inspired Robots. Oxford University Press.
6. Hexmoor, H. and Mataric, M., editors (1998). Learning in Autonomous Robots (Special Issue of Autonomous Robots, 5:237-420).
7. Montgomery, J. (1999). Learning Nonlinear Control Through "Teaching by Showing". Ph.D. Dissertation, USC.
8. Brooks, R. and Stein, L. (1994). Building brains for bodies. Autonomous Robots, 1:7-25.
9. …… Business Week.
10. Arkin, R. and Bekey, G., editors (1997). Robot Colonies (Special Issue of Autonomous Robots, 4:5-153).
CAN ROBOTS IMPROVE ARM MOVEMENT RECOVERY AFTER
CHRONIC BRAIN INJURY? A RATIONALE FOR THEIR USE BASED ON
EXPERIMENTALLY IDENTIFIED MOTOR IMPAIRMENTS
David J. Reinkensmeyer1, Brian D. Schmit2, and W. Zev Rymer2
1: Dept. of Mechanical and Aerospace Engineering, University of California, Irvine
2: Sensory Motor Performance Program, Rehabilitation Institute of Chicago
ABSTRACT
Significant potential exists for robotic
and mechatronic devices to deliver
therapy to individuals with a movement
disability following stroke, traumatic
brain injury, or cerebral palsy. We
performed a series of experiments in
order to identify which motor
impairments should be targeted by such
devices, in the context of a common
functional deficit – decreased active
range of motion of reaching – after
chronic brain injury. Our findings were
that passive tissue restraint and agonist
weakness, rather than spasticity or
antagonist restraint, were the key
contributors to decreased active range
of motion across subjects. In addition,
we observed striking patterns of
abnormal contact force generation
during guided reaching. Based on these
results, we suggest that active assistance
exercise is a rational therapeutic
approach to improve arm movement
recovery after chronic brain injury. We
briefly discuss a simple, cost-effective
way that such exercise could be
implemented using robotic/mechatronic
technology, and how such exercise
could be adapted to treat abnormal
muscle coordination.
BACKGROUND
Recently there has been a surge of
interest in bringing robotic and
mechatronic technology to bear on
rehabilitation of movement after brain
injury [1]. Stroke is currently the
leading cause of severe disability in the
U.S., and arm and hand movements are
often preferentially impaired after
stroke. A significant amount of recent
research has therefore been focused on
devices for therapy of the arm after
stroke. Such devices could ultimately
benefit approximately 300,000 new
stroke survivors per year, as well as the
more than 1.5 million chronic stroke
survivors with movement disability in
the U.S.
A current difficulty in designing
appropriate robotic technology for
movement therapy of brain-injured
individuals is that the optimal therapy
techniques are unknown. More
fundamentally, it is unclear what
induces the observed movement
impairments. Brain injury is often
accompanied by a series of motor
impairments, including weakness,
spasticity, impaired movement range,
and impaired motor coordination.
These impairments are mediated, in
part, by changes to neural pathways,
reflex systems, muscle, and connective
tissue. Physical rehabilitation – and
robotic therapy devices – could be
targeted at any of these impairments.
The goal of this study was therefore to
identify the contributions of three motor
impairments to a common functional
deficit – decreased active range of
motion of reaching (or decreased active
“workspace”). Briefly, the three
impairments were:
1. Increased passive tissue restraint,
which may arise due to disuse and
persistent abnormal posture of the
spastic arm [2], and could cause an
increased resistance to voluntary
movement of the arm.
2. Antagonist muscle restraint, which
could arise from reflex activation of
antagonists (spasticity), or abnormal
antagonist coactivation [3].
3. Agonist muscle weakness, arising
from destruction of key motor
centers and outflow pathways and
potentially by disuse atrophy [4].
METHODS
To distinguish these three motor
impairments, detailed mechanical
measurements were made of the arms of
five spastic hemiparetic subjects during
reaching along a motorized guide. The
device, which was used in the
configuration shown in Fig. 1, allowed
measurement of hand position and
multi-axial force generation during
guided reaching movements in the
horizontal plane, and application of
Figure 1: The Assisted Rehabilitation and
Measurement Guide (“ARM Guide”). The
subject’s forearm/hand was attached to a
handle/splint that slid along a linear
constraint via a low-friction, linear bearing.
A six-axis force/torque sensor sensed contact
forces between the hand and the constraint in
the coordinate frame shown. A computer-controlled
motor attached to a chain drive
was used to drive the hand along the
constraint. An optical encoder measured the
position of the hand along the constraint.
motorized stretches to the arm. After
each subject's workspace deficit along
the device was established, two tests
were performed to elucidate the causes
of these deficits. Each test was applied
following individual reaches by each
subject, across a set of twelve reaches:
Passive Restraint Test: To evaluate the
level of passive tissue restraint at the
workspace boundary, the ARM Guide
returned the subject’s hand to the
position from which the most recent
reach was initiated. The arm was then
moved slowly (< 4 cm/sec) back to the
workspace boundary achieved by the
most recent reach, and the force needed
to hold the passive arm at the boundary
was measured (Fig. 2, top). For
comparison, the passive force generated
by the contralateral arm (which was
ostensibly normal) at a matched
position was also evaluated. During
these slow passive movements, EMG
recordings of seven muscles
surrounding the shoulder and elbow
were used to verify that the muscles
were inactive.
Active Restraint Test: We hypothesized
that any active restraint arising from
activation of antagonist muscles during
reaching would manifest itself as an
increased stiffness following reaching,
while the subject was still activating
muscles and trying to move beyond the
boundary. To evaluate this stiffness, a
small stretch (the “terminal stretch”, 4
cm amplitude, bell-shaped velocity
trajectory with a peak velocity of 15
cm/sec) was applied to the arm when
hand velocity had dropped and
remained below 1 mm/sec for 150
msec. An identical small stretch was
applied following the slow passive
movement of the arm through the same
range (Fig. 2 top). The restraint force
measured following the passive
movement was then subtracted from the
restraint force measured following
reaching, in order to subtract out any
Figure 2: Top: Example of force measured
along ARM Guide in y direction (see Fig. 1)
during an active reach with a spastic arm
(open circles), and during a slow passive
movement through the same range (filled
circles). Each movement was followed by an
identical 4 cm terminal stretch (labeled TS).
Middle: Expanded view of differential force
(i.e. Fy for TS following reach minus
Fy for TS following passive movement.) dK =
active stiffness of arm. Regression to find dK
was performed only over first 200 msec to
minimize possible effects of voluntary
intervention by subject. Bottom: Horizontal
off-axis force during reach (open circles) and
during passive movement (filled circles).
passive forces common to the two
conditions, such as those arising from
passive stiffness, inertia, and damping.
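As a sketch of this differential analysis (with made-up force traces standing in for recorded data; the sampling rate, stretch shape, and stiffness values below are hypothetical):

```python
import numpy as np

fs = 1000                        # assumed sampling rate [Hz]
t = np.arange(0, 0.4, 1 / fs)    # 400 ms of the terminal stretch
x = 4.0 * t / t[-1]              # simplified ramp standing in for the 4 cm stretch

# Hypothetical restraint forces [N] during the identical stretch applied
# after an active reach and after a slow passive movement.
k_passive, k_active = 2.0, 7.0   # invented stiffnesses [N/cm]
f_passive = k_passive * x
f_reach = k_active * x

# Subtracting the passive trace removes forces common to both conditions
# (passive stiffness, inertia, damping); the difference reflects restraint
# from muscle activation alone.
df = f_reach - f_passive

# Regress differential force on position over the first 200 ms only,
# to minimize possible voluntary intervention by the subject.
n = int(0.2 * fs)
dK, _ = np.polyfit(x[:n], df[:n], 1)  # slope = active stiffness dK [N/cm]
```

On these synthetic traces the recovered slope dK equals the difference between the two assumed stiffnesses, which is the quantity reported as active stiffness in the Results.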
Five subjects were tested, each having
suffered a hemispheric brain injury
(four ischemic stroke, one traumatic
brain injury) at least two years
previously. The subjects had a wide
range of movement ability as gauged by
a standard clinical exam. The two
subjects with the greatest movement
ability exhibited workspace deficits
during free movement, yet had a full
active range of motion during reaching
along the ARM Guide. To induce a
workspace deficit along the ARM
Guide, these subjects (D and E) were
loaded with a light spring load (stiffness
2.5 N/cm). All subjects had mild to
moderate spasticity in elbow flexor
muscles as detected manually.
RESULTS
All subjects showed highly repeatable
active range of motion as they reached
along the ARM Guide: the standard
deviation of the final hand resting
position was less than 1.5 cm for all
subjects, while mean movement
amplitudes ranged from 7.0 to 16.0 cm
across subjects. The well-defined limit
to active range of motion occurred well
before the end of the passive range of
motion of the arm. Specifically, the
subjects' arms stopped moving at least
7.0 cm before the mechanical limit to
passive range of motion determined
manually by the experimenter. Thus,
the cause of the workspace boundary
was not a passive mechanical limit to
either elbow extension or shoulder
flexion.
The subtraction described above yielded
the restraint force due solely to
coactivation of muscles at the
workspace boundary (Fig. 2, middle).
For comparison, the terminal stiffness
of the contralateral arm following
matched, targeted reaching movements,
and following slow passive movement
through the same range, was evaluated
in a similar fashion.
Figure 3: Top: Passive restraint force for
subjects A - E at the workspace boundary.
Upper case = spastic arm; lower case =
contralateral arm. Bottom: Active stiffness at
the workspace boundary. Asterisks denote
significant differences between spastic and
contralateral arms (t-test, p < .05). Bars = 1
SD.
A striking feature of force development
during reaching was that all subjects
generated large, perpendicular forces
against the ARM Guide with the spastic
arm. The forces were greatest in the
horizontal plane, were medially
directed, and reached a maximum near
the end of the range of motion (Fig. 2
bottom). For all subjects, the horizontal
contact force at the end of the range of
motion was significantly more medial
(by more than 20.0 N) than the horizontal
force generated by the contralateral arm
(one-sided t-test, p < .0001). We have
shown previously [5] that such medial
contact force generation is consistent
with clinical descriptions of the
abnormal extension muscle synergy (i.e.
elbow extension coupled with shoulder
internal rotation and adduction).
Mechanical Tests to Determine
Origins of Workspace Deficits
Interspersed with reaches to the
workspace boundary, two mechanical
tests were performed on each subject’s
arm (Fig. 2). For the passive restraint
test, the subject’s relaxed arm was
slowly moved to the workspace
boundary achieved by the previous
reach. For four of five subjects, the
level of passive restraint force
generated by the spastic arm at the
workspace boundary was significantly
greater than the restraint force
generated by the contralateral arm at a
matched position (Fig. 3 top, t-test, p <
.05).
The average increase across
subjects was 4.6 N (SD 0.8).
For the active restraint test, a terminal
stretch was applied to the arm
immediately following reaching, and
compared to a terminal stretch
following slow passive movement
through the same range. For all
subjects, the difference between the
restraint force in the two conditions,
plotted as a function of hand position,
was well approximated by a linear
relationship (Fig. 2 middle). The mean
variance accounted for by linear
regression of this relationship across all
subjects was 0.86 (SD 0.05) for the
spastic arms, and 0.85 (SD 0.10) for the
contralateral arms. As judged by the
slope of the differential force response,
the stiffness of the impaired arm
following reaching was increased by an
average of 5.3 N/cm (SD 2.3) across
subjects compared to arm stiffness
following passive movement (Fig. 3).
Similarly, arm stiffness increased in the
contralateral arm following matched
reaching movements as compared to
following passive movement by an
average of 5.5 N/cm (SD 1.6). These
differences were significantly different
from zero (t-test, p < .001), but not from
each other.
On a subject-by-subject
basis, only one subject showed a
statistically greater active stiffness in
the spastic arm.
DISCUSSION AND CONCLUSION
The increased passive tissue restraint
we measured most likely resulted from
disuse of the spastic arm. Muscle,
tendon, and joint capsules tend to
shorten and stiffen when held in a
shortened position for an extended time
period [2]. Since spastic hemiparetic
patients often have difficulty moving
their arm across the full workspace, and
typically decline to use the spastic arm
in favor of the contralateral arm, one
would expect to observe changed
passive tissue properties. Such changes
have been frequently observed in the
lower extremity after brain injury [e.g.
6], and have been suggested to occur at
the elbow [7].
The finding that active stiffness of the
spastic arms was comparable to that of
the contralateral arms was surprising.
All subjects had clinically detectable
spasticity in their elbow flexor muscles.
Also, all subjects exhibited gross
patterns of abnormal muscle
coactivation during reaching, as
witnessed by the generation of large
off-axis contact forces. Despite these
possible indicators of antagonist
restraint, however, the stiffness
measurements demonstrated that the net
effect of reflex-based antagonist
activation and abnormal antagonist
coactivation was not excessive,
compared to antagonist levels during
normal movement (i.e., with the
contralateral arm).
A Rationale for Robotic Therapy
Based on these results, we suggest that
a rational plan for treating workspace
deficits in chronic brain injury is to
target agonist weakness and passive
tissue restraint. Robotic therapy devices
could help implement such treatment by
providing active assist exercise. The
principle of active assist exercise is to
complete a desired movement for the
patient if the patient is unable. The
effect of such exercise is to interleave
repetitive movement attempts and
passive range of motion exercise.
Repetitive movement exercise, in which
an individual attempts repeatedly to
activate damaged motor pathways, has
shown promise in improving agonist
strength in the hand [8]. Passive range
of motion exercise, in which shortened
soft tissues are extended and held in a
lengthened position, can help alleviate
passive tissue restraint [2]. By
interleaving these two exercises via
active assistance, robotic therapy
devices could address both passive
tissue restraint and agonist weakness in
a single, efficient exercise.
The reaching guide used in this study
provides an example of a simple, cost-effective
means to provide active assist
therapy for reaching movements across
the user’s workspace. The device makes
use of a passive linear constraint to
guide movement along desired straight-line
reaching trajectories. The passive
constraint can be moved and locked to
allow reaching in different directions
across the workspace. Thus, only a
single actuator is required to assist
reaching in a wide variety of directions.
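The active-assist logic along such a single-actuator guide might be sketched as follows. This is an illustration only, not the ARM Guide's actual control law; the stall threshold and gain are invented:

```python
def assist_force(x, x_target, v, stall_speed=0.1, gain=5.0):
    """Assistive force [N] along a 1-DOF guide.

    x, x_target: hand and target positions [cm]; v: hand velocity [cm/s].
    The motor stays out of the way while the patient makes progress, and
    completes the reach only when movement stalls short of the target.
    """
    if x >= x_target:             # reach completed: no assistance needed
        return 0.0
    if abs(v) > stall_speed:      # patient still moving: let them do the work
        return 0.0
    return gain * (x_target - x)  # proportional assist toward the target

# Example: stalled 5 cm short of the target, the guide assists;
# while still moving, it does not.
stalled = assist_force(5.0, 10.0, 0.0)
moving = assist_force(5.0, 10.0, 2.0)
```

By engaging the motor only when progress stalls, such a scheme interleaves repetitive movement attempts with passive range of motion, as proposed in the text.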
A final consideration is the abnormal
coordination patterns we observed in
the subjects. Mechanically completing a
movement for a person may encourage
use of abnormal muscle synergy
patterns, since the person may develop
more force for reaching when using the
pattern, and since any misdirected (i.e.
off-axis) forces will be counteracted by
the mechanical assistance.
Incorporating feedback of off-axis force
generation during guided reaching may
enhance development of coordinated
movement. One approach is to provide
visual or auditory feedback of off-axis
contact forces. Another approach is to
reduce the stiffness of the guiding
mechanism, so that if a user exerts large
off-axis forces, the arm will deviate
from the desired reaching path.
Acknowledgements: The authors
gratefully acknowledge the support of
NIDRR Field-Initiated Grant
H133G80052 and a Whitaker Foundation
Biomedical Engineering Research Grant
to DJR.
Contact:
David J. Reinkensmeyer, Ph.D.
Dept. of Mechanical and Aerospace
Engineering
4200 Engineering Gateway
University of California, Irvine 92697-3975
[email protected]
References
[1] Reinkensmeyer DJ, Dewald JPA,
Rymer WZ. Robotic devices for
physical rehabilitation of stroke
patients: Fundamental requirements,
target therapeutic techniques, and
preliminary designs. Tech. and
Disability 5:205-215, 1996.
[2] Goldspink G, Williams PE. Muscle
fibre and connective tissue changes
associated with use and disuse. In: Ada
L, Canning C, eds. Foundations for
practice: Topics in neurological
physiotherapy. Heinemann, 1990:197-218.
[3] Hammond MC, Fitts SS, Kraft GH,
Nutter PB, Trotter MJ, Robinson LM.
Co-contraction in the hemiparetic
forearm: quantitative EMG evaluation.
Arc Phys Med Reh 1988;69:348-51.
[4] Bohannon RW. Measurement and
nature of muscle strength in patients
with stroke. J Neuro Rehab
1997;11:115-125.
[5] Reinkensmeyer DJ, Dewald JPA,
Rymer WZ. Guidance-based
quantification of arm impairment
following brain injury: A pilot study.
To appear, IEEE Trans Reh Eng, 1999.
[6] Sinkjaer T, Magnussen I. Passive,
intrinsic, and reflex-mediated stiffness
in the ankle extensors of hemiparetic
patients. Brain 1994;117:355-363.
[7] Lee WA, Boughton A, Rymer WZ.
Absence of stretch reflex gain
enhancement in voluntarily activated
spastic muscle. Exp Neurology
1987;98:317-335.
[8] Butefisch C, Hummelsheim H,
Denzler P, Mauritz K. Repetitive
training of isolated movement improves
the outcome of motor rehabilitation of
the centrally paretic hand. J Neurol.
Sciences 1995;130:59-68.
TREMOR SUPPRESSION THROUGH FORCE FEEDBACK
Stephen Pledgie1, Kenneth Barner2, Sunil Agrawal3
University of Delaware
Newark, Delaware 19716
Tariq Rahman4
duPont Hospital for Children
Wilmington, Delaware 19899
Abstract
This paper presents a method for
designing non-adaptive force feedback
tremor suppression systems that achieve
a specified reduction in tremor energy.
Position, rate, and acceleration feedback
are examined and two techniques for the
selection of feedback coefficients are
discussed. Both techniques require the
development of open-loop human-machine
models through system identification.
It is demonstrated that non-adaptive
force feedback tremor suppression
systems can be successfully designed
when accurate open-loop human-machine
models are available.
1. Introduction
Tremor is an involuntary, rhythmic, oscillatory movement of the body
[2]. Tremor movements are typically
categorized as being either physiological
or pathological in origin. Physiological
tremor pervades all human movements,
both voluntary and involuntary, and is
generally considered to exist as a consequence of the structure, function, and
physical properties of the neuromuscular
and skeletal systems [13]. Its frequency
varies with time and lies between 8 and
12 Hz. Pathological tremor arises in
cases of injury and disease and is typically of greater amplitude and lower frequency than physiological tremor. In its
mildest form, pathological tremor impedes the activities of daily living and
hinders social function. In more severe
cases, tremor occurs with sufficient amplitude to obscure all underlying voluntary activity [1, 3].
The medical and engineering research communities have invested considerable time and effort in the development of viable physiological and
pathological tremor suppression technologies. Physiological tremor suppression is of particular value in applications
1: Biomechanics and Movement Science Program
2: Department of Computer and Electrical Engineering
3: Department of Mechanical Engineering
4: Extended Manipulation Laboratory
such as teleoperation and microsurgery
where slight rapid movements, whether
voluntary or involuntary, can have far
reaching consequences.
Pathological
tremor suppression is generally motivated by a desire to improve the quality
of life for individuals stricken with abnormal tremor conditions.
A number of digital filtering algorithms have been developed for the purpose of removing unwanted noise from
signals of interest and have thus found
application in tremor suppression.
Riviere and Thakor have investigated the
application of adaptive notch filtering for
the purpose of suppressing pathological
tremor noise during computer pen input
[10, 11]. When a reference of the noise
signal is available, adaptive finite impulse response (FIR) filters can produce
a closed-loop frequency response very
similar to that of an adaptive notch filter
[14]. Gonzalez et al. developed a digital
filtering algorithm that utilized an optimal equalizer to equilibrate a tremor
contaminated input signal and a target
signal that the subject attempted to follow on a computer screen [6]. Inherent
human tracking characteristics, such as a
relatively constant temporal delay and
over and undershoots at target trajectory
extrema, were incorporated in a "pulled-optimization" process designed to minimize a measure of performance similar to
the squared error of the tracking signal.
Force feedback systems implement
physical intervention methodologies designed to suppress tremor behavior.
Several projects have investigated the
application of viscous (velocity dependent) resistive forces to the hand and wrist
of tremor subjects for the purpose of
suppressing tremor movements [3, 4, 12,
14]. Experimentation with varying levels
of velocity dependent force feedback
showed, qualitatively, that tremor
movements could be increasingly suppressed with increasing levels of viscous
force feedback, but that concurrent impedance of voluntary movement may occur.
Previous investigations into
non-adaptive force feedback tremor suppression systems have not utilized quantitative performance criteria during the
design of the feedback control system.
They addressed the question of whether
or not velocity dependent resistive forces
(damping) could effectively suppress
tremor movements, but were not concerned with achieving a specified statistical reduction in the tremor. Additionally, the possibility of incorporating
position and acceleration feedback to
achieve improved performance was not
addressed in these studies.
The objective of this research was
the development of a methodology that
incorporates quantitative performance
criteria as well as position, rate, and acceleration feedback into the design of a
non-adaptive force feedback tremor suppression system. The remainder of this
paper is divided into five sections. Section 2 presents the results of an analysis
of pathological tremor movements. The
design process for the force feedback
system is described in Section 3. Next, a
method of system identification for the
human-machine system is discussed.
Section 5 presents the results of an
evaluation of the force feedback system.
Finally, the paper is completed with a
brief discussion and concluding remarks.
2. Analysis of Tremor Movements
An investigation into the spatiotemporal characteristics of tremor
movements was performed to gain insight into the spatial distribution and
time-frequency properties of pathological
tremor movements. Previous investigations into tremor frequency have typically applied the Fast Fourier Transform
(FFT) algorithm to a sampled data sequence to obtain information regarding
the exact frequency content of the data.
However, no information with respect to
the evolution of the frequency content
over time is generated with the FFT. It is
for this reason that a time-frequency
analysis of pathological tremor movements was undertaken. The spatial distribution of tremor movements was also
examined. A tremor suppression system
could potentially take advantage of
unique temporal and spatial distributions
in the tremor.
Experimental Design
A broad set of experiments was
developed to examine the pertinent
tremor characteristics. Five tremor
subjects, ages 18 to 91, participated in
the study.
The tremor subjects were qualitatively
categorized with respect to the severity
of their tremor. Two subjects possessed
the ability to write in a somewhat
legible manner and received a low
severity label. Relatively large tremor
amplitude that prevented legible writing
was observed in two of the subjects.
The remaining tremor subject exhibited
high variability in tremor amplitude
and, as such, received a variable
severity label. The origin of the tremor
in subjects B, D, and E was unknown
because no medical diagnosis was
available.
The subjects performed target-tracking
tasks while seated in front of a 17”
computer display. The position of an
on-screen cursor was controlled by
manipulating a stylus attached to the
end-effector of the PHANToM, a small
robotic arm used in haptic interfaces.

Table 1. Subject information.

Subject  Age  Gender  Tremor Severity  Source
A        18   M       Var.             Head injury
B        72   M       Mod.             ?
C        71   M       Mod.             Parkinson's
D        80   F       Low              ?
E        91   M       Low              ?
Figure 1. Experimental Setup.
A target tracking task required the subject
to follow an on-screen target with a
cursor as it propagated along a displayed
straight line or sinusoidal pattern. The
horizontal position of the PHANToM's
end-effector controlled cursor location in
a manner analogous to computer mouse
input. Pattern orientation, shape, and size
as well as target velocity were
systematically varied across a number of
trials. End-effector position was sampled
at 100 Hz throughout each task.
Data Analysis
The frequency content of the tremor
subjects' movements was estimated using
both Welch's average periodogram
method and the Short-Time Fourier
Transform. Tremor frequencies were
selected as those frequencies at which the
energy distribution contained a distinct
peak. The spatial distribution of the
tremor movements was calculated by
first isolating the higher-frequency
tremor "noise" component with a 5th-order
IIR highpass filter and then counting the
number of data points within each cell of
a two-dimensional mesh.
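The frequency analysis can be sketched on a synthetic tracking signal. A plain periodogram via NumPy's FFT stands in here for Welch's method, and the signal parameters (a 0.5 Hz voluntary movement plus a 5 Hz tremor) are invented for illustration:

```python
import numpy as np

fs = 100                                 # sampling rate used in the study [Hz]
t = np.arange(0, 10, 1 / fs)

# Synthetic movement: slow voluntary tracking plus a 5 Hz tremor component.
voluntary = 5.0 * np.sin(2 * np.pi * 0.5 * t)
tremor = 0.5 * np.sin(2 * np.pi * 5.0 * t)
pos = voluntary + tremor

# Periodogram of the movement.
freqs = np.fft.rfftfreq(len(pos), 1 / fs)
power = np.abs(np.fft.rfft(pos)) ** 2

# Energy at or below about 1 Hz is attributed to voluntary movement; the
# tremor frequency is selected as the spectral peak above 1 Hz.
mask = freqs > 1.0
tremor_freq = freqs[mask][np.argmax(power[mask])]
```

Averaging such periodograms over overlapping windows (Welch's method) would reduce variance, and replacing the single FFT with windowed FFTs over time gives the Short-Time Fourier Transform used for the time-frequency plots.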
Results
As shown in Table 2, little variation was observed in the tremor frequencies across the various target tracking
tasks when Welch’s average periodogram method was employed to find the
spectral energy of the movement over the
entire task time interval. Subject C consistently exhibited tremor with two distinct frequency components and subject
A’s tremor was by far the most variable
and possessed a rather broad distribution
of energy with a mild peak.
Each category of tremor (low,
moderate, and variable) exhibited a
unique time-frequency relationship, as
illustrated in Figure 2. The level of color
on the plot indicates the intensity of the
movement at a particular time and frequency. Coloration observed at or below
approximately 1 Hz represents the voluntary movement and that above 1 Hz
can be attributed to tremor movement. A
constant frequency and magnitude characterized the moderately severe tremor
Table 2. Mean tremor frequencies.
[Hz]
[Hz]
Subject
Mean Freq.
Variance
A
B
C
D
E
3.61
4.03
4.79, 8.78
5.04
5.02
0.21
0.03
0.03, 0.06
0.01
0.01
- 19 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
movements of subjects B and C (Figure
2.A). Low severity tremor (Figure 2.B)
occurred at a relatively constant frequency but with variable magnitude
during the task. Subject A’s tremor was
highly variable (Figure 2.C).
The spatial distribution of tremor
movements was found to be non-uniform
for all of the subjects. In general, the
spatial distributions were highly elliptical, indicating a predominant direction of
tremor movement.
Three conclusions regarding
pathological tremor characteristics were
made based on the results of the target
tracking tasks: 1.) Tremor frequency is
relatively invariant with respect to the direction and speed of movement. 2.)
Tremor frequency during task performance is relatively constant, but the intensity, or amplitude, of the tremor may
vary. 3.) Tremor movements possess
non-uniform spatial distributions.
The conclusions stated above suggest that the methodology behind the design of a force feedback tremor suppression system can include the assumption
of a constant tremor frequency.
Figure 2. Time-frequency plots. A.) Moderate tremor. B.) Low tremor. C.) Variable tremor.

3. Modification of the Human-Machine Frequency Response

The open-loop properties of the human-machine system are modeled with a linear second order time-invariant transfer function, as shown in the forward path of Figure 3. The plant possesses a mass M, damping C, and stiffness K that represent the combined properties of the human limb and the robotic arm as viewed at the end-effector of the PHANToM. This approach was motivated by the work of Dolan et al. and Hollerbach on the impedance characterization of the human arm [5, 7].
Second order negative feedback was generated by the manipulator to create the closed-loop system depicted in Figure 3, which has the transfer function

    T(s) = 1 / [ (M + a1)s^2 + (C + a2)s + (K + a3) ]    (1)
The feedback coefficients a1, a2, and a3 impact the effective mass, damping, and stiffness of the closed-loop system in an additive fashion. The magnitude response of the closed-loop system is a function of the plant parameters M, C, and K as well as the feedback coefficients and can be expressed as

    R(ω) = 1 / sqrt( [K + a3 − (M + a1)ω^2]^2 + (C + a2)^2 ω^2 )    (2)
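Equation (2) is straightforward to evaluate numerically. The sketch below uses illustrative plant parameters of the same order of magnitude as Tables 3-5 (not any specific subject's estimates) and shows how added rate feedback deepens the attenuation at a 5 Hz tremor frequency while leaving the zero frequency gain untouched.

```python
import numpy as np

def magnitude_response(w, M, C, K, a1=0.0, a2=0.0, a3=0.0):
    """Closed-loop magnitude response R(w) of Equation (2)."""
    return 1.0 / np.sqrt((K + a3 - (M + a1) * w**2) ** 2
                         + (C + a2) ** 2 * w**2)

# Illustrative plant parameters (order of magnitude of Tables 3-5).
M, C, K = 0.5, 10.0, 200.0
wt = 2 * np.pi * 5.0                     # 5 Hz tremor frequency [rad/s]

open_loop = magnitude_response(wt, M, C, K)
closed_loop = magnitude_response(wt, M, C, K, a2=50.0)   # extra rate feedback

extra_attenuation_db = 20 * np.log10(open_loop / closed_loop)
```

With a3 = 0, the response at ω = 0 is 1/K for any a1 and a2, which is the low frequency preservation argument made below.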
The feedback coefficients are selected to increase the attenuation at a specified tremor frequency and preserve the low frequency magnitude response of the open-loop system. Figure 4 illustrates the design methodology, where the closed-loop system produces a desired attenuation Ad at a designated tremor frequency ωt but does not introduce additional attenuation at frequencies below a designated passband frequency ωp. This tremor suppression technique is not well suited for individuals whose tremor frequency lies very close to voluntary movement frequencies.

Figure 3. Closed-loop human-machine system with 2nd order feedback: forward path 1/(Ms^2 + Cs + K) from F(s) to X(s), with negative feedback a1s^2 + a2s + a3.

Figure 4. Illustration of the magnitude response modification technique. The closed-loop system increases the attenuation at the tremor frequency while ideally not impeding lower frequency voluntary movements.

Setting ω to zero in Equation (2) reveals that a nonzero position feedback coefficient a3 will introduce undesirable low frequency attenuation in the closed-loop system. For this reason, the position feedback coefficient a3 is set to zero.

The first technique for selecting the feedback coefficients permits the selection of either the rate or acceleration feedback coefficient. First, the open-loop magnitude response of the human-machine system at a tremor frequency ωt is determined by evaluating Equation (1) with estimates of the plant parameters and zero feedback. Next, a desired level of closed-loop attenuation for movements at the tremor frequency is selected and used to evaluate one of the following expressions, depending on whether acceleration (a1) or rate (a2) feedback is desired.
    a1 = (1/ωt^2) [ K + sqrt( (1/R(ωt))^2 − C^2 ωt^2 ) ] − M    (3)
    a2 = (1/ωt) sqrt( (1/R(ωt))^2 − (K − M ωt^2)^2 ) − C    (4)
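Equations (3) and (4) can be checked directly: substituting the computed gain back into the magnitude response of Equation (2) should reproduce the requested closed-loop magnitude at ωt. The plant parameters below are illustrative values, not a specific subject's estimates.

```python
import numpy as np

# Illustrative plant parameters (order of magnitude of Tables 3-5); a3 = 0.
M, C, K = 0.5, 10.0, 200.0
wt = 2 * np.pi * 5.0                      # designated tremor frequency [rad/s]

def closed_loop_mag(w, M, C, K, a1=0.0, a2=0.0):
    """Magnitude response of Equation (2) with a3 = 0."""
    return 1.0 / np.sqrt((K - (M + a1) * w**2) ** 2 + (C + a2) ** 2 * w**2)

def accel_gain(R_t, wt, M, C, K):
    """Equation (3): acceleration gain a1 giving magnitude R_t at wt."""
    return (K + np.sqrt((1.0 / R_t) ** 2 - C**2 * wt**2)) / wt**2 - M

def rate_gain(R_t, wt, M, C, K):
    """Equation (4): rate gain a2 giving magnitude R_t at wt."""
    return np.sqrt((1.0 / R_t) ** 2 - (K - M * wt**2) ** 2) / wt - C

# Ask for 20 dB below the open-loop magnitude at the tremor frequency.
R_desired = closed_loop_mag(wt, M, C, K) / 10.0
a1 = accel_gain(R_desired, wt, M, C, K)
a2 = rate_gain(R_desired, wt, M, C, K)
```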
The second technique for selecting the rate and acceleration feedback coefficients directly addresses the issue of preserving the low frequency magnitude response of the open-loop human-machine system. In this case, two additional frequency-attenuation pairs are selected: the zero frequency gain of the open-loop system and the open-loop attenuation at a frequency ωp that represents the highest frequency for which the closed-loop magnitude response should approximate the open-loop magnitude response (see Figure 4). A general least-squares fitting algorithm is used to select the feedback coefficients that will produce a closed-loop magnitude response that is a least-mean-square approximation to the desired response described by the frequency-attenuation pairs.
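A sketch of this second technique, using a general-purpose least-squares routine in place of whatever fitting algorithm the authors used; the plant parameters, the 1 Hz passband edge, and the 20 dB attenuation goal are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative plant parameters (order of magnitude of Tables 3-5).
M, C, K = 0.5, 10.0, 200.0
wp = 2 * np.pi * 1.0          # passband edge: preserve the response below 1 Hz
wt = 2 * np.pi * 5.0          # designated tremor frequency

def mag(w, a1, a2):
    """Closed-loop magnitude response of Equation (2) with a3 = 0."""
    return 1.0 / np.sqrt((K - (M + a1) * w**2) ** 2 + (C + a2) ** 2 * w**2)

# Frequency-attenuation pairs: the open-loop gain at (near) DC and at wp,
# plus 20 dB of additional attenuation at wt.
w_pts = np.array([1e-3, wp, wt])
target = np.array([mag(1e-3, 0, 0), mag(wp, 0, 0), mag(wt, 0, 0) / 10.0])

def residuals(gains):
    # Fit in dB so the widely different magnitudes are weighted evenly.
    a1, a2 = gains
    return 20 * np.log10(mag(w_pts, a1, a2)) - 20 * np.log10(target)

fit = least_squares(residuals, x0=[0.0, 0.0])
a1, a2 = fit.x
extra_db = 20 * np.log10(mag(wt, 0, 0) / mag(wt, a1, a2))
```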
4. System Identification
The apparent mass, damping, and stiffness of the open-loop human-machine system are required in order to select the appropriate rate and acceleration feedback coefficients. These parameters were estimated by approximating the frequency response of a discrete-time autoregressive moving average (ARMA) human-machine model with that of a second order continuous-time model.

To generate the ARMA model of the human-machine system, a band-limited zero-mean white noise force profile was applied by the manipulator while the tremor subject grasped the attached stylus. The resulting movement profile was then sampled at 1 kHz and filtered using an adaptive FIR filter to remove the active tremor component that does not arise from the physical properties of the system. Next, the least-squares modified Yule-Walker method was employed to determine the coefficients of the ARMA model [9]. The discrete-time frequency response of the ARMA model was then mapped, in a least-squares sense, to a second order continuous-time model.
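The final mapping step can be sketched as a linear least-squares problem, since 1/H(jω) = (K − Mω^2) + jCω is linear in the unknown parameters. The sketch below recovers M, C, and K from noiseless synthetic response samples; the ARMA estimation and adaptive filtering stages of the actual procedure are omitted, and the true parameter values are illustrative.

```python
import numpy as np

# True (unknown) parameters used only to synthesize a "measured" response.
M_true, C_true, K_true = 0.5, 10.0, 200.0

w = np.linspace(0.5, 60.0, 200)                        # rad/s
H = 1.0 / (K_true - M_true * w**2 + 1j * C_true * w)   # measured samples

# 1/H(jw) = (K - M w^2) + j C w  is linear in (M, C, K):
inv_H = 1.0 / H
A = np.column_stack([
    np.concatenate([-w**2, np.zeros_like(w)]),            # M column
    np.concatenate([np.zeros_like(w), w]),                # C column
    np.concatenate([np.ones_like(w), np.zeros_like(w)]),  # K column
])
b = np.concatenate([inv_H.real, inv_H.imag])
M_est, C_est, K_est = np.linalg.lstsq(A, b, rcond=None)[0]
```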
5. Results
The tremor suppression technique described in Section 3 was evaluated on three tremor subjects, C, D, and E, as subject B was unavailable and the variable tremor of subject A was not suitable for evaluation. The experimental setup was identical to that used during the target tracking tasks. Open-loop human-machine models were developed, as described above, and suitable feedback coefficients were calculated. Next, the force feedback controller was implemented using the robotic manipulator, and the ability of the system to create the desired tremor reduction was evaluated.
Tables 3, 4, and 5 present the estimated mass, damping, and stiffness values. These values represent the combined parameters of both the human and the robotic arm. Subjects A and C, who possessed the most severe tremor, also exhibited the greatest stiffness (i.e., rigidity).
Once the open-loop human-machine models were developed, the feedback coefficients required to produce 10 dB and 20 dB of tremor attenuation were calculated. Three feedback configurations were examined: strictly rate feedback, strictly acceleration feedback, and the coexistence of rate and acceleration feedback (via the least-squares method). It was found that the level of damping required for the “strictly rate feedback” configuration designed to generate 20 dB of tremor attenuation was prohibitively large. For this reason, the ability of the system to create 20 dB of tremor attenuation using strictly rate feedback was not evaluated.
The tremor subjects were asked to grasp the stylus attached to the end-effector and manipulate it slowly throughout the entire workspace. The force feedback configurations were individually implemented and applied during separate trials. During each trial, the robotic arm operated at 1 kHz.
The reduction in the tremor movement power was used as a measure of the tremor attenuation achieved through the force feedback. Table 6 shows the average levels of tremor attenuation achieved with each feedback configuration. When a 10 dB reduction in tremor amplitude was sought, rate feedback provided, on average, the best performance. The coexistence of rate and acceleration feedback provided the best performance when 20 dB of tremor attenuation was sought.

Table 3. Mass estimates for the open-loop human-machine system [Kg].

Subject    X        Y        Z
A          0.547    0.505    1.176
C          0.568    1.073    0.772
D          0.245    0.286    0.292
E          0.249    0.736    0.292

Table 4. Damping estimates for the open-loop human-machine system [Ns/m].

Subject    X        Y         Z
A          4.969    15.317    28.819
C          6.121    10.913    19.646
D          4.189    8.515     7.281
E          7.556    16.219    8.356

Table 5. Stiffness estimates for the open-loop human-machine system [N/m].

Subject    X          Y          Z
A          190.335    312.758    219.873
C          264.673    300.219    283.694
D          16.637     213.570    186.215
E          47.824     68.562     53.293

Table 6. Avg. tremor energy reduction [dB].

Feedback Config.    Goal: 10dB attenuation    Goal: 20dB attenuation
Rate                10.679                    (not tested)
Acceleration        7.752                     14.391
Rate & Accel.       8.811                     15.073
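The evaluation metric, reduction in tremor movement power expressed in dB, can be sketched as follows on synthetic trials. The signal parameters are illustrative, and the highpass isolation step mirrors the one described earlier for the target tracking analysis.

```python
import numpy as np
from scipy import signal

fs = 1000.0                               # robotic arm / sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
voluntary = 0.05 * np.sin(2 * np.pi * 0.5 * t)
tremor = 0.01 * np.sin(2 * np.pi * 5.0 * t)

no_feedback = voluntary + tremor
with_feedback = voluntary + tremor / np.sqrt(10.0)   # 10 dB less tremor power

def tremor_power(x):
    """Mean squared value of the tremor band (above ~1 Hz)."""
    b, a = signal.butter(5, 1.5, btype="highpass", fs=fs)
    return np.mean(signal.filtfilt(b, a, x) ** 2)

reduction_db = 10 * np.log10(tremor_power(no_feedback)
                             / tremor_power(with_feedback))
```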
Figure 5 shows subject C’s performance on a pattern-tracing task. A desired spatial trajectory was displayed on the computer screen and the subject was instructed to trace the pattern with a cursor controlled through manipulating the stylus. Both rate and acceleration feedback were applied in an attempt to achieve 20 dB of tremor attenuation.
6. Discussion & Conclusions

Two techniques for the design of non-adaptive force feedback tremor suppression systems have been developed. Both methods utilize quantitative frequency domain performance criteria during the selection of the gain in rate and acceleration feedback pathways. The issue of preserving voluntary movement in the presence of adequate tremor suppression can be addressed when both rate and acceleration feedback exist simultaneously.

The ability of the force feedback to produce a desired level of tremor attenuation depends on the accuracy of the parameters in the open-loop human-machine model. Only the average impedance of the human arm was characterized in this research and, for this reason, localized inaccuracies of the human-machine models may exist and lead to degraded performance. Additionally, the reflex behavior and force-velocity properties of the muscles in the human arm have not been considered.

It is suggested that future investigations utilize adaptive second order feedback that seeks an “optimal” level of tremor reduction. Additionally, higher order feedback systems could provide improved performance but may suffer from significant noise amplification and instability problems.

In conclusion, it has been demonstrated that a non-adaptive force feedback system can be designed such that movements at a designated frequency experience a specified level of attenuation. When second order feedback is present, additional frequency domain constraints, such as the preservation of lower frequency voluntary movements, can be addressed.

Figure 5. Qualitative example showing the effect of force feedback on pattern tracing performance. A.) Desired spatial pattern. B.) Performance without force feedback. C.) Improved performance with force feedback.
Acknowledgements
This research was funded by the
National Institute on Disability and
Rehabilitation Research (NIDRR) of the
U.S. Department of Education under
grant #H133E30013.
References
[1] Adelstein, B.D., Rosen, M.J., and
Aisen, M.L. Differential diagnosis of
pathological tremors according to mechanical load response. Proc. of the
RESNA 10th Annual Conf., 829 – 831,
1987.
[2] Anouti, A., and Koller, W.C. Tremor
disorders: diagnosis and management.
The Western Journal of Medicine,
162(6):510 – 514, 1995.
[3] Arnold, A.S., Rosen, M.J., and Aisen, M.L. Evaluation of a controlled-energy-dissipation-orthosis for tremor suppression. J. Electromyography and Kinesiology, 3(3):131 – 148, 1993.
[4] Beringhause, S., Rosen, M.J., and Huang, S. Evaluation of a damped joystick for people disabled by intention tremor. Proc. of the RESNA 12th Annual Conf., 41 – 42, 1989.
[5] Dolan, J.M., Friedman, M.B., and Nagurka, M.L. Dynamic and loaded impedance components in the maintenance of human arm posture. IEEE Trans. Systems, Man, and Cybernetics, 23(3):698 – 709, 1993.
[6] Gonzalez, J.G., Heredia, E.A., Rahman, T., Barner, K.E., and Arce, G.R. Filtering involuntary motion of people with tremor disability using optimal equalization. Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics, 3(3), 1995.
[7] Hollerbach, K., and Kazerooni, H.
Modeling human arm movements constrained by robotic systems. Advances in
Robotics ASME, DSC-Vol.42:19 – 24,
1992.
[8] Iaizzo, P.A., and Pozos, R.S. Analysis of multiple EMG and acceleration
signals of various record lengths as a
means to study pathological and physiological oscillations. Electromyography
and Clinical Neurophysiology, 32:359 –
367, 1992.
[9] Proakis, J.G., and Manolakis, D.G.
Digital Signal Processing: Principles,
Algorithms, and Applications. (3rd ed.).
Prentice Hall, Upper Saddle River, New
Jersey, 1996.
[10] Riviere, C.N., and Thakor, N.V.
Assistive computer interface for pen input by persons with tremor. Proc.
RESNA 1995 Conf., 440 – 442, 1995.
[11] Riviere, C.N., and Thakor, N.V.
Modeling and canceling tremor in human-machine interfaces. IEEE Engineering in Medicine and Biology,
15(3):29 – 36, 1996.
[12] Rosen, M.J., Arnold, A.S., Baiges, I.J., Aisen, M.L., and Eglowstein, S.R. Design of a controlled-energy-dissipation-orthosis (CEDO) for functional suppression of intention tremors. J. Rehabilitation Research and Development, 32(1):1 – 16, 1995.
[13] Stiles, R.N. Lightly damped hand oscillations: acceleration related feedback and system damping. J. Neurophysiology, 50(2):327 – 343, 1983.
[14] Xu, Q. Control strategies for tremor
suppression. Unpublished master’s thesis, University of Delaware, 1997.
Contact Information
Stephen Pledgie: [email protected]
Kenneth Barner: [email protected]
Sunil Agrawal: [email protected]
Tariq Rahman: [email protected]
PROCEDURAL MOTOR LEARNING IN PARKINSON’S DISEASE:
PRELIMINARY RESULTS
H. I. Krebs 1, N. Hogan 1,2, W. Hening 3, S. Adamovich 3, H. Poizner 3

1 Massachusetts Institute of Technology, Mechanical Engineering Department, Newman Laboratory for Biomechanics and Human Rehabilitation
2 Massachusetts Institute of Technology, Brain and Cognitive Sciences Department
3 Rutgers University, Center for Molecular and Behavioral Neuroscience
Abstract
The purpose of this study is to examine if PD (Parkinson disease) patients present a deficit in procedural motor learning. A portable robotic device is being used to generate forces that disturb subjects’ arm movements. Patients and age-matched controls have to learn to manipulate this “virtual mechanical environment.” Our preliminary results suggest that, indeed, PD patients present a deficit in the rate of procedural motor learning, particularly in the presence of “novelty.”
Introduction
We have been investigating "implicit
motor learning". Implicit learning
refers to acquisition without awareness
of the learned information and its
influence. In particular, we have been
investigating "procedural learning",
which is a form of implicit learning
where skill improves over repetitive
trials.
Neuroimaging results using a serial reaction time (SRT) paradigm indicated an increase in activation in structures which constitute key elements of the cortico-striatal loop, thus supporting models that posit the cortico-striatal loop as playing a significant role during implicit learning [Rauch, 1995]. Other neuroimaging studies using a pursuit rotor task indicated an increase of activity in the cortico-cerebellar loop, thus supporting models that hypothesize that procedural learning takes place in the motor execution areas [Grafton, 1994].
We speculated that the apparently different role played by the two brain loops in different paradigms could be related to the different mechanisms associated with procedural learning in a task with prominent motor demands (rotor pursuit) versus a task with more cognitive-perceptual demands (sequence learning). Therefore, we set our goal to design a procedural learning paradigm whose demands might shift from more cognitive-perceptual to motor, and to test the hypothesis that the cortico-striatal and cortico-cerebellar loop activities change as the demands of the learning task change.
We pioneered the integration of robotic
technology with functional brain
imaging [Krebs, 1995 and 1998a]. PET
was used to measure aspects of neural
activity underlying learning of the
motor task involving the right hand of
right-handed subjects, while a portable
robotic device was used to generate
conservative force fields that disturbed
the subjects’ arm movements, thereby
generating a "virtual mechanical
environment" that subjects learned to
manipulate [Shadmehr & Mussa-Ivaldi,
1994]. We found that Early Learning
activated the right striatum and right
parietal area, as well as the left parietal
and primary sensory area, and that
there was a deactivation of the left
premotor area. As subjects became
skilled at the motor task (Late
Learning), the pattern of neural activity
shifted to the cortico-cerebellar
feedback loop, i.e. there was significant
activation in the left premotor, left
primary motor and sensory area, and
right cerebellar cortex. These results
support the notion of different stages of
implicit learning (Early and Late
Implicit Learning), occurring in an
orderly fashion at different rates.
Moreover, these findings indicate that the cortico-striatal loop plays a significant role during early implicit motor learning, whereas the cortico-cerebellar loop plays a significant role during late implicit motor learning [Krebs, 1998a].
Our results were in agreement with current theories of human motor learning and memory that consider the brain composed of fundamentally and anatomically separate but interacting learning and memory systems [Schacter & Tulving, 1994]. In fact, borrowing from computer science, current theories suggest patterns of unsupervised (pre-frontal cortex), supervised (cortico-cerebellar), and reinforcement learning (cortico-striatal) in human motor learning [Alexander & Crutcher, 1990; Graybiel, 1993 & 1995 (1); Houk & Wise, 1995; Houk, 1997; Beiser, 1997; Beiser & Houk, 1998; Berns, 1997; Berns & Sejnowski, 1998].
In view of our neuroimaging results indicating that the cortico-striatal loop plays a significant role in implicit motor learning, we predicted that patients with Parkinson disease (PD) should present a deficit in the rate of motor learning while learning to manipulate a similar "virtual mechanical environment" generated by a robotic device. In what follows, we present our experimental results to date for age-matched normal subjects and PD patients.
Methods
In this pilot study, we used the novel
robot MIT-MANUS, which has been
designed for clinical neurological
applications. Unlike most industrial
robots, MIT-MANUS was designed to
have a low intrinsic end-point
impedance (i.e., back-driveable), with a
low and nearly-isotropic inertia and
friction [Hogan, 1995; Krebs, 1998b].
(1) Graybiel suggested a “blend” of unsupervised and supervised learning schemes to describe striatal processing. We suggest that reinforcement learning may be a more appropriate wording.
To date, seven right-handed subjects with parkinsonism (2 females and 5 males) participated in the study. The subjects were between 56 and 78 years old. All subjects were clinically evaluated by a trained movement disorders specialist at the time of testing and found to have mild to moderate Parkinson’s disease (Hoehn-Yahr stages 2 and 3) and minimum tremor. Patients were tested early in the morning prior to the administration of any daily medication, except for one patient (a 56-year-old female) who could not perform any function due to “freezing.” This subject received her medication 30 minutes prior to testing and her results were segregated from the off-medication patients’ group (mean age 70.2). To date, four right-handed healthy age-matched subjects were included for comparison (3 females and one male). The subjects were between 67 and 84 years old (mean 78.5). All subjects were naive to the motor learning task.

The visually-evoked and visually-guided task is similar to the one used in our neuroimaging studies [Krebs, 1998a] and consisted of moving the robot end-effector from its initial position towards a target in a point-to-point movement. The target set had a fixed number of positions in a horizontal plane, as shown in Figure 1.
FIG.1. General Arrangement and Force Fields in Different Conditions
The subject, while sitting, moved the robot end-effector in a point-to-point task in a virtual haptic environment with different force fields for each condition. (The original figure shows the plan-view target layout, the monitor display, the condition/block schedule, and the force field definition.)
The outward targets 1 to 4 were
randomly presented. The inward
homing target 0 was presented
following each of the outward targets.
Every outward target was presented an
equal number of times. Note that the
hand coordinates were different from
the visual coordinates, in order to
compensate for the rectangularity of
the monitor. The subject sat in a chair
in front of the robot and monitor, and
grasped a handle on the end-effector of
the robot. He was instructed to move
the end-effector to the presented target
within 0.8 sec. The color of the target
changed for the subsequent 0.8 sec, and
a new target was presented. Note that the monitor screen was positioned perpendicular to the subject’s line of sight; therefore, moving the end-effector handle towards the subject corresponded to moving down on the monitor. The subject’s movement was performed predominantly with the arm and forearm.
The robot measured the kinematics and
dynamics of the subject’s hand
motions, and imposed perturbation
forces as follows:
Condition 1, Motor Performance: the robot generated no perturbation, but recorded the behavior of the subject (blocks 1 & 2). The subject practiced as needed to become fully comfortable with the task.

Condition 2, Early Motor Learning: the robot measured the behavior, and also perturbed the movement of the subject (blocks 3 & 4).

Condition 3, Late Motor Learning: the robot measured the behavior, and also perturbed the movement of the subject (blocks 5 & 6). This condition differs from Condition 2 by the degree of smoothness of the motor response.

Condition 4, Negative Transfer: the characteristic of the perturbation forces was reversed (blocks 7 & 8).

Condition 5, After-Effect Motor Performance: the robot generated no perturbation, but recorded the behavior of the subject (block 9). The objective was to determine the influence of fatigue.
The perturbation forces were velocity-dependent, generating a conservative force field according to the following relations:

    [ Fx ]   [ 0  -B ] [ Vx ]
    [ Fy ] = [ B   0 ] [ Vy ]

where B is a coefficient equal to 12 (or -12) N.sec/m; the velocities in the X-Y directions (Vx, Vy) are given in m/sec; and the forces in the X-Y directions (Fx, Fy) are in Newtons, with the X-Y directions indicated in Figure 1.
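A quick numerical check of this field (the test velocity is an arbitrary illustrative value): because the matrix is skew-symmetric, the force is always perpendicular to the velocity, so the field does no net work on the hand.

```python
import numpy as np

B = 12.0   # field coefficient [N.sec/m], as in the experiment

def perturbation_force(velocity, b=B):
    """F = [[0, -b], [b, 0]] @ v : the velocity-dependent force field."""
    field = np.array([[0.0, -b],
                      [b, 0.0]])
    return field @ velocity

v = np.array([0.2, 0.1])          # illustrative hand velocity [m/s]
F = perturbation_force(v)
work_rate = float(np.dot(F, v))   # always zero: F is perpendicular to v
```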
All conditions described above were
divided into two blocks. Each block
entailed a total of 80 movements (40
movements to the outward positions
and 40 movements to the homing
position).
Preliminary Results
Normal subjects make unconstrained point-to-point movements in approximately a straight line with bell-shaped speed profiles [Flash & Hogan, 1985]. Kinematic analysis of the subjects’ movements was performed, including the mean squared difference between the measured and minimum-jerk speed profiles described above.
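The index can be sketched as follows; the minimum-jerk speed profile follows directly from the minimum-jerk position polynomial, and the movement time, distance, and ripple are illustrative values rather than experimental data.

```python
import numpy as np

def min_jerk_speed(t, T, d):
    """Speed of a minimum-jerk point-to-point reach of distance d in time T."""
    tau = t / T
    return (d / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

T, d = 0.8, 0.14                  # movement time [s], target distance [m]
t = np.linspace(0, T, 200)
ideal = min_jerk_speed(t, T, d)

# A "measured" profile: the ideal bell shape plus a small 5 Hz ripple.
measured = ideal + 0.01 * np.sin(2 * np.pi * 5.0 * t)

# Deviation-from-minimum-jerk index: mean squared speed difference.
mean_sq_diff = np.mean((measured - ideal) ** 2)
```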
This index showed a consistent pattern while learning the task. The baseline condition (blocks 1 & 2) was followed by deterioration of the performance as the force field was applied (block 3). The subsequent results showed a progressive reduction of the difference, indicating learning (blocks 4 to 6). Similar patterns were observed as subjects were challenged with a new force field with reversed characteristics (blocks 7 & 8). Subjects’ performance resembled the baseline condition after the force field was eliminated, suggesting that fatigue is not a primary factor (block 9). Figure 2 shows the learning rate assessed by the slope of the regression to the normalized mean squared speed difference, averaged across subjects of the age-matched control group and across subjects of the PD group. We used the mean squared speed difference of each group during blocks 1 and 2 as the normalizing factor. Figure 2 also shows the ratio between the learning rate of the age-matched control group and the PD patients. Note that the mean learning rate is faster for the age-matched group than for the PD group for all conditions. The control group learns on average 18% faster during Early Learning (condition 2), 3% faster during Early + Late Learning (conditions 2 and 3), and 433% faster during Negative Transfer (condition 4).
Conclusion
Existing evidence strongly suggests a role of the striatum in learning novel motor tasks. If this is actually the case, we should expect that patients with PD should present a deficit in the rate of procedural motor learning, particularly in the presence of “novelty.” Indeed, this appears to be the case. Our results indicate the largest difference between the learning rates of the age-matched subjects and the PD patient group during Early Learning and Negative Transfer (conditions 2 & 4). These conditions correspond to “novelty” scenarios. Consistent with our view of different stages of procedural motor learning, we observed minimal learning rate difference during Late Learning (condition 3), for which neuroimaging results posit a significant role for the cortico-cerebellar loop [Grafton, 1994; Krebs, 1998a].

While PD subjects achieve normal accuracy under a wide variety of feedback conditions, including remembered targets acquired without visual feedback [Poizner, 1998], they have particular difficulty in a novel task where they are required to transform from visual to proprioceptive space [Adamovich, 1997]. Our results for procedural motor learning are similar to results of procedural cognitive learning in Parkinson’s disease [Brown & Marsden, 1990; Saint-Cyr, 1988; Taylor, 1986] indicating learning deficiencies.
This result raises questions about the role of the direct and indirect pathways (i.e., the excitatory and inhibitory loops within the basal ganglia “circuitry”). One possible explanation is that the direct pathway reinforces the appropriate cortical motor pattern, while the indirect pathway brakes it [Alexander & Crutcher, 1990]. In view of our results, one might speculate that for our PD patients, “braking or switching” motor patterns is the primary learning deficiency. If so, this raises important questions about optimal rehabilitation strategies.
FIG.2. Learning Rate Ratios -- Four Age-Matched Controls versus Six PD Patients
The plot shows the learning rate ratio between the age-matched and PD groups (+18% Early Learning, +3% Early+Late Learning, +433% Negative Transfer). The number at the top of each column represents how much faster the Age-Matched Controls learned.
Grant Support

Supported in part by The Burke Medical Research Institute and NSF under Grant 8914032-BCS to MIT, and NIH 5-R01-NS-36449-02 and 2-R01-NS-28665-07 to Rutgers University.

References

Adamovich, S., Berkinblit, M., Smetanin, B., Fookson, O., Poizner, H., Influence of movement speed on accuracy of pointing to memorized targets in 3D space, Neurosci Let, 172 (1994), pp.171-174.

Alexander, G.E., Crutcher, M.D., Functional architecture of basal ganglia circuits: neural substrates of parallel processing, Trends Neurosci, 13 (1990), pp.266-271.

Alexander, G.E., Basal ganglia-thalamocortical circuits: their role in control of movements, J Clin Neurophysio, 11 (1994), pp.420-431.

Beiser, D.G., Hua, S.E., Houk, J.C., Network models of the basal ganglia, Cur Op Neurobi, 7 (1997), pp.185-190.

Beiser, D.G., Houk, J.C., Model of cortical-basal ganglionic processing: encoding the serial order of sensory events, J Neurophysio, 79 (1998).

Berns, G.S., Cohen, J.D., Mintun, M.A., Brain regions responsive to novelty in the absence of awareness, Sci, 276 (1997), pp.1272-1275.
Berns, G.S., Sejnowski, T.J., A
computational model of how the basal
ganglia produce sequences, J Cog
Neurosci, 10:1(1998), pp.108-121.
Brown, R.G., Marsden, C.D., Cognitive function in Parkinson’s disease: from description to theory, TINS, 13:1 (1990), pp.21-29.
Flash, T., Hogan, N., The coordination of arm movements: an experimentally confirmed mathematical model, J Neurosci, 5 (1985), pp.1688-1703.
Grafton, S.T., Woods, R.P., Tyszka, Functional imaging of procedural motor learning: relating cerebral blood flow with individual subject performance, Human Brain Mapping, 1 (1994), pp.221-234.
Graybiel, A.M., Functions of the
nigrostriatal system, Clin Neurosci, 1
(1993), pp.12-17.
Graybiel, A.M., Aosaki, T., Flaherty,
A.W., Kimura, M., The basal ganglia
and adaptive motor control, Sci, 265
(1994), pp.1826-1831.
Hogan, N., Krebs, H.I., Sharon, A.,
Charnnarong, J., Interactive robotic
therapist, U.S. Patent #5,466,213, MIT.
Houk, J.C., Wise, S.P., Distributed
modular architectures linking basal
ganglia, cerebellum, and cerebral
cortex: their role in planning and
controlling action, Cerebral Cortex,
5:2(1995), pp.95-110.
Houk, J.C., On the Role of the
Cerebellum and Basal Ganglia in
Cognitive Signal Processing, Progr
Brain Res, 114 (1997), pp.543-552.
Krebs, H.I., Brashers-Krug, T., Rauch,
S.L., Savage, C.R., Hogan, N., Rubin,
R.H., Fischman, A.J., Alpert, N.M.,
Robot-Aided Functional Imaging, Proc
2nd Int Symp Med Robotics & Comp
As. Surgery, (1995), pp296 to 299-E5.
Krebs, H.I., Brashers-Krug, T., Rauch,
S.L., Savage, C.R., Hogan, N., Rubin,
R.H., Fischman, A.J., Alpert, N.M.,
Robot-Aided Functional Imaging:
Application to a Motor Learning Study,
Hum Brain Mapping,6(1998),pp.59-72.
Krebs, H.I., Hogan, N., Aisen, M.L.,
Volpe, B.T., Robot-aided neurorehabilitation, IEEE Trans. on Rehab.
Eng, 6:1(1998), pp.75-87.
Poizner, H., Fookson, O., Berkinblit, M., Hening, W., Feldman, G., Adamovich, S., Pointing to remembered targets in 3D space in Parkinson’s disease, Motor Control (1997).
Rauch SL, Whalen PJ, Savage CR,
Curran T, Kendrick A, Brown HD,
Bush G, Breiter HC, Rosen BR, Striatal
recruitment during an implicit sequence
learning task as measured by functional
magnetic resonance imaging, Human
Brain Mapping, 5 (1997), pp.124-132.
Saint-Cyr, J.A., Taylor, A.E., Lang, A.E., Procedural learning and neostriatal dysfunction in man, Brain, 111 (1988), pp.941-959.
Schacter, D.L., Tulving, E. (eds),
Memory systems (1994), MIT Press.
Shadmehr, R., Mussa-Ivaldi, F.A.,
Adaptive representation of dynamics
during learning a motor task, J
Neurosci, 14:5 (1994), pp.3208-3224.
Taylor, A.E., Saint-Cyr, J.A., Lang, A.E., Frontal lobe dysfunction in Parkinson’s disease, Brain, 109 (1986), pp.845-883.
ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
ROBOT-AIDED NEURO-REHABILITATION IN STROKE:
THREE-YEAR FOLLOW-UP
H. I. Krebs1, N. Hogan1,2, B.T. Volpe3, M.L. Aisen4, L. Edelstein5, C. Diels5
1 Massachusetts Institute of Technology, Mechanical Engineering Department, Newman Laboratory for Biomechanics and Human Rehabilitation
2 Massachusetts Institute of Technology, Brain and Cognitive Sciences Department
3 Cornell University Medical College, Department of Neurology and Neuroscience, Burke Institute of Medical Research
4 Veterans Health Administration, Department of Rehabilitation and Development
5 Burke Rehabilitation Hospital
Abstract

We are applying robotics and information technology to assist, enhance, and quantify neuro-rehabilitation. Our goal is a new class of interactive, user-affectionate clinical devices designed not only for evaluating patients, but also for delivering meaningful therapy via engaging “video games.” Notably, the robot MIT-MANUS has been designed and programmed for clinical neurological applications, and has undergone extensive clinical trials for more than four years at Burke Rehabilitation Hospital. Recent reports showed that stroke patients treated daily with additional robot-aided therapy during acute rehabilitation had improved outcome in motor activity at hospital discharge, when compared to a control group that received only standard acute rehabilitation treatment. This paper will review results of a three-year follow-up of the 20 patients enrolled in that clinical trial. The three-year follow-up showed that:
• The improved outcome was sustainable over three years.
• The neuro-recovery process continued far beyond the commonly accepted 3-month post-stroke interval.
• Neuro-recovery was highly dependent on the lesion location.
Introduction
Over four million Americans suffer
from disabilities and impairments as a
result of the leading cause of
permanent disability in the U.S.: stroke.
Physical and occupational therapy provides a standard, presumably beneficial treatment, but it is labor-intensive, often requiring one or two therapists to work with each patient.
Demand for rehabilitation services is
also certain to increase in the coming
decades due to the graying of the
population.
The expected increase in the number of
stroke patients will increase the
nation’s health care financial burden,
which continues to grow above the rate
of inflation (HCFA). Until recently,
health care providers have attempted to reduce the costs of caring for patients’ rehabilitation primarily by shortening
inpatient stays. Once the practical limit
of abbreviated inpatient stays is
reached, further efficiencies will be
attainable chiefly by addressing clinical
practices themselves. Our research
suggests that robotics and information
technology can provide an overdue
transformation of rehabilitation clinics
from primitive manual operations to
more technology-rich operations.
Claims that manipulation of the impaired limb influences recovery remain controversial. Therefore, we
tested in a pilot study whether
manipulation of the impaired limb
influences recovery during the inpatient
rehabilitation period. The results were
positive and reported elsewhere (Aisen,
1997; Krebs, 1998). This paper
describes our efforts to assess whether
the previously reported improved
outcome during inpatient rehabilitation
was sustainable after discharge, or
alternatively, whether manipulation of
the impaired limb influenced the rate of
recovery during the inpatient phase, but
not the “final” plateau.
Methods
We used the novel robot MIT-MANUS, which has been designed for clinical neurological applications. Unlike most industrial robots, MIT-MANUS was designed to have a low intrinsic end-point impedance (i.e., to be back-driveable), with low and nearly-isotropic inertia and friction [Hogan, 1995; Krebs, 1998]1.
Twenty sequential hemiparetic patients
were enrolled during 1995 and part of
1996 in the pilot study. Patients were
admitted to the same hospital ward and
assigned to the same team of
rehabilitation professionals. They were
enrolled in either a robot-aided therapy
group (RT, N=10) or in a group
receiving "sham" robot-aided therapy
(ST, N=10). Both groups were
described in detail elsewhere (Aisen,
1997; Krebs, 1998). Patients and
clinicians were blinded to the treatment
group (double blind study). Both
groups received conventional therapy;
the RT group received an additional 4-5
hours per week of robot-aided therapy
consisting of peripheral manipulation
of the impaired shoulder and elbow
correlated with audio-visual stimuli,
while the ST group had an hour of
weekly robot exposure.
Twelve of these 20 inpatients were
successfully recalled and evaluated
almost three years post-stroke (of the
remaining 8 patients, 4 could not be
located, 1 died, 3 had a second stroke
or other medical complications). Six
patients in the RT group and six in the ST group were comparable in gender distribution, lesion size (RT = 53.8 ± 22.9 cm3, ST = 53.9 ± 28.2 cm3), and
length of time from stroke to follow-up (RT: 1113.3 ± 59 days, ST: 960 ± 81 days).

Footnote 1: An overview of research efforts in rehabilitation robotics at MIT, the Palo Alto VA, the Rehab Institute of Chicago, and U.C. Berkeley can be found in Reinkensmeyer et al. (1999).
There was no control over patients’
activities after hospital discharge.
The same standard assessment procedure used every other week to assess all patients during rehabilitation was used at recall three years post-hospital discharge (RT and ST groups). This assessment was performed by the same “blinded” rehabilitation professional. Patients’ motor function was assessed by standard procedures including: the upper limb subsection of the Fugl-Meyer (F-M), Motor Power for shoulder and elbow (MP), Motor Status Score for shoulder and elbow (MS1), and Motor Status Score for wrist and fingers (MS2).
Results
The improved outcome observed in the
first phase of the pilot study was
sustained after three years.
Table I shows the change in scores for the twenty patients enrolled in the first phase of the trial between admission and discharge from the rehabilitation
hospital. Table II shows the same
change in score during this first phase
limited to the twelve patients
successfully recalled (Volpe-a, 1999).
This table also shows the change in
scores between recall and discharge, as
well as total change (between recall
and admission to the rehab hospital).
These data should be interpreted with caution due to the small number of subjects. Nevertheless, the group of
patients treated daily with additional
robot-aided therapy during acute
rehabilitation had improved outcome in
motor activity at hospital discharge,
when compared to a control group that received only standard acute rehabilitation treatment. Improved outcome was limited to the muscle groups trained in the robot-aided therapy, i.e., shoulder and elbow (Table
II MS1 - ∆1 score). The improved
outcome during inpatient rehabilitation
was sustainable after discharge. Note
that, comparing the overall recovery
(between admission and recall) the
MS1 for shoulder and elbow (which
were the focus of robot training) of the
experimental group improved twice as
much as the control group (Table II
MS1 - ∆3 score). Note also that both groups had comparable improvement between hospital discharge and three-year recall (the period without robot-aided therapy; Table II - ∆2 score).
Furthermore, eight out of twelve
patients successfully recalled continued
to improve substantially in the period
following discharge (RT & ST
subjects). This finding challenges the
common perception that patients stop
improving motor function after about
11 weeks post-stroke (e.g., Jorgensen,
1995, The Copenhagen Stroke Study).
It suggests that there may be an
opportunity to further improve the
motor recovery of stroke patients by
continuing therapy in the out-patient
phase, for example, using the
technology that is the focus of our
project.
To tailor therapy to the patient’s need, we must understand the process of neuro-recovery and systematically classify different strokes. Brain imaging technology allows us to classify strokes according to lesion site. For the patients recalled in the follow-up, CT scans showed 6 pure subcortical

Table I. Change during Acute Rehabilitation (20 patients): Experimental (RT) vs Control (ST) Group

Group   F-M (out of 66)   MP (out of 20)   MS1 (out of 40)   MS2 (out of 42)
        ∆1                ∆1               ∆1*               ∆1
RT      14.1              3.9              9.4               5.5
ST      9.9               2.3              0.8               4
Table II. Change during Acute Rehabilitation & Follow-Up (12 patients): Experimental (RT) vs Control (ST) Group

Group   F-M (out of 66)     MP (out of 20)      MS1 (out of 40)     MS2 (out of 42)
        ∆1    ∆2    ∆3      ∆1*   ∆2    ∆3      ∆1*   ∆2    ∆3*     ∆1    ∆2    ∆3
RT      15.3  5.0   20.3    4.5   4.6   9.1     12.0  9.4   21.4    8.2   8.3   16.4
ST      8.0   12.3  20.3    1.6   3.5   5.1     -1.0  10.2  9.2     3.7   8.0   11.7

Both Tables: ∆1 admission to discharge of rehabilitation hospital; ∆2 discharge to follow-up; ∆3 admission to follow-up; one-way t-test that RT > ST with p < 0.05 for statistical significance (*).
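The one-way t-test criterion used in the table notes (RT > ST at p < 0.05) can be reproduced with a pooled two-sample t statistic; the per-patient score changes below are hypothetical (the raw data are not given here), and the statistic is compared against the one-sided critical value for df = 10:

```python
import math

def one_sided_t(rt, st):
    """Pooled two-sample t statistic for testing H1: mean(rt) > mean(st)."""
    n1, n2 = len(rt), len(st)
    m1 = sum(rt) / n1
    m2 = sum(st) / n2
    s1 = sum((x - m1) ** 2 for x in rt) / (n1 - 1)   # sample variance, group 1
    s2 = sum((x - m2) ** 2 for x in st) / (n2 - 1)   # sample variance, group 2
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical per-patient score changes (NOT the study's raw data).
rt = [14.0, 10.5, 12.2, 13.1, 11.8, 10.4]   # robot-treated group, N=6
st = [1.2, -0.5, 0.8, 1.9, -1.1, 2.5]       # sham-treated group, N=6

t = one_sided_t(rt, st)
T_CRIT = 1.812                # one-sided critical value, alpha = 0.05, df = 10
significant = t > T_CRIT      # the '*' criterion used in the tables
```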
and 6 subcortical plus cortical lesions.
We excluded a pure thalamic lesion
from the subcortical group. The
comparison of outcome for 5 patients
with corpus striatum lesions (CS)
versus 6 patients with corpus striatum
plus cortex (CS+) is shown in Fig.1.
(Volpe-b, 1999). These patients had comparable demographics and were evaluated by the same therapist on hospital admission (19 ± 2 days post-stroke), discharge (33 ± 3 days later), and follow-up (1002 ± 56 days post-discharge). The CS group had smaller lesion size (CS = 13.3 ± 3.9 cm3, CS+ = 95.1 ± 25.2 cm3, p < 0.05).
Although the CS group had smaller lesion size, a recent report suggested that patients with stroke confined to the basal ganglia (CS) have diminished response to rehabilitation efforts compared to patients with much larger lesions (CS+).
Miyai et al. suggested that isolated
basal ganglia strokes may cause
persistent corticothalamic-basal ganglia
interactions that are dysfunctional and
impede recovery (Miyai, 1997). Our
results are consistent with Miyai’s
observation. Note in Fig. 1 that the CS+ group outperformed the CS group during inpatient rehabilitation. However, note also in Fig. 1 that the CS group outperformed the CS+ group between discharge and follow-up. In fact, the CS group outcome is superior at follow-up. Our results are consistent with studies suggesting that transneural degeneration follows a stroke in the basal ganglia and that, once the degeneration is completed, recovery proceeds (e.g., Saji, 1997). As stated earlier, motor recovery during inpatient rehabilitation may not be complete, and understanding motor recovery will require longitudinal studies beyond the inpatient period.
FIG. 1. Change in M-P, MS1, and MS2 scores for the CS and CS+ groups, shown separately for the inpatient and follow-up periods (bar charts).
Group   F-M (out of 66)     MP (out of 20)     MS1 (out of 40)     MS2 (out of 42)
        ∆1    ∆2*   ∆3*     ∆1    ∆2    ∆3     ∆1    ∆2*   ∆3*     ∆1    ∆2*   ∆3*
SC      9.3   25.0  34.3    2.1   6.1   8.2    1.0   16.0  17.0    10.0  14.5  24.5
SC+     10.7  -1.3  9.4     4.3   2.8   7.1    7.7   4.2   11.9    3.3   3.2   6.5

∆1 admission to discharge of rehab hospital; ∆2 discharge to follow-up; ∆3 admission to follow-up; one-way t-test SC > SC+ with p < 0.05 for statistical significance (*).
We evaluated the overall patient performance using standard assessment procedures. Yet those are limited. To understand the functional motor consequences of the neuro-recovery process, a facility to measure and manipulate the motor system is needed. Robotic technology can serve this purpose. Figure 2 shows examples of reaching movements made by patients with CS (8.9 cm3) and CS+ (109.9 cm3) lesions. The left column shows a plan view of the patients’ hand path attempting a point-to-point movement. The right column shows the tangential speed of the hand. Comparing the two patients, note that the CS patient appears to move exceptionally slowly. Yet, the hand path is generally well aimed towards the target. In contrast, the CS+ patient appears to make a faster movement, but poorly aimed. The CS+ patient’s mis-aiming appears to be consistent with observations that the activity of populations of motor cortical neurons is correlated with the intended direction of reaching movements (Georgopoulos, 1984).

We compared the speed-accuracy tradeoff of aimed movements by using the first successful attempt of six patients (3 CS patients with mean lesion size 12.6 cm3 and 3 CS+ patients with mean lesion size 92.1 cm3). Patients were asked to hit eight outboard
FIG. 2. Examples of reaching movements made by patients with CS (8.9 cm3) and CS+ (109.9 cm3) lesions (Subject A, right arm, CS+, top; Subject P, right arm, CS, bottom). The left column shows a plan view of the patients’ hand path attempting a point-to-point movement; the right column shows hand speed.
targets, equally spaced around a horizontal 2D circle of 10 cm diameter, and presented in a clockwise fashion starting at the 12 o’clock position. The “inner” home target was presented following each of the outboard targets. For all patients, kinematic measures demonstrated diminished speed-accuracy performance. CS patients had a predominantly speed impairment, while CS+ patients had a predominantly aiming impairment (Aisen, 1998).
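The kinematic measures behind this comparison (tangential speed, aiming) can be computed directly from sampled hand paths; a minimal sketch with toy data and an assumed 100 Hz sampling rate, not the clinical recordings:

```python
import math

def tangential_speed(xs, ys, dt):
    """Hand speed |v| at each step from sampled x/y positions (finite differences)."""
    return [math.hypot((xs[i + 1] - xs[i]) / dt, (ys[i + 1] - ys[i]) / dt)
            for i in range(len(xs) - 1)]

def aiming_error_deg(x0, y0, x1, y1, tx, ty):
    """Angle between the initial movement direction and the direction to the target."""
    move = math.atan2(y1 - y0, x1 - x0)     # direction of the first step
    goal = math.atan2(ty - y0, tx - x0)     # direction from start to target
    err = abs(move - goal)
    return math.degrees(min(err, 2 * math.pi - err))

# Toy straight-line path sampled at 100 Hz toward a target at (0.10, 0).
xs = [0.0, 0.01, 0.02, 0.03]
ys = [0.0, 0.0, 0.0, 0.0]
speeds = tangential_speed(xs, ys, dt=0.01)                    # ≈ 1 m/s per step
err = aiming_error_deg(xs[0], ys[0], xs[1], ys[1], 0.10, 0.0) # ≈ 0 degrees
```

A slow but well-aimed CS movement would show low `speeds` and small `err`; a fast but mis-aimed CS+ movement the opposite.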
Conclusions
These findings suggest that (a)
manipulation of the impaired limb
influences recovery, (b) the improved
outcome was sustained after three
years, (c) the neuro-recovery process
continued far beyond the commonly
accepted 3 months post-stroke interval,
and (d) neuro-recovery was highly
dependent on the lesion location.
We have just completed a second clinical trial with a larger pool of 60 patients. The objective of this second trial was to address the main limitation of the first study, i.e., small sample size. At the time of writing this paper, we are analyzing the data. Nevertheless, it might not be far-fetched to conclude that, while few persons will pass through life unaffected, directly or indirectly, by the consequences of stroke, the benefits of technology that have so deeply penetrated other medical sectors might now be available to help the victims of debilitating stroke maximize their potential for recovery.
Grant Support
The Burke Medical Research Institute
and NSF under Grant 8914032-BCS.
References
Aisen, M.L., Krebs, H.I., McDowell, F., Hogan, N., Volpe, B.T., The effect of robot assisted therapy and rehabilitative training on motor recovery following stroke, Archives of Neurology, 54 (1997), pp. 443-446.
Aisen, M.L., Krebs, H.I., Hogan, N., Volpe, B.T., Lesion location and speed-accuracy tradeoff in stroke patients, Proc. 1998 Am. Acad. Neur., (1998).
Georgopoulos, A.P., Kalaska, J.F.,
Crutcher, M.D., Caminiti, R., Massey,
J.T., The representation of movement
direction in the motor cortex: single
cell and population studies. In:
Dynamic Aspects of Neocortical
Function. John Wiley & Sons, (1984).
Hogan, N., Krebs, H.I., Sharon, A.,
Charnnarong, J., Interactive robotic
therapist, U.S. Patent #5,466,213, MIT.
Jorgensen, H.S., Nakayama, H.,
Raaschou, H.O., Vive-Larsen, J.,
Stoier, M., Olsen, T.S., Outcome &
time course of recovery in stroke, I:
Outcome, II: Time course of recovery,
Copenhagen Stroke Study, Arch. Phys.
Med. Rehab., 76:5(1995), pp.399-412.
Krebs, H.I., Hogan, N., Aisen, M.L.,
Volpe, B.T., Robot-aided neurorehabilitation, IEEE Trans. on Rehab.
Eng, 6:1(1998), pp.75-87.
Miyai, I., Blau, A.D., Reding, M.J.,
Volpe, B.T., Patients with stroke
confined to basal ganglia have
diminished response to rehabilitation
efforts, Neurol., 48(1997), pp.95-101.
Reinkensmeyer, D.J., Hogan, N.,
Krebs, H.I., Lehman, S.L., Lum, P.S.,
Rehabilitators, robots, and guides: new
tools for neurological rehabilitation, In:
Biomechanics and Neural Control of
Movement, Spr-Verlag, 1999 (in press).
Saji, M., Endo, Y., Miyanishi, T., Volpe, B.T., Ohno, K., Behavioral correlates of transneuronal degeneration of substantia nigra reticulata neurons are reversed by ablation of the subthalamic nucleus, Behavioral Brain Research, 84 (1997), pp. 63-71.
Volpe, B.T., Krebs, H.I., Hogan, N., Edelstein, L., Diels, C., Aisen, M.L., Robot-Training Enhanced Motor Outcome in Patients with Stroke Maintained in Three Year Follow-up, Proc 1999 Am Acad Neur, (1999, sub).
Volpe, B.T., Krebs, H.I., Hogan, N.,
Edelstein, L., Diels C., Aisen, M.L.,
Comparison of the Motor Recovery in
Patients with Subcortical and Cortical
Stroke: Inpatient Rehabilitation to
Three Years Post Stroke, Proc. 2nd
World Cong Neur Rehab (1999, sub).
A STUDY ON THE ENHANCEMENT OF MANIPULATION
PERFORMANCE OF WHEELCHAIR-MOUNTED
REHABILITATION SERVICE ROBOT
Jin-Woo Jung*, Won-Kyung Song*, Heyoung Lee*, Jong-Sung Kim**,
Zeungnam Bien*
* Dept. of Electrical Engineering, KAIST; ** MIT Team/Multimedia Dept., ETRI
Abstract – A wheelchair-mounted rehabilitation service robot called KARES (KAist Rehabilitation Engineering Service system) has been realized to assist the disabled / the elderly. One of the most important factors considered in the design of this system is to enhance reliability so that the disabled / the elderly can use it with feelings of safety and confidence. To enhance reliability, it is suggested that autonomous manipulation and manual manipulation be integrated in a proper manner. The basic autonomous tasks for KARES are grasping an object on the table, grasping an object on the floor, and manipulating a switch on the wall. For manual manipulation by the disabled / the elderly, a 3D input device called SPACEBALL 2003 is used, and an auxiliary device is designed for the disabled to facilitate the rotational input function. Using this auxiliary device and SPACEBALL 2003, the disabled / the elderly are able to make manual adjustments during an autonomous task. Integration of autonomous and manual operation proves to be robust and reliable. The performance of the system is verified by experiment.
I. INTRODUCTION
In the coming era, the activity of designing automation systems should not be confined to the manufacturing area but should be directed toward the “service sector” as well. A service robot is a re-programmable, sensor-based mechatronic system that can perform useful work for human activities [1]. The functions of service robots are generally related to ordinary human life, such as repair, transfer, cleaning, and health care. Service robots may include rehabilitation robots, surgery robots, housekeeping robots, repair robots, and cleaning robots. In this paper, rehabilitation service robots are mainly considered.

The objective of rehabilitation service robots is to assist physically handicapped or weak persons, such as the disabled / the elderly, to lead an independent livelihood. In the case of Korea, the number of people who are 65 years old or more is 5.7% of the total population at present, but it is reported to be steadily growing. Also, the number of people with acquired physical disabilities tends to increase due to industrial or traffic accidents. In a sense, everyone has a possibility of becoming handicapped because of unfortunate accidents or inevitable outcomes of nature. Thus, the development of a system that can assist humans with their incomplete activities and lost senses is strongly desirable.
The history of rehabilitation service robots is relatively short [2]. Rehabilitation service robots can be divided into three classes with respect to mobility: workstation-based systems, mobile systems, and wheelchair-based systems [3]. KARES (KAist Rehabilitation Engineering Service system) has been realized at KAIST to assist the disabled / the elderly toward an independent livelihood, as shown in Fig. 1.

Fig. 1. KARES (mounted on the wheelchair).

Specifically, KARES is a wheelchair-mounted rehabilitation service robot and consists of a powered wheelchair, a 6 DOF robotic arm, a gripper, the controller of the robotic arm, a color vision system, a force/torque sensor, drivers, and a user interface (Fig. 2). VORTEX (Everest & Jennings, USA) is used as the powered wheelchair of KARES. The Mu gripper RH707, with on/off control and a 0.5-7 kgf gripping force, is used as the gripper. To control the 6 joints and the gripper of the robotic arm, a multi-motion controller is used.

To recognize the environment, two sensors, i.e., vision and force/torque sensors, are equipped. A JAI-1050 color CCD camera is used as the vision sensor. This camera can be easily mounted on the robot end-effector because of its small size (12 mm in diameter) and remote head design. To process vision information, a Genesis board (MATROX) is used. A JR3 sensor (50M31; 140 g, 50 mm in diameter, and 31 mm in thickness) is used as the 6 DOF force/torque sensor. For manual control of the robotic arm, a 6 DOF input device with 10 keys (SPACEBALL 2003) is mounted on the side of the wheelchair. In addition, simple voice commands can be used to operate the robotic arm.

Fig. 2. Overall block diagram of KARES (a host PC for control linked by TCP/IP to a host PC for vision; a multi-motion controller driving step motors #0-5 with encoders, drivers, and limit switches; a DSP board reading the force sensor; a vision board processing images from the camera mounted on the robot arm; and the user on the wheelchair interacting via the Spaceball, voice recognition, an LCD panel, etc.).
The target users of KARES are those who have limited manipulability and limited mobility, including the physically disabled and the elderly who have difficulties in using their arms and legs. Some potential users are persons with spinal cord injuries (C5, C6, and C7) who have difficulties living independently [4] and need engineering solutions.
II. SYSTEM PROBLEM
DESCRIPTION
For intelligent service robots, a friendly human-machine interface, reliable human-machine interaction, and compatible human-machine integration are three major functions to be captured during the design [5]. Especially, human-machine interaction is an important issue in rehabilitation robotics. The operation of a robotic arm for grasping, moving, and making contact with a target is essential. In some sense, manual or direct control of the robotic arm is similar to the operation of a tele-manipulator. However, compared with a tele-manipulator, manual control of the robotic arm by physically disabled persons imposes a high cognitive load on the user, since they may have difficulties in operating joysticks or pushing buttons for delicate movements. The limited movement of manual operation can be enhanced by incorporating autonomy into the robotic arm [6][7].
Specifically, KARES is designed to be capable of conducting four basic autonomous tasks. The first task is to pick up a cup on the table for drinking. The second task is to pick up a pen that is lying on the floor. It is noted that users who sit in the wheelchair have difficulty picking up objects on the table or on the floor. The third task is to move an object to the user’s face for drinking, eating, or touching. Finally, the fourth task is to operate a switch on the wall.

For these tasks, it is found that a key issue is recognition of a target in the environment. With information about the environment, motions of the robotic arm can be divided into free-space motions and constrained-space motions [5]. In free-space motions, vision-based control is useful for accurate motions. In constrained-space motions, it is possible for the moving robotic arm to come into contact with external objects, and thus force-based control is useful for appropriate motions. These are complementary to each other. Therefore, various information about the environment needs to be obtained from vision and force sensors, etc., and used to carry out autonomous tasks (Fig. 3).

Fig. 3. Vision and force sensors on the end-effector.
It is remarked that, in general, autonomous manipulation by a wheelchair-mounted rehabilitation service robot is vulnerable to some basic technical problems. First, vibration of the robotic arm exists due to the user’s motions and due to contact with objects in the environment. Also, the rubber wheels of the wheelchair can be a source of vibration. The vibration of the robotic arm can be a serious problem for the autonomous tasks. Second, the vision sensor is not robust to changes of illumination in a complex environment. Finally, an autonomous task is executed based on a finite number of pre-programmed manipulations. For a real-world problem, such a sequence of discrete manipulations may take an unsatisfactory form that is quite different from a human’s way of executing a task.
III. MANUAL MANIPULATION FOR THE DISABLED / THE ELDERLY

The robot arm of KARES has 6 DOF. To manipulate such a robot, a 3D input device is needed, but the disabled / the elderly, in general, cannot operate such a 3D input device very well because of their limited manipulability and mobility. For the disabled / the elderly to manually manipulate the robot easily, it is proposed that a 3D input device called SPACEBALL 2003 be adopted with an auxiliary device which is designed for the disabled to facilitate the rotational input functions (Fig. 4).

If someone controls the service robot by manual manipulation, he or she may perform various tasks continually, attain robustness to the complex environment, and reduce the effect of vibration of the robotic arm. Note, however, that the disabled and the elderly have difficulties with manual manipulation. Hence, a specialized device is designed for the disabled / the elderly to easily manipulate the robot, and it is proposed that autonomous and manual manipulation be integrated to enhance the manipulation performance.
Fig. 4. SPACEBALL 2003 and
auxiliary device
Using this auxiliary device and SPACEBALL 2003, the disabled / the elderly can now make a manual adjustment (Figs. 5, 6).

Fig. 5. Hand shapes for translational inputs.

Fig. 6. Hand shapes for rotational inputs.

The auxiliary part is designed because those with C6 and C7 (C: cervical nerves) quadriplegia can use the thumb but cannot use the other fingers, and so, in general, they cannot generate rotational inputs.

IV. INTEGRATION OF MANUAL AND AUTONOMOUS MANIPULATION

There are two types of manipulation in integrating manual and autonomous manipulation. One is manipulation of known objects and the other is manipulation of unknown objects.

For known objects, autonomous manipulation is possible, but to be robust against vibration of the robotic arm and the complex environment, we integrate manual and autonomous manipulation. For unknown objects, autonomous manipulation is impossible, so manual manipulation is carried out.

For manipulating a robot manually, the sensitivity setting of the robot movement is an essential factor for efficient task execution. If we do not use a sensitivity setting, the robot runs at only one speed and thus the time for completing a task can be long. In this paper, the sensitivity is a scalar number from zero to three representing the maximum limit velocity level for a specific unit direction. If the sensitivity is zero for some unit direction, then the robot cannot move in that direction.

In order to reduce the load on the disabled / the elderly, we propose an automatic sensitivity setting. The automatic sensitivity setting is a method of setting automatically the sensitivity of each unit direction of the gripper in Cartesian coordinates. For the automatic sensitivity setting, we assume that the distance from the gripper to the object plane (the plane on which the target object is placed) is known. This assumption is valid for the table and floor tasks in the home environment. Then we proceed as follows: First, we set the basic sensitivity (the sensitivity of the direction normal to the object plane) using the distance from the gripper to the object plane. Second, we set the sensitivity of the other directions larger than the basic sensitivity. If the distance from the gripper to the object plane is less than 5 cm, then we set the basic sensitivity to zero to prevent a collision between the robotic arm and the object plane.
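The two-step procedure above can be sketched as a small function; the 5 cm collision guard is from the text, while the other distance thresholds and the exact level mapping are illustrative assumptions:

```python
def auto_sensitivity(distance_to_plane_cm):
    """Automatic sensitivity setting as described above: sensitivity is an
    integer level 0-3 (maximum velocity limit per unit direction).
    The 5 cm guard is from the text; other thresholds are illustrative."""
    if distance_to_plane_cm < 5.0:
        normal = 0            # freeze the approach direction near the plane
    elif distance_to_plane_cm < 15.0:
        normal = 1            # illustrative intermediate threshold
    elif distance_to_plane_cm < 30.0:
        normal = 2            # illustrative intermediate threshold
    else:
        normal = 3
    # Directions other than the plane normal get a larger (capped) sensitivity.
    lateral = min(normal + 1, 3)
    return {"normal": normal, "lateral": lateral}

s_near = auto_sensitivity(3.0)    # near the plane: approach frozen
s_far = auto_sensitivity(40.0)    # far away: full speed everywhere
```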
We decompose the table task and the floor task in order to integrate manual and autonomous manipulation (Table 1). Subtask 1 and subtask 3 are pre-programmable, but subtask 2 changes every time and is not robust to the vibration of the robotic arm and the complex environment.

For an unknown object, subtask 1 and subtask 3 are operated fully autonomously and subtask 2 is operated fully manually. But for a pre-known object, subtask 1 and subtask 3 are operated fully autonomously and subtask 2 is operated with manual adjustment during the autonomous motion.

Table 1. The decomposition of each task

Table task (e.g., catching the cup on the table and moving the cup to the lip):
  Subtask 1: Move the gripper near the table
  Subtask 2: Catch the object using the vision and force sensors
  Subtask 3: Move the object near the lip

Floor task (e.g., catching the pen on the floor and moving the pen to the lip):
  Subtask 1: Move the gripper near the floor
  Subtask 2: Catch the object using the vision and force sensors
  Subtask 3: Move the object near the lip

The task begins with a voice command (e.g., “table”, “floor”, etc.) and subtask 1 is performed. If the object is not known, the robot sends a voice message to the user requesting manual manipulation. If the object is known, subtask 2 is performed automatically, and manual adjustment based on the automatic sensitivity setting is possible. If no contact force exists during subtask 2, or if the user wants manual manipulation, manual manipulation is started. Subtask 3 is started by the user’s voice command or by recognition of the weight of the object.
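The task flow described above (voice trigger, known/unknown branch, and the hand-offs into and out of subtask 2) can be sketched as a simple sequencer; the function and step names are illustrative, not the actual KARES controller:

```python
def run_task(command, object_known, contact_force_detected, user_requests_manual=False):
    """Sketch of the subtask sequencing described above.
    Returns the list of steps the controller would take."""
    assert command in ("table", "floor")
    steps = [f"subtask1: move gripper near the {command}"]
    if not object_known:
        # Unknown object: ask the user to take over subtask 2 entirely.
        steps.append("voice message: please take over manually")
        steps.append("subtask2: manual manipulation")
    elif not contact_force_detected or user_requests_manual:
        # Known object, but no contact force (or the user asked): hand over.
        steps.append("subtask2: autonomous grasp, then manual take-over")
    else:
        # Known object: autonomous grasp with manual adjustment allowed.
        steps.append("subtask2: autonomous grasp with manual adjustment")
    steps.append("subtask3: move object near the lip")
    return steps

plan = run_task("table", object_known=True, contact_force_detected=True)
```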
V. RESULTS

To confirm robustness against vibration of the robotic arm, we set up a scenario for a known object in which the position of the handle of a cup is changed from ‘a’ to ‘b’ in Fig. 7. Vibration of the robotic arm can be interpreted as a change in the position of the target object from the viewpoint of the robot. In this figure, the solid line represents the trajectory of the robot end-effector. When the robot arm closes in on the handle of the cup, the user interrupts the autonomous manipulation upon detecting failure of the autonomous task. Then, the change in the cup’s position is overcome by the human’s direct control.

Moreover, subtask 2 can be performed for an unknown object. Also, manual manipulation during subtask 2 can manage the sensor errors caused by the complex environment.
Fig. 7. Experiment for the robustness of
the integration of manual and
autonomous manipulation
VI. DISCUSSION
In manual manipulation, a command generated by the user’s operation usually contains both translational and rotational components, because hand operation by the disabled or the elderly is very limited. Thus only one of the two components is transferred to the controller at a time, selected by a button used as an additional input.
For the various kinds of disabled or elderly users to operate KARES, another input device may be needed. For C6 and C7 quadriplegia, the SPACEBALL 2003 and the auxiliary device are sufficient. For C5 quadriplegia, however, head movement, eye gaze, EMG (electromyography), EEG (electroencephalography), etc. can help in inputting the user’s commands.
VII. CONCLUSIONS
We have reported that KARES is designed as a rehabilitation service robot with a wheelchair-mounted robotic arm to assist the disabled and the elderly toward independent living. KARES can perform four basic autonomous tasks using color vision and force/torque sensors.
However, vibration of the robotic arm and errors of the vision sensor in a complex environment were found to be critical factors in conducting the tasks. To enhance reliability, we have proposed a strategy that integrates manual and autonomous manipulation. We have also reported that an auxiliary device is needed for the disabled and the elderly to use the 3D input device easily.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the help provided by Taejon St. Mary’s Hospital, Korea, and the National Rehabilitation Center, Korea.
REFERENCES
[1] K. Kawamura, R.T. Pack, M. Bishay,
and M. Iskarous, “Design philosophy
for service robots”, Robotics and
Autonomous Systems, vol.18, no. 1-2,
pp. 109-116, 1996.
[2] Z. Bien and W. Zhu, “Service robotics with special attention to surgical robots and rehabilitation robots”, KITE Journal of Electronics Engineering, vol. 7, no. 1, pp. 13-24, March 1996.
[3] K. Kawamura and M. Iskarous,
“Trends in service robots for the disabled and the elderly” in Proc. IROS,
pp. 1647-1654, 1994.
[4] K. Nemire, A. Burke, and R. Jacoby,
“Human factors engineering of a
virtual laboratory for students with
physical disabilities”, Presence, vol.
3, no. 3, pp. 216-226, 1994.
[5] W.K. Song, H. Lee, J.S. Kim, Y.S. Yoon, and Z. Bien, “KARES: Intelligent Rehabilitation Robotic System for the Disabled and the Elderly”, in Proc. IEEE/EMBS, vol. 20, no. 5, pp. 2682-2685, 1998.
[6] W.S. Harwin, T. Rahman, and R.A. Foulds, “A review of design issues in rehabilitation robotics with reference to North American research”, IEEE Trans. Rehabilitation Engineering, vol. 3, no. 1, pp. 3-13, 1995.
[7] J.L. Dallaway, R.D. Jackson, and P.H.A. Timmers, “Rehabilitation robotics in Europe”, IEEE Trans. Rehabilitation Engineering, vol. 3, no. 1, pp. 35-45, 1995.
AUTHOR ADDRESS
Prof. Zeungnam Bien
Dept. of Electrical Engineering, KAIST,
373-1 Gusong-dong, Yusong-gu, Taejon 305-701, KOREA
E-mail: [email protected]
Tel.: +82-42-869-3419
Fax: +82-42-869-3410
Homepage: http://ctrgate.kaist.ac.kr/~kares
A MODULAR FORCE-TORQUE TRANSDUCER FOR
REHABILITATION ROBOTICS
Milan Kvasnica
Technical University of Zvolen, T. G. Masaryka 24, SK-96053 Zvolen, Slovakia
G.R.A.S.P. Laboratory, School of Engineering and Applied Sciences
University of Pennsylvania, Philadelphia, PA 19104, USA
E-mail: [email protected] [email protected]
ABSTRACT
Intelligent sensory systems are an essential part of any system aimed at augmenting the functional capabilities of
visually or mobility impaired persons.
This paper describes a six-DOF force-torque sensor, originally designed for robotic and man-machine interface applications, that can be used to improve the communication, control and safety of assistive systems. This modular force-torque sensor transduces three linear displacements and three rotations by measuring the incidence of four light or laser beams onto a photosensitive CCD array. This low-cost force-torque sensor is easy to build and can be used in artificial arms or legs, range-incline finders, hand controllers for wheelchairs, keyboards for blind people and handwriting scanners.
INTRODUCTION
The function of the intelligent sensors is based on a six-DOF system for the scanning of linear displacement and rotation. This is done by means of a square (or annular) CCD element (CCD - Charge Coupled Device), or with appropriate changes by means of a PSD element (PSD - Position Sensitive Device), and four light beams (or planes) forming the shape of a pyramid. This simple construction enables low-cost customization according to the demanded properties, by means of a modular sensory system consisting of the following basic modules:
- A - stiff module of two flanges connected by means of a microelastic deformable medium
- B - compliant module of two flanges connected by means of a macroelastic deformable medium
- C - module of square CCD elements
- D - module of the insertion flange with the basic light-source configuration and focusing optics
- E - module of the insertion flange with the auxiliary light-source configuration and focusing optics
- F - module of the plane focusing screen
- G - module of the forming focusing screen
- H - module of the optical member for magnifying or reducing the light-spot configuration
- I - module of the switchable muff coupling for changing the scanning mode between micromovement and macromovement (active compliance)
- J - module for the preprocessing of the scanned light-spot configuration; see [4], [5], [7], [8].
The problem of customizing
- 50 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
six-DOF sensory systems for enhanced accuracy and operating frequency of the scanning of the 6-DOF information can be addressed by means of the following modules:
- K - module of the insertion flange with a configuration of light sources with strip diaphragms, creating light planes with strip light spots
- M - module of single or segmented linear or annular CCD or PSD elements with higher operating frequency
- N - module of two parallel-working, concentric CCD annulars with higher reliability; see [5].
The operation is explained with reference to the force-torque sensor shown in Figure 1 and Figure 2, composed of modules A, C, D, F, H of the intelligent modular sensory system [7]. Laser diodes 1 emit light beams 2 forming the edges of a pyramid that intersects the plane of the square CCD element (here, alternatively, the focusing screen 8) with light spots 3. This unique light-spot configuration changes under linear displacements and rotations between the inner flange 5 and the outer flange 6, which are connected by means of an elastic deformable medium 7. An optionally inserted optical member 9 (for magnification of micromovement, or reduction of macromovement) projects the light-spot configuration from the focusing screen onto the square CCD element 4. Four light beams simplify and enhance the accuracy of the algorithms for evaluating the six-DOF information, see [6]. The algorithms for evaluating the three linear displacements and three radial displacements are based on the inverse transformation of the final positions of points A, B, C, D relative to the original basic positions of points A0, B0, C0, D0, S0 in the plane coordinate system xCCD, yCCD of the square CCD element, see Figure 1 and Figure 2. The information about linear displacements caused by forces Fx, Fy, Fz and rotations caused by torques Mx, My, Mz is sampled and processed according to a calibration matrix, see [10].
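The final step can be sketched in a few lines: the four measured spot displacements are flattened into a measurement vector and mapped to the six force-torque components by a calibration matrix. The matrix shape and names here are illustrative assumptions; in the sensor the matrix is identified experimentally, as in [10].

```python
# Hypothetical sketch: recovering (Fx, Fy, Fz, Mx, My, Mz) from the four
# light-spot displacements on the CCD via a pre-computed calibration matrix.
# Names and the toy calibration values are assumptions, not from the paper.

def spot_shifts(spots, spots0):
    """Flatten the (x, y) displacement of each spot A..D from its rest
    position A0..D0 into one 8-component measurement vector."""
    return [p - p0 for (x, y), (x0, y0) in zip(spots, spots0)
                   for p, p0 in ((x, x0), (y, y0))]

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

# A real calibration matrix C (6 rows x 8 columns) maps the 8 spot shifts
# to the 6 wrench components; it is identified by applying known loads.
```

With a calibrated matrix `C`, one evaluation is simply `matvec(C, spot_shifts(spots, spots0))`.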
The intelligent modular sensory system enables us to compose, in a customized way, various modifications of multi-DOF force-torque sensors and compliant links for artificial arms or legs, range-incline finders, hand controllers for wheelchairs, tactile sensors, keyboards for blind people and handwriting scanners.
HUMAN ARTIFICIAL LIMBS
The effort to imitate, by means of a robot, the human behavior of inserting a peg into a hole for the purposes of automatic assembly led to the development of the six-component force-torque sensor. For the scientist it is more satisfying to use such sensors to substitute an artificial limb of higher quality for a missing limb of the human body. Universal, low-cost, intelligent modular sensory systems enable us to evaluate the dynamics of a person’s hand or leg while in motion. A part of an artificial leg, consisting of the joint 10 connecting a shin with a foot 11, is depicted in Figure 3. The motion of the joint 10 is controlled by means of six-DOF information gained from two six-component sensors. The drive transmission of joint 10 is switched by means of the
coupling muff 9 in order to control the dynamics of the motion. The six-component information about the leg’s dynamics, processed from the two force-torque sensors, enables us to use the drive power intelligently, and even to convert the damping of the motion of joint 10 into energy recuperation into the battery. The joint 13 connects the foot 11 with the toes part 14. The rotation of joint 13 (here, for example, a, b) is used for accommodation to the ground’s incline 12a, 12b, according to the information from the range-incline finder.
RANGE-INCLINE FINDER
The ground’s incline under the artificial leg is scanned by means of the range-incline finder mounted in the heel (see Figure 3), consisting of the modules A, C, D, H. The light spots 3 from the light beams 2 on the ground 12a, 12b create a configuration scanned by the square CCD element. Processing this information enables us to evaluate the incline of the ground in two perpendicular planes. Real-time algorithms suitable for a single cheap microprocessor are described in [2], [3]. An acoustic signal indicating the ground’s incline helps the user to keep stable. A range-incline finder mounted on a wheelchair helps to keep a desired distance from a wall.
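The underlying geometry of incline evaluation in one of the two planes can be sketched as follows. This is an illustrative triangulation only, not the algorithm of [2], [3]: two beams tilted symmetrically from the vertical hit the ground, and the ranges along each beam give the incline in the plane of the beams.

```python
import math

# Illustrative geometry (an assumption, not the published algorithm):
# two beams tilted by +/- beam_angle from the vertical strike the ground;
# the measured ranges along each beam determine the ground incline.

def incline_from_ranges(r_fwd, r_back, beam_angle):
    """Ground incline (radians) in the plane of two symmetric beams.
    r_fwd, r_back: ranges along the forward / backward beam (same units)."""
    # Ground-plane coordinates of the two hit points, sensor at the origin.
    x1, z1 = r_fwd * math.sin(beam_angle), -r_fwd * math.cos(beam_angle)
    x2, z2 = -r_back * math.sin(beam_angle), -r_back * math.cos(beam_angle)
    # Incline is the slope of the line through the two hit points.
    return math.atan2(z1 - z2, x1 - x2)
```

Equal ranges give zero incline; a longer forward range means the ground falls away ahead, yielding a negative angle.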
CUSTOMIZED DESIGN OF A
DEXTEROUS HAND
In rehabilitation robotics and in health care, certain tasks occur frequently (see [8], [9], [11]), for example in the feeding of disabled people:
- approaching the artificial hand with the feeding utensil to the required position in front of a target object
- the sequence of operations up to the time instant of the first contact with the target part of the body
- insertion into the target part of the body
- following this, the force-torque manipulation of the target object, with the aim, for example, of loading the food into the mouth without causing injury.
Intelligent sensory systems for the solution of these tasks may be implemented in place of a missing part of a human hand, or as part of a robot’s hand. In addition, there is the possibility to evaluate the weight of the gripped food dynamically, during the motion of the robot’s hand, in order to check a caloric limit.
A simple solution for a universal dexterous hand consists of three sensory systems with two independently working CCDs, see Figure 4.
The first sensory system is the range-incline finder-positioner, composed of three modules C, D, H, alternately working into the CCD element 4b. The range-incline finder-positioner consists of two pairs of mutually perpendicular cross light beams (planes) 2a radiated from the laser diodes 1a situated on the gripper. The configuration of the light spots (strips) 3a on the surface of the target object is projected by means of the zoom optical member 9a into the
CCD element 4b. This multi-laser scanning equipment is used when the robot’s gripper approaches the target and to simplify some tasks in recognizing three-dimensional backgrounds, see [2].
The second sensory system is a six-component stiff force-torque sensor, composed of three modules A, C, D, alternately working into the CCD element 4b. The laser diodes 1b, fastened on the outer flange 6b, radiate the light beams (planes) 2b against the CCD element 4b, fastened on the inner flange 5b. The unique light-spot (strip) configuration 3b changes under the force-torque acting between the flanges 5b and 6b, which are mutually connected by means of a microelastic deformable medium 7b.
The third sensory system is the six-component active compliant link, composed of six modules B, C, D, F, H, I, working into the CCD element 4c. The laser diodes 1c emit the light beams (planes) 2c against the focusing screen 8c. An optical member 9c mediates the reduction of the macro-movement of the light spots (strips) 3c. The unique light-spot (strip) configuration 3c changes under the force-torque acting between the flanges 5c and 6c, connected by means of the active compliant medium 7c. Active compliance is realized by means of pneumatic, programmably switched, segmented hollow rubber annulars 7c. Alternate use of the six-component stiff force-torque sensor or of the active compliant link is selected by means of the coupling muff 10.
The unified modular intelligent sensory system enables customized design for a wide variety of tasks in rehabilitation robotics.
HAND CONTROLLER
Efficiency in using a wheelchair depends on the user’s effectiveness in communicating with the driving gear. For many users, a low-cost six-degrees-of-freedom hand controller means not luxury but the possibility of personal autonomy in their daily activities. A multi-DOF hand controller can also be used to control a feeding utensil combined with a simple mechanism, as described in [11]. The multi-degrees-of-freedom hand controller, in a low-cost version or in a version of enhanced reliability, is depicted in Figure 5 under the influence of an acting force +Fz. The low-cost version consists of module C with the square CCD element; the enhanced-reliability version (for example, for medical use by surgeons) consists of module N with two independently parallel-working CCD annulars 4, fastened in mutually opposite directions in front of the modules K of the light planes (in the low-cost version, modules D of the light beams) 2. The configuration of the light beams (planes) 2, of pyramid shape, is radiated from the laser diodes 1 fastened on the outer flange 6 and creates, in the plane of the square CCD element or of the CCD annulars, the configuration of light spots (strips) 3. The inner flange 5 is fastened on the stand 8 and connected with the outer flange 6 by means of the elastic deformable coupling balks 7. The outer flange 6 is shaped in a human-hand-friendly form.
KEYBOARD FOR BLIND PEOPLE
Six-component force-torque sensors that make it possible to judge the heterogeneity of a person’s hand dynamics, for example the handwriting of two different persons, may be used as a keyboard for blind people. Because of the lack of space for a six-component force-torque sensor between a nib and a penholder, the configuration shown in Figure 6 was used, where the inner flange 5 is put on the end of the penholder 8. The outer flange acts as a steady mass. This handwriting scanner can be used as a keyboard for blind people in order to improve their communication with a computer. Another configuration of the handwriting scanner, where the six-component force-torque sensor is inserted between the writing plate 6 and the support 8 of the writing hand, is depicted in Figure 7. This device may be used as a signature scanner in banking.
CONCLUSION
The level of design achieved in imitating human sensing is not only an indicator of the progress of human creative capability. The use of sensory systems in producing prostheses, as well as other supports for disabled people, is also a sensitive and reliable indicator of the level of democracy in every country. The aim of this paper is to introduce the use of intelligent sensory systems for robotics and the man-machine interface in order to help disabled people. The main advantage of the described intelligent modular sensory system design is the low-cost solution of many control problems. The introduced solution reflects current trends in the design of products oriented toward easy repairability, uniform spare parts for several types of sensors, long service life, accommodation to different purposes and recycling, in order to protect the environment.
KEYWORDS
Intelligent Modular Sensory System; Six-Degrees-of-Freedom Force-Torque Sensor; Artificial Arm or Leg; Hand Controller for a Wheelchair; Keyboard for Blind People; Handwriting Scanner; Range-Incline Finder.
ACKNOWLEDGMENTS
This paper was inspired by the research
program of the General Robotics and
Active Sensory Perception (GRASP)
Laboratory, directed by Prof. R. Bajcsy,
University of Pennsylvania, 3401 Walnut
Street 300C, Philadelphia, PA 19104
USA. The support of NATO Scientific
Affairs Division - grant award EXPERT
VISIT HIGH TECHNOLOGY, EV
950991 is gratefully acknowledged.
REFERENCES
[1] Hirzinger G., Dietrich J., Gombert J., Heindl J., Landzettel K., Schott J. (1992). „The Sensory and Telerobotic Aspects of Space Robot Technology Experiment ROTEX“. Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation in Space, Toulouse, Labege, France.
[2] Kvasnica, M. (1986). „Scanning and
Evaluation System of Object Surface
Using Cross Light Beams with the CCD
Camera“. Proceedings of the International Symposium on Robot Manipulators: Modeling, Control, and Education,
Albuquerque, USA.
[3] Kvasnica, M. (1992). „New Concept
of Sensory Outfit for Space Robotics“.
Proceedings of IFAC Symposium on
Automatic Control in Aerospace, Ottobrunn, Germany.
[4] Kvasnica, M. (1992). „Six-Component Force-Torque Sensing by Means of
One Quadrate CCD or PSD Element“.
Proceedings of the 2nd International
Symposium on Measurement and Control in Robotics, AIST Tsukuba Science
City, Japan.
[5] Kvasnica, M. (1993). „Fast Sensory System for the Scanning of the Six-Component Linear Displacements and Radial Displacements“. Proceedings of the International Symposium on Measurement and Control in Robotics, Torino, Italy.
[6] Kvasnica, M. (1993). „Algorithms for the Scanning of the Six-Component Linear Displacements and Radial Displacements by Means of Only One CCD Element“. Proceedings of the International Symposium on Industrial Robots, Tokyo, Japan.
[7] Kvasnica, M. (1997). „Flexible Sensory Brick-Box Concept for Automated
Production and Man-Machine Interface“.
Proceedings of the NOE Conference in
Intelligent Control and Integrated Manufacturing Systems, Budapest, Hungary.
[8] Kvasnica, M. (1998). „Intelligent
Sensors for the Control of Autonomous
Vehicles“. Proceedings of the 6th International Conference and Exposition on
Engineering, Construction and Operation
in Space and on Robotics for the Challenging Environments - Space and Robotics’98, Albuquerque, New Mexico,
USA.
[9] Merklinger, A., Sly, I. (1997). „Rendez-Vous and Docking“. Proceedings of
the 3rd International Symposium on
Measurement and Control in Robotics,
Torino, Italy.
[10] Sásik, J. (1987). „Multi-component Force-Torque Sensors Calibration Methods for Robotics Application“. Strojnícky časopis, Bratislava, Slovakia.
[11] Vezien J-M, Kumar V., Bajcsy R.,
Mahoney R., Harwin W. (1996). Design
of Customized Rehabilitation Aids. Proceedings of the IARP Workshop on
Medical Robots, Vienna, Austria.
[12] Kvasnica, M. (1992). The Equipment for the Robot Control in Defined
Distance from the Object. Patent CSFR
AO 272457.
[13] Kvasnica, M. (1993). The Equipment for the Force-Torque Scanning.
Patent CZ AO 278212, SK AO 277944.
Figure 1. The Approach of Six-DOF Scanning
Figure 2. Six-Component Force Torque Sensor
Figure 3. Six-Component Force-Torque Sensors Mounted in an Artificial Leg and the Range-Incline Finder Built in the Heel.
Figure 4: Customized Design of Dexterous Hand
Figure 5: Multi-DOF Hand Controller
Figure 6. Keyboard for Blind People.
Figure 7. Signature Scanner.
ADAPTIVE CONTROL OF A MOBILE ROBOT FOR THE
FRAIL VISUALLY IMPAIRED
Gerard Lacey1, Shane MacNamara1, Helen Petrie2, Heather Hunter3,
Marianne Karlsson4, Nikos Katevas5 and Jan Rundenschöld6
1 Computer Science Department, Trinity College, Dublin, Ireland
2 Sensory Disabilities Research Unit, University of Hertfordshire, England
3 National Council for the Blind of Ireland
4 Chalmers University of Technology, Sweden
5 Zenon SA, Greece
6 Euroflex System AB, Sweden
ABSTRACT
This paper describes the development and evaluation of a novel robot mobility aid for frail Visually Impaired People (VIPs). Frailty makes the use of conventional mobility aids for the blind difficult or impossible, and consequently frail VIPs are heavily dependent on carers for their personal mobility. In the context of a rapidly increasing proportion of elderly people in the population, this level of support may not always be available in the future. The aim of this research is to develop a robot that will increase the independence of frail VIPs. This paper describes the walking aid and its overall control system. The controller adapts its operating mode to satisfy the constraints imposed by both the environment and the user, using a probabilistic reasoning system. The reasoning system and the software architecture of the robot are described in detail, as is the evaluation of the robot in a residential home for visually impaired men.
BACKGROUND
Dual disability can severely limit the range of mobility aids a person may use. This is particularly true of frail VIPs: 75% of VIPs are aged 65+, and frailty is also common in this age group. An estimate of the number of people affected can be obtained by analysing the survey data produced by Ficke [1]. His study of nursing home residents in the USA showed that of the 1.5 million residents, 22% were visually impaired and 70% had mobility impairments. His survey did not directly measure the incidence of dual disability; however, Rubin and Salive [2] have noted the correlation between visual impairment and frailty.
Mobile robot technology has been applied in assistive technology to develop smart wheelchairs [6] [7] [8]. The mobility aid described in this paper, the Personal Adaptive Mobility Aid (PAM-AID), aims to improve independent mobility by assisting a frail VIP to take moderate exercise
within the confines of a rest home or hospital. This is achieved by providing physical support similar to that of a walker or rollator, and navigational help similar to that provided by a carer or guide dog.
ROBOT DESIGN
The application of robotics to the mobility of the elderly blind is a significant challenge, given their unfamiliarity with information technology, their poor short-term memory and their motivational problems in dealing with new things. The underlying design principle of the PAM-AID project was that of Interactive Evaluation, as described by Engelhardt and Edwards in [3]. This involved regular contact with the users through interviews and regular field trials of prototypes and sub-systems.
The design process was iterative, involving the construction and evaluation of three prototypes and several user interfaces. The central concept was that of a walker or rollator with the ability to avoid obstacles and inform the user about the environmental conditions. Figure 1 shows the progression from the Concept Prototype to the final Active Demonstrator system over the course of the PAM-AID project. The main design challenges were the development of an acceptable user interface and the development of an adaptive control system.
The Active Demonstrator consisted of a custom-built mobile robot chassis fitted with sonar sensors and a laser range finder. The main controller was a PC; however, many of the real-time tasks were devolved to MC68332- and MC68HC11-based micro-controllers. The controller was implemented in C++, using WIN32 threads.
Figure 1: Concept Prototype, Rapid Prototype and
Active Demonstrator
The user interface was a critical component of the system. User input was by means of a set of direction switches or an optional voice input system. User feedback was provided via proprioception and voice feedback. The voice feedback enabled the robot to provide information to the user regarding the nature of the environment, such as the presence of junctions, doors, etc., as well as warnings about the presence and location of obstacles.
CONTROLLER DESIGN
The device operated in two modes, manual and automatic. Selection between the modes was by means of a switch. In manual mode the user determined the direction by means of input switches or voice commands. The robot followed these commands unless a potential collision was detected, in which case the robot stopped and provided information to the user. Control was then returned to the user to facilitate manual obstacle avoidance.
Related work by some of the authors has developed a passive version of PAM-AID [4], which the user pushes. However, the active approach, in which the device provides its own traction, allows for autonomous operation of the robot within a hospital or nursing home. For example, an active PAM-AID could be shared between several users in a residential home, as it has the ability to travel independently to each user on request. This functionality is foreseen within Smart Healthcare Environments, as outlined in [5].
Figure 2. Schematic of Software Architecture (modules: Risk Assessment, Feature Extraction, Reasoning System, Door Passage, Navigation, User Assistance)
In automatic mode the robot implemented an adaptive shared control scheme based on Bayesian networks [11]. Adaptation was achieved by balancing environmental constraints against an estimate of the user’s goals. The Bayesian network estimated the user’s goals by fusing a priori probabilities with the current user input and sensor readings. The ultimate outcome of the adaptation scheme was the selection of the most appropriate operating mode for the
robot. Further details of the adaptive
reasoning system can be found in
[12].
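The core of such a mode-selection step, fusing a prior over operating modes with evidence-derived likelihoods and picking the most probable mode, can be sketched as below. The mode names match the paper; the numbers and the flat fusion structure are illustrative assumptions, far simpler than the full Bayesian network of [11], [12].

```python
# Illustrative sketch (an assumption, not the PAM-AID network): fuse a
# prior over operating modes with likelihoods derived from user input and
# sensor features, then select the most probable mode.

MODES = ["door_passage", "navigation", "user_assistance"]

def select_mode(prior, likelihoods):
    """prior: {mode: P(mode)}; likelihoods: {mode: P(evidence | mode)}.
    Returns (best mode, normalized posterior distribution)."""
    posterior = {m: prior[m] * likelihoods[m] for m in MODES}
    total = sum(posterior.values())
    posterior = {m: p / total for m, p in posterior.items()}
    return max(posterior, key=posterior.get), posterior
```

For example, a laser feature consistent with a doorway raises the likelihood of door passage, which then dominates a uniform prior.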
SOFTWARE ARCHITECTURE
The adaptation scheme was encapsulated within the Reasoning System module shown in Figure 2. The software architecture is a three-layer system, similar to the 3T architecture of Bonasso et al. [10].
The Risk Assessment module ran at the highest priority and was responsible for detecting potential collisions and initiating the appropriate action on the part of the motion controller and user interface. It used the 0° to 180° laser scan and a set of sonar sensors to assess the risk of collision.
Sensor input was processed in the Feature Extraction module. The Range Weighted Hough Transform [9] was used to extract straight-line features from the range data. The lines were further processed to detect walls, doors and junctions. No a priori map was used in this process, thereby facilitating the immediate use of the device in new environments.
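A range-weighted Hough transform of this kind can be sketched in a few lines: each laser point votes for the (theta, rho) line parameters it lies on, with the vote weighted by its measured range so that distant, sparsely sampled walls are not out-voted by nearby ones. The bin sizes and the single-line return value are illustrative assumptions, not the parameters of [9].

```python
import math

# Sketch of a range-weighted Hough transform for extracting a line from a
# planar laser scan. Bin resolutions are illustrative assumptions.

def range_weighted_hough(points, n_theta=90, rho_res=0.1, rho_max=10.0):
    """points: list of (x, y, r), with r the measured range.
    Returns (theta, rho) of the strongest line, rho = x cos t + y sin t."""
    n_rho = int(2 * rho_max / rho_res)
    acc = {}
    for x, y, r in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int((rho + rho_max) / rho_res)
            if 0 <= ri < n_rho:
                # Vote weighted by range, compensating for point density.
                acc[(ti, ri)] = acc.get((ti, ri), 0.0) + r
    (ti, ri), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * ti / n_theta, ri * rho_res - rho_max
```

A real implementation would additionally extract several peaks and fit line segments to the supporting points, yielding the wall, door and junction candidates described above.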
The feature data and user input were passed to the Reasoning System and were then used to select the operating mode for the robot. The possible operating modes were: Door Passage, Navigation and User Assistance.
Door Passage was an autonomous task that guided the robot through doors safely. The door passage routine identified the centre line of the door from the feature data and tracked it through the door.
Navigation was a shared control mode in which the relative importance of robot control and user input was determined by the risk of collision, as assessed by the Risk Assessment module. The navigation system used the laser scanner, which provided a 0° to 180° scan of the environment every 25th of a second. The shared control method is based on the MVFH as described by Bell in [13]; however, as the laser data is more accurate than sonar, no occupancy grid was required. Multiple parabolic weighting functions were used to implement the sharing of control between the user and the robot. The parameters of the parabolic functions were selected on the basis of the measured risk of collision.
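The shape of such a scheme can be sketched as follows. This is a simplified, hypothetical rendering of MVFH-style shared control, not the PAM-AID controller: each candidate steering direction receives an obstacle cost from the scan plus a parabolic penalty for deviating from the user's commanded direction, and a higher collision risk flattens the penalty so obstacle avoidance dominates.

```python
# Simplified MVFH-style shared control sketch (after Bell [13]); the
# weighting scheme and names are illustrative assumptions.

def pick_direction(obstacle_cost, user_dir, risk, n_dirs=180):
    """obstacle_cost: per-direction costs derived from the laser scan.
    user_dir: index of the user's commanded direction (0..n_dirs-1).
    risk: 0.0 (clear) .. 1.0 (imminent collision)."""
    k = 1.0 - risk  # weight of the user's preference: shrinks as risk grows
    best, best_cost = None, float("inf")
    for d in range(n_dirs):
        # Parabolic penalty centred on the user's commanded direction.
        cost = obstacle_cost[d] + k * (d - user_dir) ** 2
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

In open space the parabola is steep and the robot follows the user exactly; near obstacles the risk term flattens it, letting the scan's low-cost directions win.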
The User Assistance module was a dialogue-based module invoked when the robot did not have enough information to make a reliable mode selection. For example, the user would be consulted when a dead-end was reached; typically the user would initiate manual mode in this situation.
RESULTS
During the development of PAM-AID, three field trials were carried out in seven locations, involving 30 participants ranging in age from 55 to 94. During the trials a wide range of design ideas were evaluated, and the users were encouraged to suggest alternatives and improvements. The main factors evaluated were the acceptability of the device to the target user group, the user’s feeling of security while using the device, and the performance of the user interface. Participants’ responses were rated on a five-point scale ranging from 1 (Very Low) to 5 (Very High). Participants gave positive ratings for acceptability, noting that the device was easy to use (3.5) and that they felt quite safe while using it (3.2). When asked if the device would be useful, participants gave the device a mean rating of 4.42.
CONCLUSION
This paper has described research to develop and evaluate a robot mobility aid for the frail visually impaired. It is motivated by the need to maintain the independent mobility of frail VIPs within a structured environment such as a nursing home or hospital. The design of the Active Demonstrator has been described and the operation of an adaptive controller has been outlined. The device has undergone regular evaluation during its development, and some results from these evaluations have been provided.
This research has described a novel mobility aid that has been accepted by the user community; however, much research remains to be done. Our research goals include the expansion of the operating modes of the robot, the development of reliable down-drop sensors and the integration of PAM-AID within an intelligent building system [5].
ACKNOWLEDGEMENTS
The authors would like to acknowledge the funding of the National Rehabilitation Board of Ireland, the Trinity Foundation and the EU Telematics Applications Programme. We would also like to acknowledge the contribution of all the Participants and the Carers during the user trials, and the contribution of fellow researchers Anne Marie O’Neill, Blaíthín Gallagher, Pontus Engelbrektsson and Domitilla Zoldan.
Intelligent Robots and Systems, pp
113-120, 1995.
REFERENCES
1.
Ficke RC, Digest of Data on
Persons with Disabilities, National
Institute
on
Disability
and
Rehabilitation Research. Washington,
DC 20202, USA, 1991.
2. Rubin GS and Salive ME, Vision
and Hearing, The Women’s Health
and Ageing Study: Health and Social
Characteristics of Older Women with
Disability, Bethesda, MD: National
Institute on Ageing 1995.
3. Englehardt KG and Edwards R,
Human-Robot Interaction for Service
Robots, Human Robot Interaction,
Taylor and Francis, pp 315-346,1992.
4. MacNamara S and Lacey G,
PAM-AID: A Passive Robot for Frail
Visually Impaired People,
Proceedings of RESNA 1999.
5. O'Hart F, Foster G, Lacey G and
Katevas N, User Oriented
Development of New Applications for
a Robotic Aid To Assist People With a
Disability, Computer Vision and
Mobile Robotics Workshop
(CVMR'98), Santorini, Greece,
September 1998.
6. Borgolte U., Hoelper R., Hoyer H.,
Heck H., Humann W., Nedza J.,
Craig I., Valleggi R., Sabatini A.M.,
Intelligent Control of a
Semi-Autonomous Omnidirectional
Wheelchair, Symposium on
Intelligent Robots and Systems,
pp 113-120, 1995.
7. Katevas N., Sgouros N.M.,
Tzafestas S.G., Papakonstantinou G.,
Beattie P., Bishop J.M., Tsanakas P.,
Rabischong P. and Koutsouris D.,
The Autonomous Mobile Robot
SENARIO: A Sensor-Aided
Intelligent Navigation System for
Powered Wheelchairs, IEEE
Robotics and Automation Magazine,
December, Vol. 4, No. 4, pp. 60-70,
1998.
8. Simpson R., Levine S.P., Bell
D.A., Jaros L.A., Koren Y. and
Borenstein J., NavChair: An Assistive
Wheelchair Navigation System with
Automatic Adaptation, in Assistive
Technology and Artificial
Intelligence, Lecture Notes in AI
1458, Springer, pp 235-255, 1998.
9. Larsson U., Forsberg J. and
Wernersson Å., Mobile Robot
Localization: Integrating
Measurements from a Time-of-Flight
Laser, IEEE Transactions on
Industrial Electronics 43(3),
pp 422-431, 1996.
10. Bonasso R.P., Kortenkamp D. and
Whitney T., Using a Robot Control
Architecture to Automate Space
Shuttle Operations, 9th Conference
on Innovative Applications of AI
(IAAI97).
11. Pearl J, Probabilistic Reasoning
in Intelligent Systems, Morgan
Kaufmann, 1988.
12. Lacey G, Adaptive Control of a
Robot Mobility Aid for the Frail
Visually Impaired, PhD Thesis,
Trinity College Dublin, 1999.
13. Bell D.A., Modelling Human
Behaviour for Adaptation in
Human-Machine Systems, PhD Thesis,
University of Michigan, 1994.
AUTHOR’S ADDRESS:
Gerard Lacey
Department of Computer Science,
O’Reilly Institute,
Trinity College,
Dublin 2,
Ireland
Email: [email protected]
WEB: www.cs.tcd.ie/Gerard.Lacey
WEB: www.cs.tcd.ie/PAMAID
POWER AUGMENTATION IN REHABILITATION ROBOTS
Kelly McClenathan and Tariq Rahman, Ph.D. Extended Manipulation Laboratory,
duPont Hospital for Children/University of Delaware
Abstract
A force-assist mechanism has been
developed to mount on the Chameleon,
a wheelchair-mounted rehabilitation
robot. The device will amplify the
forces applied by the user, making it
possible to lift a large weight with a
smaller force. This paper describes the
preliminary test bed study and details a
pilot study currently in progress to
investigate the precision and accuracy of
the Chameleon under varying gains on
the force-amplifier.
Introduction
The Chameleon is a body-powered
rehabilitation robot designed at the
Extended Manipulation Laboratory of
the duPont Hospital for Children. It is
designed to be an easy-to-use,
cost-effective, multi-degree-of-freedom,
wheelchair-mounted robot [1,2] to assist
people with SCI or similar disabilities
perform their daily living tasks.
The current Chameleon design, shown in
Figures 1, 2, and 3, consists of a
head-operated input device that controls
a mechanical arm and gripper. The input
control uses pitch (nodding the head
"yes") and roll (shaking the head "no")
to correspond to flexion/extension and
horizontal abduction/adduction of the
shoulder joint respectively. The input
site of the Chameleon is a mouthpiece
that the user grips with his or her teeth.
Moving the mouthpiece in three
dimensions maneuvers the master
(Figure 1). A direct mechanical linkage
of Bowden cables currently controls the
pitch and roll joints.

Figure 1. Master (Input) Component of
Chameleon

The moment arm of the input device (R
in Figure 1) is much smaller than the
moment arm of the mechanical arm (r in
Figure 2). The direct mechanical
linkage from the cable dictates that the
torque at both joints must be equivalent.
Because the moment arm is smaller,
even if a light object is lifted, a large
force is required at the input site, which
corresponds to a large force applied by
the temporomandibular joint (TMJ).
The user must be aware of the weight of
the object in the gripper, so that he can
sense when an object has been grasped
or has been dropped. The user must also
be able to determine if he has contacted
an obstacle. Additionally, the system
must be stable.
Figure 2. Slave (Output) Component
of Chameleon
Adding a force-assist mechanism offers
a significant reduction in user strength
requirements and provides added
precision and accuracy to movements of
the Chameleon. The proposed
power-assist device is novel in that it provides
power assistance while maintaining a
constant position relationship between
the user and the robot movements.
Figure 3. User with Chameleon
Background
With one exception, there are currently
no rehab robots that offer the user a
sense of force or contact with the
environment. Workstation robots such
as the ProVAR do not offer a direct
coupling between the user and robot.
When a user is controlling the robot with
a joystick control such as the one used in
the ProVAR, he does not receive any
feedback from the robot except for visual
position feedback, which makes control
more difficult [3].
The Helping Hand [4,5] and the
MANUS [6] are two rehabilitation
robots that can be mounted on the
wheelchair and controlled with a joystick
or a switch-pad. These two robots are
completely motorized and do not offer
any force feedback to the user. The
Magpie [7] is an example of a
wheelchair mounted, mobile robot that
The user must be aware of the weight of the user operates with his or her foot and
the object in the gripper, so that he can leg motions. This design does provide
sensory feedback due to the cable
- 68 The goal of this project is to implement a
force amplification device at the pitch
joint to assist with lifting loads in order
to eliminate the pain and fatigue that are
currently encountered at the input site.
Ideally, the user will be able to lift a
heavy load using only a small amount of
force.
ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
connection; however, the system is
totally body powered.
An important feature of cable-operated
prosthetic and orthotic devices is
extended physiological proprioception
(EPP) [3]. EPP allows the operator of a
device to sense its static and dynamic
characteristics through physical
sensations that mimic the natural
sensations of movement. The addition of
EPP to a rehabilitation robot greatly
improves ease-of-use and functionality
because the user has a sense of his
position in the environment and he is not
constantly forced to watch the
end-effector of the device [3].
Test-Bed Development
In order to determine the proper control
scheme for the power augmentation
system, a test-bed has been designed to
mimic the system of pulleys, cables, and
lever arms in place on the existing
Chameleon. The test-bed consists of two
pulleys with lever arms attached that
apply a force at a distance from the
center of each pulley. The cable is
rigidly secured to a third pulley mounted
to a motor located in the center of the
test-bed, as shown in Figure 4. A Force
Sensing Resistor (FSR) is mounted in
a casing to ensure even force
distribution, and is mounted in tension in
order to sense the forces transmitted
through the cable. From the data we
found that, within the range that we were
testing, it was most appropriate to use a
third-order polynomial, equation (1), to
relate the applied force to the sensor
voltage:

Fi = 0.73·Vi³ − 1.03·Vi² + 2.95·Vi − 0.21    (1)

The R² value of ~0.9995 indicated a
good fit for this system.

Figure 4. Force-assist Test-bed Schematic
(motor, shaft and pulley of radius R2 at the
center; side pulleys of radius R1 with lever
arms L1 and L2; cable forces Fi and Fext;
FSR in the left cable; external forces Fh,
Fo and W)

Governing Equations

The equations that govern the system are
based on Figure 4, where L1 and L2 are
the lengths of the lever arms (m), R1 and
R2 are the radii of the pulleys (m), Fi and
Fext are the forces in the cable (N), Fh,
Fo and W are external forces (N), and the
FSR is a sensor that measures force as a
voltage. The torque applied by the
human is identical to the torque in the
left side of the cable, so the force in that
cable can be expressed as a function of
the force applied by the human:

Fh = (R1 · Fi) / L1    (2)

where Fi is the force in the left side of
the cable, Fh is the force applied by the
human, R1 is the radius of the pulley and
L1 is the length of the lever arm. The
external torque applied is identical to the
torque in the right side of the cable and
therefore the force in that cable is a
function of the external weight:

W = (R1 · Fext) / L2    (3)

As before, W is the external weight, R1 is
the radius of the pulley, L2 is the length
of the second lever arm and Fext is the
force in the right side of the cable. The
force required by the motor must be
equal to the difference between the force
in each section of the cable in order to
maintain static equilibrium. The torque
required by the motor can then be
expressed as a function of the forces in
each part of the cable:

Tm = R2 · (Fext − Fi)    (4)

We require that the force applied by the
human be some reduced value (α) of the
external weight:

Fh = W / α    (5)

Substituting equations (2), (3) and (5) in
equation (4) and accounting for the
gearing of the motor, we can redefine the
torque required by the motor as:

Tm = (α·L2/L1 − 1) · Fi · R2 · C    (6)

C is a constant describing the behavior
(gearing/speed reduction) of the motor.
From the equation relating torque and
current in a motor:

Tm = Kt · I,  or  Tm = Kt · Vc / R    (7)

Kt is the torque constant of the motor,
supplied by the manufacturer. Vc is the
voltage needed to drive the motor and R
is the resistance of the circuit. Solving
equation (7) for Vc and substituting
equation (6), the general equation for the
voltage sent to the motor can be written
as:

Vc = (R / Kt) · (α·L2/L1 − 1) · Fi · R2 · C    (8)

This is the equation used in the Labview
program, where R, R2, Kt, α, L1, L2 and
C are all constants, Fi is the force sensed
by the FSR and Vc is the calculated
voltage sent to the motor.

Force Discernment Test

The average human is able to
discriminate between weights that vary
by more than 8% [8]. A preliminary test
was conducted to determine whether the
system was accurate to within this range
for two different weights at different
gains. The test was conducted by resting
the input lever on an ATI Force Sensor
and measuring the effective force at the
input site. For example, it is expected
that if the human lifts 0.2 kg at a gain of
1.0, it will require the same input force
to lift it as it will to lift a 1.0 kg weight at
a gain of 5.0.
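The equal-feel prediction follows directly from dividing the weight by the gain; a quick sketch (assuming g = 9.81 m/s²):

```python
G = 9.81  # gravitational acceleration (m/s^2), assumed value

def expected_input_force(mass_kg: float, gain: float) -> float:
    """Expected force at the input site: the object's weight divided
    by the force-amplifier gain (the alpha reduction)."""
    return mass_kg * G / gain

# 0.2 kg at gain 1.0 and 1.0 kg at gain 5.0 should feel identical:
print(expected_input_force(0.2, 1.0))  # about 1.96 N
print(expected_input_force(1.0, 5.0))  # about 1.96 N, the same felt force
```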
We conducted a series of trials in order
to determine the accuracy of the system.
First we set six expected forces ranging
from 1.2 to 4.2 kg. Then, knowing the
five masses we would use, we calculated
the gains that, when paired with each of
the masses, would yield the expected
forces. Each mass/gain pair was tested
five times to determine the repeatability
of the trial, and the average percent error
from the expected force was calculated.
A total of 150 trials were conducted. A
sample of the data is shown in Figure 5.
This shows the average deviation for five
trials that were expected to yield the
same force of 2.00 N.
After conducting the trials we calculated
the t-distribution for the samples. The
results for each weight were tested for
95% confidence. We found that the data
fell within –17.5% to –5.9% of the
expected average overall.
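The full control law, equation (8) fed by the third-order FSR calibration polynomial, can be sketched as follows; all constant values below are illustrative placeholders, not the actual test-bed parameters:

```python
# Placeholder constants, for illustration only (not the test-bed's values).
R_CIRCUIT = 2.0      # circuit resistance (ohm)
K_T = 0.05           # motor torque constant (N*m/A)
ALPHA = 5.0          # force-reduction gain
L1, L2 = 0.20, 0.20  # lever-arm lengths (m)
R2 = 0.03            # radius of the motor pulley (m)
C = 10.0             # gearing/speed-reduction constant

def cable_force_from_voltage(v_i: float) -> float:
    """Third-order FSR calibration polynomial from the paper,
    relating sensor voltage to the force in the left cable (N)."""
    return 0.73 * v_i**3 - 1.03 * v_i**2 + 2.95 * v_i - 0.21

def motor_voltage(f_i: float) -> float:
    """Equation (8): Vc = (R/Kt) * (alpha*L2/L1 - 1) * Fi * R2 * C."""
    return (R_CIRCUIT / K_T) * (ALPHA * L2 / L1 - 1.0) * f_i * R2 * C

f_i = cable_force_from_voltage(1.0)  # sensed cable force for a 1 V reading
print(motor_voltage(f_i))            # voltage command sent to the motor
```

In the Labview program this computation runs continuously: each FSR reading is converted to a cable force, and the motor voltage is updated so the motor supplies the difference between the external load and the user's reduced share.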
Evaluation
The goal of this testing is to analyze the
behavior of the force-assist mechanism
working in conjunction with the
Chameleon. In our testing, we will only
be operating the Chameleon with two
degrees of freedom: roll and pitch of the
head. These movements correspond to
horizontal abduction and adduction of
the shoulder and flexion and extension of
the shoulder. We will not include the
flexion and extension joint of the elbow
or any of the operations of the gripper at
this stage as we are interested only in the
efficacy of the power assist device,
rather than the functionality of the
Chameleon.
Mass | Gain | Exp Force | Avg Force
0.55 | 2.67 | 2.00 | 1.55
0.75 | 3.65 | 2.00 | 1.43
0.95 | 4.63 | 2.00 | 1.53
1.05 | 5.12 | 2.00 | 1.90
1.55 | 7.57 | 2.00 | 2.10

Figure 5. Sample Trial Data

Figure 6 shows the averaged actual data
plotted against the expected data (the
line y=x) for all of the trials. Clearly as
the expected force increases, the actual
force decreases from the expected value.
This is not a serious problem because the
actual force is still lower than the
expected force, which does not pose a
concern to the user.

Figure 6. Actual Data vs. Expected Data
(actual force (N) plotted against expected
force (N))

We want to evaluate the effect of adding
force assist in the performance of two
joints of the Chameleon. In order to
analyze the force-assist mechanism, one
test, a Fitts’ movement test, will be
repeated three times with the force-assist
mechanism/Chameleon setup. In the
test, the user will hold a laser pointer in
the gripper of the Chameleon arm. On
the wall at a distance of six feet away
will be a collection of targets of three
different sizes in a grid formation. The
user will be asked to point the laser
pointer back and forth between two preselected markers of the same size –
moving diagonally in order to combine
the motions in the horizontal and vertical
planes. Time will be recorded as the user
repeats the trajectory a total of ten times.
The time will then be averaged over the
ten trials to yield an average value for
one task. This test will help determine
how performance is affected when
strength is added to the system.
Each of the trials will be repeated with
the Chameleon gain set at three different
levels: the maximum gain that the system
can sustain (~7.0), zero gain, and a
mid-range gain (~3.5). This test will give us a
measure of how the control of the
Chameleon is affected by changing the
gain.
Experimental Design

The independent variable in this study is
the level of gain set on the system. The
dependent variable is the index of
performance (bits/sec) as calculated
using Fitts' Law. This data will be
statistically analyzed using a one-way
ANOVA test for repeated measures. We
will also study how our data correlates to
Fitts' Law, which relates speed and
precision measurements in target
acquisition tasks. Additionally, we will
use the subjects' responses to the
Likert-type questionnaire (1 = strongly
disagree, 2 = disagree, 3 = neutral,
4 = agree, 5 = strongly agree), which will
be filled out at the end of each day, for a
descriptive analysis study.
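The index of performance can be computed from the target geometry and the averaged movement time; this sketch uses the Shannon formulation of the index of difficulty, with illustrative target sizes and times (not values from the planned trials):

```python
import math

def index_of_performance(amplitude_m: float, width_m: float, movement_time_s: float) -> float:
    """Index of performance (bits/sec): ID / MT, using the Shannon
    formulation of the index of difficulty, ID = log2(A/W + 1)."""
    index_of_difficulty = math.log2(amplitude_m / width_m + 1.0)
    return index_of_difficulty / movement_time_s

# Illustrative: targets 0.60 m apart, 0.05 m wide, 1.8 s average time.
print(round(index_of_performance(0.60, 0.05, 1.8), 2))
```

Comparing this index across the three gain settings is what the one-way repeated-measures ANOVA would test.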
Discussion
Informal testing has yielded significant
power assistance for the Chameleon.
This has made using the device much
lighter and as a result, easier to use for
extended periods of time. We propose
that the addition of the power assist
mechanism to the Chameleon will
decrease the amount of force and time
needed by the user to acquire targets at
no sacrifice to his precision movement
abilities. Upon completion of the testing
for this project, we will determine
whether the addition to the Chameleon is
a worthwhile expenditure, and if it is
deemed successful, the power
augmentation system will be utilized in
other projects.
Acknowledgements
This research is supported by the U.S.
Department of Education Rehabilitation
Engineering Research Center on
Rehabilitation Robotics, Grant
H133E30013 from the National Institute
on Disability and Rehabilitation
Research (NIDRR), and the Nemours
Foundation.
References
[1.] Stroud, S., Rahman, T. "A Body
Powered Rehabilitation Robot".
Proceedings of the RESNA '96
Annual Conference. Pp. 363-365.
June 1996.

[2.] Stroud, S., Rahman, T. "A Body
Powered Rehabilitation Robot".
Proceedings of the RESNA '97
Annual Conference. Pp. 387-389.
June 1997.

[3.] Childress, D., Heckathorne, C.,
Grahan, E., Strysik, J., Gard, S.
"Extended Physiological
Proprioception (E.P.P.): An
Electronic Cable-Actuated
Position-Servo Controller for
Upper-Limb Powered Prostheses".
http://pele.repoc.nwu.edu/progress
/jrrd.dva.9009.EPP.html

[4.] Sheredos, S., Taylor, B., Cobb, C.,
Dann, E. "The Helping Hand
Electro-Mechanical Arm".
Proceedings of the RESNA '95
Annual Conference. Pp. 493-495.
June 1995.

[5.] Sheredos, S., Taylor, B. "Clinical
Evaluation of the Helping-Hand
Electro-Mechanical Arm".
Proceedings of the RESNA '97
Annual Conference. Pp. 378-380.
June 1997.

[6.] Kwee, H. "Integrated Control of
MANUS Manipulator and
Wheelchair Enhanced by
Environmental Docking".
Robotica. Vol. 16. Pp. 491-498.
1998.

[7.] "MAGPIE - Its Development and
Evaluation". Internal Report:
Oxford Orthopaedic Engineering
Centre, Nuffield Orthopaedic
Centre, Headington, Oxford,
England. 1991.

[8.] Cohen, S., Ward, L. Sensation and
Perception. Pp. 260-261.
Harcourt Brace Jovanovich Inc.
San Diego. 1984.
Address
Kelly McClenathan
Extended Manipulation Laboratory
duPont Hospital for Children/ University
of Delaware
P.O. Box 269, 1600 Rockland Rd.
Wilmington, DE 19899
(302) 651-6868
Email: [email protected]
FORCE LIMITATION WITH AUTOMATIC RETURN MECHANISM FOR
RISK REDUCTION OF REHABILITATION ROBOTS
Noriyuki TEJIMA
Ritsumeikan University, Kusatsu, Japan
Abstract
In this paper, a new mechanism to reduce
the risk of rehabilitation robots contacting
the human body is proposed. It was
composed of a force limitation
mechanism and a soft structure with
anisotropic viscosity. A prototype was
developed, and its basic features were
experimentally evaluated. The size of the
prototype was too big, but it was
confirmed that the new mechanism had
many advantages. It could prevent a
force stronger than a threshold level
from affecting a person. As the
arrangement of the mechanism was not
restricted to the robotic joints, the effect
of the posture of a robot upon the
limitation force could be reduced to a
certain degree (although not entirely).
And, because elastic energy was
consumed in the return process, it would
not resonate.
Introduction
Lately rehabilitation robots have become
of general interest. However, there are
very few reports on how to reduce the
risk of rehabilitation robots hitting
humans. Because robots are essentially
dangerous, industrial robots must be used
in isolation from human work spaces.
Contrary to this, rehabilitation robots
cannot be separated from human work
spaces because of their purposes. As a
basic solution to this problem, a new risk
reduction strategy for rehabilitation
robots must be formulated to prevent
accidents.
Solutions to this problem have been
previously suggested. The method by
which a robot stops on ultrasonic or
beam sensor signals before contact with
a human body is unreliable [1][2], as
eliminating dead angles of the sensing
area is difficult; it can therefore be
considered only an additional method
for risk reduction. As another method,
force sensors and torque sensors were
suggested to detect contacts [3].
However, problems lie in low reliability
caused by the intolerance of electronic
devices to electromagnetic noise. Soft
mechanisms, such as soft arms, soft joints
or soft covers, serve to reduce the peak
of impulsive force [4]. However, no
report has clarified the most suitable
compliance values. If a soft system such
as a whip is resonant, it may be
dangerous. It is also a problem that a soft
structure is deformed even by a weak
force. As practical solutions for a simple
system, force (or torque) limitation
mechanisms and small power actuators
have been suggested [5]. However,
deciding the limitation torque value for
an articulated robot is a difficult planning
problem because of the complex
relationship between the torques and an
external force.

Every method has its merits and demerits.
In the present situation, where a proper
countermeasure cannot be found, it is
difficult to make the use of rehabilitation
robots widespread. The purpose of this
study was to develop a new risk
reduction mechanism that combines the
advantages of a soft structure and a force
limitation mechanism.
Design Rationale
A new force limitation mechanism was
proposed. A force limitation mechanism
is rigid against forces weaker than a
threshold, but it is activated to move or to
slip by stronger forces. It can protect a
user against excessive forces from a
robot. However, previous force
limitation mechanisms could not return by
themselves after releasing forces. They
were restricted to being arranged on the
joints of articulated robots because their
return movements were produced by the
actuators that drove the joints. If force
limitation mechanisms can automatically
return after releasing force, it becomes
possible to arrange them freely on any
part of the robot arm. It will be easier to
decide the limitation force value and it
will accordingly lead to new possibilities
for the force limitation mechanism.
Careful consideration should be given to
the mechanical impedance of the return
mechanism: if viscosity is set low when
the mechanism operates under excessive
forces, rapid responses to excessive
forces will be available. On the other
hand, high viscosity on the return will
avoid the resonance problem. The
mechanism should therefore have
anisotropic viscosity.

Figure 1. Structure of a prototype of a
force limitation with automatic return
mechanism (spring, damper and magnets).

Table 1. Features of the damper
Damper Type: ADA510MTP
Stroke: 100 mm
Max. Load: 2000 N
Speed (compress): 0.47 m/s (500 N)
Speed (extend): 0.03 m/s (500 N)
Development
A prototype of this mechanism was
developed to confirm its features (see
Figure 1). The total size was 400 mm in
length and 200 mm in diameter. A
commercial damper (Enidine
ADA510MTP) with anisotropic viscosity
was used. The viscosity of the damper in
extension, which was adjustable, was set
at the highest value. Features of the
damper are shown in Table 1. Two types
of mechanical spring for generating the
return movement were prepared: spring
type I had a stiffness of 2900 N/m and
was fixed with a pre-load of 58 N, and
spring type II had a stiffness of 4900 N/m
and was fixed with a pre-load of 98 N.
Force limitation was realized by four or
five magnets, each of which had an ideal
holding force of 98 N with steel. The
straight movement was supported by a
ball bearing.

Methods

A total of four prototypes of two kinds of
spring and two kinds of magnet were
examined by static forces. Each prototype
was rigid against weak forces, but was
activated to move by strong forces.
Results of the threshold force are shown
in Table 2. The threshold force was
adjustable by the magnets and the spring.
However, the results obtained did not
agree with the theoretical results. The
standard deviations were as wide as
3%, but I think that they were permissible
because the diversities of a human are
wider. The factors affecting it could be
friction, the dead load, the unbalanced
load, the flatness and the quality of the
steel. This will be improved by the
introduction of a stiffer bearing system.

Table 2. Results of threshold force
Magnets | Spring type I | Spring type II
4 | 229.1 ± 7.4 N | 283.0 ± 8.5 N
5 | 318.6 ± 9.6 N | 371.1 ± 4.9 N

The travel of the mechanism was
measured with a laser displacement
sensor (Keyence LK-2500) when a force
was given and released statically. The
constant force for the experiment is
shown in Table 3.

Table 3. Loads for travel measurement
Magnets | Spring type I | Spring type II
4 | 230 N | 330 N
5 | 330 N | 430 N

Figure 2. A typical result of travel
measurement (five magnets and spring
type I): (a) forward movement, travel (mm)
vs. time (sec); (b) return movement,
travel (mm) vs. time (sec).

Results

A typical example of the results is shown
in Figure 2. The results obtained agreed
approximately with those expected.
When the force was given, the
mechanism started immediately and
it traveled 55 mm within 0.25 seconds.
On the other hand, the mechanism
returned slowly after release. Time
constants of the return were 3.4 seconds
for spring type I and 2.4 seconds for
spring type II, which were long enough to
avoid resonance. On the last two or three
millimeters of movement, the mechanism
quickly returned by the magnetic force,
but this would not be a disadvantage of
the mechanism. The distance of the
quick movement was determined by the
force of the spring and the magnets.
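If the return is modeled as a first-order spring-damper relaxation, its time constant is τ = b/k; the damping values below are inferred assumptions chosen to reproduce the measured time constants, not figures reported in the paper:

```python
def time_constant(damping_ns_per_m: float, stiffness_n_per_m: float) -> float:
    """First-order return of a spring-damper pair:
    x(t) = x0 * exp(-t / tau), with tau = b / k."""
    return damping_ns_per_m / stiffness_n_per_m

# Assumed damping values that reproduce the measured time constants:
print(time_constant(9860.0, 2900.0))   # 3.4 s, spring type I
print(time_constant(11760.0, 4900.0))  # 2.4 s, spring type II
```

The two inferred damping values are of similar magnitude, which is consistent with both springs sharing the same damper set to its highest extension viscosity.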
A two-dimensional application model by
which a force is given to a robotic link
with two moment limitations with
automatic return mechanisms is shown in
Figure 3. Although the prototype moved
straight, a rotation type was used in the
simulation. When the threshold moment
at mechanism A is MAmax and one at B is
MBmax, the external force F is limited as
follows:
M A m ax
M B m ax
F ≤
a nd F ≤
l 1 sin θ
l 2 sin (θ + α )
A typical result of the simulation is
shown in Figure 4. The force is limited
as the thick line by two mechanisms.
Because the threshold force is finite at
any angle, the contact force can be
limited in a certain range independently
of the posture of the robot. As the result
of simulation, a free arrangement of the
mechanism will bring various
advantages.
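The two bounds combine into a single posture-dependent limit by taking their minimum; a sketch with illustrative link lengths and threshold moments (not the values used in the simulation):

```python
import math

def force_limit(ma_max: float, mb_max: float, l1: float, l2: float,
                theta: float, alpha: float) -> float:
    """Contact force limit from two moment-limitation mechanisms:
    F <= MA_max / (l1 * sin(theta))  and
    F <= MB_max / (l2 * sin(theta + alpha))."""
    bound_a = ma_max / (l1 * math.sin(theta))
    bound_b = mb_max / (l2 * math.sin(theta + alpha))
    return min(bound_a, bound_b)

# Illustrative: 10 N*m thresholds, 0.4 m and 0.3 m link lengths.
print(round(force_limit(10.0, 10.0, 0.4, 0.3, math.pi / 4, math.pi / 6), 1))
```

Evaluating this minimum over the angle of the applied force traces out the thick limiting curve of Figure 4: whichever mechanism yields the lower bound governs at each posture.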
Figure 3. A two-dimensional model of a
robot arm with torque limitation
mechanisms.

Figure 4. Result of simulation of the
model (force limit plotted against the
angle of force, in radians).

Discussion

To be applied to rehabilitation robots, the
mechanism should be reduced to a size of
50-100 mm and a threshold force of
50-100 N. However, I believe that I showed
this new idea to be beneficial.
Miniaturization would be possible by
developing a small damper in place of the
commercial one, in which the viscosity was
adjustable. The viscosity, the stiffness
and the threshold force value should be
determined experimentally for a
rehabilitation robot. There may be a better
arrangement than the simulated one, using
three or more moment limitation
mechanisms. It is easy to expand to a
three-dimensional model. It would also
be applicable to an anisotropic force
limitation mechanism.
Conclusion
A prototype of a new mechanism to
reduce the risk of a rehabilitation robot
hitting the human body was developed.
It was confirmed that the new mechanism
had many advantages, such as a flexible
arrangement and no resonance.
Miniaturization and a way to determine
the parameters will be subjects for future
study.
Acknowledgments
The author would like to acknowledge
the assistance and efforts of Tuyoshi Itoh;
I also wish to thank the New Industry
Research Organization and the
KEYENCE Co. Ltd. for their support.
References
[1] M. Kioi, S. Tadokoro, T. Takamori: A
Study for Safety of Robot Environment;
Proc. 6th Conf. Robotic Soc. Japan,
393-394 (1988) (in Japanese)
[2] H. Tsushima, R. Masuda: Distribution
Problem of Proximity Sensors for
Obstacle Detection; Proc. 10th Conf.
Robotic Soc. Japan, 1021-1022 (1992) (in
Japanese)
[3] K. Suita, Y. Yamada, N. Tsuchida, K.
Imai: A Study on the Detection of a
Contact with a Human by a
Compliance-Covered Robot with Direct
Torque Detection Function - In Case of a
1-Link Robot; Proc. ROBOMEC'94,
897-902 (1994) (in Japanese)
[4] T. Morita, N. Honda, S. Sugano:
Safety Method to Achieve Human-Robot
Cooperation by 7-D.O.F. MIA ARM -
Utilization of Safety Cover and Motion
Control; Proc. 14th Conf. Robotic Soc.
Japan, 227-228 (1996) (in Japanese)
[5] T. Saito, N. Sugimoto: Basic
Requirements and Construction for Safe
Robots; Proc. ROBOMEC'95,
287-290 (1995) (in Japanese)
Author Address
Noriyuki Tejima
Dept. of Robotics, Ritsumeikan Univ.
1-1-1 Noji-higashi, Kusatsu, Shiga,
525-8577, Japan
E-mail: [email protected]
Phone: +81 (77) 561-2880
Fax: +81 (77) 561-2665
COGNITIVE REHABILITATION USING
REHABILITATION ROBOTICS (CR3)
B. B. Connor 1,2,3, J. Dee 2, and A. M. Wing 3
1 University of North Texas, Denton, TX, 2 Stirling Dynamics Limited, Bristol, UK,
3 Centre for Sensory Motor Neuroscience, The University of Birmingham, UK
Abstract
Cognitive deficits are a well-known
problem associated with many
disabling conditions, such as traumatic
brain injury, stroke, and other
neurological disorders. Their presence
may be less obvious, but potentially as
disabling, in conditions such as
multiple sclerosis, drug and alcohol
related disorders, and psychotic
disorders such as schizophrenia. This
paper reports work in progress with
individuals with brain damage using
robot aided cognitive rehabilitation.
Introduction
Traditionally, the field of
rehabilitation robotics has focused on
physical disabilities where robots are
used as a substitute for absent or
diminished motor function. More
recently there has been a concern with
robotic aids for motor rehabilitation
[1]. For example, Krebs et al. [2],
using robot-aided rehabilitation (a
robotic arm) with stroke patients,
demonstrated that robot-aided therapy
does not have adverse effects, patients
do tolerate the procedure, and brain
recovery may be aided in the process.
In their experimental paradigm, power
assistance was used to enhance
movements being made by the patient.
Cognitive Rehabilitation using
Rehabilitation Robotics (CR3) is being
developed to retrain diminished
cognitive function following
non-progressive brain injury using guided
movement. It combines errorless
learning, a proven method of teaching
new information to individuals with
memory problems, and the Active
Control Stick, currently used in the
aerospace industry, which can prevent
errors from being made during
learning. Thus, CR3 offers a new area
for rehabilitation robotics, relevant to
perceptual motor skills assisted by
errorless learning.
Errorless learning is a method of
teaching individuals to successfully
make discriminations which are
otherwise difficult for them to make
under conditions which ensure that few
or no errors are made during learning.
Research in the field of cognitive
rehabilitation with memory impaired
individuals has demonstrated that
conscious awareness during learning is
necessary for error correction to occur
[3]. For most individuals with brain
damage, this conscious awareness, or
memory of the event, is not available to
them. When errors are allowed to
occur during learning, it is the incorrect
response that is often unconsciously
remembered and repeated. It is not
surprising that errorless learning has
been found to be superior to trial and
error learning for memory impaired
individuals [3,4,5]. The broad aim of
our project is the development of
clinical applications of errorless
learning and evaluation of its
effectiveness with cognitive problems
in addition to memory.
Methods
Equipment--Active Force Field (AFF)
technology, currently being used in the
aeronautic and aerospace industries
with an Active Control Stick, provides
a force field interaction between the
pilot and the aircraft or simulator
control system via biodynamic
feedback and proprioceptive
compensation. The electric motors of
the Active Control Stick can be used in
shaping motor behavior. For any
rehabilitation program based on the
participant using movement to select
the correct option from a set of
alternatives, the Active Control Stick
can be set to guide the individual to the
correct alternative. The role of the
therapist is to set the force field
parameters (e.g. motor synthesized
spring strengths) according to the
individual’s needs, while continually
trying to reduce the degree of guidance
with the goal being that the individual
carry out the action unaided in the end.
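The guidance scheme just described can be sketched as a virtual spring pulling the stick toward the correct alternative, with a stiffness that the therapist fades across trials. The function names, the 2-D stick model and the exponential fade are illustrative assumptions, not the actual CR3 controller.

```python
def guidance_force(pos, target, k):
    """Virtual spring pulling the control stick toward the correct
    target. pos and target are (x, y) stick coordinates; k is the
    motor-synthesized spring stiffness set by the therapist."""
    return (k * (target[0] - pos[0]), k * (target[1] - pos[1]))

def faded_stiffness(k0, trial, decay=0.8):
    """Reduce the guidance stiffness over successive trials so the
    patient gradually performs the movement unaided (errorless
    learning: strong early guidance prevents wrong responses)."""
    return k0 * (decay ** trial)
```

Early trials would use the full stiffness k0, so errors are physically prevented; by later trials the force is nearly zero and the patient moves on their own.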
The distinct advantages of CR3
include: it saves the therapist time and
labor while reducing the human error
introduced when a therapist guides the
patient's movement by hand; the
program can be adapted to the
individual needs of the patient; and it
is not necessary to constrain the
patient's environment, which may be
possible in a protected setting but not
in a real world environment, since it is
the patient's responses that are
constrained during retraining. Also,
because the patient's movements take
place in three-dimensional space, the
technique allows patients to make
more realistic movements during
learning.
[Figure (block diagram): PATIENT, VIDEO SYSTEM, MANIPULATOR, AFF CONTROL SYSTEM, COMPUTER, TASK CONTROL (THERAPIST)]
Figure: CR3 system components. The
patient responds to video-presented
information by making movements of
the manipulator. These are subject to
guiding forces produced by the AFF
control system, whose parameters may
be adaptively tuned by the therapist
using both clinical observation and
system measures of performance.
Proposed Study
Proposed Patient Study--Patient studies
are currently underway applying
errorless learning using rehabilitation
robotics to deficits in executive/motor
functions and attention. For example,
the Active Control Stick is being used
with a patient with ‘action
disorganization syndrome’ resulting
from frontal lobe damage, who is
being trained to select correct
sequences of action for everyday
tasks, such as writing a letter, using a
menu system in which the component
actions of the task are listed [6]. Here
the patient is constrained from making
incorrect selections by the robot aid.
Transfer of learning is assessed using
behavioral measures of performance in
everyday tasks [6,7].
In a second case, a line bisection
task is being used to train a patient with
unilateral neglect, as a result of stroke,
to bisect stimuli at their centers. Here
the Active Control Stick prevents the
patient from tracking too far into the
ipsilesional field, and orients his
perceptual and motor responses toward
the center of lines. Bisection training is
applied using stimuli in different areas
of the visual field, to establish
generalized perceptual-motor routines
linked to objects rather than to a fixed
response to one location. The transfer
of learning to other measures of neglect
is being assessed.
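The constraint on the neglect patient can be pictured as a one-sided virtual wall: the stick is free to move toward the line's center but meets a restoring force when it drifts into the ipsilesional field. The sign convention and the function name below are illustrative assumptions, not the published controller.

```python
def bisection_guard(x, center, k_wall):
    """One-sided restoring force for line-bisection training.
    x is the stick position along the line and center its true
    midpoint; the positive direction is taken as ipsilesional.
    Returns zero while the patient moves toward the center, and
    a spring force pushing back once x overshoots past it."""
    overshoot = x - center
    return -k_wall * overshoot if overshoot > 0 else 0.0
```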
Discussion
The line bisection task has been
tested on normal subjects in a paradigm
designed to simulate unilateral neglect
in which the visual image is degraded.
Preliminary results show that robot-aided
errorless learning training
improves both the speed and accuracy
of performance in the impoverished
condition.
References.
[1] P. van Vliet and A.M. Wing, “A
new challenge--Robotics in the
rehabilitation of the neurologically
motor impaired,” Physical Therapy,
vol. 71, pp. 39-47, 1991.
[2] H.I. Krebs, N. Hogan, M.L. Aisen
and B.T. Volpe, “Robot-aided
neurorehabilitation,” IEEE
Transactions on Rehabilitation
Engineering, vol. 6, no. 1, pp. 75-85,
1998.
[3] A.D. Baddeley and B.A. Wilson,
“When implicit learning fails: Amnesia
and the problem of error elimination,”
Neuropsychologia, vol. 32, pp. 53-68,
1994.
[4] B.A. Wilson, A.D. Baddeley, J.J.
Evans, and A. Shiel, “Errorless
learning in the rehabilitation of
memory impaired people,”
Neuropsychological Rehabilitation,
vol. 4, pp. 307-326, 1994.
[5] B.A. Wilson and J.J. Evans, “Error-free
learning in the rehabilitation of
individuals with memory
impairments,” Journal of Head Trauma
Rehabilitation, vol. 11, no. 4, pp. 54-64,
1996.
[6] G.W. Humphreys, E.M.E. Forde,
and D. Francis, “The organization of
sequential actions,” in S. Monsell and
J. Driver (Eds.), Attention and
Performance XVIII, Cambridge, MA:
MIT Press (in press).
[7] G.W. Humphreys and E.M.E. Forde,
“Disordered action schema and action
disorganization syndrome,” Cognitive
Neuropsychology (in press).
THE GOBOT:
A TRANSITIONAL POWERED MOBILITY AID
FOR YOUNG CHILDREN WITH PHYSICAL DISABILITIES
Christine Wright-Ott, MPA, OTR
Rehabilitation Technology & Therapy Center
Lucile Packard Children’s Health Services at Stanford
ABSTRACT
The following paper describes a new
and innovative mobility aid, the
GoBot, designed for children under the
age of six who have a physical
disability that limits their ability to
achieve self-initiated mobility. The
GoBot was developed at the
Rehabilitation Engineering Center,
Lucile Packard Children’s Hospital at
Stanford from 1991 to 1995 through a
grant (Grant H189P00018-91) from the
U.S. Department of Education, Office
of Special Education Programs. The
original team included an Occupational
Therapist, Rehabilitation Engineer and
Design Engineer. The GoBot is now
being manufactured and distributed by
Innovative Products Incorporated.
INTRODUCTION
During the first three years of life,
children become mobile, learn to talk,
play with toys, interact with peers and
explore the environment.
Infants
transition through several stages of
mobility during the first year from
belly crawling to rolling, creeping,
crawling and finally to an upright
posture for ambulating (Bly, 1994).
Young children are typically in a state
of perpetual motion, reaching out to
their environment. In contrast,
children who have physical
limitations, such as those who are
unable to stand and ambulate
independently, are typically limited in
their ability to reach out and interact
with their environment. They are often
restricted to static positions on the
floor, in a stroller, or in therapeutic
equipment such as a standing frame.
They have few opportunities to act
upon the environment; rather, the
environment has to be brought to
them.
Until recently, there were very few
options for a child with severe physical
disabilities, such as cerebral palsy, to
achieve self-initiated mobility to
interact with the environment. If the
child could not use a manual walker,
the only alternatives were to use a
powered wheelchair or an adapted toy
vehicle (Wright, 1997). Adapted toy
vehicles are noisy and cannot be used
indoors, where young children spend a
majority of their time.
A power wheelchair can be costly
($15,000-$20,000), particularly if the child
requires a custom seating system and
alternative controls such as switch
input rather than a joystick. Health
care professionals are often reluctant to
recommend a power wheelchair for a
young child and do so only if the child
can demonstrate excellent driving
skills. Could a new type of mobility
device be designed that would provide
young children with the ability to
explore the environment by allowing
children to move close enough to reach
and touch people and objects around
them? Could the device provide a
transitional means of mobility for the
child to experience the sensory and
perceptual aspects of mobility:
vestibular, proprioceptive, visual-perceptual,
spatial relations and
problem solving? Could this device be
made available at a cost closer to that
of custom orthotic mobility aids
($4,000-$5,000) than that of a power
wheelchair ($10,000-$15,000)? The
GoBot, originally designed as the
Transitional Powered Mobility Aid
(TPMA), is such a mobility device
(Wright, 1998).
It is specifically
designed to provide children as young
as 12 months of age with the ability to
achieve developmentally appropriate
mobility for the purpose of exploring,
while standing upright. The GoBot
enables these children to explore the
environment while assisting in
transitioning them to other methods of
mobility such as a walker, manual or
power wheelchair.
PRODUCT DESCRIPTION
The GoBot (Figure 1) consists of an
adjustable positioning frame attached
to a battery-powered base, which can
be driven with a joystick or up to four
switches. The frame is easily adjusted
without tools to accommodate children
from 12 months to 6 years of age.
Figure 1: Photo of the GoBot
It has been designed
to accommodate children with various
positioning needs such as those with
low muscle tone or weakness and
children with spasticity or reflexive
posturing. Children can be positioned
in standing, semi-standing or in a
seated position by adjusting the
positioning frame’s height in relation
to the height of the footplate. Features
of the positioning frame include a seat
which slides backwards between the
vertical backposts to allow for hip
extension when positioning children in
standing. The seat can also be adjusted
forwards for children requiring more
support under the pelvis and thighs
when sitting or semi-standing. The
vertical backpost unlatches and swings
down so that one adult can easily
transfer the child in and out of the
GoBot.
pad’s vertical post is mounted to an
adjustable sprocket joint to adjust the
pitch of the child’s trunk, either
forwards or backwards. There is one
strap around the backside of the
anterior trunk pad, which fastens
behind the child’s back. The GoBot
was purposely designed to be restraint
free for the child. This encourages the
child to use movements and weight
shifting when reaching and exploring
objects. Kneepads are available to
provide support to the knees during
standing or semi-standing. The pads
are curved longer on one side than the
other to provide lateral support at the
knees to reduce abduction of the hips.
The pads can be removed from the
posts and rotated to provide medial
support at the knees to reduce
adduction of the legs. However, it is
preferable not to use the kneepads, so
that the child can move the legs freely.
The base of the GoBot houses the
electronics, driving mechanisms and
the 12-volt battery. It can drive about
8 miles before needing to be charged.
Speed is variable up to 4 miles per
hour. It is operated by a joystick or by
up to 4 switches. A multi-adjustable,
five-sided tray allows placement of
switches in any location, so the child
can maneuver the GoBot using
movements of the hands, head or feet.
Most children who use switches to
maneuver the GoBot prefer using their
hands, because they are able to see the
switches. A timed latch mode is
available which allows the child to
travel a distance without maintaining
contact on the switch. A remote
joystick is available for controlling
power on the GoBot from a distance.
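The switch-input behavior described above (momentary driving plus a timed latch that lets the child travel without maintaining contact on the switch) can be sketched as follows; the switch names, latch duration and class structure are illustrative assumptions, not the GoBot's actual controller.

```python
import time

# Illustrative mapping of four switches to (drive, steer) commands.
COMMANDS = {"fwd": (1, 0), "rev": (-1, 0), "left": (0, -1), "right": (0, 1)}

class LatchedDrive:
    """Momentary mode: drive only while a switch is pressed.
    Timed latch mode: a single press keeps the command active for
    `latch_s` seconds, so the child can travel a distance without
    maintaining contact on the switch."""

    def __init__(self, latch_s=3.0, clock=time.monotonic):
        self.latch_s = latch_s
        self.clock = clock
        self._cmd = (0, 0)
        self._until = 0.0

    def press(self, switch):
        self._cmd = COMMANDS[switch]
        self._until = self.clock() + self.latch_s

    def output(self):
        """Current (drive, steer) command; zero after the latch expires."""
        return self._cmd if self.clock() < self._until else (0, 0)
```

Passing a fake clock makes the latch behavior easy to verify without waiting in real time.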
ENVIRONMENTAL
CONSIDERATIONS
The GoBot is best used in an
environment designed to facilitate
exploratory experiences, such as
Mobility Technology Day Camp
(Wright, 1997). Such an environment
encourages successful exploration and
problem solving experiences.
The
children use the GoBot in a large room
where they can get close to the walls,
shelves, cabinets and doors, reaching
and touching objects they have never
had an opportunity to get near.
Developmentally appropriate activities
are introduced at each session such as
pushing and pulling toys, knocking
down blocks, looking into large boxes,
kicking balls, watching themselves in a
wall mirror while moving around the
room and playing hide and seek with
peers. The children often experience
for the first time new sensations such
as vestibular from moving fast and in
circles; propioceptive sensations from
bumping into walls (which is referred
to as “finding” the wall) and visual
perceptual experiences while watching
people and objects while moving
themselves through space.
SUMMARY
The GoBot is both an educational and
therapeutic tool intended to provide a
means for children with physical
disabilities to explore the environment
using upright, self-initiated mobility
and to experience a course of
development closer to that of their
able-bodied peers.
It is intended for young children who
would otherwise spend their
developmental years sitting passively
in a stroller or a manual wheelchair
propelled by others.
wheelchair. The GoBot may facilitate
development in the areas of language,
socialization, self-esteem, visual-motor
and upper extremity function. It is not
intended to replace the need for a
power wheelchair. Rather, it is a tool
for providing children with exploratory
or transitional mobility experiences,
which may lead to functional mobility
(Wright, Egilson, 1996).
ACKNOWLEDGEMENTS
The following people are recognized
for their contributions to this project:
the project team contributors,
Margaret Barker and John Wadsworth;
the parents and children of subjects
included in the project; the therapists
and teachers who participated in
interviews and trials; volunteers
Snaefridur Egilson and Marilynn
Jennings; RJ Cooper; Jim Steinke; Phil
Disalvo; and the staff at the
Rehabilitation Engineering Center,
Lucile Packard Children’s Hospital at
Stanford, now known as the
Rehabilitation Technology & Therapy
Center.
The GoBot has been licensed to
Innovative Products Incorporated, 830
South 48th Street, Grand Forks, ND
58201, the sole manufacturer and
distributor of the GoBot.
REFERENCES
Bly, L. (1994). Motor skills acquisition
in the first year. Tucson: Therapy Skill
Builders.
Wright-Ott, C., & Egilson, S. (1996).
Mobility. In Occupational Therapy for
Children, 3rd ed. Mosby-Year Book,
pp. 562-580.
Wright-Ott, C. (1997). The transitional
powered mobility aid: A new concept
and tool for early mobility. In Pediatric
Powered Mobility: Developmental
Perspectives, Technical Issues, Clinical
Approaches. Arlington, VA: RESNA,
pp. 58-69.
Wright-Ott, C. (1998). Designing a
transitional powered mobility aid for
young children with physical
disabilities. In Designing and Using
Assistive Technology: The Human
Perspective. Brookes Publishing,
pp. 285-295.
ADDRESS
Christine Wright-Ott, MPA, OTR
Rehabilitation Technology & Therapy
Center,
Lucile Packard Children’s Health
Services at Stanford
1010 Corporation Way, Palo Alto, CA
94303.
650-237-9200
FAX: 650-237-9204
Email:
[email protected]
A WHEELCHAIR MOUNTED ASSISTIVE ROBOT
Michael Hillman, Karen Hagan, Sean Hagan, Jill Jepson, Roger Orpwood
Bath Institute of Medical Engineering Ltd, UK
Abstract
A robotic manipulator has been
mounted on an electric wheelchair to
assist people with disabilities.
Particular emphasis has been given to
the constraints and requirements of
wheelchair mounting.
Background
Many different approaches to assistive
robotics have been both suggested and
implemented. Whilst in some situations
(for example a vocational setting) a
fixed-site workstation is suitable [1], in
other cases (for example someone
living independently in their own
home) a mobile device [2] is more
appropriate.
An earlier project at our Institute
implemented a low-cost mobile robot
by mounting a manipulator on a simple
non-powered trolley base, which could
be moved around the home by a carer.
In order to extend the flexibility of this
system, the same manipulator has now
been mounted onto an electric
wheelchair, as described in the current
paper.
Methods
Central to the Institute’s design
philosophy [3] is the involvement of
users at all stages of a device’s
development. In the case of the
wheelchair-mounted robot project we
have been in contact with about 30
volunteers, covering 5 disability
groups. Of these, a smaller number of
local volunteers have been involved in
more detailed discussions. We have
also tried to involve disabled
volunteers’ carers wherever possible,
because they too are users of the
device.
In order to gauge volunteers’ reactions
to a device before investing the time
and expense of producing a working
prototype, it is often valuable to build a
model or a full-scale non-working
mock-up. In this project, a mock-up
was a valuable way of gaining insight
into how users might react to having a
large robotic device mounted on their
wheelchair. A fully working prototype
is then necessary to evaluate the
functionality of a device; however, the
prototype is not an end in itself but
only the first stage in making finished
devices available to those who need
them.
Specification
Many surveys [4] have reported the
different tasks for which a disabled
user might use an assistive robot, and
other papers [5] have described the use
of robots in real-life situations. It is not
appropriate to repeat these statistics
here, but it is useful to divide the tasks
briefly into groupings:
• Eating and drinking
• Personal hygiene
• Work
• Leisure
• Mobility
Many of these task areas are common
to all assistive robot systems. However,
some tasks are more appropriate to a
fixed-site workstation, perhaps used for
a vocational application, while others
are more specific to a wheelchair-mounted
robot. These include general
reaching operations as well as tasks
related to mobility, such as opening
doors and windows and operating
switches (e.g. light switches, lift call
buttons).
Discussions with users identified some
of the specific requirements and
constraints for a wheelchair-mounted
manipulator:
Requirements
It must be able to:
• reach to floor level;
• reach to head height.
Constraints
It must not:
• compromise manoeuvrability;
• obstruct the wheelchair user’s
vision;
• create a negative visual impact;
• affect the steering or control of the
wheelchair;
• affect seat adjustment (or any
similar facilities of the chair);
• affect transfers into or out of the
wheelchair;
• cause an unacceptable drain on the
wheelchair batteries.
Design Description
Vertical actuator & wheelchair
mounting
The vertical actuator and its mounting
to the wheelchair are the most critical
design aspects of the project. Some of
the initial concepts have already been
reported [6]. Use of a non-working
mock-up allowed evaluation of these
concepts.
Figure 1. Mock-up manipulator
Both wheelchair users and others who
saw the mock-up thought that the
single-stage actuator was too obtrusive.
To overcome this, an extending
mechanism was used which, in its
parked (lower) position, does not
extend noticeably above head height.
The mechanism is based around two
parallel vertical tracks, linked by a
pulley. As the moving section of the
actuator moves upwards relative to the
fixed section, the upper arm mounting
point moves upwards relative to the
moving section. Two constant-tension
springs counterbalance the weight of
the arm, so that a small motor of only
6 W can raise the whole arm.
The mock-up mounted the manipulator
on a hinged mounting point towards the
rear of the wheelchair, allowing the
manipulator to be swung forwards
when required. It was found that the
hinged mounting required too much
clearance to the side of the wheelchair,
often not available in a small room.
The manipulator is therefore now
mounted in a fixed position above the
rear wheels. While not giving quite as
much forward reach as originally
specified, this seems a good
compromise. Mounting the manipulator
at the side, close to the user’s shoulder,
decreases the visual impact of the
device and does not obstruct the
wheelchair when approaching a table
or desk. Since the weight is over the
fixed, rather than the castoring, wheels,
the device does not greatly affect the
steering. Figure 2 shows the prototype
(without cosmetic covers) mounted on
a "Scandinavian Mobility" electric
wheelchair.
Figure 2. Manipulator mounted to
wheelchair.
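A rough energy check shows why a small motor suffices once constant-tension springs carry most of the arm's weight: the motor supplies only the residual force times the lift speed. The mass, spring force and speed below are invented for illustration, as the paper gives no such figures.

```python
G = 9.81  # gravitational acceleration, m/s^2

def residual_motor_power(arm_mass_kg, spring_force_n, lift_speed_m_s):
    """Power the lift motor must supply once the counterbalance
    springs take up most of the arm's static weight (friction and
    acceleration ignored)."""
    residual_force = arm_mass_kg * G - spring_force_n  # N
    return residual_force * lift_speed_m_s             # W

# Example with assumed values: a 5 kg arm counterbalanced by 45 N
# of spring force, raised at 0.1 m/s, needs roughly 0.4 W plus
# losses, comfortably within a 6 W motor's capability.
power = residual_motor_power(5.0, 45.0, 0.1)
```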
Upper arm
The basic design of the upper arm is
copied from the earlier trolley-mounted
manipulator. The main rotary joints
(identified as shoulder, elbow and wrist
yaw) all move in a horizontal plane.
Vertical movement comes from the
vertical actuator described above. At
the wrist there are roll and pitch
movements. The basic design
comprises an aluminium structure,
within which the motors are mounted,
covered by a vacuum-formed cosmetic
moulding.
The opportunity was taken to improve
the design, particularly in the area of
access for maintenance. The motors are
now mounted within modules, which
may be easily removed for
maintenance. The cosmetic covers are
also redesigned for easier removal and
improved aesthetics.
Gripper
The earlier trolley-mounted robot used
a prosthetic hand as its end effector.
This never proved totally effective as a
robot gripper, so a purpose-made
gripper has been designed specifically
for the current device. It has the
following features:
• Two parallel moving jaws;
• Slim profile to allow good visibility
of the item being gripped;
• Compliant elements in the drive
train to allow variable-force
gripping;
• Non-backdrivable gearing and
compliance to maintain grip force
when power is removed from the
drive motor.
Electronics
The electronics design is based around
an I2C serial link running through the
length of the manipulator, together
with 5 V (digital electronics) and 24 V
(motor power) supplies. A single-board
PC-compatible processor (GCAT from
DSP Design, London, UK) mounted at
the base of the manipulator sends
command signals to motor control
boards mounted within the
manipulator. On the control boards
(only 50 mm x 50 mm) the serial
signal is converted to a parallel signal
for the proprietary HCTL-1100 motor
control chips. Motor control uses
pulse-width modulation.
Figure 4. Electronics block diagram
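The command path described (single-board PC, shared serial link, per-joint motor control boards) might be exercised with something like the following; the frame layout, addresses and helper names are invented for illustration and do not reflect the actual firmware or the HCTL-1100 register interface.

```python
# Hypothetical addressing of per-joint motor control boards on the
# shared serial bus (addresses are illustrative, not the real ones).
JOINT_ADDR = {"shoulder": 0x10, "elbow": 0x11, "wrist_yaw": 0x12}

def encode_position_command(joint, counts):
    """Pack a 16-bit signed position setpoint (encoder counts) for
    one joint into a 4-byte frame: [address, command, high, low].
    A control board would unpack this and load the setpoint into
    its motor control chip."""
    if not -32768 <= counts <= 32767:
        raise ValueError("setpoint out of range")
    addr = JOINT_ADDR[joint]
    counts &= 0xFFFF  # two's-complement wrap to unsigned 16 bits
    return bytes([addr, 0x01, counts >> 8, counts & 0xFF])
```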
User Interface
There are two main approaches to user
interface design for an assistive robot.
• Task command: This works well in
the structured environment of a
workstation. It may be less
appropriate in the undefined
environment within which a
wheelchair-mounted robot will be
required to operate.
Figure 3. Gripper (without covers)
• Direct control: This allows the user
to control the manipulator in an
undefined environment. It does,
however, make a greater demand on
the user and may be time consuming
and tedious.
The main approach used for the
wheelchair-mounted robot is direct
control, although there are also
functions to allow the manipulator to
be moved easily to certain pre-set
orientations.
Users of electric wheelchairs are
generally able to use a two-degree-of-freedom
input, either a conventional
joystick or a head- or chin-operated
joystick. It was decided that this would
be the most appropriate input for a
wheelchair-mounted robot (although a
switch-operated system will also be
available as an option). A two-degree-of-freedom
joystick provides an
intuitive form of real-time control of a
manipulator. In the long term we
envisage the user being able to use the
same joystick to control both the
wheelchair and the manipulator.
Control of a six-degree-of-freedom
device with a two-degree-of-freedom
input requires mode switching. The
scheme used for the wheelchair-mounted
robot uses joystick
movements to navigate around a map
(Figure 5), displayed on a small LCD
screen, or to switch to an alternative
mode.
Figure 5. User interface display.
Conclusions
At the time of writing (Jan 99), the
system is at the stage of final assembly
and software debugging. Brief
evaluations are due to start in April 99.
A mobile base has been designed onto
which the manipulator can be
mounted; this can be wheeled up close
to a user’s wheelchair and will enable
evaluations to be carried out from the
user’s own wheelchair. Further
developments are planned, including
the facility to integrate the system with
a range of wheelchairs. This will
enable longer-term evaluations to take
place towards the end of the year.
Acknowledgements
The authors are grateful to the Southern
Trust for their generous support of this
work. A panel of 29 electric wheelchair
users has given vital user input to the
project. The authors also acknowledge
the contribution to the project of the
technical staff at the Institute,
particularly Martin Rouse (Mechanical
Workshop) and Simon Gale
(Electronics Laboratory).
References
1. Hammel J, Van der Loos HFM,
"Factors in the prescription & cost
effectiveness of robot systems for
high-level quadriplegics", Proc.
RESNA 1991, 14, 16-18, 1991.
2. Kwee HH, Duimel JJ, Smits JJ,
Tuinhof de Moed AA, van Woerden
JA, v.d. Kolk LW, Rosier JC, "The
MANUS wheelchair-borne manipulator
system: review and first results", Proc.
2nd Workshop on Medical &
Healthcare Robotics, Newcastle upon
Tyne, UK, 385-403, 1989.
3. Orpwood R, "Design methodology
for aids for the disabled", Journal of
Medical Engineering & Technology,
14, 1, 2-10, 1990.
4. Prior S, "An electric wheelchair
mounted robotic arm – A survey of
potential users", Journal of Medical
Engineering & Technology, 14, 4,
143-154, 1990.
5. Hillman M, Jepson J, "Evaluation of
a trolley mounted robot – A case
study", Proc. ICORR'97, 95-98, 1997.
6. Hagan K, Hagan S, Hillman M,
Jepson J, "Design of a wheelchair
mounted robot", Proc. ICORR'97,
27-30, 1997.
Author Address & contact
information
Dr Michael Hillman
Bath Institute of Medical Engineering
Wolfson Centre
Royal United Hospital
Bath BA1 3NG, UK
Tel (+44) 1225 824103
Fax (+44) 1225 824111
[email protected]
http://www.bath.ac.uk/~mpsmrh/
PREPROGRAMMED GESTURES FOR ROBOTIC MANIPULATORS: AN
ALTERNATIVE TO SPEED UP TASK EXECUTION USING MANUS
N. Didi (1), M. Mokhtari (1,2), A. Roby-Brami (1)
(1) INSERM-CREARE U483, Université Pierre & Marie Curie, Paris, France.
(2) Institut National des Télécommunications, Evry, France.
[email protected]
ABSTRACT
In the rehabilitation robotics context,
we are convinced that robotic assistive
devices may compensate for the
grasping impairments of severely
disabled persons. However, the use of
telemanipulated robotic arms requires
excellent dexterity and a cognitive
effort not often available among the
concerned user population.
Preprogrammed gestures and a control
method that offers shared control
between the human and the machine
may improve the execution of complex
tasks. In this paper we describe a new
Assistive Control System (ACS) for
the Manus robotic arm. This system
supports several input devices and
offers new features, such as a gesture
library and new control modes.
Results of evaluations of this ACS are
also presented. The aim of our
approach is to make the Manus robotic
arm easily controlled and accessible to
a larger population of handicapped
users.
1 INTRODUCTION
Manus, a six-degrees-of-freedom
(DOF) robotic arm mounted on a
wheelchair, is presently
commercialized by the Exact
Dynamics company in the
Netherlands. The French Muscular
Dystrophy Association (AFM) has
introduced fifteen Manus arms in
France to help disabled people become
acquainted with such technology. The
main advantage of the Manus is that it
can perform tasks in non-structured
environments, which correspond, in
general, to the real environments of
the end-users.
To use the Manus arm in daily living
with the current command
architecture, the user must perform
repetitive actions in order to complete
different tasks. Our approach is to
offer assistance to end-users in their
daily lives. We have developed a new
control system, called the Assistive
Control System (ACS), which relieves
the handicapped user from executing
the same sequence of commands for
common tasks.
The ACS we propose provides a
semi-autonomous controller for Manus
that lessens the number of mundane
operations (by preprogramming
commonly used gestures) while still
allowing the user full control of the
robot.
Robotic workstations have shown their
efficiency in providing fully automated
tasks in a structured environment.
However, evaluations conducted in
several French rehabilitation centers
with the Master-RAID workstation [3]
demonstrated that users feel dependent
on this type of restricted environment
and excluded from the command loop,
to the point that they feel they become
mere observers of the automated tasks.
Users would appreciate a robotic
system that combines human and
autonomous control, so that they feel
active during the execution of tasks.
2 THE GESTURE LIBRARY
In human physiology, any complete
natural gesture is described as
two-phased: an initial phase that
transports the limb quickly towards the
target location, and a second, longer
phase of controlled adjustment that
allows the limb to reach the target
accurately. These two phases are
defined respectively as a transport
component and a grasp component [2].
In our approach, we are interested in
automating the first phase; the second
remains under the user's control.
The gesture library contains a set of
generic global gestures that help
disabled people perform complex daily
tasks. These gestures represent a
portion of any particular task. Each
gesture (Gi) is characterized by an
initial operational variable of the robot
workspace (Oi), corresponding to the
initial robot arm configuration, and a
final operational variable (Of),
corresponding to the final robot arm
configuration. Each operational
variable is defined in Cartesian space
by the gripper position (x, y, z) and
orientation (yaw, pitch, roll). The
gestures generated by our system are
linked only to the final operational
variables: a path planner is able to
generate, from any initial arm
configuration, the appropriate
trajectory to reach the final
configuration. We have prerecorded
twelve final operational variables, as
described in [1], and allow the user to
record two others.
Figure 1: Representation of the two robot
configurations that characterize any gesture:
the initial variable Oi (xi, yi, zi, yawi, pitchi,
rolli), the final variable Of (xf, yf, zf, yawf,
pitchf, rollf), and the end-effector trajectory
between them.
3 ORGANIZATION OF THE
NEW MODES
In addition to the Cartesian Control
Mode (CCM) and the Joint Control
Mode (JCM) existing in the
commercialized version of Manus (the
first allows the user to control the arm
and gripper motion manually in
Cartesian space, whereas the second
allows direct and separate control of
the six arm joints), the ACS offers
three other modes: the Point-to-Point
Control Mode (PPCM), the Record
Mode (RM) and the Replay Control
Mode (RCM). Fig. 2 shows the
organization of the ACS modes.
The gestures of the library described
above are activated by the user in the
PPCM. In this mode, each button of the
keypad generates a gesture following
the keypad mapping shown in Fig. 3.
Figure 2: The ACS modes organization.
The Main Mode gives access to Fold-In,
Fold-Out, the Joint Mode, the Cartesian
Mode, and the new modes: the
Point-to-Point Mode, the Record Mode
(which writes to the storage unit) and the
Replay Mode (which reads from the
storage unit).
The 3x3 matrix of pre-set buttons
corresponds to nine pre-set
configurations of the robotic arm,
following a vertical grid in front of the
user. For example, when the user
wishes to reach a target in the
down-left position (left side of the
robot), he/she may push the button
“DL”, which will bring the robot
end-effector towards that position.
Figure 3: The keypad pre-set mapping in
the PPCM. The 3x3 matrix of pre-set
buttons covers the High/Middle/Down
rows and Left/Center/Right columns (HL,
HC, HR; ML, MC, MR; DL, DC, DR).
Additional buttons: US (back towards the
user), FL (floor), OD (open the door), P1
and P2 (user-recorded configurations),
CM (switch to the CCM) and MM (back
to the Main Mode).
The button "OD" will generate a gesture towards an arm configuration allowing the user to open a door or grasp an object from the top, the button "FL" will generate a gesture to grasp an object from the floor, the button "US" generates a gesture back towards the user, and the buttons "P1" and "P2" will generate gestures towards two user pre-recorded robot configurations. These configurations are recorded in the RM. The RCM, which is not accessible from the user input device, will allow, for example, evaluators to replay off-line a saved sequence of actions performed previously by the disabled patient.
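The PPCM button dispatch described above can be summarized as a small lookup. This sketch is illustrative only (the identifiers and returned strings are ours, not from the ACS implementation), assuming the 3x3 grid plus the special buttons listed in the text:

```python
# Illustrative dispatch of the PPCM keypad (names and strings are ours).
# The 3x3 matrix of pre-set buttons spans (High, Middle, Down) x
# (Left, Center, Right) on a vertical grid in front of the user.
PRESET_GRID = {
    row + col: (row, col)
    for row in ("H", "M", "D")
    for col in ("L", "C", "R")
}

SPECIAL_BUTTONS = {
    "US": "gesture back towards the user",
    "FL": "gesture to grasp an object from the floor",
    "OD": "gesture to open a door or grasp an object from the top",
    "P1": "gesture to user pre-recorded configuration 1",
    "P2": "gesture to user pre-recorded configuration 2",
    "CM": "switch to the Cartesian Control Mode",
    "MM": "back to the Main Mode",
}

def handle_button(code: str) -> str:
    # Pre-set buttons trigger a gesture to one of the nine grid positions;
    # the remaining buttons trigger special gestures or mode changes.
    if code in PRESET_GRID:
        return "gesture to pre-set position " + code
    return SPECIAL_BUTTONS[code]
```

A single button press thus replaces a whole sequence of manual Cartesian commands.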
4 EVALUATION
A pilot evaluation was conducted with
six control subjects [6]. It was
organized in two sessions over two
days. Subjects were asked to use
Manus and execute 8 training tasks and
one final task. The first 3 tasks were
easy and consisted of moving a cubic
object using only the CCM from one
position to another with three different
grasping strategies (from the front,
side, and top). Tasks 4, 5 and 6 were
the same but the subjects were now
asked to use both the PPCM and the
CCM. The 7th task involved pouring
the content of a cup situated on a shelf
and the 8th task asked the subjects to
retrieve the cubic object from a shelf to
read what was written on the back of
this object. The final task consisted of
a compilation of the strategies used in
the 8 previous training tasks. It
consisted of taking a bottle of water
from a shelf, pouring the water from
the bottle into a glass on a table,
putting back the bottle on the shelf,
bringing the glass close to the mouth
and drinking the water. This final task was a little more complex, involving large-amplitude arm displacements, and the subjects were additionally asked to use the two cited control modes. A quantitative
analysis has allowed us to make the
following observations:
1- We observed a decrease in the execution time of the three simple tasks executed with the CCM only, both between tasks and between sessions (fig.4). This is probably due to quick learning of the CCM.
2- We noticed that the use of the PPCM in tasks 4, 5 and 6 increased the duration of task execution, particularly when the subjects discovered this mode for the first time. We emphasize that the PPCM may have seemed much more complex than the CCM, and the subjects possibly needed more time to master this new mode.
3- The total latency time (the sum of the latency times between successive commands) varied linearly with the total task time and represented more than 50% of each task duration (fig.5); this is the time that the subjects spent looking for suitable strategies to reach the target or for the correct button to execute the appropriate command.
4- We also noticed that, when we separated the mode-change commands from the keypad mapping so as to have one keypad exclusively for the robot commands and a second keypad for mode changes, the command mapping in each mode seemed understandable to the subjects, and they used the PPCM to a much greater extent. This suggests that this separation brings an easier control of Manus.
[Figure 4: Total execution time (s, 0-200) of the first 6 tasks during the two sessions (S1 and S2), mean of the 6 control subjects; tasks 1-3 used the CCM only, tasks 4-6 the CCM + PPCM.]

[Figure 5: The regression curve between the task duration and the total latency time, rest time (s) vs. total time (s): tr = -21.895 + 0.889 * tT; R^2 = 0.938.]
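Using the regression coefficients reported in Figure 5, the latency predicted for a given total task time can be computed directly. The helpers below are illustrative (function names are ours), with the coefficients taken from the fitted regression:

```python
def predicted_latency(total_task_time_s: float) -> float:
    # Regression reported in Figure 5: tr = -21.895 + 0.889 * tT (R^2 = 0.938),
    # where tT is the total task time and tr the total latency (rest) time.
    return -21.895 + 0.889 * total_task_time_s

def latency_fraction(total_task_time_s: float) -> float:
    # Share of the task that the subject spends idle between two commands.
    return predicted_latency(total_task_time_s) / total_task_time_s
```

For task durations in the 100-350 s range observed in the trials, this fraction exceeds one half, consistent with observation 3 above.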
A second evaluation was made with the participation of four patients (two quadriplegic C6-C7 and two having muscular dystrophy) located at the hospital Raymond Poincaré. Our evaluation has shown that, in spite of their handicap, their performance was not quantitatively different from that of the control subjects. For example, the final task was executed with an average time of 485.6±61.5 sec in the first session and 449.4±36.6 sec in the second session, compared to 476.9±31.2 sec and 425.75±26.4 sec obtained with the control subjects. The latency time was between 50 and 70% of the task duration. However, we failed to notice learning comparable to the one observed with the control subjects, and noted that the PPCM was used less in the final task. As mentioned earlier, the assimilation of this mode is not as easy as that of the CCM.

The contribution of the PPCM appeared after another training session, where two patients of the group cited above seemed familiar with the two main modes: the CCM and the PPCM. They were asked to collect, using Manus, five different objects located in different places and put them all in a box. This evaluation was conducted in two sessions over two days. In the first session the patients were asked first to perform the task with the CCM only and then to re-execute it using the PPCM. In the second session, they were asked to start with the PPCM and end with the CCM only.

[Figure 6: The contribution of the ACS in terms of number of commands (0-150; CCM, PPCM and total Task, using only the CCM vs. using the CCM + PPCM).]

The results (see Fig.6 and Fig.7) show the contribution of the PPCM in the execution of the task. It has allowed
the patients to perform the task with, on average, 13 fewer commands and to save approximately 50 seconds on the task time.

[Figure 7: The contribution of the ACS in terms of duration (s, 0-400; Latency, CCM, PPCM and total Task time, using only the CCM vs. using the CCM + PPCM).]

The evaluations of the first ACS version allowed us to bring some improvements to the system. The first trials with disabled patients showed their interest in the ACS. The results obtained, being preliminary, do not yet allow us to state what real contribution the new ACS modes will bring to the Manus end-users. More evaluations in real-life conditions, with the help of disabled people, are necessary to test all the new functions offered by the proposed new system.

5 CONCLUSION
This paper has described a design approach for an assistive control system for the robotic arm Manus. The ACS is designed to meet the needs of disabled users in terms of manipulation of the assistive robot Manus. Its development is based on preliminary results obtained from quantitative and qualitative evaluations with the participation of disabled people [3,5]. This system is designed, on the one hand, to reduce the manipulation problems that disabled users meet during complex tasks and, on the other hand, to solve the problems linked to the user interface. With its new functions, we plan to reduce the task time and the number of commands that are performed. For example, one command in the PPCM will be sufficient to achieve the same result that would require 10 commands in the CCM.

The development produced during this project will lead to a new command architecture for Manus, which will be integrated through the European Commanus project started in November 1998. The overall goal is to propose a new generation of Manus manipulators with the end-user needs taken into account.
ACKNOWLEDGMENTS
The authors would like to thank J.C.
Cunin and C. Rose from the French
muscular dystrophy association (AFM)
and the Institut Garches. N. Didi holds
a grant from AFM and Institut de
Garches.
REFERENCES
[1] N. Didi, B. Grandjean, M. Mokhtari, A. Roby-Brami, "An Assistive Control System to the manipulation of the Manus arm robot", RESNA'98, p. 289-291, Minneapolis, Minnesota, June 1998.
[2] M. Jeannerod, "Intersegmental coordination during reaching at natural visual objects". In J. Long & A. Baddeley (Eds.), Attention and Performance IX, p. 153-169, Hillsdale, NJ: Lawrence Erlbaum Associates.
[3] G. Le Claire, Résultats préliminaires de l'évaluation réadaptative de RAID MASTER II et MANUS II (Preliminary results of the rehabilitation evaluation of RAID MASTER II and MANUS II), APPROCHE, France, April 1997.
[4] M. Mokhtari, A. Roby-Brami, I. Laffont, "A method for quantitative user evaluation in case of assistive robot manipulation", RESNA'97, p. 420-422, Pittsburgh, June 1997.
[5] M. Mokhtari, N. Didi, A. Roby-Brami, "Quantitative Evaluation of Human-Machine Interaction when Using an Arm Robot", RESNA'98, p. 289-291, Minneapolis, Minnesota, June 1998.
[6] E. Plessis-Delorm, N. Didi, M. Mokhtari, B. Grandjean, A. Roby-Brami, "An evaluation of two types of user control interface for the Manus arm robot", Advances in Perception-Action Coupling, Fifth European Workshop on Ecological Psychology, p. 156-161, July 1998, Pont-à-Mousson, France.
EVALUATION OF THE HEPHAESTUS SMART WHEELCHAIR SYSTEM
Richard Simpson (1), Daniel Poirot (2), Mary Francis Baxter (3)
(1) TRACLabs, Houston, TX; (2) WindRiver Systems, Houston, TX; (3) Texas Women's University, Houston, TX
ABSTRACT
Hephaestus, the Greek god of fire, craftsmen and smiths, was the only Olympian with a disability. Hephaestus was injured when his father, Zeus, flung him off Mount Olympus for siding against Zeus in a dispute with Hephaestus' mother, Hera.
To compensate for his disability
Hephaestus built two robots, one silver
and one gold, to transport him. The
Hephaestus Smart Wheelchair System
is envisioned as a series of components that clinicians and wheelchair manufacturers will be able to attach to standard power wheelchairs to convert them into "Smart Wheelchairs." This
paper describes a prototype of the
system and presents the results from
preliminary user trials involving both
able-bodied and disabled subjects.
BACKGROUND
Independent mobility is critical to
individuals of any age. While the
needs of many individuals with
disabilities can be satisfied with power
wheelchairs, there exists a significant
segment of the disabled community
who find it difficult or impossible to
operate a standard power wheelchair.
This population includes, but is not
limited to, individuals with low vision,
visual field neglect, spasticity, tremors,
or cognitive deficits.
To accommodate this population, several researchers have used technologies originally developed for mobile robots to create "Smart Wheelchairs."
Smart wheelchairs
typically consist of a standard power
wheelchair base to which a computer
and a collection of sensors have been
added. Smart wheelchairs have been
designed which provide navigation
assistance to the user in a number of
different ways, such as assuring
collision-free travel, aiding the
performance of specific tasks (e.g.,
Figure 1. Overview of Hephaestus Smart Wheelchair System
Table 1. Questions (and associated extreme answers) given to each subject

#   Question                                                Leftmost Extreme        Rightmost Extreme
1   How difficult was the task when the wheelchair did      Not difficult at all    Very difficult
    not provide navigation assistance?
2   How difficult was the task when the wheelchair did      Not difficult at all    Very difficult
    provide navigation assistance?
3   How noticeable was the navigation assistance?           Not noticeable at all   Very noticeable
4   How often did you disagree with the assistance          Never                   All the time
    provided by the wheelchair?
5   How helpful was the navigation assistance provided      Not helpful at all      Very helpful
    by the wheelchair?
6   What effect did the presence of navigation              Positive effect         Negative effect
    assistance have on your performance?
7   Which condition did you prefer?                         Navigation assistance   No navigation assistance

passing through doorways), and autonomously transporting the user between locations.

We are developing a system for converting standard power wheelchairs into smart wheelchairs, called the Hephaestus Smart Wheelchair System. Wheelchairs equipped with the Hephaestus System will be able to assist users in two distinct ways: as a mobility aid, the smart wheelchair will present users with an immediate opportunity for independent mobility, and as a training tool, the smart wheelchair will allow users to safely develop and refine the skills necessary to operate a power wheelchair without the need for technological assistance.

Thus far, a working prototype of the system has been developed using an Everest and Jennings (*) Lancer2000 power wheelchair as a testbed. The prototype requires no modifications to the wheelchair's electronics or motors (making it easy to install the system or transfer the system between wheelchairs) and bases its navigation assistance behavior on the navigation assistance behavior developed for the NavChair Assistive Wheelchair Navigation System [1].

(*) Everest and Jennings; 3601 Rider Trail South; Earth City MO 63045
IMPLEMENTATION
Figure 1 gives an overview of the Hephaestus system. As shown in the figure, the Hephaestus system
interrupts the connection between the
joystick and the controls interface. The
user’s joystick input is intercepted by
the computer, modified by the
navigation assistance software, and
then sent to the control interface in a
manner transparent to both the user and
the wheelchair.
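The interception scheme can be sketched as a pure pass-through filter on each joystick sample. This is an illustrative reconstruction only (the function name, the proportional attenuation rule and the parameters are our assumptions, not the actual Hephaestus algorithm):

```python
def filter_joystick(vx: float, vy: float, nearest_obstacle_m: float,
                    slow_radius_m: float = 1.0) -> tuple:
    # Illustrative sketch of the pass-through: the user's joystick sample is
    # intercepted, attenuated when an obstacle is close, and forwarded to the
    # controls interface unchanged otherwise. The proportional attenuation
    # rule is an assumption, not the actual Hephaestus algorithm.
    if nearest_obstacle_m >= slow_radius_m:
        return (vx, vy)                # no obstacle in range: transparent pass-through
    scale = max(nearest_obstacle_m, 0.0) / slow_radius_m
    return (vx * scale, vy * scale)    # slow down near the obstacle
```

The key architectural point is that both endpoints (joystick and controls interface) see ordinary joystick signals, which is why no wheelchair electronics need to be modified.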
The prototype accepts input from a
standard analog joystick that, in unmodified power wheelchairs,
connects directly to the E&J Specialty
Controls Interface (EJSCI), which
provides an interface between the
wheelchair and a set of potential input
and display units. On the Hephaestus
prototype, the cord connecting the
wheelchair joystick and the EJSCI has
been cut in two, to allow the
Hephaestus system to intercept and
modify the user’s joystick inputs. The
only other physical modifications made
to the wheelchair were the addition of a
lap tray to provide a surface to mount
sonar sensors and an electrical
connection made to the wheelchair’s
Figure 2. Experimental Tasks for User Trials. Each subject performed each task
eight times -- four times with navigation assistance, four times without navigation
assistance.
batteries to provide power for the sonar sensors.

The Hephaestus system currently makes use of sixteen sonar sensors (configured to detect obstacles a maximum distance of one meter from the wheelchair and a minimum distance of 8 centimeters from the wheelchair). Thirteen sonar sensors are mounted on the lap tray facing forward or to the side of the wheelchair, and three sonar sensors are on the battery box facing backwards. Currently, the prototype has two "blind spots," one on each side of the chair near the middle of the wheelchair. These blind spots make it possible to collide with an obstacle, despite the navigation assistance provided by the smart wheelchair system, by pulling up next to an obstacle and pushing the joystick directly to the side towards the obstacle.

Bump sensors represent the "sensors of last resort" on the smart wheelchair. When a bump sensor is activated it brings the chair to an immediate halt. Bump sensing is implemented using simple contact switches placed on the leading edges of the wheelchair. In the prototype system, up to 24 switches can be mounted on any available surface on the wheelchair.

METHODS
An evaluation of the prototype was performed using both able-bodied and disabled participants. All subjects were asked to perform the same three distinct tasks under two conditions: navigation assistance active (condition NAA) and navigation assistance inactive (condition NAI). When navigation assistance was not active, the wheelchair behaved exactly like a normal power wheelchair. Performance was compared between conditions based on (1) quantitative measures of the chair's behavior and (2) subjective responses to questionnaires completed by each subject upon completion of all trials.

The configuration of the wheelchair was fixed for all four able-bodied subjects. Following the trials involving able-bodied subjects, modifications were made in response to feedback from the subjects during their trials.
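The sensor policy described above (sonar readings valid between 8 cm and 1 m, bump switches acting as sensors of last resort) can be sketched as follows. This is an assumed illustration; the proportional slow-down rule in particular is ours, not the prototype's code:

```python
SONAR_MIN_M, SONAR_MAX_M = 0.08, 1.0   # detection window given in the text

def safe_command(vx: float, vy: float, sonar_ranges_m, bump_switches) -> tuple:
    # Any active bump switch (the "sensor of last resort") halts the chair
    # immediately. Sonar readings outside the 8 cm to 1 m window count as
    # "no obstacle detected". The proportional slow-down near the closest
    # detected obstacle is an assumed illustration.
    if any(bump_switches):
        return (0.0, 0.0)
    detected = [r for r in sonar_ranges_m if SONAR_MIN_M <= r <= SONAR_MAX_M]
    if detected:
        scale = min(detected) / SONAR_MAX_M
        return (vx * scale, vy * scale)
    return (vx, vy)
```

The blind spots mentioned in the text correspond to directions covered by neither the sixteen sonars nor the bump switches, so a sideways push next to an obstacle bypasses both checks.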
Table 2. Experimental Measures of Performance

Parameter    Explanation                                                          Units
Time         Time required to complete task                                       Seconds
Collisions   Total number of collisions that occurred in a trial                  NA
Success      Did subject successfully complete the task within the time limit     NA
             (two minutes)

The configuration of the wheelchair was not kept constant for the four disabled subjects. Each subject required different seating and positioning interventions, different joystick placements, and different settings of the wheelchair's velocity and acceleration parameters. The changes required by each disabled participant underscored the diversity of the target user population.

Several results were expected based on investigators' previous experience with the NavChair Assistive Wheelchair Navigation System [2]. Able-bodied subjects were expected to take longer to complete the experimental tasks with navigation assistance than without and to prefer to operate the chair without navigation assistance. The variety of abilities within the small sample of disabled subjects made it impossible to predict the impact of the system on their performance. What was expected was a highly subject-dependent effect of navigation assistance for disabled subjects.

Subjects
Eight subjects (four able-bodied, four disabled) participated in the user trials. All able-bodied subjects had no sensory, motor, or cognitive disabilities that interfered with their ability to operate a power wheelchair. The four disabled subjects were drawn from the local population. Three of the subjects were diagnosed with cerebral palsy, the fourth was diagnosed with post-polio syndrome. None of the able-bodied subjects had previous experience with a power wheelchair. The four disabled subjects had extremely diverse previous experience with power wheelchairs, ranging from daily use to limited previous experience.
Protocol
Before the experiment, each subject received instructions and training to familiarize them with the purpose of the experiment and the operation of the smart wheelchair. Subjects began by driving the wheelchair without navigation assistance active to familiarize themselves with the wheelchair. Once subjects reported that they understood how the wheelchair operated without navigation assistance, navigation assistance was activated and subjects were again instructed to drive the chair around the testing area until they were comfortable operating the wheelchair. During training, obstacles were placed in the testing area but they were not in any of the configurations used during trials.

After training, subjects completed the three navigation tasks shown in Figure 2. Each subject completed each task eight times (corresponding to eight separate trials). The order of experimental condition (navigation assistance active, navigation assistance inactive) was counterbalanced across subjects, but all four trials for each condition were performed in succession. The order of tasks was the same for all subjects.
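The trial ordering in the protocol can be generated programmatically. A sketch follows; note that the even/odd alternation used for counterbalancing is our assumption, as the paper only states that condition order was counterbalanced across subjects:

```python
def trial_schedule(subject_index: int, tasks=(1, 2, 3),
                   trials_per_condition: int = 4):
    # Illustrative sketch of the protocol: the condition order (NAA first or
    # NAI first) is counterbalanced across subjects, all four trials of a
    # condition are run in succession, and the task order is the same for
    # every subject. The even/odd alternation rule is an assumption.
    order = ("NAA", "NAI") if subject_index % 2 == 0 else ("NAI", "NAA")
    return [(task, cond)
            for task in tasks
            for cond in order
            for _ in range(trials_per_condition)]
```

Each subject thus performs 3 tasks x 2 conditions x 4 trials = 24 trials in total.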
Before each task, subjects were given
instructions on how to complete the
task, including the path of travel they
should follow and their target
destination. Before each trial, the
Table 3. Results from user trials, averaged within each subject. NAA = Navigation Assistance Active, NAI = Navigation Assistance Inactive

Subj  Task  Condition  Time   Collisions  Success
1     1     NAA        36.59  0.00        100
1     1     NAI        18.86  0.00        100
1     2     NAA        36.59  0.00        100
1     2     NAI        14.97  0.00        100
1     3     NAA        18.45  0.00        100
1     3     NAI        17.82  0.00        100
2     1     NAA        18.06  0.00        100
2     1     NAI        14.50  0.00        100
2     2     NAA        48.08  0.00        100
2     2     NAI        12.91  0.00        100
2     3     NAA        15.85  0.00        100
2     3     NAI        13.48  0.00        100
3     1     NAA        21.09  0.00        100
3     1     NAI        15.33  0.00        100
3     2     NAA        36.67  0.00        100
3     2     NAI        12.73  0.00        100
3     3     NAA        14.04  0.00        100
3     3     NAI        13.47  0.00        100
4     1     NAA        29.25  0.00        100
4     1     NAI        14.02  0.00        100
4     2     NAA        28.11  0.00        100
4     2     NAI        13.25  0.00        100
4     3     NAA        14.10  0.00        100
4     3     NAI        13.43  0.00        100
5     1     NAA        18.27  0.00        100
5     1     NAI        18.66  0.00        100
5     2     NAA        21.39  0.00        100
5     2     NAI        14.10  0.00        100
5     3     NAA        16.70  0.00        100
5     3     NAI        15.90  0.00        100
6     1     NAA        71.67  0.25        50
6     1     NAI        51.29  0.00        100
6     2     NAA        45.15  0.00        100
6     2     NAI        41.79  0.00        100
6     3     NAA        37.75  0.25        100
6     3     NAI        55.82  1.50        100
7     1     NAA        52.92  0.00        100
7     1     NAI        21.69  0.00        100
7     2     NAA        43.13  0.00        100
7     2     NAI        15.54  0.00        100
7     3     NAA        19.53  0.00        100
7     3     NAI        11.13  0.00        100
8     1     NAA        13.13  0.00        100
8     1     NAI        12.37  0.00        100
8     2     NAA        12.37  0.00        100
8     2     NAI        10.05  0.00        100
8     3     NAA        13.66  0.00        100
8     3     NAI        11.06  0.00        100
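To illustrate the observation that able-bodied subjects were consistently faster without navigation assistance, the Table 3 times for subjects 1-4 can be averaged (data transcribed from the table; the variable names are ours):

```python
# Mean trial times (s) transcribed from Table 3 for the able-bodied subjects
# (subjects 1-4), ordered by subject and task.
ABLE_BODIED_TIMES = {
    "NAA": [36.59, 36.59, 18.45, 18.06, 48.08, 15.85,
            21.09, 36.67, 14.04, 29.25, 28.11, 14.10],
    "NAI": [18.86, 14.97, 17.82, 14.50, 12.91, 13.48,
            15.33, 12.73, 13.47, 14.02, 13.25, 13.43],
}

def mean(values):
    return sum(values) / len(values)

# Navigation assistance inactive was faster on average for this group.
faster_without = mean(ABLE_BODIED_TIMES["NAI"]) < mean(ABLE_BODIED_TIMES["NAA"])
```

The group means work out to roughly 26.4 s with assistance active versus 14.6 s with it inactive.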
wheelchair was positioned in the same starting location and subjects navigated to the same ending location. Subjects were given two minutes to complete each trial.

After all trials were completed, subjects were asked to fill out a questionnaire on their subjective impression of each condition. All responses were given by placing marks on a line four inches long. At each end of the line for a question were vertical markers with phrases indicating extreme answers to the question being asked. Subjects' answers were converted to numerical scores between 1 and 5 by measuring the distance of the subject's mark from the leftmost extreme of the scale (which corresponded to an answer of 1.0). A mark on the rightmost extreme corresponded to an answer of 5.0 and a neutral answer (placed exactly between the two extremes) corresponded to an answer of 3.0. Table 1 lists each question and associated extreme.

The performance measures used in this experiment are shown in Table 2. Data for all measures was compared between subjects using a two-factor (navigation assistance condition, trial) repeated-measures ANOVA for each experimental measure. Statistical significance for all comparisons was defined as p < .05.

RESULTS
Table 3 shows the results for all subjects. As can be seen from the table, able-bodied subjects were consistently faster without navigation assistance active. The difference in
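The scoring rule for the questionnaire marks is a simple linear mapping from position on the 4-inch line to the 1-5 scale; as a sketch (function name ours):

```python
LINE_LENGTH_IN = 4.0   # length of the response line, in inches

def score_from_mark(distance_from_left_in: float) -> float:
    # Linear mapping described in the text: leftmost extreme -> 1.0,
    # midpoint -> 3.0, rightmost extreme -> 5.0.
    return 1.0 + 4.0 * (distance_from_left_in / LINE_LENGTH_IN)
```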
Table 4. Averages of Responses to the Questionnaire. 1.0 was the leftmost extreme, 3.0 was the neutral answer, 5.0 was the rightmost extreme.

            Able-Bodied Subjects     Disabled Subjects        All Subjects
Question #  Avg   95% conf. int.     Avg   95% conf. int.     Avg   95% conf. int.
1           1.97  [1.36, 2.57]       1.54  [1.17, 1.91]       1.75  [1.39, 2.12]
2           2.62  [1.43, 3.80]       2.77  [1.47, 4.08]       2.70  [1.88, 3.51]
3           3.74  [2.78, 4.70]       3.97  [2.85, 5.00]       3.86  [3.17, 4.54]
4           1.53  [1.13, 1.94]       1.95  [1.15, 2.76]       1.74  [1.30, 2.19]
5           3.29  [2.11, 4.47]       3.73  [2.22, 5.00]       3.51  [2.61, 4.42]
6           3.35  [2.93, 3.78]       1.87  [0.98, 2.75]       2.61  [1.90, 3.32]
7           4.25  [3.59, 4.91]       2.13  [1.20, 3.05]       3.19  [2.24, 4.13]
time between conditions for able-bodied subjects was significant for Task 2, but was not significant for Tasks 1 and 3. For able-bodied subjects, the effect of subject was statistically significant for Tasks 1 and 3 but was not significant for Task 2.

Table 3 also shows the variation in the performance of the four subjects with disabilities. For the disabled subject group, the effect of subject was significant for all three tasks. There was not a significant difference for any other measure for this group on any of the tasks. It should be noted that one subject (Subject 6) did collide with two obstacles.

Table 4 shows the average responses to the questionnaire. As expected, the able-bodied subjects preferred the navigation assistance inactive (NAI) condition (questions 6 and 7). The disabled subjects preferred the navigation assistance active (NAA) condition, despite the fact that it typically did not lead to immediate improvements in performance. When questioned further, subjects indicated that they liked the sense of security that the system provided and expected to achieve better performance given more time to learn to operate the system.

CONCLUSION
The results of the user trials conformed to our expectations. Because able-bodied subjects were able to complete the tasks without any assistance from the Hephaestus system, its attempts to modify their input were viewed as intrusive rather than helpful. The Hephaestus system reduces the wheelchair's speed in the presence of obstacles, which caused most subjects to take longer to complete the experimental tasks. This was a source of annoyance for able-bodied subjects but not for disabled subjects, who preferred the added security that obstacle avoidance provided.

Many wheelchair navigation accidents are not caused by a lack of skill, but rather by a lapse in concentration and an inability to correct in a timely manner. These are the types of accidents that the smart wheelchair is most effective at correcting but are most difficult to reproduce in laboratory trials, when subjects are likely to be devoting their full attention to the navigation task. This is the primary reason why subjects with disabilities were willing to accept additional time to complete a task in exchange for the increased safety provided by the Hephaestus System's constant vigilance.

The trials involving subjects with disabilities exposed two flaws in our experimental design. First, the three separate tasks were each too short and simple to draw out differences between operating the wheelchair with and
without navigation assistance. Second,
subjects with disabilities did not
receive enough training prior to trials.
This was particularly important for the
subjects with poor motor control
(subjects 6 and 7), both of whom
lacked experience operating a power
wheelchair. Both subjects continued to
improve throughout the course of the
experiment (both subjects took
significantly less time to complete
Task 3 on average than Task 1) which
indicates that neither subject reached a
plateau during training.
Future experimental evaluations are
planned which will incorporate the
lessons learned in these preliminary
user trials. More subjects will be
involved, and each subject (particularly those with limited wheelchair experience) will receive extensive
training prior to actual trials. The
number of trials will be increased and
will be spread out over several
sessions, to allow subjects to receive
significant experience with the
Hephaestus System. The experimental
tasks will also be altered to be more
complex and realistic. Instead of three
separate tasks, subjects will be asked to
complete one complex navigation task.
The primary shortcomings in the
prototype that were identified during
the user trials were (1) the delay
between input and response caused by
the navigation assistance algorithm,
particularly when obstacles were
located near the wheelchair, and (2) difficulty passing between narrowly-spaced obstacles. This feedback was
used to modify the parameters of the
navigation assistance algorithm (but
not the algorithm itself) to increase the
system’s responsiveness and to reduce
the minimum gap the system can pass
through to 76.2 cm (30 in).
ACKNOWLEDGMENTS
This research was funded by a Phase I
SBIR grant from the National Center
for Medical Rehabilitation Research of
the National Institutes of Health. The
Lancer2000 wheelchair was donated to
TRACLabs by Everest & Jennings.
REFERENCES
[1] Levine, S., Koren, Y., &
Borenstein, J. (1990) NavChair
Control System for Automatic
Assistive Wheelchair Navigation.
In the Proceedings of the 13th Annual RESNA International Conference. Washington, D.C.: RESNA, 193-194.
[2] Simpson, R. (1997) Improved Automatic Adaptation Through the Combination of Multiple Information Sources. PhD Thesis, Univ. of Michigan.
AUTHOR ADDRESS
Richard Simpson
TRACLabs
1012 Hercules
Houston, TX 77058
[email protected]
POCUS PROJECT: ADAPTING THE CONTROL OF THE MANUS
MANIPULATOR FOR PERSONS WITH CEREBRAL PALSY.
Hok Kwee, Ph.D. and Jacques Quaedackers, Rehab.Eng.,
iRv Institute for Rehabilitation Research, NL-6430 AD Hoensbroek,
Esther van de Bool, O.T., Lizette Theeuwen, O.T., and Lucianne Speth, M.D.,
Rehabilitation Centre SRL-Franciscusoord, NL-6301 KA Valkenburg.
The Netherlands.
Abstract
Under the POCUS Project, interactive
studies are under way to adapt the control of the MANUS Manipulator for children and young adults with cerebral
palsy. Various control approaches are
implemented and tested with 6 test persons, ranging from 7 to 29 years, in an
integrated clinical and special education
environment. With the ADAPTICOM configuring method, initial control configurations were designed posing minimal
demands on coordinated control input
from the user. They only use 2 or 3
switches and timed responses, to control
all gripper movements in space in a
sequential way. For each user the controls and control procedures are then
individually adapted, ranging from large
push buttons on the lap board, a keypad,
a joystick, head-controlled switches, or
an individually-moulded hand-held grip
with 3 integrated push buttons. Cognitive aspects are of major importance,
and much effort is invested in guidance
and training as an integral part of the
study. In two cases, a PC labyrinth game
with adapted interface facilitated initial
training of basic concepts of movement
control and mode switching. Experimental results halfway through the project are
quite promising and two test persons
have applied for provision of a personal
MANUS manipulator. User spin-offs in
related domains like wheelchair control
and communication have also been obtained.
Introduction
Case studies with adapted interfacing
under the French Spartacus Project with
a stand-alone workstation manipulator
in a clinical setting have shown the
potential use of a manipulator to enhance independence of persons with
functional tetraplegias of different origins [1]. One case study concerned a
10-year old spastic-athetoid, non-communicating boy, who succeeded surprisingly well in using the system for all
kinds of tasks once the appropriate interface and control procedures had been
found. In this case, cognitive aspects did
not appear to be a limiting factor, in
spite of the fact that this boy had never
been able to perform any "manual"
tasks. The elements which finally allowed him to gain control were:
1. The use of controls which could be
released, allowing him to use them
when he could control his movements,
while avoiding inadvertent inputs during
involuntary movements. In this case,
they consisted of a potentiometric roller
under his chin for proportional position
control of gripper movements and a
flexible-bar type switch controlled with
gross arm movements for mode selection.
2. A scanning control procedure to successively give him access to only one degree of freedom ("DOF") at a time, thereby selecting the direction of the gripper movement to be made, and to control it back and forth, even with badly coordinated movements.
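The scanning procedure (one DOF at a time, selected by a mode switch and driven by a single proportional control) can be sketched as a small state machine. The DOF list and the class are illustrative assumptions, not the Spartacus implementation:

```python
DOFS = ["x", "y", "z", "yaw", "pitch", "roll", "grip"]   # assumed DOF list

class ScanningController:
    # Sketch of the scanning procedure: a gross-movement mode switch steps
    # through the degrees of freedom one at a time, and a single releasable
    # proportional control then drives only the selected DOF.
    def __init__(self):
        self.index = 0

    def mode_switch(self):
        # Advance to the next degree of freedom (wrapping around).
        self.index = (self.index + 1) % len(DOFS)

    def move(self, amount: float):
        # Drive the currently selected DOF back and forth.
        return (DOFS[self.index], amount)
```

Because only one DOF is active at any moment, coordinated multi-axis input is never required of the user.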
Since this boy had never been able to
physically manipulate any objects, there
was initially some doubt whether he
would have the cognitive abilities to do
so through a remotely controlled manipulator. In this case, this proved to be
no problem at all and he was amongst
the very best users of the system. Since
the Spartacus system was never commercialised, he never obtained a system
for his personal use, and no other case
studies with persons with cp have been
performed.
The MANUS wheelchair-mounted manipulator evolved from the experience
obtained in the Spartacus Project and
did result in a commercial product, supplied to some 40 persons in the Netherlands [2]. Amongst them, only one has
cp, but he is controlling it very much
like most of the other users with neuromuscular disorders, like muscular dystrophy, through a finger-controlled
16-key keypad and the standard procedures. The only specific adaptation con-
sists of a key guard on the keypad to
facilitate selective pushing of different
keys [3]. As such, he is not really comparable with the previous case as far as
residual motor function is concerned.
With the POCUS Project, researchers
from iRv, medical and paramedical staff
of SRL Franciscusoord rehabilitation
centre for children, and its school for
special education are developing and
testing further adaptations of the control
of MANUS to persons with cp, including
cases where mild cognitive impairments
may be a complicating factor.
Methods.
In these studies, 6 test persons, ranging
in age from 7 to 29 years, participate on
a voluntary basis, and care has been
taken to limit the burden imposed on
them. Appropriate seating and correct
posture are essential for them to diminish spasticity and improve their ability
to control external devices. Therefore,
they remain seated in their own wheelchair, with the manipulator mounted on
a stand-alone support next to it (fig. 1).
Although this meant sacrificing the two
DOFs of wheelchair mobility, essential
for real-life intervention in the environment, this arrangement is quite satisfactory for the supervised experiments
of this project.
Fig.1. Drive-in experimental environment with a test person, seated in his
own wheelchair next to a stand-alone
MANUS manipulator, and an O.T.
teaching its use.
The experiments are conducted as case
studies in the Occupational Therapy
Department by an O.T. and one or two
rehabilitation engineers. A short cycle of
"interactive development" (implementing control environments, user
training, testing and observation of effectiveness, analysis of problems encountered, and re-design of the environment) is used with the different subjects. These items are not strictly separated, and sessions are conducted in a
pragmatic, game-like manner to keep the
test persons motivated, essential in particular for the children. Besides the
motor problems associated with spasticity, complicating factors to be dealt with
consist of limited attention span, cognitive problems, lack of familiarity with
mechanical interventions, communication problems, slow learning, and interaction with educational and rehabilitation programmes. Therefore, collaboration between all medical, paramedical
and teaching staff involved with the test
persons is pursued, and no attempt is
made at this stage to collect objectively
quantifiable data. Essential stages of the
experiments are video-taped for off-line
analysis, documentation, and presentation.
Control environment
Building on the experience gained with
both the Spartacus cp case study and the
keypad type control used under MANUS
[4,5], an elementary control environment has been implemented to start
with. It uses only 3 push buttons: 2 to
control one DOF at a time in opposite
directions and a third one to scan
through different modes, successively
giving access to different DOFs. Controls
and control procedures have then gradually been adapted and elaborated, guided
by the performance and the problems
encountered by the different test persons. The ADAPTICOM configuration
method (previously "ADAPTICOL") was
used for "rapid prototyping" of control
configurations [4,5,6].
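As a rough sketch, the elementary 3-button arrangement described above can be modelled as a tiny state machine. All names here (MODES, ThreeButtonController) are illustrative assumptions, not taken from the actual MANUS or ADAPTICOM software:

```python
# Sketch of the elementary 3-button control environment: two buttons
# drive the currently selected DOF in opposite directions, a third scans
# through the modes. Names and structure are illustrative only.

MODES = ["X", "Y", "Z", "yaw", "pitch", "roll", "gripper", "swing"]

class ThreeButtonController:
    def __init__(self):
        self.mode = 0                       # index into MODES

    def press_mode(self):
        """Scan key: advance to the next mode, wrapping around."""
        self.mode = (self.mode + 1) % len(MODES)
        return MODES[self.mode]

    def press_direction(self, positive):
        """Movement keys: one velocity command on the selected DOF."""
        step = 1 if positive else -1
        return (MODES[self.mode], step)

ctl = ThreeButtonController()
ctl.press_mode()                   # select "Y"
cmd = ctl.press_direction(True)    # ("Y", 1)
```
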
The design of the control environment
requires finding a compromise between,
often conflicting, criteria like:
• Design control for minimal demands
on well-coordinated input signals;
• Use control procedures which pose
minimal demands on cognitive abilities;
• Add protections and warnings for
(presumed) control errors;
• Speed up control as much as possible
to avoid frustration from time-consuming execution of tasks;
• Give enhanced feedback to facilitate
menu handling and error signalling;
• Design feedback to speed up mode
selection by facilitating prediction (in
scanning procedures).
Input controls
The use of push buttons (AbleNet large
Jelly Bean or small Specs Switches) on
the lap board was an initial guess, successfully maintained in some cases, but
changed in others. In the first cases,
proper positioning of the switches is
critical and has been optimised individually, taking into account any controls already used for other purposes
like a communicator or a wheelchair.
In one case, the switches have later been
replaced by the keypad with key-guard,
used before by RTD [3], since enough
finger function was still present.
In a second case, a switch joystick has
been used, replacing the wheelchair
joystick in the first 10 sessions. It only
replaces the two movement control
switches, while an associated push button is used for mode selection. At the
11th session, it has been replaced by a
proportional joystick with flexible handle and a mode selection switch.
In a third case, where hitting any fixed
button required a lot of effort, an individually-moulded hand-held grip was
made with three thumb-activated keys
(fig.1). This device provided good
control, even with a hand moving about
in a badly controlled way. Since then, it is
also used very effectively in the classroom with a text editor.
In a fourth, most difficult case, no effective control could be obtained from
upper or lower limbs. In spite of poor
head balance, head movements seemed
to be the most promising source of control. The main problem consisted of
avoiding simultaneous activation of
signals, and many arrangements have
been tried and rejected. Today, a promising arrangement has been found, providing two independently controllable
switch signals by placing two push buttons on extended lateral supports of a
headrest. In this case too, the search for
control signals was pursued simultaneously with the control of a wheelchair
and of an assistive communication device, and the head switch arrangement is
also tried out for the latter.
An additional case was presented to us
from another centre, concerning a
7-year-old spastic-athetoid girl, very effectively
controlling a wheelchair with "Adremo"
interface, using minimal head and foot
movements. The same interface also
proved to be very effective for the control of the manipulator.
Control procedures
To limit selection time and complexity,
initially only 8 modes have been made
accessible through a scanning procedure. They successively give access to
gripper movements along the six elementary cartesian-euler coordinates (X,
- 109 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
Y, Z, yaw, pitch, roll), gripper opening
and closing, and arm "swing" in cylindrical coordinates. In addition, folding
the arm out and in can be selected only
directly after switching on the system.
Feedback required for mode selection is
obtained from the standard MANUS 5x7
LED matrix display. Icons representing
a rotating arrow are used, favouring
scanning prediction over explicit icon
meaning, although it is not clear yet
whether all subjects have the cognitive
abilities to really exploit it to speed up
selection. Signalling of mode transitions
is further enhanced by short beeps,
while longer ones are used in case of
errors like wrong (e.g. simultaneous)
key signals. A beep also signals activation of gripper opening to warn against
dropping objects. During the initial
training phase only, more elaborate
feedback is given on a PC screen
through the ADAPTICOM Monitor interactive teaching program (fig.1).
Scanning methods for mode selection
have gradually evolved, both to enhance
user performance and to facilitate
teaching. Initially, two 3-key configurations were implemented, either scanning
through a single-loop of 8 modes or a
double-loop of 2 x 5 modes, sharing one
mode to switch loops. Hitting the mode
selection key resulted in an immediate
step, and keeping it pushed continued
with scanning at regular intervals. To
facilitate training with a reduced number
of modes, the second one was retained,
grouping X, Y, Z and gripper open/close
in the basic loop.
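The double-loop scanning structure described above can be illustrated with a small sketch. Mode names and the SWITCH marker are assumptions for illustration; the paper does not give the software internals:

```python
# Illustrative model of the double-loop scanning structure: a basic loop
# (X, Y, Z, gripper) and a second loop (orientation), sharing one
# "switch loops" mode.

BASIC = ["X", "Y", "Z", "gripper", "SWITCH"]
EXTENDED = ["yaw", "pitch", "roll", "swing", "SWITCH"]

class DoubleLoopScanner:
    def __init__(self):
        self.loop = BASIC
        self.pos = 0

    def step(self):
        """One scan step through the current loop, wrapping around."""
        self.pos = (self.pos + 1) % len(self.loop)
        return self.loop[self.pos]

    def select(self):
        """Selection key: on the shared SWITCH mode, change loops."""
        if self.loop[self.pos] == "SWITCH":
            self.loop = EXTENDED if self.loop is BASIC else BASIC
            self.pos = 0
        return self.loop[self.pos]
```
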
The double approach of an immediate
step followed by scanning gives a fast
response, but it also gave rise to frequent
errors, at least in the initial phases.
Therefore, three separate options have
also been provided: "step-scanning" of
one step at a time only; "active scanning" of successive steps while the selection key is pushed but starting after
one delay time; and "auto-scanning",
automatically scanning while no key is
pushed, and thereby allowing control
using 2 keys only. To further diminish
the effect of accidental or multiple key
strikes, key responses have been made
history-dependent through "slow-key"
processing (both requiring no key to be
pressed for some time, and then keeping
a key pressed for a minimal time before
releasing it to obtain a response) or by
increasing scan delay time, once only,
after activation of any key. Furthermore,
loop switching has been changed into a
two-step operation: selection and confirmation, allowing correction of a
wrong or accidental selection.
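The "slow-key" processing described above can be sketched as a simple history-dependent filter: a press is armed only after a quiet interval, and the response fires only if the key was then held long enough before release. The timing constants are invented for illustration:

```python
# Sketch of "slow-key" processing: a key press counts only if no key was
# pressed for a release interval beforehand, and the key is then held for
# a minimum time; the response fires on release. Timings are illustrative.

class SlowKeyFilter:
    def __init__(self, min_idle=0.5, min_hold=0.3):
        self.min_idle = min_idle      # required quiet time before a press
        self.min_hold = min_hold      # required hold time before release
        self.last_release = -1e9
        self.press_time = None

    def on_press(self, t):
        # The press is armed only after a sufficiently long idle period.
        self.press_time = t if (t - self.last_release) >= self.min_idle else None

    def on_release(self, t):
        armed = self.press_time is not None and (t - self.press_time) >= self.min_hold
        self.last_release = t
        self.press_time = None
        return armed                  # True -> accept as one key response

f = SlowKeyFilter()
f.on_press(1.0); ok = f.on_release(1.4)    # held 0.4 s -> accepted
f.on_press(1.5); bad = f.on_release(2.0)   # only 0.1 s idle -> rejected
```
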
Some older procedures have since also
been successfully adapted here for keypad control and for control with a proportional joystick with mode switch.
Training
Cognitive aspects are of major importance for the successful use of a manipulator, and much effort has been
invested in guidance and training as an
integral part of the study. Several as-
- 110 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
pects to facilitate user training have
already been mentioned above, like the
use of the ADAPTICOM Monitor program
during initial training. Since not all of
the subjects are able to read, the help
screens have been adapted to use a more
graphical representation to clarify the
different modes selected.
To teach control of basic gripper movements and mode switching, a tower
building task with H-shaped elements
has been used (fig.1). Starting from a
simple arrangement, their initial orientations are successively changed to require more and more of the basic
movements to be used [1,7,8].
All subjects have required more training
time than average to integrate the remote control of a gripper to manipulate
objects, partly due to the ongoing search
for an appropriate control environment
to which they had to adapt each time it
was changed. Therefore, formal training
has been alternated with more motivating "real-life" object manipulations,
most of which were first-time achievements for the operators. Since most
subjects had little or no experience with
this type of mechanical task, guidance
also included explaining object-environment interactions. This is particularly relevant in contact situations,
where visual feedback alone is often not
enough to accomplish the task without
some comprehension of the mechanical
constraints to be expected.
The two 8-year-old children participating in the study have required more
time and special attention for cognitive
training to cope with the control tasks.
Manipulation tasks appeared to introduce too many new elements at a time,
and therefore a simpler approach was
adopted to start with. A PC labyrinth
game [9] was used here, with its interface adapted to a similar 3-switch control. Two push buttons move a puppet
back and forth across the screen and a
third one toggles modes, successively
between X and Y directions. Once the
basic operations had been mastered, it
was also used to teach them to maintain
attention and use a path-planning strategy,
looking ahead rather than entering
dead-end paths. Since they were more
motivated in using the manipulator than
the labyrinth exercise, these sessions
started with the latter and ended with
manipulator "games". Although it took
quite a few sessions, this method has
given the results hoped for, and today
sessions concentrate on manipulator use
only.
Manipulator training starts with the use
of the first loop only, scanning through
X, Y, Z, gripper, and loop switch-over
modes, while ignoring the latter one. In
a second stage, the second loop is entered as well, training control of gripper
orientation. Although loop switching
has been acquired by most subjects
today, it remains a relatively difficult
operation which requires special attention during training.
Results
With the exception of an 8-year-old boy
whose basic interfacing did take much
time, all test persons are or have been
successfully using one of the double
loop configurations, with feedback limited to the 5x7 matrix LED display and
the beeps, as mentioned. Besides the
results already mentioned, the
other tasks performed in various
forms include:
• Moving about various objects and
toys, bringing them within range and/or
stabilising them for direct manipulation,
bringing them to the face, presenting
them to others, dropping them;
• "Playing with water": pouring into a
big container or a glass, drinking with a
straw, drinking from a cup in one case,
making a doll dive in a basin, etc.;
• "Playing with fire": lighting a candle
from another one already lit, extinguishing it with an upside-down glass or
by bringing it to the face and blowing it out;
• Eating a biscuit held in the gripper;
eating using a spoon or a fork;
• Shaving with an electric razor;
• Using a soldering iron;
• Inserting differently shaped objects in
corresponding holes of a Tupperware
game box (fig.1): a rather difficult task
requiring careful orienting, precise
movements, and planning.
• Drawing with a felt pen.
As training progressed, the need did
arise, as usual, for more speed and faster
control, at the cost of fewer compensations. This also resulted in the changes
of controls like the keypad and the proportional joystick, which did indeed
result in more effective control once
the basic principles had been acquired.
Discussion
As reported under [4] and [5], it was
observed that experiments involving
persons with mild cognitive impairments are very revealing of any userunfriendly aspects in the control which
would remain unnoticed with users who
can more easily adapt to them. They
tend to get easily lost in menu structures
and/or to lack sufficient feedback for
guidance. This has been confirmed in
this project, where mode switching, and
especially loop switching, require significant training efforts.
Nevertheless, the results today are quite
encouraging and several of the test persons appear to be good candidates to
benefit from a manipulator for personal
use. Today, two of them are indeed
applying for provision of a personal
manipulator, although it will be a long
way yet to pass administrative barriers.
As mentioned, in two cases a spin-off to
the control of other assistive devices in
the classroom has been possible, thereby
also mutually reinforcing training of
the user in different settings.
The Spartacus manipulator referred to in
the introduction included a "pointing" or
"piloting" mode, in which the gripper
could be pointed in a given direction
and then made to move in that direction, "flying it like an aeroplane" [7,1,5].
This was particularly important for the
case discussed, and it would be in the
ones reported here, since it allows the
gripper to be moved in any direction,
even when only one DOF is controlled at
a time. Unfortunately, the MANUS
manipulator does not include this feature yet, but it is expected to be included
in a next generation.
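How such a "piloting" mode could work can be sketched in a few lines: point the gripper with yaw and pitch, then advance the position along the resulting unit vector, so a single command drives X, Y and Z together. The coordinate conventions and function names are assumptions for illustration, not the Spartacus implementation:

```python
import math

# Sketch of the "piloting" idea: point the gripper (yaw, pitch), then fly
# along that pointing direction, so one command moves X, Y and Z at once.

def pointing_direction(yaw, pitch):
    """Unit vector along the gripper axis for given yaw/pitch (radians)."""
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def fly_step(position, yaw, pitch, speed, dt):
    """Advance the gripper position one step along its pointing direction."""
    d = pointing_direction(yaw, pitch)
    return tuple(p + speed * dt * di for p, di in zip(position, d))

pos = fly_step((0.0, 0.0, 0.0), yaw=0.0, pitch=0.0, speed=0.05, dt=1.0)
# moves 5 cm along +X when yaw = pitch = 0
```
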
Another feature of both Spartacus and
the early version of MANUS was a
display mounted on the arm, thereby
moving with it and remaining within the
user’s field of view. This is lacking today, but is badly needed when head
movements are used to control the arm,
as in two of the cases presented.
Conclusion
Although the study is still under way at
the time of this writing, it is expected
that the resulting control configurations
can be used in practice by some of the
persons from the complex cp target
group. We have developed relatively
basic control configurations to start
with, and more complex and faster ones
to evolve to if possible.
As a spin-off, the basic configurations
may also be useful again for other target
groups, like progressive neuro-muscular
diseases, when residual functions diminish. They are made available within
the libraries of the ADAPTICOM package.
In this project, concept development,
implementation, training and evaluation
cannot really be separated. Much of it is
realised in the field with a major contribution from the users. We have called
this approach "interactive development".
References.
1. Kwee H.H.: "SPARTACUS and
MANUS: telethesis developments in
France and in the Netherlands." In: R.
Foulds (ed.): "Interactive robotic aids - one option for independent living: an
international perspective." Monograph
37, World Rehabilitation Fund, New
York, (1986)7-17.
2. Verburg G., H.H. Kwee, A. Wisaksana, A. Cheetham, J. Van Woerden:
"MANUS: Evolution of an assistive
technology." Technology and Disability,
5/2(1996)217-228.
3. Peters G., MANUS consultant, RTD,
Arnhem, The Netherlands: personal
communication.
4. Kwee H.H.: "Integrating control of
MANUS and wheelchair." Proc.
ICORR’97, Bath, (1997)91-94.
5. Kwee H.H.: "Integrated control of
MANUS manipulator and wheelchair
enhanced by environmental docking."
Robotica 16/5(1998)491-498.
6. Kwee H.H., M.M.M. Thönnissen,
G.B. Cremers, J.J. Duimel and
R. Westgeest: "Configuring the
MANUS system." Proc. RESNA'92,
(1992)584-587.
7. Guittet J., H.H. Kwee, N. Quétin,
J. Yclon: "The Spartacus telethesis:
manipulator control studies." Bull.
Prosth. Res., BPR 10-13(1979)69-105.
8. Kwee H.H.: "La téléthèse MAT-1 et
l'apprentissage systématique de télémanipulation." J. de Réadaptation
Médicale, 6/5(1986)149-156.
9. Copy Unlimited Educative Software:
"Doolhof" (Labyrinth) program, 1996.
Acknowledgements.
The POCUS Project is financed by a
grant from the "Dr. W.M. Phelps
Stichting voor Spastici" in The Netherlands. The authors thank all test persons
for their contributions to this project.
Address first author:
Hok Kwee, Ph.D.
iRv Institute for Rehabilitation Research
P.O. Box 192
NL-6430 AD Hoensbroek
The Netherlands
Tel. +31.45. 5237 542/37
Fax: +31.45. 23 15 50
email: [email protected]
A User’s Perspective on the Handy 1 System
Stephanie O’Connell (1) and Mike Topping BA Cert Ed. (2)
(1) Stephanie lives at Flat 12 Gordon Clifford Court, St. Anthony’s Court, Bracknell, Berkshire, UK
(2) Mike Topping is Research Development Manager at Centre for Rehabilitation Robotics, Staffordshire
University, School of Art and Design, College Road, Stoke on Trent, Staffordshire, ST4 2XN, UK
Abstract
The Handy 1 was developed in 1987 by
Mike Topping to assist an 11-year-old
boy with cerebral palsy to eat unaided.
The system is the most successful lowcost, commercially available robotic
system in the world to date, capable of
assisting the most severely disabled
with several everyday functions such as
drinking, washing, shaving, cleaning
teeth and applying makeup [1]. This
paper outlines the development of the
Handy 1 and provides a case history of
Stephanie O’Connell, one of the Handy
1 users, in which she gives her views of
the system and how it has altered her
life.
Development of Handy 1
The Handy 1 was initially developed to
enable a child with cerebral palsy to eat
unaided. The early version of the
system consisted of a Cyber 310
robotic arm with five degrees of
freedom plus a gripper. A BBC
microcomputer was used to program
the movements for the system and a
Concept Keyboard was utilised as the
man-machine interface [2], [3].
The first prototype was completed
within three months and placed for
trials in the boy’s home. The system
worked successfully and was liked by
the user; however, some design
weaknesses were noted:
• The system was too bulky, making it
impossible for the boy to eat with
his family in the dining area [2].
• Although simple to operate, the
robot required a skilled carer to set
it up.
• The pressure sensitive interface was
suitable for someone with cerebral
palsy, but would not have worked
successfully with less dexterous
disability groups [2].
Fig. 1 The first Handy 1 prototype
In 1989, work commenced on
improving the Handy 1 specification in
order to create a multi-functional
system capable of helping a number of
different disability groups with basic
everyday tasks [2].
The next version of Handy 1 was much
more advanced. The interface between
Handy 1 and the disabled user became
a single switch control known as a
‘wobble switch’ which can be placed
wherever the user has the most useful
movement. For example, for an
amputee with no arms the switch could
be placed at the side of the head. This
switching arrangement has been
successful in the majority of cases and
has enabled the system to be used by
many different disability groups
including cerebral palsy, motor
neurone disease, stroke, muscular
dystrophy, multiple sclerosis and
people involved in accidents [4]. For
people so disabled that they do not
possess even the slightest movement
required to operate the wobble switch,
switches are available which can be
operated by the blink of an eye, thus
giving most people access to the
equipment.
Fig. 2 The Handy 1 system today
Control of Handy 1
When Handy 1 is powered up, seven
Light Emitting Diodes (LEDs)
positioned integrally behind the eating
dish begin to scan, one after another,
from left to right across the back of the
serving dish [5]. The method of making
a choice of food is as follows:
• The user waits for the LED to be lit
behind the section of food they want
to eat.
• The user then activates the single
switch, and the robot scoops up a
spoonful of food from the chosen
area of the dish and delivers it to a
comfortable mouth position.
• The user then removes the food
from the spoon; the LEDs then
begin to scan again, allowing the
procedure to be repeated until the
dish is empty.
During the early Handy 1 trials, it
emerged that although the Handy 1
enabled subjects to enjoy a meal
independently, the majority of them
wished that they could also enjoy a
drink with their meal. Thus the design
of Handy 1 was altered to incorporate a
cup attachment. The cup is selected by
knocking the switch when the green
light is lit in the centre of the dish; the
green light is included in the scanning
light sequence. The cup can be emptied
either by drinking from a straw or by
using a unique tilting device, which
allows the user to tilt the cup using
their own head movements to remove
the liquid [5].
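The scanning selection described above can be sketched as a small loop: the lit LED advances at a fixed rate, and a single switch hit selects whatever is currently lit. The section names and loop layout here are assumptions for illustration, not the actual Handy 1 firmware:

```python
# Illustrative sketch of the Handy 1 scanning selection: LEDs behind the
# dish sections (plus the cup light) are lit one after another, and
# hitting the single switch selects whatever is currently lit.

SECTIONS = ["food 1", "food 2", "food 3", "cup", "food 4", "food 5",
            "food 6", "food 7"]

class DishScanner:
    def __init__(self):
        self.index = 0                 # currently lit LED

    def tick(self):
        """Advance the lit LED left to right, wrapping around."""
        self.index = (self.index + 1) % len(SECTIONS)
        return SECTIONS[self.index]

    def switch_hit(self):
        """Single-switch activation: select the section that is lit."""
        return SECTIONS[self.index]

scanner = DishScanner()
scanner.tick(); scanner.tick()
choice = scanner.switch_hit()      # "food 3"
```
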
Close user involvement in the
development and evaluation stages of
the project has contributed
significantly to the success of the
Handy 1 eating and drinking system.
By maintaining close contact and
encouraging feedback from our user
groups, several suggestions for
development of additional attachments
have been highlighted [4].
As a direct result of this feedback the
Handy 1 is now being further developed
to enable severely disabled people to
achieve independence in other
important daily living activities.
Designs were produced which took the
form of three detachable slide-on tray
sections (eating/drinking,
washing/shaving/teeth cleaning, and
cosmetic application) which could be
supplied according to the user’s
requirements [6]. This flexibility was
considered important as the Handy 1
would be used by people with a range
of different disabilities who may want
to add or remove attachments to
accommodate gains or losses in their
physical capabilities.
It is important that each prototype is
tested by disabled users to ensure that
they are able to use it easily and
effectively. One of the Handy 1 users is
Stephanie O’Connell, a 24-year-old
lady with cerebral palsy.
Figure 3 Various tray attachments for Handy 1: the eating and drinking tray; the washing, shaving and teeth cleaning tray; and the make-up tray
Stephanie O’Connell has used the
Handy 1 system for three years (fig.4).
Throughout that time she has been
actively involved in the development of
Handy 1, trialing new Handy 1 features
and giving detailed feedback.
Stephanie is also Editor of the Handy 1
users’ newsletter which provides users
with up to date information on the
developments of the Handy 1 system.
Stephanie’s experiences of the Handy 1
system have been included to give the
perspective of a rehabilitation robotics
user.
Fig.4 Stephanie using the Handy 1
system
Stephanie says, ‘I lost the ability to
feed myself in 1992 at the age of 18,
due to cerebral palsy, stiffness and my
age. I try to be as independent as
possible, I have a Cheater electric
wheelchair to which I also connect my
Possum. The Possum is an
environmental control system which
controls my whole house, helping with
tasks such as opening and closing
curtains, switching lights on and off, television
control etc. However, despite help in
these activities, I was desperate and
determined to feed myself again.
Before I came to the conclusion that
the Handy 1 was the right machine for
me I did lots of research and tried
different things such as the ‘Neater
Eater’ (The ‘Neater Eater’ is a
mechanical feeding system which uses
a pivoted damping mechanism) and
various spoons. However, the ‘Neater
Eater’ needed too much physical
movement, which made me too tired
and gave me permanent backache. Finally,
after lots of consideration and
exhibitions we came to the conclusion
that the Handy 1, which I have
affectionately named ‘Albert’, was the
best machine for me, mainly because
the amount of movement required to
operate it was minimal.
I came across Handy 1 at the Naidex
‘95 exhibition, which is the leading UK
based trade show for technical aids for
disabled people, and since purchasing a
system for myself I, along with other
users of the Handy 1 have been
involved in its ongoing development. I
am now also the editor of the Handy 1
users’ newsletter which keeps Handy 1
users informed of the latest
developments to the system.
The first meal I had with Handy 1 left
me pleased and excited. It had been 3
years since I had last fed myself and
the freedom to do so again was
extremely satisfying. Of course, my
ability to operate the system has
improved with practice. Meals with
Handy 1 were initially slow but I
persevered until meal times became a
union between Handy 1 and I. I
experimented with various foods and
soon became aware of what Handy 1 is
capable of. Oriental food was difficult
to manage and so was pasta and rice in
the beginning, but thanks to practice
and perseverance, I never go without
my spaghetti bolognaise! I also found
that combinations of food worked well,
such as beans on toast or cereals with
milk. Mixing the foods seems to help
bind them together and make them
easier to pick up. As long as the food is
cut into sensible bite sized pieces
Handy 1 copes well with almost any
food type.
The appearance of the Handy 1 system
has changed greatly during the last 3
years. When I first received the system
it was larger and more awkward
looking. It also had material covers.
However now that the system is
equipped with plastic covers, the
appearance is much improved and it is
available in a range of colours. The
plastic surfaces also mean that the
system is far easier to keep clean and
therefore more hygienic.
Also I have found that the Handy 1 has
provided a sort of physiotherapy for
me. The spoon always presents the
food at the same place and therefore
you train yourself to move to that
position. Persistence is required at first
but it is well worth the effort. When I
first used the system I easily became
very tired but now I feel that I tire less
easily. My posture has improved and
my movements feel more controlled
and less jerky than when I first began
using the system. When using Handy 1
I feel totally in control of my feelings
again. I need something that requires a
very light touch so the single switch
control is ideal and very easy to use.
The updated Handy 1 system is much
more user and carer friendly than the
version that I had initially. The system
now sets itself up, which is definitely
beneficial, and it is simple and
quick for my carers. I also find that I
am able to remove the detachable
eating tray myself as it is quite
lightweight.
The attitudes of carers to the Handy 1
have overall been mixed. Some carers
will do anything so that you can help
yourself, whereas others prefer to feed
you as they think that it will save them
time. With my initial version of the
system, carers knew that it would take
several minutes to correctly set up the
system, however, with the new system
there is no longer a problem as my
carers only have to turn the system on
as they would do a television set. I
find that the satisfaction of being able
to feed myself when I am at home
makes me feel more comfortable when
asking for help with eating when I go
out. When I am out sometimes I feel as
if people are watching me whilst I am
being fed; I wish then that they could
see me using the system and eating by
myself.
My initial experiences with the washing
and toothbrush attachment.
In May this year I began trying out a
tray attachment for the Handy 1 system
which enables users to clean their teeth,
and wash. The system had been
developed through the European
Commission DGXII Biomed II
program; it was called the RAIL
project (Robotic Aid to Independent
Living), and I was involved in the
evaluation stage. When I first saw the
toothbrush attachment during the early
stages of the project I felt that I might
not be able to use it, but after a few
more adaptations had been made I was
able to. Even though the prototype
version of the system was not perfect it
was nice to be able to try it out and do
another activity for myself. As I used
the system, it became apparent that
some changes to the design were
required and after using the attachment
for 5 days I had a clearer idea of what I
could and could not achieve when
using the system and of what
improvements I thought could be
made. I gave my suggestions to the
developers of the system and I know
that these suggestions will be
considered and incorporated into the
design if appropriate.
I think that rehabilitation robotics could
be extremely useful in helping severely
disabled people achieve independence
in daily living activities. If someone
were to say that I could not use Handy
1 it would be as if they were taking my
arms away. I have become so
dependent on the system and have only
been aware of this when I have been
unable to use the system at times such
as holidays away from home.
Some people have thought that £4000
seems a lot of money to pay for a piece
of equipment when carers are available
who could do the job. However, I feel
that no one can put a price on the
ability to feed yourself or on how nice
it feels to be able to put a toothbrush to
your mouth or wash yourself. I
personally feel that it is harder if, like
me, you have once been able to feed
yourself and then your condition
deteriorates and you can’t. If you have
had the ability and then you lose it I
think that gives you the drive and
determination to achieve this again.
I feel that Handy 1 is the best piece of
equipment for me. At ICORR’97 I felt
that this was confirmed as I looked at
the other equipment available and was
still happy that the £4000 I spent was
not a waste of money. It was not until
then that I became entirely sure that I
had not made a mistake. I felt when
choosing the system that it was the
most suitable for me and I still believe
this. I am so familiar with the system
and I have not yet come across
anything else on the market which has
such a light touch’.
References
[1] Weir, R.F.ff., Childress, D.S. (1996)
Encyclopaedia of Applied Physics,
Vol. 15
[2] Topping M J (1995) The
Development of Handy 1 a Robotic
- 120 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
Aid to Independence for the Severely
Disabled. Proceedings of the IEE
Colloquium “Mechatronic Aids for the
Disabled” University of Dundee. 17
May 1995. pp2/1-2/6. Digest No:
1995/107.
[3] Topping M J (1996) ‘Handy 1” A
Robotic Aid to Independence for the
Severely Disabled. Published in
Institution of Mechanical Engineers. 19
June 1996.
[6] H. Heck, Ch. Buhler, P. Hedenborn,
G. Bolmsjo. M. Topping, (1997) “User
requirements analysis and technical
development of a robotic aid to
independent living (RAIL). 4th
European Conference on Engineering
and Medicine Bridging Eat and West Warsaw (Poland) 25-27 May 1997. Pre
Conference Advanced Courses May
24, 1997
[4] Smith J, Topping M J, (1997) Study
to Determine the main Factors Leading
to the overall success of the Handy 1
Robotic
System.
ICORR’97
International
Conference
on
Rehabilitation Robotics, Hosted by the
Bath Institute of Medical Engineering,
Bath University, pp147 - 150.
Acknowledgements
We gratefully acknowledge the support
of The European Commission,
Directorate General XII, Science,
Research and Development, Life
Sciences and Technologies for their
valuable support of the RAIL (Robotic
Aid to Independent Living) project.
[5] Topping M J, Smith J, Makin J
(1996) A Study to Compare the Food
Scooping Performance of the ‘Handy
1’ Robotic Aid to Eating, using Two
Different Dish Designs. Proceedings of
the IMACS International Conference
on Computational Engineering in
Systems Applications CESA 96, Lille,
France, 9-12 July 1996.
Authors’ Addresses:
Stephanie lives at Flat 12 Gordon
Clifford Court, St. Anthony’s Court,
Bracknell, Berkshire, UK
Mike Topping is Research
Development Manager at Centre for
Rehabilitation Robotics, Staffordshire
University, School of Art and Design,
College Road, Stoke on Trent,
Staffordshire, ST4 2XN, UK
DESIGN OF HUMAN-WORN ASSISTIVE DEVICES
FOR PEOPLE WITH DISABILITIES
Peng Song, Vijay Kumar, Ruzena Bajcsy
GRASP Laboratory, University of Pennsylvania, Philadelphia, PA 19104, USA.
Venkat Krovi
Department of Mechanical Engineering, McGill University, Montreal, CANADA
Richard Mahoney
Rehabilitation Technologies Division, Applied Resources Corp., Fairfield, NJ, USA
ABSTRACT
This paper presents examples of a class
of human-worn manipulation aids for
people with disabilities, and a paradigm for the cost-effective design and
manufacture of such devices. Also discussed is a software design environment that integrates a variety of support tools to facilitate human-centered
product design.
INTRODUCTION
Although robots and robot systems are
versatile manipulation aids, they appear
to be less acceptable to people with
disabilities than simpler and less flexible assistive devices, such as prosthetic
limbs [1]. There are many reasons for
the lack of success of general purpose
robotic aids in this community [2].
Such electromechanical systems tend to
be very complex, unreliable and expensive.
Another key obstacle is the difficulty
that the users have in controlling such
complex systems [3]. A user with a
prosthetic limb is in intimate contact
with the limb and therefore has proprioceptive feedback (Doubler and
Childress [4] call this extended
physiological proprioception). In contrast, users of robotic systems have
only visual feedback. While haptic interfaces are active areas of research,
there appear to be inherent limitations
with the technology that preclude simple and cost-effective mechanisms for
force and tactile sensing [5].
The needs of people with physical disabilities may be better served by passive multi-link articulated manipulation
aids called teletheses, that are worn and
physically controlled by the user. The
Magpie [6] is an example of a telethesis designed to assist with the task of
eating.
The design and development of two
new teletheses are presented in this paper. A head-controlled feeding aid
has been developed that allows a user
to manipulate a feeding utensil (for example, a spoon) to pick up food from a
plate and bring it to the mouth without
dropping the food. A head-controlled
painting tool has been developed that
allows a user to move a paintbrush
from a pallet to any point on a canvas.
The design approach discussed here
emphasizes the use of a virtual
prototyping environment that enables
the testing and evaluation of the product before committing to manufacture.
DEVICE DESIGN
The design of a telethesis can be decomposed into an input subsystem that
is attached to the human user, an effector subsystem that is used to interact
with the environment, and a coupling
subsystem that transforms the motion
of the user to drive the end effector.
In both candidate designs, independent
motions of the head and neck are captured by a set of links, cables and pulleys that constitute the input subsystem. This motion is transformed and
transmitted to the effector subsystem
that accomplishes the desired task.
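To make the decomposition concrete, the coupling subsystem can be pictured as a fixed transmission from input-joint motion to effector-joint motion. The sketch below is purely illustrative — the function name and the pulley ratios are invented, not taken from the devices described here:

```python
# Hypothetical sketch of the subsystem decomposition above: the input
# subsystem reports head/neck joint angles, and the passive coupling
# subsystem maps them onto effector joints through fixed transmission
# ratios (e.g. pulley diameters). All values are invented.

def couple(input_angles, ratios):
    """Map input-subsystem joint angles to effector joint angles."""
    if len(input_angles) != len(ratios):
        raise ValueError("one transmission ratio per input joint")
    return [r * q for r, q in zip(ratios, input_angles)]

# Example: head pitch geared 2:1 onto the spoon pitch, head yaw
# geared 1:2 onto the spoon yaw.
effector_angles = couple([0.1, -0.2], ratios=[2.0, 0.5])
```

Because the coupling is passive and linear, the user receives direct proprioceptive feedback through the same linkage that drives the effector.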
Since the product volume for the types
of customized products discussed here
is small, the manufacturing cost must
be kept low. Thus, there exists a need
to automate the process of deriving
product specifications and developing
the detailed design. In addition, there is
always a need to prototype the product
quickly and be able to respond to the
consumers’ needs rapidly.
There are three important processes or
stages for rapid design and prototyping
of customized products [7]:
• Data acquisition: the acquisition of
geometric, kinematic, dynamic and
physiological information about the
customer, for developing the design
specifications and for detailed design.
• Virtual prototyping: the process of
simulating the user, the product, and
their combined (physical) interaction in software during the product
design, and the quantitative and performance analysis of the product.
• Device design and optimization:
automation of the tools necessary to
permit a designer to take a preliminary design, convert it into a detailed design, and quickly produce
prototypes for evaluation and production.
A virtual prototyping environment has
been developed that allows a designer
to create customized synthetic models
of the human user and virtual prototypes of the product, and to evaluate
the use of the product by the human
user in the virtual environment.
The virtual prototyping environment
allows a designer to (a) integrate heterogenous data from different sources;
(b) easily design a product; (c) model,
simulate and analyze the designed
product; and (d) manufacture the virtually tested product. Off-the-shelf packages are used wherever possible, integrating them seamlessly into the overall system. The primary functional
modules include:
1. Data manipulation: Geometric and
kinematic models of the human
body are obtained from 3-D imaging systems and cameras [8]. The
designer can manipulate interactively either the raw data or parametric models determined from the
measured data using a graphical interface.
2. Kinematic and dynamic modeling:
Synthetic models of the human user
consist of articulated rigid body
models that reflect the geometry and
the kinematics of the user. Jack, a
software package for human body
simulation [9], is used to support the
definition, positioning, animation,
kinematic simulation and human
factors performance analysis of
simulated human figures. The C
Application Programming Interface
(API) of Jack enables modeling of
other serial chains, like the mechanisms of interest here. This has been
augmented with a C library that
contains routines for kinematic and
dynamic analysis, including forward
and inverse kinematics, and forward
and inverse dynamics. Thus the designer can, for example, specify a
desired trajectory for the human
head with a specified load while restraining the torso, and examine the
forces and torques that will be required at the base of the neck to
execute the motion.
3. Computer aided design: The mechanical design is accomplished using Pro/Engineer (Parametric Technologies Corporation), which was chosen for its parametric part and assembly modeling capabilities and because of the interfaces offered to a variety of other graphics, finite element analysis and manufacturing packages. The Pro/Develop module of Pro/Engineer offers a powerful scripting interface which enables the designer to make parametric changes in the Jack environment interactively, which are then used to update the original CAD model automatically [10].
4. Mechanism design: The mechanism design module supports the dimensional synthesis, optimization and analysis of mechanisms. The optimization engine runs on Matlab, a commercially available package for numerical, matrix-based calculations.
5. Visualization and Interaction: The front-end visualization is also handled with the help of Jack. The designer can interact with and provide input specifications to the system using a variety of input techniques. It is possible to see the simulated human execute motions while conforming to kinematic and physiological constraints.
Figure 1. A rendered solid model of the head controlled feeding aid created in Pro/Engineer.
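The inverse-dynamics query mentioned under kinematic and dynamic modeling — the torque the neck must supply to drive the head along a trajectory while carrying a load — can be sketched for a crude planar one-link approximation. Every name and parameter value below is an invented illustration, not data from the paper:

```python
import math

# Hedged sketch of the inverse-dynamics query: the torque the neck
# must supply to drive the head (plus a carried load) along a
# trajectory. The head/neck is reduced to a planar one-link pendulum;
# all parameter values are invented for illustration.

def neck_torque(q, qdd, m_head=4.5, m_load=0.5, l=0.15, g=9.81):
    """Torque (N*m) at the neck base for joint angle q (rad, measured
    from horizontal) and angular acceleration qdd (rad/s^2), with the
    head and load lumped as a point mass at distance l (m)."""
    m = m_head + m_load
    inertia = m * l * l                  # point mass about the pivot
    gravity = m * g * l * math.cos(q)    # gravitational torque
    return inertia * qdd + gravity

# Holding the head upright (q = pi/2) needs essentially no torque;
# holding it horizontal (q = 0) needs the full gravitational torque.
tau_up = neck_torque(q=math.pi / 2, qdd=0.0)
tau_flat = neck_torque(q=0.0, qdd=0.0)
```

A full implementation would use the recursive Newton-Euler routines of the C library mentioned above over the whole articulated chain; the one-link case only shows the shape of the computation.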
Figure 2. The design of the painting
tool (candidate design no. 2) in the
virtual prototyping environment.
INTERACTIVE SIMULATION
The kinematics and dynamics of the
system are modeled and represented in
a modular fashion. The kinematics of
the input subsystem, the effector subsystem and the coupling subsystem are
coded independently. Further, the head-neck kinematics, specific to the customer, is modeled in Jack. All models
are coded in C or C++.
Once the configuration design is completed, the designer “attaches” the
product to the synthetic model of the
user. This is done by defining position
and orientation constraints between the
product and the human model in Jack.
Since the interface to Jack allows the
designer to manipulate the human
model, it is easy to move the head/neck
in any direction and visualize the
movement of the articulated mechanism.
As shown in Figure 3, the subsystems
are first completely prototyped in the
virtual world. This facilitates testing
and analysis by the designer, and
evaluation by the customer and possibly a therapist. In the next stage, the
input subsystem is prototyped while the
effector subsystem remains in the virtual world. The coupling subsystem is
simulated by the use of sensors on the
input subsystem and suitable electronics that allow the virtual models to be driven by the sensory information. This facilitates a
second round of evaluation, both by the
designer and the customer (and the
therapist). This evaluation, possibly accompanied by redesign, ensures that
the final prototype meets task and user
specifications.
DISCUSSION
Consumer involvement
We consulted potential consumers and
other people with disabilities during
both the conceptual and detailed design
phases of the feeding aid. Virtual prototypes not only facilitated the evaluation of the product by consumers but also facilitated the involvement of therapists and physicians. For example, several choices of the head-mounted control linkage were discarded because of
aesthetic considerations. The redesign
of the product in response to this feedback at a very early stage can ensure
the success of the product and possibly
avoid building multiple physical prototypes and incurring the resulting expenses.
Manufacture and Testing
[Figure 3 diagram: virtual world (top) and physical world (bottom) across three phases — design and virtual prototyping (designer and synthetic human model with virtual prototypes of the input, coupling and effector subsystems), intermediate prototype (customer with a physical input subsystem coupled through an electronic interface to virtual coupling and effector subsystems), and final prototype (customer with physical input, coupling and effector subsystems).]
Figure 3. The three phases of detailed design and prototyping for customized assistive devices.
A prototype of the feeding aid is shown in Figure 4. Figure 5 shows the prototypes of the input subsystem for the painting tool. In the prototypes, all links are made out of slender composite tubing. The tubes are attached via aluminum inserts to housings for bushings and pulleys. The manufacture merely involves cutting tubes to specifications and mounting appropriately sized pulleys. All other components are standard.
The two teletheses shown here will be
undergoing further consumer evaluation. In addition, the virtual prototyping
software design environment is being
developed into a commercially viable
system.
CONCLUSION
Justification for the further development of human-worn manipulation devices for people with physical disabilities has been provided. A virtual prototyping software design environment has been described that provides a range of integrated tools for the design, prototyping, and evaluation of this class of device. Two telethesis systems developed using the virtual prototyping design environment have also been described.
It is expected that further investigation
of this design approach and the ultimate commercialization of the design
software will lead not only to the
emergence of further concepts for human worn assistive devices, but will
also contribute to improvements in the
design possibilities for assistive technology in general.
Figure 5. Prototype of a design for
the painting device input subsystem.
The user is shown operating the input subsystem physical prototype
and interacting with the virtual prototype of the end effector subsystem
on a Silicon Graphics workstation.
Figure 4. A preliminary prototype of
a head-controlled, passive, feeding
mechanism.
ACKNOWLEDGEMENTS
This work was supported by NSF
grants MIP 94-20397, DMI 95-12402,
and SBIR DMI 97-61035.
The authors gratefully acknowledge the
efforts of Craig Wunderly, Chris
Hardy, and Aman Siffeti in the design
of the painting tool system, and those
of the HMS School, Philadelphia, PA,
and the Matheny School and Hospital,
Peapack, NJ, in facilitating consumer
involvement in this work.
REFERENCES
1. W.S. Harwin, T. Rahman, and R.A.
Foulds, “Review of Design Issues in
Rehabilitation Robotics with Reference
to North American Research,” IEEE
Transactions on Rehabilitation Engineering, Vol. 3, No. 1, pp. 3-13, 1995.
2. J. B. Reswick, “The Moon Over Dubrovnik - A Tale of Worldwide Impact
on Persons with Disabilities,” Advances in external control of human
extremities, 4092, 1990.
3. L. Leifer, RUI: factoring the robot
user interface, Proc. RESNA Int’l. '92,
RESNA Press, 1992.
4. J. A. Doubler and D. S. Childress,
“An analysis of extended physiological
proprioception as a prothesis control
technique,” Journal of Rehabilitation
Research and Development, Vol.21,
No.1, pp.5-18, 1984.
- 127 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
5. J. E. Colgate and J. M. Brown,
“Factors affecting the Z-Width of a
Haptic Display,” Proceedings of IEEE
International Conference on Robotics
and Automation, San Diego, CA, May
8-13, pp 3205-3210, 1994.
6. M. Evans, “Magpie: Its development and evaluation,” Technical Report, Nuffield Orthopaedic Centre, Headington, Oxford, England OX3 7LD, 1991.
7. V. Kumar, R. Bajcsy, W. Harwin, P.
Harker, “Rapid design and prototyping
of customized rehabilitation aids,”
Communications of the ACM, Volume
39, Number 2, pp. 55-61, 1996.
8. I. A. Kakadiaris, D. Metaxas, and R. Bajcsy, “Active part-decomposition, shape and motion estimation of articulated objects: A physics-based approach,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, June 21-23, pp. 980-984, 1994.
9. N. I. Badler, C. B. Phillips and B. L.
Webber, Simulating Humans: Computer Graphics, Animation, and Control, Oxford University Press, New
York, NY, 1993.
10. V. Krovi, "Design and Virtual
Prototyping of User-Customized Assistive Devices," Ph.D. Dissertation,
Department of Mechanical Engineering
and Applied Mechanics, University of
Pennsylvania, Philadelphia, PA, 1998.
A RAPID PROTOTYPING ENVIRONMENT FOR MOBILE
REHABILITATION ROBOTICS
Ir. B.J.F. Driessen, ing. J.A. v. Woerden, Prof. Dr. G. Bolmsjö (Lund University),
Dipl.-Ing. O. Buckmann (BIBA)
TNO-TPD, PO-BOX 155, 2600 AD, Delft, The Netherlands
Email: [email protected], tel: +31 15 2692394, fax: +31 15 2692111
Abstract: This paper describes a development environment for collaborative engineering of rehabilitation robotics devices, called RETIMO. The basis of RETIMO is models of the different components (mechanics, computer hardware, controller, human interfaces) of the mobile robot. Each component can exist in three different stages: a) simulation stage, b) virtual prototyping stage, c) real prototyping stage. RETIMO will lead to:
• faster time to market,
• design cost reduction because of collaborative engineering,
• better quality because end-users are more involved.
An example of the method is given by the design of a mobile base mounted manipulator.
Introduction
Designing assistive devices requires tight co-operation between developers, end-users, and therapists during the entire development process. This applies especially to advanced assistive devices such as robots or mobile bases. Engineers can develop advanced control functions for a mobile robot, for example an intelligent path planning algorithm for moving to a pre-defined position from any point in the workspace of the mobile system. However, if the end-user does not have the capability to store new locations and postures in the memory of the mobile robot, the functionality of the path planner from the end-user's point of view is rather poor. Seemingly, the end-user interface is in this situation responsible for a partial failure of the developed functionality. Had the end-user, however, had the possibility to test and evaluate the functionality at a very early stage, the engineer could have used the feedback to adapt the functionality in such a way that the end-user can really use it.
This paper describes a development strategy for assistive devices, enabling the integration of the end-user in the development process. After a short review of development strategies for assistive (mechatronic) devices, the structure of RETIMO is explained. Emphasis is put on controller prototyping and embedded system prototyping. Early results of the MobiNet [1] program are given, demonstrating the current status of the development environment. Finally, developments scheduled for the near future are indicated.
Development strategy
Rapid prototyping is a technique for
analysing complex problems by using
fast realisation methods for different
components (e.g. mechanical parts,
controllers, etc.). By having prototypes
of these components, end-users can
interact with either the real or the virtual
system at an early stage. Rapid
prototyping
applies
to
different
disciplines, for example:
• mechanical rapid prototyping, dealing
with the rapid manufacturing of
mechanical components directly from
3D cad-drawings [2],
• embedded system rapid prototyping,
dealing with evaluating different
hardware architectures using processes
and virtual communications,
• controller rapid prototyping, dealing
with
the
(semi)automatic
implementation of control algorithms
from simulation results,
• user-interfacing prototyping, dealing
with designing optimal user interfaces
for end-users.
For all disciplines, three development phases can be identified: the simulation phase, the virtual prototyping phase and the prototype realisation phase.¹
During the simulation phase, models of the system are created for some disciplines. Using these models (which are not real-time), calculations required in a certain discipline can be carried out. Examples of these calculations are: strength analysis calculations using FEM packages, control design and tuning using Matlab or MatrixX, or computational analysis calculations using e.g. HAMLET.
¹ Note that during product development, design iterations between these phases are very much required.
Consequences of basic decisions can be evaluated for all disciplines: for example, what the mechanical structure of the system will be if a three-fingered gripper is used instead of a two-fingered one. Visualisation can show the results in a more understandable format.
During the virtual prototyping phase,
the models are compiled to a real-time
environment. The assistive device still
exists only in a virtual world, but now
can be simulated in real-time. Therefore,
the dynamic behaviour of the system can
also be visualised in real-time. This
makes it possible for end-users to test
and evaluate the assistive device by
means of 3D visualisations. Since no
parts of the system exist in reality,
modifications in the structure of the
device can be made with limited effort.
The virtual prototyping phase is very
important for incorporating end-users in
the development of assistive devices.
During the prototype realisation phase, the system will be constructed in reality. It is also possible to combine virtual prototypes with real prototypes of the system. In this way, incremental system development is possible, strongly reducing the risk of product failures. The results of the virtual prototyping phase (3D CAD drawings, programming code of control algorithms) are reused as much as possible. This is very much required, since the prototype realisation phase is often the most time consuming during system development, and iterations in this phase are very costly.
For following the described working
methodology, an environment with the
following requirements is needed:
• open with respect to integration
standards such as M3S,
• possibility to carry out (semi)
automatic model transformations
between simulation model, virtual
prototyping model, and real prototype,
• 3D visualisation capabilities for
visualising the system to the end user
as well as the engineer,
• open with respect to hardware platforms, making it possible to test different kinds of communication buses (e.g. CAN) and different kinds of real-time environments (PharLap, Windows CE, VxWorks, etc.),
• wide availability of debugging
facilities.
State of the art
No full-blown methods for the development of complex systems of the described nature have been found; the literature addresses single-discipline methods for many applications. The goal of the development strategy described above is to model, design and realise systems using collaborative engineering. The aim is threefold:
• Reducing the time-to-market,
• Less costly prototyping,
• Better product quality, among others because of end-user involvement.
Collaborative engineering means organisational and technological support for multidisciplinary integrated design with many people working at different locations. The Manus manipulator [3], the commercially available general purpose rehabilitation robot, is at this moment being re-engineered following these principles in the Commanus project [4]. Elements of the method are applied in the MobiNet European TMR project. Visualisation turns out to be very important in multidisciplinary designs. Mono-disciplinary views on (simulated) device models answer at each moment the question: are we still working on the same robot? In MobiNet, mechanical rapid prototyping (Lund University and BIBA Bremen), controller rapid prototyping (TNO-TPD and University of Reading) as well as embedded system integration rapid prototyping (TNO-TPD) are addressed. The latter is also dealt with in the TIDE ICAN project [5]. User-interface design is extremely important for end-users. The web-based ProVar approach [6] and the Manus adapticom method [7] in the Netherlands can be mentioned.
[Figure 1 block diagram: a user commands a controller that exists as a simulated controller (non-RT), a virtual controller (RT), or a real controller, connected respectively to a simulated (non-RT), virtual (RT), or real mechatronic device; phases I, II and III; a world model feeds a visualisation.]
Figure 1. Rapid prototyping for mechatronic systems.
RETIMO: A rapid prototyping environment for assistive devices
Structure
The structure of the RETIMO development environment is shown in Figure 1. The big blocks represent the simulation phase (I), the virtual prototyping phase (II), and the prototype realisation phase (III). The small blocks represent system components, such as the controllers, the dynamical model, the world model, etc. The blocks can communicate with each other using interfaces, represented by lines ending in a small shape. Blocks can only communicate if they have identical interfaces.
The user can generate commands to the controller. The controller exists either in simulation, as a virtual prototype, or in reality as a real prototype of the embedded system. The simulated controller can only communicate with the simulated mechatronic device (since they are both non-real-time). The virtual (real-time) controller can communicate with either the virtual mechatronic device or the real prototype of the mechatronic device. Note that in all stages (simulation, virtual prototype, or real prototype), the mechatronic device has an interface towards a visualisation environment. This means that an end-user can see how the total system will behave at an early stage. Especially in the virtual prototyping phase, he/she can already evaluate the system or practise with it.
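The rule that blocks communicate only when their interfaces are identical can be sketched as a small compatibility check. The class and constant names below are invented for illustration; this is not RETIMO's actual implementation:

```python
# Hypothetical sketch of the interface rule in Figure 1: a component
# may only be connected to a component with an identical interface
# (here reduced to real-time vs. non-real-time). Names are invented
# for illustration; this is not RETIMO's actual implementation.

NON_RT = "non-real-time"
RT = "real-time"

class Component:
    def __init__(self, name, interface):
        self.name = name
        self.interface = interface

def connect(a, b):
    """Allow communication only between identically interfaced blocks."""
    if a.interface != b.interface:
        raise ValueError(f"{a.name} ({a.interface}) cannot talk to "
                         f"{b.name} ({b.interface})")
    return (a.name, b.name)

# The simulated (non-RT) controller pairs with the simulated device;
# the virtual (RT) controller pairs with the virtual device.
connect(Component("simulated controller", NON_RT),
        Component("simulated mechatronic device", NON_RT))
connect(Component("virtual controller", RT),
        Component("virtual mechatronic device", RT))
```

In this picture, moving a component from one phase to the next means swapping in an implementation that still exposes the same interface, which is what lets virtual and real prototypes be mixed.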
Embedded system prototyping
Designing the embedded system means, among other things, making decisions about the real-time environment (VxWorks, PharLap, Windows CE), the organisation of the real-time processes (number of parallel threads), the bus type, and the distribution of the controllers over the real-time threads. Based on UML, we are able to identify efficiently the specifications for the real-time system. Using Rational Rose [8], we can create real-time environments for different operating systems, meeting the requirements found during the system analysis. Interfaces between the real-time environment and the Matlab/Simulink Real-Time Workshop [10] exist, so controllers can be efficiently merged into the embedded system (see below).
A virtual communication bus exists between the controller and the mechatronic device. The virtual bus can be configured as a CAN bus, a USB bus, or a serial port. Interfaces exist between the virtual bus and the corresponding "real" buses, making it possible to communicate with virtual prototypes and real prototypes at the same time (hardware-in-the-loop simulations).
Controller prototyping
For developing control algorithms we use the Matlab/Simulink simulation environment. Matlab offers a wide variety of design tools for different types of controllers. Once we have satisfactory simulation results, we want to create a real-time controller without re-coding the designed algorithm. For this we use the Matlab/Simulink Real-Time Workshop (RTW). A disadvantage of the RTW is that it generates only one single C function, containing the functionality of the entire Simulink model. When this model contains several control blocks, all blocks are combined into one C function, which is very inconvenient for developing a hierarchical or distributed controller. This problem can be solved by writing the controllers directly in C, as so-called S-functions. Matlab tools can be used for optimising the controller, and after code generation the different controllers can easily be identified. Also during controller prototype generation, the C algorithm can be reused.
The virtual controller can communicate with the real mechatronic prototype, enabling hardware-in-the-loop simulations. Here, some components of the device are virtually prototyped, whereas others exist in reality.
Visualisation
Currently we use OpenInventor for visualising the world model. OpenInventor is a tool built on top of OpenGL. 3D objects can be created using 3D CAD packages and exported to OpenInventor. At this moment we do not have the possibility to interact with the world model during simulations. This functionality would be useful, since it can help in investigating the response of the system to an unexpected event (e.g. placing an object in front of a mobile base).
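The motivation for keeping each controller in its own reusable function — so that the same code runs in the non-real-time simulation, the real-time virtual prototype, and the target — can be sketched with a discrete PI joint-speed controller. The gains, sample time and first-order joint model are made-up illustration values, not the ones used for the Manus:

```python
# Illustrative sketch of the "one reusable control function" idea:
# the same step() can be called from the non-real-time simulation,
# the real-time virtual prototype, or the hardware-in-the-loop setup.
# Gains, sample time and the first-order joint model are made up.

def make_pi_speed_controller(kp=2.0, ki=5.0, dt=0.01):
    """Discrete PI controller on the joint-speed error."""
    state = {"integral": 0.0}

    def step(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]

    return step

# Close the loop around a crude first-order joint model (tau = 0.1 s)
# and run 4 s, the time span shown in Figure 3.
step = make_pi_speed_controller()
dt, tau, speed = 0.01, 0.1, 0.0
for _ in range(400):
    u = step(setpoint=1.0, measured=speed)
    speed += dt * (u - speed) / tau
```

Only the surrounding loop changes between stages; the controller update itself is carried over unchanged, which is the property the S-function approach preserves for the generated C code.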
Designing a mobile base mounted
manipulator
The system
In RETIMO, we design a mobile manipulator which will be used by elderly and disabled persons for carrying out daily living tasks. At TNO-TPD the mobile manipulator is composed of the Manus manipulator and a LabMate mobile base [9]. A picture of the two subsystems is shown in Figure 2.
The controller
For developing the control system of the manipulator, a joint speed controller was designed in Matlab. The speed controller was compiled to a virtual prototype. We tested the virtual controller with a real prototype of the Manus (hardware-in-the-loop). In Figure 3 the results of the HIL simulation and the Matlab simulation are shown.
[Figure 3 plot residue: joint velocity (rad/s, from -1 to 1) versus time (0 to 4 s) for the simulated and HIL runs.]
Figure 3. Simulation results and HIL results.
As can be seen, only small differences between simulation and reality exist. For this situation, new developments can indeed be tested on the virtual system, since it behaves with the same dynamics as the real system.
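The comparison behind Figure 3 — checking that the hardware-in-the-loop response tracks the pure simulation closely enough to justify testing on the virtual system — can be sketched as a simple trace comparison. The traces below are invented stand-ins for the recorded joint velocities:

```python
# Hedged sketch of the check behind Figure 3: quantify how closely the
# hardware-in-the-loop (HIL) trace follows the pure simulation. The
# traces below are invented stand-ins for the recorded joint speeds.

def max_deviation(sim, hil):
    """Largest absolute difference between two equally sampled traces."""
    if len(sim) != len(hil):
        raise ValueError("traces must be equally sampled")
    return max(abs(a - b) for a, b in zip(sim, hil))

sim_trace = [0.0, 0.4, 0.8, 1.0, 1.0]
hil_trace = [0.0, 0.38, 0.79, 1.01, 1.0]
deviation = max_deviation(sim_trace, hil_trace)
# A small deviation supports testing new developments on the virtual
# system, since it behaves like the real one.
```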
The visualisation
We have built a visualisation of the entire system to show how the total system will look in practice. The results of the visualisation are shown in Figure 4.
Figure 2. The Manus manipulator and the LabMate mobile base.
Future developments
RETIMO has proven to be powerful in
speeding up developments. User
involvement needs to be more intensive.
Provisions are made for interfacing to
standard powered wheelchairs. Current
activities include the development of a
RETIMO-M3S interface. In the TIDE
ICAN project, interfaces to DX as well
as P&G are being developed. In this way
RETIMO can also interface with these
industry standards.

Figure 4. Visualisation of the total system.

The next step is the ability to present
virtual simulation results over the
internet and to be interactive with users.
For the ProVar workstation [6], this
functionality is already partly available.
We believe that enabling the virtual
prototype functionality over the internet
will strongly increase the demand for,
and the acceptance of, (advanced)
assistive devices by the persons they
are meant for.

References
[1] MobiNet. “Mobile Robotics Technology for Healthcare Services”. A TMR project. Project nr. FMRX960070. Web: http://147.102.33.1/mobinet/mobhome.htm
[2] Burns, Marshall. “Automated Fabrication: Improving Productivity in Manufacturing”. Prentice Hall, Inc., New Jersey, USA, 1993.
[3] Manus manipulator information package. January 29th, 1998. Web: http://www.worldonline.nl/~dynamics
[4] COMMANUS. EU-CRAFT (BIOMED 2) project nr. BMH4-CT98-9581.
[5] ICAN. EU-Telematics 2C. Project nr. DE4204.
[6] ProVar home page: http://provar.stanford.edu/
[7] Kwee, Hok. “Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking.” Robotica (1998), volume 16, pp. 491-498.
[8] Douglass, Bruce Powel. “Real-Time UML: Developing Efficient Objects for Embedded Systems”. Addison-Wesley, ISBN 0-201-32579-9, 1998.
[9] Labmate mobile base. Web: http://www.ntplx.net/~helpmate
[10] Matlab RTW user’s guide. The MathWorks, Inc., May 1997. Web: http://www.mathworks.com
TECHNICAL RESULTS FROM MANUS USER TRIALS
Håkan Eftring (1), MSc; Kerstin Boschian (2), OT
(1) Certec (Center for Rehabilitation Engineering Research), Lund University, Sweden
(2) Department of Rehabilitation, Lund University Hospital, Sweden
Abstract
Eight users have tried the Manus arm at
the Department of Rehabilitation at
Lund University Hospital. The user
trials were carried out in close cooperation with Certec at Lund
University.
After the trials one of the users, Ms
Eva Gerdén, decided to buy a Manus
arm, and she received her Manus arm
in November 1998.
The main objective of the user trials
was to find out how robot technology
could support the early rehabilitation of
people with spinal cord injuries.
Another objective was to increase the
knowledge of user needs and what
makes robots worth using.
This paper presents technical
comments received during the user
trials and from Ms Eva Gerdén. The
results could be used for improvements
to the Manus arm, to other wheelchair-mounted
manipulators and to robots in
general.
One of the most commented issues is
the physical size of the Manus arm,
preventing the user from driving the
wheelchair close to a table or
maneuvering the wheelchair through
narrow passages.
Two of the users immediately stated
that it was awkward to have the Manus
arm mounted on the left side of the
wheelchair, since they are right-handed.
Background
Certec at Lund University and the
Department of Rehabilitation at Lund
University Hospital have been cooperating within the field of
rehabilitation robotics since 1993 when
a RAID workstation was installed and
evaluated.
In 1996 we received funding for
creating a National Rehabilitation
Robotic Center at the Department of
Rehabilitation. A Manus arm [1, 2] (the
first in Sweden) was purchased and
user trials were carried out from May
1997 to May 1998. The main objective
of the user trials was to find out how
robot technology could support the
early rehabilitation of people with
spinal cord injuries.
After the trials, one of the users, Ms
Eva Gerdén, decided to buy a Manus
arm, and she received her Manus arm
in November 1998. She is so far the
only Manus end user in Sweden.
Another objective of the user trials was
to increase the knowledge of user needs
and what makes robots worth using.
Certec’s interest in theory and method
is documented in “Certec’s Core” [3].
Methods
Eight users have tried the Manus arm at
the Department of Rehabilitation at
Lund University Hospital. The user
trials were carried out in close cooperation with Certec at Lund
University.
Seven of the eight users have spinal
cord injuries (C3-C6), and they had
been injured 0.5-21 years at the time
of the trials. One user has had spinal
muscular atrophy since birth. The ages
of the users were 22-51 years.
Approximately 15 current and former patients
at the Department of Rehabilitation
were invited to the trials. Seven of
them wanted to take part in the trials.
The eighth user in the trials, Ms Eva
Gerdén, was actively looking for
robotic aids and was therefore invited
to the trials.
The Manus arm was mounted on a
Permobil Max90 wheelchair (fig 1) and
the users had to move from their own
wheelchairs to the Permobil wheelchair
during the trials. Two joysticks were
used for controlling the Manus arm and
the wheelchair. Some users could use
their hands to control the joysticks and
some users used chin control.
Fig 1. The Manus arm mounted on a
Permobil Max90 wheelchair.
The users could choose which tasks to
carry out, and at the end all users
carried out the following drinking task:
• Open a kitchen cupboard,
• bring a glass to the table,
• close the cupboard,
• open a refrigerator,
• grasp a jug of water,
• pour water into the glass,
• return the jug to the refrigerator,
• close the door,
• insert a straw if necessary,
• drink the glass of water and
• return the glass to the table.
Each user tried the Manus arm 3-4
hours per day for 1-2 days at the
Department of Rehabilitation. Two of
the users asked to try the Manus arm at
home for 2 hours, and so they did.
Other tasks carried out by the users:
• Take a book or a binder from a shelf
and put it on a table or on their
knees.
• Insert a video tape into a video
cassette recorder and return the
video tape to a table.
• Reach the environmental control
unit from a shelf.
• Pick up things (e.g. a hand stick or a
remote control) from the floor.
• Pick up a dropped magazine from a
user’s feet and put it back on his
knees.
• Press door opening buttons and
elevator buttons.
• Open the front door of a user’s
house.
During the trials, comments and
suggestions from the users were written
down and followed by a discussion.
After the trials, a questionnaire was
sent to the eight users.
More thorough discussions have been
held with Ms Eva Gerdén after she
decided to order a Manus arm. There
has been a continuous dialogue with
her about adaptations, modifications
and suggestions for improvements as
well as about the importance of
independent living.
This paper presents technical
comments received during the user
trials and from Ms Eva Gerdén. The
results could be used for improvements
to the Manus arm, to other wheelchair-mounted
manipulators and to robots in
general.
Results of the questionnaire
Seven of eight users answered a
questionnaire:
• Only one user wanted to have a
Manus arm as it looks and works
today. The other users thought it
was too large, too heavy and too
difficult to control.
• However, four users would like a
Manus arm if it were improved. The
following improvements were
mentioned: It should be mounted on
the back of the wheelchair. It should
be possible to use the wheelchair
joystick to control the Manus arm. It
should be smaller, lighter, easier to
use and have more reach. It should
be possible to lift heavier things.
• Five users would like to try the
Manus arm again, if it was
improved.
• Speed: Three users think it is too
slow. Three users think it is OK.
• Strength: Four users think it is too
weak. Three users think it is OK.
• The most difficult thing when using
the Manus arm: Too many
“commands” for a small adjustment.
Too many functions to keep in mind
in the beginning. Using the joystick.
Comments and suggestions received
from the users
Size and position
One of the most commented issues is
the physical size and position of the
Manus arm, preventing the user from
driving the wheelchair close to a table
or maneuvering the wheelchair through
narrow passages.
Furthermore, the view from the
wheelchair is limited when the Manus
arm is mounted, and even more limited
when folded out.
Two of the users immediately stated
that it was awkward to have the Manus
arm mounted on the left side of the
wheelchair, since they are right-handed
(even if they have not used their right
hands for many years).
The fold-out and fold-in procedures
should be modified so that they do not
require so much space: turn the base all
the way to the user’s legs before folding
out the upper and lower arms just in
front of the user.
Weight
The Manus arm is mounted above one
of the front wheels, which makes
wheelchairs with small front steering
wheels difficult to steer. It is also
harder to drive the wheelchair up a
sidewalk curb.
Reach, payload and grasping force
More reach to the floor is needed. In
general, the reach is too short. The
maximum payload is too low to
manipulate a 1 kg pot without problems.
The position of the gripper relative to
the center of gravity of the object to be
grasped causes high torque. It should be
possible to see how hard the gripper is
holding an object.
Gripper fingers
A gripper with three fingers might be
more useful and might be more rigid
than the two-finger gripper. The fingers
of the gripper should be a little thinner,
narrower and rounded to be able to
grasp small things 45 degrees from
vertical.
Joystick, keypad and their menus
It is very difficult for the user to use
two joysticks (one for the wheelchair
and one for the Manus arm). A joystick
switch box for the Permobil wheelchair
is not yet available. The Manus display
should be integrated with the
wheelchair display.
The Manus joystick can rotate around
itself. This is a problem when you need
to have a Y-shaped adaptation on the
joystick on which you can put your
hand. If you lift the hand from this Yshaped adaptation, it is difficult to put
the hand back.
Sometimes it is not good to have the
movement of the Z-axis and the
open/close movement in the same
joystick menu. When you control the
joystick with your chin and move the
arm in the Z direction, it is hard to
prevent the gripper from opening by
mistake (and dropping an object).
However, when you can control the
joystick without problems, it is very
good to have these movements in the
same menu.
Detect the weight of a grasped object
(e.g. a milk package) to be able to
know how much it can be tilted before
the milk is at the edge of the package. It is
frustrating to find out that the package
is almost empty when you have been
very, very careful during the pouring
movements.
The two menu alternatives “Away” and
“Closer” should be added to the keypad
drink menu. This is good if you have to
grasp a glass close to the table, to
prevent the fingers of the gripper from
pushing against your lips. The speed of
the “Stop drinking” movement should
be faster than that of the “Start drinking”
movement.
New movements
Small and large circular movements
should be introduced, to be able to stir
sugar in a cup of coffee or to stir food
on the stove.
Short movements with high
acceleration would make it possible to
push food (e.g. meat balls) around in
the frying pan.
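A circular stirring movement of this kind could be generated as a short sequence of Cartesian waypoints around the centre of the cup. The sketch below is purely illustrative: the coordinates and the 12-point discretisation are assumptions, not Manus parameters:

```python
import math

def circle_waypoints(cx, cy, radius, n=12):
    """Return n (x, y) waypoints evenly spaced on a circle around (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Small circle for stirring sugar in a cup (coordinates in metres, made up).
waypoints = circle_waypoints(cx=0.40, cy=0.10, radius=0.02)
# Each waypoint would be sent to the arm controller in turn, at constant
# height, to trace the stirring circle; a larger radius stirs food in a pot.
```

Varying the radius argument gives the small and large circular movements the users asked for.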
Discussion & Conclusion
The mounting position of the Manus
arm unnecessarily limits the number of
potential users. People with spinal cord
injuries at the C5-C6 levels will hardly
accept a Manus arm that stops them
from driving very close to a table, which
is necessary for them to be able to use
their limited arm/hand functions.

A solution where the Manus arm could
temporarily be moved back along the
side of the wheelchair is desirable. It
should still be possible to use the
Manus arm from this position. An arm
mounted on the back of the wheelchair
would be a better solution in this
perspective, since the wheelchair would
be narrower without the arm on the side.

The results of the user trials indicate
that integration of the wheelchair and
the robot arm is the key to success for
wheelchair-mounted manipulators. If
wheelchair manufacturers could have
their wheelchairs prepared and
approved for mounting robot arms, the
enormous amount of work for each
adaptation could be reduced and the
user would have an optimum solution.
The robot might then be worth using.

Acknowledgements
Funding for carrying out the user trials
and creating a National Rehabilitation
Robotic Center was provided by The
National Board of Health and Welfare
in Sweden. Research activities in this
field were funded by Stiftelsen för
bistånd åt rörelsehindrade i Skåne, a
Swedish foundation.

References
[1] G Peters; F de Moel. “Evaluation of Manus Robot Arm users in the context of the General Invalidity Act”. GMD Evaluation report, 1996.
[2] H H Kwee. “Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking”. Robotica, vol 16, pp 491-498, 1998.
[3] B Jönsson. “Certec’s core”, 1997. http://www.certec.lth.se/doc/certecscore/
Author Address & Contact
Information
Håkan Eftring
Certec
Lund University
Box 118
SE-221 00 Lund
SWEDEN
E-mail: [email protected]
Ms Eva Gerdén is happy to answer any
questions about her Manus arm.
E-mail:
[email protected]
MOBINET: THE EUROPEAN RESEARCH NETWORK ON
MOBILE ROBOTICS TECHNOLOGY IN HEALTH CARE SERVICES
Nikos I. Katevas
Head of R&D Dept.,
ZENON SA - Industrial Automation
Kanari 5, Glyka Nera Attikis,
15344 Athens, Greece
[email protected]
Abstract: The goal of this paper is to
present the MobiNet project: Mobile
Robotics Technology for Health Care
Services Research Network. The main
objective of MobiNet is to concentrate
the forces of European scientists on the
prototype design of an autonomous
mobile robot for health care services,
by incorporating and developing
innovative, state-of-the-art techniques
as a result of the joint research
activities. MobiNet is supported by the
European Union (EU) under the TMR
programme. A short description of the
TMR Programme follows, together
with selected details on the MobiNet
project’s objectives, partnership,
progress, and potential application
fields.
The TMR programme
The Training and Mobility of
Researchers (TMR) Programme of the
EU aims to stimulate the training and
mobility of researchers. In particular,
its research-network area fully supports
training through research and the
utilisation of human resources through
transnational mobility and co-operation.
Currently, almost 100 networks are in
progress for the period 1994-1998.
TMR research networks finance young
researchers who are appointed to
reinforce the research teams
participating in a common project.
Community support covers the
networking costs associated with the
network activities. The benefit is
considered valuable for both sides: the
young researchers receive training
through research in highly qualified
teams, and the participating teams take
part in ambitious research projects and
exchange know-how at a pan-European
level. The programme covers the
following disciplines: Mathematics and
Information Sciences, Physics,
Chemistry, Life Sciences, Earth
Sciences, Engineering Sciences, and
Economic, Social and Human Sciences.
MobiNet falls under the Engineering
Sciences discipline.
MobiNet Project Overview
MobiNet is a research network for the
establishment of scientific and
technological co-operation, aiming at
the design of a fully autonomous mobile
robot for use in health care services.
The main objective of this research
network is to concentrate the forces of
European scientists on the prototype
design of an autonomous mobile robot
for health care services, using and
developing innovative, state-of-the-art
techniques as a result of the joint
research activities. The aim of the
research network is to organise and
establish an active workgroup of
researchers in mobile robotics
technology and health care services, in
a formal co-operative status.
The outcome of the project is the
detailed design of the prototype of an
autonomous mobile robot with high
maneuverability and manipulability
features. This prototype design will be
the final deliverable, integrating the
accumulated research results of three
years. Operational features of the
proposed mobile robot include
execution of complicated manipulation
and transport tasks.
Environment perception is realised with
onboard sensors such as vision,
ultrasound and infrared. The behaviour
of the robot is optimised for indoor
health care tasks, interacting with the
user through high-level commands.
All levels of autonomy are being
addressed. The network is addressing a
wide range of topics beyond the state
of the art, such as hierarchical task
planning, reactive/fuzzy control,
intelligence distribution and
organisation, real-time control of
multi-joint/wheeled robots, path
planning and obstacle avoidance
methods for structures with complicated
kinematic constraints, sensor fusion of
multidimensional information,
environment perception, robot guidance
with visual feedback, representation
and modelling techniques, advanced
man-machine interfaces, etc.
The Partnership
The MobiNet consortium is composed
of 12 highly qualified groups from 9
countries holding complementary
expertise, among them some of the
most respected universities and
enterprises.
Zenon SA (ZENON), in charge of the
project management and holding
experience in mobile robotics projects,
undertakes research efforts focused on
innovative path planning and sensor
fusion techniques. FernUniversitat
Hagen (FERNUNI) contributes its long
experience in sophisticated
omnidirectional mobile robots. The
National Technical University of
Athens (NTUA) participates with two
laboratories (IRAL, BEL), providing
expertise in neural-based control
systems, virtual environments,
telerobotics and task/path planning.
Universidad Politecnica de Madrid
(UPM) offers know-how in reactive
control architectures and artificial
intelligence techniques. TNO/TPD
(TNO), along with Scuola Superiore
Sant’Anna (SSSA), is involved in robot
manipulability issues, based on their
background in the construction of
special-purpose manipulators attached
to mobile platforms. Lund University
(LUND) concentrates on advanced
simulation and design of robots with
high complexity. University of Bremen
BIBA (BIBA) supplies its know-how in
modern robot-oriented telepresence
applications and sensing techniques.
The University of Reading (UOR)
Cybernetics Department and University
of Dublin, Trinity College Dublin
(TCD) perform research in several
fields, such as vision-based robot
guidance and learning systems,
employing highly sophisticated neural
networks. FTB (FTB) and University of
Montpellier (UMFM) link the
technology providers to the service-robot
users’ community and contribute
to user interface issues.
MobiNet Project Progress
MobiNet has been running for almost
two years. Throughout this time, the
participating teams conducted surveys
of existing methods for the topics of
interest and, after defining and dividing
the research effort among the available
teams of experts, proceeded to develop
innovative solutions. Being a
consortium of 12 partners from 9
European states, MobiNet had to face
differences in language, culture and
background across its multinational
scientific teams. In addition, MobiNet
includes teams from different
disciplines. However, the work toward
the common goal proved to be a
pleasure and beneficial for all
participants. Indeed, some interesting
results have already been announced as
the outcome of this fruitful
co-operation. The floor for these
announcements has been the annual
MobiNet symposia (two held so far),
but some of the joint work has also
been published outside the network.
In MobiNet, the notion of a WorkGroup
has been used to organise the research
activities and to contribute to the
project’s final deliverable. The MobiNet
WorkGroups are structured as follows:

WorkGroup 1: System Architectural
and Control Issues addresses the
architectural and control issues of the
overall system, which can combine e.g.
a mobile robot, a manipulator, a set of
sensors and a man-machine interface.
Control issues included here should
address the global system control
problems. Participants are NTUA,
UOR, LUND, TCD, ZENON, TNO,
and UPM.

WorkGroup 2: Mobile Robot focuses
on topics related to mobile robot design
and control. Issues addressed may
include: path planning methods using
conventional, fuzzy logic or neural
network techniques, reactive motion
control, mobile robot control methods
(exclusively for mobile robots), issues
regarding mobile robot kinematics
configuration, robot design, etc.
Participants are ZENON, FERNUNI,
NTUA, UPM, and TCD.

WorkGroup 3: Manipulator focuses on
topics related to manipulator design
and control. Issues addressed may
include: manipulator design studies,
manipulator path planning and control
methods (exclusively for manipulators),
etc. Participants are TNO, SSSA,
NTUA, LUND, and BIBA.

WorkGroup 4: Environment Learning
addresses topics related to methods of
robot environment perception. Issues
addressed may include: sensor fusion
and integration, robot localisation
issues, sensor configuration and
placement, etc. Participants are UPM,
TCD, ZENON and SSSA.

WorkGroup 5: Man-Machine
Interfacing addresses all topics related
to man-machine interfacing. Issues
addressed may include: user interface
design and construction, ergonomics
and social studies, etc. Participants are
FTB, UMFM, BIBA, NTUA, UOR,
and TCD.

The MobiNet consortium is working on
a project book to be titled “Advanced
Mobile Robots in Health Care
Services”. The proposed structure is as
follows:
Part 1 – System Architecture & Control
Part 2 – Mobile Robots
Part 3 – Manipulators
Part 4 – Environment Learning
Part 5 – Man-Machine Interfacing
Part 6 – Special Issues
The project book is expected to be
finished in the year 2000 and will
include all research results and
scientific findings for the topics
addressed.

Potential Applications
TMR networks are open to all fields of
the exact, natural, economics and
management sciences, as well as to
those social and human sciences that
contribute to the objectives of the
Fourth Framework Programme. Among
them, we can consider the development
of scientific and technological
excellence in Europe with the aim of
responding to the needs of industry and
improving the quality of life in the
Member States. In the case of MobiNet,
and in the area of the so-called Health
Care services, there is a strong demand
for automation of several health care
tasks, such as transportation and
manipulation of materials, drugs,
meals, files, etc., and this on a 24-hour
basis and in dynamically changing
environments. The potential becomes
even larger if we additionally consider
advanced applications in rehabilitation
fields. And the market volume explodes
if we finally take into account the use
of service robots in the daily tasks of
everybody. The MobiNet Network is
consequently addressing a broad range
of applications in Health Care Services,
including those under the label “Service
Robots”. The Network partners are at
the same time strongly linked to the
interested industry and in close contact
with the user groups, in order to
integrate their feedback into the
project’s activities.
Training Aspects
Training of young scientists represents
one of the central objectives of the
network. Training courses have been
selected and organised offering vertical
knowledge in all the disciplines
addressed e.g.: Neuro-fuzzy path
planning methods, Hybrid Path
Planning, Autonomous Navigation,
Distributed control systems, Sensor
based control of mobile robots with
visual feedback, Human-machine
interfaces, Docking mobile robots,
Modular interface architectures, etc. In
addition to this extensive programme,
three MobiNet Symposia were
scheduled, including tutorials and
giving the researchers the opportunity
to exchange experience. Two of these
symposia have already been held, in
Athens (May 1997) and Edinburgh
(July 1998). It is of major importance
that young researchers practice
co-operating with experienced
researchers and demonstrate their
knowledge by actively contributing to
all joint research activities.
Project Profile
Project Co-ordinator
1. Dr. Nikos KATEVAS
ZENON SA - Industrial Automation
Kanari 5, Glyka Nera,
15344 Athens, Greece
tel: +30 1 6041582
fax: +30 1 6041051
e-mail: [email protected]
Other Participants
2. H. Hoyer - FernUniversitat Hagen
(DE)
3. S. Tzafestas, D. Koutsouris –
National Technical University of
Athens (GR)
4. F. Matia - Universidad Politecnica
de Madrid (ES)
5. C. Buhler - FTB (DE)
6. G. Lacey - University of Dublin
(IE)
7. M. Kroemker - BIBA, Bremen
University (DE)
8. W. Harwin - University of Reading
(GB)
9. G. Bolmsjo - Lunds University (SE)
10. P. Rabischong - University of Montpellier (FR)
11. P. Dario – Scuola Superiore Sant’Anna (IT)
12. K. van Woerden - TNO / TPD (NL)
Start Date
1 October 1996
Duration
48 Months
Contract reference
ERBFMRXCT960070
Acknowledgements
We want to acknowledge the funding
of the European Community through
the MobiNet project FMRX-CT960070
(Training and Mobility of Researchers
programme).
References
[1] Research Training Networks (1995-1996): Practical Information and Programmes. EUR-17654, European Communities (1997).
[2] S. G. Tzafestas, D. G. Koutsouris and N. I. Katevas, Proc. of the 1st MobiNet Symposium, Athens (Greece), 15-16 May 1997.
[3] N. I. Katevas, Proc. of the 2nd MobiNet Symposium, Edinburgh (Scotland), UK, 23 July 1998.
AFMASTER: AN INDUSTRIAL REHABILITATION WORKSTATION
Rodolphe GELIN*, Françoise COULON-LAUTURE*, Bernard LESIGNE*
Jean-Marc Le BLANC**, Dr. Michel BUSNEL***
* Commissariat à l’Energie Atomique
** AFMA Robots
*** Association APPROCHE
ABSTRACT
Experiment and evaluation show that
robotized workstations are excellent
tools to allow severely disabled people
to get back to work. The modularity of
such workstations provides users with a
way to regain a part of autonomy in
their daily life (cooking, drinking or
playing). Up to now, there was no
industrial workstation able to provide
not only powerful functions but also
robustness and reliability. Most
workstations were laboratory
prototypes and, in spite of the efforts of
the developers, reliability was the weak
point of the system. The French
association APPROCHE has been
working for many years to convince
users, doctors and occupational
therapists that robots are one of the
best ways to assist disabled people. In
1998, after two years of massive
evaluation of the EPI-RAID
workstations, APPROCHE asked
AFMA Robots, a French manufacturer
of industrial robots, to develop a new
workstation. This AFMASTER
workstation would be based on
principles experimented by CEA on the
MASTER, RAID and EPI-RAID
workstations, but built with the
methods and quality of an industrial
manufacturer. The first two
AFMASTER workstations will be
operational in summer 1999. This paper
presents the results of the evaluation of
the EPI-RAID workstations and the
design of the new AFMASTER ones.

INTRODUCTION
More open than the Handy 1 robot,
easier to control than the Manus robot,
the robotized fixed workstation should
have found many applications in the
rehabilitation of disabled people.
Nevertheless, 15 years after the first
MASTER prototype, this kind of robot
is still less used than the robots of
Rehab Robotics or Exact Dynamics.
While the DeVAR project was
spreading its wings in the USA [1], the
RAID and EPI-RAID European
projects brought the concept of
MASTER to its maturity. But there
were two steps left before an actual
dissemination: a complete evaluation of
such a workstation and a real
industrialization to obtain a reliable
and performing product.
The APPROCHE association, CEA
and AFMA Robots have come together
to take these decisive steps.
THE TEAM

The APPROCHE association
Founded in 1992, the French
APPROCHE association gathers
rehabilitation centers to promote the
use of robotized systems for the
rehabilitation of disabled people. In
1995, APPROCHE bought 5 EPI-RAID
workstations and 2 embedded MANUS
arms. These robots have been evaluated
by around 100 users in 10 French
rehabilitation centers. This massive
evaluation is partly described in this
paper. After this fruitful experiment,
APPROCHE wants doctors to become
able to prescribe robots to patients as
they prescribe electric wheelchairs.

CEA
The French Atomic Energy
Commission has been involved in
rehabilitation robotics for more than 20
years. In the 70s, CEA proposed to
apply its knowledge of robotized
manipulation to the rehabilitation
domain. For the Spartacus project and
the very first MASTER project, CEA
was the main designer and developer of
the robotized system. Within the RAID
and EPI-RAID projects, CEA,
associated with European rehabilitation
centers, improved the concept of the
workstation. CEA was the technical
support of APPROCHE during the
evaluation of the 5 workstations.

AFMA Robots
AFMA Robots is a 42-person company.
It produces Cartesian manufacturing
robots and performs engineering of
robotized cells. Since 1980, AFMA has
built more than 800 robots used in
many industrial areas (automotive and
aerospace industries). The
competencies of AFMA Robots range
from mechanical design to the
automated programming of
multiple-robot systems. Developing a
new robotized workstation for
rehabilitation is a new challenge for
AFMA: the constraints of cost are far
stronger for this application.

EVALUATION OF THE EPI-RAID
WORKSTATION

The EPI-RAID Master workstation
The latest version of the Master
workstation (the EPI-RAID
workstation) was based on a PC and
transputer boards. Transputers were
used for real-time control of the arm.
The Man-Machine Interface of the
system used a graphical interface
developed under Windows 3.1 [2].
Besides the robot and its controller, the
workstation includes an environment
control system (ECS).
The graphical man-machine interface
can be configured for any kind of input
device. Whatever the user’s handicap,
as long as he is able to control a single
switch, he can access all the functions
provided by the workstation.
Fig. 1: the EPI-RAID Workstation

A programming language allows the
station to be automatically controlled
for complex or repetitive tasks
involving the robot and the ECS. A
pneumatic tool changer allows
choosing, according to the task to
perform, a universal gripper or a
sheet-of-paper manipulator.
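A task program of this kind can be thought of as a plain sequence of robot and ECS commands. The snippet below is hypothetical: the actual EPI-RAID programming language is not shown in this paper, so stub functions stand in for the real controller interface:

```python
# Hypothetical task script: fetch a video tape and insert it into the VCR.
# robot_move/robot_grip/ecs_switch are stubs standing in for the real
# workstation controller and ECS; they only record the issued commands.
log = []

def robot_move(place):       log.append(f"MOVE {place}")
def robot_grip(state):       log.append(f"GRIP {state}")
def ecs_switch(device, on):  log.append(f"ECS {device} {'ON' if on else 'OFF'}")

def insert_tape_task():
    """A repetitive task programmed once, then replayed automatically."""
    ecs_switch("VCR", True)       # power the VCR through the ECS
    robot_move("shelf")           # go to where the tape is stored
    robot_grip("close")           # grasp the tape
    robot_move("vcr_slot")        # bring it to the VCR
    robot_grip("open")            # release: tape is inserted
    robot_move("rest")            # park the arm

insert_tape_task()
print(len(log))  # 6 commands issued
```

Once programmed, such a sequence can be triggered by the user with a single switch, which is the point of combining the robot and the ECS in one language.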
Method of evaluation
APPROCHE bought 5 workstations to
be evaluated in 10 French rehabilitation
centers. 91 users (65 men and 26
women) have evaluated the workstation
[3]. Three domains of application were
proposed for evaluation: vocational
applications (inserting a floppy disk in
the PC, handling books or sheets,
stapling sheets together...), daily life
(handling a glass or bottles, taking
medicine, hanging up the phone...) and
leisure (inserting video or audio tapes,
handling CDs...). For 86% of the users,
the learning phase was short (2 days
long). 90% of the evaluations lasted
two weeks. Only one user worked with
the station for 6 months. 70% of the
evaluations happened without any
technical problem. Most of the failures
came from the ECS; the computer and
robotics failed less frequently.

Results of evaluation
First, no situations were found where
the disabled user could not use the
system. The results of the evaluation
showed that the interest and the
efficiency of the workstation are
particularly appreciated for vocational
tasks and leisure tasks.
Training was considered to be easy by
86% of the subjects. Access to the
control station was considered to be
well designed (75%), though 64% of
the users felt that a second control
station was necessary in order to
separate the different functions
(leisure, office, domestic), to have
better visibility of each part of the
station, or to use the station in a
recumbent position.
With respect to the operating modes,
84% considered the automatic mode
interesting, while 80% judged the
manual mode necessary on security
and autonomy grounds, but felt that in
practice it was too slow and too
complex. The environmental control
system was much appreciated (73%).
Other opinions gathered were: the
aesthetic judgement varied (44% were
appreciative, 16% did not like the
system, and 40% had no opinion);
61% considered the system
insufficiently reliable; 66% thought
the organisation of the station
functional, but in general the visibility
was considered poor.
Estimations of the autonomy and time
gain are reported in the table below.

                important   not important   none
Autonomy gain      33%           62%          5%
Time gain          17%           48%         35%
A psychological study was conducted
during the technical evaluation [4][5].
It reveals that using this kind of
assistance is interpreted by many users
as giving up the hope of ever using
their own body again. This feeling is,
of course, painful. But the principle of
programming tasks is very positive:
the user has to think about what has to
be done by the robot and has to
organise his ideas in order to explain
to the robot how to accomplish a task.
For some users, the workstation was
an opportunity to think again.
Last but not least, everyone agreed
that the cost of an EPI-RAID
workstation (about $100,000) was an
important obstacle to its real domestic
application.
THE NEW AFMASTER WORKSTATION
Objectives
When APPROCHE asked AFMA
Robots to develop a new fixed
workstation for severely disabled
people, it simply asked for a reliable
and cheaper EPI-RAID workstation.
AFMA translated this request into an
MTBF of 10,000 hours and a price of
$50,000 for the new AFMASTER
workstation.
Mechanical design
AFMA kept the design of a SCARA
robot. This kinematics gives a work
envelope wide enough to cover the
shelves the robot has to reach.
Unlike the EPI-RAID workstation, the
AFMASTER workstation does not
include a horizontal rail to extend the
working area. Several reasons explain
this choice. The first is economic: the
fewer axes you have, the cheaper
your station is. The second reason is
the optimization of the kinematics,
which gives the new workstation a
wider working area than the EPI-RAID
workstation. Finally, the workstation
is intended to be installed at home, so
its dimensions had to be compatible
with domestic constraints.
Furthermore, if only one user works
with the workstation, the number of
specific tasks to perform is lower than
if the station is shared by several
users.
The interface of the EPI-RAID
workstation has been preserved. The
user can choose a task by selecting
icons on the screen. An icon can
represent a task or a set of tasks.
We assumed that the user is able to
control a "mouse-like" input device.
The Controller
The controller is based on two PCs.
The first PC includes an 8-axis
controller board; this board reads the
position sensors to servo each axis.
The inverse and direct geometric
models are computed on this PC,
which needs neither a screen nor a
keyboard.
The second PC is dedicated to the
man-machine interface (see the
paragraph below).
The connection between the two PCs
is a regular 19.2 kbaud serial link.
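To make the inter-PC link concrete, here is a minimal Python sketch of a framed move command such as might travel over that serial line. The frame format, field layout and checksum are entirely illustrative assumptions, not the actual AFMASTER protocol:

```python
# Sketch of a simple command frame for a low-bandwidth serial link.
# The format ('M' marker, axis byte, float position, checksum byte) is
# an illustrative assumption.

import struct

def encode_move(axis, position_mm):
    """Frame: 'M' + axis byte + little-endian float position + checksum."""
    body = struct.pack("<cBf", b"M", axis, position_mm)
    checksum = sum(body) % 256
    return body + bytes([checksum])

def decode_move(frame):
    """Validate the checksum and recover (axis, position)."""
    body, checksum = frame[:-1], frame[-1]
    assert sum(body) % 256 == checksum, "corrupted frame"
    _, axis, position = struct.unpack("<cBf", body)
    return axis, position

frame = encode_move(3, 120.5)
print(decode_move(frame))
```

A 7-byte frame of this kind keeps per-command latency low even at 19.2 kbaud.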
Man-machine interface
The man-machine interface runs on a
multimedia PC under Windows 98.
The AFMASTER application allows
the user to control the robot, to run
programmed tasks and to use the
ECS.
Figure 2: Man-machine interface of the AFMASTER
application (the screen offers icons such as ECS,
AUTO, MANUAL, STOP, QUIT, Return and Help)
A scanning facility is provided to
assist the user in selecting an icon.
A Sound Blaster board and a modem
are integrated into the PC, giving the
user an Internet connection and
integrated phone and fax facilities.
The IBM Gold speech recognition unit
allows the system to be controlled by
the user's voice. The workstation
application is completely Windows 98
compatible, so it can be controlled by
the speech recognition unit as easily
as any other Windows 98 application.
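The scanning facility mentioned above can be sketched as follows: the interface steps through the icons and a single switch press selects the highlighted one. The icon names and the switch callback are illustrative assumptions, not the AFMASTER implementation:

```python
# Illustrative sketch of single-switch scanning: each icon is highlighted
# in turn; pressing the switch selects the currently highlighted icon.

import itertools

ICONS = ["ECS", "AUTO", "MANUAL", "STOP", "HELP", "QUIT"]

def scan(switch_pressed, icons=ICONS):
    """Cycle through icons until the switch callback returns True."""
    for icon in itertools.cycle(icons):
        # A real interface would highlight the icon here and wait a
        # fixed dwell time before advancing to the next one.
        if switch_pressed(icon):
            return icon

# Simulated user who presses the switch when MANUAL is highlighted:
print(scan(lambda icon: icon == "MANUAL"))
```

This is why a single switch suffices for full access: any icon is reachable by waiting for it to come around in the scan.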
One of the problems to solve was to
use the same microphone for the
speech recognition unit and for the
phone, and to use the same speakers
for the phone, speech synthesis and
audio CD listening.
AFMA Robots has developed an ECS
connected to the parallel port. This
universal remote control can be
programmed and used through
software running on the PC, and this
software can be used by the
AFMASTER application.
Next steps
The first AFMASTER workstation
will be delivered to APPROCHE in
June 1999. APPROCHE will use this
station in Kerpape to promote this
new industrial product and to show
concerned people that this kind of
product exists, is reliable and can
assist disabled people in their daily
lives.
APPROCHE will buy ten of these new
workstations within the next two
years. These stations will be used for
the same application in the other
APPROCHE rehabilitation centers.
CONCLUSION
The good results of the evaluation of
the EPI-RAID workstation helped the
APPROCHE association convince an
industrial robot manufacturer, AFMA
Robots, to accept the challenge of
building a reliable, cheap and
well-performing robotized workstation
for disabled people. Thanks to the
long experience of CEA, AFMA
Robots designed and realized the new
AFMASTER workstation in less than
one year. The first ten models of this
workstation will be used by
APPROCHE in its rehabilitation
centers to promote this industrial
assistive device. We hope that, within
two years, the eleventh AFMASTER
workstation will move into a user's
home.
REFERENCES
[1] Hammel J, Van der Loos HFM,
Leifer J. "DeVAR transfer from R&D
to vocational and educational
settings." ICORR '94, Wilmington,
Delaware, USA.
[2] Dallaway JL, Jackson RD. "RAID
- a vocational workstation." ICORR
'92, Keele, UK.
[3] Le Claire G. "Résultats définitifs
de l'évaluation réadaptative de
RAID-MASTER II et MANUS II."
Internal report of APPROCHE.
[4] Morvan JS, Torossian V,
Cayot-Decharte A. "Evaluation
psychologique du système robotisé
RAID-MASTER II." Internal report of
Université René Descartes.
[5] Busnel M, Lesigne B, et al. "The
robotized workstation MASTER for
quadriplegic users - description and
evaluation." Journal of Rehabilitation
Research and Development, 1999.
AUTHOR ADDRESS
Rodolphe GELIN
CEN/FAR - BP6
92265 Fontenay aux Roses cedex
France
Tel: 33 1 46 56 86 53
Fax: 33 1 46 54 75 80
[email protected]
DESIGNING A USABLE INTERFACE FOR AN INTERACTIVE ROBOT
Simeon Keates (1), John Clarkson (1) and Peter Robinson (2)
(1) Department of Engineering, University of Cambridge
(2) Computer Laboratory, University of Cambridge
ABSTRACT
The traditional emphasis of
Rehabilitation Robotics has largely
been on the logistics of system
development rather than on how to
maximise overall system usability [1].
The research programme at Cambridge
has focused on the shortcomings of
this approach and the identification of
strategies for placing the user
exclusively at the centre of the design
process [2].
This paper describes the re-design of
the interface for an Interactive Robotic
Visual Inspection System (IRVIS) and
how this was used to formulate a
structured, methodical approach to
user-centred interface design. A
discussion of the original IRVIS
interface design will be presented,
followed by a description of current
usability theory and its role in
formulating the proposed five-level
user-centred design approach. The
results of the evaluation of this
approach, through user trials, will also
be discussed.
BACKGROUND
The aim of the IRVIS system is to
enable the remote inspection of hybrid
microcircuits. Currently the inspection
task is performed by able-bodied
inspectors who handle the circuits
under an optical microscope. The
IRVIS system is being developed
because the inspection process is
fundamentally a visual task and
potential inspectors are being excluded
from this vocational opportunity
because of the current reliance on the
manual manipulation of the circuit.
The use of IRVIS in the workplace will
remove an unnecessary barrier to
motion-impaired operators.
The IRVIS prototype
A prototype IRVIS system was
developed by Mahoney [3]. It consists
of a movable tray with three degrees of
freedom and a digital video camera
mounted on a tilting gantry above with
freedom to translate (Figure 1).
Figure 1. The IRVIS System.
This arrangement of five motors,
whilst offering all the requisite
functionality, resulted in complex
kinematics for basic inspection tasks.
For example, examining a wire bond
from all possible angles involves tray
and camera translation, tray rotation
and gantry tilting. Consequently, a
routine inspection procedure can
involve all five motor axes.
Interface design and user trials
An interface for IRVIS was designed
using the Cambridge University Robot
Language (CURL - Figure 2). This was
menu-driven, with the inspectors
specifying the axis and magnitude of
motion to be generated.
Figure 2. The CURL interface.
User trials at a local hybrid
microcircuit
manufacturer
demonstrated the feasibility of the
system, but highlighted a significant
shortfall in overall usability. Put
simply, the system was not meeting the
needs of the inspectors and a new
interface was clearly required.
NEW PRODUCT DESIGN
There are three steps to be considered
in developing all new products, such as
IRVIS: (1) defining the problem to be
addressed; (2) developing a solution
and (3) evaluating the solution [4]. The
following sections describe how these
three stages were applied to IRVIS and
subsequently subdivided to form a
five-level design approach that is
applicable to generic interactive system
design.
1 - PROBLEM DEFINITION
The problems with the original CURL
interface were principally due to the
users being unable to understand and
predict the effects of commands
entered through the interface and the
resulting motion of the robot. The
commands were too abstract and
distant from the immediacy of manual
circuit manipulation, resulting in a lack
of feeling ‘in control’. The IRVIS
system required a structure enabling
intuitive direct control, rather than the
more detached supervisory control
offered by the CURL interface.
It was quickly realised that an
understanding of generic inspection
routines was needed and data
collection sessions were organised
with the manufacturer involved in the
original user trials. Experienced
inspectors were video-recorded and
study of the tapes provided detailed
information on inspection procedures.
The generic actions observed were
classified into five categories:
translation, rotation, tilting, zooming
and focusing.
2 - DEVELOPING A SOLUTION
Any approach to the development of
interactive mechatronic systems needs
to support the concurrent development
of both the mechatronic hardware and
the system interface, whilst retaining a
central focus on usability.
Usability approaches to design
Nielsen [5] gives an account of the use
of heuristics in a usability inspection
method known as "heuristic
evaluation". Three of these heuristics
directly address the observed
shortcomings in the CURL interface
and collectively form the basis of a
design approach:
• Visibility of system status - for the
user to have sufficient feedback to
have a clear understanding of the
current state of the complete system;
• Matching system and real world -
for the system to respond
appropriately to changing user input;
• User control and freedom - for the
user to have suitably intuitive and
versatile controls for clear and
succinct communication of intent.
Building on these heuristics, a design
approach was developed that expands
the second stage of the design process,
solution development, into three
specified steps. Each level of the
resultant design process (Figure 3) is
accompanied by motion-impaired user
trials at the Papworth Trust throughout
and a final evaluation period before
progression to the next level, thus
providing a framework with clearly
defined goals for system usability.
The role of the prototype
An integral part of the design approach
is the use of prototypes to embody the
system at each stage of development.
There are a number of forms that a
prototype can take from low fidelity
abstract representations through to
high fidelity working models.
Extending directly from the principles
of prototype fidelity, a variable fidelity
prototype for use in the IRVIS redevelopment was proposed at the
previous ICORR conference [6]. This
prototype was in essence a software
simulation of the proposed system that
encompasses both the appearance and
functionality of the user interface and
the mechanical properties of the
robotic hardware.
Figure 3. The design approach:
Stage 1 - Level 1, Problem specification: specify the
complete problem to be solved; verify problem definition.
Stage 2 - Level 2, Visibility of system status: develop a
minimal, but sufficient, representation of the system; verify
user understanding. Level 3, Matching system and real
world: augment the behaviour of the model with simulated
kinematics; verify system behaviour. Level 4, User freedom
and control: develop quality of control and consider
'handling'; verify user comfort.
Stage 3 - Level 5, Evaluation / validation: evaluate system
usability; validate system usability.
Visibility of system status
After developing a basic model of the
system, work focused on the problem
of defining a minimal, but sufficient,
representation of the system for the
user to be able to interact with. This
version of the revised interface showed
an overview of the robot and a camera
view of the inspection tray (Figure 4).
The user was able to select control
over any one of the robot's individual
motors and to drive them by moving
the cursor in the display windows and
pressing either mouse button.
Figure 4. The first interface revision.
Users were asked to predict the
machine's behaviour as a result of their
input. Initially, the users had some
difficulty understanding what was
being presented to them and it quickly
became clear that apparently simple
details can make a substantial
difference to the overall usability.
Small changes such as the addition of a
view cone, use of colour-coding and a
little extra geometric detail led to a
representation of the system that
required almost no explanation. Users
who encountered the final version of
the interface were able to successfully
perform simple positioning tasks.
Matching system and real world
Having established a representation
that afforded sufficient feedback to the
user, the next step was to include
kinematic motion in the model. The
user trials utilised in this stage of the
research were to ensure that the
simulated robot response to user input
was consistent with that of the actual
hardware.
The kinematics used to drive the
physical system were reconstructed in
the virtual system and a clearer
understanding of the nature of the
user’s view of the geometry led to an
intuitive set of driving controls.
Discrepancies were identified between
the anticipated and actual response
behaviour. These were a result of weak
assumptions made in the original
interpretation of the robot system
kinematics. Poor performance of
operations such as rotation about a
point had previously been attributed to
mechanical inaccuracies; working
within a simulated environment
identified the control software as the
origin.
User freedom and control
The next stage concentrated on
assessing the ease of interaction
between the user and the simulation
interface, identifying particular aspects
of the interface that required
modification. From each of the
previous levels, it was clear that all of
the users wished to interact as directly
as possible with the circuit and not
with the motors. Consequently, the
individual motor controls were
replaced with generic movement types,
specifically translation, rotation and tilt
(Figure 5).
Figure 5. The final interface.
The size and direction of each of these
inputs were directly proportional to the
magnitude and direction of the input
device movement. Thus the user could
manipulate the circuit directly and the
interface became easier to use. The
speed-of-response parameters were
also investigated to verify that the
users were comfortable with the 'feel'
of the virtual robot. This was achieved
by establishing a series of
pseudo-inspection tasks and acquiring
interaction data that could be analysed.
One of the most important
improvements arising directly from the
user trials was the development of a
position-control input paradigm to
complement the original velocity
control. Velocity control moves the
cursor at a rate proportional to the
displacement of the transducer from
the central datum, whereas position
control moves it by a distance
proportional to this displacement.
Position control proved to be both a
quantitative and qualitative success.
The users found the interface easier to
interact with and more intuitive.
Experiments showed that for all users
the fastest times obtained under
position and velocity control were
similar. However, position control
required lower levels of acceleration
and velocity, requiring a less
demanding mechanical specification
for the robot.
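The two mappings compared above can be sketched as follows; the gain and tick period are illustrative assumptions, not measured IRVIS parameters:

```python
# Sketch of the two input mappings. Velocity control integrates the
# transducer displacement over time; position control maps it directly
# to a cursor offset. Gain and tick period are assumed values.

def velocity_step(cursor, displacement, gain=2.0, dt=0.02):
    """Velocity control: cursor moves at a rate proportional to the
    transducer's displacement from the central datum."""
    return cursor + gain * displacement * dt

def position_map(datum, displacement, gain=2.0):
    """Position control: cursor offset is proportional to displacement."""
    return datum + gain * displacement

# Holding the transducer at +1.0 for one second of 20 ms ticks under
# velocity control drifts to the point position control selects at once.
cursor = 0.0
for _ in range(50):
    cursor = velocity_step(cursor, 1.0)
print(round(cursor, 6), position_map(0.0, 1.0))
```

The sketch makes the paper's finding plausible: both mappings can reach the same target, but position control gets there without sustained cursor velocity, hence the lower demands on robot acceleration.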
3 - EVALUATION
In order to assess the usability of the
redesigned interface when used in
conjunction with the robot, the IRVIS
robot was transported to Papworth for
user trials (Figure 6). Only one of the
users had used the IRVIS robot before,
but all had experience of the
simulation.
Figure 6. User trial evaluation.
The evaluation exercise consisted of
the users manipulating a hybrid
microcircuit in each of the generic
inspection modes (translation, rotation,
etc.). Users were asked whether they
felt that they were interacting directly
with the robot and if the speed of
response was too slow.
Qualitative feedback from all the users
was extremely favourable. Each user
found the new interface easy and
intuitive to use and all completed the
tasks with a minimum of guidance. No
user complained of the speed of
response of IRVIS being too slow. This
was a significant result, because it had
been previously thought that IRVIS
was mechanically under-specified. The
new interface showed that the cause of
the problems was in the software
implementation and not mechanical in
origin, thus saving an expensive, and
unnecessary, re-build.
A representative from the manufacturer
involved in the original evaluation of
IRVIS declared the revised system to
be fully fit for use and is pursuing
quotes for remote inspection devices,
based on the IRVIS specification.
CONCLUSIONS
The most important outcome from this
research has been the development of a
five-level approach to interactive
system design. This approach provides
a substantive framework for the design
process, with specific usability goals
throughout the design cycle. This
structure and focus on usability is a
key strength of the process over more
traditional approaches.
Validating the effectiveness of a design
approach is difficult, but one way is to
verify the success of products
developed using it. The significant
increase in usability of the IRVIS
interface shows that the design
approach can yield notable
improvements in a product's fitness for
purpose.
Acknowledgements
This project was funded by the
Engineering and Physical Sciences
Research Council. We thank Bob
Dowland for his contribution to this
work. We also gratefully acknowledge
the staff and residents of the Papworth
Trust for their time and efforts.
References
[1] Buhler C. "Robotics for
Rehabilitation - A European(?)
Perspective." Robotica. 16(5).
487-490. (1998).
[2] Keates S, Robinson P. "The Role of
User Modelling in Rehabilitation
Robotics." Proceedings of ICORR '97.
75-78. (1997).
[3] Mahoney RM, Jackson RD, Dargie
GD. "An Interactive Robot
Quantitative Assessment Test."
Proceedings of RESNA '92. 110-112.
(1992).
[4] Keates S, Clarkson PJ, Robinson P.
"Developing a methodology for the
design of accessible interfaces."
Proceedings of the 4th ERCIM
Workshop. 1-15. (1998).
[5] Nielsen J. Usability Inspection
Methods. John Wiley & Sons. (1994).
[6] Dowland R, Clarkson PJ, Cipolla
R. "A Prototyping Strategy for use in
Interactive Robotic Systems
Development." Robotica. 16(5).
517-521. (1998).
Address
Dr Simeon Keates
Engineering Design Centre
University of Cambridge
Trumpington Street
CAMBRIDGE. CB2 1PZ. UK.
Tel: +44 (0)1223 332673
Fax: +44 (0)1223 332662
E-mail: [email protected]
A ROBOTIC MOBILITY AID FOR FRAIL VISUALLY IMPAIRED PEOPLE
Shane MacNamara, Gerard Lacey
Department of Computer Science, Trinity College Dublin, Ireland
ABSTRACT
This paper discusses the design of a
smart mobility aid for frail, visually
impaired people. The device is based
on the concept of a walker or rollator -
a walking frame with wheels. The
device, which is called the PAMAID
(Personal Adaptive Mobility Aid), has
two modes of operation - manual and
assistive. In manual mode the device
behaves very much like a normal
walker. In assistive mode, the
PAMAID assumes control of the
steering and will navigate safely inside
buildings, giving the user feedback on
the immediate environment via a
speech interface. The PAMAID was
evaluated in a nursing home in Ireland
and the results of these tests will be
briefly presented.
INTRODUCTION
Comprehensive statistics on dual
disabilities are rare. Some studies do
provide compelling evidence that there
is a substantial group of elderly people
with both a visual-impairment and
mobility difficulties. Ficke[1] estimated
that of the 1.5 million people in nursing
homes in the United States around 23%
have some sort of visual impairment
and 71% required some form of
mobility assistance. Both visual
impairments and mobility impairments
increase substantially with age. Rubin
and Salive[2] have shown that a strong
correlation exists between sensory
impairment and physical disabilities.
The people in this target group have
difficulty using conventional
navigational aids in conjunction with
standard mobility aids. Their lifestyle
can thus be severely curtailed because
of their heavy dependence on carers.
Increased mobility would lead to more
independence and a more active,
healthier lifestyle.
A number of electronic travel aids for
the visually impaired already exist;
Farmer [3] provides a comprehensive
overview. A small number of devices
have reached the stage of extensive
user trials, notably the Laser Cane [4],
the Pathsounder [5] and the
Sonicguide [6]. None of these devices,
however, provides any physical
support for the user. A full review of
assistive technology for the blind is
provided in [7].
DESIGN CRITERIA
A number of considerations had to be
taken into account when designing the
device. The device has to be
constructed such that the cognitive load
on the user is kept to a minimum. Thus
the user interface has to be very simple
and intuitive.
The device has to be safe and reliable
to use, and the user must have
immediate control over the speed. For
this reason, it was decided that the
device should not have motorised
locomotion; only the steering is motor
controlled. This also substantially
reduces the power requirements of the
mobility aid. The one disadvantage of
giving the user control over the speed
is that, from a control perspective, the
system becomes under-determined:
one of the two control parameters is
lost and the system is more difficult to
control. As a consequence, the control
loops must be tight so that the system
can react to unexpected changes such
as the user accelerating when close to
an obstacle.
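The kind of tight reaction described above can be sketched as a simple rule: since the user, not a motor, sets the speed, the controller watches the sonar range and commands a braking response that grows when the user is fast and close. All thresholds and gains here are illustrative assumptions, not the device's actual tuning:

```python
# Rule-of-thumb braking sketch for a user-propelled device: the controller
# cannot reduce the driving force, so it commands a wheel toe-in angle
# (see the mechanical design section) proportional to how fast and how
# close the user is. Thresholds and gains are assumed values.

def brake_toe_in_deg(range_m, speed_m_s):
    """Return a wheel toe-in angle in degrees (0.0 = no braking)."""
    if range_m > 1.0:               # nothing within a metre: no braking
        return 0.0
    severity = min(1.0, speed_m_s) * (1.0 - range_m)
    return 5.0 * severity           # up to a few degrees of toe-in

print(brake_toe_in_deg(2.0, 0.8))   # clear ahead
print(brake_toe_in_deg(0.5, 1.2))   # fast and close
```

Such a rule would run inside the tight control loop, re-evaluated at every sensor update.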
To make the device as inexpensive as
possible, most of the components are
available off-the–shelf. Ultrasonic
range sensors were chosen over a laser
scanning rangefinder to further reduce
the potential cost of the device.
MECHANICAL DESIGN
The mechanical design of the device is
very similar to that of a conventional
walker, with a few important
differences. The two castor wheels at
the front of the walker have been
replaced by two wheels controlled by
motors. The motors are solely for
adjusting the steering angle of the
device; they do not in any way propel
it. Absolute encoders return the
angular position of each of the front
wheels. The device thus has kinematic
constraints similar to those of an
automobile.
Fig 1. Photograph of mobility device
Handlebars are used for steering the
device in manual mode and for
indicating an approximate desired
direction in assistive mode. They can
rotate approximately +/-15 degrees and
are spring-loaded to return to the
central position. In manual mode, the
handlebar rotation is converted to a
steering angle and the device can be
used in the same way as a conventional
walker. The two wheels are controlled
independently because of the highly
non-linear relationship between them
at larger steering angles. It is desirable
to achieve these large
steering angles for greater
manoeuvrability. Rotation on the spot
can even be achieved, as shown in fig
2. To slow the vehicle down, the
wheels are "toed in" by a few degrees
from their current alignment. The exact
misalignment angle used will depend
on the severity of the braking required.
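The non-linear relationship between the two steering angles can be sketched with standard Ackermann geometry, in which both steered wheels share one turn centre; the wheelbase and track dimensions below are illustrative assumptions, not the device's actual measurements:

```python
import math

# Ackermann geometry: for a common turn centre, the inner wheel must turn
# more sharply than the outer one, and the difference grows non-linearly
# with the steering angle. Dimensions are assumed values.
WHEELBASE = 0.8  # m: steered axle to rear axle
TRACK = 0.6      # m: distance between the two steered wheels

def wheel_angles(mean_angle_deg):
    """Return (inner, outer) steering angles in degrees for a mean angle."""
    a = math.radians(mean_angle_deg)
    if abs(a) < 1e-9:
        return 0.0, 0.0
    r = WHEELBASE / math.tan(a)  # turn radius at the vehicle centreline
    inner = math.atan2(WHEELBASE, r - TRACK / 2)
    outer = math.atan2(WHEELBASE, r + TRACK / 2)
    return math.degrees(inner), math.degrees(outer)

for mean in (5, 20, 45):
    inner, outer = wheel_angles(mean)
    print(f"mean {mean:2d} deg -> inner {inner:5.1f}, outer {outer:5.1f}")
```

At small mean angles the two wheels are nearly parallel, but the gap widens quickly at large angles, which is why independent control of the two wheels is needed.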
Fig 2. The steered wheels can be
positioned so that rotation on the spot
is possible.
HARDWARE
Control of the device is distributed
through a number of separate modules.
An embedded PC (Ampro LittleBoard
P5i, 233 MHz) is used for high-level
reasoning. The motion control module
is custom built around a single-board
micro-controller (Motorola MC68332);
communication between the PC and
the motion controller is via a serial
line. This motion control board also
deals with general I/O. Optical
absolute encoders (US Digital) are
used for monitoring the steering angles
of the two front wheels. A pair of
incremental encoders, mounted on the
rear wheels, are used for odometry. All
the encoder information reaches the
motion controller via a single serial
bus (SEI Bus, US Digital). The
handlebar steering angle is monitored
by a linear Hall-effect sensor
positioned between two magnets.
Fig 3. Sonar configuration in plan and
elevation
Ultrasonic sensors (Helpmate Robotics
Inc.) are used for object detection and
ranging. Fifteen sonar transducers are
used in total, providing a degree of
sensor redundancy appropriate for the
current application. The arrangement
of the sonars around the mobility aid is
shown in fig 3 and is very similar to
that proposed by Nourbakhsh in [8].
There are seven groups of sonars in
all. Four sonars point sideways (one
group, composed of two sonars, on
each side) and are used to determine
the presence of any adjacent walls.
Two groups point approximately
straight ahead: one at a height of
approximately 40 cm containing three
sonars, and a second containing two
sonars at a height of 25 cm, used for
detecting obstacles closer to the
ground. Two more groups are set at
angles of approximately 45 and -45
degrees. A final group comprises two
sonars at a height of 30 cm from the
ground, pointing upwards at an angle
of approximately 60 degrees; this
group is used predominantly for
detecting head-height obstacles, tables
and the like. The PC is equipped with
a sound card so that audio feedback
can be provided where appropriate.
The sound samples are pre-recorded
and contain messages such as "Object
left", "Object ahead" and "Head-height
obstacle".
SOFTWARE
Due to the high demands on reliability,
the mobility aid uses the Linux
operating system. Its extensive
configurability means also that it
possible to tailor the system to the
requirements of the application. The
Task Control Architecture[9] was used
as a framework for the software design.
TCA is essentially an operating system
for task-level robot control. The control
can be transparently distributed across
multiple machines as TCA can handle
all the interprocess communication. A
central server is used to pass messages
between individual software modules.
Other services provided include
scheduling, resource management and
error handling. Communication
between modules is via UNIX sockets.
Currently, there are five modules
running on the device – motion control,
sensing, feature extraction, audio
output and high-level control (see fig.
4). All processes run on the same
processor. If required however,
processes can be moved transparently
to other processors and connected
together via a small hub.
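The central-server message passing described above can be sketched as follows; the routing class is an illustrative stand-in for TCA, with direct function calls in place of UNIX sockets:

```python
# Minimal sketch of TCA-style message passing: a central server routes
# named messages between registered modules. Module names follow the
# five modules listed above; the routing code is an assumption.

class CentralServer:
    def __init__(self):
        self.handlers = {}

    def register(self, module, handler):
        """A module announces itself and its message handler."""
        self.handlers[module] = handler

    def send(self, target, message):
        # In TCA this crossing would go over a UNIX socket, possibly to
        # another machine; here it is a direct call for illustration.
        return self.handlers[target](message)

server = CentralServer()
server.register("motion_control", lambda m: f"motion_control got {m!r}")
server.register("audio_output", lambda m: f"audio_output got {m!r}")

print(server.send("audio_output", "Object ahead"))
```

Because all traffic goes through the server, a module can be moved to another processor without any change to its peers, which is the transparency the text refers to.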
The feature extraction module uses the
sonar returns to determine simple
features in the indoor environment
such as corridors, junctions and dead
ends. The four sideways-pointing
sonars (see fig 3) are predominantly
used for this feature extraction.
Evidence for the existence of walls on
either side of the device is
accumulated in a histogram
representation of feature evidences: if
a particular feature is detected from
one set of sonar returns, its evidence is
incremented by one; otherwise its
evidence is decremented. The feature
with the highest histogram score is
then the most probable feature in the
local environment. For instance, the
criteria for a positive corridor
identification are that the evidence for
a wall on either side of the device is
strong and that the measured angles to
the left and right walls are parallel
within a certain tolerance.
Once a positive feature has been
identified, the robot will switch into
the mode associated with that feature.
For example, if the device detects that
it is in a corridor, the
'follow_corridor' mode will steer the
device to the centre of the corridor.
Similarly, if a left junction has been
detected, the device will query the user
on how to proceed. A rule-based
obstacle avoidance routine is located
within the high-level control module;
the rule-based system is more suitable
than a potential field algorithm for the
current sonar layout.
RESULTS
The device was evaluated on-site on
seven persons (all male) registered as
visually impaired. The average age of
the test participants was 82. They
suffered from a variety of other
physical problems such as arthritis,
balance problems, frailty, nervousness
and general ill-health. After testing the
device, the users were questioned on
its performance. The results, compiled
using a 5-point Likert scale, are
summarised in the table below.

User's sense of safety while using device   4.4 / 5
Ease of use                                 4.2 / 5
Usefulness                                  3.8 / 5

Table 1. User feedback on device
performance
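A minimal sketch of the evidence-histogram scheme described above follows; the feature names and the zero floor on evidence counts are illustrative assumptions:

```python
# Sketch of the evidence histogram: each scan's detections increment the
# matching feature's score and decrement the rest (floored at zero, an
# assumed detail). The highest score is the most probable local feature.

evidence = {"corridor": 0, "left_junction": 0, "dead_end": 0}

def update_evidence(detected_features):
    """Fold one set of sonar-derived detections into the histogram."""
    for name in evidence:
        if name in detected_features:
            evidence[name] += 1
        else:
            evidence[name] = max(0, evidence[name] - 1)

def most_probable_feature():
    return max(evidence, key=evidence.get)

# Three scans in a row supporting a corridor outweigh one junction hit:
update_evidence({"corridor"})
update_evidence({"corridor", "left_junction"})
update_evidence({"corridor"})
print(most_probable_feature())
```

The decrement step makes the histogram forget spurious detections quickly, which suits noisy sonar returns.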
FUTURE WORK
Work is continuing on improving the
autonomy of the device indoors. An
inexpensive vision system is being
developed for detecting features such
as doors. Sensors which can reliably
detect down-drops are also being
developed.
ACKNOWLEDGEMENTS
The authors would like to acknowledge
the assistance of Heather Hunter of the
National Council for the Blind in
Ireland while carrying out the user trial.
We would also like to thank Magnus
Frost and Jan Rundenschold of
Euroflex, Sweden for constructing the
chassis. This research was funded in
part by the European Union
Telematics Application Program 3210.
REFERENCES
[1] Ficke R.C. Digest of Data on Persons with Disabilities. National Institute on Disability and Rehabilitation Research, Washington DC 20202, USA, 1991.
[2] Rubin G.S., Salive M.E. The Women’s Health and Aging Study: Health and Social Characteristics of Older Women with Disability, Chapter: Vision and Hearing. Bethesda, MD: National Institute on Aging, 1995.
[3] Farmer L.W. Foundations of Orientation and Mobility, chapter Mobility Devices, pages 537-401. American Foundation for the Blind, 15 West 16th Street, New York, N.Y. 10011, 1987.
[4] Benjamin J.M. The new C-5 laser cane for the blind. In Proceedings of the 1973 Carnahan Conference on Electronic Prosthetics, pages 77-82. University of Kentucky Bulletin 104, November 1973.
[5] Russell L. In L.L. Clark, editor, Proceedings of the Rotterdam Mobility Conference, pages 73-78. American Foundation for the Blind, 15 West 16th Street, New York, N.Y. 10011, May 1965.
[6] Kay L. A sonar aid to enhance the spatial perception of the blind: engineering design and evaluation. Radio and Electronic Engineer, 44(11):605-627, November 1974.
[7] Lacey G. Adaptive Control of a Robot Mobility Aid for the Frail Visually Impaired. PhD thesis, Trinity College Dublin. To be published, 1999.
[8] Nourbakhsh I. The Sonars of Dervish. The Robotics Practitioner, 1(4):15-19, 1995.
[9] Simmons R., Lin L., Fedor C. “Autonomous Task Control for Mobile Robots”. In Proceedings of IEEE Symposium on Intelligent Control, Philadelphia, PA, September 1990.
ADDRESS
Shane MacNamara
Department of Computer Science
Trinity College Dublin
Ireland
Tel: +353-1-6081800
Fax: +353-1-6772204
email:[email protected]
MODELLING HUMAN DYNAMICS IN-SITU FOR
REHABILITATION AND THERAPY ROBOTS
William Harwin and Steven Wall
Department of Cybernetics, University of Reading, England
Abstract
This paper outlines some rehabilitation
applications of manipulators and
identifies that new approaches demand
that the robot make an intimate contact
with the user. Design of new
generations of manipulators with
programmable compliance along with
higher level controllers that can set the
compliance appropriately for the task,
are both feasible propositions. We must
thus gain a greater insight into the way
in which a person interacts with a
machine, particularly given that the
interaction may be non-passive. We are
primarily interested in the change in
wrist and arm dynamics as the person
co-contracts his/her muscles. It is
observed that this leads to a change in
stiffness that can push an actuated
interface into a limit cycle. We use
both experimental results gathered
from a PHANToM haptic interface and
a mathematical model to observe this
effect. Results are relevant to the fields
of rehabilitation and therapy robots,
haptic interfaces, and telerobotics.
Background
There are several application areas
where machines make an intimate
contact with the user and in these
situations it is important to gain a good
understanding of human neuro-
musculo-skeletal dynamics. Several
areas in the field of rehabilitation
robotics require this type of close
contact with a person and in these
situations it is possible that some useful
information can be gained from that
contact. Close contact robots in
rehabilitation include power-assisted
orthotic mechanisms [1], robots in
physical therapy[2,3], and EPP based
telerobotics[4]. In non-rehabilitation
applications, close contact robots are
common in haptic interfaces and
telerobotics.
To aid the design of close contact
machines requires good knowledge of
the human under conditions similar to
those that will be experienced in
practice. Although it is attractive to
develop linear approximations of
human dynamics as this allows for
easier stability analysis, human arm
dynamics are inherently non-linear and
time dependent and include factors
such as fatigue, posture, and movement
history. In rehabilitation the clinical
condition gives a further complication
adding additional factors to the
equation such as tremor, muscle
atrophy, and limb flaccidity.
We use a two level approach to
understanding human neuro-musculo-skeletal
dynamics and investigate co-contraction
in the process. An
experimental method allows in-situ
data to be gathered at the first level. At
a second level individual physiological
elements in the joint of interest can be
modelled and the composite dynamics
then simulated.
Human System Identification
Several studies base a human system
model on a second order mass, spring,
damper approximation of the mechanical
properties of various joints [5,6,7].
Standard techniques then allow the
lumped characteristics of the human
arm to be determined by applying a
perturbing force, and then examining
the positional response. A force
feedback device such as the
PHANToM (Sensable Technologies,
Cambridge MA, USA) has the ability
to both apply a force and measure the
positional response of the user. The
PHANToM was used in the following
experiments and consists of a low
impedance, 3 degrees-of-freedom,
revolute manipulator where the
traditional end effector is replaced by a
thimble, through which the user
interacts with the device.
The workspace of the PHANToM is
designed for movements of the finger
and wrist, therefore it is these joints
that will be the focus of the modelling.
Previous studies of the impedance
presented by the index finger [5] report
several trends:
• There was little inter-subject variation in mass estimates.
• There was an approximately linear increase in stiffness with applied force.
• There was a relatively large, near critically damped value of the damping ratio for fast transients.
In a study of the stiffness of the human wrist [7], the relationship between the angular position and the torque was modelled by an underdamped second order parametric model.
Experimental Method
Preliminary experiments were performed in order to assess the feasibility of developing mechanical impedance models for the human wrist and the metacarpal-phalangeal joint of the finger. The subject’s elbow and other relevant joints were firmly secured via a splint so that the only movement was at the joint being examined. The finger splints were rigidly attached to the tip of the PHANToM via the thimble provided. Perturbations were applied via the base motor of the device, with an amplitude determined by sampling from a normal distribution of zero mean and a fixed period of 0.1 s. The subject was asked either to relax or to co-contract the appropriate muscles in order to oppose the motion. The subsequent displacement of the corresponding joint on the PHANToM was recorded.
Results and Analysis
The resultant positional output and estimated torque input data were used to construct a second order discrete time ARMA model relating the two variables. Such a model can then be converted to a second order mass-spring-damper model of impedance in the continuous time domain, providing some estimate of the mechanical parameters of the impedance presented by the user. The plot in figure 1 illustrates the poles of the continuous time models for tensed and relaxed wrists. The data was analysed over a 1 second time window, taken from the beginning of the first step in torque.
Figure 1: Continuous Time Model Poles for Human Wrist over 0.1s Time Window. Pole regions labelled ‘Relaxed’, ‘Tensed’, and ‘Common’ are indicated on the real-imaginary plane.
A visual analysis of the data suggests three different regions for the location of poles, indicated on the diagram. The region near the origin includes poles for both contracted and relaxed conditions and is common throughout all the models developed. The poles are close to the origin, suggesting an unbounded position response to a step input. Several poles were unstable, which is physically unrealistic; however, it is inferred that over a small displacement, away from the limits of movement of the joint, a suitable model for the impedance of the wrist is:

θ / T = K / ( s ( s τ(u) + 1 ) )    (1)

where K is the d.c. compliance, and T and θ are the applied torque and resulting angular perturbation.
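The conversion from a fitted second order discrete-time (ARMA) model to continuous-time poles can be illustrated with a small sketch: discrete poles z map to the s-plane via s = ln(z)/Ts. All numeric values here are illustrative assumptions, not taken from the experiments.

```python
import cmath
import math

def continuous_poles(a1, a2, Ts):
    """Map the poles of z^2 + a1*z + a2 = 0 (sample time Ts) to the s-plane."""
    disc = cmath.sqrt(a1 * a1 - 4 * a2)          # complex for underdamped cases
    z_poles = ((-a1 + disc) / 2, (-a1 - disc) / 2)
    return [cmath.log(z) / Ts for z in z_poles]  # s = ln(z) / Ts

# Round trip from a known second order system s^2 + 2*zeta*wn*s + wn^2
# (wn = 50 rad/s, zeta = 0.7, illustrative values) sampled at Ts = 1 ms.
wn, zeta, Ts = 50.0, 0.7, 0.001
s1 = complex(-zeta * wn, wn * math.sqrt(1 - zeta**2))
z1, z2 = cmath.exp(s1 * Ts), cmath.exp(s1.conjugate() * Ts)
a1, a2 = -(z1 + z2).real, (z1 * z2).real
for s in continuous_poles(a1, a2, Ts):
    # Both recovered poles have real part -zeta*wn = -35.
    print(round(s.real, 3), round(abs(s.imag), 3))
```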
The unstable poles result from a lack of
information present in the data, due to
the long time constant of the wrist.
Modelling over a longer time period
may eliminate the instability. The
model suggested in equation (1) is a
gross oversimplification of the dynamic
properties of the human wrist.
However, it is reasonable to suggest
that it does approximate the dominant
mechanical properties of the joint over
a limited displacement not approaching
the limits of the joint’s motion, prior to
onset of sensory feedback or reflex
actions. The time constant, τ, here
depends on the level of muscle co-contraction
and many other factors, as
indicated by the regions on figure 1.
For low levels of muscle activation, the
second pole of the system is in the
‘Relaxed’ region, further into the left
hand plane, indicating a faster response
time. With muscle co-contraction, the
second pole of the system is shifted
towards the origin in to the ‘Tensed’
region, indicating an increase in the
stiffness. Results for the response of
the finger to perturbations displayed
similar behaviour.
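Equation (1) has the closed-form step response θ(t) = K(t − τ(1 − e^(−t/τ))). The sketch below compares two illustrative time constants; K and both τ values are assumptions, chosen only to show the smaller displacement when the pole moves towards the origin under co-contraction.

```python
import math

def theta_step(t, K=0.01, tau=0.05):
    # Step response of theta/T = K / (s*(s*tau + 1)) to a unit torque step.
    return K * (t - tau * (1 - math.exp(-t / tau)))

for label, tau in (("relaxed", 0.02), ("tensed", 0.08)):
    print(label, theta_step(0.1, tau=tau))
```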
As with varying levels of muscle
contraction, three distinct regions are
again evident in the pole placement.
The model expressed in equation (1) is
again applicable to the results, with τ
being a function of input force. Region
1 represents the pole at the origin.
Regions 2 and 3 display the variation in
the mechanical parameters of the
system with the magnitude of the
perturbations. This indicates an increase
in response time and, hence, stiffness
with increasing force, which agrees with
the results presented by Haijan and
Howe [5].
Simulation of co-contraction in the elbow
A non-linear elbow model has been
developed, based principally on that of
Stark and others [8] but adapting
parameters from Prochazka [9] and
Gossett [10]. This model is used to
identify the elements that cause an
increase in stiffness when agonist and
antagonist muscles co-contract. It is
hypothesised that there are three
mechanisms that contribute to the
increase in stiffness when a person
co-contracts their muscles:
1. The Hill effect causes a drop in force in the shortening muscle, whereas the extending muscle exerts a larger force, thus tending to restore the limb following a perturbation.
2. The non-linear length-tension relationship of the series tendon operates higher up the non-linearity when muscles are co-contracted, thus causing a greater stiffness.
3. The reflex action of the golgi tendon organ.
The simulations done here illustrate the
first of these and show that a non-linear
series elasticity prevents high frequency
vibration at high levels of muscle
tension. This mechanism does not
appear to contribute significantly to the
increase of stiffness as muscles
co-contract. The third mechanism is
currently unexplored.
Figure 2: Simulink block diagram of the modified Stark and Lehman bimuscle elbow model, comprising agonist and antagonist neural inputs, Hill damping blocks with muscle velocity estimators, left and right series tendons, a constant moment arm, joint inertia with integrators, parallel stiffness and damping (Kp, Bp), and an external perturbation input.
Description of simulation
The simulation is shown in figure 2. A bimuscle model is used and the force of contraction is estimated by scaling the Hill damping hyperbola. The form of the Hill equation for contracting muscle is

F / Fact = 1 + ( (1 + afact) v ) / ( Bh − v )

where Bh = |Vmax| afact, v is the muscle contraction velocity, F the force of contraction, Fact is a measure of muscle activation, and afact and Bh are the Hill constants. A cubic spline, with continuous first and second differentials at v = 0, is used when the muscle is being extended. A shaping parameter p = 0.2 is used to force an intercept on the positive x axis at |Vmax| p. The velocity of the muscle with respect to the bone is estimated from position using a simple second order filter with a double pole giving a 3 dB cut-off at 5 rad/s.
The series tendon connecting the muscle to the bone is modelled either as a linear element F = Ke x or as a fourth power F = K x^4. The spring constant in the latter case is adapted to fit data published by Evans and Barbenel [11] for the human palmaris tendon.
Table 1 shows values for other parameters along with a comparison with other simulation studies. It should be noted that the tendons are assumed to translate force into torque via a constant moment arm, and gravitational effects are ignored.
Figure 3: Stiffness of model with linear tendon (applied torque in Nm and arm movement in radians against time, with the neural input to the muscles indicated).
Figure 4: Stiffness of model with non-linear tendon (same quantities against time).
Simulation results
Results of the simulation where the tendon is modelled as a linear spring are shown in figure 3. The applied torque is ramped down and then up to ±2.8 Nm, and the resulting movement of the arm observed. When there is no co-contraction, as indicated for the first 6 seconds, the elbow acts as a weak spring, with a small lag. Between 6 and 12 seconds the muscles are activated at about half their full strength. During
this period the stiffness does increase
by a small amount, as can be observed
by the change to the gradient of the
position, and the lower movement
peaks. At high levels of co-contraction
a high frequency limit cycle is induced.
Figure 4 shows the results when the
tendon model is replaced by the non-linear
equation F = K x^4. Results are
similar to those shown in figure 3, with
possibly slightly more change in
stiffness as muscles co-contract. It is
noted that the non-linear tendon
suppresses the limit cycle observed
at high levels of co-contraction in the
linear tendon model, though this could be an
artifact of the numerical integrator. It is
somewhat surprising that the non-linear
tendons do not contribute more to the
change of joint stiffness observed in
practice.
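The Hill force law quoted above can be sketched as a small runnable function. The parameter values afact = 0.25 and Vmax = 0.66 m/s follow the values given for this simulation in Table 1; the sign convention (shortening taken as v < 0, so force drops with shortening speed) is an assumption.

```python
def hill_force(v, Fact, afact=0.25, vmax=0.66):
    """Contractile force from F/Fact = 1 + (1 + afact)*v / (Bh - v),
    with Bh = |Vmax| * afact. Shortening is assumed to be v < 0."""
    bh = abs(vmax) * afact
    return Fact * (1 + (1 + afact) * v / (bh - v))

print(hill_force(0.0, 1.0))    # 1.0: at zero velocity, F equals Fact
print(hill_force(-0.05, 1.0))  # < 1: force drops in the shortening muscle
```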
Discussion
System identification techniques are
able to identify locally linear models
for a person interacting with an
actuated interface, as has been
illustrated for the wrist data given. The
model gives an adequate description,
but only for small movements away
from the joint limits. The
measurements of force and position
were derived entirely from access to
internal control parameters of the
PHANToM and a model of its
dynamics. Better measurements from
the PHANToM would possibly
improve the model estimates. However,
this demonstrates the potential of in-situ human model identification.
The danger of the more detailed non-linear physiological model is that it is
sensitive to the choice of parameters
for which there is little practical data.
In addition the current model does not
include a reflex neural circuit thus
omitting a factor that undoubtedly has
an influence on the change of stiffness
as antagonist muscles co-contract.
However, if a physiologically
appropriate and accurate model can be
developed from interaction data, it can
Table 1. Model parameters and comparison with other simulation studies.
Parameters and units: J (kg m^2), m (kg), l (m), KP (Nm/rad), BP (Nms/rad), Ke tendon extensor (Nm/rad), Kf tendon flexor (Nm), Be/Bf (Nm/rad), Hill af (Nms/rad), Hill Bh = af |Vmax| (m/s).
Gos94 (elbow/forearm): 0.0772, 1.77, 0.177, 1, 1, 0.1.
Stark (neck/head): 0.0103.
Prochazka97 (cat soleus): 1, 0.115, 0.115, 2.29, 0, 0, 20,000, 0.1, 1.
This simulation (elbow/forearm): 0.07, Hill B, 0.25, 1.5, Hill B, 1, 1, 0.3, 7 (2000 N/m at 0.0035 m), Hill B, 0.25, 0.66.
then be linearised for control system
design or a simplified version can be
used for model reference control
techniques.
Conclusion
Both experimental and simulation
models of the human wrist and elbow
have been presented, along with the
advantages and disadvantages of each.
As in many areas, this demonstrates the
trade-off that must be made between
simplicity and accuracy.
Acknowledgements
This work is partially supported by
EPSRC GR/L76112 “Determining
Appropriate Haptic Cues for Virtual
Reality and Telemanipulation”.
References
1. W.S. Harwin and T. Rahman. Analysis of force-reflecting telerobotic systems for rehabilitation applications. Proceedings of the 1st European Conference on Disability, Virtual Reality and Associated Technologies, pp 171-178. ISBN 07049 1140X (1996).
2. P.S. Lum, C.G. Burgar and H.F.M. Van der Loos. The use of a robotic device for post-stroke movement therapy. ICORR ’97: Proceedings of the International Conference on Rehabilitation Robotics, The Bath Institute of Medical Engineering, Wolfson Centre, Royal United Hospital, Bath, UK. ISBN 1-85790-034-0, pp 107-110 (1997).
3. M.L. Aisen, H.I. Krebs, F. McDowell, N. Hogan, and B.T. Volpe. The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke. Arch Neurol 54(4), pp 443-446 (1997).
4. S. Chen, T. Rahman, and W. Harwin. Performance statistics of a head-operated force-reflecting rehabilitation robot system. IEEE Transactions on Rehabilitation Engineering 6(4), pp 406-414 (December 1998).
5. A.Z. Haijan and R.D. Howe. Identification of the mechanical impedance at the human fingertip. To appear in the ASME J. of Biomechanical Engineering.
6. D.J. Bennett, J.M. Hollerbach, Y. Xu, and I.W. Hunter. Time-varying stiffness of human elbow joint during cyclic voluntary movement. Experimental Brain Research 88, pp 433-442 (1992).
7. T. Sinkjaer and R. Hayashi. Regulation of wrist stiffness by the stretch reflex. J. Biomechanics 22, pp 1133-1140 (1989).
8. W.H. Zangemeister, S. Lehman and L. Stark. Sensitivity analysis and optimization for a head movement model. Biological Cybernetics 41, pp 33-45 (1981).
9. A. Prochazka, D. Gillard, and D.J. Bennett. Implications of positive feedback in the control of movement. J. Neurophysiology 77, pp 3237-3251 (1997).
10. J.H. Gossett, B.D. Clymer and H. Hemami. Long and short delay feedback on one-link nonlinear forearm with coactivation. IEEE T. Systems, Man and Cybernetics 24(9) (September 1994).
11. J.H. Evans and J.C. Barbenel. Structural and mechanical properties of tendon related to function. Equine Veterinary Journal 7(1), i-viii (1972).
Author Address and contact
information.
William Harwin and Steven Wall
Department of Cybernetics, University
of Reading, P.O. Box 225, Reading
RG6 6AY, England
email: [email protected],
[email protected]
DOMESTIC REHABILITATION AND LEARNING OF
TASK-SPECIFIC MOVEMENTS
Yoky Matsuoka, Harvard University
Larry C. Miller, Boston Biomotion Inc.
Abstract
We have constructed a device that is
suitable for domestic task-specific
rehabilitation. The machine has a
large workspace permitting natural
three-dimensional movements. It is
unique because it is inherently safe but
still allows force-velocity collinearity
and force amplitude variation within
one movement.
These real-life
motions in a software-controlled
environment make task-specific
rehabilitation possible under the
complete volition of the user.
Furthermore, the machine can operate
without constant supervision due to its
software control features and its
hardware’s inherent safety and
flexibility, making it the perfect
candidate for domestic use.
Introduction
Recently, the importance of computer assisted rehabilitation has been emphasized for improving performance and recovery time. Most robotic devices are designed to have a specific workspace for specific injuries, with safety features included in the software. However, a human’s normal movements cannot be matched well on these highly constrained devices, and rehabilitating on such machines can result in muscle imbalance and the disruption of the underlying coordination structure. In addition, in actively powered motion devices, the control software’s inhibition of the hardware for safety can fail and cause severe injuries.
In order to overcome these problems, we have constructed a three-dimensional resistance rehabilitation machine that matches well to the user’s kinematics and needs. This machine has no potential for machine-induced accidents and injuries.
Device Description
The mechanical design of the device was motivated by the need to provide safe, repeatable, accurate, and smooth controlled resistance to the user over a large workspace. The device is designed to be purely dissipative and thus it is inherently safe. There are three actuated joints as shown in Figure 1: yaw and pitch rotary joints are combined with a linear joint to create a large 1.1 meter radius half-sphere workspace.
Magnetic particle brakes are used for the actuators to provide accurate control over a wide range of speed and torque with a simple electrical current input. Each brake, a Placid Industries B-150, provides a maximum torque of 17 N-m. To accomplish over 500 N maximum force, two cable-pulley speed reducers were designed. The
first stage has a 60mm diameter input pulley translated to a 390mm output pulley, and the second stage has 80mm input and 400mm output pulleys. These two stages together create a reduction ratio of 32.5 : 1, producing the 550 N-m output torque. This cable/pulley reduction strategy was chosen because it has extremely low friction and zero cumulative backlash.
Figure 1: A picture of the rehabilitation device. There are yaw, pitch, and linear joints and they are cable controlled.
Concurrently, to improve the performance of the machine under dynamic operation, the stiffness of the machine is calibrated with the cable diameters. The yaw joint is cabled with a single pair of 2.4mm antagonistic cables, and the pitch joint is cabled with two pairs of 3.2mm cables. The linear stage has a single pair of 2.4mm antagonistic cables wrapped around the brake shaft and attached to the ends of the linear stage tube. The resulting lateral stiffness is approximately 60 kN/rad (or 10mm deflection under 550N force at 1.1m extension), and the linear stiffness is 110 kN/m (or 5mm deflection under maximum torque).
A three-degree-of-freedom non-actuated gimbal is designed as the primary interface tool for the machine. The gimbal has a removable handle that can be substituted with specific grips such as baseball and tennis, as shown in Figure 2. In addition, the gimbal can be replaced with other couplers, shown in Figure 3, to accommodate movements for various limbs.
Figure 2: The gimbal handle can be interchanged to activity specific grips such as the baseball handle.
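As a quick check of the reduction figures quoted above, the two stages (60 mm to 390 mm, then 80 mm to 400 mm) compound as follows; this is plain arithmetic on the values in the text.

```python
# Two-stage cable-pulley speed reducer from the text.
stage1 = 390 / 60          # first stage: 6.5 : 1
stage2 = 400 / 80          # second stage: 5.0 : 1
ratio = stage1 * stage2    # combined: 32.5 : 1

brake_torque = 17.0        # N-m per magnetic particle brake
output_torque = brake_torque * ratio
print(ratio, output_torque)  # 32.5 552.5 (quoted as "the 550 N-m output torque")
```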
The machine is controlled by a motion controller with a DSP, and is programmed to incorporate this interface. It converts the encoder readings to Cartesian coordinates and can respond to a user’s musculo-skeletal changes in force, position, velocity, acceleration, power, work, and range of motion in real time. LabView software is used as the graphical interface and it allows the user to specify the training variables in a simple manner. A foot pedal is installed within the user’s workspace to make fine adjustments or to send commands during training without stopping the motion. A picture of the overall machine in use is shown in Figure 4.
Figure 3: Various couplers can be used as the interface for the machine to rehabilitate or train various sets of muscles.
Task-Specific Training
One of the biggest advantages of our new machine is the capability of three-dimensional task-specific training. The Principle of Specificity of Training states that “mimicking or replicating an activity of daily living in training assures that gains carry over precisely to the motion of interest.” With our machine, the user is freed from the line-of-action-of-the-force constraint present in all current forms of resistance training.
Figure 4: A picture of the machine in use. The machine has three actuated and three non-actuated joints creating a 1.1m half-sphere workspace. This configuration allows most movements made by a strong and tall individual.
For example, the line of action of the force in current weight training is always directed through the center of the earth, tangent to the arc of motion in rotary systems, or along the cable as shown in Figure 5. Human force production in such activities is highly constrained because of the need to
reconcile body position, joint axes,
and leverage to the line of action of
the force. With our device, the user
has complete control over the force
direction with end-point force-velocity
collinearity.
Force-velocity
collinearity means that when one
pushes on the endpoint, it moves and
is resisted in the direction it was
pushed. Research has shown that
purposeful motion is degraded without
force-velocity collinearity. Therefore
when our machine is used, daily
activities can be replicated in an
entirely natural cause and effect
environment without any machine
specific constraints.
Furthermore, the magnitude of the force can be varied within one movement to accommodate the physiology of the user. In a curling movement as an example, the biceps muscle can exert 71% of its potential when the arm is straight (180 degrees), 100% of its strength at 100 degrees, and 67% at 60 degrees [1], as shown in Figure 6. The only way to match these muscle properties is to train with a variable resistance device. Our machine creates the force field that matches the strength of the muscle at each specific configuration to achieve maximum efficiency while eliminating injury. By keeping track of changes in the user’s input, the applied force can be adjusted to be stronger or weaker as the training progresses.
Figure 5: Most force resistance training devices do not preserve force-velocity collinearity. Force-velocity collinearity means that when one pushes on the endpoint, it moves and is resisted in the direction it was pushed. These offsets between resistance and velocity create an inefficient and dangerous environment for rehabilitation.
Figure 6: Variation in force relative to the angle of contraction [from Wilmore and Costill, 1994]. 100% represents the angle at which force is optimal. If the weight were matched to accommodate the strength at the 60 degree angle, the weight would be too light for other angles. However, if the weight is matched for 100 degrees, over-strain is inevitable elsewhere.
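A variable-resistance controller along the lines described above can be sketched from the three published strength points for the biceps (71% at 180 degrees, 100% at 100 degrees, 67% at 60 degrees [1]). The linear interpolation between points and the function names are illustrative assumptions, not the machine's actual control law.

```python
# Published strength points for the biceps curl: (angle in degrees, fraction
# of maximum force), sorted by angle.
STRENGTH = [(60, 0.67), (100, 1.00), (180, 0.71)]

def strength_fraction(angle_deg):
    """Linearly interpolate the strength curve between the published points."""
    pts = STRENGTH
    if angle_deg <= pts[0][0]:
        return pts[0][1]
    if angle_deg >= pts[-1][0]:
        return pts[-1][1]
    for (a0, s0), (a1, s1) in zip(pts, pts[1:]):
        if a0 <= angle_deg <= a1:
            t = (angle_deg - a0) / (a1 - a0)
            return s0 + t * (s1 - s0)

def resistance(angle_deg, max_force):
    # Scale the commanded brake force so the load tracks the muscle's
    # strength at each joint configuration.
    return max_force * strength_fraction(angle_deg)

print(resistance(100, 500.0))  # 500.0 at the strongest angle
```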
Domestic Usage
This
task-specific
rehabilitation
machine has another advantage. Due
to its inherently safe hardware, the
rehabilitation can take place without
full supervision. At the appearance of
pain or fatigue, the user can
instantaneously
decrease
the
machine’s damping or stop the
motion. Because the machine exerts
the resistive force only when the user
applies force, the user experiences no
loading when the motion is stopped.
Furthermore, the computer of the
rehabilitation machine can be linked to
the physician through the Internet.
The physician can have on-line access
to the user’s musculo-skeletal changes
and can vary the output of the machine
as necessary.
In addition, the manipulator does not
fall on the ground even when the user
releases
the
machine
because
gravitation is compensated for
internally. At the same time, if the
machine receives a high impact, the
machine acts like an inverse damper to
accommodate the impact. Thus, if
someone falls on the machine, it
slows the body down gradually as
the body velocity decreases.
Learning a New Task
In addition to rehabilitating for a task
that is already familiar, a new task or
activity can be learned using the
machine. Often, people with injuries
or disabilities cannot try other
activities because the level they have
to start at is too physically demanding.
With a software controlled low-inertia
machine, the training can be
conducted at any level for any activity.
The advantage of a software based
domestic machine is that the data of
the rehabilitation training can be
recorded and can be brought to
physicians for an evaluation.
In
return, the physicians can assign the
next training level in software
according to the progress.
This
procedure assures that the patients do
not make a mistake with the
procedural settings. With the software
assigned by the physicians, the
machine can act as a virtual therapist.
This machine can enhance the life of
people who are physically challenged.
They will no longer be limited by their
physical abilities and can participate in
a certain activity at their own level.
This is good for recreation purposes
and for learning tasks that they never
thought they would. When those
tasks are learned, they may be able to
go out and actually try the non-virtual
activities.
Finally, the installation of the machine
at home is trivial because it is
designed to disassemble into small,
manageable pieces.
Challenges
The constructed machine is a
prototype and is not yet suitable for
mass production. There are two issues
that cannot be overlooked. First, the
cost of the machine needs to be
significantly reduced in order to target
domestic usage. This work is already
underway and it has been shown that
the redesign of some components
results in significant price reduction
and eliminates bulkiness as well.
resistance variation.
Previously a
domestic rehabilitation device was
impossible because of the safety issues
and the need for supervision. With the
combination of software-controlled
supervisors and the inherent safety of
the hardware, our device design
allows rehabilitation to take a place at
home. By working the muscles in
synergy instead of isolation with robot
assistance, recoveries will be faster
and better in the future.
Second, the complete passiveness of
the machine is an advantage for safety,
but it limits the functionality of the
machine. For example, if the end
point of the robot is in the area where
it should not be, the user must
physically move it out of the area
because the robot cannot store any
energy to move itself. Currently, the
interface program accommodates this
problem by giving visual guidance
along the movement paths. In the
future, small active actuators or
springs will be integrated to create the
perception of active components. If
active actuators are used, they must
output very small torque even at their
maximum current input to assure the
safety of the machine.
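The safety requirement on any future active actuators can be captured by a supervisory clamp in software. A minimal sketch in Python; the torque ceiling and function name are illustrative assumptions, not values from this paper:

```python
# Hypothetical software supervisor for small active actuators: the requested
# torque is clamped to a ceiling chosen so the actuator remains safe even at
# its maximum current input. The limit below is illustrative only.

MAX_SAFE_TORQUE_NM = 0.5  # assumed hardware-safe ceiling [N*m]

def supervise_torque(requested_torque_nm: float) -> float:
    """Clamp a requested actuator torque to the safe envelope."""
    if requested_torque_nm > MAX_SAFE_TORQUE_NM:
        return MAX_SAFE_TORQUE_NM
    if requested_torque_nm < -MAX_SAFE_TORQUE_NM:
        return -MAX_SAFE_TORQUE_NM
    return requested_torque_nm

print(supervise_torque(0.2))   # within the limit, passed through
print(supervise_torque(3.0))   # clamped to the ceiling
```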
References
[1] J. H. Wilmore and D. L. Costill
“Physiology of Sport and Exercise”,
Human Kinetics Publishers, 1994.
Contact:
Dr. Yoky Matsuoka
Harvard University, Division of
Engineering and Applied Sciences
29 Oxford Street
Cambridge, MA 02138
[email protected]
Conclusion
Our machine represents a revolutionary
hardware platform. It allows large
natural movements with force-velocity
collinearity and resistance variation.
TEM: THERAPEUTIC EXERCISE MACHINE
FOR HIP AND KNEE JOINTS OF SPASTIC PATIENTS
Taisuke Sakaki, Seiichiro Okada, Yasutomo Okajima*, Naofumi Tanaka*,
Akio Kimura*, Shigeo Uchida*, Masaya Taki*,
Yutaka Tomita**, and Toshio Horiuchi**.
Yaskawa Electric Co., Tsukuba, Japan;
*Tsukigase Rehabilitation Ctr., Keio Univ., Tsukigase, Japan;
**Faculty of Science & Technology, Keio Univ., Yokohama, Japan.
Abstract: The Therapeutic Exercise
Machine (TEM) is a newly developed
exercise machine for the hip and knee
joints of spastic patients. This study
aims at evaluating the short-term effects
of Continuous Passive Range of Motion
Exercise (CPROM-E) on passive
resistive torque of the hip and knee in
spastic patients and in normal subjects.
During the CPROM-E in 40 individual
sessions, TEM carried out CPROM-E of
the lower extremity copying the
therapists' initial motion, and recorded
the load torque of each subject's hip joint
and the integrated EMG (I-EMG) of the
subject's quadriceps femoris and
hamstrings. In the normal subjects, the
peak torque of the hip significantly
decreased by 5 percent, and the peak
amplitude of I-EMG was not always
reduced. In the spastic patients, the peak
torque significantly decreased by 35
percent, and the peak amplitude of
I-EMG significantly decreased after
exercise on TEM. These results suggest
that CPROM-E with TEM may have
beneficial effects in the management of
spasticity.

BACKGROUND
Range of motion exercise (ROM-E) is
a therapeutic exercise to improve ROM
and prevent contracture of the joint.
Many therapists have noticed a decrease
of spasticity with repetitive ROM-E. The
exercise itself includes simple
flexion/extension motion using the
uniarticular muscles and straight-leg-raising
(SLR) motion using the biarticular
muscles to stretch the quadriceps
femoris, hamstrings, gastrocnemius, and
so on.

Two kinds of machines are employed for
this therapeutic exercise. One is an
exercise machine often used for sports
rehabilitation. The other is a continuous
passive motion (CPM) device, which is
usually used after surgical treatment of
the knee or hip. The limitations of these
machines lie in their motion pattern and
motion dynamics. Since these devices
execute only one degree of freedom of
motion, in rotation or in a linear direction,
they cannot extend the biarticular
muscles. Further, these machines cannot
modify the motion smoothly against the
patient's load, so their use may cause
pain.

NEW REHAB-MACHINE: TEM
The Therapeutic Exercise Machine
(TEM) is a novel exerciser for the hip
and knee joints of spastic patients [1-5].
Two mechanical arms of TEM move the
targeted lower extremity. The arms are
driven by electric motors, controlled by
a computer using load sensor
information (Fig.1).
The machine has the following features.
1) Wide range of motion
The arm mechanism can follow the
three-degrees-of-freedom motion of the
lower extremity in the sagittal plane.
Thus, a highly flexible and wide range
of motion, including flexion/extension
mode, SLR, etc., is realized. Stretching
motion is accessible not only to the
uniarticular muscles but also to the
biarticular muscles around the hip and
knee.
Available ROM in Exercise with TEM:
  Knee: 0 - 110 [deg.]
  Hip: 15 - 90 [deg.] (in SLR with knee extended)
  Hip: 15 - 100 [deg.] (with knee flexed)
Fig.1 TEM Apparatus

2) Soft-motion
If the patient exerts an external force on
TEM, the mechanical arms move
compliantly against the force. Based on
the model of virtual compliance, the
actual load on the patient's leg is
continuously and appropriately
modulated. TEM can accomplish a
smooth and elastic movement similar to
that achieved by human therapists
(Fig.2).

3) Direct-teaching
Therapists can teach TEM the
appropriate types of motion by
articulating them while the patient is on
the machine. TEM follows and
memorizes the therapist's motions, and
then the device replays the pattern of
exercise precisely. Implementation is
very easy for therapists (Fig.3).

4) Measurement functions
TEM measures the angle and torque of
the hip and knee, and records three
channels of surface integrated
electromyogram (I-EMG).
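The soft-motion idea can be sketched as a first-order admittance loop: the commanded velocity follows a virtual spring-damper model of the external load. All parameter values below are illustrative assumptions, not TEM's actual gains:

```python
# Illustrative sketch of the virtual-compliance ("soft-motion") concept:
# the arm yields to an external load as if it were a spring-damper,
# so the load on the leg is modulated smoothly. Values are assumptions.

K = 200.0   # virtual spring stiffness [N/m] (assumed)
B = 50.0    # virtual damping [N*s/m] (assumed)
DT = 0.01   # control period [s]

def step(x, f_ext):
    """One control cycle: velocity commanded from b*v + k*x = f_ext."""
    v = (f_ext - K * x) / B
    return x + v * DT

x = 0.0
for _ in range(1000):       # constant 10 N push for 10 s
    x = step(x, 10.0)
print(round(x, 3))          # approaches the static deflection f/K
```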
[Fig.2 diagram: the load passes through a virtual compliance model
(spring + damping) and the TEM dynamics to produce smooth and
elastic motion.]

Fig.3 Direct-teaching to TEM by Physical Therapist.
METHODS
The purpose of this study is to evaluate
the short-term effects of CPROM-E on
passive resistive torque of the hip and
knee in spastic and normal subjects. The
subjects were 4 healthy adults and 6
spastic adult patients. By using the
direct-teaching function, the therapist
taught one session of the
flexion/extension motion to TEM
(Fig.3). During 40 serial sessions of the
CPROM-E, TEM carried out these
exercises on the lower extremity of
study participants, repeating the initial
motion guided by the therapist (Fig.1).
One session took 15 seconds. TEM
measured the angles and load torque of
knee and hip, and recorded the I-EMG
of medial hamstrings and quadriceps
femoris (vastus medialis). The data were
analyzed with the t test.
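The within-subject comparison with the t test can be illustrated with the paired t statistic. The torque values below are invented for illustration; they are not the study's measurements:

```python
# Sketch of a paired t test comparing pre- vs post-exercise peak torque
# within subjects. The data are hypothetical (roughly a 35% reduction,
# as reported for the spastic group), not the study's actual numbers.
import math
from statistics import mean, stdev

pre  = [40.0, 38.5, 42.0, 39.0, 41.5, 40.5]   # hypothetical peak torques
post = [26.0, 25.5, 27.5, 25.0, 27.0, 26.5]   # hypothetical post-exercise

diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
# Paired t statistic: mean difference over its standard error.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(round(t, 2))
```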
RESULTS
Figure 4 shows the time history of the
changes of the hip torque and I-EMG
from the first to the last of the 40
individual repetitions of exercise in the
normal subjects (NL). The hip torque,
shown as the average of the changing
ratio of its peak, decreased steadily and
significantly (p<0.0001) by about 5
percent, and the average of the peak
amplitudes of I-EMG of the hamstrings
and quadriceps remained low. Figure 5
shows the counterpart of Fig.4 for the
spastic patients (CVA). The peak torque
of the hip decreased significantly
(p=0.01) by 35 percent, and the peak
amplitudes of I-EMG also decreased
significantly (p=0.003 and p=0.01,
respectively).
[Fig.4 plot: hip torque (%) and I-EMG (μV) of hamstrings and
quadriceps vs. times of exercise, 1-40.]

Fig.4 Hip Torque and I-EMG in NL.
[Fig.5 plot: hip torque (%) and I-EMG (μV) of hamstrings and
quadriceps vs. times of exercise, 1-40.]

Fig.5 Hip Torque and I-EMG in CVA.
DISCUSSION
Joint stiffness involves reflex and/or
non-reflex components [6-9]. The
non-reflex components may be related to
changes of collagen in connective tissue
and to the proportion of bound
cross-bridges in muscle. Reduction of joint
torque without a decrease in muscle
activity is caused by the non-reflex
components, while reduction of joint
torque with a decrease in muscle activity
is caused by the reflex components. The
reduction of joint torque was shown in
healthy adults and in spastic patients.
However, in healthy adults, the torque
was reduced without decrease of muscle
activity, while in spastic patients the
torque was reduced with such a
decrease. Therefore, the non-reflex
components may contribute to the
decrease of torque in normal cases, and a
combination of reflex and non-reflex
components may cause the decrease of
torque in spastic patients. We are
elucidating these mechanisms by
experiments with the H reflex.
CONCLUSIONS
The new rehabilitation TEM for the
therapeutic exercise of the lower
extremity was presented. We examined
the short-term effects of Continuous
Passive Range of Motion Exercise with
TEM on muscle tone in 4 healthy adults
and 6 spastic patients. The results
suggest that CPROM-E with TEM may
have beneficial effects on spasticity.
REFERENCES
[1] Tanaka N, Okajima Y, Kimura A, Uchida
S, Taki M, Iwata S, Tomita Y, Horiuchi T,
Nagata K, Sakaki T: Therapeutic Exercise
Machine for the hip and knee (2) Effects of
continuous passive range-of-motion
exercise on spasticity. IRMA VIII, 109,
1997.
[2] Tanaka N, Okajima Y, Taki M, Uchida S,
Tomita Y, Horiuchi T, Sakaki T, Kimura
A: Effects of continuous range of motion
exercise on passive resistive joint torque.
Jpn J Rehabil Med, 35, 491-495, 1998.
[3] Okajima Y, Tanaka N, Kimura A, Uchida
S, Hasegawa M, Tomita Y, Horiuchi T,
Kondo M, Sakaki T: Therapeutic Exercise
Machine for the hip and knee (1)
Importance of virtual mechanical
impedance control and multi-degrees of
freedom of motion. IRMA VIII, 166, 1997.
[4] Okajima Y, Tanaka N, Hasegawa M,
Uchida S, Kimura A, Tomita Y, Horiuchi
T, Kondo M, Sakaki T: Therapeutic
Exercise Machine: Soft Motion by the
Impedance Control Mechanism. Jpn J
Sogo Rehabil, 26, 363-369, 1998.
[5] Sakaki T, Okada S, Okajima Y, Tanaka N,
Kimura A, Uchida S, Hasegawa M, Tomita
Y, Horiuchi T: Therapeutic Exercise
Machine for hip and knee joints of spastic
patients. WCB98, 375, 1998.
[6] Hagbarth KE, Hagglund JV, Nordin M,
Wallin EU: Thixotropic behavior of
human finger flexor muscles with
accompanying changes in spindle and
reflex responses to stretch. J Physiol, 368,
323-342, 1985.
[7] Malouin F, Bonneau C, Pichard L,
Corriveau D: Non-reflex mediated changes
in plantarflexor muscles early after
stroke. Scand J Rehabil Med, 29, 147-153,
1997.
[8] Thilmann AF, Fellows SJ, Ross HF:
Biomechanical changes at the ankle joint
after stroke. J Neurol Neurosurg Psychiatry,
54, 134-139, 1991.
[9] Toft E: Mechanical and electromyographic
stretch responses in spastic and healthy
subjects. Acta Neurol Scand Suppl, 163,
1-24, 1995.
Dr. Taisuke Sakaki
Yaskawa Electric Co.
5-9-10, Tokodai, Tsukuba, Ibaraki, 300-2635, Japan.
A ROBOT TEST-BED FOR ASSISTANCE AND ASSESSMENT
IN PHYSICAL THERAPY
Rahul Rao, Sunil K. Agrawal, John P. Scholz
Mechanical Systems Laboratory
University of Delaware, Newark, DE 19716.
Abstract
This article describes an experimental
test-bed that was developed to assist
and assess rehabilitation during
physical and occupational therapy. A
PUMA 260 robot was used for which a
controller and interface software was
developed in-house. The robot can
operate in two modes: (i) passive and
(ii) active. In the passive mode, the
robot moves the subject’s arm through
specified paths. In the active mode, a
subject guides the robot along a
predefined path overcoming a specified
joint stiffness matrix. In this mode, the
controller
provides
gravity
compensation so that the robot can
support its own weight in an arbitrary
configuration. The developed
graphical interface enables real-time
display of the current configuration of
the robot, customization of
experiments for a specific subject, and
collection of force and position data
during an experiment.
The results of a preliminary study using
this test-bed are also presented along
with issues involved in choice of paths
and interpretation of the results.
1. Introduction

Active exercise is an important
component of rehabilitation.
Resistance is typically provided by
expensive exercise equipment or
applied manually by a therapist.
Most available exercise equipment
that allows controlled application of
forces to a limb or the trunk limits
motion to one plane or applies forces
directly to a single joint. As
such, their relevance to functional
movements is extremely limited. And
although manual resistance applied by
a therapist allows for exercise of
multiple degrees-of-freedom (Voss et
al., 1985), it requires the therapist’s
complete attention to only one patient
at a time, increasing the cost of
treatment.
Keywords: Robot, Rehabilitation,
Assessment, Physical Therapy.
The need for objective, quantitative and
reliable evaluation tools to assess the
neuromuscular performance of patients
is critical to both physical and
occupational therapy (Carr and
Shepherd, 1990; Chandler et al., 1980).
The ability to quantify movement
performance has been a particular
problem in these disciplines. This is
especially the case in neurological
rehabilitation, where most assessments
of motor function have been based on
an ordinal scale of quantification
(Bayley, 1935; Poole and Whitney,
1988; Rothstein, 1985; Scholz, 1993).
These facts indicate that the
development of a device that would
allow for controlled motion of the
entire limb in quasi-functional patterns
could improve patient evaluation and
treatment effectiveness while reducing
its time and cost. Some important
issues that need to be addressed are (i)
development of a user-friendly robot
with a safe control system, (ii)
development of a versatile subject
interface, and (iii) design of suitable
experiments to evaluate the
effectiveness of the approach.
However, there have been only a
handful of studies that have attempted
to develop complex machines to
accomplish this task and that have
evaluated protocols for their
application. Noritsugu et al. (1996)
developed a two degree-of-freedom
rubber artificial muscle manipulator
and performed experiments to identify
human arm parameters. Impedance
control has been suggested as an
effective approach to control
human-machine systems (Hogan, 1985) and
has been studied for direct drive robots
(McKormic and Schwartz, 1993).
Some preliminary studies have been
presented on the application of robot
technology to enhance the
rehabilitation of stroke patients (Krebs
et al., 1995). These studies suggest that
robots are promising new tools in this
area. A prototype for bimanual lifting
(Lum et al., 1995) and MIME (Mirror
Image Motion Enabler) have been
reported for post-stroke therapy (Lum
et al., 1997).

This article presents some recent
efforts at the University of Delaware in
the development of a robot test-bed to
assist and assess rehabilitation. The
salient features of this study are: (i) an
in-house developed controller for the
robot motivated by safety
considerations, (ii) a versatile interface
that can be used to customize subject
experiments, (iii) a mechanism to
collect force and position data during
an experiment, and (iv) protocols to
provide assessments using the robot
test-bed. The outline of this article is as
follows: Section 2 presents a
description of the robot set-up. The
design of experiments, data analysis,
and results are described in Section 3.
These are followed by a discussion of
the results, their implications, and
conclusions.

2. Robot Test-bed

The test-bed consists of a six
degree-of-freedom PUMA Mark II 200 series
robot arm. Due to inherent limitations
of the original controller provided by
the manufacturer, an in-house
controller was developed that uses
LM628-based servo controllers
interfaced with a Pentium 233 MHz
computer. The computer also handles
the user interface and real-time display
of the graphics. A schematic of the
set-up is shown in Figure 1 along with
data flow in the system. The robot
joints are equipped with optical
encoders that provide a resolution of
roughly 0.005 degrees, and a 6-axis
force-torque sensor manufactured by
JR3 Inc. (Model No. 67M25A) is
mounted at the end-effector. Even
though the robot has the capability to
move in 3-dimensional space, in this
study the robot motion was restricted
to the vertical plane.
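The stated encoder resolution implies a counts-per-revolution figure that is easy to check. The conversion below is a back-of-the-envelope sketch; the counts-per-revolution value is inferred from the quoted 0.005-degree resolution, not given in the paper:

```python
# Rough check of the stated joint resolution: 0.005 degrees per count
# corresponds to 360 / 0.005 = 72,000 distinguishable positions per
# revolution (an inferred figure, used here only for illustration).

COUNTS_PER_REV = 72_000

def counts_to_degrees(counts: int) -> float:
    """Convert raw encoder counts to a joint angle in degrees."""
    return counts * 360.0 / COUNTS_PER_REV

print(counts_to_degrees(1))       # one count, i.e. the resolution
print(counts_to_degrees(18_000))  # a quarter revolution
```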
The software for the robot was written
in an object-oriented environment.
Some of its special features are: (i)
ability to interact with other
applications such as MATLAB, (ii)
personalized and flexible experiments
through an interactive user interface,
(iii) a two-dimensional graphic
visualization of the robot motion on the
monitor. The software allows the robot
to run in two modes: (i) Passive (P) and
(ii) Active (A). In the P-mode, the
robot moves the subject's hand within
the workspace, with little or no control
of the movement of the robot by the
subject. This mode is also effectively
used before experimentation in the
A-mode, described later.

A typical session in the P-mode has the
following features:
- Locate 40 points on the computer
screen, 20 each on the inner and outer
walls of a tunnel containing the path.
Alternatively, a path defined earlier
and stored in the computer can be
recalled for current use. Typical paths
created using this procedure are shown
in Figure 2.
- During path execution, the software
draws the inner and outer walls and
locates 20 discrete points along the
central line between the walls. These
points are then utilized to solve the
inverse kinematics between two
successive points, and the robot tracks
the central line by moving between
two successive points. The subject is
instructed to lightly hold the robot
end-effector during the motion of the robot.

[Fig. 1 schematic: a VB front end (uses the MATLAB engine for
computations, provides the GUI) and a MATLAB DDE engine (started in
the background, handles all matrix computations) run on a
Pentium-based PC (233 MHz); the PC drives the PUMA 260 robot arm
through a data acquisition and servo control board and a servo
amplifier, and receives position data from the encoders and force
data from the force sensor.]

Fig. 1: A schematic of the modules in the system along with flow of data

Since the PUMA 200 robot is heavy, in
a general configuration the links will
fall under their own weight. To relieve
the subject from working against this
gravity load, a scheme was developed
to gravity-balance the robot by
providing actuator torque appropriate
to the configuration of the robot in the
plane. A gravity model for the robot in
the vertical plane was developed using
an analytical approach verified by
experimental data (Rao, 1999). It was
observed that this model of the gravity
loading worked quite well over the
useful workspace of the robot. The
geometric planning for the robot was
done using its inverse kinematic model.

Fig. 2: Typical paths created by the software divided into regions.

A typical session in the active mode is:
- The therapist or experimenter recalls
a path or defines a new path by
describing 40 points on the outer and
inner walls. The robot moves the
subject to the starting point of the
central line and hands control over to
the subject.
- The subject is in full control of the
robot arm and makes an attempt to
track the central line while overcoming
the stiffness specified at the joints of
the robot. The stiffness can be varied
along the path using control panels on
the screen.
- During motion, the position of the
end-effector and the forces and
moments exerted by the subject are
recorded by the 6 DOF force sensor.
- During an experiment, if the subject
hits a wall boundary, the robot
temporarily takes over control, moving
the handle/hand back to the nearest
point on the center line, and then
returns control to the subject. The
color of the wall that is hit changes
during this period, giving the subject a
visual cue of the collision. The original
color is restored once the robot end is
at the central line and control is
handed over to the subject. A trial gets
completed when the subject traverses
the path in both forward and reverse
directions, although more repetitive
trajectories can be specified in
principle.

3. Experimental Studies

3.1 Selection of Paths

In this exploratory study, experiments
were conducted on four healthy adult
subjects. In order to understand the role
of paths during experimentation, two
paths A and B, shown in Figure 2, were
used. Path A consists of linear
segments while path B consists of
circular segments, both intermixed
with sharp turns. The rationale for
choosing these two paths is that an
arbitrary path can be constructed using
combinations of the two. Each of
these paths was divided into 3 regions.
This was done to observe whether any
of the regions was particularly easy or
difficult to negotiate. Each path was
traversed forward and backward, and
we label this a block of the
experiment. Four blocks of experiment
were completed on each path.
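The joint-space resistance the subject overcomes in the active mode can be sketched as a proportional restoring torque on each joint; the gains, angles, and doubling factor below are illustrative assumptions, not the experiment's actual settings:

```python
# Sketch of the joint-stiffness resistance in the active mode: each joint
# produces a restoring torque proportional to its deviation from the
# path's joint-space reference. All numeric values are illustrative.

def resist_torque(k, q, q_ref):
    """Per-joint restoring torque tau_i = k_i * (q_ref_i - q_i)."""
    return [ki * (qr - qi) for ki, qi, qr in zip(k, q, q_ref)]

k_nominal  = [2.0, 1.5]                       # assumed stiffnesses [N*m/rad]
k_enhanced = [2 * ki for ki in k_nominal]     # stiffness doubled

q, q_ref = [0.30, 0.10], [0.25, 0.15]         # assumed joint angles [rad]
print(resist_torque(k_nominal, q, q_ref))
print(resist_torque(k_enhanced, q, q_ref))    # twice the restoring torque
```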
The first three blocks had identical
experimental conditions. In the fourth
block, the joint stiffnesses were enhanced
by a factor of 2. This was done to observe
how learning during the first three
blocks of experiments helps a subject
overcome the enhanced stiffness during the
fourth block of experiments. Certain
factors were kept consistent across
blocks of experiments and across
subjects. These were:
- Standardizing the subjects' grip on the
end-effector so that the elbow points
straight ahead and they have a clear
view of the monitor.
- A reminder to the subject before each
block of experiments about the experiment
objectives, i.e., to remain within the
two walls on the screen and track the
central line as closely as possible.
- A reminder to the subject to maintain
constant speed during the entire study.
- Two practice trials for each subject in
the active mode, to facilitate
determining a comfortable speed for
the experiments.

The collected data consist of the
following information: region of the
path, X and Y co-ordinates of the end
point, and X, Y, Z forces and moments.
These data were analyzed off-line
using MATLAB. The hardware
allowed a sample rate of roughly
1000 Hz.

3.2 Data Analysis

The central line was defined for
convenience as the intended path for
the experiments. Deviations d from the
central line provided indicators of a
subject's performance and consistency.
Position data analysis was conducted
for all four blocks of experiments.

The fundamental difference between
position data analysis and force data
analysis is that there is no intended or
known ideal force trajectory with
which a comparison can be made.
Further, even though subjects attempt
to maintain a constant speed in the
trials, they are not able to achieve it
exactly. This leads to a different
number of data samples collected in
each trial. Thus, in order to bring all
subjects to a common time base, a
normalization procedure was employed
which included an interpolation
between elements of each column in
the data array. This interpolation was
performed using cubic splines,
resulting in a new array consisting of
the normalized elements.
The algorithm for the analysis of subject
data within a block of experiments can
be summarized as follows:
- For each trial in a block, isolate
samples belonging to regions 1, 2, and 3
into different arrays.
- Normalize the elements of each array
to obtain normalized values of the
samples.
- Compute the signed distance of the
end-effector from the central line for
each sample.
- Concatenate all normalized samples
that belong to a certain region within a
particular block of four trials.
- Identify samples at every 5% of the
total number of samples for each
region. Compute the mean and standard
deviation of the samples and obtain a
graphic representation of the variation
in a particular region of a path during a
block of experiments.
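The resampling and 5% statistics steps above can be sketched as follows. Linear interpolation stands in for the cubic splines of the actual analysis, and the deviation values are invented for illustration:

```python
# Sketch of time normalization: trials with different sample counts are
# resampled onto a common base, then the mean and standard deviation of
# the deviation d are taken at each of 20 points (every 5% of the region).
# The paper used cubic splines; linear interpolation keeps the sketch short.
from statistics import mean, stdev

def resample(samples, n_out=20):
    """Linearly interpolate a trial onto n_out evenly spaced points."""
    out = []
    for i in range(n_out):
        pos = i * (len(samples) - 1) / (n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Hypothetical deviations d for four trials of one region (varying lengths).
trials = [[0.0, 0.5, 1.2, 0.8, 0.1],
          [0.1, 0.6, 1.0, 0.9, 0.4, 0.2],
          [0.0, 0.4, 1.1, 0.7, 0.0],
          [0.2, 0.5, 0.9, 1.0, 0.5, 0.3, 0.1]]

resampled = [resample(t) for t in trials]
means = [mean(col) for col in zip(*resampled)]   # mean d at each 5% point
sds   = [stdev(col) for col in zip(*resampled)]  # variability at each point
print(len(means))
```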
3.3 Results
Because of the preliminary nature of
these tests, all data collected during the
experiments were analyzed visually.
Among these, the deviation d and the
z-moment from the force sensor, Mz,
showed some trends and were therefore
analyzed in greater detail. Figures 3
and 4 show a set of four plots that
represent the normalized mean
deviations for a subject tracking the
central line in a particular region of the
path. These plots are shown for all
three blocks of experiments. The eight
plots in the two figures represent a
general trend among all subjects in the
experiments. Across the three blocks,
one can observe a decrease in the mean
distance from the center path,
accompanied by a decrease in the
variable error band about this mean
distance. This indicates that a subject
was able to track the center more
consistently as more experiments were
conducted.
[Fig. 4 plots: deviation from the center (inches, -4 to +4) vs.
normalized index (0-20) during motion in Region 3, Blocks 1-4.]

Fig. 4 Distance from the center line for subject 1, region 3, path B
[Fig. 5 plots: mean moment about Z (lbs-in., -40 to +40) vs.
normalized index (0-20) along the path in Region 1, Blocks 1-4.]

Fig. 5 Moments about the Z axis for subject 4, region 1, path A
[Fig. 6 plots: mean moment about Z (lbs-in., -40 to +40) vs.
normalized index (0-20) along the path in Region 3, Blocks 1-4.]

Fig. 6 Moments about the Z axis for subject 1, region 3, path B
As far as hits on the wall were
concerned, Tables 1 and 2 reveal that
there were fewer hits on the walls in
block 3 compared to block 1, although
subjects hit the wall infrequently
nonetheless.

From the plots representing the
moment about the Z axis (Figures 5
and 6), perpendicular to the plane of
motion, one can observe that the
profile of Mz becomes smoother across
blocks of experiments. This general
trend suggests that a subject learned to
traverse the path with fewer jerks, or
more smoothly, as more experiments
were conducted, although a more
detailed analysis is clearly needed.

This study indicates that some regions
of the two paths A and B enabled a
better performance by some subjects as
opposed to others, but the trends were
not similar across subjects.

4. Discussion

This article has described the design
and fabrication of an experimental
test-bed consisting of a PUMA 260 robot
arm with an in-house designed
controller unit, interfaced with a
Pentium-based computer. The software
is written in an object-oriented
environment with a graphical user
interface that enables one to customize
experiments for a subject. The
software also provides the user with a
real-time animation of the robot motion
and the path traced by the robot
end-effector. The optical encoders at the
robot joints provide position data while
a six degree-of-freedom force-torque
sensor at the end-effector provides
force and torque data that can be used
to assist and quantify patient
rehabilitation.

Our test-bed provides a means to
measure quantitatively the performance
of quasi-functional movement patterns
by patients with a variety of movement
disorders. A significant problem in
patients who have suffered a stroke, for
example, is the presence of
coordination deficits. These are
especially difficult to quantify.
Although information obtained about
movement patterns produced by the
end-effector (i.e., hand or foot) does
not provide detail about individual
impairments, the information provided
may be extremely valuable for
assessing the effects of specific
impairments, or different levels of
impairment, on functional movement
patterns. With our test-bed,
quantitative assessment of
quasi-functional movement patterns is made
possible where such information was
previously very difficult to obtain.

Recent research has indicated that
movement trajectories may be planned
by the nervous system in terms of
movement of the end-effector rather
than the individual movement
components (Flash and Hogan, 1985;
Hogan and Winters, 1990; Hogan,
1995; Scholz and Schoner, 1999).
Such information may be essential,
therefore, for identifying deficits in
central planning or the transformation
of a central plan into action. Most
importantly, the information provided
will be helpful in customizing a
patient’s treatment, for helping to
determine when to stop treatment
because it is yielding no further
improvement, and for providing data to
evaluate the efficacy of particular
treatment approaches.
Combining
information obtained from our test-bed
with other types of data, e.g., video
analysis of joint motion and/or
electromyography, should provide a
means for assessing the relationship
between whole limb motion and the
underlying impairments. In our very
preliminary tests of the device, we have
shown that information on end-effector
position and force can be obtained
which may be useful for characterizing
changes in performance.
The ultimate goal of rehabilitation is to
improve the patient's functional
capabilities, regardless of the
underlying pathology. Our test-bed can
potentially provide a number of
advantages for neuromuscular
rehabilitation. For example, when
there is weakness of many muscles that
act to control movement and stability
of a limb, strength training of each of
these muscles is necessary. The use of
single degree-of-freedom
dynamometers to train the affected
muscles can be very time-consuming.
Our device, on the other hand, would
allow for simultaneous strength
training of many muscles through the
performance of quasi-functional
patterns of movement. Although free
weights or pulley systems allow for
simultaneous strength training of many
muscles as well, it may be impossible
for a patient to control free weights in
the early stages of rehabilitation.
Moreover, because our device can, in
principle, be made to provide
accommodating resistance throughout
the range of motion, a patient would
never work against more resistance
than he or she can handle.
By providing real-time animation of
robot motion and movement
constraints, our test-bed offers a means
of giving immediate feedback to the
patient about the results of their
movement along a specified spatial
path (e.g., the patient keeps the hand
centered, deviates toward the outer
wall, etc.), which may be made simple
or complex according to the current
abilities of the patient. In addition,
more performance-oriented feedback
can be provided to the patient after one
or several trials (e.g., the force field
generated by the hand during the
movement). Such information is
essential for motor learning (Weinstein,
1990). Ultimately, our goal is to use
the graphics interface to make therapy
game-like for the patient, with the goal
of increasing patient interest and
motivation.
Most functional tasks involve
movement of an entire limb or, at the
very least, a substantial number of
joints. It is also common for such tasks
to be carried out in all three spatial
dimensions simultaneously. An
important goal, therefore, is to design
training paradigms that approximate
this reality as closely as possible. The
tests reported in this article evaluated
movement of the entire upper
extremity, although the movements
were limited to a single plane. Thus, an
important future direction will be to
extend the development of the robot's
use to three-dimensional movements.
This will require more complicated
graphic displays to provide the patient
with convincing information about the
hand’s position in three-dimensional
space.
However, it will first be
important to improve the robot’s
performance in the current set-up and
to perform more quantitative tests of its
performance with human subjects,
including patients with movement
deficits.
Although the Puma robot was designed for industrial use, we have shown that it has potential for use in rehabilitation as well. However, several problems will need to be resolved before this particular robot can be used effectively with patients. Currently, we are working to improve the interface between the robot handle and the subject's hand so that it can be accommodated to the different grasping abilities of patients. This is a general problem faced in the use of any robot, however. In terms of controlling the forces applied to a subject's hand, it would be ideal to be able to specify the Cartesian stiffness at the end-effector rather than a matrix of joint stiffnesses. To date, this has been difficult because of the difficulty of characterizing and accounting for joint friction. This problem does not preclude the robot's use for quantifying movement deficits or for training movement patterns, although it may limit its overall usefulness.
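The trade-off described here has a standard kinematic form: a desired Cartesian stiffness Kx at the end-effector corresponds, through the manipulator Jacobian J, to the joint-space stiffness Kq = J^T Kx J. A minimal sketch for an idealized friction-free planar two-link arm follows; the link lengths and stiffness values are illustrative, not the Puma's parameters.

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Jacobian of a planar two-link arm (hand velocity = J * joint velocity)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_stiffness(J, Kx):
    """Kq = J^T Kx J: the joint stiffnesses that realize Cartesian stiffness
    Kx at the current posture (valid only if joint friction is negligible)."""
    JT = [[J[r][c] for r in range(2)] for c in range(2)]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return matmul(matmul(JT, Kx), J)
```

The resulting Kq is posture-dependent and symmetric; unmodeled joint friction adds torques on top of this mapping, which is why specifying Cartesian stiffness is difficult in practice.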
The most encouraging result of our work to date has been the development of a graphical user interface that is flexible and easy to use. As described in the Results section, subjects learned to minimize deviations from the center line over repeated trials. Also, the torque they applied to the end-effector became smoother over blocks of trials. These results suggest that robot set-ups like these have the potential to provide effective aids for rehabilitation.
Acknowledgments: The authors acknowledge the support of a National Science Foundation Presidential Faculty Fellowship during the course of this work.
References

Bayley, N., The development of motor abilities during the first three years, Monographs of the Society for Research in Child Development, 1, 1-26, 1935.

Carr, J.H. and Shepherd, R.B., A Motor Relearning Programme for Stroke, Rockville, MD: Aspen Publishers, Inc., 1990.
Chandler, L.S., Andrews, M.S. and Swanson, M.W., Movement Assessment of Infants - A Manual, Rolling Bay, Washington, 1980.

Flash, T. and Hogan, N., The coordination of arm movements: an experimentally confirmed mathematical model, Journal of Neuroscience, 7, 1688-1703, 1985.

Hogan, N., Impedance Control: An Approach to Manipulation, Parts I, II and III, ASME Journal of Dynamic Systems, Measurement and Control, Vol. 107, pp. 1-24, 1985.

Hogan, N., The mechanics of multijoint posture and movement control, Biological Cybernetics, 52, 315-331, 1985.

Hogan, N. and Winters, J.M., Principles underlying movement organization: upper limb, in J.M. Winters and S.L-Y. Woo [Eds.], Multiple Muscle Systems: Biomechanics and Movement Organization, pp. 182-194, New York: Springer-Verlag, 1990.

Kazerooni, H., On the Robot Compliant Motion Control, ASME Journal of Dynamic Systems, Measurement and Control, Vol. 111(3), pp. 416-425, 1989.

Krebs, H.I., Aisen, M.L., Volpe, B.T. and Hogan, N., Robot Aided Neuro Rehabilitation: Initial Application to Stroke Rehabilitation, Proceedings of MRCAS '95 - 2nd International Symposium on Medical Robots and Computer Aided Surgery, John Wiley and Sons, Nov. '95.

Lum, P.S., Lehman, S.L. and Reinkensmeyer, D.J., The Bimanual Lifting Rehabilitator: An Adaptive Machine for Therapy of Stroke Patients, IEEE Transactions on Rehabilitation Engineering, Vol. 3, No. 2, pp. 166-173, June 1995.

Lum, P.S., Burgar, C.G. and Van der Loos, H.F.M., The Use of a Robotic Device for Post-Stroke Movement Therapy, Proceedings of the International Conference on Rehabilitation Robotics, Bath, UK, April 14-15, 1997, pp. 79-82.
McKormick, W. and Schwartz, H.M., An Investigation of Impedance Control for Robot Manipulators, International Journal of Robotics Research, Vol. 12, No. 5, pp. 473-489, October 1993.

Noritsugu, T., Tanaka, T. and Yamanaka, T., Application of a Rubber Manipulator as a Rehabilitation Robot, IEEE International Workshop on Robot and Human Communication, pp. 112-117, 1996.

Poole, J.L. and Whitney, S.L., Motor assessment scale for stroke patients: concurrent validity and interrater reliability, Archives of Physical Medicine and Rehabilitation, 69, 195-197, 1988.
PUMA Mark II Robot 200 Series Equipment Manual, 1985.

Rao, R., A Robot Test-bed for Physical Therapy, M.S. Thesis, Department of Mechanical Engineering, University of Delaware, 1990.

Rothstein, J.M., Measurement in Physical Therapy, New York: Churchill Livingstone, 1985.

Scholz, J.P., Analysis of movement dysfunction: Control parameters and coordination stability, The 13th Annual Eugene Michels Researchers Forum, pp. 3-13, Alexandria, VA: American Physical Therapy Association, 1993.

Scholz, J.P. and Schoner, G., The uncontrolled manifold concept: identifying control variables for a functional task, Experimental Brain Research (in press), 1999.

Spong, M.W. and Vidyasagar, M., Robot Dynamics and Control, John Wiley and Sons, 1989.

Voss, D.E., Ionta, M.K. and Myers, B.J., Proprioceptive Neuromuscular Facilitation, Philadelphia, PA: Harper and Row Publishers, 1985.

Weinstein, C.J., Knowledge of results and motor learning - Implications for physical therapy, Physical Therapy, 71, 140-149, 1990.

i Graduate Student, Department of Mechanical Engineering
ii Assoc. Prof., Mechanical Engineering, Email: [email protected] (corresponding author)
iii Associate Professor, Physical Therapy, Email: [email protected]
RAID - TOWARD GREATER INDEPENDENCE IN THE OFFICE & HOME
ENVIRONMENT
Tim Jones
Technical Director, OxIM Ltd
12 Kings Meadow, Oxford OX2 0DP, England
Tel: +44 1865 204881
1. Introduction

RAID - Robot for Assisting the Integration of the Disabled - is a system for allowing a handicapped person to operate independently of a human carer for periods of up to 4 hours in the office and home environment. It is designed for those with full mental faculties but severe physical disabilities, whether traumatic or congenital in origin, and allows them to handle papers, books, disks and CD-ROMs, files, refreshments etc. Originally conceived as a natural extension of many years' work on the MASTER project at CEA-STR, Fontenay-aux-Roses, France, the development of the first three prototypes was undertaken by a European consortium with 50% support from the EC's TIDE programme.
Figure 1. The RAID workstation under development at Lund University, Sweden.
The development programme led to a two-year period of clinical trials with some hundreds of quadriplegic users, funded and conducted by APPROCHE (Association pour la Promotion des Plates-formes RObotisée en faveur des personnes HandicappéEs), an independent syndicate in France comprising doctors, therapists, disability centres, insurance companies and Government agencies responsible for handicapped persons.

Five complete workstations were ordered by APPROCHE (with another for the CEA) for this evaluation. These were constructed by OxIM, who had acquired sole exploitation rights from the consortium. In clinical trials RAID proved to be a versatile product, popular with its disabled users, but requiring additional design work to eliminate problems of inadequate reliability, to reduce its physical size, and to improve visibility.

The EC's TIDE (Telematics Initiative for the Disabled and Elderly) programme supported two important phases of the MASTER-RAID development from 1992 to 1996, in projects called RAID and EPI-RAID respectively, with total support from DG XIII of some 1.9 Mecu. The collaboration included groups from:

France: CEA, Service Téléoperation et Robotique.
UK: Oxford Intelligent Machines (OxIM), Armstrong Projects Ltd, and Cambridge University.
Sweden: Rehabcentrum Lund-Orup, DPME, HADAR, CERTEC, and Lund University.

An exploitation agreement between the EPI-RAID partners gave OxIM exclusive marketing rights for RAID, as well as a licence to use the CEA's MASTER software, in exchange for royalties on total net sales of all units after those required for the clinical trials.

OxIM has attempted to secure risk capital to complete the design and proceed to a production launch, but has so far failed to secure investment for this project, nor has it identified an appropriate Venture Partner with the marketing capabilities required.

2. The RAID Project: Adapting the Office to the Needs of Quadriplegics

There is a wide variety of existing devices for enabling people with particular physical disabilities to communicate with a PC. These include joysticks, detectors for chin and eye movement, and puff sensors. The aim of RAID is to enable such people to control the movement of objects in the physical world, both in an office environment and in a domestic setting. The goal is to enable severely disabled users to be independent for at least four hours at a time, without intervention from a human carer.
In 1994 APPROCHE, through the inspiration of its Director Dr M. Busnel, enabled the next phase to begin. APPROCHE raised sufficient funding from its members and from Government for the purchase of 5 OxIM RAID stations and their evaluation for 12 months in each of 10 co-operating disability centres distributed throughout France. By July 1995 the 5 stations had been delivered, and had passed acceptance tests witnessed by CEA as Project Engineers, at the first 5 selected Centres. By June 1996 the stations had been used extensively by some 45 handicapped people. Their disabilities were mostly C3 to C8 spinal column injuries (24), followed by 8 victims of neuromuscular disease such as Duchenne's syndrome, 6 patients with progressive disease of the nervous system such as multiple sclerosis, 5 with disease of the spinal column and cerebral cortex, and two with severe head injuries. Altogether some 58% had suffered traumatic injury, mainly road accidents, and 42% disease. The stations were then relocated by OxIM staff to the second set of 5 centres and the evaluation continued.
In parallel with this, clinical trials on the EPI-RAID stations continued at the Rehabcentrum Lund-Orup in Sweden (featured above in Figure 1) and, to a lesser extent, at the Bradbury Progression Centre, Papworth, UK. By the end of the trials in July 1997, some hundreds of handicapped users had been introduced to RAID workstations. At least 45 different tasks had been tried, mainly for office activities but also many for domestic life and leisure activity, ranging from handling books, papers (see Figure 2 below) and disks to operating a microwave (without adaptations) and a tape recorder. Comments from users were highly encouraging, and the power of the robot workstation was appreciated not only in relation to its functions in the office and at work, but also for leisure use. It was always the element of increased independence that was most valued, allowing the use of human carers more for companionship and support, and less for mere physical assistance.

Unfortunately the trials were affected by certain reliability problems with RAID. These ranged from recurrent minor problems with the robot's control and end-effectors, particularly the one used for handling papers, to unexpectedly frequent problems with the PCs running the MASTER software.
Figure 2. The RAID page turner under clinical trials at Bradbury Progression Centre, Papworth, UK.
Diagnosis and cure of these problems were made difficult and slow by the fact that the system is relatively complex, and that several major components were designed by different members of the original collaboration. The conclusion from these trials is that the RAID system has a commercial future if the technical issues raised are successfully addressed.

3. Route to Market

Further work is required on both the technical and commercial aspects of marketing RAID. OxIM has prepared a business plan for the elimination of the remaining defects, based on the feedback obtained from the user trials in France, Sweden and the UK. OxIM is also defining a strategy for the marketing of a simplified, more compact RAID workstation.

Technical

The product has to be developed so that it is fit for purpose. It has to be reliable and provide the functionality required by the users at an affordable cost. It must also be easily configurable, easy to maintain and re-programmable without the need for highly trained individuals. It must also be compatible with the user's environment, that is, a sensible size
and discreet (not noisy) with clear user
visibility of the vital elements in the
system.
The key to achieving much of this is to
simplify the system, to make it more
compact and to make it more
accessible particularly in respect of the
software. The exact details of how this
will be achieved will be revealed in
due course.
Commercial
The benefits of the workstation have to
be sold to:
The users - who must believe that the
system will be of real benefit to them.
The Clinicians - who will specify the
system for the users.
The funding agencies - Government
organisations, Insurance companies,
Charities etc.
Investors / Joint venture partners - to
enable the development to occur.
There is a strong financial case for the
use of RAID based on the savings that
can be realised in carer costs.
Unfortunately the agencies that pay for
capital equipment are often not the
same agencies that pay for care. Some
creativity will be required in these
instances (perhaps through leasing
arrangements) so that the savings
realised can provide an incentive to
purchase.
Sales would be targeted firstly to the 80 assessment centres in the EU and later to individuals and support agencies. The market forecast shows some 2,500,000 individuals in Europe of employable age, in disablement categories 6-9 (representing those with disabilities relevant to potential use of RAID). Allowing for a Eurostat estimate of 95% of these being unwilling to work, and the inevitable difficulty for individuals in securing funding, OxIM believes there is the potential to sell at least 1,000 units in Europe, and possibly ten times this amount. Similar figures apply to the USA. The challenge is to open up this difficult new market.

The projected cost of a RAID station varies according to the complexity of the configuration required, but at 1997 prices was about $50,000. Given the right investment this will fall due to the simplification of the system, and could fall dramatically with reasonable manufacturing batch sizes. An end-user price under $30,000 is entirely feasible given the right commercial circumstances. The capital cost then becomes comparable with certain other aids for the handicapped, e.g. specially converted cars.

4. The Way Forward

There is clear potential for the RAID concept, but investment is required to take RAID forward. OxIM is still exploring potential avenues for achieving this and in the meantime is concentrating on keeping RAID in the public eye. OxIM believes that RAID has a future and is committed to developing it further. What is required is a source of venture capital willing to accept the unusual mix of risks needed to take RAID through to production launch.
For more information: http://www.oxim.co.uk
INTEGRATED CONTROL OF DESKTOP MOUNTED MANIPULATOR
AND A WHEELCHAIR
Dimiter Stefanov
Institute of Mechanics, Bulgarian Academy of Sciences
ABSTRACT

This paper describes a system for movement assistance and indoor transportation, realized by a desktop-mounted manipulator and an omni-directional powered wheelchair controlled by the same set of user commands. Repeatable robot movements in a pre-programmed mode require one and the same initial wheelchair position relative to the manipulator. A design approach to a specialized automatic navigation system capable of performing fine guidance of the wheelchair to a predetermined place is discussed. Examples of navigation systems based on inductive and optoelectronic sensors are also described. A system for common control of the wheelchair and the robot by means of head movements is also presented.
1. Formulation of the task

Most robotic workstations are designed to assist disabled individuals in their everyday needs such as eating, drinking, manipulating simple objects, etc. [1, 2]. A prototype desktop-mounted manipulator for household tasks was developed and tested at the Bulgarian Academy of Sciences some years ago under the HOPE project [3, 4]. The manipulator uses an optoelectronic follow-up positioning system that responds to movements of the head and the eyelids. The user directly sets the spatial position of the gripper. Despite its simplicity, the algorithm allows simultaneous control of three DOF. Tests have shown that users can adapt quickly to the robot.
The control of the workstation is based on the assumption that an external helper has positioned the user at a predetermined location. Sitting close to the worktable, the user can operate the robot, performing unaided pick-and-place ADL tasks. The user's movement independence can be increased significantly if some indoor mobility is provided to enable the user to move freely from one place to another. Wheelchair-mounted manipulators are one solution to such a task. This paper proposes an alternative solution, suitable for indoor movement operation. Sitting in a powered wheelchair with high manoeuvrability (not in a stationary chair), the user is able to control both the manipulator and the wheelchair, and also to independently access different places within the house, for example: to move near the window, to move in front of the TV set, to stay close to the bed, or to perform different movement tasks using the robot. This approach has some advantages: it can be used by elderly people who spend most of their time at home; the robot uses the mains power supply; and the size of the wheelchair is smaller because no manipulator is mounted on it.
The HOPE manipulator is controlled in a direct mode only, i.e. the user participates actively in the control process all the time. Further improvement of the control algorithm can be obtained if the robot automatically performs repeatable movements in a pre-programmed control mode. Almost all the movement tasks involve the user's face or mouth. The automatic mode can be realized successfully if the robot, the user and the manipulated objects are located at the same initial positions each time a given task is performed.
In the case of a wheelchair-mounted manipulator, the mutual position between the user and the manipulator is always the same. The position between the manipulator and the objects depends on the precision of the wheelchair steering. The robotic workstation maintains the same initial position between the manipulator and the objects; in this case, the position of the user's face is determined by the position of the chair. It can be seen that all variants (wheelchair-mounted manipulator; workstation and wheelchair) need accurate positioning. Achieving the exact location could be very burdensome: first, it would require many manoeuvres; second, it would be time-consuming; and third, it would require considerable mental and physical effort from the user. One way to overcome these problems is to use an automatically navigated wheelchair. This would significantly reduce the user's mental burden for successful control of the robot.
Many research projects have been devoted to indoor wheelchair guidance systems [5, 6]. Usually such systems perform "go-to-goal" commands and navigate the wheelchair to different locations within the user's home, avoiding environmental obstacles. The cost of such systems is significant; therefore, the use of a universal guidance system could greatly increase the total cost of the unit.

Two main issues are addressed in this paper:
• a simple navigation system for automatic guidance with respect to the initial workstation position
• a system for common control of the wheelchair and the robotic workstation.
2. Design approach

The operator uses his/her wheelchair for independent indoor transportation. During the sessions of robot control, the same wheelchair is utilised as a normal chair. The initial position can be regained easily if the wheelchair possesses high manoeuvrability. The proposed navigation approach considers an omni-directional wheelchair that provides three-degree-of-freedom locomotion. Due to its high manoeuvrability, such a wheelchair can ease the user's access to different places and simplify the steering process [7, 8]. A specific type of wheelchair will not be treated here.
The wheelchair is navigated in two modes. During the direct control mode, the user sends commands to the MMI (man-machine interface) and the wheelchair can move to various places in the house. When the user decides to operate the robot, he simply directs the wheelchair to the worktable. When the wheelchair is close to the workstation, the navigation system is activated and the wheelchair is guided in automatic mode to the predetermined place. As soon as the desired position is reached, the wheelchair stops automatically and the user can control the robot from that position.
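The hand-over between direct control and automatic navigation described above can be sketched as a simple per-cycle supervisory rule. The signal names and the threshold are illustrative assumptions, not the authors' implementation.

```python
def control_mode(proximity_signal, docked, threshold=0.5):
    """Select who drives the wheelchair on each control cycle.

    proximity_signal: strength of the workstation guidepoint signal
    (grows as the wheelchair nears the worktable).
    docked: True once the predetermined initial position is reached.
    """
    if docked:
        return "robot"            # wheelchair stopped; commands go to the robot
    if proximity_signal > threshold:
        return "auto_navigation"  # navigation system steers toward the dock
    return "direct"               # user drives the wheelchair via the MMI
```

Evaluating this rule every cycle reproduces the sequence in the text: direct driving, automatic guidance near the table, then robot control once docked.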
4. The navigation system

The wheelchair navigation system is based on guidepoint following. The schemes involve permanently installed coils or optical guidepoint markers. Specialized sensors, mounted on the wheelchair, are used to servocontrol the steering mechanism, causing the wheelchair to move to the intended position. Three different schemes will be developed during the project. The first one is presented in Figure 1, which shows (top view) the positions of the worktable (1), the manipulator (2), and the objects (3). Two coils with ferromagnetic cores (4 and 5) are embedded in the floor. Their axes are perpendicular to each other.
Fig. 1. Inductive navigation system
Coils 4 and 5 emit electromagnetic fields at different frequencies (f1 and f2). The locations of the coils mark the initial wheelchair position (where the wheelchair should be placed when the user controls the robot). Coil 4 is parallel to the long edge of the table while coil 5 is perpendicular to the same edge. A sensing head is arranged on the bottom of the wheelchair (6). This head consists of a pair of inductive pick-up coils with mutually perpendicular axes (7 and 8). Each coil is part of a receiving resonance circuit tuned to frequency f1 or f2. The induced signals are used to servocontrol the steering mechanism, causing the wheelchair to reach the initial position, where the signals attain their maximal values.
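The homing behaviour implied here, steering so that both induced signals climb toward their joint maximum, can be sketched as a crude extremum-seeking rule. The signal model, gain and sign convention are illustrative assumptions, not the authors' controller.

```python
def homing_step(s1, s2, prev_s1, prev_s2, gain=1.0):
    """One servo step toward the position where both pick-up signals peak.

    s1, s2: current amplitudes induced at frequencies f1 and f2.
    prev_s1, prev_s2: amplitudes from the previous control cycle.
    Returns (v_along, v_across): velocity commands along the two coil axes.
    If a signal fell since the last cycle, motion on that axis is reversed,
    so the wheelchair hunts toward the signal maximum over the buried coil.
    """
    v_along = gain if s1 >= prev_s1 else -gain
    v_across = gain if s2 >= prev_s2 else -gain
    return v_along, v_across
```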
The optoelectronic navigation follow-up system is shown in Figure 2. Light source A1 is mounted on the front side of the desk. Its place corresponds to the initial position of the wheelchair. A pulse of near-infrared radiation is received by the sensor (A), which is mounted under the wheelchair hand rest.

Figure 2. Optoelectronic navigation system

An example of the construction of the optoelectronic sensor is shown in Fig. 3. The light source (A1) is mounted to the table (1). The light beam (2) is split by a partially transmissive mirror (3), and the beam is detected by two photoreceivers (4 and 5), which are divided by an optical partition (6). Two output signals are generated. The first (O1) depends on the displacement between the position of the sensor and the centre of the light beam (2). The second output signal (O2) depends on the deflection between the light beam axis and the axis of the sensor. The signal O2 becomes zero when the light intensity indicated by photoreceivers 4 and 5 equalizes with that sensed by photoreceiver 7.

Figure 3. Optical navigation sensor
When the wheelchair comes close to the table, the distance between A and A1 (Fig. 2) decreases and the output signal O1 exceeds the preliminarily defined level, switching the wheelchair control system to automatic navigation mode. Guided by the signal O1, the wheelchair moves to the left or right until sensor A is aligned with the beam centre. This is followed by a rotation, which is servocontrolled by the signal O2, after which the wheelchair moves toward the table. When the sensor signals exceed the preliminarily determined level, the wheelchair stops.
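The docking sequence just described (centre on the beam, rotate, approach, stop at the threshold) can be sketched as a per-cycle decision rule. The signal scaling, tolerances and command names are illustrative assumptions, not the authors' controller.

```python
def docking_command(displacement, deflection, intensity,
                    tol=0.02, stop_intensity=0.9):
    """One cycle of the optoelectronic docking sequence.

    displacement: O1, signed sensor offset from the beam centre.
    deflection:   O2, signed angle between the sensor axis and the beam axis.
    intensity:    overall received signal, growing as the table nears.
    Returns a motion command for the omni-directional wheelchair."""
    if intensity >= stop_intensity:
        return "stop"                     # predetermined level reached: dock done
    if abs(displacement) > tol:           # first translate until centred on beam
        return "left" if displacement > 0 else "right"
    if abs(deflection) > tol:             # then rotate until facing along the beam
        return "rotate_left" if deflection > 0 else "rotate_right"
    return "forward"                      # finally approach the table
```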
An alternative variant of the optoelectronic wheelchair navigation system is shown in Figure 4.

Figure 4. Pattern navigation system

The navigation system follows optical patterns (4) arranged on the floor, which are sensed by an optoelectronic head (5). The patterns are oriented parallel to the worktable. The width of the lines and the distance between them provide information about the current position of the wheelchair. The optoelectronic head consists of reflective optoelectronic sensors (LEDs and photoreceivers).

5. Common control of the robot and the wheelchair

The operator controls either the manipulator or the wheelchair at different times. That is why one and the same set of user commands can be used to control both the wheelchair and the robot. The use of a single command set makes the control process easier for the user. In addition, the learning phase, the adaptation to the control system, and the total number of commands needed for control of the wheelchair and the robot are all reduced. A system for common movement control is shown in Fig. 5.

Figure 5. Motion assisting system
Phases of control:

A. Transport phase
When the wheelchair is far from the manipulator, the power supply to the manipulator is switched off. While driving the wheelchair, the user can move to various places within the house.

B. Navigation phase
When the wheelchair approaches the worktable, the output signals of the navigation sensors exceed the preliminarily determined level. A special scheme is activated, the switch K2 turns on, and the wheelchair is controlled by the wheelchair navigation system. Then, moving at low speed, the wheelchair gets near to the initial position.

C. Initial position of the wheelchair
The wheelchair automatically finds its initial position and stops. Then a special signal is sent to the robot and it is powered. This is followed by the activation of switch K1, and the interface signals are directed to the robot.

D. Robot control
Operating the manipulator, the user can perform daily living tasks.

E. Resuming the transport phase
When the user no longer needs the assistance of the robot, he/she issues a special command that turns switch K1 and the robot supply off. The wheelchair, controlled by its navigation system, moves away from the worktable on a trajectory that is perpendicular to the worktable. As the distance between the wheelchair and the table increases, the signals received by the navigation system decrease, and switch K2 returns the scheme to transport mode, enabling the user to control the wheelchair again.

6. Head control of wheelchair and manipulator

The HOPE manipulator [1] is based on head motions. The same commands can also be applied to wheelchair control. The robot control uses an optoelectronic positioning sensor that is mounted on a spectacles frame and can detect the head position with respect to the gripper. Limited forward-backward head tilting and left-right head rotation are used for gripper movement in the "up-down" and "left-right" directions respectively. The optoelectronic system allows cordless data transfer.

The present approach considers an omni-directional wheelchair. Such a wheelchair needs three proportional user movements, which are transformed into commands for "left-right", "forward-backward" and "rotation" (Fig. 6).
Figure 7 presents a block diagram of a robot and wheelchair controlled by head movements. The scheme follows the concept of Fig. 5.
Figure 6. Omni-directional wheelchair
The wheelchair can be controlled by the same head motions, sensed with respect to the wheelchair's headrest. The relation between the head motions and the wheelchair direction depends on the user's movement abilities. The following scheme is possible:
• "forward-backward" head movements can control the wheelchair in the "forward-backward" direction
• lateral head tilting motions can control the wheelchair "left-right"
• "left-right" head turning can control the wheelchair rotation to the left or right.
A single switch, located in the headrest, can detect the touch of the user's head against the headrest and can produce signals for the wheelchair's forward-backward directional motion and for the power supply (i.e. switching the wheelchair batteries on/off).
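The proposed three-axis mapping can be sketched as follows. The axis names, signs and dead zone are illustrative assumptions, not parameters of the actual MMI.

```python
def head_to_wheelchair(tilt_fb, tilt_lat, turn_lr, dead_zone=0.1):
    """Map three proportional head motions to the three wheelchair DOF.

    tilt_fb:  forward-backward head tilt -> translation forward/backward
    tilt_lat: lateral head tilt          -> translation left/right
    turn_lr:  left-right head turning    -> rotation
    Motions inside the dead zone are ignored to reject small head tremor."""
    def scale(x):
        return 0.0 if abs(x) < dead_zone else x
    return {"forward_backward": scale(tilt_fb),
            "left_right": scale(tilt_lat),
            "rotation": scale(turn_lr)}
```

The dead zone illustrates one way the mapping could be adapted to "the user's movement abilities": its width (and the axis assignment itself) can be tuned per user.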
The optoelectronic system for the
detection of the user’s head position can
be modified or replaced with advanced
systems such as Peachtree [9], UHC
[10], and Origin’s head mouse [11].
Visual servoing of the user’s face [12]
can also be used.
Figure 7. Head control of wheelchair and manipulator
The system for detecting the head
motions consists of two parts
designated as MMI 1 and MMI 2. The
first part (MMI 1) detects the head
motions relative to the gripper, while
the second part (MMI 2) detects the
same head movements relative to the
headrest of the user’s chair.
8. Conclusion

The combination of a desktop-mounted robot and a highly manoeuvrable wheelchair can provide indoor independence of manipulation and transportation. The application of a special navigation system for automatic guidance of the wheelchair to the initial position reduces the user's participation in the control process. The concepts presented above are not tied to the algorithm of the HOPE robot only; the same approach can be applied to different workstations and different kinds of man-machine interaction.
9. References:

1. Harwin W., Rahman T., and Foulds R., A Review of Design Issues in Rehabilitation Robotics with Reference to North American Research, IEEE Transactions on Rehabilitation Engineering, Vol. 3, pp. 3-13, 1995.
2. Van der Loos H.F.M., VA/Stanford Rehabilitation Robotics Research and Development Program: Lessons Learned in the Application of Robotics Technology to the Field of Rehabilitation, IEEE Transactions on Rehabilitation Engineering, Vol. 3, pp. 46-55, March 1995.
3. Stefanov D., Model of a Special Orthotic Manipulator, Mechatronics, Elsevier Science Ltd, Vol. 4, pp. 401-415, Great Britain, 1994.
4. Stefanov D., Robotic Workstation for Daily Living Tasks, Proc. of the European Conference on the Advancement of Rehabilitation Technology (ECART 3), pp. 171-173, Lisbon, 10-13 October 1995.
5. Mittal H., Yanco H., Aronis J., Simpson R., Assistive Technology and Artificial Intelligence: Applications in Robotics, User Interfaces and Natural Language Processing, Springer-Verlag, 1998.
6. Gomi T. and Griffith A., Developing Intelligent Wheelchairs for the Handicapped, Lecture Notes in AI: Assistive Technology and Artificial Intelligence, Vol. 1458, 1998.
7. Everett H., Sensors for Mobile Robots: Theory and Application, A.K. Peters, 1995.
8. Borenstein J., Everett H., Feng L., Navigating Mobile Robots: Systems and Techniques, A.K. Peters, 1995.
9. Peachtree Proportional Head Control (PHC-2), Catalog of Dynamic Systems, Inc., Atlanta.
10. Jaffe D., Ultrasonic Head Controller for Powered Wheelchair (UHCW), Palo Alto VA Rehabilitation Research and Development Center, http://guide.stanford.edu/Projects/uhc.html
11. HeadMouse: Head-Controlled Pointing for Computer Access, Origin Instrument Corporation, http://www.origin.com/access/
12. Hashimoto K., Visual Servoing: Real Time Control of Robot Manipulators Based on Visual Sensory Feedback, World Scientific Publishing Co., 1993.
Acknowledgements:
The research is supported by Grant
Number TN639/96 from the National
Science Foundation of Bulgaria.
Author’s Address:
Assoc. Prof. Dimiter Stefanov, PhD
Bulgarian Academy of Sciences,
Institute of Mechanics,
Acad. G. Bonchev Street, block 4,
1113 Sofia, BULGARIA,
FAX: +359-2-707498,
Phone: +359-2-7135251,
E-mail: [email protected]
UPPER LIMB MOTION ASSIST ROBOT
Yoshihiko Takahashi, and Takeshi Kobayashi
Dept. of System Design Eng.
Kanagawa Institute of Technology
1030, Shimo-Ogino, Atsugi, Kanagawa, 243-0292, JAPAN
Phone/Facsimile: +81-462-91-3195
E-mail: [email protected]
Abstract - An upper limb motion assist robot for elderly and disabled people is proposed in this paper. The robot can be mounted on a wheelchair to actuate an elderly person's upper limb three-dimensionally according to his own will. The wrist of the arm is suspended and actuated by a wire-driven control system. A vibration reduction system is also developed to decrease the vibration that occurs in the wire-driven system. The wire-driven control system is advantageous for designing a compact, lightweight, and low-cost mechanism. In this paper, the concept of the robot, the mechanical structure of an experimental setup, the mechanical characteristics, the control system, and the experimental results are described.
1. INTRODUCTION
The rapid growth of the elderly population causes a shortage of care workers. It is therefore necessary to develop assist robots capable of supporting such an aged society [1]. Many assist robots with which an elderly person can support himself have been fabricated [1-10]. An elderly person often cannot lift up his arm, since his muscular strength is declining even though his hands still operate normally. If an elderly person can lift up his arm at his own will, he can improve his quality of life. Tateno et al. proposed an upper limb motion assist robot with which elderly and disabled people can move their arm at their own will [2]; vibration in its suspended structure was one of the problems. Lum et al. used an industrial robot [5]. Homma also proposed an upper limb assist system [3,4]. That robot uses strings to actuate an elderly person's arm because of their inherent safety. However, the proposed drive system is a kind of parallel mechanism that requires complicated calculations.
The robot system proposed in this paper uses a wire-driven control system by which an elderly person's wrist is actuated three-dimensionally in orthogonal coordinates. The wire-driven control system [6] is advantageous for designing a compact, lightweight, and low-cost mechanism, which makes it possible to mount the robot on a wheelchair. In addition, a vibration reduction system is also developed in order to decrease the vibration that occurs in the wire-driven system. In this paper, the concept of the robot, the mechanical structure of an experimental setup, the mechanical characteristics, the control system, and the experimental results are described.
2. PROPOSED UPPER LIMB MOTION ASSIST ROBOT
Fig.1 shows the concept of the upper limb motion assist robot proposed in this paper. An elderly person who can use his hand but cannot lift up his arm by his own muscular strength is assumed to be a user of the upper limb motion assist robot.
The robot has a frame structure that stands vertical to the ground, like a window frame. Two wire-driven systems for the X and Z directions are mounted on the frame structure. One wrist of the elderly person is suspended by wires and actuated in the X and Z directions. In addition, the frame structure itself is actuated in the Y direction by the Y drive system. Therefore, an elderly person can move his arm along three orthogonal axes. The field of vision is maintained, since the wire used in the robot system is very thin. The wire-driven system is advantageous for designing a lightweight, compact, and low-cost mechanism. The robot system can be mounted on a wheelchair as shown in Fig.1; the weight added to the wheelchair by mounting the robot will be small.
As an interface between the human and the robot, the following instruction systems can be considered: a voice instruction system, an eye movement instruction system, a neck movement instruction system, a touch panel instruction system, and so on.
Fig.2 shows the comparison between a cantilever type actuator and a wire-driven type actuator. When a wire-driven type actuator is used, a lower-power and smaller mechanism can be designed.
3. MECHANICAL CONSTRUCTION OF EXPERIMENTAL SETUP
The experimental setup of the X and Z drive systems shown in Fig.3 was fabricated to confirm the concept of the upper limb motion assist robot. Both the X and Z drive systems were mounted on one plate. A dummy mass and a plastic arm were used instead of an actual hand and arm. The dummy mass, attached to the tip of the arm, was suspended by the wire. Both drive systems used a potentiometer as a sensor and a DC motor as an actuator. The DC motor rotated pulleys, which in turn actuated the wires.
In the vertical (Z) drive system, a DC motor drove the wires, and the wires actuated the dummy mass. The vertical (Z) drive system was mounted on the two sliders of the horizontal (X) drive system, and was thus driven in the horizontal direction. The two sliders were driven simultaneously by four pulleys, wires, and a DC motor. The position of the two sliders was detected by a potentiometer. The dummy mass position differs from the position of the sliders because of vibration; therefore, a laser sensor was used to detect the dummy mass position.
4. MECHANICAL CHARACTERISTICS OF HORIZONTAL DRIVE SYSTEM
The positioning control was carried out in the horizontal (X) and vertical (Z) drive systems using a personal computer with an A/D and D/A board. Classical proportional (P) control was used as the control law. Large-amplitude vibration did not occur in the vertical (Z) drive system; however, large-amplitude vibration did occur in the horizontal direction. The frequency of the dummy mass vibration was about 2.56 Hz, and it changed depending on the vertical position of the dummy mass.
The analysis of the dummy mass vibration was carried out focusing on this frequency change. Fig.4 shows the dynamical model of the suspended dummy mass. Here,
x : dummy mass displacement in the X direction
F1 and F2 : tensions
f01 and f02 : initial tensions
α and β : angles
r1 and r2 : wire lengths on the upper and lower sides while the dummy mass is vibrating
L : initial total wire length
L1 and L2 : initial wire lengths on the upper and lower sides
The dynamical equation in the X direction becomes

M·d²x/dt² = −(F1 + f01)·sin α − (F2 + f02)·sin β   (1)

Also, the following relations can be obtained from Fig.4:

L1 = r1·cos α,  L2 = r2·cos β,  x = r1·sin α = r2·sin β   (2)

Using equation (2), the dynamical equation becomes

M·d²x/dt² = −((F1 + f01)/L1)·x·cos α − ((F2 + f02)/L2)·x·cos β   (3)

Assuming that the dummy mass is at the center position in the Z direction, that the displacement of the dummy mass is small enough, and that the angles α and β are much less than one,

cos α ≈ 1 and cos β ≈ 1   (4)

Then the dynamical equation becomes

M·d²x/dt² = −((F1 + f01)/L1 + (F2 + f02)/L2)·x   (5)

The spring stiffness is thus as follows:

Ksp = (F1 + f01)/L1 + (F2 + f02)/L2   (6)

Here, the following relations can be obtained when the displacement x is small enough:

L = L1 + L2   (7)
F1 + f01 = F2 + f02 + M·g   (8)

Using equations (7) and (8), the relation between the spring stiffness and the vertical position of the dummy mass becomes

Ksp = (L·(F1 + f01) − L1·M·g) / (L1·(L − L1))   (9)

Finally, the relation between the mechanical resonance frequency and the dummy mass vertical position becomes

f = (1/2π)·√(Ksp/M) = (1/2π)·√((L·(F1 + f01) − L1·M·g) / (M·L1·(L − L1)))   (10)
5. SIMULATION OF MECHANICAL RESONANCE FREQUENCY
The relation between the mechanical resonance frequency and the vertical position of the dummy mass is simulated using the above-mentioned equations. First, the spring stiffness Ksp is obtained from the experimental results for the mechanical resonance frequency f. Next, the tension F1 is obtained using equation (9). Then, the mechanical resonance frequencies at different vertical positions are calculated using equation (10).
Fig.5 shows the relation between the resonance frequency and the dummy mass position. In the theoretical results, the gravitational effect is considered, and the conditions f01 = M·g and f02 = 0 are used. The theoretical results show good correlation with the experimental results. As the dummy mass approaches the center of the vertical stroke, the resonance frequency tends to be low; the vibration is therefore worst at the center of the vertical stroke.
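As a numerical sketch of equations (9) and (10), assuming illustrative values for M, L, and F1 + f01 rather than the paper's measured parameters, the drop in resonance frequency toward the middle of the vertical stroke can be reproduced:

```python
import math

def resonance_frequency(L1, L=1.0, M=0.5, F1_plus_f01=10.0, g=9.81):
    """Resonance frequency [Hz] of the suspended mass from eqs. (9)-(10).
    L1 is the upper wire length [m]; all parameter values are illustrative."""
    K_sp = (L * F1_plus_f01 - L1 * M * g) / (L1 * (L - L1))  # eq. (9)
    return math.sqrt(K_sp / M) / (2.0 * math.pi)             # eq. (10)

# The frequency is lowest near the middle of the stroke, where the paper
# reports the vibration to be worst:
for L1 in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"L1 = {L1:.1f} m -> f = {resonance_frequency(L1):.2f} Hz")
```

The stiffness in equation (9) blows up as L1 approaches either end of the wire, so the frequency is high near the stroke ends and dips in the middle, matching the trend of Fig.5.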
6. MATHEMATICAL MODEL OF HORIZONTAL DRIVE SYSTEM
The linear displacement of the slider is almost proportional to the rotational displacement of the DC motor. However, the displacement of the dummy mass is not proportional to the displacement of the DC motor, due to the vibration of the wire-driven mechanism. Hence, the controlled object can be modeled as a two-mass dynamical system. The first mass system consists of the two sliders and the rotational system including the DC motor; the second mass system consists of the linear movement system of the dummy mass and the arm. Therefore, the horizontal drive system is modeled as follows:

J·d²θ/dt² + (Kcr + Kt·Ke/Ra)·dθ/dt + Kgh²·Kgp²·Ksp·θ − Kgh·Kgp·Ksp·xs = (Kt·Kpa/Ra)·v
M·d²xs/dt² + Kcl·dxs/dt + Ksp·xs − Kgh·Kgp·Ksp·θ = 0
y = Kd·xs   (11)

where
M : mass
J : moment of inertia
Kcr : damping factor of the rotational system
Kgp : translation coefficient of the pulley
Kt : torque constant of the DC motor
Ra : resistance of the DC motor
Kpa : power amplifier gain
Ke : back-electromotive-force constant of the DC motor
Ksp : spring stiffness
Kcl : damping factor of the linear system
Kgh : gear ratio
Kd : transducer coefficient
v : input voltage of the DC motor

7. VIBRATION REDUCTION USING CLASSICAL CONTROL
Vibration reduction using classical control is discussed in this section. A personal computer with an A/D and D/A board is used as the controller, as shown in Fig.6; here, the experimental setup with the moving sensor was used. The laser sensor is attached to the slider of the horizontal drive system. The moving-sensor control loop is our proposed scheme to reduce the dummy mass vibration: the laser sensor detects the displacement between the dummy mass and the slider of the horizontal drive system. Fig.7 shows the block diagram of the control system, where Kp1 is the gain of the potentiometer loop and Kp2 is the gain of the laser sensor loop. The laser sensor loop does not act when the value of Kp2 is zero, and the vibration reduction control acts in proportion to the value of Kp2. Figs.8 and 9 show the experimental results with and without vibration reduction control. It is clear that the laser sensor loop is effective in reducing the vibration.
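The vibration mechanism behind model (11) can be sketched numerically. The sketch below keeps only the second mass system (the dummy mass coupled to the slider through the effective stiffness Ksp) and uses illustrative parameter values, not the paper's measured ones:

```python
import math

# Minimal simulation of the second mass system in model (11): the dummy
# mass is coupled to the slider position through the wire's effective
# spring stiffness K_sp. All parameter values are illustrative.
M, K_sp, K_cl = 0.5, 30.0, 0.05   # mass [kg], stiffness [N/m], damping [N*s/m]
dt, T = 0.001, 10.0               # time step and duration [s]

x, v = 0.0, 0.0        # dummy mass position and velocity
x_slider = 0.1         # the slider makes a 0.1 m step at t = 0
crossings = 0
prev = x - x_slider
for _ in range(int(T / dt)):
    a = (-K_cl * v - K_sp * (x - x_slider)) / M   # spring-damper coupling
    v += a * dt                                    # semi-implicit Euler step
    x += v * dt
    rel = x - x_slider                             # what the laser sensor sees
    if prev < 0.0 <= rel:                          # upward zero crossing
        crossings += 1
    prev = rel

f_measured = crossings / T
f_predicted = math.sqrt(K_sp / M) / (2.0 * math.pi)
print(f"measured {f_measured:.2f} Hz, predicted {f_predicted:.2f} Hz")
```

The relative displacement x − x_slider oscillates at roughly (1/2π)·√(Ksp/M); this relative displacement is exactly the quantity the laser sensor loop of Section 7 measures and suppresses.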
8. VIBRATION REDUCTION USING OBSERVER-BASED OPTIMAL CONTROL
Vibration reduction using an observer-based optimal control is carried out in this section. H2 control [11] is utilized as the observer-based optimal control. The state equation of the controlled system becomes

dx/dt = A·x + B1·w + B2·u
z = C1·x + D12·u
y = C2·x + D21·w   (12)

where x is the state vector, u is the control input vector, w is the disturbance, z is the output vector for evaluation, t is the time, y is the measured output vector, and C1 and B1 are the weighting factors. The cost function to be minimized is

J = ∫[t,∞] (uᵀ·u + xᵀ·C1ᵀ·C1·x) dt   (13)

The controller with an observer is

dx̂/dt = A·x̂ + B2·û + Y·C2ᵀ·(y − C2·x̂)
û = −B2ᵀ·X·x̂   (14)

where û is the control input using the estimated state variables, and X and Y are the positive solutions of the following two Riccati equations:

X·B2·B2ᵀ·X − Aᵀ·X − X·A − C1ᵀ·C1 = 0
Y·C2ᵀ·C2·Y − Y·Aᵀ − A·Y − B1·B1ᵀ = 0   (15)

The controlled system of the suspended hand can be modeled as follows:

A = [ 0, 1, 0, 0;
      −Ksp/M, −Kcl/M, Kgh·Kgp·Ksp/M, 0;
      0, 0, 0, 1;
      Kgh·Kgp·Ksp/J, 0, −Kgh²·Kgp²·Ksp/J, −(Kt·Ke/Ra + Kcr)/J ]

B1 = [ 0, 0, 0, 0, 0;
       0, 0, 0, 0, 0;
       0, 0, 0, 0, 0;
       b1·Kt·Kpa/(J·Ra), 0, 0, 0, 0 ]

B2 = [ 0; 0; 0; Kt·Kpa/(J·Ra) ]

C1 = [ 0, 0, 0, 0;
       0, 0, 0, 0;
       −c1a, 0, c1a·Kgh·Kgp, 0;
       0, 0, 0, 0;
       0, 0, c1b·Kgh·Kgp, 0 ]

C2 = [ −Ks1, 0, Ks1·Kgh·Kgp, 0;
       0, 0, Ks2·Kgh·Kgp, 0 ]

D12 = [ 1, 0, 0, 0, 0 ]ᵀ

D21 = [ 0, 0, 1, 0, 0;
        0, 0, 0, 0, 1 ]   (16)

Here, the controller and observer gains are changed through the values of b1, c1a, and c1b. Fig.10 shows the block diagram of the H2 control system, and Fig.11 shows the experimental results using H2 control. It is clear that the laser sensor loop is effective in reducing the vibration. Compared with the classical control results in Fig.9, the H2 control results in Fig.11 are superior.
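The design in equations (12)-(15) can be exercised numerically. The sketch below is not the paper's implementation: the plant numbers are illustrative stand-ins for Ksp, Kcl, Kcr, and the lumped gains, and the two Riccati equations of (15) are solved with a plain Hamiltonian eigenvector method rather than a library routine.

```python
import numpy as np

def solve_care(A, B, Q):
    """Stabilizing solution of A'X + XA - XBB'X + Q = 0 (the form of
    eq. (15)), from the stable invariant subspace of the Hamiltonian."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                      # eigenvectors of the n stable modes
    return (Vs[n:, :] @ np.linalg.inv(Vs[:n, :])).real

# Illustrative two-mass plant, state order [x_s, dx_s/dt, theta, dtheta/dt]
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-30.0, -0.1, 30.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [30.0, 0.0, -30.0, -2.0]])
B2 = np.array([[0.0], [0.0], [0.0], [1.0]])    # motor input channel
B1 = B2                                        # disturbance at the input
C1 = np.eye(4)                                 # state weighting
C2 = np.array([[1.0, 0.0, 0.0, 0.0],           # mass-position measurement
               [0.0, 0.0, 1.0, 0.0]])          # slider-angle measurement

X = solve_care(A, B2, C1.T @ C1)       # controller Riccati equation
Y = solve_care(A.T, C2.T, B1 @ B1.T)   # observer Riccati equation
Kc = B2.T @ X                          # state feedback: u_hat = -Kc @ x_hat
Kf = Y @ C2.T                          # observer gain, as in eq. (14)
print("closed-loop poles:", np.linalg.eigvals(A - B2 @ Kc))
```

Both A − B2·Kc (the state feedback loop) and A − Kf·C2 (the observer loop) come out stable, which is what the separation structure of equation (14) relies on.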
9. CONCLUSIONS
An assist robot for upper limb motion for elderly or disabled people was proposed in this paper. The proposed robot can be attached to a wheelchair, and can actuate the wrist of an upper limb in three orthogonal directions by a wire-driven control system. The wire-driven control system is advantageous for designing a compact, lightweight, and low-cost mechanism. In addition, a vibration reduction system was also developed in order to decrease the vibration that occurs in the wire-driven system. In this paper, the concept of the robot, the mechanical structure of an experimental setup, the mechanical characteristics, the control system, and the experimental results were described.
REFERENCES
[1] S.Hashino, Daily life support robot, J. of Robotics Soc. of Japan, Vol.14, No.5, p.614 (1996)
[2] M.Tateno, H.Tomita, S.Hatakeyama, O.Miyashita, A.Maeda, and S.Ishigami, Development of powered upper-limb orthoses, J. of Soc. of Life Support Technology, Vol.5, No.5, p.156 (1998)
[3] K.Homma and T.Arai, Upper limb motion assist system with parallel mechanism, J. of Robotics Soc. of Japan, Vol.15, No.1, p.90 (1997)
[4] K.Homma, S.Hashino, and T.Arai, An upper limb motion assist system: experiments with arm models, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, p.758 (1998)
[5] P.S.Lum, C.G.Burgar, and H.F.Van der Loos, The use of a robotic device for post-stroke movement therapy, Proc. Int. Conf. on Rehabilitation Robotics, p.107 (1997)
[6] Y.Takahashi, Y.Tomatani, Y.Matsui, Y.Honda, and T.Miura, Wire driven robot hand, Proc. IEEE Int. Conf. on Industrial Electronics, Control, and Instrumentation, p.1293 (1997)
[7] Y.Takahashi, H.Nakayama, and T.Nagasawa, Biped robot to assist walking and moving up-and-down stairs, Proc. IEEE Int. Conf. on Industrial Electronics, Control, and Instrumentation, p.1140 (1998)
[8] Y.Takahashi, T.Iizuka, and H.Ninomiya, Standing-on-floor type tea serving robot using voice instruction system, Proc. IEEE Int. Conf. on Industrial Electronics, Control, and Instrumentation, p.1208 (1998)
[9] Y.Takahashi, M.Nakamura, and E.Hirata, Tea serving robot suspended from ceiling, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, p.1296 (1998)
[10] Y.Takahashi, T.Hanzawa, Y.Arai, and T.Nagashima, Tire driven stick robot to assist walking and moving up-and-down stairs, Proc. Int. Conf. Control, Automation, Robotics and Vision, p.95 (1998)
[11] J.C.Doyle, K.Glover, P.Khargonekar, and B.Francis, State space solutions to standard H2 and H∞ control problems, IEEE Trans. Auto. Control, AC-34(8), p.831 (1989)
Fig.1 Concept of upper limb motion assist robot (wheelchair carrying a three-dimensional drive mechanism; x, y, and z axes)
Fig.2 Comparison between cantilever type and wire driven type ((a) cantilever type: higher power, larger mechanism; (b) wire driven type: lower power, smaller mechanism)
Fig.3 Experimental setup (arm, wires, and dummy mass)
Fig.4 Dynamical model of suspended dummy mass during vibration
Fig.5 Relation between resonance frequency and dummy mass position (experimental and calculated; resonance frequency [Hz] against dummy mass vertical position [mm])
Fig.6 Configuration of control system (PC with A/D and D/A boards, amplifier, DC motor, potentiometer, laser sensor, and reflecting plate)
Fig.7 Block diagram of control system (potentiometer loop and laser sensor loop)
Fig.8 Positioning results without vibration reduction control (classical control): slider and laser-sensor displacement [mm] against time [sec]
Fig.9 Positioning results with vibration reduction control (classical control): displacement [mm] and motor current [A] against time [sec]
Fig.10 Block diagram of H2 control system (potentiometer loop and laser sensor loop)
Fig.11 Positioning results with vibration reduction control (H2 control): displacement [mm] and motor current [A] against time [sec]
Driver’s SEAT: Simulation Environment for Arm Therapy
M. J. Johnson1,2, H. F. M. Van der Loos 1,3, C. G. Burgar1,3, L. J. Leifer2
Rehabilitation R&D Center (RRDC) - VA Palo Alto HCS1, Depts. of Mechanical
Engineering2 and Functional Restoration3, Stanford University
Abstract
Hemiplegia, affecting approximately
75% of all stroke survivors, is a
common neurological impairment.
Hemiplegic upper and lower limbs
exhibit sensory and motor deficits on
the side of the body contralateral to the
location of a cerebral vascular
accident. Recovery of coordinated
movement of both upper limbs is
important for bimanual function and
promotes personal independence and
quality of life. This paper will describe
the philosophy and design of Driver’s
SEAT, a one degree of freedom robotic
device that aims to promote
coordinated bimanual movement.
Introduction
The Driver’s Simulation Environment
for Arm Therapy (SEAT) is a
prototype rehabilitation device
developed at the VA Palo Alto Health
Care System (VAPAHCS)
Rehabilitation Research & Development Center (RRDC) to test
the efficacy of patient-initiated
bimanual exercise to encourage active
participation of the hemiplegic limb.
The robotic device is a car steering
simulator, equipped with a specially
designed steering wheel to measure the
forces applied by each of the driver’s
limbs, and with an electric motor to
provide programmed assistance and
resistance torques to the wheel.
Background
A variety of upper limb rehabilitation
techniques have been used to help
improve motor control and physical
performance outcomes in subjects with
hemiplegia. Despite the varied efforts,
studies [e.g., 1,2,3] suggest that upper
limb rehabilitation therapy has a less
than 50% success rate. However, in
some small scale studies, researchers
have demonstrated that recovery of
arm function may be improved even in
chronic hemiplegia. After synthesizing
the results of several of these
intervention techniques, Duncan [4]
noted that forced-use paradigms [e.g.,
5,6,7] and enhanced therapy [e.g., 8,9]
provided the most promising evidence
that motor recovery can be facilitated.
These effective interventions were
described as having the following in
common: active participation of the
patient in tasks, increased practice
times outside of therapy sessions,
increased involvement of the paretic
limb in exercises, and more repetitive
training. Besides these elements, other
variables, such as early intervention,
external motivation, and bimanual
exercise, have been proposed as
important for successful rehabilitation
outcomes. Driver’s SEAT is designed
to incorporate many of these key
components into rehabilitation therapy.
Driving is a motivational functional
task. In their literature review, Katz et al. [10] suggested that cessation of
driving in stroke patients is associated
with social isolation and depression.
Therefore, if the ability to drive can be
restored, the resulting independence
can reduce a person’s sense of
immobility as well as improve their
prospects for rehabilitation. In view of
this, the motivation to use Driver’s
SEAT to improve upper limb
performance should be a strong one,
since subjects are given the
opportunity to practice coordinated
steering, a skill integral to driving.
Driver's SEAT is designed to use a
modified forced-use paradigm to
enable subjects to engage their paretic
limb. The robotic device will engage
muscle groups of the shoulder and
elbow in a bimanual exercise that uses
a simple (one-degree of freedom) task.
Three steering modes are designed into
Driver's SEAT to allow the paretic and
non-paretic limbs of subjects to
interact in three different ways. In
each mode, subjects' ability to
successfully complete the steering
tasks is coupled to their ability to
modify the forces they generate on the
steering wheel with each limb.
Hardware/Software Design
Sustaining motivation throughout a
rehabilitation program using Driver’s
SEAT is facilitated by transferring
some of the responsibility for task
success from the therapist to the
subject. One suggested method is to
engage subjects in patient-controlled exercises. The benefits of patient-controlled exercise are under investigation in another study at the RRDC, the "Mechanically Assisted Upper Limb Movement for Assessment and Therapy" (MIME) study [11]. In that study, a six-degree-of-freedom robot is used to implement bimanual exercises (structured tracking tasks) that allow the non-paretic limb to guide the therapy of the paretic arm. As a result, the person initiates and controls the therapy in a natural way. The level of each subject's recovery determines the type of force intervention given.
The hardware has been designed to
interface with a low cost PC-based
driving simulator designed and built by
Systems Technology Inc. (STI) [12].
The value added to the Driver's SEAT system by STI's simulator is its
ability to give realistic graphical road
scenes and quantify cognitive and
sensory/motor skill recovery using
both position and force related
performance measures.
The current Driver's SEAT system
(Figure 1) consists of a motor, an
adjustable-tilt (0°-90°), split steering
wheel, a height-adjustable frame,
wheel position sensor (optical
encoder), wheelrim force sensors, STI's
simulation hardware and the
experimenter's computer hardware.
In real time, the STI computer
generates the graphical scenes and
collects various variables associated
with the steering dynamics, i.e., lateral
acceleration, steering angle and yaw
rate. The angular position of the
steering wheel controls the lateral
position of the car image on the
generated roadway scene. A typical
road scene is designed using STI’s
scenario definition language. The
scene is made to appear 3D and the
roadway moves towards the driver as a
function of speed. Several road scenes, designed to last no longer than 3 minutes, give users the "feel" of rural, suburban, and urban driving.
Figure 1: Driver's SEAT System
Throughout this paper, steering tasks
are defined as the roadway scene and
the set of instructions given to the
drivers to guide them in navigating the
scene. Thus, a steering task is
designed such that if users follow the
experimenter’s instructions and 1)
navigate the roadway scene in such a
way as to keep their car icon tracking a
road edge line and 2) coordinate their
limbs as instructed, they would
experience success. Steering tasks are
implemented without user-controlled
accelerating and braking in order to
allow users to concentrate solely on
steering. The speed of the scene is set
a priori and remains constant
throughout the task.
The experimenter’s computer is the
nucleus of the Driver’s SEAT system.
Through a series of menus, the driver
programs written in "C" allow the
experimenter to pick the parameters
that determine the steering tasks the
STI sub-system displays to the user
and the parameters that determine the
steering mode experienced by the user.
Also, this computer is used to record
the signals from the position and force
sensors and update the torque setting to
the motor via a motion control board
and a power amplifier.
The two computers are set up to communicate over serial (RS-232 protocol) and digital ports. The
commands for choosing the roadway
scene are sent to the STI computer via
serial ports and the signals to start/stop
collection and stop torque control are
sent to the experimenter’s computer
via digital ports.
The unique split steering wheel
configuration, shown in Figure 2,
enables the forces generated with each
limb to be measured independently.
The rim of the wheel is a steel tube that
is split into two sections. Each half is
supported by two flexible spokes that
flex in the tangential direction. The
tangential forces are measured by two
load cells located at the base of the
wheel.
designed the system to be able to
implement three steering modes that
complement the three main recovery
stages of stroke [2]. Named according
to the participation of the paretic limb,
the modes are passive movement (PM),
active steering (AS), and normal
steering (NS).
The PM mode was designed for
subjects whose paretic limb is flaccid.
Since they have no volitional control
over their paretic limb, they are
instructed to perform the steering task
using their non-paretic limb. The nonparetic limb is used to begin retraining
of the paretic limb. At the wheel, the
weight of the paretic limb is
compensated by the servo-mechanism,
i.e., the paretic limb is moved
passively while the non-paretic limb
actively steers. This mode design was
based on research [1,13] that suggests
that motor recovery may be enhanced
by matching up the cortical activity
associated with attempting to initiate
movement with proprioceptive
feedback associated with that
movement.
When subjects begin to demonstrate
that they are regaining some volitional
control over their paretic limb, they are
permitted to begin exercising in the AS
mode. The AS mode was designed for
subjects whose paretic limb has
Figure 2: The split-steering wheel
moderate hypertonia and synergistic
movement. Subjects are instructed to
Modes of Operation
perform the steering task using their
paretic limb, relaxing, if possible, their
Driver's SEAT is intended to be used
non-paretic limb. At the wheel, the
throughout the entire recovery cycle of
forces exerted by the non-paretic limb
a subject with hemiplegia. We
are counteracted by the servo- 230 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
equivalent torque signal by modifying
their limb torques in a manner
appropriate to the current steering
mode.
mechanism, i.e., the paretic limb is
encouraged to steer actively and the
non-paretic limb is actively
discouraged. This mode was designed
based on the "forced-use" research.
The three steering modes are
implemented using the proportional derivative (PD) torque control law
shown in Equation 1 where K p and Kv
are the proportional and derivative
constants, respectively.
The NS mode was designed to allow us
to assess how subjects distribute their
limb forces, i.e., how much the paretic
limb participates in the steering tasks.
The mode is also used as a general
exercise mode to assess limb
coordination. Typically, subjects use
this mode as their primary exercise
mode when their motor deficits have
been minimized and “normal”
voluntary control has returned. They
are encouraged to practice coordinated
driving and improve their force
symmetry by actively steering with
both their paretic and non-paretic
limbs.
Control Architecture
Equation 1:
T motor = K p (T actual
T actual is the actual torque on the
steering wheel, Tactual −1 is the previous
value of the actual torque, Tdesired is the
torque command sent to the motor, and
∆t is the sampling time. The desired
torque is given by Equation 2.
To successfully complete a steering
task on a simulator a driver is said to
act as a position controller. In the
context of driving, a position controller
extrapolates from the displayed
roadway scene a control signal (desired
steering angle) that allows the vehicle
to track on or within road edge lines
[14]. Studies in manual control theory
[e.g., 14] suggest that this position
control action is intuitive and can be
performed by the average human.
In the Driver's SEAT control design,
users are asked to go a step further and
convert their steering control signal
into an equivalent torque control
signal. They are asked to generate this
− T desired ) + Kv (T actual − T actual −1 ) ∆t
Equation 2:
T desired = T restore − Tresist + Tassist
The restoring torque is defined in
terms of the steering wheel angle, θ :
9
T restore = Tmax * Sin( θ )
2
θ
π
when θ ≤ and Trestore = − * K a * θ
9
θ
π
when θ > , where Ka is defined so
9
that for a given steering angle
range, Trestore does not exceed a
maximum permissible torque (this
torque is defined based on safety and
other subjective factors). The sine
function allows us to smoothly
transition (±10•) between steering
- 231 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
directions and thus maintain “road
feel” at the wheel.
If the subject has left hemiplegia (the left limb is paretic and the right limb is non-paretic), then T_resist = F_non-paretic · R and T_assist = F_paretic · R. The forces (F_non-paretic and F_paretic) are obtained from the load cells, and R is the radius of the steering wheel.
The desired torque changes with the
steering modes and is used to create
the interaction effects at the wheel.
Again, assuming the subject has left
hemiplegia, Table 1 shows how the
desired torque changes.
Mode  Desired torque
PM    T_desired = T_restore + T_assist
AS    T_desired = T_restore − T_resist
NS    T_desired = T_restore

Table 1: The desired torque used in each mode.
For example, if subjects steering in the AS mode are able to modify their limb dynamics so that only their paretic limb steers, then they will experience minimal resistance torques. The dominant motor torques on the wheel will be the restoring torques that give a sense of "road feel" to the task.
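The control law above can be collected into a short sketch. This is a minimal illustration of Equations 1 and 2 and Table 1 for the left-hemiplegia case, not the authors' implementation; the gain values, maximum torque, wheel radius, and sampling time are placeholder assumptions.

```python
import math

# Illustrative constants (placeholders, not the published values).
KP, KV = 1.0, 0.05           # proportional and derivative gains (Eq. 1)
T_MAX = 2.0                  # peak restoring torque, N*m
KA = 9 * T_MAX / math.pi     # restoring-torque slope beyond the sine region
DT = 0.01                    # sampling time, s
R = 0.18                     # steering wheel radius, m

def t_restore(theta):
    """Restoring torque: sinusoidal near center, linear beyond |theta| = pi/9."""
    if abs(theta) <= math.pi / 9:
        return T_MAX * math.sin(9 * theta / 2)
    return -(theta / abs(theta)) * KA * theta

def t_desired(mode, theta, f_paretic, f_non_paretic):
    """Eq. 2 / Table 1, assuming left hemiplegia (left limb paretic)."""
    t_assist = f_paretic * R
    t_resist = f_non_paretic * R
    if mode == "PM":              # paretic limb assisted
        return t_restore(theta) + t_assist
    if mode == "AS":              # non-paretic effort resisted
        return t_restore(theta) - t_resist
    return t_restore(theta)       # NS: restoring torque only

def t_motor(t_actual, t_actual_prev, t_des):
    """Eq. 1: PD law on the torque error, sampled at DT."""
    return KP * (t_actual - t_des) + KV * (t_actual - t_actual_prev) / DT
```

At each control cycle the load-cell forces and wheel angle would be sampled, `t_desired` evaluated for the active mode, and `t_motor` sent to the steering-wheel motor.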
Experimenter/User Protocol
The Driver's SEAT system is designed
to be used with subjects with right or
left hemiplegia. A typical session
using the system progresses as follows:
The experimenter asks the subject to sit in a posture-supported chair and place their hands at the ±90° (3 and 9 o'clock) positions on the steering wheel. Their arms are placed in the following position: forearms neutral, elbows flexed to about 90 degrees, and shoulders slightly abducted and flexed. The steering wheel tilt and height are adjusted to provide a comfortable interaction with the steering wheel throughout the range of motion. The
experimenter describes the steering
task to the subject and then begins the
road scene. The subject is expected to
perform the described task.
For subject safety, adjustable mechanical stops limit the rotation of the steering wheel to ±135° from neutral, and an emergency stop pedal is placed under the subject's left foot so that power can be disconnected at any time during a session.
Future Work
To assess the efficacy of the three
operational modes, at least 8 stroke
patients will be tested. We will
explore whether our designed modes
can encourage subjects' non-paretic
and paretic limbs to interact in the
ways we have proposed. Along with
video data and EMG muscle group
activity, we will use our measures of
wheel position and bilateral limb
forces exerted on the wheel to
determine the success of our approach.
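As one illustration of how the bilateral wheel-force data might be reduced, a simple symmetry index can be computed from the two load-cell force magnitudes. The index below is a hypothetical reduction for illustration only; the paper does not define a specific symmetry measure.

```python
def symmetry_index(f_paretic, f_non_paretic):
    """Fraction of total steering force contributed by the paretic limb.

    0.5 means perfectly symmetric effort; values below 0.5 mean the
    non-paretic limb dominates. Both inputs are force magnitudes (N).
    """
    total = f_paretic + f_non_paretic
    if total == 0:
        return 0.5  # no force applied; treat as neutral
    return f_paretic / total

def mean_symmetry(samples):
    """Average symmetry over a session; samples are (f_paretic, f_non_paretic) pairs."""
    vals = [symmetry_index(fp, fn) for fp, fn in samples]
    return sum(vals) / len(vals)
```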
Acknowledgments
This project was supported by the core funds of the VAPAHCS RRDC and NASA Grant No. NGT: 2-52208. We thank Bimal Aponso of STI for his support in interfacing the STI simulator with our system.

References
1. Jorgensen C, et al. Outcomes and the course of recovery in stroke. Part II: Time course of recovery. The Copenhagen Stroke Study. Arch Phys Med Rehabil, Vol. 76, May 1995.
2. Gresham GE, et al. Post-stroke rehabilitation. Clinical Practice Guideline, Number 16, US Department of Health Services, AHCPR Publication No. 95-0662, May 1995.
3. Ottenbacher KJ. Why rehabilitation research does not work (as well as we think it should). Arch Phys Med Rehabil, Vol. 76, February 1995, p. 123.
4. Duncan PW. Synthesis of intervention trials to improve motor recovery following stroke. Top Stroke Rehabil, Vol. 3, No. 4, Winter 1997, p. 1.
5. Taub E, Miller NE, Novack TA, Cook EW, Fleming WC, Nepomuceno CS, Connel JS, Crago JE. Technique to improve chronic motor deficit after stroke. Arch Phys Med Rehabil, Vol. 74, April 1993, p. 347.
6. Wolf SL, Lecraw DE, Barton LA, Jann BB. Forced use of hemiplegic upper extremities to reverse the effect of learned nonuse among chronic stroke and head-injured patients. Experimental Neurology, 1989, 104(2), p. 125.
7. Barton LA, Wolf SL. Learned nonuse in the hemiplegic upper extremity. Advances in Stroke Rehabilitation, Gordon WA (ed.), 1993, Butterworth-Heinemann, Boston, Chapter 5.
8. Sunderland A, Tinson DJ, Bradley EL, Fletcher D, Hewer RL, Wade DT. Enhanced physical therapy improves arm function after stroke: a randomised controlled trial. Journal of Neurology, Neurosurgery, and Psychiatry, Vol. 55, No. 7, July 1992, p. 530.
9. Sunderland A, Tinson DJ, Bradley EL, Fletcher D, Hewer RL, Wade DT. Enhanced physical therapy improves arm function after stroke: a one year follow-up study. Journal of Neurology, Neurosurgery, and Psychiatry, Vol. 57, 1994, p. 856.
10. Katz RT, Golden RS, Butter J, Tepper D, Rothke S, Holmes J, Sahgal V. Driving safety after brain damage: follow-up of twenty-two patients with matched controls. Arch Phys Med Rehabil, Vol. 71, February 1990, p. 133.
11. Lum PS, Burgar CG, Van der Loos HFM. The use of a robotic device for post-stroke movement therapy. Proceedings of the 1997 International Conference on Rehabilitation Robotics, Bath, U.K., April 14-15, 1997, pp. 107-110.
12. Allen RW, Rosenthal TJ, Parseghian Z. Low cost driving simulation for research, training and screening applications. Society of Automotive Engineers Technical Paper Series, No. 950171, February 27, 1995.
13. Ada L, Canning JH, Carr SL, Kilbreath SL, Shepherd RB. Task specific training of reaching and manipulation. Insights into the Reach to Grasp Movement, Bennett KMB and Castiello U (eds.), 1994, Elsevier Science B.V., Chapter 12.
14. McRuer DT, Allen RW, Weir DH, Klein RH. New results in driver steering control models. Human Factors, 19(4), August 1977, pp. 381-397.
A ROBOTIC SYSTEM FOR UPPER-LIMB EXERCISES TO PROMOTE
RECOVERY OF MOTOR FUNCTION FOLLOWING STROKE
Peter S. Lum (1,2), Machiel Van der Loos (1,2), Peggy Shor (1), Charles G. Burgar (1,2)
(1) Rehab R&D Center, VA Palo Alto HCS
(2) Dept. of Functional Restoration, Stanford University
Abstract
Our objective is to evaluate the therapeutic efficacy of robot-aided exercise for recovery of upper limb motor function following stroke. We have developed a robotic system which applies forces to the paretic limb during passive and active-assisted movements. A clinical trial is underway which compares robot-aided exercise with conventional NeuroDevelopmental Therapy (NDT). Preliminary data suggests robot-aided exercise has therapeutic benefits. Subjects who have completed a two-month training protocol of robot-aided exercises have demonstrated improvements in active-constrained training tasks, free-reach kinematics, and the Fugl-Meyer assessment of motor function. Integration of robot-aided therapy into clinical exercise programs would allow repetitive, time-intensive exercises to be performed without one-on-one attention from a therapist.
Introduction
Disability resulting from stroke affects individuals, their families, and society. Each year 700,000 people in the United States suffer strokes, and the number of stroke survivors now approaches two million. Stroke is the most common inpatient rehabilitation diagnosis and the resulting loss of upper limb motor function is often resistant to therapeutic efforts. Yet, methods that decrease the workload on clinical staff are needed. Integration of robot-aided therapy into clinical exercise programs would allow repetitive, time-intensive exercises to be performed efficiently.
We have developed a device which facilitates movement in the paretic limb. A Puma 560 robotic arm applies forces to the paretic limb that would normally be provided by a therapist. This system is capable of 4 modes of exercise, all patterned after exercises currently used in therapy. In passive mode, the subject relaxes as the robot moves the limb. In active-assisted mode, the subject triggers initiation of the movement with force in the direction of movement and then "works with the robot" as it moves the limb. In active-constrained mode, the robot provides a viscous resistance in the direction of movement and spring-like loads in all other directions. In bilateral mode, the subject attempts bilateral mirror-image movements while the paretic limb is assisted by the robot. Movement of the contralateral limb is measured by a 6-DOF digitizer; the robot moves the paretic limb to the mirror-image position with minimal delay. A six-axis force-torque sensor measures the interaction forces and torques between the robot and the subject.
A clinical trial is underway to evaluate the therapeutic efficacy of robot-aided exercise for recovery of upper limb motor function relative to conventional NeuroDevelopmental Therapy (NDT). We report results from the first 5 subjects to complete the protocol.
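The bilateral mode amounts to mirroring the measured contralateral hand position across the subject's sagittal plane and tracking that target with the robot. The sketch below illustrates the idea under assumed coordinates (x measured laterally from the body midline); the actual system's kinematics, filtering, and update rate are not described at this level of detail.

```python
def mirror_target(contralateral_pos):
    """Mirror a digitized hand position across the sagittal (x = 0) plane.

    contralateral_pos: (x, y, z) of the non-paretic hand, meters.
    Returns the mirror-image target for the paretic hand.
    """
    x, y, z = contralateral_pos
    return (-x, y, z)

def step_toward(current, target, max_step=0.01):
    """Move the commanded position a bounded step toward the target:
    a crude stand-in for low-delay tracking with a speed limit."""
    out = []
    for c, t in zip(current, target):
        d = max(-max_step, min(max_step, t - c))
        out.append(c + d)
    return tuple(out)
```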
Methods
Chronic stroke subjects (> 6 months post-CVA) are randomly assigned to a robot or control group. Both groups receive 24 one-hour sessions over two months. A typical robot group session begins with 5 min of stretching, followed by tabletop tracing of circles and polygons and a series of 3-dimensional reaching movements, all assisted by the robot. A typical control group session includes NDT-based therapy targeting upper-limb function, incorporating stretching, weightbearing, games and activities (cone stacking, ball tossing, etc.), and 5 min of exposure to the robot with target-tracking tasks. A single occupational therapist supervises all sessions.
All subjects are evaluated pre- and post-treatment with clinical and biomechanical measures. A blinded occupational therapist evaluates the level of motor function in the paretic limb with the Fugl-Meyer exam, and the disability level of the subjects with the Barthel ADL scale and the Functional Independence Measure (FIM). The biomechanical evaluations include measures of isometric strength and free-reach kinematics. Electromyograms (EMG) are recorded from several shoulder and elbow muscles during these evaluations.
Preliminary Results
Robot group subjects exhibited decreased resistance to some passive movements and improved performance of some active-constrained reaching movements post-treatment (Table 1). Decreased resistance to passive movement was indicated by increased total work. Improved performance of active-constrained movements was indicated by increased positive work,
efficiency, % of movement completed, or average velocity (efficiency is defined as the positive work divided by the potential work that would have been done if the forces were directed perfectly toward the target).

Table 1. Improvements in performance metrics for three robot group subjects (A, B, C). Rows are the tested movements (#1 forward; #2 forward-up, shoulder level; #3 forward-up, eye level; #4 forward-lateral; #5 forward-lateral-up, shoulder level; #6 forward-lateral-up, eye level; #7 forward-medial; #8 forward-medial-up, shoulder level; #9 forward-medial-up, eye level; #10 lateral, elbow extended; #11 lateral, elbow flexed); columns are passive work, active-constrained work, efficiency, % completed, and velocity. Significant positive correlations (p<0.05) between performance metric and session number are indicated by "+"; shaded blocks indicate the movement was not tested.

Improved performance of active-constrained movements in one robot subject was clearly due to improved muscle activation patterns. Pre- and post-treatment data for this subject during an active-constrained forward-lateral-up (shoulder level) reach are displayed in Fig. 1. Pre-treatment, no movement was possible. Post-treatment, half the movement could be completed. Pre-treatment, only biceps (antagonist) was strongly activated. Post-treatment, triceps (agonist) was activated while activation of biceps was suppressed.
In addition, several shoulder agonists
were silent pre-treatment, and were
subsequently activated post-treatment.
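The work and efficiency metrics can be made concrete with a short sketch: positive work accumulates force times displacement only where the force does positive work along the path, and efficiency normalizes by the work the same force magnitudes would have done if directed perfectly along the movement. The discretization below is an assumption for illustration; the paper does not give the exact formulas.

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def work_metrics(forces, positions):
    """Compute (positive_work, potential_work, efficiency) along a reach.

    forces, positions: lists of 3-vectors sampled at the same instants.
    """
    pos_work = 0.0
    pot_work = 0.0
    for i in range(1, len(positions)):
        dx = tuple(p1 - p0 for p0, p1 in zip(positions[i - 1], positions[i]))
        f = forces[i]
        w = dot(f, dx)
        if w > 0:                      # count only work toward the target
            pos_work += w
        # work if the same force magnitude were perfectly aligned
        pot_work += dot(f, f) ** 0.5 * dot(dx, dx) ** 0.5
    eff = pos_work / pot_work if pot_work > 0 else 0.0
    return pos_work, pot_work, eff
```

A perfectly aimed force yields efficiency 1.0; a force perpendicular to the movement yields 0.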
The ability to free-reach toward
targets increased post-treatment. The
kinematics of unconstrained reaching
were measured pre and post-treatment.
Table 2. illustrates the cases of
significant increases (p<0.05) in the
extent of reach (indicated by "+").
Shaded squares indicate the reach
could be completed pre-treatment.
While there were no significant
changes in the Barthel ADL scale or
the FIM, all subjects tested to date
have exhibited some improvements in
motor function. Improvements in the
Fugl-Meyer assessment of motor
function in all robot (diamonds) and
control subjects (circles) is illustrated in Fig. 2.

Table 2. Improvements in the extent of free reaches post-treatment. Subjects A, B and C are robot subjects; subjects D and E are controls. Rows are movements #1-#9; significant increases (p<0.05) in the extent of reach are indicated by "+", and shaded squares indicate the reach could be completed pre-treatment.

Fig. 1. Kinematics and EMG for an active-constrained forward-lateral-up (shoulder level) reach in one robot group subject (hand position and target; EMG from biceps, triceps, infraspinatus and mid deltoid, pre- and post-treatment).

Fig. 2. Fugl-Meyer scores pre-, mid- and post-treatment. Circles are the robot group subjects and diamonds are the controls.
Conclusions
Preliminary data from this ongoing clinical trial suggests robot-aided exercise has therapeutic benefits. Improvements have been demonstrated in active-constrained training tasks, free reaching, and the Fugl-Meyer assessment of motor function. It will be possible to determine the efficacy of robot-aided therapy relative to NDT-based therapy after more subjects are tested.
Acknowledgements
Doug Schwandt, MSME; Jim Anderson, JEM; Matra Majmundar, OTR.
Funded by VA RR&D Merit Review project B2056RA.
INTERFACING ARTIFICIAL AUTONOMICS, TOUCH TRANSDUCERS
AND INSTINCT INTO REHABILITATION ROBOTICS.
John Adrian Siegel, Victoria Croasdell; Mercury Research Institute of Science, Art &
Robotics. Byron, Michigan. USA
Abstract
The examples included are ongoing experiments in rehabilitation robotics that relate to the integration of artificial external nervous systems, simple electronic brains, robotics and human interface. Each experiment is founded on the following basis: each person, regardless of the severity of paralysis or amputation, has certain reactionary points, such as eyebrow movement. They also have applicable sensory points which can be acted upon. Through adaptation, the reactionary points can be given a code which can control many functions or modes (a series of automatic functions) or provide accurate sensory feedback. Robotics can thereby return voluntary actions. It can also add the equivalent of artificial instinct, which can provide automatic safety attributes. Modes can combine with the voluntary and instinctive attributes to provide automatic features, such as balancing a glass of water while constantly monitoring and obeying new commands and surveying the surroundings. I have successfully tested the above methods.
Introduction
Approximately sixteen years ago I designed and diagrammed a rehabilitation device which would use a robot arm, equipped with servos, to allow a quadriplegic person to tend to some of their needs. The device had a major shortcoming, as it was designed to react to neck movements, which in many cases are not possible. I had also considered voice control, but the limitations of errors in recognition remain disconcerting. In a crowded room such errors increase to an unacceptable proportion. As years passed I developed an interest in artificial instinct. I was fascinated by the process of artificial autonomic systems and tested primitive circuits and robots which mimicked life forms. I coined the name "Electronic Pets" to describe a variety of small and often hand-held creations. Some of these were designed after common single-cell animals and insects, and focused mainly on reactions to touch, light and sound. Although far from sentient, each of these simple artificial creatures would relate to its environment by creating light patterns, moving or creating sounds. During these experiments I considered the possibility that many reactionary effects we consider as signs of life are actually pre-programmed instinctive responses, which can be defined as the
preliminary programming data for life. I believe that higher brain development is directly dependent on these simple built-in command patterns. Although the higher functional reality of a human is paramount, the basic instinct for self-preservation is ever ready to assist us. Instinct relates through many modes, most of which are linked to survival. Other levels of built-in fundamentals relate to the nervous system and motor control functions. I have been designing a basic equivalent of instinct to function with artificial autonomics, which in turn interacts with the disabled. This strategy will allow people who are paralyzed to regain an additional degree of independence.
Feature Controlled Wheelchair
This experiment uses only three facial movements to utilize fourteen functions that control a five-range-of-motion robotic arm and a mobile wheelchair base, while relating its status through a visual indicator console. This design is easy to control and allows multitasking. It maneuvers around a room in any direction, can pick up and move objects, and allows the user to print or draw on a vertical surface with primitive strokes. The robotic chair was created on a budget of $275 (two hundred and seventy-five dollars). My limited budget forced me to try to condense its circuits and power distribution by designing an unusual circuit which takes simple signals from sensors on a patient's face and integrates them through a matrix of wires, relays and electronics which relate Boolean logic and power distribution in both directions and on one set of common paths.

Next Phase Wheelchair
The exoskeleton design redefines the concept of earlier experiments by reconfiguring the unit to appear to fit like armor, without the drive components being directly visible. As illustrated, this design would reduce the bulky look associated with such concepts by housing the main servo mechanisms under the seat. Each range of motion would have both mechanical
and electronic limits to ensure that hazardous over-travel in a given range of motion does not occur. The design would embody the concept of an artificial motor control and instinctive reactionary system that links to a patient's features. The non-contact link will be self-calibrating and designed to be as inconspicuous as possible. The sensors will work by comparing light and color absorption in relation to trajectory to track small marks on the patient's features. The signal for this measurement will be oscillated at an exact frequency which will be recognized by the detector circuitry. The design also incorporates pulsed-signal artificial touch.
Modes
In each design the challenge in configuration is the inability of a patient to easily convey enough motion request data to the artificial system for fluid movement and quick action. To accommodate this problem, modes of operation can be designed to take care of known factors of movement relating to the surrounding environment. A mode can, for example, be balancing a glass of water, performing an emergency action to avoid tipping, calling for help if the patient's vital signs are questionable, calculating climbing angles for rough terrain, navigating towards an object, shaking hands, etc.

Artificial Touch
Pulsed-signal pressure can yield a sense of touch, both in location perception and intensity. My pulsed-signal transducer consists of a basic 555 timer IC as an oscillator which is tuned to approx. 70 Hz. This in turn is connected to a small switching transistor which powers an electromagnetic coil and a movable steel plate measuring approx. 3/8" square. Ideally the electromagnetic coil (transducer) would be built out of electroactive polymers. Future touch transducers will be designed as arrays of tactile units placed along areas of sensitive tissue. The simple version of the experiment cost only $10 (ten dollars) to build. Pulsed signals are easily identified, because they generate a perceivable phase pattern, which readily converts into electrical variances in the nerve pathways. Naturally this form of pulsed artificial touch is far from normal, but it is an effective and extremely low-cost method to create tactile feedback. This simple experiment was linked to my forehead and connected across a pair of eyeglasses. Peltier junctions can also be added to this concept to allow a sense of hot and cold.

Safety
I distribute functional limitations across a robotic device to increase the chances of safe operation. For instance, limits and simple logic circuits relate positions of arms and will not let them travel beyond a safe point regardless of what the main circuit board tells it to do. I believe that robots would ideally have their "electronic brain" spread out across the entire robot's body, freeing the main boards to imply actions rather than being the total governing discipline.
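The roughly 70 Hz oscillator in the Artificial Touch section can be sized with the standard 555 astable relation f ≈ 1.44 / ((R1 + 2·R2)·C). The component values below are illustrative, not taken from the author's circuit.

```python
def astable_freq(r1_ohms, r2_ohms, c_farads):
    """Approximate output frequency of a 555 timer in astable mode."""
    return 1.44 / ((r1_ohms + 2.0 * r2_ohms) * c_farads)

# Example sizing (assumed values): R1 = 10 k, R2 = 100 k, C = 100 nF
# lands close to the ~70 Hz target.
f = astable_freq(10e3, 100e3, 100e-9)
```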
John Adrian Siegel & Victoria Croasdell
Mercury Research Institute of Science, Art & Robotics (M.R.I.S.A.R.)
120 S. Saginaw St., P.O. Box 386, Byron, MI 48418, USA.
(810) 266-6513; e-mail: [email protected]; URL: www.shianet.org/~aaris
THE DEVELOPMENT OF HANDY 1, A ROBOTIC SYSTEM TO ASSIST
THE SEVERELY DISABLED
Mike Topping BA Cert. Ed., Jane Smith BA Hons.
Staffordshire University
Stoke on Trent
1. Summary
The Handy 1 (fig. 1) is a rehabilitation robot designed to enable people with severe disability to gain or regain independence in important daily living activities such as eating, drinking, washing, shaving, teeth cleaning and applying make-up.
Fig.1 The Handy 1 system
Changing age structures, resulting in increased numbers of people with special needs, are making ever greater demands on the community of care workers. Dependency upon care staff, particularly in public institutions, where volume dictates the level of personal attention, can have a significant effect on the well-being and quality of life of the individual.
The introduction of systems such as Handy 1 will encourage greater personal activity, leading to an increased level of independence. The impact of the Handy 1 on the community of care workers will also be significant, helping to reduce the amount of stress present in situations where care workers assist disabled people on a one-to-one basis [1].
User Control Characteristics of
Handy 1
A scanning system of lights designed into the tray section (fig. 3) of Handy 1 allows the user to select food from any part of the dish. Briefly, once the system is powered up and food is arranged in the walled columns of the food dish, a series of seven lights begins to scan from left to right behind the food dish. The user then simply waits for the light to scan behind the column of food that he/she wants to eat, and then presses the single switch, which sets the Handy 1 in motion. The robot then proceeds to the selected section of the dish, scoops up a spoonful of the chosen food and presents it at the user's mouth position. The user may then remove the food at his/her own speed, and by pressing the single switch again, the process can be repeated until the dish is empty. The onboard computer keeps track of where food has been selected from the dish and automatically controls the scanning system to bypass empty areas.
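The scan-and-select protocol above can be sketched as a simple loop: the light advances over the seven columns, skipping columns the computer knows are empty, and a single switch press selects the column currently lit. This is an illustrative model, not the Handy 1 firmware; the timing and bookkeeping are assumptions.

```python
import itertools

FOOD_COLUMNS = 7  # walled columns in the dish; an eighth LED selects the cup

def scan_sequence(empty):
    """Yield the column index the scanning light is behind, left to right,
    wrapping around and skipping columns already scooped empty."""
    for col in itertools.cycle(range(FOOD_COLUMNS)):
        if col not in empty:
            yield col

def select(empty, wait_steps):
    """Column chosen if the user presses the switch after `wait_steps`
    scan advances; None if every column is empty."""
    if len(empty) >= FOOD_COLUMNS:
        return None
    seq = scan_sequence(empty)
    col = next(seq)
    for _ in range(wait_steps):
        col = next(seq)
    return col
```

After a successful scoop, the selected column would be added to `empty` once exhausted, so the light bypasses it on later passes.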
Fig 3 Handy 1 Eating tray section
During the early Handy 1 trials, it emerged that although the Handy 1 enabled users to enjoy a meal independently, the majority stated that they would also like to enjoy a drink with their meal. Thus the design of Handy 1 was revised to incorporate a cup attachment (fig. 4) [3], [5]. The cup is selected by activating the single switch when the eighth LED on the tray section is illuminated.
Fig. 4 The cup attachment
Handy 1 food dish
A new plastic dish with seven integral walls was developed in 1995. The dish dramatically improved the scooping performance of the robot, even with the most difficult of foods such as crisps, sweets, biscuits etc. The improvement was due to the inclusion of the walled columns, which ensured that the food could not escape when the spoon scooped into it. We carried out a comparison study to compare the new dish with the previous unwalled dish. 22 foods were used in the study, selected from 5 groups: 'vegetables', 'meals', 'desserts', 'junk foods' and 'fruits'. The study showed that the Handy 1 performed more successfully with food of all types when used in conjunction with the new walled dish. Improvements to the robot's scooping performance were observed particularly with some food types such as peas, where the successful pickup rate rose from 34% to 73% [5].
Current Development Programmes
The Washing, Shaving and Teeth
Cleaning System
The Handy 1 self care system, designed integrally to include the washing, shaving and teeth cleaning attachments, enables people with little or no useful arm or hand movement to achieve independence in these important personal daily living activities (fig. 5).
Fig. 5 Washing, Shaving and Teeth Cleaning Tray

The Handy 1 self care system's human-machine interface is based upon the well proven Handy 1 eating and drinking protocol, i.e. a single switch input used in conjunction with a scanning control methodology. Using this practical device, users are able to instruct Handy 1 to pick up a sponge, move it into the bowl of water, remove excess liquid, apply soap and bring it to the face position, rinse their face and dry it using a warm air dry option to complete the task. The system is fitted with an electric shaver, toothbrush and drinking cup. All can be picked up and manipulated by the user in any order. For example, once chosen, the shaver or toothbrush can be moved by the user to any part of the face or mouth to allow shaving or dental hygiene to be performed in an efficient manner [6], [7].

Handy 1 Make up Tray
A questionnaire sent to one hundred ladies with motor neurone disease indicated that the activity they most wished to regain was applying their own cosmetics. In many cases the ladies commented that carers were unable to apply their makeup exactly to their taste, and subsequently this resulted in a feeling of frustration and loss of self-esteem. Based on this feedback, work commenced on a Handy makeup attachment designed to enable ladies to choose from a range of different cosmetics including blusher, foundation, eye shadows and lipsticks. A prototype system was completed in 1996 and successfully trialled with a number of ladies with motor neurone disease (fig. 6). Briefly, the system works as follows: when Handy 1 is powered up, a series of lights adjacent to each of the cosmetic types begin to scan, one after another, the concept being that when the light adjacent to the required cosmetic is lit, the user simply activates the single switch. At this point the Handy 1 selects the correct brush or applicator and applies the correct amount of blusher, foundation, lipstick, eye shadow etc. Once the make-up has been applied to the applicator, it is then taken by the robot to the appropriate face position where the user is able to apply the make-up [8].

Fig. 6 Handy 1 Make up Tray
Leisure Type Activities
Based on a questionnaire study conducted at a UK Motor Neurone Disease Association Annual General Meeting, we are currently developing a range of leisure type applications. We discovered that many of the disabled people interviewed spent several hours each day in an intellectually inactive state, often left to watch the television for long periods while carers dealt with other important tasks such as cleaning and shopping. The study highlighted conclusively the current lack of appropriate leisure type solutions for disabled people.

As a result, a pre-prototype 'Artbox' was produced which is compact and easy to operate. The prototype was mounted on an adjustable stand to facilitate its use with children or adults sitting in chairs of different heights [9]. Briefly, the system can be described as follows: around a conventionally shaped artist's palette were placed eight different coloured felt tip pens, housed in special holders (fig. 7). An LED was positioned alongside each holder to facilitate any colour pen being chosen and picked up. On each of the four edges of the drawing paper an LED was positioned in order to allow directional control of the pens once they were in position on the paper. Also on the palette were three further LEDs labelled 'up', 'down' and 'new pen'. Their function when selected was to lower and lift the pen from the drawing paper and to enable a new colour pen to be chosen. Users were able to draw by activating the single switch when the LED adjacent to the pen colour they wished to choose was lit [9].

Fig. 7 A Young Child using the Artbox Pre-prototype
The 'Artbox' prototype was tested in schools for physically disabled children, and it provided a pleasant but powerful means for children with special needs to gain and consolidate their skills of spatial and three-dimensional awareness. As part of their education, able-bodied children are encouraged from an early age to develop and exercise their skills of distance judging, creation and spatial awareness. Due to their physical disabilities, children with special needs quite often do not receive this same level of opportunity. Importantly, the Handy Artbox enabled the children who piloted the study to draw directly onto paper, therefore helping to develop judgement and improve their three-dimensional awareness. Overall there was a high level of user and teacher satisfaction with the Artbox, and it was concluded that the system could have the potential of being a useful educational aid for children with severe disability. However, several areas for possible improvement were highlighted: users often felt frustrated by the time delay encountered with the linear scanning lights, and this resulted in rejection of the system by several of the more able children who took part in the study. Also, the viewing angle of the drawing board proved difficult for some of the more severely disabled children to see [9].
A second prototype is now under construction which will address in more detail the human-machine requirements for this particular application, based on the important feedback gained from the pilot study.
Conclusion
The necessity for a system such as Handy 1 is increasing daily: the changing age structure in Europe means that a greater number of people with special needs are being cared for by ever fewer able-bodied people.
The simplicity and multi-functionality of Handy 1 have heightened its appeal to all disability groups and also their carers. The system provides people with special needs with greater autonomy, enabling them to enhance their chances of integration into a 'normal' environment.
Acknowledgements
We gratefully acknowledge the support of the European Commission, Directorate General XII, Science, Research and Development, Life Sciences and Technologies, for their valuable support of the RAIL (Robotic Aid to Independent Living) project. We also gratefully acknowledge the Sir Jules Thorn Charitable Trust for their support of the pilot work on the Artbox project.
References
[1] Topping M J (1995) Handy 1, a Robotic Aid to Independence. Special Issue of Technology & Disability on Robotics. Elsevier Science Ireland Ltd.
[2] Topping M J (1995) The Development of Handy 1, a Robotic Aid to Independence for the Severely Disabled. Proceedings of the IEE Colloquium "Mechatronic Aids for the Disabled", University of Dundee, 17 May 1995, pp 2/1-2/6. Digest No: 1995/107.
[3] Topping M J (1996) 'Handy 1', A Robotic Aid to Independence for the Severely Disabled. Institution of Mechanical Engineers, 19 June 1996.
[4] Smith J, Topping M J (1997) Study to Determine the Main Factors Leading to the Overall Success of the Handy 1 Robotic System. ICORR'97 International Conference on Rehabilitation Robotics, hosted by the Bath Institute of Medical Engineering, Bath University, pp 147-150.
[5] Topping M J, Smith J, Makin J (1996) A Study to Compare the Food Scooping Performance of the 'Handy 1' Robotic Aid to Eating, Using Two Different Dish Designs. Proceedings of the IMACS International Conference on Computational Engineering in Systems Applications CESA 96, Lille, France, 9-12 July 1996.
[6] Topping M (1998) Development of RAIL (Robotic Aid to Independent Living). IX World Congress of the International Society for Prosthetics and Orthotics, June 28 - July 3, 1998, Amsterdam.
[7] Topping M J, Helmut H, Bolmsjo G (1997) An Overview of the BIOMED 2 RAIL (Robotic Aid to Independent Living) Project. ICORR'97 International Conference on Rehabilitation Robotics, 14-15 April 1997, hosted by the Bath Institute of Medical Engineering, Bath University, UK, pp 23-26.
[8] Topping M J (1996) A Robotic Makeover. Brushwork Magazine, Airstream Communications Ltd., West Sussex.
[9] Topping M J, Smith J (1996) Case Study of the Introduction of a Robotic Aid to Drawing into a School for Physically Handicapped Children. Journal of Occupational Therapists, Vol. 59, No. 12, pp 565-569.
PROVAR ASSISTIVE ROBOT INTERFACE
J.J. Wagner2, H.F.M. Van der Loos1, N. Smaby2, K. Chang3, C. Burgar
1 Rehabilitation R&D Center, VA Palo Alto Health Care System;
2 Dept. Mechanical Engineering, 3 Dept. Computer Science, Stanford University
Abstract

This technical paper describes the implementation of the user interface for ProVAR, a desktop assistive rehabilitation robot. In addition to mediating interactions between the user and the real-time robot controller, the interface is responsible for the management of a world model and the high-level validation of task deployment. The user may perform task planning, simulation and execution through a VRML- and Java-based 3D graphical representation of the workspace area and a menu-bar command selection and edit window.

1. Introduction

The difficulty in placing assistive rehabilitation robots in the field is that both the end users and the occupational therapists who will train them are likely to have little or no previous experience with robots, and possibly even limited experience with computers, erecting a high barrier to adoption and use. Thus the utility of an assistive robot is determined largely by the quality and ease of use of its User Interface (UI). The concepts and implementation discussed in this paper provide easier access to functionality than other robot interface concepts.

2. The Architecture

[Figure: block diagram linking Pinocchio (controller, power amps, force/grasp/touch sensors, stop switch) and Jiminey (GUI, ECU, TTK, Headmaster, speaker, microphone, phone/fax, trackball, keyboard, Internet).]
Fig. 1: ProVAR architecture

2.1 Pinocchio

Two workstations, named Pinocchio and Jiminey,1 are used in the ProVAR system, each with a distinct division of labor and responsibilities.2 The two computers communicate with each other via a secure, dedicated 100 Mbit Ethernet connection and are protected from power failure by separate UPSs.
Pinocchio serves as the controller for the Puma 260 robot. It has a 200 MHz Pentium Pro running the QNX real-time operating system at a 500 Hz sampling rate, ensuring stable and safe robot behavior. In addition to controlling the arm, the main servo process can provide real-time simulation of the robot for use in the verification and confirmation of intended commands.
Pinocchio receives and processes single goal execution events that contain a command, a goal frame and a set of parameters. The servo loop is able to provide synchronous limit and validity checking, e.g., joint velocity, torque and force limits. At any time, a new event may be given to preempt the current parameters. In addition, Pinocchio may be queried to supply current operating parameters, including joint angles, motor torques, and readings from the force and proximity sensors3 located on the arm. The state table may be queried for either the actual robotic arm control or the real-time simulation.
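As a sketch of the synchronous checks described above, the following toy routine flags per-cycle limit violations; the function name and limit values are invented for illustration and are not taken from the ProVAR code.

```python
# Hypothetical per-cycle validity check, in the spirit of the servo
# loop's synchronous joint velocity/torque limit checking.
# The limit constants below are illustrative, not ProVAR's values.

JOINT_VELOCITY_LIMIT = 1.5   # rad/s, invented
JOINT_TORQUE_LIMIT = 20.0    # N*m, invented

def check_limits(joint_velocities, joint_torques):
    """Return a list of violation messages; empty means the cycle is valid."""
    violations = []
    for i, v in enumerate(joint_velocities):
        if abs(v) > JOINT_VELOCITY_LIMIT:
            violations.append(f"joint {i}: velocity {v:.2f} exceeds limit")
    for i, t in enumerate(joint_torques):
        if abs(t) > JOINT_TORQUE_LIMIT:
            violations.append(f"joint {i}: torque {t:.2f} exceeds limit")
    return violations

# A safe cycle produces no violations; an over-speed joint is flagged.
print(check_limits([0.2, 0.3, 0.1], [5.0, 3.2, 1.1]))   # []
print(check_limits([2.0, 0.3, 0.1], [5.0, 3.2, 1.1]))   # one violation
```

In a real 500 Hz loop such a check would run every cycle and trigger a safe stop on any violation.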
2.2 Jiminey

While Pinocchio is the real-time controller of the robot, the other workstation, Jiminey, handles the User Interface (UI) and performs high-level task planning, management and execution. Communication between the user and Jiminey will be multi-modal, including a combination of user inputs via voice recognition software, a Prentke-Romich (Wooster, OH) HeadMaster-Plus system for head-motion cursor control, sip-and-puff and cheek-operated switches, as well as standard keyboard and mouse/trackball inputs.
Jiminey has a high-speed Ethernet connection to the Internet, and thus is not isolated from outside attempts to access it. Therefore Windows NT 4.0 was chosen as the operating system for the 266 MHz Pentium II workstation, for the numerous security advantages it offers over Windows 95 and 98.

Fig. 2: The ProVAR VRML and Java based GUI

The ProVAR UI reflects the premise that the relationship between individuals and their assistive robots is fundamentally social. The ProVAR system is presented to the user as a "team" of two characters: Jiminey, a helpful consultant, and Pinocchio, a down-to-earth robot arm.4 While the
user perceives an engagement with both entities directly, in reality the only direct interaction the user has with Pinocchio is the cheek-actuated emergency cut-off switch for the robot. All other communication with Pinocchio occurs through Jiminey.
In addition to mediating commands to Pinocchio from the user, Jiminey is responsible for the management of the world model and the high-level validation of task deployment. For example, before executing a new task, the UI checks the task step list and world model for condition flags that need to be satisfied. When executing the task shown in figure 2, i.e., "Get Videotape from Video Player," Jiminey would first verify the state of several conditions:
• Is the last task/step complete?
• Is the gripper empty according to
the world model?
• Is the gripper empty according to
Pinocchio?
• Does the world model show a tape in the VCR?
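The condition checks above can be sketched as a small validation routine; the flag names and data layout here are hypothetical, not ProVAR's actual world-model API.

```python
# Illustrative pre-execution validation (all names invented): the UI
# verifies task-step and world-model condition flags before dispatching
# a task such as "Get Videotape from Video Player".

def validate_task(world_model, robot_state, last_task_done):
    """Return (ok, failed) where failed lists unsatisfied conditions."""
    checks = {
        "last task/step complete": last_task_done,
        "gripper empty (world model)": world_model["gripper_empty"],
        "gripper empty (robot report)": robot_state["gripper_empty"],
        "tape present in VCR (world model)": world_model["tape_in_vcr"],
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

world = {"gripper_empty": True, "tape_in_vcr": True}
robot = {"gripper_empty": True}
ok, failed = validate_task(world, robot, last_task_done=True)
print(ok)  # True
```

Keeping the world-model check separate from the robot's own report mirrors the paper's double test of the gripper state.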
Other tasks performed by Jiminey include recording a full log for the collection of data on real-time processes. This log is a valuable resource for debugging and field support of the system.5 The ability to extract the event history on the fly also increases the ability to perform remote maintenance and repair of the system, via ProVAR's and ProVIP's telediagnostic capabilities.
3. UI Design
The ProVAR UI is a VRML (Virtual Reality Modeling Language) and Java-based GUI that affords the manipulation of ProVAR via examination of a 3D graphical representation of the world model and via a Java applet's menu-bar command selection window. There are a number of VRML browsers available. Some are stand-alone applications, but most, such as CosmoPlayer, run as plug-ins for web browsers such as Netscape or Microsoft's Internet Explorer. The VRML and Java components are viewed while embedded in a web browser, creating a networked, platform-independent user interface for robot command. Support for voice recognition control uses the hooks in the Java menu bar for keyboard macros or the Java Speech Application Programming Interface.6
In figure 2, the window on the left shows that the VRML robot can be moved around by cursor actions. The "Cosmo" controls along the bottom of the window allow the image to be zoomed. The gray buttons underneath transfer location data to/from the "Command-Edit" Java window in the right-hand portion of figure 2. The individual steps of a command can be built and tested using the pull-down menus in the edit window.
4. The UI Components
4.1 VRML viewing of the world model
VRML is an object-oriented
language, with a model typically
consisting of the geometric description
of an object with appearance and
behavior nodes specified as needed.
To create the ProVAR world model, the work area and walls were created out of geometric primitives. Then a prototype of every "interesting" object (e.g., microwave, videotape, Puma 260) is created, then instantiated if and when it is needed.
In addition to creating static, three-dimensional representations, VRML supports animation through events sent to nodes via JavaScript/VRMLscript and with Java applets.
There are two different colored robots viewable in the VRML browser window. One is the same tan color as the real robot (light gray in figure 3). The position of the tan model in the VRML browser always reflects where Pinocchio reports the Puma arm currently is (as seen in figure 2). The other Puma is colored a surreal magenta (dark gray in figure 3) and is used for simulation and testing of the next command to be sent to the robot. When the simulation robot's orientation is moved, it leaves behind a semi-transparent "ghost" marking its original position.

Fig. 3: Simulation robot (on right) and its marker ghost (on left).

In addition to constructing or modifying the task to be sent to Pinocchio, in some instances the manipulation of some objects in the VRML window can initiate real-world activities such as operating an environmental control unit or answering the telephone.

4.2 Command-Edit window

Every command in the ProVAR system consists of a series of individual steps. These steps are grouped together to form tasks. A command is the completion of one or more tasks and may contain one or more branch points for selection between two different subtasks. The Command-Edit window allows the user to create, simulate and verify all the steps and tasks in a command before sending them to the robot. For example, the command being created in figure 2 is "Play Movie…"

Table 1: Sample Command Task List

Task: Get Video tape from slot one
  Steps: Go to Via Point 1; Go to slot 1; Open Gripper; Move to Tape;
         Close Gripper; Move back from Tape; Go to Via Point 1
Task: Put tape into VCR
  Steps: Go to Via Point 1; Go to VCR player; ...
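The command > task > step hierarchy of Table 1 might be encoded as nested data, for example; the representation below is invented for illustration and is not ProVAR's actual format.

```python
# Hypothetical encoding of the command/task/step hierarchy of Table 1.

command = {
    "name": "Play Movie...",
    "tasks": [
        {"name": "Get Video tape from slot one",
         "steps": ["Go to Via Point 1", "Go to slot 1", "Open Gripper",
                   "Move to Tape", "Close Gripper", "Move back from Tape",
                   "Go to Via Point 1"]},
        {"name": "Put tape into VCR",
         "steps": ["Go to Via Point 1", "Go to VCR player"]},
    ],
}

def flatten(cmd):
    """A command executes as the concatenation of its tasks' steps."""
    return [step for task in cmd["tasks"] for step in task["steps"]]

print(len(flatten(command)))  # 9
```

Branch points between subtasks, which the paper mentions, would add a selection node to this structure.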
The steps in a command list may be created and edited through a number of means. One of the easiest, if less accurate, methods is the direct manipulation of the VRML model into the desired configuration. Joint angles may be recorded and modified from the simulation of the robot, from encoder values taken from the actual robot after it has been moved into position, or by keying the numerical values directly into the joint position array (see figure 4).
Fig. 4: Tasks can be built and tested using the pull-down menus in the Java applet Command-Edit window.

The Java task editing applet interacts with the VRML window via the EAI (External Authoring Interface). The EAI is a library of routines that allows a Java applet on a web page to dynamically access nodes and affect events and fields in a VRML browser embedded on the same page.
Previously created commands can be loaded and executed by selecting them using the menu bar of the Java applet.

Fig. 5: Example of cascading menu in a natural-speech suggestive format

References

1 C. Collodi, Avventure di Pinocchio. Giornale per i bambini, Year 1, n. 1, July 7, 1881.
2 Van der Loos, H.F.M., Wagner, J.J., Smaby, N., Chang, K.-S., Madrigal, O., Leifer, L.J., Khatib, O., ProVAR assistive robot system architecture, Proceedings ICRA'99, May 10-15, 1999, Detroit, MI, pp. 741-746.
3 J.M. Vranish, Guiding robots with the help of Capaciflectors. NASA Tech Briefs, March 1997, 44-48.
4 J.J. Wagner, H.F.M. Van der Loos, L.J. Leifer, Dual-character based user interface design for an assistive robot, Proceedings ROMAN-98 Conference, Kagawa, Japan, Sept. 30 - Oct. 2, 1998.
5 H.F.M. Van der Loos, A History List Design Methodology for Interactive Robots. Ph.D. Thesis, Department of Mechanical Engineering, Stanford University, CA, 1992.
6 Java Speech API, Sun Microsystems, Inc., Palo Alto, CA (http://java.sun.com/products/java-media/speech/index.html)
CONTROL OF A MULTI-FINGER PROSTHETIC HAND
William Craelius1, Ricki L. Abboudi1, Nicki Ann Newby2
1 Rutgers University, Orthotic & Prosthetic Laboratory, Piscataway, NJ
2 Nian-Crae, Inc., Somerset, NJ
ABSTRACT
Our novel prosthetic hand is controlled by extrinsic flexor muscles and tendons of the metacarpal-phalangeal joints. The hand uses tendon-activated pneumatic (TAP) control and has provided most subjects, including amputees and those with congenital limb absence, control of multiple fingers of the hand. The TAP hand restores a degree of natural control over force, duration, and coordination of multiple finger movements. An operable hand will be demonstrated.
BACKGROUND
While modern robotic hands are highly dexterous, having many degrees of freedom, prosthetic hands function much as they did over a century ago, by single-joint grasping. Available hand prostheses are either 'body powered' or 'myoelectric' devices that restore prehension. Standard body powered prostheses are controlled by a harness that couples shoulder movements to opening/closing of a prehensile hand. While harness-type controllers have proven reliable and robust for thousands of amputees over decades [1], their versatility is limited by the number of independent control motions practically possible: one.
Myoelectric controllers may eventually offer more degrees of freedom (DOF), but this number is limited by the ability of the user to learn unnatural movements to activate hand motions and the ability of the controller to decode the resulting electromyographic (EMG) signals [2,3]. Perhaps due to these limitations, myoelectric controllers still provide only one practical DOF, directed by flexion-extension of arm muscles. Accordingly, intensive efforts are underway to extract more independent channels from EMGs, with advanced signal processing techniques, tactile feedback, complex user control schemes, or surgical re-innervation [4,5].
Even the most advanced controllers available today do not fully exploit the residual functions possessed by persons with missing limbs. These include the ability, at least in below-elbow amputees, to control their extrinsic muscles and tendons that flex the metacarpal-phalangeal joints. A controller that could transduce these volitional motions would thus restore, at least partially, the natural link between volition and movement, and would hence be biomimetic. Beyond providing finger control for hand prostheses, the TAP controller may facilitate the transition to more complete hand restorations via surgery.
METHODS
System Design
The overall design goal was to use natural tendon movements in the forearm to actuate virtual finger movement. A volitional tendon movement sliding within the residual limb causes a slight displacement of air in foam sensors apposed to the skin at that location. The resulting pressure differential is transduced, processed, and used to control a multi-finger hand.
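The transduce-process-control pipeline can be caricatured in a few lines; the energy measure and threshold below are invented for illustration and are not the paper's actual decoding algorithm.

```python
# Toy sketch of the TAP control idea (numbers and names invented):
# a tendon movement displaces air in a foam sensor; the pressure
# samples are squared into a mean signal energy and thresholded to
# decide whether the corresponding finger should flex.

THRESHOLD = 0.05  # illustrative energy threshold

def finger_commands(pressure_samples_by_site):
    """Map each sensor site to a flex/no-flex decision."""
    commands = {}
    for site, samples in pressure_samples_by_site.items():
        energy = sum(p * p for p in samples) / len(samples)
        commands[site] = energy > THRESHOLD
    return commands

signals = {"thumb": [0.4, 0.5, 0.3], "index": [0.01, 0.02, 0.01]}
print(finger_commands(signals))  # thumb flexes, index does not
```

Proportional control, which subjects requested, would return the graded energy rather than a binary decision.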
Subject Screening
Twelve subjects filled out a questionnaire (minors with parental assistance) intended to provide demographic and consumer information. All subjects reported interest in multi-finger control and proportional control of force and velocity. Four had congenital deficiencies (2 female/2 male), and eight had acquired amputations (all male). Eight (including three with congenital ULRD) were myoelectric users, one used a body-powered hook, one had a cosmetic hand, and one had a cineplasty APRL hook. There is intense interest in this research as a result of media attention, and our database now includes over 80 potential candidates internationally, as well as many providers and physicians.
Residual and sound limbs were examined and measured. Subjects were asked to perform finger flexions while the examiner palpated the limb. Successful detection of movement on 9/12 subjects indicated acceptance into the next phase.
Six successful candidates, 10 to 40 years of age, having a minimum of 1/3 the original length of the forearm and at least 3 tendons and/or muscle sites, were selected for further testing. Two had congenital ULRD, and the rest had acquired amputations. Tests were performed to evaluate the sensitivity and specificity of the system, the ability of subjects to activate individual fingers, and the degree of control over the signals.

Smart Socket Fabrication

Sensor sites determined during the initial screening were optimized using a transparent test socket. Following optimization of the measurements and the final sensor locations, the sites were transferred back to a positive cast of the limb. A soft silicone sleeve was custom fitted to the cast, with the sensors embedded inside at predetermined locations. An acrylic laminate was fabricated over the silicone, with a wrist unit mounted on the distal end to allow for direct attachment of a prototype mechanical hand.
Alternate smart sockets were made by affixing single TAP sensors with Velcro or glue at selected points on the socket.
RESULTS
Virtual and Mechanical Hands
Initial demonstrations of finger tapping were done using a computer program which displayed the TAP signals, along with a virtual hand having fingers that could be lit independently when the corresponding finger volition was detected. Some subjects, especially children, seemed to enjoy operating the virtual hand and watching their TAP signals on the computer screen [6]. The virtual hand proved to be a valuable training tool.
The first 2 versions of our mechanical hand were simple robotic hands that allowed users to observe finger activations. The version 3 prototype hand was a laminated shell to which were attached fingers obtained from a commercial wooden hand (Becker Imperial, Hosmer-Dorrance), as shown below:
A 2-position thumb was attached to permit either keyboard use or grasping. Linear actuators provided movement of 3 independent fingers, each having approximately 30 degrees of flexion, with a maximum of about 4 N of force. Software, written in 8051 code, controlled the hand. The version 3 hardware microcontroller for the portable hand was the Log-a-Rhythm (Nian-Crae, Inc.) wearable computer. Because of the simplicity of the requested movements, a straightforward decoding algorithm was used.
Structural design of the version 3 hand proved effective. It consisted of an acrylic/carbon fiber shell forming the palmar structure, to which were attached the fingers, thumb, and a wrist unit. The carbon fiber shell was a mirror image of the sound hand of an amputee, and was strong, light, and easily machined. Actuators were mounted in the shell and linked to the fingers. The finger 'bones' were 2 bars articulating at an M-P and a P-P joint, inserted in a spring for passive extension. Two types of finger bone materials were tested: steel bars and nylon rod. Also tested was the return spring design: either internal or external to the bones. The thumb was mounted on a spring-loaded ratchet that had 2 stable positions: abducted and adducted. Structural and actuator designs are currently being further developed.
Biomimetic Control
A signal response matrix was generated for each subject, consisting of three rows, representing requested finger motions, and three columns, representing the three sensor locations, as shown below:

              Site →    T     I     L
Intention ↓
    T                   TT    TI    TL
    I                   IT    II    IL
    L                   LT    LI    LL
TT, for example, represents signal
energy from the thumb sensor for an
intended thumb movement; IT is from
the same sensor for an intended index
movement, and so on. To maximize
the diagonals, subjects were instructed
to use less force to help avoid cross
signals.
Several response matrices
were obtained from each patient. An
example is shown below. All subjects
were able to produce at least one
matrix comparable to the one shown.
sequential finger commands. Results
showed that diagonal signal energies
were all well above zero and ranged
from 1 to 22 dB above noise.
Sensitivity and specificity data were
summarized as the percentages of true
positives for diagonal sensors and true
negatives for off-diagonal sensors,
respectively. Within 3 or 4 sessions,
each subject could elicit independent
signals from each channel, with
sensitivities
and
specificities
approaching 100%. Some subjects
acquired sufficient dexterity to play
simple piano pieces with the hand.
Representative sensitivity data are
shown below for 3 subjects. Similar
results were found for specificity (not
shown).
Figure 1: Response Matrix
Traces represent squared signals
derived from TAP sensors over a 9second period of repetitive finger
flexions. Using the response matrix, the
levels of signals received from the
requested (diagonal) channels and the
cross-talk (off-diagonal) channels were
compared. Ratios of energy levels were
expressed in decibels (dB):
Rij = 10 log ∑
∑
(Di )
( O ij )
, i=1,2,3; j=1,2
Sensitivity
100
80
60
40
20
0
A
B
C
D
E
F
D1
D2
D3
Figure 2: Sensitivity of TAP System.
Sensitivity data was summarized as the
percentage of true positives for each
diagonal sensor on each subject. Bars
represent “diagonal” values (Di) for
each subject. Sensitivity was 100% for
all subjects on at least two channels.
Five or six data points were used in
each case.
where Rij is signal energy of sensor i
with respect to sensor j, Di is the
energy of diagonal sensor i, and Oij is
the energy of the off-diagonal sensors
with respect to each diagonal. Energies
were calculated for the duration of each
protocol,
representing
about
6
- 258 -
ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
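The decibel ratio defined above translates directly into code; the example energies below are invented.

```python
import math

# Sketch of the ratio Rij = 10 log10( sum(Di) / sum(Oij) ), comparing
# the requested (diagonal) channel energy to a cross-talk
# (off-diagonal) channel. Example values are invented.

def rij(diagonal_energies, off_diagonal_energies):
    """Energy ratio of a diagonal channel to an off-diagonal one, in dB."""
    return 10.0 * math.log10(sum(diagonal_energies) / sum(off_diagonal_energies))

# A diagonal channel 10x stronger than its cross-talk gives 10 dB.
print(round(rij([1.0], [0.1]), 1))  # 10.0
```

On this scale the reported 1-22 dB range corresponds to diagonal energies roughly 1.3 to 160 times the cross-talk energy.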
Dexterity Limits
Requested movements (3 subjects) consisted of individual finger taps and grasping. Subjects were asked to sustain signal movements for variable times, and to apply forces of low, intermediate and high intensity. The average frequency of tested tapping movements was 2.5 Hz. Subjects were able to sustain supra-threshold signals for up to 3 seconds.
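A tapping frequency of the kind reported above can be estimated from a recorded signal by counting upward threshold crossings; the signal, threshold and sampling rate below are invented for illustration.

```python
# Hypothetical estimate of tapping frequency from a TAP-like signal:
# count upward threshold crossings and divide by the record duration.

def tap_frequency(signal, threshold, sample_rate_hz):
    """Taps per second, counted as upward crossings of the threshold."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if a < threshold <= b
    )
    duration_s = len(signal) / sample_rate_hz
    return crossings / duration_s

# Two taps in one second of 10 Hz samples -> 2.0 Hz.
sig = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0]
print(tap_frequency(sig, 0.5, 10))  # 2.0
```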
DISCUSSION
The TAP hand offers amputees control of finger flexion using natural motor pathways. Most subjects, including those with relatively short and scarred residua, quickly gained control over several mechanical fingers. Slow typing and piano playing were demonstrated. Beyond providing dexterity, the TAP controller may facilitate the transition to more complete hand restorations.
Both grasping and sequential finger tapping were accomplished. When prompted to grasp an imaginary object at increasing levels of force, signal energy increased in approximate proportion to force perception and volition. Traces typical of the 3 amputee subjects tested are shown in Figure 3.

Figure 3: Proportional Control. Subjects were prompted to grasp an imaginary object at increasing levels of force, subjectively determined. Signal energy increased in approximate proportion to force perception from low (top) to high (bottom).

Acknowledgements

This work is being supported by an STTR grant from the NIH to Nian-Crae, Inc.
References
1. Atkins DJ, Heard DCY and Donovan WH: Epidemiologic overview of individuals with upper-limb loss and their reported research priorities. Journal of Prosthetics and Orthotics, 8(1): 2-11, 1996.
2. Graupe D and Cline WK: Functional separation of EMG signals via ARMA identification methods for prosthesis control purposes. IEEE Transactions on Systems, Man, and Cybernetics, SMC-5(2): 252-259, 1975.
3. O'Neill PA, Morin EL and Scott RN: Myoelectric signal characteristics from muscles in residual upper limbs. IEEE Transactions on Rehabilitation Engineering, 2(4): 266-270, 1994.
4. Hudgins B, Parker P and Scott RN: A new strategy for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering, 40(1): 82, 1993.
5. Kyberd PJ: The application of microprocessors to prosthetics. Proceedings of 9th World Congress of the International Society for Prosthetics and Orthotics, p. 50, Amsterdam, The Netherlands, June 28 - July 3, 1998.
6. Abboudi RL, Glass CA, Newby NA and Craelius W: A biomimetic controller for a multi-finger prosthesis. In press, IEEE Transactions on Rehabilitation Engineering, June 1998.
Author address and contact information:
Dr. William Craelius
Orthotic and Prosthetic Laboratory
P.O. Box 909
Rutgers University
Piscataway, NJ 08854
[email protected]
Voice: 732-445-2369
FAX: 732-445-3753
TECHNOLOGICAL AIDS FOR THE TREATMENT OF THE
TREMOR
C. A. Avizzano, M. Bergamasco
PERCRO
Simultaneous Presence, Telepresence and Virtual Presence
Scuola Superiore S. Anna,
Via Carducci 40, I-56127 PISA, Italy
Abstract
In this paper we present a cluster of Technological Aids for the analysis and treatment of a particular kind of tremor caused by multiple sclerosis. The developed technological aids are:
• a system, provided with sensors, capable of monitoring all the movements of the upper trunk and of the right arm of a patient;
• a Joystick System capable of interfacing users with a common operating system while filtering out the component of the input caused by tremor;
• a Haptic Interface capable of mechanically damping the effects caused by tremor.
Innovative approaches have been followed for the monitoring of upper limb and head movements, the filtering interface and the design of the haptic interface.
Keywords: Technological Aids, Advanced & Intelligent Interfaces, Haptic Interfaces
1. Introduction
“Tremor is a rhythmic uncontrollable oscillation that appears superimposed on voluntary movements. Approximately 0.4% of the population in the U.S. is affected by some kind of pathological tremor” [1].

Figure 1: A Haptic Device for writing

This article deals with the tremor induced by Multiple Sclerosis (MS). This particular kind of intention tremor presents frequencies in the 2-6 Hz range and amplitudes that vary from a few to several centimetres, depending on the limb as well as on the Disability Status Scale (DS) [2].
Intention tremor contaminates voluntary activity in a simple additive way [3]. Even though tremor is quite regular and constant, it is very compromising in everyday-life activities.
The need for Technological Aids (TA), i.e. systems capable of helping persons accomplish particular tasks, is strongly felt among patients who suffer from tremor.
Hsu, in [4], addressed the problem of creating an assistive mechanical interface (a special pen) for handwriting and a particular mouse interface for working with computers. Some filtering systems for Parkinsonian tremor have been developed by Riviere [5], who addressed the possibility of generating an auto-adaptive system for tremor identification and suppression. In [1] Kenneth presented an FIR system which worked with MS-DOS systems.
Haptic technologies have also been proposed as a possible aid in the treatment of some motor and cognitive disabilities. Comprehensive research on this topic has been carried out by Avizzano and Bergamasco in [15]. Moreover, similar technologies have been proposed for aiding disabled patients in performing everyday-life actions, as Whittaker and Tejima did in helping disabled persons eat [11][12] without the assistance of another person.
Several authors [6,7,8,9] have presented orthosis-like interfaces for the reduction of tremor. Among the presented systems are passive orthoses as well as dynamically controlled dissipative mechanical systems which generate dissipating signals proportional to the tremor intensity.
At present, very few Technological Aids have been purposely developed for MS-induced tremor. This is essentially due to the behaviour of this kind of tremor: its low-band frequencies, combined with its large amplitudes, make the realization of TAs for these patients hard. In this case, in fact, the data characteristics of the tremor are very similar to the data generated by normal movements.
In this paper we discuss the premises and the design of a cluster of Technological Aids for the analysis and treatment of MS-induced tremor. These systems take into account the doctors' need to access complete and accurate data on the tremor, as well as the users' need for an interface that lets them correctly operate the instruments of everyday life.
The presented cluster is made up of three different systems:
• a Sensitive Corset, capable of monitoring all the movements of the upper trunk and of the right arm of a patient. This device can be used by therapists for precisely monitoring the tremor activities of patients;
• a Joystick Unit, capable of interfacing users with a common OS and, at the same time, of filtering the tremor-induced component out of the effective position of the cursor on the screen. It is a 2-DOF input interface, capable of damping the vibrations induced by tremor and of extracting the voluntary movement characteristics. This device can be used by both children and adults as an interface allowing them to successfully interact with the most common computer applications;
• a Haptic Interface (HI), capable of generating force feedback to patients and of mechanically damping the effects caused by tremor.
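The Joystick Unit's separation of voluntary movement from tremor can be illustrated with a simple low-pass FIR stage. This is a toy sketch under the assumption that voluntary cursor motion lies below the 2-6 Hz tremor band; it is not the project's actual filter, which the paper describes only qualitatively.

```python
# Illustrative moving-average FIR low-pass: a slowly varying
# (voluntary) component passes through, while a fast alternating
# (tremor-like) component is attenuated. The window length is invented;
# a real design would use a proper windowed-sinc or adaptive filter.

def moving_average(samples, window):
    """Causal moving average; shorter windows are used while warming up."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A constant (voluntary) offset passes through unchanged...
print(moving_average([1.0] * 8, 4))              # all 1.0
# ...while an alternating (tremor-like) component is cancelled.
print(moving_average([1, -1, 1, -1, 1, -1], 2))  # zeros after warm-up
```

A longer window attenuates the tremor band more strongly but adds lag to the cursor response, a trade-off any such filter must manage.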
2. Addresses of the systems
Tremor has never been considered as a
disease. It is rather considered as a
diagnostic sign of various diseases such
as Multiple Sclerosis, Ataxic tremor,
Parkinsonian tremor.
TREMOR is an European Community
project for the development of these
Technological Aids. It aims to focus the
interest of scientific community on this
impairment and to realise valid
instruments for the analysis of the tremor
characteristics and the treatment of this
impairment. At the same time his scope
is to give to the disable people the
possibility of operating systems and
tools in the presence of tremor.
TREMOR has been conceived for the
development and the validation of
technological aids for patients affected
by cerebellar tremor. This kind of
tremor, characteristic of patients affected
by Multiple Sclerosis (MS), generates
severe disabilities in the patient’s
everyday life activities.
Even if the results of TREMOR could
also be used for other pathologies, the
system components have been explicitly
developed for Multiple Sclerosis.
The project aims at developing the three
technological aids presented in this
article.
3. The Sensorized System
The Sensorized System (SS) is a device
developed for medical analysis of the
tremor characteristics. It should be used
by patients assisted by doctors for the
monitoring and the analysis of the
tremor influence over the user
movements.
The primary application for the SS will
be the analysis of the results given by
different therapies and the production of
virtual therapy based methods.
The SS presents itself as an exoskeleton-like
passive structure which can be worn
by the patient and adapted to his or her
size.
Doctors can access the system's
capabilities by recording the data
collected while the user performs one of
a set of predefined tests.
Figure 2: The SS Concept.
Using the SS, doctors can monitor the
patients' tremor by means of a set of
numerical data. In this way they can
monitor the results of specific drug
treatments as well as the reactions of the
patients' bodies during the day or while
performing particular activities.
The system consists of the following
components:
• a passive structure, equipped with
sensors, capable of measuring 10
degrees of freedom (DOF) of the human
body and possessing a workspace large
enough to allow users great mobility.
These DOFs are distributed over the
user's body as follows: shoulder,
3 DOFs; elbow, 2 DOFs; wrist, 2 DOFs;
head, 3 DOFs. Thanks to this structure
the system can monitor the movements
of the neck and the right arm.
Even if the system is a mono-lateral,
asymmetric device, its structure is
adequate for a complete medical
analysis of the tremor impairment;
• a computer interface for the jacket. It is
an electronic unit which interfaces the
jacket sensors with the host computer
system;
• a data-collection unit, which is
connected to the computer system
interface and provides data-storage
features to the rest of the system;
• a video display for replicating the test
paths. This system integrates a table with
a large flat colour LCD screen and
software for drawing the executed test
path on the screen;
• a data-analysis software program,
which allows the therapist to extract
meaningful indexes from the recorded
trajectories;
• a virtual-therapist program. It is an
animated 3D software which interacts
with the patient and shows him the
exercises to be done.
The data-analysis software program and
the Virtual Therapist program are the
two main points of access to the
Sensorized System. An analysis of the
acquired data can be performed each
time a user executes one of a set of given
test exercises [16]. The results of the
data analysis yield an evaluation of the
subject's performance.
All the tests are controlled by the
“Virtual Therapist” application (VT).
The VT defines 9 different tests which
can be executed and evaluated
separately: place finger on nose, move a
cup, trace a square on the display
system, trace a circle and so on. During
the execution of the tests the VT will
record and analyse a set of data in order
to figure out an estimation of the quality
of the movements. Possible recordings
are the whole trajectory made by the
user, the time needed for test execution,
the average frequency of the hand
tremor, the average frequencies of the
wrist and elbow tremor, and the neck
movements.
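The average tremor-frequency recordings listed above can be computed from a sampled trajectory by locating the spectral peak inside a plausible tremor band. A minimal sketch (the 2-10 Hz band, the 100 Hz sampling rate and the synthetic trajectory are illustrative assumptions, not values from the project):

```python
import numpy as np

def dominant_tremor_frequency(positions, fs, band=(2.0, 10.0)):
    """Estimate the dominant tremor frequency (Hz) of a sampled 1-D
    trajectory by locating the spectral peak inside `band`.
    `fs` is the sampling rate in Hz."""
    x = np.asarray(positions, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Synthetic check: 0.5 Hz voluntary drift plus a 5 Hz tremor component.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
trajectory = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 5.0 * t)
print(round(dominant_tremor_frequency(trajectory, fs), 1))  # → 5.0
```

The slow component is excluded by the band limits, so the estimate picks out the tremor peak even when the voluntary movement is much larger in amplitude.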
4. The Joystick System
The Joystick System (JS) is essentially
an input device for computer systems
with the following aims:
• to replace the mouse as a screen-pointing
device for the computer;
• to support operation by users affected
by tremor;
• to be completely transparent to the host
system and the user.
The JS has been conceived to be used
without the continuous assistance of a
therapist, who would otherwise have to
help the patient manoeuvre the interface.
Figure 3: The JS Concept
It is simple to use, safe for the patient
and capable of filtering out the tremor
component that the patient's condition
superimposes on the movement of the
interface.
Using the JS, the user is able to operate
most of the programs available on the
computer, such as Internet browsers,
control interfaces, modem programs and
so on. The potential offered by such a
type of interface is enormous. With the
help of the JS, patients can access the
computer world, which allows them not
only to access and communicate with
"Internet-People" but also to use the
available software in order to recover
lost capabilities. For the sake of clarity,
let us present an example: a simple
modem program allows, among other
things, an ordinary user to operate the
modem just as if it were a telephone, by
automatically dialling the desired phone
number and by reproducing the audio via
the plugged-in audio board or the modem
speaker. This simple utility, which has
only marginal value for a common user,
proves very useful for TREMOR
patients. In fact the use of a telephone,
which is de facto very difficult for many
of them, is automatically provided by the
Joystick's capabilities.
Different prototypes of joystick systems
have been developed, each based on a
different data-acquisition technology.
This choice was suggested by the nature
of the tremor. In fact, tremor can act
differently on distinct patients: some
patients cannot use their hands but can
use a foot or the head, while others
cannot use their feet owing to tremor-induced
weakness, and so on.
We have joystick interfaces that can be
driven by foot (pedals), by the hand
position on a plane (joysticks and mice),
by the hand force (force sticks), by the
hand position in space (glove interfaces)
and also by the head (helmets).
A single user program and driver set
gives access to the whole set of
interfaces. The program controls the
filtering process and forwards the
resulting signal in the appropriate way to
the Operating System (OS). This
program is the core of the system. It
includes support for the different
interfaces, a rough real-time kernel, a
filtering module and a user interface for
controlling the pointer actions. The
whole program works on-line in a
completely transparent manner.
The filtering module of the Joystick
System has been realised entirely
without the help of device-dependent
hardware. As far as the filtering process
is concerned, an appropriate joystick
driver reads the interface output and
separates the tremor component from the
voluntary movements, by means of a
filtering unit built into the joystick
device driver. Most of the driver's
filtering parameters are configurable, in
order to allow the best possible
adaptation to the particular user.
The device driver incorporates
configurable options (movement
strategies) to modify the movement
policy for the pointer and to support the
parallel use of different interfaces.
Finally, the device driver incorporates a
set of strategies (button strategies) for
the interpretation of button presses
(click and double-click control).
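One standard way to realise such a filtering unit is a low-pass stage whose cutoff lies below the tremor band, so that slow voluntary motion passes while the oscillation is attenuated. The sketch below is only an illustration of that idea (the class name, cutoff and sampling rate are assumptions; the paper's actual filter is non-linear and more elaborate):

```python
import math

class TremorLowPass:
    """First-order low-pass (exponential smoothing) applied to each
    pointer axis: passes slow voluntary motion, attenuates tremor."""

    def __init__(self, cutoff_hz, sample_rate_hz):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # RC time constant
        dt = 1.0 / sample_rate_hz
        self.alpha = dt / (rc + dt)              # smoothing coefficient
        self.state = None

    def step(self, sample):
        """Feed one raw (x, y, ...) sample; return the smoothed sample."""
        if self.state is None:
            self.state = list(sample)
        else:
            self.state = [s + self.alpha * (x - s)
                          for s, x in zip(self.state, sample)]
        return tuple(self.state)

# Usage: feed raw joystick samples; the smoothed output drives the cursor.
f = TremorLowPass(cutoff_hz=1.5, sample_rate_hz=100)
for raw in [(0.0, 0.0), (0.3, -0.1), (0.25, -0.05)]:
    smoothed = f.step(raw)
```

A 1.5 Hz cutoff attenuates a 5 Hz tremor to roughly a third of its amplitude while leaving sub-hertz voluntary motion almost untouched; a real driver would also have to compensate the lag such a filter introduces.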
5. The Haptic Interface
Haptic interfaces are mechanical systems
that operate in direct contact with
humans. They have been developed for
Virtual Reality and teleoperated systems
[14] as a natural and complete interface
for achieving immersive feedback.
In TREMOR the Haptic Interface will
be employed in the context of movement
recovery. The main goal of the TREMOR
HI is to recover the user's dexterity by
mechanically damping the tremor
behaviour.
Like the joystick interface, this is an
instrument developed for the patients.
The Haptic Interface is a mechanical
system capable of working in a cubic
volume 0.3 m on a side. It has been
conceived for small force magnitudes
but with high force and position
resolution. The force bandwidth of the
system, about 100 Hz, is more than 20
times the tremor frequency.
Figure 5: The HI Concept
The main user of the Haptic Interface is
the patient. However, during the
development phase doctors have been
considered complementary users of the
system. This is motivated by the
observation that at start-up, and
whenever the system is set up, the main
user will need the help of a doctor to
tune the interface.
Even if the Haptic Interface has been
designed as a general-purpose tool,
some reference tasks have been taken
into account during the interface design:
• writing by hand;
• working with a screwdriver or other
types of tools;
• using spoons, knives or forks for
eating.
These tasks have been considered as
case studies in the design of the HI.
Their properties have been taken into
consideration in determining the system
specifications.
The Haptic Interface consists of three
components: an electromechanical
system, an electronic unit and a software
module.
From a technical point of view the HI
must be able to perform the following
operations:
• interacting with the user while
allowing the most comfortable and
stable precision grasp possible;
• leaving the user the capability of
generating both confirming actions and
three-dimensional moving actions;
• correctly reading the user's
movements;
• analysing the user's movements and
dividing them into voluntary and
involuntary actions;
• applying the correct force patterns in
order to compensate vibrations and
stabilise the movement.
6. Preliminary Results
At present, two different types of
validation procedures have been
performed for the systems:
• an evaluation phase of the filtering
algorithms with MS patients;
• a test of the technical results achieved
with the Joystick System.
The characteristics of the tremor and the
capabilities of the filtering algorithms
have been tested in a set of writing
experiments conducted with real
patients. Special equipment was realised
for recording the user's pen movements
during writing. The experiments were
recorded at clinical centres, while the
data analysis was performed off-line.
The data were collected and filtered by
means of software developed at
PERCRO, and the recorded data were
processed in order to recover clean
writing.
Figure 5: Filtering result for a case of
severe tremor.
A sample graphical result of this test is
reported in figure 5. The lower part of
the figure shows the typical shape of the
data acquired in a case of severe tremor.
The upper part outlines the typical result
that can be achieved by applying the
filtering algorithms to the data acquired
with the interface.
In all the filtering cases we verified
improvements in the readability of the
outputs. Moreover, we verified that the
properties of the tremor estimated by the
filter, in terms of spectral diagram,
stationarity and mean amplitude, are
similar to those identified in the
scientific literature. This comparison
showed close agreement between the
estimated tremor and the real one and
demonstrated that tremor can be
satisfactorily filtered for MS patients.
During the tests made on the Joystick
System, we analysed the properties of
non-linear filters designed for acquiring
two-dimensional data. These data were
used to control the pointer on a computer
screen.
A special version of the Joystick System
allows the user to record the data read
from the different interfaces as well as
the filtered data produced by the joystick
filter.
Figure 6: One-dimensional filtering of
Joystick data.
Figure 6 reports a one-axis comparison
between the filtered and unfiltered
signals. The input data for the system
were not collected in clinical centres but
produced with a simulated tremor in the
development centre. The filtered signal
and the original one were produced
on-line by the joystick system. In the
figure, the upper trajectory represents
the output of the filter.
Different offsets have been added to the
data represented in the figure in order to
improve readability. The scale on the
X-axis is in seconds, and the Y-axis
represents the mouse positions with
reference to a frame placed in the middle
of the screen, taking values in {-1, 1}
close to the borders.
The oscillations present in the input
signals are cancelled by the joystick
algorithms.
A more detailed numerical analysis of
the collected data is now being
performed. The TREMOR technological
aids are now in a validation phase at four
European MS clinical centres. The
results of the validation phase, in terms
of usability, efficacy and comfort of the
systems, are expected by June 1999.
7. Conclusions
A new non-invasive system has been
conceived for the treatment of several
motor disabilities caused by tremor. It is
capable of tracking up to 10 different
DOFs of the human upper trunk and is
well suited for medical research and the
development of new rehabilitation
therapies.
A new system extending the classical
joystick capabilities has been set up in
order to let a wide number of disabled
people access computer systems and
interact with the most common
applications.
A new haptic tool for controlling the
execution of the patient's movements
during particular tasks, based on high
performance and force-feedback
interaction, has been designed.
In 1998, after the completion of the
systems and a test phase with healthy
subjects, a period of clinical
experimentation in the clinical centres
was planned for the first and the second
system.
The present work contributes to
updating the current state of the art in
medical technologies for the treatment
of tremor diseases through the
realisation of a set of systems that are
still under research in the robotics field.
8. Acknowledgements
This research was supported by the
European commission under the project
DE n.3216 TREMOR. The cooperation
of the TREMOR consortium partners is
gratefully acknowledged.
The authors wish to thank the whole
PERCRO team, who collaborated in the
development of the presented interfaces.
References
[1] Kenneth EB et al., Control and Signal
Processing Strategies for Tremor
Suppression, Independent Living Aids, 1996;
[2] Kurtzke JF, Rating Neurologic Impairment
in Multiple Sclerosis: An Expanded
Disability Status Scale (EDSS), Ann.
Neurol., 1983;
[3] Riviere CN, Thakor NV, Adaptive Human-Machine
Interface for Persons with Tremor,
Eng. Medicine Biology Conference, 1995;
[4] Hsu DS et al., Assistive Control in Using
Computer Devices for Those with
Pathological Tremor, Rehab R&D Progress
Report, 1996;
[5] Riviere CN, Thakor NV, Suppressing
Pathological Tremor during Dextrous
Teleoperation, Eng. Medicine Biology
Conference, 1995;
[6] Rosen MJ et al., Design of a Controlled
Energy-Dissipation Orthosis (CEDO) for
Functional Suppression of Intention
Tremors, Jour. Rehabil. Res. Dev., 1995;
[7] Elble RJ, Randall JE, Mechanistic
Components of Normal Hand Tremor,
Electroencephalography and Clinical
Neurophysiology, 1978;
[8] Stiles RN, Lightly Damped Hand
Oscillations: Acceleration-Related
Feedback and System Damping, Jour.
Neurophysiology 50, 1983;
[9] Aisen ML et al., The Effects of Mechanical
Damping Loads on Disabling Action
Tremor, Neurology 43, 1993;
[10] Satava R, VR Surgical Simulator, The
First Steps, Proc. VR Systems 1993, NY;
[11] Whittaker M, Handy1 Robotic Aid to
Eating: A Study in Social Impact, Proc.
RESNA Int., 1992;
[12] Tejima N, Evaluation of Rehabilitation
Robots for Eating, RO-MAN 1996, Tsukuba;
[13] Buche M et al., Analysis of Tremor:
Methodology and Clinical Perspectives.
Preliminary Results, Schweiz. Med.
Wochenschr., 1984;
[14] Burdea GC, "Force and Touch
Feedback for Virtual Reality", Wiley-Interscience,
1996;
[15] Bergamasco M, Avizzano CA, Virtual
Environment Technologies in Rehabilitation,
Proceedings of RO-MAN 1997;
[16] Ketelaer P, Feys P, "Report of the
Questionnaire Health Care Professionals",
Newcastle upon Tyne, 5th June 1997.
DESIGN OF ROBOTIC ORTHOSIS ASSISTING HUMAN MOTION
IN PRODUCTION ENGINEERING AND HUMAN CARE
Kiyoshi NAGAI Isao NAKANISHI Taizo KISHIDA
Ritsumeikan University
Abstract: The mechanical design of robotic orthoses capable of assisting human forearm
motion is discussed. Robotic orthoses should be carefully designed so that two
basic specifications are satisfied simultaneously: 1) human motion is assisted, and
2) the user is safe and anxiety-free. A design concept for the robotic orthoses is presented first. A prototype of a robotic orthosis for production engineering is then described. Another design of robotic orthoses for human care is also discussed. A power
assisting control scheme for robotic orthoses with a macro-micro structure is proposed and investigated using simulations.
Key words: Assistive device, Robotic orthosis, Power assisting control
1 Introduction
Several studies have been carried out
regarding mechanisms and control
schemes for power assisting robotic
mechanisms [1]-[4]. As for the design of
their mechanical structures, one important fundamental problem still remains:
how can we design mechanisms capable
of motion assistance while providing
users with a safer and more anxiety-free
environment? We think that the link
mechanisms and reliable safety mechanisms should be designed at the same
time. Based on this idea, our group
started the design of a robotic orthosis to
be attached to the upper limb [5], [6].
In this paper, we discuss designs of robotic orthoses as power assisting systems. First, a design concept for robotic
orthoses is studied, and a basic design
method satisfying the required motion
capability and mechanical safety is described. A prototype of a robotic orthosis
for desktop production engineering is
then presented. Another design of robotic orthoses for human care motion is
also dealt with. A power assisting control
scheme for robotic orthoses with a
macro-micro structure is proposed, and
the resulting power assisting motions are
investigated using simulations in order
to obtain proper mechanical properties
as the design parameters.
2 Robotic Orthosis Worn by Humans
2.1 Basic concept of mechanical design
Robotic orthoses worn by humans
should be designed carefully so that they
satisfy the following two basic requirements simultaneously:
- Capability of assisting human
motions
- Safety and no-anxiety
As for assisting human motion, we are
making efforts to realize the following
two functions [5]:
- Power Assist: Adds the required
power to the human's movement.
This function enables people to carry
heavier objects with less fatigue.
- Motion Guide: Moves the human
body to a desired position. This
function enables us to trace given
trajectories precisely.
As for the safety of the system, mechanical
methods must be installed first, because they are the most reliable compared to electrical or software methods.
Also, in order not to create any
additional worry for the user, we can use
the following keywords as design guides
for robotic orthoses:
- small
- light in weight
- easy to attach
- easy to detach during operation
- 270 ICORR ’99: International Conference on Rehabilitation Robotics, Stanford, CA
Figure 1 shows two sets of mechanisms,
A and S, where each point in the sets
represents a corresponding basic structure.
When the point P1 is an element of the
set A but not an element of the set S, the
basic structure represented by P1 satisfies
"assisting human motion" but does not
satisfy "safety and no-anxiety". We must
find the required basic structure, represented
by the point P0, directly, because it is difficult to change from one basic structure to
another, for example from P1 to P0.
Therefore, we must consider the factors
of assisting human motion and of safety
and no-anxiety simultaneously during
the design stage.
Fig. 1. Two sets of mechanisms satisfying the necessary requirements.
2.2 Basic structure and utilizing force
information
In this section, the basic structure of a
robotic orthosis and the use of force
information for power assisting control
are discussed.
As for the basic structure, adopting a
'wearable type' is a good idea because it
makes robotic orthoses easier to design.
A prototype of a robotic orthosis capable
of assisting human motion with mechanical
safety in mind is described as an example
in Section 3.
Another idea concerning the basic
structure is constructing a macro-micro
mechanism to cope with unexpected excessive
forces of the robotic orthosis on the user.
A related topic is discussed in Section 4.
The following part deals with utilizing
force information for power assisting
control. Figure 2 shows three cases of
connections between a human, a robotic
orthosis and an object. H, R and O stand
for 'Human', 'Robotic orthosis' and 'Object', respectively. Regardless of which
of the three cases in Fig. 2 applies, Eq. (1)
represents the relationship between the forces:
F = F_H + F_R   (1)
Here, F_H and F_R denote the forces applied to the object by the human and the
robotic orthosis, respectively, and F is the
resultant force applied to the object.
Note that all forces are expressed in the
same coordinate frame.
To realize power assisting movement
by the robotic orthosis, these forces are
used in a control scheme [5].
(a) R-H-O (b) H-R-O (c) H-O-R
Fig. 2. Three connections between human, robotic orthosis and an object.
3 Robotic Orthosis in Production Engineering
In this section, an outline of a prototype
of robotic orthoses in production engineering [5] is described.
Here, our concrete target is a person
sitting in a chair and working with
his/her upper limbs. Figures 3, 4 and 5
show the structure and appearance of the
mechanism. This robotic orthosis with
eight DOF is designed to assist the human
forearm motion and ensure user safety.
It is capable of moving the human forearm and
hand to an arbitrary position and orientation. Mechanical stoppers, mechanical
breakers and a mechanical interface were
installed to ensure user safety mechanically. The mechanical stoppers are installed in the properly designed link
mechanism to avoid any configuration of
the mechanism that could injure the
body. The mechanical breakers are installed to avoid any excessive force applied to the elbow toward the shoulder.
The mechanical interface is installed so
that the mechanism can be detached
from the human during operation.
The fundamental requirements on the
robotic orthosis are: 1) to assist workers
when moving the aged or disabled, and
2) to ensure their safety without causing
any anxiety to the user.
Fig. 3. A robotic orthosis for one of the
upper limbs.
Fig. 4. Structure of the robotic orthosis.
Fig. 5. Photo of the robotic orthosis.
4 Robotic Orthosis regarding Human
Care
4.1 Adopting macro-micro structure
In this section, the basic structure of
robotic orthoses assisting human care
and its design procedures are discussed.
Here, we propose the adoption of a
macro-micro structure for the robotic
orthosis, because it enables us to decrease the inertia of the mechanism at
the point of attachment. In particular,
adopting a passive micro part without
actuators is very effective. Its small inertia can contribute to avoiding excessive
dynamic forces during unexpected
motions and to improving the feeling of
the user.
If we adopt the macro-micro structure
with a passive micro part, we must also
determine its mechanical properties very
carefully so as to utilize the small motion
range of the micro part effectively. The
robotic orthosis should then be designed
in the following way:
1) Determination of the required
maximum force: We have to determine it
at the endpoint of the robotic orthosis
according to the target care motions. For
example, if the target care motion is
lifting up a disabled person with a mass
of 100 kg, this allows us to estimate the
required maximum joint torque and the
mass of the macro part.
2) Design of a control scheme satisfying the required functions: We also have
to determine the desired properties of the
robotic orthosis under a control scheme.
This allows us to find the desired mechanical properties of the micro part.
Since the mass property at the endpoint
is dominated by that of the micro part,
the micro part should have mass properties similar to the desired ones.
3) Determination of the design parameters, such as the damping factors:
Using simulations under power assisting
control might be a reasonable way to
deal with the complex dynamics of the
robotic orthosis.
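Step 1) above is simple statics: holding the load at the endpoint in the worst-case (horizontal) pose bounds the joint torque. A tiny sketch under that assumption (the single-link geometry and the arm length are illustrative, not values from the paper's design):

```python
def required_joint_torque(load_mass_kg, arm_length_m, g=9.81):
    """Worst-case static torque at a single joint when the load is held
    at the end of a horizontal link of length `arm_length_m` [N·m]."""
    return load_mass_kg * g * arm_length_m

# E.g. the 100 kg lifting task with an (assumed) 0.3 m effective arm:
print(round(required_joint_torque(100.0, 0.3), 1))  # → 294.3
```

The actual requirement would distribute this bound over the joints of the real linkage, but the same static balance gives the order of magnitude of the macro-part actuators.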
4.2 Power assisting control scheme
In this section, a power assisting
control scheme for the robotic orthosis
with a macro-micro structure is proposed,
based on impedance control with a
motion transfer function that changes the
desired position. The impedance control
with this motion transfer function provides power assisting motions.
When a control scheme based on the
above idea is adopted, the change in
the desired position and the displacement produced by the impedance control
often appear in opposite directions. However,
we can avoid this problem when the gain
of the motion transfer function is adjusted to be small in the high-frequency
domain that includes the natural frequency of the system under impedance control.
The proposed control scheme is
applied to the model with one degree of
freedom shown in Fig. 6. The dynamics
of the macro part and the micro part are
represented as follows:

M_M \ddot{q}_M + g_M + J_M^T F_M = T_M - V_M \dot{q}_M   (2)
M_m \ddot{r} + g_m + F_R = F_M   (3)
F_M = -D_m \dot{r} - K_m (r_m - r_{m0})   (4)

where M_M, M_m, g_M and g_m are the inertia matrices and the gravity forces of
the macro and micro parts; V_M, \ddot{q}_M, \dot{q}_M and T_M are the viscous
friction coefficient matrix, the joint acceleration, the joint velocity and the
joint torque of the macro part; J_M, F_R and F_M are the Jacobian matrix, the
endpoint force of the robotic orthosis and the force applied by the macro part
to the micro part; r is the position of the endpoint of the micro part, r_m the
length of the micro part, and r_{m0} its initial length; D_m and K_m are the
damping and stiffness matrices of the micro part.
Before deriving the control scheme, we should determine the desired properties
of the motion of the robotic orthosis. Here we introduce the desired mechanical
impedance:

M_d \ddot{r} + D_d \dot{r} + K_d r_e = F_{RE},   F_{RE} = -F_R   (5)

where M_d, D_d and K_d are the desired matrices of inertia, damping and
stiffness; r_e (= r - r_d) is the difference between r and the desired position
r_d; and F_{RE} is the external force applied to the robotic orthosis.
To derive the control scheme, \ddot{r} is eliminated using Eqs. (3) and (5),
with the desired inertia matrix chosen to keep the original inertia properties.
The resulting equation is substituted into Eq. (2) to eliminate F_M. We then
obtain the following control scheme, in which M_M \ddot{q}_M is neglected to
avoid using acceleration signals:

T_M = J_M^T (-D_d \dot{r} - K_d r_e + g_m) + V_M \dot{q}_M + g_M   (6)

To apply the above control scheme, we have to determine the desired position of
the endpoint of the micro part by detecting the desired motion of the human.
Here, we decided to use the following motion transfer function:

r_{di}(s) / F_{Hi}(s) = C_i / (s (T_i s + 1))   (7)

where F_H is the force of the human, T the time constant, and C the gain of the
desired velocity when F_H is constant. Power assist motion can then be realized
using Eq. (6) together with Eq. (7).
Fig. 6. A model of robotic orthosis with
a macro-micro structure.
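In discrete time, the motion transfer function of Eq. (7) is a first-order lag on the desired velocity followed by an integrator, and the control law of Eq. (6) is a direct formula. A minimal scalar (1-DOF) sketch of our reading of the two equations (variable names are illustrative; this is not the authors' code):

```python
def motion_transfer_step(v, r_d, F_H, C, T, dt):
    """Discretised Eq. (7), r_d(s)/F_H(s) = C / (s (T s + 1)):
    a first-order lag producing the desired velocity `v`,
    integrated into the desired position `r_d`."""
    v += (dt / T) * (C * F_H - v)   # lag: T v' + v = C F_H
    r_d += dt * v                   # integrator: r_d' = v
    return v, r_d

def control_torque(J, r, r_dot, r_d, q_dot, D_d, K_d, g_m, V_M, g_M):
    """Eq. (6) in scalar form: impedance-style power-assist torque,
    with the desired inertia kept at its natural value."""
    F = -D_d * r_dot - K_d * (r - r_d) + g_m
    return J * F + V_M * q_dot + g_M
```

With a constant human force F_H, the lag state settles at C·F_H, so the desired position keeps advancing as long as the human keeps pushing; this integral action is what produces the assisting motion.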
4.3 Simulation
In this section, the proposed control
scheme is simulated. Lifting a mass of
20 kg is tested as a target task.
For carrying out the simulations, we assumed that the force of the human, F_H,
is produced in proportion to the difference between the desired position of the
human, r_Hd, and the position of the
human, r. The proportional gain is
K_H = 1000 [N/m]. Equation (2) is used
with M_M = 0.9 [kg m^2], V_M = 0
[Nms/rad] and J_M = L = 0.3 [m]. Equations
(3) and (4) are used with M_m = 0.5 [kg],
D_m = 1000 [Ns/m], K_m = 5000 [N/m] and
r_m0 = 0 [m]. Equation (6) is used as the
power assisting control scheme with
D_d = D_m and K_d = K_m. Equation (7) is
used with T = 0.25 [s] and
C = 0.001 [m/Ns].
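The parameter set above is enough to integrate the 1-DOF model numerically. The sketch below lumps the 20 kg load with the micro mass, omits the orthosis gravity terms, assumes the stated human force model F_H = K_H (r_Hd - r), and uses semi-implicit Euler integration, so it is an approximation of the paper's simulation rather than a reproduction of it:

```python
# Parameters taken from the simulation in Section 4.3.
M_M, V_M, L = 0.9, 0.0, 0.3            # macro inertia [kg m^2], friction, J_M = L [m]
M_m, D_m, K_m = 0.5, 1000.0, 5000.0    # micro mass [kg], damping, stiffness
D_d, K_d = D_m, K_m                    # desired impedance used in Eq. (6)
T_f, C = 0.25, 0.001                   # motion transfer function, Eq. (7)
K_H, m_o, g = 1000.0, 20.0, 9.81       # human stiffness, lifted mass, gravity

def r_Hd(t):
    """Human's desired endpoint position: hold, ramp up, hold."""
    if t < 5.0:
        return 0.0
    if t < 11.0:
        return 0.3 * (t - 5.0) / 6.0
    return 0.3

dt, t = 1e-3, 0.0
q = dq = 0.0          # macro joint angle and velocity
r = dr = 0.0          # endpoint (micro tip + load) position and velocity
v = rd = 0.0          # Eq. (7) lag state and desired position
while t < 20.0:
    F_H = K_H * (r_Hd(t) - r)                          # human force model
    v += (dt / T_f) * (C * F_H - v)                    # Eq. (7): lag ...
    rd += dt * v                                       # ... plus integrator
    F_M = -K_m * (r - L * q) - D_m * (dr - L * dq)     # Eq. (4), r_m = r - L q
    T_M = L * (-D_d * dr - K_d * (r - rd)) + V_M * dq  # Eq. (6), gravity omitted
    ddq = (T_M - V_M * dq - L * F_M) / M_M             # Eq. (2)
    ddr = (F_M + F_H - m_o * g) / (M_m + m_o)          # Eq. (3) + load dynamics
    dq += dt * ddq; q += dt * dq                       # semi-implicit Euler
    dr += dt * ddr; r += dt * dr
    t += dt

print(round(r, 3), round(F_H, 2))  # final position and residual human force
```

In this simplified model the endpoint settles at the commanded 0.3 m while the residual human force decays toward zero, reflecting the integral action of Eq. (7): the orthosis gradually takes over the load.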
The simulated results are shown in Fig.
7. The forces F, F_H and F_R are plotted
in Fig. 7(a); positive values indicate
forces directed upward. The positions
r_Hd, r_d and r are plotted in Fig. 7(b).
The positions r_m, r and r_M are plotted
in Fig. 7(c). The ratio of F_R to F is
referred to as the power assisting ratio
and is plotted in Fig. 7(d).
The user wears the robotic orthosis on
one upper limb, and the limb is assumed
to be fixed at the initial position before
t = 0. At t = 0, the upper limb is released
and a mass of 20 kg is placed on the
user's hand. The user tries to keep the
upper limb at the 0 m position when
0 ≤ t < 5, then tries to move the upper
limb upward to lift the mass when
5 ≤ t < 11, and finally tries to keep the
mass at a desired position when t ≥ 11.
The user is not required to produce a
large force, since the power assisting
ratio is kept at more than 0.77 when
5 ≤ t < 11. The change of the desired
position produced by the motion transfer
function and the displacement produced
by the impedance control appear in
opposite directions when 0 < t < 0.5;
however, the position of the user's upper
limb returns to the initial position,
because the gain of the motion transfer
function is adjusted to be small in the
high-frequency domain that includes the
natural frequency of the system under
the impedance control.
The above results illustrate that the
proposed control scheme can provide
power assisting motions using the
robotic orthosis.
5 Conclusion
The main results obtained in this paper
are summarized as follows.
1) A basic concept for the design of
robotic orthoses assisting human
motion is shown. This concept is
utilized to design mechanisms
providing the required motion and
mechanical safety simultaneously.
(a) Endpoint forces
(b) Desired positions and their responses
(c) Length of the micro part and
endpoint positions
(d) Power assisting ratio
Fig. 7. The simulated results of power
assisting motions.
2) A mechanical design of the robotic
orthosis for production engineering,
based on this concept, is described.
3) Adopting the macro-micro structure
is proposed for the robotic orthosis
regarding human care. A method to
determine the properties of the
passive micro part is investigated
using simulations.
The concepts and techniques are now
being utilized to design a robotic
orthosis for human care.
References
[1] H. Kazerooni, "Human-Robot Interaction
via the Transfer of Power and Information Signals", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 20, No. 2, pp. 450-463, 1990
[2] K. Kosuge et al., "Mechanical System
Control with Man-Machine-Environment Interactions", Proc. of the IEEE International Conference on Robotics and Automation, pp. 239-244, 1993
[3] K. Homma et al., "Design of an Upper
Limb Motion Assist System with Parallel
Mechanism", Proc. of the IEEE International
Conference on Robotics and Automation,
pp. 1302-1307, 1995
[4] Y. Hayashibara et al., "Development of
Power Assist System with Individual Compensation Ratios for Gravity and Dynamic Load",
Proc. of the IEEE/RSJ International Conference
on IROS, pp. 640-646, 1997
[5] K. Nagai et al., "Development of an 8 DOF
Robotic Orthosis for Assisting Human Upper
Limb Motion", Proc. of the IEEE International
Conference on Robotics and Automation,
pp. 3486-3491, 1998
[6] K. Nagai et al., "Mechanical Design of a
Robotic Orthosis Assisting Human Motion,"
Proc. of the 3rd Int'l Conference on Advanced
Mechatronics, pp. 436-441, 1998
Address:
Prof. Kiyoshi Nagai, Dr. Eng.
Dept. of Robotics, Ritsumeikan Univ.
Noji-higashi 1-1-1, Kusatsu,
Shiga 525-8577, JAPAN
Tel: +81-77-561-2750
Fax: +81-77-561-2665
E-mail: [email protected]
A SIMPLE ONE DEGREE-OF-FREEDOM FUNCTIONAL
ROBOTIC HAND ORTHOSIS
Mário F. M. Campos and Saulo A. de P. Pinto
Laboratório de Robótica, Visão Computacional e Percepção Ativa
Departamento de Ciência da Computação
Universidade Federal de Minas Gerais
Belo Horizonte, MG, Brazil
Abstract
Individuals who have suffered cervical spinal cord injury (SCI) usually lose the ability to manipulate objects in a reasonably efficient way. In order to perform simple tasks, they must resort to specially designed passive devices. This paper describes the design and implementation of a one degree-of-freedom functional hand orthosis. The main objective was to develop a simple, inexpensive, adaptable device that would help restore the precision grip capability of individuals with SCI. Several experiments of the grasp-and-release type were conducted with different objects, and preliminary results that show and quantify the improvement in an individual's gripping abilities are presented.
Introduction
The human hand is an impressive device that is essential to the interaction
with the physical world. Its importance
is evident in communication [1] and
cognitive processes. The ability to manipulate small objects is very important
in general, but is fundamental to the
activities of daily life (ADL). In the
school environment, for instance, the
hand can be seen in action in the manipulation of objects such as pens,
erasers and books. In order to manipulate small objects, the hand executes a
movement that is known as precision
grip [1]. This type of grip has an important role in the execution of several
ADL. One such precision grip, the bidigital grip, is very important and is present in about 20% of ADL [3]. Among the bidigital grips, the pinch grip, performed with the index finger and thumb, is the most frequently used.
Individuals with C5-C6, C6 and C6-C7 SCI are usually able to move and position their hands in free space and, in most cases, are also capable of controlling wrist movements such as extension and flexion. Unfortunately, such individuals lack the ability to efficiently and adequately grasp and release common objects. This inability is often one of the main reasons that hinder such individuals from undertaking professional, social and personal activities.
This work presents the design and implementation of a simple and inexpensive device that significantly improves the ability of an individual to perform bidigital grips between the thumb and index finger (pulp-to-pulp pinch). A
simple orthosis prototype was built in
order to assess potential functional
gain. Preliminary results compare favorably, in object manipulation tasks, to tenodesis alone (a type of synergy in which wrist extension causes flexion of the fingers, to grasp, and wrist flexion causes extension of the fingers, to release [4]).
Background
SCI individuals usually grasp and release objects using tenodesis. However, tenodesis alone is limited, since both the opening and closing of the fingers are passive and depend, among other factors, on the tension applied to the tendons and ligaments of the fingers [5]. Hand orthoses [6] and neuroprostheses [4, 7] are alternatives commonly used to (partially) restore the functionality of the hand.
Several promising devices designed to assist in the recovery of functionality of SCI individuals have been reported in the rehabilitation robotics literature [2, 8, 9, 10]. Nevertheless, considering the large number of devices that have been proposed, it is disappointing that only a few have proved effectively useful. According to Kumar et al. [2], this situation can be explained by the high costs involved in building sophisticated robotic contraptions, by awkward interfaces with the user, and by the social stigma of robots. We therefore designed a low-cost, functional and user-friendly orthosis, which can be aesthetically improved to be less conspicuous.
Methodology
The prototype of the orthosis is depicted in Figure 1. The structure was built from low-temperature thermoplastic [6], whose main advantages are low cost, light weight and shape adaptability. The structure is composed of three parts (links) connected by one actuated joint and one passive, instrumented joint. The last link keeps the thumb in a fixed position, so that the grip is closed solely by the movement of the index finger. Actuation of the joint corresponding to the metacarpophalangeal (MCP) joint of the index finger, and its consequent movement, is provided by a directly coupled DC servomotor. A potentiometer, located approximately on the flexion-extension axis of the wrist, reports the angular position to a microcontroller; this reading serves as the set point of the control system.
Figure 1: The prototype and some of its
components.
The device has only one artificially actuated degree of freedom, at the MCP joint, and two passive ones at the wrist joint. The latter two allow free movement of the wrist during flexion-extension while permitting limited freedom of movement of the radio-ulnar joint (arrow in Figure 2). Removing these constraints makes the orthosis more comfortable to use.
Figure 2: Potentiometer assembly. The arrow indicates a typical radio-ulnar trajectory.
Figure 3: Orthosis block diagram. The user's central nervous system sends nervous impulses to the wrist flexors/extensors; the position sensor at the wrist reports the MCP set point to the microcontroller, which sends motor commands to the actuator at the MCP joint. Hand sensors and vision return grip aperture and applied force information to the user, closing the loop.
User control of the orthosis
Control of the orthosis by the user is very simple and natural. Simplicity results from the fact that there is only one degree of freedom to control, and control feels natural because standard tenodesis movements can be used. This also greatly shortens the time needed to learn to control the device. A block diagram of the system and its interaction with the user is shown in Figure 3.
Gripping is executed by extension movements of the wrist. The central nervous system of the user sends impulses commanding the wrist to be extended and the fingers to be flexed. Wrist rotation is measured by the position sensor (potentiometer) and converted to an angular displacement (set point) by the control unit. A simple PD control algorithm receives the wrist position as input and sends control signals to the servo. The servo applies torque to the joint corresponding to the MCP joint of the index finger, which causes grip closure around the object. Prior to finger-object contact, the proprioceptors of the individual's hand, together with vision, are the main sensors used to control the grip aperture. In the meantime, the position sensor sends the angular position of the wrist joint to the automatic control system. After contact is made between the object and the hand, higher prehension forces can be achieved by moving the wrist further in the same direction. From that point on, force sensing is provided by the individual's hand tactile and force sensors.
Hence, the cutaneous-proprioceptive-visual-robotic loop is able to provide full control of the orthosis. This loop is extremely important to the acceptance of the orthosis by the user as well as to minimizing the time needed to learn to control the device. One of the reasons for this is the inclusion in the loop of the user's cutaneous and proprioceptive sensors: the feedback is provided by the user's own body, so the user can feel the object being manipulated. The counter-movement (opening the fingers) is executed in a similar way, but uses the wrist flexion movement.
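The control path just described (wrist angle, read by the potentiometer, becomes the set point for a PD law driving the MCP servo) can be sketched as follows. This is a minimal illustration, not the authors' firmware: the gains, sampling period, linear wrist-to-MCP mapping and the toy joint model are all assumptions.

```python
# Sketch of the orthosis control loop (all numeric values are assumed).

def wrist_to_setpoint(wrist_angle_deg, gain=2.0, offset=0.0):
    """Map the potentiometer's wrist angle to an MCP set point (assumed linear)."""
    return gain * wrist_angle_deg + offset

def pd_step(setpoint, position, prev_error, dt, kp=1.5, kd=0.1):
    """One PD step: motor command from position error and its rate of change."""
    error = setpoint - position
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative, error

def simulate(wrist_trajectory, dt=0.01):
    """Drive a crude first-order 'MCP joint' model with the PD command."""
    mcp = 0.0           # current MCP joint angle (deg)
    prev_error = 0.0
    history = []
    for wrist in wrist_trajectory:
        sp = wrist_to_setpoint(wrist)
        command, prev_error = pd_step(sp, mcp, prev_error, dt)
        mcp += command * dt   # toy joint dynamics: angle integrates the command
        history.append(mcp)
    return history

# Wrist extension (rising angle) should progressively close the grip.
trajectory = [min(0.5 * k, 20.0) for k in range(200)]
history = simulate(trajectory)
```

Tenodesis-style control falls out naturally: extending the wrist raises the set point and closes the grip; flexing it lowers the set point and opens the grip.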
Results
Initial observations of the benefits of
the orthosis suggest that it provides a
good gain in functionality, allowing
the user to execute important tasks
such as feeding and writing.
In order to quantify the functional gain, a
test of grasp-and-release was conducted.
The test is described as follows and further details can be found in [5, 11].
Grasp-and-release Test
This test is performed in sessions. Each session consists of testing each object 5 times with and without the orthosis. Subjects are requested to complete the maximum number of tasks within 30-second trials. The number of successful completions and failures is recorded for each trial. Objects are tested in random order to minimize systematic errors or successes due to fatigue or learning by the subject. A gap of 30 seconds was kept between trials.
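The session bookkeeping implied by this protocol can be sketched as below. The object names come from Table 1; the function names and data structures are illustrative assumptions, since the paper does not describe any scoring software.

```python
import random

# Sketch of the grasp-and-release session protocol described above.
# Data structures and function names are illustrative assumptions.

OBJECTS = ["peg", "block", "can", "videotape"]  # from Table 1
TRIALS_PER_OBJECT = 5
TRIAL_SECONDS = 30
REST_SECONDS = 30

def session_schedule(rng=random):
    """Randomize object order within a session to limit fatigue/learning bias."""
    order = OBJECTS * TRIALS_PER_OBJECT
    rng.shuffle(order)
    return order

def record_trial(scores, obj, completions, failures):
    """Accumulate successful completions and failures per object."""
    done, failed = scores.setdefault(obj, (0, 0))
    scores[obj] = (done + completions, failed + failures)
    return scores
```

Each of the four sessions would run one such schedule twice, once with and once without the orthosis, tallying completions and failures per trial.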
Object      Weight (N)   Size (cm)            Material
Peg         0.0196       0.71 (dia.) × 7.6    Wood
Block       0.0981       2.5 × 2.5 × 2.5      Wood
Can         2.207        6.5 (dia.) × 12.2    Aluminum
Videotape   3.286        3.0 × 12.3 × 22.5    Plastic
Table 1: Test objects.
Figure 4: Grasp-and-release tests. Successful
completions. Columns show scores for 5 tests
in 4 sessions.
One subject, with a C6-C7 level injury, participated in the tests. The subject has good control of the upper limbs. The orthosis was worn on his left hand (non-dominant), while the right hand (dominant) was used to perform the tests without the orthosis. This is justified by the fact that we are comparing the functional capability of a supposedly less dexterous hand wearing the orthosis (left) with that of the supposedly more dexterous, dominant hand without the orthosis (right).
Grasp-and-release Test Results
Figure 4 presents results for the four sessions of tests, conducted over four consecutive days. Performance with the orthosis was consistent and superior to that with tenodesis alone. The subject's learning curve can be observed in the grasping of the can, where performance using the device gradually became similar to tenodesis. The most remarkable differences are seen for the peg and the videotape; the latter could not be manipulated in any of the sessions without the orthosis.
The number of failures with the orthosis was quite small, as seen in Figure 5. This is mainly due to the learning process the subject went through as he tried to execute the maximum number of tasks within the time allotted for each trial. Indeed, in the first session no failures were observed with the orthosis. The difference is substantial for both the peg and the videotape; furthermore, for the latter not a single failure was registered.
Figure 5: Grasp-and-release tests. Failures are shown by session. Columns show the scores for the 5 tests in 4 sessions.
Conclusion
The design and implementation of a prototype orthosis was presented here. The main features of the device are its performance, low cost, ease of use and adaptability to other individuals. Preliminary results of the grasp-and-release test suggest that the orthosis provides a real functional gain for its user. Evidently, more tests are necessary with a reasonably sized population of subjects. Despite that, the subject who underwent the experiments was very satisfied with the performance and ease of use of the device, to the point that he was motivated to use it in his daily life. This is mainly due to the fact that the device feels comfortable, is easy to control, is easy to wear, provides firmness during manipulation and, most importantly, enables the execution of tasks that otherwise could not be performed (like the videotape manipulation during the tests). Some problems need to be addressed in order to make the orthosis more acceptable to the user; the main one is to move the servomotor from the hand to a more proximal position on the forearm. This can be accomplished by simple modifications to the current design.
Acknowledgements
The authors wish to thank Priscila de Paula Pinto, Raquel A. de F. Mini and Lúcio de S. Coelho for their invaluable help with the experiments and with data acquisition and processing. This work was partially funded by CAPES, CNPq 522618-96.0 and FAPEMIG TEC 609/96.
References
[1] Napier J. R., Hands, George Allen & Unwin, London, England, 1980.
[2] Kumar V., Rahman T., Krovi V., "Assistive Devices for People with Motor Disabilities", to appear in Wiley Encyclopedia of Electrical and Electronics Engineering, 1997.
[3] Magee, D., Orthopedic Physical Assessment, 3rd edition, W. B. Saunders, 1997.
[4] Smith, B. T., Mulcahey, M. J., Betz, R. R., "Quantitative Comparison of Grasp and Release Abilities with and without Functional Neuromuscular Stimulation in Adolescents with Tetraplegia", Paraplegia, vol. 34, pages 16-23, 1996.
[5] Harvey L., "Principles of Conservative Management for a Non-orthotic Tenodesis Grip in Tetraplegics", Journal of Hand Therapy, no. 9, pages 238-242, 1996.
[6] Linden C. A., Trombly, C. A., "Orthoses: Kinds and Purposes", in Occupational Therapy for Physical Dysfunction, C. A. Trombly (ed.), 4th edition, Williams & Wilkins, 1995.
[7] Peckham P. H., Keith M. W., Freehafer A. A., "Restoration of Functional Control by Electrical Stimulation in the Upper Extremity of the Quadriplegic Patient", Journal of Bone and Joint Surgery, vol. 70A, no. 1, pages 441-447, 1988.
[8] Harwin W., "Theoretical Considerations for the Design of Simple Teleoperators and Powered Orthoses", Proceedings of the 5th International Conference on Rehabilitation Robotics, Bath, UK, 1997.
[9] Nagai K., Nakanishi I., Hanafusa H., Kawamura S., Makikawa M., Tejima N., "Development of an 8 DOF Robotic Orthosis for Assisting Human Upper Limb Motion", Proceedings of the 1998 IEEE International Conference on Robotics & Automation, Leuven, Belgium, May 1998.
[10] Kyberd, P. J., Chappell, P. H., "Prehensile Control of a Hand Prosthesis by a Microcontroller", Journal of Biomedical Engineering, vol. 13:9, 1991.
[11] Stroh Wuolle K. S., Van Doren C. L., Thrope G. B., Keith M. W., Peckham P. H., "Development of a Quantitative Hand Grasp and Release Test for Patients with Tetraplegia Using a Hand Neuroprosthesis", Journal of Hand Surgery, vol. 19A:2, pages 209-218, 1994.
Contact Address
Prof. Mário F. M. Campos
DCC – ICEx - UFMG
Av. Antônio Carlos 6627, Pampulha
31270-010 Belo Horizonte, MG
Brazil
Email: [email protected]
ANALYSIS AND CONTROL OF HUMAN LOCOMOTION USING
NEWTONIAN MODELING AND NASA ROBOTICS
J. R. Weiss, V. R. Edgerton1, A. K. Bejczy, B. H. Dobkin1, A. Garfinkel1, S. J.
Harkema1, G. W. Lilienthal, S. P. McGuan2, B. M. Jau
MS 183-335, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak
Grove Drive, Pasadena, California 91109
Combining NASA technology, university insight and industry know-how, NASA's Jet Propulsion Laboratory (JPL), the UCLA Brain Research Institute and Mechanical Dynamics Inc. (MDI) have developed an approach for enhancing strategies for rehabilitation of individuals with spinal cord injury (SCI). This
approach utilizes robotics developed for
manned space exploration, mathematical
modeling used for commercial product
testing and human research on spinal cord
injuries. This collaboration resulted from
conversations between JPL and UCLA on
how the two could work together on the
application of NASA technologies to
neural repair and rehabilitation problems
resulting from traumatic brain and spinal
cord injury.
We know that a complete spinal cord injury severs the information flow between the brain and the neural networks below the level of injury. For example, paraplegics injured at a lower thoracic level of the spine lose control of their legs.
Through research efforts there is now clear
evidence that the efficacy of the remaining
neural networks in the lumbosacral, or
lower spinal cord, can be enhanced by
specific locomotor training. These
experiments demonstrate that the lumbar
spinal cord, even without input from the
brain, learns the specific motor tasks
that are practiced. For example, the
spinal cord can learn to step under full
weight-bearing conditions over a range
of speeds and to stand. Further, if the
spinal cord is not allowed to continue to
practice the motor task it will forget
how to perform it. This learning phenomenon is associated with significant changes in the biochemistry of the spinal cord, in the form of both excitatory and inhibitory neurotransmitters as well as the receptors that respond to these transmitters. In a sense, these findings suggest that a significant degree of functional neural regeneration might be directed intrinsically by the neural networks and their supportive cells.
Work on the rehabilitation of stepping skills performed at UCLA resulted in an approach called Body Weight Supported Training (BWST). This approach, although successful, is very labor-intensive and thus unavailable to most persons who could benefit from its ability to get them out of their wheelchairs. BWST requires that physical therapists move the lower extremities of the person while they are suspended over a moving treadmill. The therapists move the legs as required by the speed of the treadmill, exerting pressure in all directions to maintain as normal a walking motion as physically possible.
This method, although very successful, has two shortcomings: it is difficult to quantify the amount and direction of the pressure applied by the therapists, and it is equally difficult to measure the degree of improvement shown by the patient from treatment to treatment. To solve these two problems JPL proposed the use of a robotic exoskeleton to replace the therapists and a mathematical model to perform the motion analysis and control of the exoskeleton.
The exoskeleton technology, originally developed to assist astronauts in the manipulation of devices in space, was broken down into its basic technologies, enhanced for this application and retooled for prototype testing in the UCLA Neurorehabilitation Research Lab. The resultant technology consists of microdevices for measuring force and acceleration over six degrees of freedom, i.e., including positive and negative rotations about all three axes. These devices, placed at the major joints, can detect even the most subtle abnormal movement in the patient's stepping and, coupled with recording capabilities, provide the necessary data for complete analysis. Once prototyping has been completed, the exoskeleton technology can be integrated into a body suit providing all the data required to analyze the motion of walking.
UCLA, MDI, and JPL have begun implementing the computer simulation needed to analyze and predict human motion. The model being used was originally designed by MDI and augmented by JPL. It currently implements all necessary joints (hip, knee, and ankle) of the lower limbs, incorporates classical Newtonian mechanics with six degrees of freedom, and is completely dynamic. This modeling of lower-limb stepping is already providing new insights for efforts to develop effective rehabilitation strategies to improve mobility in spinal cord injured subjects, as well as new countermeasures to protect astronauts during long-term exposure to microgravity. The completed model will provide a research and therapeutic tool capable of:
a) calculating the force levels
necessary at each joint to effect
successful locomotion;
b) pinpointing which weak
components of the step cycle need
augmentation, and by how much;
c) simulating both normal and
impaired locomotor strategies; and
d) devising and assessing
alternative locomotor strategies that
place fewer and/or less stressful
demands on muscle force output.
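As a minimal illustration of capability (a), inverse dynamics for a single rigid segment (a crude stand-in for the shank swinging about the knee) yields the torque required to follow a desired trajectory. The segment parameters and trajectory below are assumptions, far simpler than the full six-degree-of-freedom MDI model.

```python
import math

# Newton-Euler torque for one rigid segment about its joint: a toy version
# of item (a) above. Mass, length and trajectory are illustrative assumptions.

M = 3.5                  # segment mass (kg), assumed
L = 0.40                 # segment length (m), assumed
I = M * L * L / 3.0      # moment of inertia of a uniform rod about one end
G = 9.81                 # gravitational acceleration (m/s^2)

def required_torque(theta, theta_ddot, g=G):
    """Joint torque = inertial term + gravity term.

    theta is measured from the vertical (hanging) position, so gravity
    acting at the segment midpoint contributes M*g*(L/2)*sin(theta).
    """
    return I * theta_ddot + M * g * (L / 2.0) * math.sin(theta)

def torque_profile(amplitude, freq_hz, n=100):
    """Torques for a sinusoidal swing theta(t) = A*sin(2*pi*f*t) over one second."""
    out = []
    w = 2.0 * math.pi * freq_hz
    for k in range(n):
        t = k / n
        theta = amplitude * math.sin(w * t)
        theta_ddot = -amplitude * w * w * math.sin(w * t)
        out.append(required_torque(theta, theta_ddot))
    return out
```

Subtracting such model-predicted torques from measured exoskeleton forces is one way the weak components of the step cycle (item (b)) could be isolated.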
This effort incorporates state-of-the-art simulation software tools to automatically formulate and solve the equations for both the physiological and neural control models, allowing higher-order sophistication as well as the development of a much more robust controller. With this approach, higher-order elements such as surface-contact joints (knees), soft tissue wrapping around hard tissues, sophisticated muscle force algorithms, a fully articulating foot, distributed plantar surface contact forces and a detailed spine will be included to make the model more closely emulate the kinematics and kinetics of a particular patient. In addition, the model will be "personalized" to include features such as parametric hard tissue geometry and joint axis orientation, soft tissue geometry and configuration, as well as a controller which may be configured to the state of the subject.
Current prototype exoskeleton components, located in hand-held interfaces at the knee and foot to quantify the level of assistance given by the trainer, clearly depict changes in needed assistance both during and across training sessions. These data are a valuable tool for assessing the subject's progress while training to achieve the appropriate kinematics and kinetics of locomotion, both for spinal cord injured patients desiring normal walking capabilities here on Earth and for astronauts operating in space.
We are using a neural oscillator, constructed as a linear state-space matrix augmented with non-linear state functions through "Simulink" software, that functions as a central pattern generator with a sensory feedback system, combined with closely simulated limb mass, kinetics and moment-arm data of individual muscles of the hip, knee and ankle. Currently the modeling is focused on the locomotion of a subject walking with a range of relative loads, i.e., from full weight bearing to stepping with no load (air stepping). Variables
that are being studied include percent of
body weight loading, speed of
stepping, frequency of stepping,
changes in muscle output, e.g., as would
occur with muscle hypertrophy or
atrophy, and changes in the number of
motor units recruited during selected
phases of the step cycle. The model
currently permits the evaluation of the
alterations in kinematic, kinetic and
ground reaction force dissipation
signatures for the lower extremity
during walking gait simulations at
varying gravity loads. As anticipated,
all three signatures from the model
predict decreased reliance on the shock
dissipation mechanism of the lower
extremity under decreasing gravity
loads. The model is sufficiently detailed to permit analysis of the passive (heel strike) and active (mid- and forefoot impact) peaks in the ground-reaction dissipation signature, to predict the effective shock at each joint. In
the coming months, addition of modular
neural control elements will enable the
testing of a variety of locomotor
regulating systems. Based on these studies, predictions of the pattern of force, and thus the level of motor unit recruitment necessary for successful locomotion, will be made.
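A neural oscillator of the kind described above is commonly realized as a Matsuoka-style half-center: two mutually inhibiting, self-adapting neurons whose rectified outputs alternate like flexor and extensor bursts. The sketch below uses that generic form with illustrative parameters; it is not a reproduction of the project's Simulink model.

```python
# Matsuoka-style half-center oscillator: a generic central pattern generator
# with an optional sensory-feedback hook. All parameters are illustrative
# assumptions, not values from the project.

def matsuoka_cpg(steps=4000, dt=0.005, tau=0.25, tau_a=0.5,
                 beta=2.5, w=2.5, drive=1.0, feedback=None):
    """Integrate the two-neuron oscillator; returns flexor/extensor outputs.

    feedback, if given, maps a step index to (f1, f2), additive sensory
    inputs to each neuron (the 'sensory feedback system' of the text).
    """
    u1, u2 = 0.1, 0.0   # membrane states (asymmetric start breaks symmetry)
    v1, v2 = 0.0, 0.0   # adaptation (fatigue) states
    y1s, y2s = [], []
    for k in range(steps):
        f1, f2 = feedback(k) if feedback else (0.0, 0.0)
        y1, y2 = max(0.0, u1), max(0.0, u2)          # rectified outputs
        du1 = (-u1 - beta * v1 - w * y2 + drive + f1) / tau
        du2 = (-u2 - beta * v2 - w * y1 + drive + f2) / tau
        dv1 = (-v1 + y1) / tau_a                     # fatigue tracks output
        dv2 = (-v2 + y2) / tau_a
        u1 += du1 * dt
        u2 += du2 * dt
        v1 += dv1 * dt
        v2 += dv2 * dt
        y1s.append(max(0.0, u1))
        y2s.append(max(0.0, u2))
    return y1s, y2s
```

In a gait model, the two output channels would drive antagonist muscle groups, and the feedback hook is where load- and phase-dependent sensory signals would entrain the rhythm.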
One of the reasons that the exoskeleton can be as important to patients as it will be to astronauts is that the spinal cord, as well as the brain, learns the motor task it is being taught. It appears that if the spinal cord remains idle, in bed or in space, it begins to forget how to walk. Similarly, if you teach the spinal cord to walk improperly, it learns to walk improperly. If a robotic exoskeleton is used to move the legs in the proper manner, the spinal cord will learn or maintain the appropriate sensory information that must be present for normal walking to persist.
This robotic stepper will permit optimal
sensory inputs to be "seen" by the spinal
motor pools alone in the case of patients
with complete SCI, and by the spinal and
higher networks in the case of incomplete
SCI and stroke patients. From a scientific
point of view, the study of complete
thoracic spinal injured subjects with this
device and its measures will allow us to
study in greater detail the adaptability of
the cord. This feedback should also allow
patients to gradually increase the use of
their residual motor control and, with
consistent training, gradually reduce the
assistance provided by the motorized
exoskeleton.
More than a half million Americans are
hospitalized each year with stroke,
10,000 with spinal cord injury, and
100,000 with a traumatic brain injury.
These diseases and injuries result in
anything from partial to total paralysis.
Approximately 30 percent of those with
stroke and 75 percent with a spinal cord
injury suffer lifelong physical
impairment in ambulation, balance,
strength, and endurance.
Many of
these patients could be retrained to
walk. Physiological principles that have
evolved from studies of gravitational
loading and locomotion in rats, cats,
monkeys and humans show that
retraining is possible. It is the intention of this collaboration to use the model-controlled exoskeleton approach to show that automated BWST retraining is possible, and then to commercialize it for global application.
1 Brain Research Institute, University of California at Los Angeles, Los Angeles, California 90095
2 Mechanical Dynamics, Inc., 2301 Commonwealth Blvd, Ann Arbor, Michigan 48105