Proceedings of the First RoboCare Workshop
RC-Ws-1

Auditorium Polo Biologico CNR
Rome, Italy
October 30th, 2003

Edited by
Amedeo Cesta

Editor:
Amedeo Cesta
National Research Council of Italy
ISTC-CNR [PST]
Viale Marx, 15
I-00137 Rome, Italy
e-mail: [email protected]
“RoboCare: A Multi-Agent System with
Intelligent Fixed and Mobile Robotic
Components” is a project funded by the
Italian Ministry for Education, University
and Research (MIUR) under Law 449/97
(Funds 2000).
© 2003 ISTC-CNR
Consiglio Nazionale delle Ricerche
Istituto di Scienze e Tecnologie della Cognizione
Viale Marx, 15 - 00137 ROMA
ISBN 88-85059-15-5
Preface
This document collects the research results presented at the First RoboCare Workshop in October 2003, which were obtained during the first year of the RoboCare project.
RoboCare is the short name of the project "A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components", approved by the Italian Ministry for Education, University and Research (MIUR) under Law 449/97 (Funds 2000), and started in November 2002.
The goal of the RoboCare project is to build a multi-agent system which generates user
services for human assistance. The system is to be implemented on a distributed and
heterogeneous platform, consisting of a hardware and software prototype.
The use of autonomous robotics and distributed computing technologies constitutes the basis for the implementation of user services in a closed environment such as a health-care institution or a domestic environment. The fact that robotic components, intelligent systems and human beings are to act in a cooperative setting is what makes the study of such a system challenging, both for research and from the point of view of technology integration.
Given the challenging goal of realizing a complete service-generating system which
incorporates cutting-edge domotic technology, robotic platforms and intelligent agency,
these proceedings collect contributions from a variety of research
areas, including robotics, intelligent planning systems, computer vision, automated
reasoning, human-computer interaction and environmental psychology.
As coordinator of the project I would like to thank MIUR for funding this project and
for giving the Italian research community the opportunity to activate such a challenging
line of research.
Additionally, I would like to acknowledge the role of a group of people that helped me
in several daily tasks involved in the coordination of the project. In particular I would
like to thank G.Riccardo Leone, Albano Leoni, Angelo Oddi, Federico Pecora, and
Riccardo Rasconi.
Updated information on the RoboCare Project can be obtained from our web site.
Amedeo Cesta
October 23rd, 2003
ISTC-CNR
Rome, ITALY
TABLE OF CONTENTS
Effective Reasoning Techniques for Temporal Reasoning
A. Armando, C. Castellini, E. Giunchiglia, M. Maratea ........ 1

People and Robot Localization and Tracking through a Fixed Stereo Vision System
S. Bahadori, L. Iocchi, L. Scozzafava ........ 7

Developing a Robot able to Follow a Human Target in a Domestic Environment
R. Bianco, M. Caretti, S. Nolfi ........ 11

Human-Robot Interaction
A. Cappelli, E. Giovannetti ........ 15

Toward a Mobile Manipulator Service Robot for Human Assistance
S. Caselli, E. Fantini, F. Monica, P. Occhi, M. Reggiani ........ 21

Knowledge Representation and Planning in an Artificial Ecosystem
M. Castelnovi, F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria ........ 27

A Component-Based Framework for Loosely-Coupled Planning and Scheduling Integrations
A. Cesta, F. Pecora, R. Rasconi ........ 33

Path Planning in a Domestic Dynamic Environment
A. Farinelli, L. Iocchi, D. Nardi ........ 39

Multi-scale Meshing in Real Time
S. Ferrari, I. Frosio, V. Piuri, A. Borghese ........ 45

Environmental Adaptation Strategies and Attitude towards New Technology in the Elderly
M.V. Giuliani, F. Fornara, M. Scopelliti, E. Muffolini ........ 49

Guests' and Caregivers' Expectancies towards New Assistive Technology in Nursing Homes
M.V. Giuliani, E. Muffolini, G. Zanardi, M. Scopelliti, F. Fornara ........ 53

Human-Robot Interaction: How People View Domestic Robots
M.V. Giuliani, M. Scopelliti, F. Fornara, E. Muffolini, A. Saggese ........ 57

A Cognitive System for Human Interaction with a Robotic Hand
I. Infantino, A. Chella, H. Dzindo, I. Macaluso ........ 63

Analyzing Features of Multi-Robot Systems. An Initial Step for the RoboCare Project
L. Iocchi, D. Nardi, A. Cesta ........ 70

Robotic Path Planning in Partially Unknown Environment
A. Scalzo, G. Veruggio ........ 78

The Design of the Diagnostic Agent for the RoboCare Project
P. Torasso, G. Torta, R. Micalizio, A. Bosio ........ 83
Effective Reasoning Techniques for Temporal Reasoning
Alessandro Armando and Claudio Castellini and
Enrico Giunchiglia and Marco Maratea
MRG-DIST, University of Genova
viale Francesco Causa, 13 — 16145 Genova (Italy)
{armando,drwho,enrico,marco}@mrg.dist.unige.it
Abstract
Temporal reasoning is an important topic in AI and Computer Science, encompassing well-known problems such as planning and scheduling; more recently, hardware verification problems have also been cast in this framework. Efficient decision procedures for temporal reasoning are, therefore, strongly needed; at the same time, more and more expressivity is required.
In this paper we present the temporal reasoning techniques
built into TSAT++, an open reasoning platform able to decide the satisfiability of Boolean combinations of linear binary constraints and propositional variables. The experimental results show that TSAT++ outperforms its competitors
both on randomly-generated problems and on real-world instances taken from the computer-aided verification literature.
1 Introduction
Expressive and efficient decision procedures for temporal reasoning are crucial for many reasoning tasks in Computer
Science and AI, among which planning and scheduling. One
of the best known and studied formalism is the so-called
Simple Temporal Problem (Dechter, Meiri, & Pearl 1991),
basically consisting of a conjunction of linear binary constraints of the form
where and are variables
ranging over a dense domain (typically the reals) and is a
constant in the same domain. This problem is tractable, but
its expressiveness is rather limited; therefore, recently, several extensions to it have been introduced, some of which allow disjunctions and negations of binary constraints, as well
as propositionally-valued variables.
In more detail, in the last five years at least two approaches and six systems have been proposed that are able
to deal with disjunctions and conjunctions of binary constraints and, possibly, propositional variables. Interestingly,
four of these systems have been proposed in the AI literature (Stergiou & Koubarakis 2000; Armando, Castellini, &
Giunchiglia 2000; Oddi & Cesta 2000; Tsamardinos & Pollack 2003) and two in the formal verification literature (Audemard et al. 2002b; Strichman, Seshia, & Bryant 2002),
meaning that the topic is hot and interdisciplinary, bearing
great significance both from a theoretical and practical point
of view.
Copyright © 2003, The RoboCare Project — Funded by MIUR
L. 449/97. All rights reserved.
In this paper we present TSAT++, an open reasoning
platform able to deal with arbitrary conjunctions, disjunctions and negations of binary constraints and propositional
variables. The techniques enforced by TSAT++ mainly
come from the literature on propositional satisfiability, especially (Moskewicz et al. 2001), and from the above cited
systems for temporal reasoning, as well as from some novel
ideas described in this paper.
The result is a finely tunable system which is up to one order of magnitude faster than its fastest competitor on randomly generated problems, and up to 6 times faster than its fastest competitor on instances coming from real-world problems. This holds notwithstanding the fact that TSAT++ is neither tuned nor customized for any particular class of problems.
The paper is structured as follows: first, background information is provided; then the techniques and ideas enforced
by TSAT++ are described, and some experimental results
are presented. Lastly, future work is outlined, and some
conclusions are drawn.
2 Temporal Reasoning Problems
Let V and P be two nonempty, disjoint sets of symbols. A binary constraint is an expression of the form x − y ≤ c, where x, y ∈ V and c is a numeric constant; a propositional atom is an element of P; a literal is a binary constraint, a propositional atom, or the negation of a binary constraint or of a propositional atom; lastly, a temporal reasoning problem (TRP) is a Boolean combination, via and/or, of literals.
An assignment σ is a function mapping each symbol in V to a real number and each propositional atom to the truth values {true, false} of propositional logic. An assignment σ is extended to map a TRP to {true, false} by defining:
• σ(x − y ≤ c) = true if and only if σ(x) − σ(y) ≤ c;
• σ(¬ψ) = true if and only if σ(ψ) = false; and
• σ(ψ), with ψ being a TRP, according to the truth tables of propositional logic.
Consider a TRP ψ. We say that an assignment σ satisfies ψ if and only if σ(ψ) = true, and that ψ is satisfiable if and only if there exists an assignment which satisfies it. A finite set of literals is satisfiable if and only if their conjunction, as a TRP, is. We deal here with the problem of determining whether a TRP is satisfiable or not. It is clear that the
problem is NP-complete. Note that TRPs are as expressive
as Separation Logic (Strichman, Seshia, & Bryant 2002) and
strictly more expressive than the Disjunctive Temporal Problem (Armando, Castellini, & Giunchiglia 2000).
In the following, and without loss of generality, we restrict our attention to TRPs in Conjunctive Normal Form;
de facto, any TRP can be efficiently reduced to such a form
using well-known techniques of structure preserving clause
form transformation (Plaisted & Greenbaum 1986). With
this assumption, we represent a TRP as a set of clauses, each
clause being a set of literals.
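To make the clause-set representation concrete, here is one possible Python encoding of a small TRP in CNF; the data layout is purely illustrative and is not TSAT++'s actual internal representation.

```python
# A CNF TRP as a list of clauses, each clause a list of literals. Here a
# temporal literal is a tuple (x, y, c) meaning "x - y <= c", ("not", lit)
# is its negation, and a plain string is a propositional atom. This layout
# is our own illustration, not the solver's data structure.

trp = [
    [("x", "y", 5), "p"],                       # (x - y <= 5) or p
    [("not", ("y", "z", -2)), ("x", "z", 4)],   # not(y - z <= -2) or (x - z <= 4)
]
```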
3 TSAT++
Consider a TRP φ. TSAT++, like all its predecessors, works by searching a satisfiable set S of literals such that, for each clause C in φ, at least one literal in C also belongs to S. (It is easy to show that an assignment satisfying S also satisfies φ.) In previous approaches, this has been
done either as search in a meta-CSP associated with the
TRP (Stergiou & Koubarakis 1998; 2000; Oddi & Cesta
2000; Tsamardinos & Pollack 2003) or à la SAT (Armando,
Castellini, & Giunchiglia 2000; Audemard et al. 2002a;
Strichman, Seshia, & Bryant 2002). The two approaches
bear more than a casual resemblance to each other; also, the
ideas and optimisation techniques developed in both settings
are similar. In their basic versions, and starting from (Armando, Castellini, & Giunchiglia 2000), all the above systems (1) branch on a literal, (2) assign to true the literals
in the unit clauses, and, (3) upon failure of the subsequent
search, add the negation of the literal to the current state
and continue the search, till either a satisfying assignment is
found, or backtrack has to occur.1 Assuming all the systems pick the same literal for branching, all the above systems explore isomorphic search trees, i.e., trees with an equal number of branching nodes.
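The scheme shared by these systems can be summarized in a deliberately naive sketch, assuming literals are strings negated with "~" and a placeholder theory oracle; every name below is ours, not TSAT++'s actual code.

```python
# Naive sketch of the shared search scheme: branch on a literal, propagate
# unit clauses, backtrack on failure. `consistent` stands for the temporal
# oracle of Section 3.5; here it is only a stub.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(assignment):
    """Placeholder: a real solver checks the temporal literals here."""
    return True

def simplify(clauses, assigned):
    out = []
    for c in clauses:
        if any(l in assigned for l in c):
            continue                         # clause already satisfied
        rest = [l for l in c if negate(l) not in assigned]
        if not rest:
            return None                      # empty clause: conflict
        out.append(rest)
    return out

def solve(clauses, assignment=frozenset()):
    if not consistent(assignment):
        return None                          # early pruning (Section 3.2)
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:
        return assignment                    # all clauses satisfied
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    lit = unit if unit is not None else clauses[0][0]   # naive branching
    result = solve(clauses, assignment | {lit})
    if result is None and unit is None:      # units are forced, no flip
        result = solve(clauses, assignment | {negate(lit)})
    return result
```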
A reasonable way ahead is then that of gathering together this common corpus of ideas and techniques and trying
describe the techniques TSAT++ enforces, particularly focussing on (1) the computation done before the search starts
(pre-processing), (2) the way the search space is pruned
(look-ahead), (3) the way recovery from failures happens
(look-back), (4) the procedure used for picking the literal on
which to branch (branching rule), and (5) the procedure used
for checking the consistency of a set of literals (consistency
checking).
3.1 Pre-processing
In the CSP literature it is well known that statically adding redundant constraints to a problem generally helps guiding the solver and can sometimes dramatically speed up the search; in fact, in (Armando, Castellini, & Giunchiglia 2000) it was shown that such a pre-processing technique actually made the solver faster by up to one order of magnitude. We have implemented IS(2) pre-processing as described in the aforementioned paper, whose cost is just quadratic in the number of distinct binary constraints in the problem. We have also implemented a variant of IS(2), called full IS(2), which, given two binary constraints c1 and c2, checks all four pairs of literals deriving from them.

1 In (Oddi & Cesta 2000; Tsamardinos & Pollack 2003), the fact that unit clauses are assigned after a branch is hidden in the heuristics.
3.2 Look-ahead
Look-ahead techniques aim at pruning the search space, i.e., cutting away useless search branches, by looking at the current search node. In the following, let φ be a TRP, S be the set of all distinct binary constraints in φ, and μ(i) be the set of binary constraints picked up as candidate solution at search depth i along the current branch.
Basically all previous solvers enforce what is generally called early-pruning. The intuition is that, if during the search μ(i) becomes inconsistent, then it is useless to go on searching along that branch, since μ(i) cannot become consistent as more constraints are added; so in this case backtracking is forced. Early-pruning works by performing a consistency check on μ(i) each time a new literal is added, i.e., each time i is incremented.
Early-pruning can help the solver avoid useless branches, therefore effectively reducing the search space; but, on average, it will also cause a large number of useless consistency checks to be performed: each time μ(i) is checked and found consistent, the search goes on and no advantage is obtained. Especially when facing underconstrained problems, early-pruning is likely to waste a lot of time.
In order to mitigate this drawback, we have introduced periodic early-pruning: μ(i) is now checked only according to some periodic policy, for instance only at certain depths, or once every n additions of a new constraint, for some fixed n. For appropriate choices of the policy, periodic early-pruning indeed reduces the number of consistency checks performed, while retaining most of the advantage of early-pruning.
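A minimal sketch of such a periodic policy, assuming a `check_consistency` oracle (Section 3.5); the period value and all names are illustrative, not the exact ones used in TSAT++.

```python
# Sketch of periodic early-pruning: instead of running the (expensive)
# temporal consistency check after every new constraint, run it once
# every `period` additions.

def check_consistency(mu):
    """Placeholder for the temporal oracle (e.g. Bellman-Ford, Sec. 3.5)."""
    return True

class PeriodicEarlyPruning:
    def __init__(self, period=4):
        self.period = period
        self.added = 0

    def on_constraint_added(self, mu):
        """Return False to force backtracking on an inconsistent branch."""
        self.added += 1
        if self.added % self.period != 0:
            return True                  # skip the check this time
        return check_consistency(mu)
```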
Another well-known look-ahead technique is forward-checking. Let l be a binary constraint in S not belonging to μ(i); then it is generally useful to check μ(i) ∪ {l} for consistency, and if this set is found inconsistent, to add ¬l to μ(i). The idea is that of checking whether μ(i) logically implies ¬l, and if so, to forbid the further addition of l to μ(i).
Of course, much of the effectiveness of forward-checking depends on the policy with which l is chosen. It can be the "best" choice according to some heuristic, usually Minimum Remaining Value (Stergiou & Koubarakis 2000), otherwise known as Unit Propagation in the SAT-based approach; or it can be every such l, hoping that many inconsistencies will be found, leading to strong search-space pruning or to a failure.
It is clear that this technique subsumes early-pruning and is by far heavier: the risk of performing many checks that simply turn out consistent becomes much bigger. In our case, forward-checking also benefits from a periodic activation policy.
Another efficient look-ahead technique is unit propagation based on two-literal watching. Introduced in (Moskewicz et al. 2001), it consists of "marking" two literals in each clause, and then updating the marks according to the literals assigned during the search. The big advantage of this technique is that undoing an assignment due to backtracking/backjumping is done in constant time. For more details refer to the aforementioned paper.
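A toy version of the watching scheme might look as follows; the literal and clause representations are our own, and real implementations are considerably more engineered.

```python
# Toy sketch of two-literal watching (after Moskewicz et al. 2001). Each
# clause watches two of its literals and is revisited only when a watched
# literal becomes false; watch lists never need to be undone when the
# solver backtracks. Literals are nonzero ints, -n negates n; clauses are
# assumed to have at least two distinct literals (units handled elsewhere).

class TwoWatchedLiterals:
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.watchers = {}                       # literal -> clause indices
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:                    # watch the first two literals
                self.watchers.setdefault(lit, []).append(i)

    def assign_true(self, lit, assigned):
        """Record lit as true; return implied units, or None on conflict."""
        assigned.add(lit)
        units, keep, conflict = [], [], False
        for i in self.watchers.get(-lit, []):
            c = self.clauses[i]
            pos = 0 if c[0] == -lit else 1       # which watch became false
            other = c[1 - pos]
            for j in range(2, len(c)):           # look for a replacement watch
                if -c[j] not in assigned:
                    c[pos], c[j] = c[j], c[pos]
                    self.watchers.setdefault(c[pos], []).append(i)
                    break
            else:                                # no replacement found
                keep.append(i)
                if other in assigned:
                    continue                     # clause already satisfied
                if -other in assigned:
                    conflict = True              # every literal is false
                else:
                    units.append(other)          # clause became unit
        self.watchers[-lit] = keep
        return None if conflict else units
```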
3.3 Look-back
TSAT++ enforces two look-back techniques, both well known in the SAT and CSP literature: backjumping and learning. With backjumping, upon failure, the solver identifies a subset of the currently assigned literals which is responsible for it (a reason); then it keeps on backtracking until one of the literals in the reason is un-assigned. This helps avoiding search branches which will not lead to a satisfying assignment.
With learning, the system keeps track of the reasons found by the backjumping machinery and uses them also outside the search branch in which they were detected. The learning mechanism we use is called 1-UIP (Unique Implication Point) learning (inspired, again, by (Moskewicz et al. 2001)), which identifies conflicts in the implication graph associated with the search.

3.4 Branching rule
It is well known that the order in which literals are chosen during the search is crucial. TSAT++ employs what is called VSIDS (Variable State-Independent Decaying Sum) from (Moskewicz et al. 2001): its strength is that, periodically, the scores of the literals are updated, but not entirely replaced, according to their occurrences in the latest learned clauses. Intuitively, this technique directs the search using the information obtained from failures.
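A minimal sketch of a VSIDS-style heuristic follows; the bump and decay constants are illustrative (the original Chaff values differ), and the class is ours, not TSAT++'s code.

```python
# Sketch of VSIDS: every literal keeps a score, bumped when it appears in
# a learned (conflict) clause; all scores decay periodically, so recent
# conflicts dominate without erasing older information.

class VSIDS:
    def __init__(self, literals, decay=0.5, period=256):
        self.score = {lit: 0.0 for lit in literals}
        self.decay, self.period, self.conflicts = decay, period, 0

    def on_learned_clause(self, clause):
        for lit in clause:
            self.score[lit] += 1.0               # bump literals seen in conflicts
        self.conflicts += 1
        if self.conflicts % self.period == 0:
            for lit in self.score:               # periodic decay, not a reset
                self.score[lit] *= self.decay

    def pick(self, unassigned):
        return max(unassigned, key=self.score.__getitem__)
```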
3.5 Consistency checking
It is evident that an effective method for checking the consistency of μ(i) is needed in order to reduce the time spent by IS(2) pre-processing, early-pruning and forward-checking. Also, it is essential to be able to extract a reason μ'(i) ⊆ μ(i) out of an inconsistent μ(i), which can then be used as further guidance for the solver.
We have devised an implementation of the Bellman-Ford algorithm (Cormen et al. 2001), known to run in cubic time with respect to the number of real-valued variables appearing in the constraints in μ(i). As far as reasons are concerned, we have slightly modified the algorithm in order to return, upon detecting an inconsistency, a subset of all reasons in μ(i). TSAT++ currently can pick one of them (1) nondeterministically, or (2) choosing the smallest one under cardinality, or (3) choosing the one which makes the solver backjump as much as possible.
A further improvement to the consistency checking mechanism consists of refining μ(i) into a smaller set without affecting the soundness of the system. TSAT++ currently enforces two refining strategies, namely: (1) triggering ((Audemard et al. 2002a)); and (2) evaluation of prime implicants, i.e., of a minimal assignment which satisfies the formula.
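Under the usual encoding of a difference constraint x − y ≤ c as an edge from y to x with weight c, the consistency check amounts to negative-cycle detection, sketched below; reason extraction is omitted and all names are ours.

```python
# Consistency of a set of difference constraints x - y <= c via
# Bellman-Ford (Cormen et al. 2001): the system is satisfiable iff the
# constraint graph has no negative cycle. An implicit source at distance
# 0 to every node replaces the usual single-source initialization.

def consistent(constraints):
    """constraints: iterable of (x, y, c) meaning x - y <= c."""
    edges = [(y, x, c) for x, y, c in constraints]   # edge y -> x, weight c
    nodes = {v for e in edges for v in e[:2]}
    dist = {v: 0 for v in nodes}
    changed = False
    for _ in range(len(nodes)):      # |V| relaxation rounds
        changed = False
        for y, x, c in edges:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            break                    # fixpoint reached: consistent
    return not changed               # change in round |V| => negative cycle
```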
4 Experimental Analysis
4.1 Test set and experimental setting
For comparing TSAT++ against the other solvers, we have
considered a wide selection of publicly available benchmarks. In particular, we have considered:
• the Disjunctive Temporal Problem (DTP), introduced in (Stergiou & Koubarakis 1998) and since then used also in (Armando, Castellini, & Giunchiglia 2000; Stergiou & Koubarakis 2000; Oddi & Cesta 2000; Audemard et al. 2002a; Tsamardinos & Pollack 2003). These problems are randomly generated following the Fixed Clause Length method from the SAT literature;
• the post-office problem introduced in (Audemard et al. 2002a), and the hardware verification problems from (Strichman, Seshia, & Bryant 2002). These are structured instances coming from formal verification problems.
These last problems, though not directly related to the planning and scheduling literature, are interesting to us because it is well known that solver performance can greatly vary when going from randomly generated to structured problems and vice versa. In summary, we have considered all the publicly available instances with the exception of the Fischer protocol problems from (Audemard et al. 2002a) and the diamond problems from (Strichman, Seshia, & Bryant 2002). The Fischer protocol problems have been excluded because they can be solved by pure propositional reasoning: thus, they are as good as a SAT instance. The diamond problems are hand-made instances which do not correspond to any real-world problem. Still, they are interesting because their difficulty can be varied according to 2 input parameters, and we plan to include these problems in the future.
As for the solvers, we have initially considered all the systems mentioned in the introduction, namely TSAT++, SK, Epilitis, MathSAT, CSPi, SEP and Tsat. After a first run, we have discarded SK (Stergiou & Koubarakis 2000), because it is clearly not competitive with respect to the others.
Each solver has been run on all the benchmarks it can deal with, not only on the benchmarks on which the solver was analyzed by its authors. The only exception is SEP, which is clearly not competitive on the DTP problems. Each solver has been run using the configuration suggested by the authors for the specific problem instances.2 When not publicly available, we asked the authors for such "best" solver configurations. Given the wide variety of possibilities offered by TSAT++, we have selected and experimentally evaluated the subset that, to us, promised to be the best for each specific problem. Then, for each problem domain, we selected the best among the experimented settings.
All the experiments were run on four identical Pentium IV 2.4GHz machines with 1GB of RAM. CPU time is reported in seconds. In the analysis, TIME means that the process was stopped after 1000 seconds, and "–" that the solver ran into a segmentation fault on the related instance.
"
Look-back
TSAT++ enforces two look-back techniques, both wellknown in the SAT and CSP literature: Backjumping and
Learning. With backjumping, upon failure, the solver identifies a subset of the currently assigned literals which is responsible for it (reason); then it keeps on backtracking until
one of the literals in the reason is un-assigned. This helps
avoiding search branches which indeed will not lead to a
satisfying assignment.
With learning, the system keeps track of the reasons found
by the backjumping machinery and uses them also outside
the search branch in which they were detected. The learning mechanism we use is called 1-UIP (Unique Implication Point) learning (inspired, again, by (Moskewicz et al.
2001)), which identifies conflicts in the implication graph
associated with the search.
2 Each solver presented here has many possible configurations. In addition, MathSAT has different binary files for each problem domain.
Figure 1: Evaluation of the DTP on 30 variables (left) and 35 variables (right). [Median CPU time vs. ratio for TSAT++, Epilitis, MathSAT, CSPi and Tsat; plots not reproduced.]
4.2 Comparative evaluation on DTPs
DTPs are randomly generated in terms of the tuple of parameters ⟨k, n, m, L⟩ where:
• k is the number of disjuncts per clause,
• n is the number of arithmetic variables,
• m is the number of clauses, and
• L is a positive integer such that all the constants are taken from the interval [−L, L].
A DTP is produced by randomly generating m clauses of length k. Each disjunct x − y ≤ c in a clause is generated by choosing x and y out of the n variables and taking c to be a randomly selected integer in the interval [−L, L]. Pairs of identical variables in a disjunct, and clauses with identical disjuncts, are discarded. We generated DTP problems with:
• k = 2,
• n ranging from 20 to 35 variables, step 5,
• ratio between m and n ranging from 2 to 14, step 1,
• L = 100,
• 100 instances for each point.
A sketch of this generation procedure is given below.
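The following sketch of the generator uses our own helper names; the discarding rules described above are approximated with sets here.

```python
# Fixed Clause Length DTP generator: m clauses of k disjuncts each, with
# disjuncts x - y <= c over n variables and integer c in [-L, L].

import random

def random_disjunct(n, L):
    x, y = random.sample(range(n), 2)        # two distinct variables
    return (x, y, random.randint(-L, L))     # meaning: x - y <= c

def random_dtp(k=2, n=25, m=100, L=100):
    clauses = set()
    while len(clauses) < m:                  # duplicate clauses are redrawn
        clause = set()
        while len(clause) < k:               # duplicate disjuncts are redrawn
            clause.add(random_disjunct(n, L))
        clauses.add(frozenset(clause))
    return [list(c) for c in clauses]

# e.g. one instance per ratio point for n = 30 variables
problems = [random_dtp(k=2, n=30, m=ratio * 30) for ratio in range(2, 15)]
```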
"
2
IS(2) preprocessing
periodic early pruning (periodicity 1)
triggering
smallest reason
In Figure 1 there is the analysis on DTP benchmarks with
30 (left) and 35 variables (right).
On the x-axis there is the ratio, and on the y-axis the median CPU time. It is clear that TSAT++ is the system that
performs better here. The only solver that is barely competitive with TSAT++ in this domain is Epilitis; Nevertheless
TSAT++ gains up to a factor five on 30 variables (left) and
up to one order of magnitude on 35 variables (right). All the
other systems are at least 2 orders of magnitude slower than
TSAT++ in the hardest region.
Figure 2: TSAT++ on 35, 40, 45 and 50 variables.
We can notice how most of the solvers perform better than
TSAT++ for low ratios: this is probably due to the preprocessing step enabled in TSAT++.
Moreover, in order to evaluate the scalability of our system, we generated harder DTP problems. In Figure 2, we
evaluate TSAT++ also on 40, 45, and 50 variables.
We can see that TSAT++ does not seem very sensitive to the increase in the number of variables. Its behavior seems to be linear: it pays around a factor of 4 on the hardest point when adding five variables. This is not an obvious feature: from the SAT literature, for example, on randomly generated instances the performance of SAT solvers degrades exponentially with the number of variables. Finally, one consideration about the time spent inside the modules implemented in TSAT++. On harder instances, the time spent in the SAT solver is sometimes orders of magnitude greater than the time spent in the consistency checker. This suggests that future investigations should focus on improving the "quality" of the information that the consistency checker gives back to the SAT solver, even at the cost of spending more time to compute it.
4.3 Comparative evaluation on real-life domains
Bounded Model Checking of Timed Automata These benchmarks concern bounded model checking applied to timed systems. The problem is about a post office with N desks, each desk serving a customer every T seconds. Every new customer tries to choose the desk with the shortest queue. We want to prove that, although the customers are bright, some annoying situation in which a customer must wait will appear. The benchmarks are publicly available.3 They have a number of desks varying from 2 to 5. We show here only the instances on which at least one solver took more than 1 second, but for each number of desks we show at least two instances (the last unsatisfiable one and the satisfiable one). In this domain TSAT++ has been run with the following techniques enabled:
• IS(2) full preprocessing
• prime implicants
• smallest reason

3 http://dit.unitn.it/~rseba/Mathsat.html
Table 1 reports the results on the post-office problem.
Instance      TSAT++   MathSAT   SEP      SAT?
P02-7-P02     0.02     0.01      0.85     NO
P02-8-P02     0.01     0.03      2.05     YES
P03-5-P03     0.01     0.04      2.54     NO
P03-6-P03     0.03     0.06      46.94    NO
P03-7-P03     0.03     0.09      197.79   NO
P03-8-P03     0.06     0.11      TIME     NO
P03-9-P03     0.13     0.14      TIME     NO
P03-10-P03    0.1      0.1       TIME     YES
P04-4-P04     0.03     0.2       3.69     NO
P04-5-P04     0.05     0.26      2.25     NO
P04-6-P04     0.07     0.36      16.02    NO
P04-7-P04     0.11     0.36      134.21   NO
P04-8-P04     0.18     0.48      TIME     NO
P04-9-P04     0.31     0.69      TIME     NO
P04-10-P04    0.51     1.06      TIME     NO
P04-11-P04    1.01     2.13      TIME     NO
P04-12-P04    0.58     0.91      TIME     YES
P05-4-P05     0.08     1.15      –        NO
P05-5-P05     0.13     1.48      –        NO
P05-6-P05     0.21     1.89      –        NO
P05-7-P05     0.36     2.27      –        NO
P05-8-P05     0.69     2.75      –        NO
P05-9-P05     1.35     3.59      –        NO
P05-10-P05    2.41     5.32      –        NO
P05-11-P05    3.44     9.23      –        NO
P05-12-P05    4.79     22.06     –        NO
P05-13-P05    8.88     54.17     –        NO
P05-14-P05    2.99     11.36     –        YES

Table 1: Analysis on Bounded Model Checking of Timed Automata.
The data presented in the table are: the name of the instance, the CPU time for TSAT++, MathSAT and SEP respectively, and whether the instance is satisfiable or not. We can see that SEP is not competitive on this domain: it ran into a segmentation fault on 11 out of 13 instances with 5 desks, and on the other hardest instances it is outperformed by orders of magnitude by TSAT++ and MathSAT. Both TSAT++ and MathSAT are able to solve all the instances within the timeout; TSAT++ outperforms MathSAT by up to a factor of 6 on the hardest problems. It is interesting to note that on this domain we have used a customized version of MathSAT, as explained in (Audemard et
al. 2002a). In these problems, by construction, when there is a literal of the form (x − y < c) it can be ignored, because the instance also contains a literal (x − y ≤ c). Literals of the form (x − y < c) are "expensive" to process; nevertheless TSAT++, which does not use this optimization, is faster than MathSAT.
Instance       TSAT++   SEP     MathSAT   SAT?
2ba            0.28     0.04    0.14      YES
abz5-900       1.9      TIME    0.82      YES
cache.1step    0.00     0.01    0.00      NO
cache.2step    0.01     0.04    0.03      NO
LD-ST.1step    0.00     0.02    0.01      NO
LD-ST.2step    0.01     0.05    0.11      NO
LD-ST.3step    0.02     0.35    9.2       NO
OOO.2steps     0.00     0.04    0.00      NO
OOO.3steps     0.00     0.18    3.6       NO
RailRoad1-2    0.01     0.01    0.00      YES
RailRoad1-3    0.01     0.01    0.01      YES
RailRoad1-4    0.01     0.02    0.01      YES
RailRoad-0     0.00     0.01    0.00      YES
RailRoad-2     0.00     0.03    0         YES
ring2-10       0.02     0.05    1.07      YES
ring2-100      2.07     0.01    0.01      YES

Table 2: Analysis on hardware verification benchmarks.
Hardware verification In this subsection, we experiment with formulas that are generated using the UCLID verification tool (Lahiri, Seshia, & Bryant 2002) and publicly available.4 These benchmarks are composed of:
• 2 cache coherence protocol problems (cache*);
• 3 load-store unit problems from an industrial microprocessor (LD-ST*);
• 2 out-of-order execution problems (OOO*);
• 5 problems representing symbolic simulation of the well-known timed-automaton example of a railroad crossing (RailRoad*);
• 3 problems resulting from an analysis of timed automata (ring* and 2ba);
• 1 scheduling problem (abz5-900).
In this domain TSAT++ has been run with the following techniques enabled:
• periodic early pruning (periodicity 1)
• smallest reason
Table 2 reports the results on these benchmarks. The data presented are: the name of the instance, the CPU time for TSAT++, SEP and MathSAT, and whether the instance is satisfiable or not. All the experiments but four are simple for the solvers considered, and are solved in less than one second. On ring2-100, TSAT++ takes around 2 seconds, while SEP and MathSAT solve this benchmark instantaneously. Here the point is that the instance is satisfiable, and it is easy to check that it is, but TSAT++ has to perform many (around 1000) consistency checks before finding the solution. On the other hand, on LD-ST.3step and OOO.3steps MathSAT takes some seconds (up to 10) to solve them, while TSAT++ and SEP are instantaneous. Finally, on the only scheduling problem we considered, SEP is outperformed by orders of magnitude by TSAT++ and MathSAT.
If we computed the cumulative CPU time spent by each solver over the whole benchmark suite, it would be clear that TSAT++ is the solver that performs best on this domain.

4 http://iew3.technion.ac.il/~ofers/smtlib-local/benchmarks.html
5 Conclusions and future work
In this paper we have shown how the effective and modular
combination of old and new techniques for temporal reasoning can improve the state-of-the-art both on randomly generated problems, and on real-world instances coming from
the computer-aided verification community. TSAT++, the
system we have sketched, is clearly faster than its fastest
competitors both on the Disjunctive Temporal Problem and
on a test-suite including scheduling, hardware verification
and bounded model-checking problems. TSAT++ is going
to be at the reasoning core of the RoboCare project.
Among the lines of future work, we plan to (1) improve the mechanism that extracts reasons from failures of the consistency checker; (2) add the possibility of having
constraints taken from different theories, e.g., full linear
arithmetic, uninterpreted functions, arrays, lists; (3) apply
TSAT++ to practical problems which can be recast as DTPs.
Acknowledgments
This research is partially supported by MIUR (Italian Ministry of Education, University and Research) under project
RoboCare (A Multi-Agent System with Intelligent Fixed
and Mobile Robotic Components).
References
Armando, A.; Castellini, C.; and Giunchiglia, E. 2000.
SAT-based procedures for temporal reasoning. In Biundo,
S., and Fox, M., eds., Proceedings of the 5th European
Conference on Planning (Durham, UK), volume 1809 of
Lecture Notes in Computer Science, 97–108. Springer.
Audemard, G.; Bertoli, P.; Cimatti, A.; Kornilowicz, A.;
and Sebastiani, R. 2002a. A SAT based approach for solving formulas over boolean and linear mathematical propositions. In Voronkov, A., ed., Automated Deduction –
CADE-18, volume 2392 of Lecture Notes in Computer Science, 195–210. Springer-Verlag.
Audemard, G.; Cimatti, A.; Kornilowicz, A.; and Sebastiani, R. 2002b. Bounded model checking for timed systems. In IFIP WG 6.1 International Conference on Formal Techniques for Networked and Distributed Systems
(FORTE), LNCS, volume 22.
Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; and Stein,
C. 2001. Introduction to Algorithms. MIT Press.
Dechter, R.; Meiri, I.; and Pearl, J. 1991. Temporal constraint networks. Artificial Intelligence 49(1-3):61–95.
Lahiri, S. K.; Seshia, S. A.; and Bryant, R. E. 2002. Modeling and verification of out-of-order microprocessors in
UCLID. Lecture Notes in Computer Science 2517:142–??
Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.;
and Malik, S. 2001. Chaff: Engineering an Efficient SAT
Solver. In Proceedings of the 38th Design Automation
Conference (DAC’01).
Oddi, A., and Cesta, A. 2000. Incremental forward checking for the disjunctive temporal problem. In Proceedings
of the 14th European Conference on Artificial Intelligence
(ECAI-2000), 108–112.
Plaisted, D., and Greenbaum, S. 1986. A Structurepreserving Clause Form Translation. Journal of Symbolic
Computation 2:293–304.
Stergiou, K., and Koubarakis, M. 1998. Backtracking
algorithms for disjunctions of temporal constraints. In
Proc. AAAI.
Stergiou, K., and Koubarakis, M. 2000. Backtracking algorithms for disjunctions of temporal constraints. Artificial
Intelligence 120(1):81–117.
Strichman, O.; Seshia, S. A.; and Bryant, R. E. 2002.
Deciding separation formulas with SAT. Lecture Notes in
Computer Science 2404:209–??
Tsamardinos, I., and Pollack, M. 2003. Efficient solution techniques for disjunctive temporal reasoning problems. Artificial Intelligence. To appear.
People and Robot Localization and Tracking
through a Fixed Stereo Vision System
Shahram Bahadori, Luca Iocchi, Luigi Scozzafava
Dipartimento di Informatica e Sistemistica
Università di Roma “La Sapienza”
E-mail: <bahadori,iocchi,scozzafava>@dis.uniroma1.it
Abstract
The task of localizing and tracking people and robots in a dynamic environment in which persons and robots interact with each other is important for many different reasons: for globally monitoring the status of the system, for supervision purposes, for the analysis of the behaviors of the agents, etc. In this paper we describe the
setting and the use of a stereo vision system placed
in a fixed position in the RoboCare domestic environment, that implements the functionality of computing
the position and the trajectories of persons and robots
moving in the environment.
1
Introduction
Applications of multi-robot systems interacting with
persons in a real dynamic environment are very promising and many experiments have been performed in this
area. In this context the ability of monitoring the status
of the system, specifically the position and the trajectories of moving agents (both persons and robots) in
the environment is a fundamental requirement in order
to analyze the system and for human and/or machine
supervision.
The general form of this problem presents many difficulties: first of all, object tracking is difficult when many similar objects move in the same space; second, object recognition is generally difficult when the environment and the agents cannot be adequately structured; third, when moving observers are used for monitoring large environments the problem is even harder, since it is necessary to also take into account the noise introduced by the motion of the observer.
With respect to the above difficulties, in this paper we present a solution that makes the following choices: i) we limit the number of moving objects in the space to at most two or three persons plus any number of robots, so that the environment is not very crowded; ii) persons are not marked in any way, while robots can be visually recognized by placing special markers on top of them; iii) we make use of a fixed stereo vision system that is able to monitor only a portion of the area in the domestic environment.
Copyright © 2003, The RoboCare Project — Funded by
MIUR L. 449/97. All rights reserved.
The goal of the present work is thus to elaborate
the information computed by a stereo vision system
placed in a fixed position in the environment, in order to
determine the position and the trajectories of moving
objects. Since the stereo camera is fixed in the environment, we exploit the knowledge of the background
in order to identify other moving objects (persons and
robots). Persons are recognized by a model matching
between the extracted information and some predefined
values (e.g. eccentricity, height, etc.), while robots are
recognized through special markers put on top of them.
In the literature there are several works on people
localization and tracking through a vision system, aiming at realizing systems for different applications. These
systems are typically based on the use of a single camera
(Wren et al. 1997), stereo vision (Darrell et al. 1998;
Beymer & Konolige 1999) or multiple cameras (Yang,
Gonzalez-Banos, & Guibas 2003).
While the systems based on a single camera are used
for modelling and reconstructing a 2D representation of
human beings (e.g. heads and hands) and mainly used
for virtual reality and gesture recognition applications,
in the RoboCare project we are interested in determining the 3D position of a person in the environment and
in tracking his/her trajectory while he/she moves. To this
end stereo vision or multiple cameras can be effectively
used in order to compute the 3D position of objects in
the scenes. In this way the system is more efficient in
object recognition and tracking and also provides the
coordinates of such objects in the environment. The
present work follows the general ideas of the previous
works in people tracking through a stereo vision system. For example, the stereo vision system described in
(Beymer & Konolige 1999) focuses on the development
of a real-time system for person detection and tracking
by using a model matching technique. Also the work
in (Darrell et al. 1998) uses information about stereo
vision for identifying and tracking persons in an environment; person recognition is here implemented by
using a set of patterns to be matched.
The prototype implementation of our system that is described in this paper has the objective of providing real-time processing of stereo images and identifying people and robot trajectories in the environment.
Preliminary experiments on the system show the effectiveness of the approach, a sufficient accuracy for many
tasks, and a good computational performance. However, a more extensive experimentation and several improvements to the system are still under development.
2
3D Reconstruction through Stereo
Vision
Stereoscopic vision is a technique for inferring the 3D
position of objects from two or more simultaneous views
of the scene. This technique has received increasing attention in the last years thanks to the improvements
in the algorithms that allow for real-time implementations of stereo vision systems (Kanade et al. 1996;
Konolige 1997). Other advantages of using a stereo vision system are that: 1) it is a cheap solution for 3D
reconstruction of an environment; 2) it is a passive sensor and thus it does not introduce interferences with
other sensor devices (when multiple sensors are present
in the environment); 3) it can be easily integrated with
other vision routines, such as object recognition and
tracking.
Reconstruction of the world seen through a stereo
camera can be divided in two steps:
1. Correspondence problem: for every point in one image find out the correspondent point on the other
image and compute the disparity (distance in pixels)
of these points.
2. Triangulation: given the disparity map, the focal distance of the two cameras and the geometry of the
stereo setting (relative position and orientation of the
cameras) compute the (X, Y, Z) coordinates of all the
points in the images.
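For the standard setting, step 2 reduces to a few closed-form expressions. The following sketch assumes a rectified pinhole pair with focal length f (in pixels), baseline b, and principal point (cx, cy); the function and parameter names are ours.

```python
# Minimal triangulation for the standard (parallel-axis) stereo setting:
# a point with disparity d pixels lies at depth Z = f * b / d; X and Y
# follow by back-projecting through the pinhole model.

def triangulate(u, v, d, f, b, cx, cy):
    """Image point (u, v) with disparity d -> (X, Y, Z) in camera frame."""
    if d <= 0:
        raise ValueError("no valid correspondence for this pixel")
    Z = f * b / d                 # depth from disparity
    X = (u - cx) * Z / f          # back-project through the pinhole model
    Y = (v - cy) * Z / f
    return X, Y, Z
```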
Both the above tasks are easier to realize when cameras are in a standard setting, that is with parallel optical axes and coplanar image planes. In this setting
epipolar lines correspond to rows in the frame buffer
and point correspondences are searched over these rows.
Obviously the standard setting cannot be obtained with
real cameras, and thus a calibration procedure is needed
in order to compute the parameters that relate the ideal
model of the cameras to the real devices. This is required because of two factors: (i) both the correspondence problem and triangulation make the assumption
of an ideal model of the camera (pinhole model ), that
can be very different from actual imaging devices; (ii)
the relative position and orientation of the two cameras
must be known in order to determine range information.
Stereo head calibration is thus the task of relating
the ideal pinhole model of the camera with an actual
imaging device (internal calibration) and of determining the relative position and orientation of the cameras
(external calibration). This is a fundamental step for
3D reconstruction and in particular for stereoscopic vision analysis. It allows not only for determining the
geometry of the stereo setting (needed for triangulation), but also for removing radial distortions provided
by common lenses.
8
When acquiring two images at the same time from
two different cameras, the stereo algorithm is in charge
of finding correspondences of points from one image to
the other and to compute their disparities. There exist different kinds of stereo algorithms, some of them
are based on area correlation, some others on feature
detection, etc. The system we are using (Small Vision
System) (Konolige 1997) implements an area correlation function that computes the most likely position in
the right image for every point in the left one. This
correlation-based method allows for dense disparities
and it is more adequate for 3D reconstruction. Moreover, by exploiting some geometric constraints of stereo
vision systems and by an optimization in the implementation, the system allows for real-time processing also
for high-resolution images.
2.1
Stereo setting and calibration
Hardware Settings The hardware setting of our system consists of a pair of cheap FireWire webcams (Table 1) placed in a parallel setting, 18 cm apart. This stereo system is installed at a height of 220 cm from the ground and points down at an angle of 20° with respect to the horizon line. With this configuration, considering the 62° field of view of each camera, the system is able to cover an area of 2.2 × 1.8 meters.
Calibration For good stereo processing, the two images must be aligned correctly with respect to each
other, since actual stereo camera setups differ from the
standard setup in which the cameras are perfect pinhole
cameras and they are aligned precisely in a parallel setting. Obviously the displacement from the ideal model
causes errors and noise in the quality of the stereo matching,
since epipolar lines are not horizontal. In addition, if
the camera calibration is unknown, one does not know
how to interpret the stereo disparities in terms of range
to an object.
Camera calibration addresses these issues by creating a mathematical model of the camera. This is performed by fitting a model of a known calibration object
to a number of images of the calibration object taken by the cameras. The calibration object presents a
known pattern that can be recognized and measured
by an automated calibration procedure, that computes
the parameters needed for image rectification and stereo
computation.
The calibration procedure we have used in our setting
is the standard one that is implemented within Small
Vision System (Konolige 1997).
3
System architecture and implementation
The implementation of the tracking system is based on
the following components:
• foreground/background segmentation, that is able to
distinguish pixels in the image that are coming from
the background from the ones that represent the foreground (i.e. persons or robots moving in the environment);
• stereo vision computation, that evaluates disparities only for the foreground in the two images and computes the 3D position of a set of points;
• 3D segmentation, that processes the 3D points computed by the stereo algorithm and clusters them into continuous regions;
• object association, that associates each 3D cluster of points to a specific robot (when its special marker is recognized) or to a generic person (if it is compatible with some predefined values);
• position filtering, that makes use of a Kalman Filter to reliably track the position of objects in the space.

Table 1: FireWire webcam features

Feature             Specifics
Bandwidth           200 Mb/s
Data transmission   Non-compressed full-motion digital video
Frame rate          30 fps
Resolution          640 × 480
Sensor              1/4 Color CCD Image Sensor
Field of view       62° angle of view

Foreground/background segmentation. Foreground/background segmentation is obtained by computing the image difference between a stored image representing the background and the current image. In order to deal with the fact that the background can change over time (for example pieces of furniture or objects on a table can be moved), a special routine detects these changes after the processing of a number of frames and updates the background model dynamically.
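A minimal sketch of this kind of difference-based segmentation with a slowly adapting background follows, assuming grayscale frames as NumPy arrays; the threshold and learning rate are illustrative, not the system's actual values.

```python
# Difference-based foreground segmentation with a slowly adapting
# background model. Pixels far from the stored background are foreground;
# static pixels are blended into the background so that moved furniture
# is eventually absorbed.

import numpy as np

class BackgroundModel:
    def __init__(self, first_frame, threshold=25.0, alpha=0.02):
        self.bg = first_frame.astype(np.float64)
        self.threshold, self.alpha = threshold, alpha

    def foreground_mask(self, frame):
        diff = np.abs(frame.astype(np.float64) - self.bg)
        mask = diff > self.threshold          # True on foreground pixels
        # update the model only where the scene looks static
        self.bg[~mask] = ((1 - self.alpha) * self.bg[~mask]
                          + self.alpha * frame[~mask])
        return mask
```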
Stereo computation. The stereo vision system we are using (Small Vision System (Konolige 1997)) provides a real-time implementation of a correlation-based stereo algorithm. The snapshots in Figure 1 show some situations of a person in the environment and the corresponding disparity images. These disparity images represent points that are closer to the cameras with brighter pixels; from these disparity values a geometric triangulation is used to retrieve the 3D coordinates of every point in the space. The stereo vision system thus returns a set of 3D points representing the 3D coordinates of all the pixels matched in the two images; this set of points is the 3D model of the scene seen through the cameras.

Figure 1: Stereo vision processing
3D segmentation. The segmentation algorithm that follows stereo computation is in charge of grouping the set of 3D points into clusters, such that each cluster represents a single object in the environment.
Object association. Each cluster of 3D points is associated to the representation of a moving object in
the environment (either a person or a robot) that is
tracked by the system. This step takes into account a
simple model of a person (eccentricity, height, width,
etc.) and the recognition of special markers applied on
the robots. Those clusters that do not match any of the
specifications are simply discarded.
Position filtering. In order to keep track of the position of an object in a consistent and robust way, a Kalman Filter has been used. This allows for dealing with temporary errors that may occur when an object is completely occluded by another or when the stereo system is not able to distinguish two objects that are close to each other.
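A minimal constant-velocity Kalman filter of the kind described, for a single object's ground-plane position, might look as follows; the time step and noise magnitudes are illustrative, not the system's tuned values.

```python
# Constant-velocity Kalman filter: state (x, y, vx, vy), noisy (x, y)
# measurements from the stereo system. When the object is occluded the
# update step is skipped and the filter coasts on its prediction.

import numpy as np

class PositionFilter:
    def __init__(self, x, y, dt=0.4, q=0.05, r=0.08):
        self.s = np.array([x, y, 0.0, 0.0])                # state
        self.P = np.eye(4)                                 # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                              # we observe (x, y)
        self.Q = q * np.eye(4)                             # process noise
        self.R = r * np.eye(2)                             # measurement noise

    def step(self, z=None):
        self.s = self.F @ self.s                           # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:                                  # update, if visible
            y = np.asarray(z) - self.H @ self.s
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.s = self.s + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]                                  # filtered position
```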
Although there are some situations in which the above method returns erroneous results, especially when
Table 2: Accuracy results

Marker   Distance   Angle   Mean error   Error variance
M1       3.00 m      0°     7.3 cm       6–8.2 cm
M2       2.70 m      0°     7.1 cm       6–7.7 cm
M3       2.40 m      0°     6.8 cm       5.8–6.7 cm
M4       3.00 m     32°     9.8 cm       7.8–10.7 cm
M5       2.70 m     35°     9.6 cm       7.8–9.7 cm
M6       2.40 m     37°     8.8 cm       7.8–9.7 cm
M7       3.00 m    -32°     9.2 cm       7.8–10.5 cm
M8       2.70 m    -35°     9.6 cm       7.8–9.7 cm
M9       2.40 m    -37°     8.8 cm       7.8–9.7 cm
several objects are in the environment, a preliminary set
of experimental results that are described in the next
section shows the feasibility of the method and several
improvements can be adopted to increase its robustness.
4 Experimental evaluation
Given our hardware setting and our software algorithms, we performed a set of experiments evaluating the accuracy and the computational efficiency of the system. The results are as follows.

4.1 Accuracy of the system
To measure the accuracy of the system at different distances, we placed 9 markers at different distances and angles in the scenario. The results, after 40 experiments, are reported in Table 2. These experiments were performed with the standard external calibration procedure.
4.2 Computational efficiency
The processing time, excluding frame grabbing, on a 700 MHz processor with one object to track is about 130 ms per frame; when the grabbing and filtering procedures are included, the processing time increases to 400 ms with one object, 500 ms with two, and 650 ms with three moving objects in the scenario. The major part of the processing time is devoted to Gaussian filtering and noise detection on the background.
5
Conclusions
In this paper we have described an ongoing project that
has the objective of realizing a system for localizing and
tracking persons and robots in a real domestic environment.
The system has been implemented only in a prototype version, but some preliminary experiments we have
performed show its effectiveness, a sufficient accuracy
and good computational properties.
However, we are actively working on the development of several improvements. In particular, we are investigating efficient algorithms for segmentation and for object recognition when the number of objects in the scene is more than a few units, and we are generally improving the realization of the entire system.
6
Acknowledgments
This research is partially supported by MIUR (Italian
Ministry of Education, University and Research) under
project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components).
References
Beymer, D., and Konolige, K. 1999. Real-time tracking of multiple people using stereo. In Proc. of IEEE
Frame Rate Workshop.
Darrell, T.; Gordon, G.; Harville, M.; and Woodfill, J.
1998. Integrated person tracking using stereo, color,
and pattern detection. In IEEE Conf. on Computer
Vision and Pattern Recognition.
Kanade, T.; Yoshida, A.; Oda, K.; Kano, H.; and
Tanaka, M. 1996. A stereo machine for video-rate
dense depth mapping and its new applications. In
Proc. of CVPR’96.
Konolige, K. 1997. Small vision systems: Hardware
and implementation. In Proc. of 8th International
Symposium on Robotics Research.
Wren, C. R.; Azarbayejani, A.; Darrell, T.; and Pentland, A. 1997. Pfinder: Real-time tracking of the
human body. IEEE Transactions on Pattern Analysis
and Machine Intelligence 19(7):780–785.
Yang, D.; Gonzalez-Banos, H.; and Guibas, L. 2003.
Counting people in crowds with a real-time network of
image sensors. In Proc. of ICCV.
Developing a Robot able to Follow a Human Target in a Domestic
Environment
Raffaele Bianco, Massimiliano Caretti, Stefano Nolfi
Laboratory of Artificial Life and Robotics,
Institute of Cognitive Sciences and Technologies, CNR
Viale Marx, 15, 00137
Roma, Italy
[email protected], [email protected], [email protected]
Abstract
In this article we describe the preliminary work conducted
to develop a mobile robot able to find, follow, and monitor
a human target moving in a domestic environment. More
specifically we describe: (1) the characteristics of the
transmitter-receiver devices that we are developing and, (2)
the results of a set of evolutionary experiments conducted in
simulation in which we evolved the neural controller of the
robot. The obtained results indicate that robots provided
with a device that detects the current direction of the
moving target can solve the problem with rather simple
behavioral strategies even if the device does not produce
accurate directional information.
(1) Koala, a commercially available robot developed by KTeam (http://www.k-team.com/, see Figure 1), and (2) one
s-bot, a mobile robot currently under development by us
and other partners (Université Libre de Bruxelles,
Belgium; Ecole Polytechnique Fédérale de Lausanne,
Switzerland; Dalle Molle Institute for Artificial
Intelligence, Switzerland) within the European Project
SWARM-BOTS (http://www.swarm-bots.org/, Figure 2).
1 Introduction
In this article we describe the preliminary work conducted
in order to develop a mobile robot able to find, follow, and
monitor an human target moving in a domestic
environment. In section 2 we describe the robotic
platforms that we plan to use and the sensor-emitter
systems that we are developing, in section 3 we describe
the preliminary experiments performed in simulation.
Finally, in section 4, we describe what we plan to do in the
future.
Figure 1. The Koala robot provided with 2 pan-tilt cameras.
2 The robots and the transmitter-receiver
device
To achieve the goal described in the introduction we plan
to use two robotic platforms that we have at our disposal:
Copyright © 2003, The ROBOCARE Project
Funded by MIUR L. 449/97. All rights reserved.
Figure 2. The first prototype of an s-bot.
The Koala robot (Figure 1) has three soft rubber wheels on
each side. The robot moves by means of the two middle
wheels that are connected to two corresponding motors and
are slightly lower, while the other four wheels only provide
physical support. Moreover, the robot has 16 infrared
sensors distributed around the body that can detect
obstacles up to a distance of about 20 cm.
The s-bot (Figure 2) has a cylindrical body with a
diameter of 116 mm and consists of a mobile base
provided with two differential drive mechanisms
controlling tracks and wheels, and a main body provided
with two grippers that allow an s-bot to assemble with
other s-bots or to grasp objects. The first gripper is
supported by a mobile structure that can rotate around a horizontal axis, and the second gripper is supported by a motorized arm that allows large movements along the vertical and horizontal axes. The main body rotates with
respect to the base supporting the tracks. From the sensory
point of view, each s-bot is provided with proximity
sensors, light sensors, accelerometers, humidity sensors,
sound sensors, an omni-directional color camera, light
barrier sensors (on the grippers), force sensors etc.
To allow the robot to easily identify the relative direction
of the target person we are developing a transmitter-receiver system consisting of a transmitter carried by the
target person and a receiver installed on the robot that
should provide the angle (and possibly the distance) of the
transmitter located on the target person. In particular we
are currently developing and testing two systems based on
radio and sound emitter-receiver devices.
The former system consists of a transmitter producing
a continuous signal at 433 MHz (Figure 3, left), a receiver
(Figure 3, top-right), and a directional antenna (Figure 3,
bottom-right). To provide information about the current
direction of the transmitter the antenna should be mounted
on a motorized support (still to be developed) that allows the rotation of the antenna and the detection of its current orientation with respect to the frontal direction of the robot. Preliminary tests of this system indicate that it provides reliable information about the direction but not about the distance of the target.
The latter system consists of a transmitter producing very short sound tones separated in time, and a receiver provided with two omni-directional microphones that detect the arrival of the first waves and then stop listening until the echoes have disappeared. The receiver device detects the time difference between the signals received by the two microphones, which provides information about both direction and distance.
Figure 3. The radio transmitter-receiver device. Left: the transmitter. Top-right: the receiver. Bottom-right: the antenna.

3 The controller
To develop the control system of the robots described in
section 2, we ran a set of evolutionary experiments in
simulation. These experiments had different goals, namely:
(1) verify the feasibility of the use of the evolutionary
method (Nolfi and Floreano, 2002) for the problem
described above, (2) identify the minimal requirements that
the transmitter-receiver device should have to assure
reliable and effective performance, (3) identify the
simplest neural architecture that can solve the problem.
In a first set of experiments we evolved the control
system of a robot provided with 8 infrared sensors and the
specially designed directional sensor providing the relative
angle of the target person. The control system of the robot
consists of a simple neural controller provided with 9
sensory neurons (encoding the infrared sensors and the
directional sensor) directly connected to two motor
neurons encoding the current speed of the two motors
controlling the two motorized wheels.
The genotype of evolving individuals encodes the
connection weights and the biases of the neural controller.
Each parameter is encoded with 8 bits. Weights and biases
are normalized between –10.0 and 10.0. Population size is
100. The evolutionary process is continued for 200
generations.
The 20 best individuals of each generation were
allowed to reproduce by generating 5 copies of their
genotype which were mutated by replacing 2% of
randomly selected bits with a new randomly chosen value.
Each experiment was replicated 10 times.
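The corresponding generational loop, under the parameters stated above, can be sketched as follows; the genotype layout and the NumPy encoding are our own illustrative choices.

```python
import numpy as np

N_PARAMS = 2 * 9 + 2   # 2x9 connection weights plus 2 motor-neuron biases
BITS = 8               # bits per parameter, as stated in the text

def decode(genotype):
    """Map each 8-bit gene to a weight or bias normalized in [-10.0, 10.0]."""
    genes = genotype.reshape(N_PARAMS, BITS)
    ints = genes @ (2 ** np.arange(BITS))
    return -10.0 + 20.0 * ints / (2 ** BITS - 1)

def next_generation(population, fitness, rng):
    """Truncation selection as described: the 20 best individuals each
    produce 5 copies, and 2% of randomly selected bits are replaced with
    new random values (so roughly half of the selected bits flip)."""
    best = population[np.argsort(fitness)[-20:]]
    offspring = np.repeat(best, 5, axis=0)             # 20 x 5 = 100 children
    mask = rng.random(offspring.shape) < 0.02
    offspring[mask] = rng.integers(0, 2, mask.sum())
    return offspring
```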
Each individual of the population was tested for 10
trials, with each trial consisting of 1000 steps (each step
lasts 100 ms of real time). At the beginning of each trial
the robot and the target person are placed in a randomly
selected position and orientation (see Figure 4). During each time step the positions of the robot and of the moving target are updated. More precisely, in the case of the robot, at each time step: (1) the state of the sensory neurons is updated; (2) the activation state of the internal (if present) and motor neurons is determined; (3) the desired speed of the motors controlling the two wheels is set according to the state of the motor neurons; and (4) the positions of the robot and of the target person are updated on the basis of the desired speed of the two motors and, possibly, of intervening collisions. The moving target, instead, moves by producing random actions. More specifically, at each time step it moves with a randomly selected speed and orientation with probability 95%, and stays still with probability 5%.
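The target's random motion can be summarized in a few lines; the maximum speed is an assumed value, not taken from the paper.

```python
import numpy as np

def target_step(position, rng, dt=0.1, max_speed=0.5):
    """Random motion of the target person: at each 100 ms step it moves
    with a randomly selected speed and heading with probability 0.95,
    and stays still with probability 0.05."""
    if rng.random() < 0.95:
        heading = rng.uniform(-np.pi, np.pi)
        speed = rng.uniform(0.0, max_speed)           # max_speed is assumed
        position = position + speed * dt * np.array([np.cos(heading),
                                                     np.sin(heading)])
    return position
```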
To identify the minimal requirements that the
transmitter-receiver device should have, different
replications of the experiments were performed by varying
the amount of noise added to the directional sensor. To
take into account the fact that the main source of noise is
the reflection of the radio or sound waves we simulated a
form of conservative noise (Miglino, Lund & Nolfi, 1995),
i.e. we perturbed the perceived direction of the target by
adding to it the value of a variable allowed to vary within a
given range. The value of this variable is initially assigned randomly within the given range and is then modified at each time step by adding to it a value randomly selected within the interval [-0.02, 0.02].
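A sketch of this conservative-noise model follows; clipping the drifting offset back into the given range is our reading of "allowed to vary within a given range".

```python
import numpy as np

def make_conservative_noise(noise_range, rng):
    """Conservative noise on the directional sensor: the offset starts at
    a random value within +/- noise_range and then performs a bounded
    random walk, changing by a value in [-0.02, 0.02] at each step."""
    offset = rng.uniform(-noise_range, noise_range)

    def perturb(true_direction):
        nonlocal offset
        offset = float(np.clip(offset + rng.uniform(-0.02, 0.02),
                               -noise_range, noise_range))
        return true_direction + offset

    return perturb
```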
The results obtained indicate that reasonably good performance can be achieved when the range of the noise is allowed to vary by up to 20%.
Figure 4. A screenshot of the simulator in the first experiment.
The environment consists of an arena surrounded by walls. The
robot has to follow and remain close to the moving target without
colliding against it.
In a second experiment we placed the robot and the
moving target in an environment in which different regions
are separated by walls (see Figure 5). Moreover we
provided the robot with a directional sensor that can only
detect the target up to a limited distance.
Figure 5. A screenshot of the simulator in the second experiment.
To reach and follow the moving target the robot should be able to
overcome inside obstacles consisting of walls that separate
different regions of the arena.
Figure 6. The behavior of a robot evolved in the environment shown in Figure 5. The lines indicate the trajectory of the robot and of the moving target during a trial.
As shown in Figure 6, which displays the behavior of a typical evolved individual, robots are able to overcome obstacles by integrating a simple wall-following behavior with a target-approaching behavior. Moreover, when individuals are too far from the target and do not have directional information, they explore the environment (by using a simple strategy that allows them to avoid obstacles and not to fall into behavioral limit cycles) until they start to receive enough directional information to reach the target with the strategy described above.
4 Discussion and Future Work
In this article we described the work done in the attempt to build a robot able to find, follow, and monitor a human target moving in a domestic environment. More specifically, we described the transmitter-receiver devices that we are building, which will provide directional information on the relative position of the moving target, and a set of preliminary experiments in which we evolved the control system of simulated robots. Preliminary results indicate that robots provided with a device that detects the current direction of the moving target can solve the problem with rather simple behavioral strategies, even if the device does not produce accurate directional information.
In future work we plan to: (1) complete the development and optimization of the transmitter-receiver devices; (2) analyze the performance of the devices in varying environmental conditions by sampling their response for targets at different angles and distances and under different environmental conditions (e.g., the presence of intervening obstacles); (3) run a new set of experiments using sensors simulated according to the data obtained by recording the response of the devices in the real environment; (4) test the evolved neural controller on the real robot.

Acknowledgments

This research has been partially supported by MIUR (Italian Ministry of Education, University and Research) under project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components).
References
Miglino O., Lund H.H., Nolfi S. (1995). Evolving mobile robots in simulated and real environments. Artificial Life, 2(4): 417-434.

Nolfi, S., and Floreano, D. (2000). Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. Cambridge, MA: MIT Press/Bradford Books.
Human-Robot Interaction
Amedeo Cappelli, Emiliano Giovannetti
KDD Laboratory
Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo”, CNR, Pisa
[email protected]
[email protected]
Abstract
Human-Robot Interaction (HRI) is a constantly growing multidisciplinary area, rich in cues for advanced research and technology transfer. It plays a fundamental role in the development of robots that operate in an open environment and cooperate with humans. This task requires the development of techniques that allow inexperienced users to use their robots in an efficient and safe way, through an intuitive and natural interface. In this work, after an introduction to the fundamental issues concerning HRI, we present the different possible interaction modalities between robots and humans, followed by a series of advanced interface applications for autonomous mobile robots.
1 Introduction
The use of robots on a large industrial scale has brought a
substantial improvement in productivity and a reduction of
production costs.
In parallel with technological progress, it has been possible to make robots more independent of human operators and capable of moving inside the work environment with greater autonomy.
In order to operate effectively in a complex and dynamic environment, a robot must be equipped with external perception systems to “see” what is around it and to react adequately and autonomously to different, possibly unpredictable, situations.
Using a robot in a natural and dynamic environment inhabited by humans involves precise requirements about sensory perception, mobility and dexterity, as well as capacities for task planning, decision making and reasoning.
However, modern technology, at the current state of the art, is not yet capable of satisfying all these requirements. A limit to the development of this kind of “social” robot comes from the lack of appropriate interfaces that allow a natural, intuitive and versatile (i.e., human-friendly) interaction and communication with the robot. Interfaces of this kind are considered essential to program and instruct the robot efficiently.
Copyright © 2003, The ROBOCARE Project
Funded by MIUR L. 449/97. All rights reserved.
A natural code of communication is fundamental for the
interaction with a service robot: as an example, the use of
a keyboard or a mouse to communicate with a mobile
house-cleaning robot is not reasonable, since, functionally,
a domestic robot would [Khan, 1998]:
• fetch and carry;
• prepare meals;
• clean the house;
• monitor vital signs;
• assist ambulation;
• manage the environment;
• communicate by voice;
• take emergency action (fire, intrusion, etc.).
This work is part of a survey that aims to give an overview of the issues and applications concerning Human-Robot Interaction and to provide a starting point of reference for the study of the interface capabilities of the mobile robots developed within the “RoboCare” project [Cesta et al., 2003], coordinated by the Institute for Cognitive Science and Technology of CNR in Rome.
The project aims at the development of distributed systems in which software and robotic agents collaborate to provide services in environments where humans may need assistance and guidance, such as health-care facilities.
The study and achievement of complex systems capable of carrying out tasks of this kind requires the synergy of a number of disciplines, which the RoboCare project effectively includes, such as communication, knowledge representation, human-machine interaction, learning, and collective and individual symbolic reasoning.
2 Human-Robot Interaction
Human interface and interaction issues have long been a
part of robotics research.
People are typically involved in the supervision and/or tele-operation of robots: the first studies aimed at improving interfaces were motivated by the desire to facilitate this kind of interaction.
However, until a few years ago, the focus of the robotics community was mainly “robot-centric”, with an emphasis
on the technical challenges of achieving intelligent control
and mobility.
It is only recently that predictions such as the following
can be made: "within a decade, robots that answer
phones, open mail, deliver documents to different
departments, make coffee, tidy up and run the vacuum
could occupy every office.” [Lorek, 2001].
Due to the nature of the intelligence needed for robots
to perform such tasks, there is a tendency to think that
robots ought to become more "human" and that they need
to interact with humans (and with each other) the way
humans do. This approach to robotics, sometimes termed
"human-centered”, emphasizes the study of humans as
models for the robots.
As the physical capabilities of robots improve, the
reality of using them in everyday locations such as offices,
factories, homes and hospitals, as well as in more
technical environments such as space stations, planets,
mines and ocean floors is quickly becoming more feasible.
However, before intelligent robots are fully developed and
integrated into our society, we need to investigate more
carefully the nature of human-robot relationships, and the
impact these relationships may have on the future of
human society. One way to do this is to consider the work
that has been done in the community of human-computer
interaction (HCI), where the directions of technology
development and its impact on humans have been studied.
2.1 Interaction modalities
Communication between a human being and a robot must be “human-friendly” and involve all human senses and communication channels, like speech, vision, gesture and mime understanding, and the sensing of forces through touch. In general, we can distinguish five main categories of interaction modalities:
• speech;
• gestures;
• facial expressions;
• gaze;
• proxemic and kinesic signals.
From the results of a survey [Khan, 1998], it turned out that the majority of people, when asked about their opinions on human-robot interaction, clearly prefer the use of speech in combination with other interaction modalities. Usability research into Human-Robot Interaction will therefore need to investigate whether user expectations can be fulfilled when applying the technologies available today, and whether methodologies developed for Natural Language Interfaces (NLI), multi-modality or agent technology are sufficient to cover interaction with robots.
To evaluate the communicative characteristics of the
next generation of robots, it is useful, especially from a
“human-centric” point of view, to analyse the
communicative experiences of humans and use this
knowledge as a point of reference for the development of
human-robot interfaces.
The range of communication and interaction systems that users are experienced with and use skilfully includes face-to-face, mediated human-to-human and man-machine communication and interfaces. In face-to-face communication people use (spoken) language, gestures, and gaze to convey an exchange of meanings, attitudes and opinions. As typical properties, human communication is rich in phenomena like ellipses, indirect speech acts, and situated object or action references [Donnellan, 1966; Milde et al., 1997]. Another implicit characteristic of human-to-human communication, ambiguity, is considered one of the greatest challenges for Natural Language Processing. The ambiguities incorporated in a human-to-human conversation need to be carefully thought through and designed for in Human-Robot Interaction (see e.g., Grice, 1975). For this reason an extensive knowledge of the problem is needed for the development of human-robot interfaces.
Various studies have been carried out concerning NLI [Ogden et al., 1997], and a number of telephony systems based on NLI have been developed [Bernsen et al., 1997] through which a man and a machine can communicate; however, the physical embodiment of a robot might require new dialog strategies, different from both telephony-based and workstation-based NLI systems.
To comprehend the difficulties concerning human-robot
communication let us consider the following situation: a
mobile robot and a user are physically in the same room
and the robot is told to "go left". The correct execution
might mean two different directions depending upon the location of the robot with respect to the user. In other words, the robot must detect the ambiguity of the term “left”, whose meaning is pragmatically influenced by the relative position of the two interlocutors. This can be solved autonomously or by instantiating an appropriate dialogue with the user, even through a “multi-modal interface” which, in this type of situation, can substantially help in accomplishing the task.
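For illustration, the frame-of-reference ambiguity can be made concrete: given the absolute headings of user and robot, the robot must re-express the user's "left" in its own frame before acting. The function below is a hypothetical sketch, not part of any system discussed here.

```python
import math

def resolve_left(user_heading, robot_heading):
    """Return the signed rotation (radians) the robot should perform so
    that it faces the user's 'left'. Headings are absolute angles in a
    shared world frame; names and conventions are illustrative only."""
    left_in_world = user_heading + math.pi / 2               # user's left, world frame
    turn = left_in_world - robot_heading                     # re-express in robot frame
    return (turn + math.pi) % (2 * math.pi) - math.pi        # normalize to (-pi, pi]
```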
Multi-modal interfaces are supposed to be beneficial
due to their potentially high redundancy, higher
perceptibility, increased accuracy, and possible synergy
effects of the different individual communication modes.
In today's systems the prominent interaction mode avoids this kind of problem by restricting the physical act to the direct manipulation of input devices, for example dials, knobs and buttons. The graphical act is often dependent on, and bounded by, looking at a display, which resides on the system and is an integral part of it for interaction and visual feedback.
What is desired is to move the location of interaction from the screen's surface to the real space of a room [Bolt, 1980] that a user and a robot share. The challenges that researchers are facing in the development of human-robot interfaces which adequately combine different interaction and communication modalities are multiple.
The first step to take is to determine some “guiding
principles” for the design of interactive systems to avoid
or minimise the complexity of input devices (data-gloves,
head-mounted microphones, eye-tracking systems, etc.),
which have been used in multi-modal interaction research
so far.
Analogously, guidelines concerning safety, command authority and subordination are necessary: “The Three Laws of Robotics” by Isaac Asimov [Asimov, 1995] could be the starting ground.
2.2 Speech
The capacity to interact with a robot using the voice is considered primary: giving instructions or receiving answers through speech is one of the fundamental objectives for the development of human-robot interfaces. Until a few years ago, speech-based interfaces remained confined within the walls of research laboratories. As soon as robots made their first steps into the real world, speech-based interfaces became more and more desirable.
As robots get more complex and capable of dealing with more difficult problems, natural language seems to be a very attractive alternative to the selection of a command through a keyboard or the visualization of a menu on a screen. However, speech is not considered the ideal means of communication for every situation: in many cases “old-style” interaction devices are preferable, as in tele-operation (using joysticks), when it is necessary to indicate a location on a map (clicking with a mouse on a screen), and in all those situations where common devices are involved, like lawn-mowers, vacuum-cleaners, etc., for which, at least until today, the use of buttons and small screens is preferred.
We can distinguish two categories of typical situations,
not necessarily disjoint, in which a speech interface can be
successfully used:
• the user's hands or eyes are occupied;
• the use of standard input devices is not recommended or is undesirable.
Situations involving interaction with mobile service robots, especially in domestic environments, fall into the second category: the robot is free to move around, a situation in which it is not suitable to give commands and feedback through standard devices. When the robot is used as a support for persons with particular needs, we have situations of the first category.
2.2.1 Natural Language Processing. The first step to take in the design of a speech interface concerns Natural Language Processing. To establish a bidirectional communication, it is necessary to use Natural Language Understanding and Generation techniques. Subsequently, the system must be supplied with the capacity to understand speech commands through Speech Recognition techniques, translating the utterances into the corresponding internal textual representations. Analogously, Speech Generation techniques are needed to render as speech the sentences the robot must address to the user.
Finally, once the robotic system is able to understand and generate natural language, other important issues must be dealt with. Apart from all the difficulties tied to the development of a natural language understanding component, the real challenge, for many researchers, is the ability to keep track of the current context in which the robot is used.
In particular, a dialogue between a human and a robot can go on for a long time, requiring the system to have a precise knowledge of time and space to behave in the correct way. Besides, we require a service robot to be able to interact through a speech interface with multiple users (especially if it is used in a public environment), each possibly using a different communication modality.
Another important issue, which can influence the design of a speech interface, concerns the feedback expected from the robot. Normally, a service robot is equipped with a small display or does not have one at all (for reasons tied to cost and battery consumption): it is necessary to overcome this limitation by using “conversational feedback”, with no need for particular physical actions by the robot.
KANTRA is a speech interface, developed by the
University of Karlsruhe and the University of Saarland,
which is applied to a mobile robot called KAMRO
[Leangle et al., 1995]. The chosen approach is dialogue-based and deals with the matter of HRI by presenting four main situations:
• task specification;
• execution monitoring;
• explanation of error recovery;
• updating and describing the environment representation.
At the University of Edinburgh, the mobile robot Godot
[Theobalt et al., 2002] has been used as a testbed for an
interface between a sophisticated low-level robot
navigation and a symbolic high-level spoken dialogue
system.
In the “Instruction Based Learning” project of the
Robotic Intelligence Laboratory at the University of
Plymouth, a mobile robot is designed to receive spoken
instructions on how to travel from one place to another in
a miniature town [Bugmann, 2003].
The system Kairai is the result of a joint research
project between the New York University and the Tokyo
Institute of Technology. The system incorporates many 3D software robots with which it is possible to talk. It
accepts spoken commands, interprets them, and executes
the relative task in a virtual space [Tanaka et al., 2002].
2.3 Gestures

The recognition of human gestures is a growing area of research, especially in the field of Human-Robot Interaction and of Human-Computer Interaction with multi-modal interfaces.

Several studies have addressed the role of gestures in Human-Robot Interaction [Breazeal, 2001; Kanda et al., 2002]. Many researchers are interested in the collaborative aspect and in the dialogue between a human and a robot [Fong et al., 2001], and some experiments have been carried out using bidimensional agents capable of reproducing gestures during a conversation [Cassell et al., 2000; Johnson et al., 2000], but without incorporating a recognition phase in the system. On the other hand, at MERL (Mitsubishi Electric Research Laboratories) (Figure 1), principles of human-robot interaction are investigated in order to create a system in which the production and recognition of gestures are integrated in the phases of conversation and collaboration [Sidner et al., 2003].

Figure 1: Penguin robot at MERL that recognizes and produces gestures.

2.4 Facial Expressions

A human face can be considered a sort of “window” looking onto the mechanisms that rule emotions and social life [Gutta et al., 1996]. It is not difficult for a human to recognize a face, even in the presence of considerable changes due to different visibility conditions, expressions, age, hairstyle, etc. A machine capable of recognizing a face can be used for many applications, such as criminal identification, finding lost children, credit card validation, video-document retrieval, etc.

The recognition and production of facial expressions permit a robot to widen its communicative capacities. Among the experiments done in this field, K-Bot, developed by the University of Texas (Figure 2), can reproduce 28 facial expressions, such as joy, anger or scorn, in answer to the attitude shown by the face of the human interlocutor, which the robot can see with its “eyes” (two tiny cameras) and interpret. In front of a joyful and satisfied face, for example, K-Bot's face will light up with a smile of happiness: the visual information taken by the cameras is transformed into commands for the 24 mechanical muscles responsible for the control of the eyes, the inclination of the head and the movement of the covering artificial skin.

Figure 2: K-Bot.

2.5 Gaze

Gaze direction plays an important role in the identification of the “focus” of attention of a person: this information can be exploited as a useful communicative cue in the design of a human-robot interface. By detecting the gaze direction, i.e., the focus of attention of a person, a robotic system will be able to understand whether the person is addressing it or another human.

To identify the gaze direction of a person it is necessary to determine the orientation of the head and the orientation of the eyes. While the orientation of the head shows the approximate direction of the gaze, through the orientation of the eyes it is possible to determine precisely the direction a person is looking at.

Many of the tracking methods in use are based on intrusive techniques, such as directing a beam of light into the eye and then measuring the light being reflected [Haro et al., 2000; Morimoto et al., 2000], measuring the electric potential of the skin around the eyes (electrooculography) [Lusted et al., 1996], or applying special types of contact lenses that facilitate eye tracking. More recently, non-intrusive tracking techniques have been developed, which are capable of detecting and tracking the direction of the user's eyes in real time as soon as the face appears in the field of view of the camera, with no need for additional lighting or particular marks on the user's face [Stiefelhagen, 2001].

2.6 Proxemic and Kinesic Signals

More sophisticated communication modalities (usually classified as “nonverbal”) come from “proxemics” and “kinesics”. Proxemics concerns the distance between interlocutors, whose variation can provide an important cue about the availability or reticence to speak. Experiments on proxemics have been carried out at MIT with the Kismet robot [Breazeal et al., 2000] (Figure 3). Kinesics, on the other hand, concerns the gestures, more or less unconscious, that are produced during a conversation and which can be an additional source of information: folding the arms, bowing, nodding, moving the weight of the body from one leg to another, etc. [Ogden et al., 2000].

Figure 3: Kismet.
3 Applications of HRI

In this chapter a brief description is given of some projects of “social” autonomous mobile robots in which the interface plays a fundamental role.

Figure 4: Pearl.
In the context of the multi-disciplinary project
“NurseBot”, founded in 1998 by a team of researchers
from three universities of the United States, the robot
Pearl has been realized (Figure 4) [Pollack et al., 2002].
The system, designed as a mobile robotic assistant for
elderly people, has two primary functions: reminding
people about routine activities (such as eating, drinking,
taking medicine, and using the bathroom) and guiding them
through their environment. A prototype version of Pearl
has been built and tested at the Longwood Retirement
Community in Oakmont, Pennsylvania.
The so-called “human-oriented mobile robots” (HOMR)
constitute an advanced class of robots capable of adapting,
learning and working in symbiosis with humans. An
example of HOMR has been developed in the context of
the RobChair project: a motorized wheelchair equipped
with a multi-modal interface (joystick + speech
commands), several sensors and a cognitive component
capable of path and task planning [Nunes et al., 2000].
At Carnegie Mellon University, research is mainly directed at three aspects of service robots in society: the design of robots, human-robot interaction, and how service robots function as members of a work team. The initial domain for this work is elder communities and hospitals, where service robots can do useful but taxing tasks. The research aims at the design of appropriate appearance (especially of the head, as shown in Figure 5) and interactions of service robots in these contexts.

Figure 5.
Researchers at Stanford University, at the “Center
for Work Technology and Organization” (WTO), are
conducting some studies with an autonomous mobile
robot, called HELPMATE, in hospital environments
[Helpmate]. Designed by Pyxis Corporation, HELPMATE
is a battery-operated robotic courier that responds to programmed requests by transporting materials between different employees and locations in a hospital. The nature of this field study is ethnographic: researchers are
collecting qualitative data by observing the interactions
among employees and between employees and the robot,
and by interviewing employees about their experiences.
The first study is focused on changes in routines and work
practices before and after the introduction of the robotic
assistant.
At the “Interaction and Presentation Laboratory”
(IPLab) in Stockholm, a project called CERO is in
progress [Hüttenrauch et al., 2002; Severinson-Eklundh et al., 2003]. The researchers are interested in how people can use a robot in everyday life. The mobile service robot realized within the project (Figure 6) is mainly designed to assist physically impaired users by carrying small objects.

Figure 6: Cero.
4 Conclusions
We might wonder, at this point, whether a “best” human-robot interface exists and what kind of characteristics it should have, but we think that every situation requires specific features and different strategies of interaction. Among the different interaction modalities we have introduced, proxemics and kinesics are at an initial stage and require deeper investigation. In any case, speech seems to be the most adequate interaction modality in the majority of situations: besides, the development of NLI can benefit from a long tradition of research in the context of HCI, NLP and Speech Technologies, and some robust commercial products are already available. Moreover, the combination of speech with other modalities, such as gestures and gaze, seems to be crucial for the development of social robots capable of establishing a dialogue and, for this reason, multi-modal interfaces deserve more interest.
References
[Asimov, 1995] I. Asimov. The Complete robot - The
Definitive Collection of Robot Stories. London: Harper
Collins, 1995.
[Bernsen et al., 1997] N. O. Bernsen, H. Dybkjær and L. Dybkjær. What Should Your Speech System Say, IEEE Computer, 30 (12), (1997).
[Bolt, 1980] R. A. Bolt. “Put-That-There”: Voice and Gesture at the Graphics Interface, Computer Graphics, 14 (3), (1980), pp. 262-270.
[Breazeal et al., 2000] C. Breazeal and B. Scassellati.
Infant-like social interactions between a robot and a human
caretaker, Adaptive Behavior, 8(1), (2000), pp. 49–74.
[Breazeal, 2001] C. Breazeal. Affective interaction between humans and robots, in Proceedings of the 2001 European Conference on Artificial Life (ECAL2001), Prague, 2001.
[Bugmann, 2003] G. Bugmann. Instructing Robots, AISB
Quarterly, vol. 113, (2003), pp. 6-7.
[Cassell et al., 2000] J. Cassell, J. Sullivan, S. Prevost and
E. Churchill. Embodied Conversational Agents, MIT
Press, Cambridge, MA, 2000.
[Cesta et al., 2003] A. Cesta and F. Pecora. The RoboCare
Project: Multi-Agent Systems for the Care of the Elderly,
ERCIM News No. 53, (2003).
[Donnellan, 1966] K. Donnellan. Reference and Definite
Descriptions, Philosophical Review, LXXV, (1966), pp.
281-304.
[Fong et al., 2001] T. Fong, C. Thorpe and C. Baur.
Collaboration, Dialogue and Human-Robot Interaction,
10th International Symposium of Robotics Research,
Lorne, Victoria, Australia, 2001.
[Grice, 1975] H. P. Grice. Logic and Conversation, in P. Cole & J. L. Morgan (eds.), Syntax and Semantics — III: Speech Acts, New York: Seminar Press, 1975.
[Gutta et al., 1996] S. Gutta, J. Huang, I. Imam and H.
Wechsler. Face and Hand Gesture Recognition Using
Hybrid Classifiers, Department of Computer Science,
George Mason University, Fairfax, 1996.
[Haro et al., 2000] A. Haro, M. Flickner and I. Essa.
Detecting and Tracking Eyes By Using Their
Physiological Properties, Dynamics, and Appearance,
IEEE CVPR 2000, 2000, pp. 163-168.
[Morimoto et al., 2000] C. Morimoto, D. Koons, A. Amir and M. Flickner. Pupil Detection and Tracking Using Multiple Light Sources, Image and Vision Computing, Special Issue on Advances in Facial Image Analysis and Recognition Technology, Vol. 18, No. 4, (2000), pp. 331-335.
[Nunes et al., 2000] U. Nunes, R. Cortesao, J. L. Cruz and P. Coelho. Shared-Control Architecture: concepts and experiments, in W. Horn (ed.), “Service Robotics – Applications and Safety Issues in an Emerging Market”, Proceedings of the ECAI2000 European Conference on Artificial Intelligence, Amsterdam: IOS Press, 2000.
[Ogden et al., 1997] W. C. Ogden and P. Bernick. Using Natural Language Interfaces, in M. Helander, T. K. Landauer and P. Prabhu (eds.), Handbook of Human-Computer Interaction, Amsterdam: Elsevier Science Publishers B.V., 1997.
[Helpmate] www.pyxis.com/products/newhelpmate.asp
[Hüttenrauch et al., 2002] H. Hüttenrauch and K.
Severinson-Eklundh. Fetch-and-carry with CERO:
Observations from a long-term user study with a service
robot, 2002.
[Johnson et al., 2000] W. L. Johnson, J. W. Rickel and J. C. Lester. Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments, International Journal of Artificial Intelligence in Education, 11, (2000), pp. 47-78.
[Kanda et al., 2002] T. Kanda, H. Ishiguro, M. Imai, T. Ono and K. Mase. A constructive approach for developing interactive humanoid robots, in Proceedings of IROS 2002, IEEE Press, NY, 2002.
[Khan, 1998] Z. Khan. Attitudes towards Intelligent
Service Robots, IpLab, Nada, Royal Institute of
Technology, 1998.
[Leangle et al., 1995] T. Leangle, T. C. Lueth, E. Stopp,
G. Herzog and G. Kamstrup. KANTRA - A Natural
Language Interface for Intelligent Robots, Intelligent
Autonomous Systems (IAS 4), (1995) pp. 357-364.
[Lorek, 2001] L. Lorek. March of the A.I. Robots,
Interactive Week, 30 April, (2001).
[Lusted et al., 1996] H. S. Lusted and R. B. Knapp.
Controlling Computers with Neural Signals, Scientific
American, October 1996.
[Milde et al., 1997] J. T. Milde, K. Peters and S.
Strippgen. Situated communication with Robots, in
Proceedings of the First International Workshop on
Human-Computer Conversation, Bellagio, Italy, 1997.
[Ogden et al., 2000] B. Ogden and K. Dautenhahn. Robotic Etiquette: Structured Interaction in Humans and Robots, in Proceedings of SIRS2000, Symposium on Intelligent Robotic Systems, Reading, UK, 2000, pp. 353-361.
[Pollack et al., 2002] M. Pollack et al. Pearl: Mobile Robotic Assistant for the Elderly, AAAI Workshop on Automation as Eldercare, 2002.
[Severinson-Eklundh et al., 2003] K. Severinson-Eklundh, A. Green and H. Hüttenrauch. Social and collaborative aspects of interaction with a service robot, 2003.
[Sidner et al., 2003] Candace L. Sidner, Christopher Lee
and Neal Lesh. The Role of Dialogue in Human Robot
Interaction, Mitsubishi Electric Research Laboratories,
2003.
[Stiefelhagen, 2001] R. Stiefelhagen, J. Yang and A.
Waibel. Tracking focus of attention for human–robot
communication, in Proceedings of the International
Conference on Humanoid Robots, 2001.
[Tanaka et al., 2002] H. Tanaka, T. Tokunaga and Y.
Shinyama. Animated Agents that Understand Natural
Language and Perform Actions, in Proceedings of Lifelike
Animated Agents (LAA), Tokyo, 2002.
[Theobalt et al., 2002] C. Theobalt, J. Bos, T. Chapman,
A. Espinosa-Romero, M. Fraser, G. Hayes, E. Klein, T.
Oka and R. Reeve. Talking to Godot: Dialogue with a
Mobile Robot, in Proceedings of IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS
2002), 2002, pp. 1338-1343.
Toward a Mobile Manipulator Service Robot for Human Assistance
S. Caselli, E. Fantini, F. Monica, P. Occhi, M. Reggiani
RIMLab – Robotics and Intelligent Machines Laboratory
Dipartimento di Ingegneria dell’Informazione
University of Parma, Italy
E-mail: {caselli, efantini, fmonica, occhi, reggiani}@ce.unipr.it
Abstract
This paper describes the ongoing development of
a manipulator-equipped service robot for assistance
tasks. Various requirements characterizing service
robots are discussed, including mechanical configuration, safety, cost, user interface, suitability for integration in household environments and interaction with
other devices. Next, design decisions related to a prototype under development are illustrated and a new
control architecture is discussed. Results related to the
control architecture for the mobile base are reported
and compared with those of the original architecture
supplied with the robot.
1 Introduction
The development of service robots for health care and
assistance to elderly, disabled, or impaired people is
an active area of research and development. Several
projects have been undertaken (Morpha ; Hermes ;
Nursebot ; Harashima 2002; Zenn Bien et al. 2002;
Petersson, Egerstedt, & Christensen 1999) and are leading to credible prototypes and implementations as well
as to commercial products (Hans, Graf, & Schraft 2002;
Exact Dynamics ; J. Hegarty 1991; Guido ).
In spite of the projects underway and their achievements, many issues remain to be addressed. The very
concept of a service robot for health care and assistance
largely depends upon the functional limitations of the
anticipated users, ranging from severe disabilities and
handicaps, to partial disability (e.g., limited to lower
limbs), to lack of strength and fatigue, to limited, temporary, or no disability. Rehabilitation robotic devices,
such as vocational workstations (Mahoney 1997) or the
Manus arm (Kwee H. et al. 1989), address the needs
of severely impaired persons, whereas, in general, novel
service robot designs could target users anywhere in the
spectrum, including aging people or users with limited
disability.
In this paper, we outline a project undertaken at the
Robotics and Intelligent Machines Laboratory of the
University of Parma as part of the RoboCare program
Copyright © 2003, The RoboCare Project — Funded by MIUR L. 449/97. All rights reserved.
aiming at the development of a manipulator-equipped
service robot for assistance tasks. For example, the service robot might be required to identify, pick up, and
return to the user an object located on a high shelf or lying on the floor around a corner not accessible to
navigation. These two tasks have been rated as highly
desirable by potential users in rehabilitation robotics
surveys (Stanger et al. 1994). They require both adequate sensorimotor capabilities and acquisition and fusion of a variety of exteroceptive information. Range or
vision sensors are required for autonomous and safe navigation of the mobile base toward target areas. Proximity sensors (e.g., infrared or tactile) mounted on the
tip of the manipulator could enable these tasks, as long
as the sparse information they generate is conveyed to
the user in a suitable representation (Aleotti, Caselli,
& Reggiani 2003).
The remainder of this paper is organized as follows.
Section 2 outlines the service robot concept shaping the
project and the issues involved. Section 3 describes mechanical and hardware features of the prototype robot
under design. It also illustrates a 3D simulation of the
mobile manipulator which has been developed to explore various design tradeoffs. Section 4 describes the
software control architecture and reports some preliminary experimental results. Finally, section 5 briefly
summarizes the paper.
2 Service robot concept
The project underway at RIMLab pursues the development of a prototype of a mobile manipulator for assistance and caring tasks. Potential users targeted by the project are people with various degrees of motor impairment, especially of the lower limbs, related to age or health, ranging from minimal to serious. These motor disabilities may also be permanent or temporary, although we
target users with full cognitive and perceptual skills.
A mobile base equipped with a manipulator has the
potential of fulfilling a large number of useful tasks, as
it can relocate itself and perform missions both nearby
and far from the assisted person. Examples of tasks
useful to bedridden persons, as well as persons impaired
in the lower limbs, are searching and fetching objects
of interest, disposing items in the kitchen or the trash
can after a meal, inserting a DVD in a player. Rehabilitation robotics user surveys (Stanger et al. 1994;
Mahoney 1997) emphasize that any function that can be performed by an assistive device has the primary benefit of reducing the need for a human attendant, thereby increasing the impaired person's sense of autonomy.
In recent years, a number of research and development projects have targeted similar needs, albeit with variable success. Examples of successful prototypes and experiments involving mobile robots as user assistants or extensions, similar to our approach, are described in (Fiorini, Ali, & Seraji 1997; Mahoney 1997;
Huttenrauch & Eklund 2002).
Architectural and
control issues arising in mobile manipulator design
are discussed in (Petersson, Egerstedt, & Christensen
1999). Selected requirements pertinent to our project
are listed in the following.
Mechanical configuration.
We target service
robots consisting of a mobile base equipped with a
manipulator with adequate payload and dexterity
characteristics. For example, the mobile base should
be able to transit doors and navigate human-sized
environments. The end tip of the manipulator should
reach a reasonable fraction of the workspace reachable
by a standing person and possibly be able to pick up
objects from the floor.
Safety. Safety is a major concern in a service robot,
with implications at several levels. The mobile base
should be able to navigate reliably according to
understandable trajectories and to stop immediately
as needed. Ensuring safety in manipulator operations
is difficult, since manipulator and human being will
share a common workspace in a number of situations.
The on-board manipulator, hence, should be a safe,
low-impedance device. The development of low-impedance devices is essential for service robotics and
an active area of research. Ongoing investigations have
the potential of overcoming the safety limitations of
current manipulators, e.g., (Zinn et al. 2002). During
the design phase, safety concerns are also reflected in the
study of mechanical configurations less susceptible to
unwanted interference with humans. At the control
level, safety calls for reliable real-time operation of the
underlying control system.
Cost. Users of rehabilitation robotic systems are
reluctant to pay the high costs of specialized devices (Stanger et al. 1994). A component-based
economy is required to lower costs, reduce development
times and build up the market. Even though the
required components are not yet available, service
robot development should leverage as much as possible
on existing technological standards and open systems,
in order to be ready to incorporate useful components
and interoperate with other subsystems.
Integration. Several accepted service and rehabilitation robotics systems take advantage of some special
arrangement of the environment (Stanger et al. 1994;
Mahoney 1997; Zenn Bien et al. 2002). The increasing
trend toward technology-enabled buildings and smart
appliances will simplify the integration of service
robots in human-inhabited environments. This is
shown by the widespread wireless ethernet standard
(IEEE 802.11b/g) enabling the user to monitor the
service task while in progress, even if the robot is not
in sight. Monitoring the robot, in turn, will enable
interaction during task execution, thereby leading to
more complex and rewarding tasks.
Shared control. For the class of tasks envisioned,
a service robot should have both the ability to perform requested routine tasks autonomously, and
the ability to interact with the user by means of an
advanced user interface during task execution, possibly
prompting requests for suggestion in ambiguous states
or enabling teleoperation modes.
User interface. Users are becoming, on the average,
more acquainted with technology, and many of them
will be at ease with computer, cell phone, or PDA-based interaction. In such a scenario, the user interface
has been identified as a central issue for service robots
devoted to human assistance (Stanger et al. 1994;
Huttenrauch & Eklund 2002). Several aspects (visual,
gesture-based, and voice interaction) can be integrated
in a multimodal user interface.
Figure 1: A sample task for the service robot.
3 Service robot platform
The service robot prototype under development at the
University of Parma is inspired by the requirements
listed in the previous section, even though present technological and resource limitations motivate tradeoffs
and departures from the proposed design principles.
Figure 1 shows a sample task for the service robot:
retrieving items from a shelf and returning them to the
user, either with autonomous execution or under user
supervision. For tasks in this class, the robot should
be able to reach and grasp objects otherwise difficult to
access for the user, possess autonomous navigation and
manipulation capabilities, operate with high reliability
and safety, and act as part of a multiagent architecture.
The service robot platform consists of two main parts: the mobile base and the onboard manipulator. As a mobile base, we have chosen to develop the prototype using a Nomad 200 robot (by Nomadics, Inc.), which was already available in our laboratory. The Nomad is shown in Figure 2. The manipulator to be mounted on the mobile base is a Manus arm (by Exact Dynamics, BV), shown in Figure 3.

Figure 2: The Nomad 200 mobile robot.

Figure 3: The Manus arm.

3.1 Mobile base

The Nomad 200 is a robust, three-wheel synchronous drive, non-holonomic base. The three wheels are controlled by two motors, one driving their rotation (yielding translation of the robot) and the other controlling their steering. A third motor controls the rotation of the turret on top of the base, which houses sensors and onboard computers. Twenty tactile sensors are arranged in two offset rings around the base. Around the turret, two rings of sonar and active infrared sensors (16 sensors each) provide distance and proximity information. With an 80 cm height and a 50 cm diameter, the Nomad 200 can accept a payload of about 20 kg, which allows mounting a manipulator on top of it.

The main drawbacks of the mobile robot are related to its seasoned design. Hence, in the context of this project, we have replaced processor, memory, storage, camera, framegrabber, and radio-LAN access with current technology, as shown in Figure 4. The new motherboard (Soyo SY-7VEM) fits the size of the turret and provides an ISA slot to connect hard-to-replace legacy equipment, namely the Galil DMC-630 motor board and the Intellisys 100 sensor board. A CANbus board provides the hardware interface to the manipulator. Field sensing for the mobile base is provided by a Videre Design Megapixel stereo head with a FireWire interface. Finally, speech synthesis is now performed directly by the main processor, exploiting the integrated audio codec and the Festival Speech Synthesis System developed by the University of Edinburgh.

3.2 Manipulator
The Manus arm is a lightweight 6 d.o.f. manipulator
(about 15 kg, with all the motors located in the base
link) designed to be mounted as a wheelchair add-on
for disabled persons. The maximum payload is 2 kg.
The standard user interface provided with the arm is
a keypad or a joystick, but an optional CANbus interface is available for external control. Due to its target
usage, the Manus arm features low weight and power
consumption (2A/24V) and good reach (80 cm horizontal reach in extended position). Positional accuracy
is limited, since the manipulator is designed to be servoed by the user in task space. Maximum speed of the
gripper is 0.5 m/s.
The Manus arm has been designed to operate in human environments and with disabled persons. Therefore it incorporates several important safety features.
The maximum motor power, speed of arm motion, and
gripping force are all limited to prevent serious damage. Moreover, each joint incorporates an additional
safety slipring in its construction which allows users
to manually move the joint. (This feature also protects the arm itself in case of crashes against obstacles.)
Another important safety feature is the very compact
folded position of the arm, which minimizes the chances
of unwanted collisions while the wheelchair or the mobile base is moving. Finally, at power up, no motion is
enabled until an explicit command is given to the robot.
Pros of the Manus arm for a service robot platform are its low power consumption, available external control interface, high payload-to-weight ratio, and safety features. On the other hand, once the arm support unit, the dedicated computer, and the arm payload are taken into account, the overall Manus weight almost saturates the Nomad 200 payload. Moreover, due to the Nomad 200 height and the need to ensure the balance of the whole platform, it is unlikely that an arm mount can be found that enables the arm tip to reach the floor.
Figure 4: Nomad 200 modified hardware configuration. (The diagram shows a Pentium III 1 GHz motherboard with 256 MB RAM and a 40 GB HD; the ISA bus hosts the Intellisys 100 sensor board (sonar, compass, bumper, infrared) and the DMC-630 motor/encoder board; the system also connects the Videre Design Megapixel stereo head for vision, the Festival speech synthesis output to a TTS speaker, an 802.11b USB wireless network adapter, and the CANbus adapter for the Manus arm.)
3.3 Mobile base – arm integration
We have implemented kinematically and geometrically correct models of the arm and the base, exploiting a Java3D modeling and visualization tool developed in our laboratory. Using this tool, existing models can also be combined into recursive structures. We have then investigated several combinations of the Nomad 200 and Manus models to assess the resulting workspace and reachability characteristics. Some of these combinations are discussed next.
Figures 1 and 5 show the reachable workspace with a central vertical mount of the Manus on top of the Nomad. This configuration maintains the arm in the most protected position during base motions, and hence is the best for safety. It also satisfies the payload requirements of the Nomad. Unfortunately, reaching the floor with the tip of the arm is not possible in this configuration. Figure 6 shows a lateral mount which would allow reaching the floor with the tip of the arm. Unfortunately, this configuration is not acceptable due to static and dynamic balance requirements.

Figure 5: Mobile base-arm integration and reachable workspace with a vertical mount of the arm on top of the base.

Figure 6: Mobile base-arm integration and reachable workspace with a lateral mount of the arm to the base.

Alternative configurations, including horizontal mounts of the arm, have been investigated in the simulation environment, but none of them has been found acceptable. Due to the above considerations, the vertical mount has been adopted.
4 Robot control architecture
The effective control of the service robot platform is
largely dependent on the software architecture, combining reactive and deliberative behaviors, to autonomously deal with a dynamic environment while
communicating with other entities (robots, sensors, ...)
operating in the robot’s workspace.
Several hybrid architectures, such as SmartSoft (Schlegel & Wörz 1999), BERRA (Mattias,
Orebäck, & Christensen 2000), ETHNOS (Piaggio,
Sgorbissa, & Tricerri 1999), CLARAty (Volpe et al.
2001), have recently been proposed in the literature.
They all pursue a set of properties required in a software
architecture for robot control: availability of high-level
APIs to support communications among components,
use of advanced real-time schedulers (such as RMA) for
task management, Object Oriented approach to ensure
reuse and support evolution of the architecture components. Despite the advantages provided by these architectures, none of them seems to fully achieve the stated
objectives, thus motivating the development of a new
architecture.
The principal objective of the architecture is to simplify the development of elementary modules and their
composition, required to exhibit complex behaviors.
The concrete implementation of the architecture takes
advantage of a framework, providing a set of tools for
creation and scheduling of the elementary modules and
their local and remote communication (Figure 7). In
the following, the main functionalities of the framework
are described in detail.
Modules. The framework is composed of modules, i.e., active and autonomous components that cooperate by exchanging information and commands. Each module provides a set of activities that either implement a behavior or allow interaction with robot sensors and motors. A symbolic name associated with each module (e.g., Sonar Ring, Motors, WallFollowing, Collision Avoidance, ...) allows higher-level modules of the architecture to turn activities on and off while the application is running.

Figure 7: The framework: modules organized in reactive, middle and deliberative layers, forming the concrete architecture on top of the communication and scheduling services.
Scheduling. The framework exploits a real-time
scheduler, currently implementing an Earliest Deadline First algorithm, for the execution of the application activities. Each activity can be defined as either
aperiodic or periodic, and properties related to activity period and deadline can also be specified. The use
of an advanced dynamic scheduler within the framework decreases the complexity related to the assignment of task priorities.
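The policy can be illustrated with a minimal sketch; this is an illustration of Earliest Deadline First scheduling only, not the framework's actual API.

```python
import heapq
import itertools
import time

class EDFScheduler:
    """Earliest-Deadline-First sketch: among queued activities, the one
    with the closest absolute deadline runs first; periodic activities
    are re-queued with their next release. Relative deadline is taken
    equal to the period for simplicity."""

    def __init__(self):
        self._queue = []               # (deadline, tie, release, period, fn)
        self._tie = itertools.count()  # tie-breaker so functions never compare

    def add_periodic(self, fn, period):
        now = time.monotonic()
        heapq.heappush(self._queue, (now + period, next(self._tie),
                                     now, period, fn))

    def run_once(self):
        deadline, _, release, period, fn = heapq.heappop(self._queue)
        fn()                           # assumed to complete before its deadline
        nxt = release + period         # next periodic release
        heapq.heappush(self._queue, (nxt + period, next(self._tie),
                                     nxt, period, fn))
```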
Communications. The framework provides a set of high-level primitives fully supporting information exchange among activities, both in local and remote environments. The implementation of the Communication Patterns (Schlegel 2002), included in the framework, realizes three types of communication:
• Information implements a one-to-many communication. Modules requiring the data produced by a sender module register themselves as receiver modules. Each time new data is available, the framework takes care of distributing it to all receivers.
• Command implements a many-to-one communication for the distribution of commands between requester and performer modules. An arbitration mechanism is also included in the framework to solve conflicts when multiple commands are received by the same performer.
• Query implements a communication mechanism similar to a Remote Procedure Call. An asker module requests a replier module, executing an aperiodic activity, to produce the reply. The query can be specified to be either a blocking or a non-blocking call.
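A toy sketch of the three patterns follows; the method names and the reduced arbitration logic (last binding wins, commands forwarded directly) are our own simplifications, not the framework's interface.

```python
from collections import defaultdict

class CommunicationPatterns:
    """Illustrative reduction of the Information/Command/Query patterns."""

    def __init__(self):
        self._receivers = defaultdict(list)  # Information: one-to-many
        self._performers = {}                # Command: many-to-one
        self._repliers = {}                  # Query: RPC-like exchange

    def subscribe(self, topic, callback):    # Information pattern
        self._receivers[topic].append(callback)

    def publish(self, topic, data):          # distribute to all receivers
        for callback in self._receivers[topic]:
            callback(data)

    def bind_performer(self, name, performer):  # Command pattern
        self._performers[name] = performer

    def command(self, name, cmd):            # arbitration omitted here
        self._performers[name](cmd)

    def bind_replier(self, name, replier):   # Query pattern
        self._repliers[name] = replier

    def query(self, name, request):          # blocking variant only
        return self._repliers[name](request)
```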
4.1 Early experimental evaluation
Some experiments were designed to validate the proposed framework. The first application tested the reactivity of the architecture compared to the original robotd architecture provided by Nomadics. Two versions of a Collision Avoidance behavior were implemented based on the two architectures (both on the new hardware described in Section 3). The return value of the front sonar sensor was continuously monitored and the robot's motors were turned off as soon as the behavior became aware that an obstacle was closer than 35 cm. The distances between the robot and the obstacle were measured in a set of ten experiments with the robot moving at a speed of 25 cm/s towards a wall. Table 1 shows the average and the standard deviation from the mean for the two Collision Avoidance implementations. The new framework, reducing the latency between the actual reading of the sensor data and its evaluation, increases the reactivity to the environment, resulting in an increase in the halt distance between the robot and the wall.
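The tested behavior reduces to a simple polling loop; the two callbacks below are hypothetical placeholders for the Nomad 200 sensor and motor interfaces.

```python
def collision_avoidance(read_front_sonar_cm, stop_motors, threshold_cm=35.0):
    """Continuously monitor the front sonar and switch the motors off
    as soon as an obstacle is closer than 35 cm."""
    while read_front_sonar_cm() >= threshold_cm:
        pass                      # keep polling as fast as possible
    stop_motors()
```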
robotd [cm]: η = 19.92, σ = 5.89
framework [cm]: η = 27.17, σ = 4.26

Table 1: Average and standard deviation from the mean for a set of ten experiments of the reactivity test.
A second application investigated the benefits achieved with a more precise evaluation of the robot position during sensor readings, as available in the proposed framework compared to the robotd architecture. Figure 8 represents the wall profile reconstructed from the lateral sensor readings during a linear navigation at a speed of 50 cm/s. While both lines are jagged due to sensor noise, the reconstruction obtained with the application based on the proposed framework is somewhat closer to the wall shape.
Figure 8: Wall profiles (1 m scale) reconstructed by the framework-based and robotd-based applications.
5 Summary
We have described the goals and current state of development of a mobile manipulator service robot. The
prototype robot under development provides a tool to
explore many of the issues involved in service robotics.
References
[Aleotti, Caselli, & Reggiani 2003] Aleotti, J.; Caselli,
S.; and Reggiani, M. 2003. Development of a sensory
data user interface for a service robot. In Intl. Workshop on Advances in Service Robotics (ASER03).
[Exact Dynamics] Exact Dynamics. Home Page. http://www.exactdynamics.nl/.
[Fiorini, Ali, & Seraji 1997] Fiorini, P.; Ali, K.; and
Seraji, H. 1997. Health care robotics: a progress report. In IEEE Int. Conf. on Robotics and Automation.
[Guido] Guido. Project Home Page. http://www.appliedresource.com/RTD/index.htm.
[Hans, Graf, & Schraft 2002] Hans, M.; Graf, B.; and
Schraft, R. 2002. Robotic home assistant Care-O-bot:
Past - present - future. In IEEE Int. Workshop on
Robot and Human interactive Communication (ROMAN2002).
[Harashima 2002] Harashima, F. 2002. Interaction
and intelligence. In IEEE International Workshop on
Robot and Human Interactive Communication.
[Hermes] Hermes. Project Home Page. http://www.unibw-muenchen.de/hermes/.
[Huttenrauch & Eklund 2002] Huttenrauch, H., and
Eklund, K. 2002. Fetch-and-carry with CERO: Observations from a long-term user study with a service
robot. In IEEE International Workshop on Robot and
Human Interactive Communication.
[J. Hegarty 1991] J. Hegarty. 1991. Rehabilitation robotics: The user's perspective. In 2nd Cambridge Workshop on Rehabilitation Robotics.
26
[Kwee H. et al. 1989] Kwee H.; Duimel J.; Smits J.;
Tuinhof de Moed A.; and van Woerden JA. 1989. The
Manus wheelchair-borne manipulator: System review
and first results. In IARP 2nd Workshop on Medical
and Healthcare Robotics.
[Mahoney 1997] Mahoney, R. 1997. Robotic products
for rehabilitation: Status and strategy. In Int. Conf.
on Rehabilitation Robotics (ICORR).
[Mattias, Orebäck, & Christensen 2000] Mattias, L.;
Orebäck, A.; and Christensen, H. 2000. BERRA:
A research Architecture for Service Robots.
In
IEEE Proceedings., ed., International Conference on
Robotics and Automation, 2000.
[Morpha ] Morpha.
Project
Home
Page.
http://www.morpha.de/.
[Nursebot ] Nursebot.
Project Home Page.
http://www-2.cs.cmu.edu/ nursebot/.
[Petersson, Egerstedt, & Christensen 1999] Petersson,
L.; Egerstedt, M.; and Christensen, H. 1999. A
hybrid control architecture for mobile manipulation.
In IEEE/RSJ International Conference on Intelligent
Robots and Systems.
[Piaggio, Sgorbissa, & Tricerri 1999] Piaggio,
M.;
Sgorbissa, A.; and Tricerri, L. 1999. Expert Tribe in a
Hybrid Network Operating System. Università di Genova - Dip. di Informatica Sistemistica e Telematica,
4.2.1 edition.
[Schlegel & Wörz 1999] Schlegel, C., and Wörz, R.
1999. The Software Framework SmartSoft for Implementins Sensorimotor Systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems,
IROS ’99.
[Schlegel 2002] Schlegel, C. 2002. Communications
Patterns for OROCOS. Hints, Remarks, Specifications. Technical report, Research Institute for Applied
Knowledge Processing (FAW).
[Stanger et al. 1994] Stanger, C.; Anglin, C.; Harwin,
W.; and Romilly, D. 1994. Devices for assisting manipulation: A summary of user task priorities. IEEE
Trans. on Rehabilitation Engineering 2(4):127–143.
[Volpe et al. 2001] Volpe, R.; Nesnas, I.; Estlin, T.;
Mutz, D.; Petras, R.; and Das, H. 2001. The
CLARAty architecture for robotic autonomy. In IEEE
Proceedings., ed., Aerospace Conference, 2001, volume 1, 121–132.
[Zenn Bien et al. 2002] Zenn Bien, Z.; Do, J.-H.; Kim,
J.-B.; Kim, Y.; Lee, H.-E.; and Park, K.-H. 2002.
Human-friendly interaction in intelligent sweet home.
In International Conference on Virtual Systems and
Multimedia (VSMM).
[Zinn et al. 2002] Zinn, M.; Khatib, O.; Roth, B.;
and Salisbury, J. 2002. Towards a human-centered
intrinsically-safe robotic manipulator. In IARP Workshop on Technical Challenges for Dependable Robots in
Human Environments.
Knowledge Representation and Planning in an Artificial Ecosystem
Mattia Castelnovi, Fulvio Mastrogiovanni, Antonio Sgorbissa, Renato Zaccaria
DIST – University of Genova,
Via Opera Pia 13, Tel +39 010 3532801
[email protected], [email protected], [email protected], [email protected]
Abstract
Within the framework of mobile robotics for human assistance, we propose a multiagent, distributed and hybrid approach which aims at integrating reactive capabilities with deliberative reasoning about actions and plans: robots are thought of as mobile units within an intelligent house where they coexist and co-operate with intelligent devices, located onboard and throughout the environment, that are assigned different sensing/acting roles, such as helping the robot to localize itself, controlling automated doors and elevators, detecting emergency situations, etc. Moreover, robots are provided with a symbolic component which enables them to execute complex planning activities by hierarchically decomposing a given task, thus showing efficient and adaptive behaviour even in the presence of an unknown, dynamic environment.
1 Introduction
In recent years, many approaches have been proposed to implement fully autonomous mobile robots. While in most cases these attempts focused on the trade-off between reactive and deliberative activities to govern the robot's actions [1, 2], very few existing systems prove to be effective when robots are asked to autonomously solve complex navigation problems (i.e., planning and executing complex activities, such as "use an elevator to move to another floor") rather than just simple ones (e.g., "move to the goal" and "avoid obstacles"). This becomes even more critical when the system has to consider self-maintenance (e.g., battery recharging) and the scheduling of multiple tasks, thus requiring complex reasoning capabilities.
In our view, this is mainly due to the fact that almost all the existing systems share a philosophical choice which leads to a centralized design. The autarchic robot design, as we define it, can be summarized as follows:
• robots must be fully autonomous: i.e., they are often asked to co-operate with humans or other robots, but they mainly rely on their own sensors, actuators, and decision processes in order to carry out their tasks;
• robots must be able to operate in non-structured environments (i.e., environments which have not been purposely modified to help robots perform their tasks).
Up to now, no robot (or team of robots) has proven to be really "autonomous" in a generic, "non-structured" environment, while the few examples of robots which come closer to being considered autonomous (in the sense depicted) were designed to work in a specific, even if unstructured, environment: see for example the museum tour-guide robots Rhino and Minerva [3], which relied on the building's ceiling lights to periodically correct their position.
Copyright © 2003, The ROBOCARE Project. Funded by MIUR L. 449/97. All rights reserved.
In line with these considerations, our robots are thought of as mobile physical agents within an intelligent environment (an "Artificial Ecosystem", AE) where they coexist and cooperate with fixed physical agents, i.e., intelligent sensing/actuating devices that are assigned different roles: devices which provide the robots with clues about their position in the environment, which control automated doors and elevators, which detect emergency situations, etc. Both robots and intelligent devices are handled by software agents, which can communicate through a distributed message board and are given a high level of autonomy. According to this definition, autonomy (and intelligence) is not just an attribute of robots, but is distributed throughout the building (Fig. 1, left); notice that the AE paradigm is particularly suited to model robots for personal human assistance in a home environment (one of the main objectives of the RoboCare project), where it is very easy to plug in devices and to integrate them in a standard network for home automation.
Fig. 1: Left: agents in the building (on-board PC agents for map building, path planning and trajectory generation; intelligent devices acting as beacons for localization and controlling a TV camera and an elevator, all connected through a global message board). Right: agents on the robot (intelligent devices for the US sensors, bumpers and motors, connected through a local message board).
Next, the concept of intelligent devices is extended to the sensors and actuators on board the robot, thus performing intelligent control of the robot at two different levels (Fig. 1, right): at the lower level, higher reactivity is achieved through simple reactive agents which control sensors and actuators; at the higher level, sensorial inputs are collected by agents running on the onboard computer in order to perform more sophisticated computations before issuing control to actuators. Finally, complex planning activities are obtained by integrating into the architecture deliberative agents which manage problems provided by the user and cooperate with their low-level counterparts.
In the following Sections, we will show in detail the overall architecture of the system.
2 Agent Architecture
Three different types of software agents are devised in our architecture (according to Russell & Norvig's definition [4]):
• type 1: simple reflex agents, i.e., agents with no internal state, governed by condition-action rules. These agents are used for purely reactive behaviours, e.g., stopping the motors to avoid a collision (a task for mobile robots) or opening an automated door upon request (a task for fixed devices).
• type 2: agents that keep track of the world, i.e., agents which maintain an internal state and/or representation in order to choose which action to perform. These agents are used for more complex tasks, e.g., avoiding obstacles on the basis of a continuously updated local map of the environment (a task for mobile robots) or controlling an elevator dealing with multiple requests (a task for fixed devices).
• type 3: goal-based agents, i.e., agents which handle goals and find action sequences to achieve them. These agents are used for high-level planning tasks, e.g., finding a sequence of STRIPS-like rules to reach a target location (a task for mobile robots).
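As an illustration only (our Python rendering of this taxonomy, not code from the system), the three agent types can be sketched as follows:

    class SimpleReflexAgent:                      # type 1: no internal state
        def __init__(self, rules):
            self.rules = rules                    # list of (condition, action)
        def step(self, percept):
            for condition, action in self.rules:
                if condition(percept):
                    return action

    class StatefulAgent:                          # type 2: keeps track of the world
        def __init__(self, update, choose):
            self.state, self.update, self.choose = {}, update, choose
        def step(self, percept):
            self.state = self.update(self.state, percept)
            return self.choose(self.state)

    class GoalBasedAgent:                         # type 3: plans toward a goal
        def __init__(self, planner):
            self.planner, self.plan = planner, []
        def step(self, percept, goal):
            if not self.plan:                     # e.g., a STRIPS-like search
                self.plan = self.planner(percept, goal)
            return self.plan.pop(0)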
We refer to agents running on intelligent devices as ID agents, and to agents performing sub-symbolic/symbolic activities related to action/path planning etc. as PC agents, since they are executed on a personal computer running Linux. In spite of differences in their internal model and implementation, ID and PC agents have some common requirements in terms of Communication and Scheduling.
Communication: agents are not omniscient: at any given time, they have only partial knowledge of the state of the world and of the system. To share information, all agents can communicate on the basis of a two-level publish/subscribe protocol (Fig. 1). The first level is intended for global communication between all the ID and PC agents, while the second one is local, allowing information which is useful only to some agents to be shared. This makes it possible to dynamically add and remove agents without other agents being aware of it, thus increasing the versatility and reconfigurability of the system.
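The following Python sketch (our own simplification, not the actual implementation) captures the essence of this two-level protocol: agents subscribe to message types on either board, and a publisher never needs to know who the subscribers are, which is what makes adding and removing agents transparent:

    from collections import defaultdict

    class MessageBoard:
        def __init__(self):
            self.subscribers = defaultdict(list)   # message type -> callbacks

        def subscribe(self, msg_type, callback):
            self.subscribers[msg_type].append(callback)

        def publish(self, msg_type, data):
            for callback in self.subscribers[msg_type]:
                callback(data)

    global_board = MessageBoard()   # first level: all ID and PC agents
    local_board = MessageBoard()    # second level: agents on a single robot

    # A new agent can subscribe (or disappear) without any other agent noticing:
    local_board.subscribe("proximity", lambda d: print("obstacle at", d, "cm"))
    local_board.publish("proximity", 35)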
Scheduling: agents have different requirements in terms of computational resources and timing constraints. Since we want the system to operate in the real world and to deal with an uncertain and dynamic environment, the system architecture must have real-time characteristics in order to guarantee predictable and safe behavior when computational resources are limited.
2.1 Agents running on intelligent devices
In order to build intelligent devices (sensors/actuators both in the building and on board the robot) and implement ID agents, we rely on fieldbus technology for distributed control (in particular Echelon LonWorks, a standard for building automation). Communication and scheduling are thus handled as follows:
Communication: the message board and the publish/subscribe communication protocol are implemented by connecting all the control nodes to a common communication bus. ID agents are thus allowed to exchange information by sharing "network variables", i.e., strongly typed data which are propagated on the bus and delivered to all agents in the system which have subscribed to that kind of information. Notice that all the control nodes in the system (i.e., sensors/actuators both on board the robot and distributed in the building) are connected to the same bus (we adopt an infrared channel for the communication between the robot's devices and the building network).
Scheduling: Echelon's microprocessors allow fast response to inputs and fast communication between the network's nodes, but they do not allow hard real-time constraints to be set explicitly. Thus, the computational load of the ID agents must always stay below the computational resources available on the intelligent device's microprocessor if we want to guarantee a real-time response. However, ID agents can easily be programmed by means of an event-based programming language: "when" clauses allow a behaviour to be produced either periodically or as a consequence of a particular event.
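In Python terms, a "when" clause can be approximated as a guarded handler polled by the node's main loop (an illustration of the programming style only; the event names are invented, and this is not LonWorks code):

    handlers = []

    def when(condition):                  # decorator mimicking a "when" clause
        def register(body):
            handlers.append((condition, body))
            return body
        return register

    timer_expired = lambda: True          # stand-ins for real node events
    bumper_pressed = lambda: False

    @when(timer_expired)                  # periodic behaviour
    def send_heartbeat():
        print("heartbeat")

    @when(bumper_pressed)                 # event-driven behaviour
    def stop_motors():
        print("motors stopped")

    for condition, body in handlers:      # one pass of the node's scheduler loop
        if condition():
            body()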
Fig. 2: Emergency behavior: the ID agents controlling the bumper (A1) and the US sensors (A2,3) post proximity data to the message board; A10 (motion reflexes, joystick) and the motor agents A4, A5, A6 (right, left and front motor) read these data and exchange speed and jog values to and from the PC agents.
2.2 Agents running on the on-board PC
To implement PC agents, we rely on ETHNOS (Expert Tribe in a Hybrid Network Operating System), an operating system and programming environment for distributed robotics applications which has been described in detail in [5]. Communication and scheduling requirements are handled as follows:
Communication: ETHNOS allows PC agents to post/read messages to/from the global/local message boards, to which all agents can publish/subscribe. In particular, agents must specify, when publishing a message, whether they want to post it to the global message board or to the local one; they can make the decision at run-time by calling the appropriate communication primitive. Finally, some PC agents work as a bridge between the fieldbus (which connects ID agents) and the standard TCP/IP network (which connects PC agents in ETHNOS).
Scheduling: ETHNOS can schedule three different kinds of agents: 1) periodic agents, i.e., agents executing at a fixed rate (mostly dealing with type 1 and 2 activities, such as piloting actuators, reading sensors, etc.); 2) sporadic agents, i.e., agents executing when specific conditions arise (mostly type 1 and 2 activities, such as purely reactive behaviors, emergency recovery procedures, etc.); 3) background agents, i.e., non-real-time agents which execute when the scheduler has no pending requests from periodic and sporadic agents to satisfy (mostly type 3 deliberative activities, whose computational time and frequency cannot be predicted or upper-bounded and whose execution is not time-critical for the system). The ETHNOS scheduling policy is described in detail in [5].
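The precedence among the three kinds of agents can be pictured with the following toy, single-threaded Python loop (our approximation for illustration; ETHNOS itself relies on real-time scheduling, see [5]): sporadic and periodic agents are served whenever they have pending requests, and background agents consume only the remaining idle time.

    import time

    class PeriodicAgent:
        def __init__(self, period, body):
            self.period, self.body, self.next_run = period, body, time.time()
        def pending(self):
            return time.time() >= self.next_run
        def run(self):
            self.body()
            self.next_run += self.period

    class SporadicAgent:
        def __init__(self, condition, body):
            self.condition, self.body = condition, body
        def pending(self):
            return self.condition()
        def run(self):
            self.body()

    def schedule(periodic, sporadic, background, horizon=1.0):
        deadline = time.time() + horizon
        while time.time() < deadline:
            pending = [a for a in sporadic + periodic if a.pending()]
            for agent in pending:
                agent.run()                  # real-time requests served first
            if not pending and background:
                background[0]()              # background fills the idle time
            time.sleep(0.001)

    schedule(periodic=[PeriodicAgent(0.1, lambda: print("read sensors"))],
             sporadic=[SporadicAgent(lambda: False, lambda: print("emergency"))],
             background=[lambda: None], horizon=0.3)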
3 The Artificial Ecosystem
The AE approach has been tested at the Gaslini Hospital of Genova: the experimental set-up is composed of the mobile robot Staffetta (commercialized by the spin-off company Genova Robot, www.genovarobot.com) and a set of intelligent devices distributed in the building, whose primary purpose is to control active beacons for localization and elevators. Staffetta is requested to carry out navigation/transportation tasks, which require it to localize itself, to plan paths to the target, to generate smooth trajectories and, finally, to follow them while avoiding unpredicted obstacles in the environment. Staffetta is not able to load/unload items, which therefore requires human intervention.
3.1 ID agents
All ID agents are type 1 and 2 agents, i.e., simple reflex agents and agents with internal state.
ID agents on board the robot: A1 controls a bumper able to detect collisions on different areas of the robot chassis; A2 and A3 control two sonar acquisition boards; A4, A5, and A6 control the two rear motors and the front steering wheel; A7 controls the DLPS (the onboard rotating laser and the infrared communication device); A8 monitors the batteries' state; A9 is directly connected to the motors and works as a watchdog; A10 generates motion reflexes to avoid collisions with unpredicted obstacles and allows the robot to be controlled through a joystick interface for manual displacement; finally, A11 handles the communication between the ID and the PC agents.
ID agents in the building: a varying number of agents A12i, each controlling a beacon i for localization, and two agents A13 which control the elevator. Fig. 2 shows how interacting ID agents can increase the reactivity of the system to obstacles which suddenly appear in the environment and are detected by the ultrasonic sensors or the bumper: the sensorial data collected by A1 (bumper) and A2,3 (US sensors) are posted to the local message board, thus being available to the PC agents for planning, map building, trajectory generation, localization, etc. However, these data are also available to the ID agent A10, which is able to generate simple motion reflexes in order to avoid an imminent collision and to post the necessary data to the message board. Finally, A4, A5, and A6 read these data on the message board and consequently stop the motors before waiting for a command from the PC agents.
Fig. 3: Navigation architecture: PC agents for map building (Build Map), artificial potential field construction (Build APF) and temporary target setting for obstacle avoidance (A17, A18, A19), together with A14,15 (Plan a Path, Set Target Location) and A16 (Generate Trajectory to Landmark Position), exchange bitmap, APF, landmark position and speed/jog data, receiving proximity data from the ID agents.
3.2 PC agents
PC agents can be agents of type 1, 2, and 3 (see Figs. 2 and 3). Agents A14 and A15 are type 3 goal-based agents responsible for plan selection and adaptation, allowing the robot to execute high-level plans (including navigation tasks such as "go into office A"). These tasks, depending on the current robot context, may be decomposed into many, possibly concurrent, sub-tasks such as "localise", "go to the door", "open door", etc., and eventually into primitive actions such as "go to position (x, y, θ)" (see Section 4).
In Fig. 3, A14 and A15 plan a path to a target as a sequence of landmark positions and post to the message board the location of the next landmark to be visited. Agent A16 is a type 1 agent which is capable of executing smooth trajectories from the robot's current position to the specified landmark, relying on a non-linear law of motion. It produces speed and jog values which are made available to the ID agents A4, A5, and A6 to control the motors (the details of Staffetta's navigation system are beyond the scope of this paper; they can be found in [6]).
4 Knowledge Representation and Planning
In order to achieve complex task planning and execution, the system must be able to represent a significant part of the world in which it operates. Building a complete and consistent model of the environment requires a way to keep track of the objects useful for future interaction, of their status, of the actions that can be performed, and of all the relevant consequences that those actions generate. NeoClassic [7] is a system for knowledge representation (based on Description Logics) capable of dealing with one or more KBs, storing objects such as concepts, individuals (i.e., concept instances), descriptions of concepts and individuals, roles (i.e., relations between objects of the KB) and rules which hold in the domain, deriving consequences from the stored knowledge and adding/removing information at run-time. Particularly interesting are the so-called hook functions: procedural, non-symbolic, user-defined functions activated asynchronously as a consequence of a particular state within the KB. In our case, they allow NeoClassic functionalities to be inserted into the ETHNOS multiagent architecture, thus enabling interoperability between symbolic and reactive agents.
4.1 The Symbolic Agents
The symbolic component of ETHNOS is handled by two different agents, A14 and A15:
• A14 handles the NeoClassic KB: it adds and removes concepts, individuals, descriptions, etc. Moreover, when a new problem is given, A14 publishes a message containing a description of the domain together with the initial and goal states required by A15.
• A15 implements a STRIPS-like planner: after reading a message published by A14, it searches for a plan solving the particular problem given. Finally, the results of this search are published to A14, which uses this information to update the KB.
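The exchange between the two agents can be sketched as a two-message protocol over the message board (an illustration in Python; the message fields and function names are our own assumptions, not the system's API):

    def a14_publish_problem(board, domain, init, goal):
        # A14: describe the domain together with the initial and goal states.
        board.append({"type": "problem", "domain": domain,
                      "init": init, "goal": goal})

    def a15_solve(board, planner):
        # A15: read pending problems, search for plans, publish the results.
        for msg in [m for m in board if m["type"] == "problem"]:
            plan = planner(msg["domain"], msg["init"], msg["goal"])
            board.append({"type": "plan", "steps": plan})

    board = []
    a14_publish_problem(board, "BW-DOMAIN",
                        init=[("on-table", "a"), ("clear", "a")],
                        goal=[("on", "a", "b")])
    a15_solve(board, lambda d, i, g: [("pickup", "a"), ("stack", "a", "b")])
    print([m for m in board if m["type"] == "plan"])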
4.2 The Knowledge Base
The Knowledge Base can ideally be divided into separate areas representing three kinds of concepts: objects; operators and plan predicates; problems and plans.
Objects: all the objects in the KB are subsumed by the concept OBJECT. In a scenario of an autonomous mobile robot carrying out navigation tasks within a civilian building, we would consider concepts like PLACE, FLOOR, DOOR, ELEVATOR, OBSTACLE, etc. If the robot is equipped with a manipulator to grasp and move blocks in a so-called "blocks world", we must also consider objects such as BLOCK, TABLE, HAND, etc.: the resulting KB is composed of both navigation and blocks-world objects, thus increasing the search space and making planning more and more complicated. By introducing intermediate base concepts (e.g., BW-OBJECT, NAV-OBJECT, etc.), used to represent objects relative to a specific domain, all the objects in a given domain are subsumed by the corresponding base concept, thus inheriting its characteristics and adding new ones. A particular concept, DOMAIN, provides information about the objects which must be considered in a particular domain and the actions that can be performed in that domain. DOMAIN subsumes more specific domains, such as BW-DOMAIN and NAV-DOMAIN (Fig. 4).
Operators and Plan Predicates: actions are represented within the KB in a way that is very similar to traditional STRIPS schemas. An OPERATOR is a concept which is defined through the roles pl (precondition list), dl (delete list), and al (add list), together with accessory roles. The prototype of the generic concept OPERATOR is shown in Fig. 5: notice that specific operators (STACK in the figure) are subsumed by OPERATOR through an intermediate concept (BW-OPERATOR) which restricts the set of actions that can be performed in the BW-DOMAIN. The concept PLAN-PREDICATE is introduced to model the predicates which will be passed to the planner to describe the state of the system and the precondition, add, and delete lists of each operator. Again, through the intermediate base concept BW-PLAN-PRED, we consider predicates specific to each domain, such as ON, HOLDING, ON-TABLE, etc. for the BW-DOMAIN. Predicate arguments are modelled through the concept's roles, i.e., grounding an argument is equivalent to filling a role with an individual.
Fig. 4: Relations between OBJECTs: the base concepts DOMAIN, OPERATOR, OBJECT and PLAN-PREDICATE, their domain-specific specializations (NAV-/BW-DOMAIN, NAV-/BW-OPERATOR, NAV-/BW-OBJECT, NAV-/BW-PLAN-PRED with ON and CLEAR), and the roles op, pp, ob (and their BW counterparts) connecting them.
Fig. 5: OPERATORs and PLAN-PREDICATEs: the generic OPERATOR with roles pl, dl, al pointing to PLAN-PREDICATEs; the specializations NAV-OPERATOR/NAV-PLAN-PRED and BW-OPERATOR/BW-PLAN-PRED; and the specific operator STACK, whose roles (holding, clear, hand-empty, on) relate BLOCKs X and Y.
Problems and Plans: in order to formalize a particular problem, the system uses the PROBLEM concept, defined by an initial and a goal state, each described as a conjunction of predicates: this is done through the roles is (initial state) and gs (goal state), thus relating the concept PROBLEM to the concept PLAN-PREDICATE (Fig. 6). Finally, the role d (domain) relates a problem to the corresponding domain, thus allowing problems to be considered meaningful only in a specific domain. Once a solution (or no solution) has been found, a new PLAN individual is stored in the KB, defined through the initial and goal states of the corresponding PROBLEM and the OPERATORs which solve it (or an empty list if it cannot be solved): this avoids re-planning if the execution of the same plan should be necessary in the future.
Fig. 6: PROBLEMs and PLANs: the concept PROBLEM, related to PLAN-PREDICATE through the roles is and gs and to DOMAIN through the role d; its specializations NAV-PROBLEM and BW-PROBLEM, analogously related to BW-PLAN-PRED and BW-DOMAIN.
4.3 Plan Generation
Dividing the KB into different domains obviously helps to reduce the time required to solve a given problem, although it is not always trivial for A14 to automatically generate a high-level, domain-independent problem from the original one: the BW-DOMAIN and the NAV-DOMAIN are not really disjoint, in the sense that a BW-PROBLEM often requires BW-OBJECTs to be moved from one place to another, thus requiring a NAV-PROBLEM to be automatically instantiated and solved. We need a domain-independent PROBLEM to be instantiated and a domain-independent PLAN to be found which contains only the right sequence of high-level, domain-independent OPERATORs, postponing domain-dependent planning to a subsequent phase.
(:action navigate
 :parameters (?r - MOBILE-ROBOT ?p1 - PLACE ?p2 - PLACE)
 :precondition (and (is-in ?r ?p1))
 :effect (and (not (is-in ?r ?p1))
              (is-in ?r ?p2)))

(:action load
 :parameters (?r - MOBILE-ROBOT ?b - BLOCK ?p - PLACE)
 :precondition (and (is-in ?r ?p)
                    (is-in ?b ?p)
                    (gripper-free ?r))
 :effect (and (not (gripper-free ?r))
              (gripper-holding ?r ?b)
              (not (is-in ?b ?p))))

(:action unload
 :parameters (?r - MOBILE-ROBOT ?b - BLOCK ?p - PLACE)
 :precondition (and (is-in ?r ?p)
                    (gripper-holding ?r ?b))
 :effect (and (not (gripper-holding ?r ?b))
              (is-in ?b ?p)
              (gripper-free ?r)))

(:action arrange-blocks-0
 :parameters (?r - MOBILE-ROBOT ?p - PLACE)
 :precondition (and (gripper-free ?r)
                    (is-in ?r ?p)
                    (is-in block-a ?p)
                    (is-in block-b ?p)
                    (not (problem-solved ?p block-conf-h-tv-hall)))
 :effect (problem-solved ?p block-conf-h-tv-hall))

Fig. 7: High-level OPERATORs in the PDDL formalism.
To achieve this, the domain-independent PLAN-PREDICATE IS-IN is introduced with the purpose of creating a link between the BW-DOMAIN and the NAV-DOMAIN: the two arguments of IS-IN belong to different DOMAINs and allow the NAV-PLACE in which each BW-BLOCK is located to be specified. Moreover, the following high-level OPERATORs are available for planning: NAVIGATE, LOAD, UNLOAD, ARRANGE-BLOCKS (see Fig. 7). In particular, LOAD and UNLOAD are responsible for deleting and adding IS-IN predicates in order to reflect variations in the state of the involved BW-OBJECTs. The automatic generation of the domain-independent problem is shown in Fig. 8: domain-independent PLAN-PREDICATEs are left unaltered, while domain-dependent ones are combined into a new domain-independent PLAN-PREDICATE individual. Finally, notice that these considerations still hold when new DOMAINs, OBJECTs, OPERATORs, PLAN-PREDICATEs, etc. are added to the KB.
Start State (given problem):
  is-in staffetta entrance; gripper-free staffetta;
  is-in block-a carpet-room; is-in block-b tv-hall;
  on-table block-a; on-table block-b; clear block-a; clear block-b

Start State (domain-independent problem):
  is-in staffetta entrance; gripper-free staffetta;
  is-in block-a carpet-room; is-in block-b tv-hall;
  not problem-solved tv-hall block-conf-tv-hall

Goal State (given problem):
  is-in staffetta tv-hall; gripper-free staffetta;
  is-in block-a tv-hall; is-in block-b tv-hall;
  on-table block-a; on block-a block-b; clear block-b

Goal State (domain-independent problem):
  is-in staffetta tv-hall; gripper-free staffetta;
  is-in block-a tv-hall; is-in block-b tv-hall;
  problem-solved tv-hall block-conf-tv-hall

Fig. 8: Generation of a domain-independent PROBLEM.
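The lifting step can be pictured with the following Python sketch (our own illustration of the idea, shown for the goal state of Fig. 8; the set of domain-independent predicate names is an assumption): domain-independent predicates are copied unaltered, while the domain-dependent ones are collapsed into a single problem-solved marker. For the start state, the same marker would appear negated, as in Fig. 8.

    DOMAIN_INDEPENDENT = {"is-in", "gripper-free", "problem-solved"}

    def lift_goal(state, place, configuration):
        # Keep domain-independent predicates unchanged.
        lifted = [p for p in state if p[0] in DOMAIN_INDEPENDENT]
        # Collapse domain-dependent predicates (on, on-table, clear, ...)
        # into one marker predicate standing for the block configuration.
        if any(p[0] not in DOMAIN_INDEPENDENT for p in state):
            lifted.append(("problem-solved", place, configuration))
        return lifted

    goal = [("is-in", "staffetta", "tv-hall"), ("gripper-free", "staffetta"),
            ("is-in", "block-a", "tv-hall"), ("is-in", "block-b", "tv-hall"),
            ("on-table", "block-a"), ("on", "block-a", "block-b"),
            ("clear", "block-b")]

    print(lift_goal(goal, "tv-hall", "block-conf-tv-hall"))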
Fig. 9: Top (a–d): public exhibitions. Bottom (e–h): a toy-world scenario in which the robot picks up and puts down box D.
5 Experimental Results
In the following, we describe some of the successful experiments that have been carried out at the Gaslini Hospital of Genova (Figs. 9c and d), in our Lab at DIST – University of Genova, and during many public exhibitions. During the experiments, almost all the PC and ID agents described in the previous Sections are running: the robot thus concurrently performs planning, goal-oriented navigation, map making, smooth trajectory generation, obstacle avoidance, localization, etc. Moreover, the environment is intrinsically dynamic because of the presence of people. Nevertheless, the system keeps working with no performance degradation: during the Tmed exposition (Magazzini Del Cotone, Porto Antico di Genova, October 2001 – Figs. 9a and b), the system ran from opening to closing hours while interacting with visitors. We were sometimes forced to stop the robot for battery recharging, but never because the robot got lost in the environment, thanks to the intelligent devices (beacons) distributed in the building.
Fig. 10: Avoiding an obstacle at 200 mm/s constant speed.
In Fig. 10, Staffetta's obstacle avoidance behavior is shown: the robot trajectory is smooth because of the particular navigation strategy implemented, described in [6]. As regards the interaction between the symbolic components and the reactive ones, we performed many experiments in simulation, supposing that the robot is equipped with a gripping hand. Staffetta is asked to carry out various compound navigation/manipulation tasks within our Department building, composed of three floors connected by an elevator. For example: a) move from the little room to Prof. Zaccaria's office AND b) change the arrangement of the boxes from their initial configuration to a final one. The boxes are initially distributed in different areas of the building, thus forcing the robot to alternate navigation and manipulation activities in order to solve the joint problem given. The problem is first solved in the general DOMAIN, thus producing high-level, domain-independent OPERATORs that originate sub-problems to be solved within their specific domains. In this particular case, our simulator returns the high-level PLAN shown in Fig. 11 at the top; next, sub-problems are solved in their corresponding domains, as shown in Fig. 11 at the bottom.
Fig. 11: Screenshot from the simulator window. Top: a high-level plan; Bottom: low-level plan for high-level action #3.
Experiments with a real robot within a simplified scenario have been carried out as well. Figs. 9e–h show the maze in which the robot operates: the boxes marked with A, B, C, D are manipulated by the robot only virtually (i.e., it asks for human intervention). However, plan generation and execution, even if in a toy world, allow us to evaluate the behavior of the ETHNOS multiagent system when agents A14 and A15 are integrated in the navigation architecture. A more realistic implementation on board our mobile robot Staffetta [8] is currently work in progress.

Conclusions
We have described the "Artificial Ecosystem", a novel multiagent approach to intelligent robotics (particularly suited for human assistance in a home environment) which integrates symbolic and sub-symbolic activities in a distributed paradigm. We claim that, given the current technological state of sensors and actuators, mobile robots will have great difficulty substituting for humans in generic, human-inhabited environments, even for the simplest navigation task. Thus, on one side, we continue pursuing the final target of a fully autonomous artificial being which can adapt to "live" in the same environment where biological beings live; on the other side, we think that modifying the environment to fit the robot's requirements can be a temporary solution to obtain significant results given the current state of technology.
References
[1] R.C. Arkin, Motor Schema-Based Mobile Robot Navigation,
International Journal of Robotics Research, Vol. 8, No. 4, 1989, pp.
92-112.
[2] Gat, E. Integrating planning and reacting in a heterogeneous
asynchronous architecture for controlling real-world mobile robots,
in AAAI-92, San Jose, CA, 1992, pages 809-815.
[3] Thrun, S., Beetz, M., Bennewitz, M., Burgard, W., Cremers, A. B., Dellaert, F., Fox, D., Hähnel, D., Rosenberg, C., Roy, N., Schulte, J., and Schulz, D. (2000). Probabilistic Algorithms and the Interactive Museum Tour-Guide Robot Minerva. International Journal of Robotics Research, Vol. 19, Number 11.
[4] Russell, S. and Norvig, P. Artificial Intelligence: A Modern
Approach. Prentice Hall, Englewood Cliffs, NJ, 1995.
[5] Piaggio, M., Sgorbissa, A., Zaccaria, R. (2000). Pre-emptive Versus
Non Pre-emptive Real Time Scheduling in Intelligent Mobile
Robotics. Journal of Experimental and Theoretical Artificial
Intelligence, (12)2.
[6] Sgorbissa, A., Zaccaria, R. Roaming Stripes: smooth reactive navigation in a partially known environment, to be published in Ro-Man 03, 12th IEEE Workshop on Robot and Human Interactive Communication, Westin San Francisco Airport Hotel, Millbrae, California, USA.
[7] Borgida, A., Brachman, R. J., McGuinness, D. L., and Resnick, L. A. CLASSIC: a structural data model for objects. Proceedings of the 1989 ACM SIGMOD International Conference on Management of Data: 59–67, June 1989.
[8] Miozzo, M., Scalzo, A., Sgorbissa, A., Zaccaria, R. Autonomous
Robots and Intelligent Devices as an Ecosystem, International
Symposium on Robotics and Automation, September 2002, Toluca,
Mexico.
A Component-Based Framework for Loosely-Coupled
Planning and Scheduling Integrations
Amedeo Cesta, Federico Pecora, Riccardo Rasconi
{cesta, pecora, rasconi}@ip.rm.cnr.it
Institute for Cognitive Science and Technology
Italian National Research Council
Viale Marx 15, I-00137 Rome, Italy
Abstract
This paper attempts to characterize the issue of planning and scheduling integration in terms of the frequency at which information sharing occurs between the two basic solving cores. In this context, we focus on one end of the spectrum, namely, loosely-coupled planner-scheduler integrations. We show how the elementary building blocks of this type of integration are the planner, the scheduler and a plan adaptation procedure which accommodates time and resource constraints and computes a minimal-constrained deordering of the plan produced by the planner. Our investigation of the theoretical properties of the loosely-coupled approach exposes the advantage of propagating sets of partial plans rather than reasoning on sequential state space representations. In particular, we show how optimal planning graph based planners tend to minimize the critical path through the causal network which is reasoned upon by the scheduler.
1 Introduction
Integration of Planning and Scheduling techniques has become an increasingly hot topic for the AI community in recent years. Though the techniques exploited to solve the Planning problem and the Scheduling problem are fundamentally different, both issues are in fact addressed as search problems and, as such, are solved using search algorithms. These algorithms, given the different nature of the two problems, involve different data structures to represent the problem instances and different techniques to generate and explore these data structures.
Much attention has recently been paid to the possibility of mutually exchanging the information yielded during the planning and the scheduling search procedures, in order to use the shared information to better guide both searches, thus gaining a mutual benefit towards the ultimate common goal of finding a reliable and efficient solution.
Many approaches to planning and scheduling integration have been proposed. Some deal with enhancing the ordinary causal solving techniques of a planner with the introduction of special data structures capable of modeling time and/or resources, while a distinct approach is to implement the planning and the scheduling paradigms separately, thus allowing them to independently solve the two problem instances they are best suited for, and to link the planning and the scheduling engines afterwards. This second approach raises the important issue of defining effective forms of information sharing between the two subsystems.
The research presented herein focuses on the second approach to the matter of Planning and Scheduling integration. Our studies have led to a theoretical characterization of loosely-coupled integrated solvers. In this paper we describe the algorithmic properties of such integrations by means of a framework in which an explicit distinction between the two aspects of the problem presented above is maintained.
Copyright © 2003, The RoboCare Project — Funded by MIUR L. 449/97. All rights reserved.

2 Integrating Planning and Scheduling
Planning and Scheduling (P&S) address complementary aspects of problem solving: broadly speaking, the former addresses the problem of defining the tasks which, if executed, achieve a predefined goal, while the latter addresses the problem of assigning time and resources to those tasks. As a consequence, the solving techniques employed to tackle these problems differ: on one hand, a planner reasons in terms of causal dependencies by performing logical deduction in a predefined logic theory, deriving the set of actions which achieve the goal; on the other hand, a typical CSP1 scheduler works by propagating a set of constraints, which in turn determines the domains in which every variable of the problem can legally be assigned a value.
One first question which arises while analyzing the issue of P&S integration is which problems need an integrated solver. A second question we may ask is which P&S technology is best suited for the integration. Finally, we may wonder how to effectively integrate the selected solving cores. In the following Sections we will try to answer these three questions by presenting the theoretical properties and an implementation of a serialized approach to P&S integration.
1 Constraint Satisfaction Problem solving tools represent the dominating technology in state-of-the-art scheduling architectures.
As a starting point, we can imagine a general integrated P&S architecture as being composed of two main interconnected blocks: the Planning Engine and the Scheduling Engine, as shown in Figure 1.

Figure 1: The general model of Planning and Scheduling Integration: a Planning Engine connected to a Scheduling Engine.
While the two subsystems reason upon the two complementary aspects of a particular problem, they share information which enriches the heuristics used in their respective search procedures. One key point here is how often data are actually passed between the two subsystems: in fact, toggling the frequency of information sharing leads to two radically different behaviors of the global system (see Figure 2).
Figure 2: Contextual vs. serial Planning and Scheduling integration approaches: varying the frequency of information sharing between the Planning Engine and the Scheduling Engine spans a spectrum whose extremes are contextual and serial integration.
At one end of the spectrum we have the case in which data exchange is performed at every decision point: we refer to this approach as contextual causal and time/resource reasoning. Clearly, this type of integration represents the optimal strategy, since it cross-validates the causal and time/resource-related aspects of the problem at every decision point. Examples of this approach are reported in (Currie & Tate 1991; Ghallab & Laruelle 1994; Jonsson et al. 2000; Muscettola et al. 1992).
On the opposite end we have the case in which the
planning and scheduling phases are simply serialized,
meaning that the exchange of information takes place
only once. Basically, the output of the planning procedure is directly forwarded as input of the subsequent
scheduling phase to produce the final solution. We refer to this approach as serial causal and time/resource reasoning. We also refer to this type of integration as the Naïve Component-Based Approach (N-CBA), a name deriving from the serial nature of the integration (hence, Naïve) and from the fact that it employs off-the-shelf components as the two reasoning cores.

3 The Naïve Component-Based Approach

As depicted in Figure 3, the general schema of the N-CBA is rather straightforward: a planner is given a problem specification along with a domain description which is purified with respect to time and resource related constraints. These constraints are accommodated after the planning procedure has taken place, i.e., has produced a Partial-Order Plan (POP). The plan adaptation procedure is responsible for inserting this additional information. A scheduler is then employed in order to obtain a completely instantiated solution of the initial planning problem.

Figure 3: The N-CBA general architecture: the Planning Engine receives the causal model and produces a POP; the AdaptPlan procedure accommodates the time and resource constraints, yielding a completePOP; the Scheduling Engine then produces the schedule.
This approach is strongly intuitive, but we believe it is instrumental for the comprehension of the phenomena underlying the integration of the two processes. In fact, the choice of components for the serialized planning and scheduling system exposes very clearly the relative fitness of particular planning and scheduling solving strategies. In other words, studying both processes separately has the effect of improving both their performance in the serialized setting and the performance of a "truly" integrated system (in which planning and scheduling are more tightly interwoven) which makes use of these solving algorithms. It should be clear that the N-CBA is certainly not the best way to build an integrated reasoner, since its efficiency relies very strongly on how separable the two aspects of the problem are. This requirement is often not realistic, since the degree of inter-dependency between the causal and time/resource-related constraints in the problem is usually much higher. Nonetheless, the N-CBA has the nice property of delivering a very clear picture of what is going on during problem resolution.
3.1 Planning and Scheduling subsystems
Let us first focus our attention on the planning phase. Typically, a causal model of the environment is given as input to the planner, i.e., the domain representation and the problem definition, both expressed in a STRIPS-like formalism. Whatever the planner, the output of the planning phase will be a sequence of actions which, without loss of generality, can be considered to be a Partial-Order Plan (POP).2 The goal of our system is to deliver a solution in which both causal dependencies and time and/or resource constraints are taken into account.
In an integrated P&S context, time and resource constraints as well as causal dependencies are contemplated in the initial problem definition. In fact, every action:
• is inherently associated with a time duration;
• requires one or more resource instances that ensure its executability.
Given a POP produced by the planning phase, these time and resource constraints must be accommodated into a new problem specification for the scheduling phase, yielding a complete representation of the problem instance which can be reasoned upon by the scheduler. As we will see, the theoretical backbone of the N-CBA lies in the procedure employed to integrate such information.
N-CBA lies in the procedure which is employed to integrate such information.
3.2
The AdaptPlan procedure
In order to produce a problem specification which can
be reasoned upon by the scheduling phase, we have developed the AdaptPlan procedure (see Alg. 1), which
is responsible deriving a causal network of activities
for the scheduling phase based upon the POP obtained
from the planner and the additional time and resource
constraints. The object produced by the AdaptPlan
procedure is called a completePOP, and it can be defined as a POP augmented with the time and/or resource information associated to the problem at hand.
The algorithm is basically composed of two parts: in
the first one, every activity belonging to the initial POP
is integrated with information regarding its duration as
well as the type and number of the resource instances
which are necessary for its execution 3 . Thus, the first
phase of the AdaptPlan procedure yields a set of activities, each of which represents one action in the POP
and is characterized by duration and resource usage parameters.
The second phase of the algorithm produces the
causal links among the activities, which are deduced
exclusively from the causal model of the problem and
the POP produced by the planner. In other words, each node in the complete causal graph represents an activity of the plan and each edge connecting two nodes represents a causal constraint between the linked activities.
2 A Total-Order Plan is obviously a particular case of POP.
3 In a hypothetical implementation of the N-CBA, it is reasonable to assume that this information is properly hidden in the ordinary PDDL causal model description, so as to be invisible to the planner but not to the AdaptPlan procedure.
Algorithm 1 completePOP: AdaptPlan(POP)
  Tasks T
  Links L
  {Step 1: accommodate time/resource constraints}
  for i = 1 to n do
    T ← ai
    T ← Dur(ai)
    T ← Res(ai)
  end for
  {Step 2: calculate causal links}
  for i = 1 to n do
    for j = i to n do
      if ∃p ∈ Pre(aj) ∧ p ∈ Eff(ai) then
        L ← (ai, aj)
      end if
      if ∃p ∈ Eff(aj) ∧ ¬p ∈ Pre(ai) then
        L ← (ai, aj)
      end if
    end for
  end for
Given the POP produced by the planner, the causal links among its activities are computed by AdaptPlan by adding a link between every pair (ai, aj) of activities if and only if Pre(aj) ⊆ Eff(ai) or the link resolves a threat. It is easy to see that this constructive link-generating procedure is equivalent to the opposite approach, better known as deordering, in which a total-order plan is progressively "stripped" of its precedence constraints so long as the plan remains valid. This form of deordering is known as minimal-constrained deordering, and is extensively described in (Bäckström 1998). Such a deordering has been proved to be achievable in polynomial time by means of a simple algorithm (MlCD) if it is possible to decide in polynomial time whether removing a precedence constraint from a plan causes it to be invalid. Incidentally, it is interesting to notice that the AdaptPlan algorithm achieves a minimal-constrained deordering of the input plan, and that the operation of checking whether a precedence constraint can be removed in the original MlCD procedure described in (Bäckström 1998) corresponds to the (polynomial) operation of checking whether a link should be added (step 2).
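A direct Python transcription of Algorithm 1 might look as follows (a sketch under the paper's definitions; the Action structure, the negation encoding and the attribute names are our own assumptions):

    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        pre: set                 # preconditions (literals as strings)
        eff: set                 # effects; negative literals written "not p"
        dur: int = 1             # duration
        res: dict = field(default_factory=dict)   # resource requirements

    def negate(p):
        return p[4:] if p.startswith("not ") else "not " + p

    def adapt_plan(pop):
        tasks, links = [], []
        # Step 1: accommodate time/resource constraints.
        for a in pop:
            tasks.append((a.name, a.dur, a.res))
        # Step 2: calculate causal links.
        for i, ai in enumerate(pop):
            for aj in pop[i + 1:]:             # j > i (j = i would only add self-links)
                if ai.eff & aj.pre:            # ai supports a precondition of aj
                    links.append((ai.name, aj.name))
                elif any(negate(p) in ai.pre for p in aj.eff):
                    links.append((ai.name, aj.name))   # ordering resolves a threat
        return tasks, links    # the completePOP: activities plus causal links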
What has to be stressed at this point is the fact that the topology of the causal graph depends, as shown in the description of the algorithm, exclusively on the causal model of the problem instance (domain, initial conditions and POP). By observing the causal graph produced by the AdaptPlan procedure, one characteristic that is extremely important for our purposes can be recognized: the degree of concurrency (or parallelism). Depending on many factors, such as the planner used and/or the global properties of the causal problem instance, the produced graph will exhibit a different extent of concurrency among the activities. In a hypothetical implementation of this framework, it is reasonable to believe that, given two plans which show different degrees of concurrency and which solve the same problem, the higher the degree of parallelism of the input plan, the higher the quality of the solution found by the scheduler. In particular, the makespan of the schedule is related to the critical path through the causal network of activities, where by critical path we mean the sequence of activities which determines the shortest makespan of the schedule, i.e., the path that runs from the source to the sink node such that, if any activity on the path is delayed by an amount t, then the makespan of the entire schedule increases by t.
Figure 4: Different planners lead to completePOPs with different critical paths (which are strongly related to the makespan). In the example, Planner 1 produces a completePOP with a shorter critical path than Planner 2.
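Under these definitions, the quantity to watch is the duration-weighted longest path through the completePOP, which bounds the makespan from below. A standard computation over a topologically ordered DAG (a generic sketch, not the authors' implementation; activity names and durations are invented) is:

    from graphlib import TopologicalSorter      # Python 3.9+

    def critical_path_length(durations, links):
        # durations: activity -> duration; links: (before, after) causal pairs
        preds = {a: [] for a in durations}
        for before, after in links:
            preds[after].append(before)
        finish = {}
        for a in TopologicalSorter(preds).static_order():
            earliest_start = max((finish[p] for p in preds[a]), default=0)
            finish[a] = earliest_start + durations[a]
        return max(finish.values())

    durations = {"load": 2, "navigate": 5, "unload": 2}
    links = [("load", "navigate"), ("navigate", "unload")]
    print(critical_path_length(durations, links))   # 9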
Notice that if the durations of the actions were all the same, then the critical path would coincide with the longest path through the graph. This is in general not true, since the path which determines the makespan of the entire schedule may be shorter than the longest path through the graph. Nevertheless, it is safe to say that, on average, the critical path is usually one of the longest paths through the graph. As a consequence, the relative quality of a completePOP with respect to the makespan of the final schedule deduced from it can usually be measured by its longest path.
Given our main goal, which is that of obtaining an efficient schedule to submit for execution, the previous considerations naturally lead us to prefer, among the existing planners, those which are more likely to produce plans which yield the shortest possible critical path or, at least in first approximation, which minimize the longest path through the graph, a characteristic which would maximize the performance of any optimizing scheduler employed in the integration.
3.3 Planning Strategies
In order to understand which types of planners are best suited for the N-CBA, let us compare the two most common planning paradigms, namely Heuristic Search (HS) planning and Planning Graph (PG) based planning.4
PG-based planners work by alternating one step of graph expansion with a search on the planning graph5 for a valid plan, the search occurring at every level of expansion (starting when the goals appear non-mutex for the first time). This, together with the disjunctive nature of the search space (Kambhampati, Parker, & Lambrecht 1997), makes PG-based planners optimal with respect to the number of execution steps. This guarantees that these planners find the shortest plan among those in which independent actions may take place at the same logical step (Blum & Furst 1997).
The optimality of PG-based planners is precisely what makes them best suited, with respect to the criteria described above, for loosely-coupled P&S integrations. In fact, the following result relates the length of a POP in terms of parallel steps to the critical path of the corresponding completePOP:
Theorem 1. The number of parallel steps of a plan
produced by a PG-based planner coincides with the
length of the longest path through the relative completePOP.
Proof. The proof of this theorem equates to proving that, if the number of steps in the POP is s, then (1) the length of the longest path is at least s, and (2) the length of the longest path is at most s.
To prove that the length l of the longest path cannot be greater than s, let us proceed by contradiction. If l > s, then there would be at least one sequence of s + m actions in the plan which belong to different logical steps, which in turn would mean that no valid plan of length s exists. This contradicts the validity of the POP generated by the planner, since the shortest possible solution would be longer than s.
The fact that the longest path cannot be shorter than the length of the POP can be deduced by observing that, if indeed l < s, then it would be possible to eliminate at least one extra precedence constraint in the
completePOP, which contradicts the fact that the completePOP is a minimal-constrained deordering of the POP.
In one statement, this proves that PG-based planners, by minimizing the critical path, in fact maximize the concurrency with respect to the causal model of the problem.
4 This category includes all those planners which maintain a planning graph representation of the search space (i.e., they search in the space of plans), without referring to the particular solution extraction algorithm they employ (exploring the planning graph, casting it as a SAT problem and so on).
5 Planners such as BlackBox cast the planning graph to be searched as another problem, such as a SAT formula or a CSP, but this distinction does not affect the generality of the observations we make here.
It is interesting to notice how the AdaptPlan procedure shown earlier ignores the partial-ordering information of the initial solution produced by the planner. Rather, it "relies" on the fact that the plan is optimal with respect to the number of parallel steps. In other words, whereas any type of planner may be used to produce the initial POP, the quality of the final solution is negatively affected by the non-optimality (with respect to the number of parallel steps) of a generic planner.
While the performance details of this implementation of the N-CBA are outside the scope of this paper, some preliminary experiments show that completePOPs produced from PG-derived plans invariably exhibit high concurrency (i.e., a shorter critical path), while those produced from HS-derived plans tend to be highly sequential.
Clearly, a more tightly integrated system is often necessary, especially because the causal and time/resource-related aspects of the domain are often strongly dependent. Nonetheless, it is interesting to notice how the basic properties of solving ideas and their relative adherence to the philosophy of integration are somewhat independent of when the information sharing occurs (at the partial-plan level, at every decision point and so on); rather, they depend on which information is shared between the two solving engines. It is clear already in the N-CBA that the most effective solution seems to be that of adopting strategies which are capable of propagating plans rather than states, in order to ensure fast reactivity when it comes to taking decisions.
In the previous sections we have motivated the use of a PG-based planner in conjunction with a CSP scheduling tool. The most primitive approach to planning and scheduling integration is clearly the N-CBA, an approach in which information sharing occurs only once during the solving process. Given the coarseness of this integrated solving strategy, our opinion is that more sophisticated implementations of integrated solvers should be compared against the N-CBA.6 In fact, the overall performance of an integrated architecture can only get better with more frequent information sharing events between the two solvers.
6 Assuming the two aspects of the problem are distinguishable in the benchmark domains.
4 The N-CBA as a Lower Bound
In this section we give a brief overview of an implementation of the N-CBA which makes use of established off-the-shelf components for the planning and scheduling modules. The choice of these components (or better, of the planning system) is strongly grounded in the previous considerations. Our aim is to establish a basic loosely-coupled system according to the N-CBA, which can to some extent be taken as the "best we can do" within the scope of loosely-coupled integrated reasoning.
The planning system which has been used is BlackBox (Kautz & Selman 1999), a PG-based planner which combines the efficiency of planning graphs with the power of SAT solution extraction algorithms (McAllester, Selman, & Kautz 1997; Moskewicz et al. 2001). This planner has done very well in recent planning competitions and, in the category of planning paradigms which seem fit for planning and scheduling integration, it can certainly be considered one of the best choices. The scheduler selected for use in this implementation of the N-CBA is O-Oscar (Cesta, Oddi, & Susi 1999), a versatile, general-purpose CSP solver which implements the ISES algorithm (Cesta, Oddi, & Smith 2002).
The result is an architecture which is effectively capable of solving quite a number of problems, and which has been extensively used in the study of multi-agent P&S in the context of the RoboCare project. It is not trivial to notice that even this primitive form of integration has some important advantages:
• the only necessary additional development is the AdaptPlan procedure, in other words, the information sharing mechanism;
• assuming the separability of causal and time/resource-related problem specifications, the system performs quite well, especially with respect to plan quality;
• it is a general-purpose tool, thanks to the PDDL planner input and to the high expressivity of the scheduling problem specification (Brucker et al. 1998; Bartusch, Mohring, & Radermacher 1988).
5 Conclusions
In this paper we have presented an analysis of the issues involved in planning and scheduling integration. Many approaches to this problem have been developed, yet some fundamental aspects still remain to be fully explored. The work we have presented is motivated by the fact that, while independent research in planning and scheduling has evolved very rapidly, the study of the combined approach is still trying to deal with some basic issues.
Integrated architectures in general can be distinguished by the frequency at which information sharing occurs. While various degrees of integration are obtainable by "toggling" the frequency parameter, we have shown some fundamental properties of one end of the spectrum, namely loosely-coupled systems in which the output of the planning phase is fed into the scheduling subsystem, an approach we have qualified as Naïve
and Component-Based. In this context, we have seen
that the theoretical backbone of this form of integration lies in the accommodation of time and resource
constraints and deordering of the causal plan.
After defining the basic mechanisms of the N-CBA,
we have focused on which solving strategies are more
applicable in the integrated solver. In this context, we
have found that a desired property of the solvers is to
minimize the critical path thruogh the causal network of
activities (completePOP) which is reasoned upon by the
scheduler. Some theoretical considerations have lead us
to the conclusion that PG-based planners yield better
quality schedules in the loosely-coupled approach. This
is due to the fact that their optimality with respect to
number of parallel steps corresponds to a strong optimization with respect to the critical path in the completePOP.
The analysis of the N-CBA has shown that a variety of useful indications can be drawn with respect to the general issue of planning and scheduling integration; thanks to these considerations, we aim at furthering the study in this direction, looking in particular at CSP approaches to planning.
Acknowledgments
This research is partially supported by MIUR (Italian
Ministry of Education, University and Research) under
project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components). The authors are part of the Planning and Scheduling Team [PST] at ISTC-CNR and would like to thank the other members of the team for their continuous support.
References
Bäckström, C. 1998. Computational Aspects of Reordering Plans. Journal of Artificial Intelligence Research 9:99–137.
Bartusch, M.; Mohring, R. H.; and Radermacher, F. J.
1988. Scheduling Project Networks with Resource
Constraints and Time Windows. Annals of Operations
Research 16:201–240.
Blum, A., and Furst, M. 1997. Fast Planning Through Planning Graph Analysis. Artificial Intelligence 90:281–300.
Brucker, P.; Drexl, A.; Mohring, R.; Neumann, K.; and Pesch, E. 1998. Resource-Constrained Project Scheduling: Notation, Classification, Models, and Methods. European Journal of Operational Research.
Cesta, A.; Oddi, A.; and Smith, S. 2002. A Constraint-Based Method for Project Scheduling with Time Windows. Journal of Heuristics 8(1):109–135.
Cesta, A.; Oddi, A.; and Susi, A. 1999. O-Oscar: A Flexible Object-Oriented Architecture for Schedule Management in Space Applications. In Proceedings of the Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS-99).
Currie, K., and Tate, A. 1991. O-Plan: The
Open Planning Architecture. Artificial Intelligence
52(1):49–86.
Ghallab, M., and Laruelle, H. 1994. Representation
and Control in IxTeT, a Temporal Planner. In Proceedings of the Second International Conference on AI
Planning Systems (AIPS-94).
Jonsson, A.; Morris, P.; Muscettola, N.; Rajan, K.;
and Smith, B. 2000. Planning in Interplanetary Space:
Theory and Practice. In Proceedings of the Fifth Int.
Conf. on Artificial Intelligence Planning and Scheduling (AIPS-00).
Kambhampati, S.; Parker, E.; and Lambrecht, E.
1997. Understanding and Extending Graphplan. In
Proceedings of ECP ’97, 260–272.
Kautz, H., and Selman, B. 1999. Unifying SAT-Based
and Graph-Based Planning. In Minker, J., ed., Workshop on Logic-Based Artificial Intelligence, Washington, DC, June 14–16, 1999. College Park, Maryland:
Computer Science Department, University of Maryland.
McAllester, D.; Selman, B.; and Kautz, H. 1997. Evidence for Invariants in Local Search. In Proceedings of
the Fourteenth National Conference on Artificial Intelligence (AAAI’97), 321–326.
Moskewicz, M.; Madigan, C.; Zhao, Y.; Zhang, L.; and
Malik, S. 2001. Chaff: Engineering an Efficient SAT
Solver. In Proceedings of the 38th Design Automation
Conference (DAC’01).
Muscettola, N.; Smith, S.; Cesta, A.; and D’Aloisi, D.
1992. Coordinating Space Telescope Operations in an
Integrated Planning and Scheduling Architecture. IEEE Control Systems 12(1):28–37.
Path Planning in a domestic dynamic environment
Alessandro Farinelli, Luca Iocchi and Daniele Nardi
Dipartimento di Informatica e Sistemistica
Università di Roma “La Sapienza”
Via Salaria 113, 00198, Roma, Italy
{farinelli,iocchi,nardi}@dis.uniroma1.it
Abstract
In this article we describe a path planning method for dynamic environments (LPN-DE) and its application in the RoboCare scenario. A description of the LPN-DE method, already implemented in the RoboCup environment, is given, and experiments in both the RoboCup and RoboCare environments are presented.
1 Introduction
Path planning is a fundamental task for autonomous
mobile robots. Every application that involves the
use of an autonomous mobile robot has to deal with
path planning and obstacle avoidance. Examples of such applications are: air or naval traffic control, exploration and work in hostile environments (such as subsea or space), people rescue during disasters or dangerous situations, office work, service robots, robotic soccer, etc. Path planning is a critical task for the success of these applications, and many different approaches can be found in the literature to tackle the problem (Latombe 1991). Well-known techniques based on road-map construction, cell decomposition, or artificial potential fields are widely used.
In many applications the robots have to cope with a dynamic environment, in which the problem of path planning and obstacle avoidance becomes much harder, because the robots have to take into account that the configuration of the work space changes as time flows. Different solutions for path planning in dynamic environments have been proposed in the literature. A first group of methods does not take into account any explicit representation of time and is mostly focused on extending or adapting a standard path planning method proposed for static environments. The goal is to have a very fast algorithm to plan trajectories in a static obstacle configuration and to re-plan the trajectories at a fixed time interval to take into account environmental changes (Konolige 2000; Oriolo, Ulivi, & Vendittelli 1996; Borenstein 1991; Kreczmer 1999; Fujimori, Nikiforuk, & Gupta 1997). Another group of works is
Copyright © 2003, The RoboCare Project — Funded by MIUR L. 449/97. All rights reserved.
based on explicitly considering the time dimension during the path planning process. Some of these works rely on the assumption of knowing in advance the dynamic evolution of the environment (Fiorini & Shiller 1993; Kindel et al. 2000). As complete knowledge of the dynamics of the environment is not available in most cases, other authors have studied the possibility of predicting the future evolution of the environment in order to plan the trajectories (Yamamoto, Shimada, & Mohri 2001; Miura & Shirai 2000; Huiming & Tong 2001; Yung & Ye 1998).
In this paper an approach to the problem of path planning in dynamic environments is described. The general idea of the method is to integrate the obstacles' dynamic characteristics into the planning method by changing their representation in the robot's configuration space. This is a general approach already used in the literature (Fiorini & Shiller 1993; Yamamoto, Shimada, & Mohri 2001), and it can be used with different trajectory planning methods. The effectiveness of the approach depends on the specific planning method, and relies on how the information about the obstacle dynamics is taken into account. The method we present has been developed and experimented with in the RoboCup environment (Farinelli & Iocchi 2003), and finally adapted and tested in the RoboCare domain.1
The basic approach from which the present work started is described in (Konolige 2000) and will be referred to in this paper as LPN. The LPN method has very good performance for a robot with few degrees of freedom and is able to compute safe and effective paths in cluttered environments. However, by not considering the dynamics of the obstacles, the method often results in undesired behavior of the robot when facing moving obstacles. The LPN method for Dynamic Environments, or LPN-DE, tested in the RoboCup scenario, gave very good results in a very dynamic environment, and its application in the RoboCare domain seems to be very promising.
The LPN-DE method is presented in Section 2. In Section 3 the method evaluation and experimental results in the RoboCup and RoboCare environments are presented. Finally, conclusions are drawn in the last section.

1 The RoboCare project home page is http://robocare.ip.rm.cnr.it/
2 LPN-DE Gradient Method

The LPN-DE method is based on the LPN (Linear Programming Navigation) gradient method presented in (Konolige 2000).
The LPN gradient method is based on a numerical
artificial potential field approach. The method samples
the configuration space and assigns a value to every
sampling point using a linear programming algorithm.
The values of the sampling points are the sampled values of a navigation function; this means that the robot can find an optimal path by simply following the descending gradient of the navigation function to reach the goal. The method can take as input a set of goal points; the final point of the computed path will be the best one with respect to the cost function discussed below. In order to build the navigation function, a path cost must be defined. A path is defined as an ordered set of sampling points

P_n = \{p_n, p_{n-1}, \ldots, p_0\}    (1)
such that:
• p_i ∈ R²;
• ∀ i = n, ..., 1, p_i must be adjacent to p_{i-1}, either along the axes of the work space or diagonally;
• ∀ i, j, if i ≠ j then p_i ≠ p_j;
• p_0 must be in the set of the final configurations;
• ∀ i = n, ..., 1, p_i must not be in the set of the final configurations.
Given a point p_k, a path that starts from p_k and reaches one of the final configurations p_0 will be represented as P_k = \{p_k, \ldots, p_0\}. A cost function for a path P is an arbitrary function F : P → R, where P is the set of paths. This function can be divided into the sum of an intrinsic cost, due to the fact that the robot is in a certain configuration, and an adjacency cost, due to the cost of moving from one point to the next one:

F(P_k) = \sum_{i=0}^{k} I(p_i) + \sum_{i=0}^{k-1} A(p_i, p_{i+1})    (2)

where A and I can be arbitrary functions.
Normally, I depends on how close the robot is to
obstacles or to “dangerous” regions, while A is proportional to the Euclidean distance between two points.
The value of the navigation function N_k, in a point p_k, is the cost of the minimum cost path that starts from that point:

N_k = \min_{j=1,\ldots,m} F(P_k^j)    (3)

where P_k^j is the j-th path starting from point p_k and reaching one of the final destinations, and m is the number of such paths.
Calculating the navigation function N_k directly for every point in the configuration space would require a very high computational time, even for a small configuration space. The LPN algorithm (Konolige 2000) is used to compute the navigation function efficiently. It is a generalization of the wavefront algorithm (Thrun et al. 1998; Arkin 1990) and is based on three main steps:
• assign value 0 to every point in the final configuration set and an infinite value to every other point;
• put the goal set points in an active list;
• at each iteration of the algorithm, operate on each point of the active list, removing it from the list and updating its 8-neighbors (expansion phase).
The expansion phase is repeated until the active list is empty. To update a point p we operate as follows (a sketch of the resulting procedure is given below):
• for every point q in the 8-neighbors of p, compute its value by adding to the value of p the moving cost from p to q and the intrinsic cost of q;
• if the new value for q is less than the previous one, update the value of q and put q into the active list.
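The following minimal sketch (ours, with illustrative names; not the authors' implementation) shows how the expansion phase could be realized on a grid-sampled configuration space:

```python
# Illustrative sketch (ours) of the LPN wavefront expansion described
# above, on a 2D grid with 8-connectivity.
import math

def lpn_navigation_function(grid_shape, goals, intrinsic_cost):
    """Compute the sampled navigation function N on a grid.

    grid_shape     -- (rows, cols) of the sampled configuration space
    goals          -- iterable of (r, c) final-configuration cells
    intrinsic_cost -- function (r, c) -> I(p) >= 0
    """
    rows, cols = grid_shape
    INF = float("inf")
    N = [[INF] * cols for _ in range(rows)]
    active = []
    for g in goals:                      # goals get value 0
        N[g[0]][g[1]] = 0.0
        active.append(g)
    while active:                        # expansion phase
        r, c = active.pop()
        for dr in (-1, 0, 1):            # visit the 8-neighbors
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                q = (r + dr, c + dc)
                if not (0 <= q[0] < rows and 0 <= q[1] < cols):
                    continue
                # adjacency cost A: Euclidean step length (1 or sqrt(2))
                step = math.hypot(dr, dc)
                value = N[r][c] + step + intrinsic_cost(*q)
                if value < N[q[0]][q[1]]:    # relax and re-activate
                    N[q[0]][q[1]] = value
                    active.append(q)
    return N
```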
The navigation function is computed according to the intrinsic cost of the sampling points in the configuration space. Suppose the obstacles in the workspace are given by a set of obstacle sampling points. Let Q(·) be a generic function and d(p) the Euclidean distance of the sampling point p from the closest sample point representing an obstacle; then I(p) = Q(d(p)). In order to compute d(p) for every sampling point, we may apply the LPN algorithm giving the obstacle sampling points as the final configuration set and assigning, in the initialization phase, a value 0 to the intrinsic cost of all the sampling points. Once d(p) (and then I(p)) is computed for every sampling point p, we can execute the LPN algorithm again to compute the navigation function.
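Under the same assumptions as the sketch above, the two-pass computation just described could look as follows (Q here is a hypothetical threshold-limited cost profile, chosen only for illustration):

```python
# Two-pass use of the sketch above: the first pass yields d(p)
# (obstacle cells as goals, zero intrinsic cost), the second pass
# yields the navigation function N. Names are illustrative.
def plan_costs(grid_shape, goals, obstacle_cells, delta=5.0, weight=10.0):
    d = lpn_navigation_function(grid_shape, obstacle_cells, lambda r, c: 0.0)
    def intrinsic(r, c):
        # Q(d): a simple limited intrinsic cost, 0 beyond the threshold delta
        return weight * (delta - d[r][c]) if d[r][c] < delta else 0.0
    return lpn_navigation_function(grid_shape, goals, intrinsic)
```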
The extension of the LPN method for application in Dynamic Environments (LPN-DE) is based on additional information about moving obstacles, which are represented by a velocity vector. Observe that we do not require complete knowledge of the dynamics of the environment in advance (which would not be possible in many cases), but only assume to know an estimate of the velocity vectors of the obstacles. Notice also that this information can be obtained by an analysis of sensor data, and that its precision is not critical for the method, which is robust with respect to errors in the evaluation of such velocity vectors. Moreover, the proposed extension does not depend on the technique used to compute the velocity vectors of the obstacles: how this information is obtained depends mainly on the specific sensors of the robot or on an external system, and our particular implementation in the RoboCare environment will be described later. We show that, by suitably taking into consideration the information about the velocity vectors in the planning method, our extension gives better results than the basic LPN algorithm.
In order to clarify how LPN-DE integrates the obstacle dynamics information into the planning process, we introduce the concept of influence region of an obstacle. This definition is based on the fact that the intrinsic cost I(p) is generally limited, i.e. it is 0 for points p such that d(p) > δ, for a given threshold δ.

Definition 1 The influence region for an obstacle O_i is the set of sample points p whose intrinsic cost I(p) is modified by the obstacle O_i.

Basically, the influence region of an obstacle O_i is the area surrounding the obstacle that we want to avoid, i.e. the set of points around O_i for which I(p) > 0. Considering the obstacle pointwise, an intrinsic cost function depending only on the distance d(p) will result in circular influence regions, as can be seen in Figure 1. The LPN-DE method described here defines a function I(p) resulting in influence regions as shown in Figure 2.

Figure 1: Simple influence region (LPN)

Figure 2: Extended influence region (LPN-DE)

As intuitively shown in Figures 1 and 2, LPN-DE is designed in such a way as to prefer trajectories for which the probability of collisions (or trajectory intersections) with moving obstacles is minimal. In fact, the function I(p) in LPN-DE modifies the shape of the influence region according to the information about the obstacle dynamics, as shown in Figure 2. Consequently, the behavior of the robot is more adequate to the situation, since the chosen path will be the one passing behind the moving obstacle (either another robot or a person), which is shorter and safer.

If a given point p belongs to the influence region of an obstacle O_i, the intrinsic cost function for p will depend not only on the distance from O_i, but also on the position of p with respect to the velocity vector of O_i. Therefore, the function I(p) will depend on:
• the distance d(p) of the point p with respect to O_i;
• the angle α(p) between the velocity vector and the line reaching the center of O_i;
• the module of the velocity, ||v||, of O_i.

So I(p) in LPN-DE is given by a function Q(d(p), α(p), ||v||). For a more detailed description of the calculations performed in the LPN-DE method, see (Farinelli & Iocchi 2003).

3 Experiments and Method Evaluation

In this section we present experiments made with our path planning method both in the RoboCup and in the RoboCare environment. For the RoboCup environment we show only the most interesting results achieved (see (Farinelli & Iocchi 2003) for a more detailed description of the experiments); for the RoboCare scenario we discuss the implementation of the method and the results achieved.

3.1 Performance Evaluation
The LPN-DE method has many advantages. First of all, the method finds optimal paths with respect to the cost function used. The known information about the environment (such as the field shape and the position of fixed obstacles) can be effectively taken into account and easily integrated with information coming in real time from the sensor readings. Moreover, the intrinsic cost function can be tailored in a very direct and simple manner to obtain different trajectories, and thus different behaviors of the robot. Finally, thanks to the low computational requirements, the sampling of the configuration space can be chosen so as to reach a given precision (10 cm in RoboCup and 6 cm in RoboCare) while keeping the path computation under real-time constraints. In particular, in the RoboCup environment the method is able to run in a cycle of 100 ms (including all the processing time required by the other modules developed for the soccer application) on an AMD K6 350 MHz, with a sampling space of 100 × 60 cells. In RoboCare we tested the method on a P4 2 GHz, with a sampling grid of 133 × 83 cells, keeping the computation time below 100 ms.
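As an illustration of such tailoring, here is a minimal sketch (our names and functional form, not the authors' implementation) of a velocity-aware intrinsic cost in the spirit of the Q(d(p), α(p), ||v||) introduced in Section 2; we take α as the angle between the obstacle velocity and the direction from the obstacle to the point, which is an assumption of ours:

```python
# Sketch (ours) of a velocity-aware intrinsic cost Q(d, alpha, |v|):
# the influence region is stretched in front of a moving obstacle
# (alpha near 0) and shrunk behind it (alpha near pi).
import math

def lpn_de_intrinsic_cost(d, alpha, speed, delta=0.6, weight=10.0, k=1.0):
    """d: distance from the obstacle; alpha: angle between the obstacle
    velocity and the direction obstacle -> point; speed: |v|."""
    effective_delta = delta * (1.0 + k * speed * math.cos(alpha))
    if d >= max(effective_delta, 0.0):
        return 0.0               # outside the (deformed) influence region
    return weight * (effective_delta - d)
```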
3.2 Experiments in RoboCup
In the RoboCup environment the experimental setting is composed of unicycle-like robots equipped with a color camera. The robots are supposed to be localized (i.e. they know their global position inside the field of play), and the information about obstacles is extracted using the particular color setting of the environment (obstacles are black, while the field background is green).
Figures 3 and 4 report a typical situation arising in our experimental setting. The figures represent the actual trajectories followed by the two robots, whose initial positions S1 and S2 are shown on the left side of the figures, while the target points G1 and G2 are on the right side.
In the first case both robots use the LPN method, and their behavior is not optimal, since one robot (robot 1) always tries to pass in front of the other one. The second case, in which both robots use the LPN-DE method, shows instead that the additional information about the obstacle dynamics has been properly exploited by the LPN-DE method to plan a better trajectory in the presence of moving obstacles. Notice also that even in this case the trajectories in Figure 4 are not optimal, as they would be if each robot knew in advance the trajectory planned by the other; thus the robots execute a small portion of their paths in parallel, as in the previous case. However, at a certain point one of the robots (specifically robot 1) is able to detect this situation and decides to pass behind the other robot, resulting in a more effective path.
3.3 Experiments in RoboCare
In the RoboCare environment the current experimental setting is composed of a unicycle-like Pioneer 3 robot, equipped with a ring of eight sonars positioned on the front of the robot. The main differences with respect to the RoboCup environment are: i) an unstructured environment, ii) obstacles with complex geometry, iii) less reliable sensors. The fact that the environment, an indoor domestic-like scenario, is not structured, and that the obstacles can have complex geometry (e.g., desks, chairs), entails that the input given to the path planning method cannot be high-level features, as in the RoboCup environment. Moreover, while the map of the environment is known in advance, unpredictable moving obstacles, such as persons moving in the environment, have to be taken into account. We
decided to represent the known obstacles in the environment as sampling points that approximate the real objects, while, in order to take into account moving and unpredicted obstacles, we integrated the known obstacle representation with the sonar readings. Having both sources of information allows us to compute effective paths that leverage the knowledge of the map, as well as to avoid unexpected obstacles due to moving persons or to possible localization errors caused by the low robustness of the sonar readings. The results have been very promising: the robot is able to move safely in the environment, avoiding moving persons, entering even very narrow passages, and finding effective paths in complex obstacle configurations.

Figure 3: LPN method

Figure 4: LPN-DE method
Figures 5 and 6 show two examples of the experiments performed. The lines represent the known obstacles, while the black dots near the lines represent the
sonar readings. It is possible to see that the computed
path (represented by the dots starting from the current
robot position) reaches the desired position avoiding
both the known obstacles and the sonar readings. In
particular, Figure 5 shows that the robot is able to find effective paths in complex obstacle configurations, while Figure 6 shows that our path planning method is suitable for navigation in narrow passages.
As for moving obstacles (especially persons), the LPN-DE policy can result in paths that are safer from the perspective of interaction with human beings. By integrating the obstacle dynamics into the path planning process as previously described, we are able to take into consideration the current direction of motion of the obstacle; when possible, the robot computes paths that pass behind the moving person, yielding the way to human beings and producing a gentler integration of movements between the robot and the persons. This is an important feature of the implemented system, since this project is specifically oriented to assisting elderly or disabled persons who may have movement problems.
While in the RoboCup environment the information on the obstacle dynamics is extracted using the on-board camera of the robot, in the RoboCare project we can use a fixed stereo vision system (Bahadori, Iocchi, & Scozzafava 2003) present in the environment to obtain a high-level feature-based representation of the moving obstacles and their dynamic characteristics (mainly the velocity vector). The fixed stereo vision system is able to reconstruct a top view of the environment and therefore can be effectively used to extract reliable information about the obstacle dynamics considered in the LPN-DE method, improving the effectiveness of the computed paths.

Figure 5: Navigation with complex obstacles

Figure 6: Navigation in narrow passages

4 Conclusions

In this article we have presented an approach to cope with dynamic environments that extends a very efficient path planning method presented in (Konolige 2000). This method has been chosen because it is appropriate for our testing environments (RoboCup and RoboCare). The basic idea of LPN-DE relies on the integration of information about the velocity of moving obstacles into the path planning algorithm. In particular, in the present implementation, the dynamics of the obstacles have been integrated in the LPN method by modifying the cost function used for computing the global navigation function. Although this extension does not guarantee optimal trajectories along the time dimension, the performed experiments show that the LPN-DE method performed better than LPN when facing fast-moving obstacles, while keeping the same computational requirements.

The LPN method and its extension have been implemented and tested on robotic platforms. The reported experiments show that better results can be obtained by integrating information on the obstacle dynamics into the path planning method, and that the proposed extension turned out to be effective in highly dynamic environments, actually improving the performance of the robotic platform. Moreover, the implementation of the LPN method in the RoboCare environment gave very promising results, and the implementation of the extension appears very interesting in order to obtain paths which are safer from the human being's perspective. In the near future, the robot will be equipped with a laser range finder; this will result in more accurate obstacle detection and more robust localization capabilities, improving the robustness of the overall navigation.

5 Acknowledgments

This research is partially supported by MIUR (Italian Ministry of Education, University and Research) under project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components).

References
Arkin, R. C. 1990. Integrating behavioral, perceptual and world knowledge in reactive navigation. Robotics and Autonomous Systems 6:105–122.
Bahadori, S.; Iocchi, L.; and Scozzafava, L. 2003. People and robot localization and tracking through a fixed
stereo vision system. In Proc. of First RoboCare Workshop.
Borenstein, J. 1991. The vector field histogram – fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics and Automation 7(3):278–288.
Farinelli, A., and Iocchi, L. 2003. Planning trajectories
in dynamic environments using a gradient method. In
Proc. of VII RoboCup Symposium.
Fiorini, P., and Shiller, Z. 1993. Motion planning in dynamic environments using the relative velocity paradigm. In Proceedings of the IEEE Int. Conference on Robotics and Automation, volume 1, 560–565.
Fujimori, A.; Nikiforuk, P. N.; and Gupta, M. M. 1997. Adaptive navigation of mobile robots with obstacle avoidance. IEEE Transactions on Robotics and Automation 13(4):596–601.
Huiming, Y., and Tong, S. 2001. A destination driven navigator with dynamic obstacle motion prediction. In Proceedings of the IEEE Int. Conference on Robotics and Automation.
Kindel, R.; Hsu, D.; Latombe, J.; and Rock, S. 2000. Kinodynamic motion planning amidst moving obstacles. In Proceedings of the 2000 IEEE Int. Conference on Robotics and Automation.
Konolige, K. 2000. A gradient method for realtime robot control. In Proceedings of the IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS).
Kreczmer, B. 1999. Application of parameter space discretization for local navigation among moving obstacles. In Proceedings of the First Workshop on Robot Motion and Control (RoMoCo '99), 193–198.
Latombe, J. C. 1991. Robot Motion Planning. Kluwer Academic Publishers.
Miura, J., and Shirai, Y. 2000. Modelling motion uncertainty of moving obstacles for robot motion planning. In Proceedings of the IEEE Int. Conference on Robotics and Automation (ICRA '00), volume 3, 2258–2263.
Oriolo, G.; Ulivi, G.; and Vendittelli, M. 1996. Path
planning for mobile robots via skeleton on fuzzy maps.
Intelligent Automation and Soft Computing.
Thrun, S.; Bücken, A.; Burgard, W.; Fox, D.; Fröhlinghaus, T.; Hennig, D.; Hofmann, T.; Krell, M.; and Schmidt, T. 1998. Map learning and high-speed navigation in RHINO. In AI-based Mobile Robots: Case Studies of Successful Robot Systems. Cambridge, MA: MIT Press.
Yamamoto, M.; Shimada, M.; and Mohri, A. 2001. On-line navigation of mobile robot under the existence of dynamically moving multiple obstacles. In Proceedings of the IEEE Int. Symposium on Assembly and Task Planning.
Yung, N. H. C., and Ye, C. 1998. Avoidance of moving obstacles through behaviour fusion and motion prediction. In Proceedings of the 1998 IEEE Int. Conference on Systems, Man, and Cybernetics, volume 4, 3424–3429.
Multi-scale Meshing in Real-time
Stefano Ferrari1, Iuri Frosio2,3, Vincenzo Piuri1, and N. Alberto Borghese2
1 Department of Information Technologies, University of Milano, Italy. {ferrari, piuri}@dti.unimi.it
2 Department of Computer Science, University of Milano, Italy. {frosio, borghese}@dsi.unimi.it
3 Department of Bioengineering, Politecnico di Milano
Abstract
A procedure for the construction of 3D surfaces from range data in real time is described here. The process is based on a connectionist model, named Hierarchical Radial Basis Functions Network (HRBF), which has proved effective in the reconstruction of smooth surfaces from sparse noisy points. The network's goal is to achieve a uniform reconstruction error, equal to the measurement error, by stacking non-complete grids of Gaussians at decreasing scales. A partitioning of the data in a voxel-like structure and the use of local operations produce a fast configuration algorithm, which allows reconstructing meshes from range data of 100k data points in less than 1 s on a Pentium III, 1 GHz, 256 MByte machine.
1 Introduction
3D models are becoming very common in a wide variety of applications. In particular, 3D models of human faces have recently been introduced for robust person identification (D'Apuzzo, 2003). These 3D models are usually produced starting from a set of 3D data points measured automatically on the subject's face. Although sampling can indeed be fast, the conversion of the point cloud into a 3D model requires a considerable amount of time, ranging from several minutes to hours for the most complex models.
In fact, due to measurement error, the interpolation of the points (for instance through Delaunay tessellation) would produce a wavy surface, useless for accurate measurements, and some sort of filtering is required. A possible solution (cf. Hoppe, 1992; Remondino, 2003) is based on fitting piecewise planar or polynomial patches locally to the range data set. More refined approaches are based on splines (Eck and Hoppe, 1996; Barhak and Fischer, 2001), where polynomial patches with a regular grid as support are employed. These approaches have difficulties in coping with data sets which have different point densities in different regions of the input domain, and in reproducing different levels of detail in different regions. Moreover, they are computationally intensive and not suitable for real-time operation.
A different approach is presented here, where a parametric surface (a linear combination of Gaussians) defined over a 2D grid is used to represent the surface. The parameters are computed through algebraic operations carried out locally on the data, making the configuration algorithm suitable for real-time implementation. Moreover, to add the finer details of the face, which are often circumscribed to a few regions (mouth, chin, eyes, nose), an adaptive scheme has been developed. This scheme automatically identifies these regions in the manifold and inserts clusters of Gaussians at smaller scales there. The model has been termed Hierarchical Radial Basis Function Networks (HRBF; Borghese and Ferrari, 2000; Ferrari et al., 2003), and its extension suitable for strict real-time operation is proposed here. By introducing the partitioning of the data points into a voxel structure, the multi-scale mesh construction becomes fast enough for quasi-real-time applications.
2. The HRBF Model
Let us suppose that the set of range data can be expressed as a 2½D data set, that is, as a height field: {z = S(x,y)}. In this case, the surface will assume the explicit analytical shape z = S(P).
Copyright © 2003, The ROBOCARE Project. All rights reserved. This research is partially supported by MIUR (Italian Ministry of Education, University and Research) under project RoboCare (A Multi-Agent System with Intelligent Fixed and Mobile Robotic Components).
In the HRBF (Hierarchical Radial Basis Function Network) model, such a surface is obtained by adding the output of a set of hierarchical layers, a_l(P), at decreasing scales:

s(P) = \sum_{l=0}^{M} a_l(P \mid \sigma_l)    (1)

where σ_l determines the scale of the l-th layer, and σ_l > σ_{l+1} holds. The layers {a_l(·)} can be regarded as a stack of grids.
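As a concrete reading of Eq. (1), the following minimal sketch (ours) evaluates an HRBF surface at a point by summing the layer outputs, each layer being a weighted sum of Gaussians on its own (possibly non-complete) grid, as detailed in Eq. (2) below:

```python
# Minimal sketch (ours) of the evaluation of an HRBF surface, Eq. (1):
# the outputs of all layers are added; higher layers may be sparse.
import math

def gaussian(p, center, sigma):
    dx, dy = p[0] - center[0], p[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (sigma * sigma))

def hrbf_height(p, layers):
    """layers: list of (sigma_l, [(center, weight), ...]) pairs,
    one per scale, with sigma decreasing from layer to layer."""
    return sum(w * gaussian(p, c, sigma)
               for sigma, grid in layers
               for c, w in grid)
```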
When the Gaussian, G(·), is taken as a base of the space of the surfaces, the output of each layer can be written as:

a_l(P \mid \sigma_l) = \sum_{k=0}^{N} w_{j,k} \, G(P - P_{j,k} \mid \sigma_l)    (2)

As can be seen from Eq. (2), the surface depends on a set of parameters: the number, N, the scale, σ_l, and the positions, {P_{j,k}}, of the Gaussians, also termed structural parameters, and the shape parameters: the weights {w_{j,k}}. Each grid, l, realizes a low-pass filter, which is able to reconstruct the surface up to a certain scale, determined by σ_l. Given a certain scale, σ_l, the grid side, ∆P_l, and consequently N and the {P_{j,k}}, can be derived by allowing a certain amount of overlap between adjacent Gaussian units. This is related to the relationship between the grid side (sampling rate) and the maximum frequency content (level of detail) which can be reconstructed by the layer (Borghese and Ferrari, 2000).

The G(·) are equally spaced on a 2D grid which covers the input domain of the range data: that is, the {P_{j,k}} are positioned at the grid crossings. The side of the grid is a function of the scale of that layer: the smaller the scale, the shorter the side length, the denser the Gaussians, and the finer the details which can be reconstructed. It has been shown in (Ferrari et al., 2003) that, given σ_l small enough, Eq. (1) can approximate arbitrarily well any continuous function with equi-limited derivatives, as most real surfaces are.

With these considerations the structural parameters can be determined, and the weights {w_{j,k}} are the only parameters left to be determined. According to digital filtering theory, these could be chosen equal to the surface height at the grid crossings: w_{j,k} = S(P_{j,k}). However, the sampled data points are usually not equally spaced, and the {S(P_{j,k})} are not available. To estimate them, a weighted average of the measured data points, {P_m}, is performed, where the weight decreases with the distance of P_m from P_{j,k}. It should be remarked that there is no point in considering the data points far from P_{j,k} in the estimate: this can be circumscribed to those points which lie in an appropriate neighborhood of P_{j,k}. This neighbourhood, called receptive field, A(P_{j,k}), is set, somewhat arbitrarily, as the square region centered in P_{j,k}, of side equal to ∆P_l. The estimate of S(P_{j,k}) is therefore carried out locally in space (inside the receptive field) and will be the core of the fast version of the algorithm. A possible estimating function is reported in Eq. (3), where the Gaussian function is used to weight the measured range points, P_m:

\tilde{S}(P_{j,k}) = \frac{\sum_{P_m \in A(P_{j,k})} S(P_m) \, e^{-\|P_{j,k} - P_m\|^2 / \sigma_l^2}}{\sum_{P_r \in A(P_{j,k})} e^{-\|P_{j,k} - P_r\|^2 / \sigma_l^2}}    (3)

If only one layer were adopted, a serious drawback would be introduced: if the scale is small enough to resolve the finest details, an unnecessarily dense packing of units is produced in all those regions which feature a low level of detail. In these regions there might even not be enough points to get the estimate in Eq. (3). A similar problem can be found in (Eck and Hoppe, 1996), for instance. A better solution is to adaptively allocate the Gaussian units, with an adequate scale and, consequently, adequate receptive fields, in the different regions of the input data domain.

To this scope, first a gross description of the surface is output by a first layer featuring a large scale. The residual is then analyzed locally, and non-complete grids, with lower scales, are stacked over the first one until the residual goes uniformly under the measurement error, obtaining a sparse approximation. More formally, the first grid outputs a rough estimate of the surface, a_1(P), at a large scale as:

a_1(P \mid \sigma_1) = \sum_{k=0}^{N} w_{j,k} \, G(P - P_{j,k} \mid \sigma_1)    (4)
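As an illustration of the local estimate of Eq. (3), here is a minimal sketch (ours; all names are illustrative) that computes the weight of a single grid crossing from the range points falling in its square receptive field:

```python
# Sketch (ours) of the local weight estimate of Eq. (3): a Gaussian-weighted
# average of the measured heights inside the receptive field.
import math

def estimate_weight(p_jk, points, sigma_l, half_side):
    """p_jk: (x, y) grid crossing; points: iterable of (x, y, z) range
    data; half_side: half of the receptive-field side (Delta_P_l / 2)."""
    num = den = 0.0
    for x, y, z in points:
        # receptive field A(P_jk): square region centered in P_jk
        if abs(x - p_jk[0]) > half_side or abs(y - p_jk[1]) > half_side:
            continue
        w = math.exp(-((x - p_jk[0]) ** 2 + (y - p_jk[1]) ** 2) / sigma_l ** 2)
        num += z * w
        den += w
    return num / den if den > 0 else None    # no points in the field
```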
For each of the range data points, a residual is computed as the difference between the measured value of the surface and the reconstructed one:

r(P_m) = S(P_m) - \sum_{j,k=0}^{N} w_{j,k} \, G(P_m - P_{j,k} \mid \sigma_1)    (5)

This residual consists of measurement noise plus the details which were not captured at the scale of that grid. The details will be added in the higher layers, captured by pools of Gaussians at smaller scales.
These details are added as follows. A second grid, featuring a scale smaller than the first one, is created. Somewhat arbitrarily, we usually choose σ_{l+1} = σ_l/2, as usually chosen in wavelet decomposition. This grid will not be full: Gaussians are inserted only where a poor approximation is obtained. This is evaluated, for each Gaussian, P_{j,k}, through an integral measure of the residuals inside the receptive field of that Gaussian, A(P_{j,k}). This measure, which represents the local Residual Error, R(P_{j,k}), is computed as the (normalized) L1 norm of the local residuals:

R(P_{j,k}) = \frac{\sum_{P_m \in RF(P_{j,k})} |r(P_m)|}{M_{RF(P_{j,k})}}    (6)

where M_{RF(P_{j,k})} is the number of range points inside the receptive field.
When R(P_{j,k}) is over threshold (larger than the measurement error), the Gaussian is inserted at the corresponding grid crossing of this second layer.

Grids are created one after the other until the Residual Error goes under threshold over all the input domain. The threshold is usually defined as the standard deviation of the measurement error. As a result, Gaussians at a smaller scale are inserted only in those regions where some details are still missing, resulting in a sparse approximation. Moreover, the number of layers is not given a priori, but is the result of the configuration procedure: the introduction of a new layer stops when the residual error is under threshold over the entire domain (uniform approximation). A schematic sketch of this loop is given below.
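The sketch (ours; the three callables are illustrative stand-ins for the operations of Eqs. (3)-(6), not the authors' code):

```python
# Schematic sketch (ours) of the HRBF configuration loop: grids at
# halved scales are stacked until the local residual error R(P_jk)
# falls below the measurement-noise threshold over the whole domain.
def configure_hrbf(points, sigma, noise_std, new_layer, residual_error):
    """new_layer(points, sigma, where) builds a (partial) grid layer;
    residual_error(layers, points, sigma) maps crossing -> R(P_jk)."""
    layers = [new_layer(points, sigma, where=None)]   # full first grid
    while True:
        R = residual_error(layers, points, sigma)
        over = {p for p, err in R.items() if err > noise_std}
        if not over:                  # uniform approximation reached
            return layers
        sigma /= 2.0                  # halve the scale, as in wavelets
        layers.append(new_layer(points, sigma, where=over))
```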
3. Fast Configuration of HRBF Surfaces
To obtain a fast processing scheme on a sequential machine, locality has been fully exploited. The data are partitioned into voxels by a smart organization of their coordinates. Let us suppose that the range data points are stored in a one-dimensional vector: they will be arranged such that their position in the array reflects their position in space. In particular, points belonging to the same voxel will lie close inside the array. Each voxel, V, will be a structure which contains the number of data points that lie inside that voxel, N_v, and a pointer to the position in the array where the first point of that voxel is memorized, ptr_v. The arrangement of the data guarantees that all the other data points belonging to that voxel lie in adjacent positions: all the points belonging to a voxel can be retrieved easily from N_v and ptr_v.
This subdivision scheme can be efficiently used to compute the parameters in Eq. (3) and Eq. (6). If we accept the voxel as an approximation of the receptive field, and we align the voxels with the grid mesh supports, the points which lie inside a voxel also lie inside the receptive field of a Gaussian. The same partitioning scheme can be iterated for the higher layers using a quadtree-like subdivision: each voxel (father) is subdivided into four voxels (sons) of half size, and the points belonging to each of these four voxels can be obtained by sorting only the points contained in the father voxel. The rearrangement of the points is obtained by an in-place partial sorting algorithm, a variant of Quicksort, in which the pivot value is the mean value of the cell (Hoare, 1961). The partitioning scheme is illustrated in Fig. 3 and sketched in code below.
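A minimal sketch (ours; all names are illustrative) of the voxel structure and of the in-place partition:

```python
# Sketch (ours) of the voxel structure described above: the points
# live in one array, and each voxel owns a contiguous slice of it.
from dataclasses import dataclass

@dataclass
class Voxel:
    ptr: int   # index of the voxel's first point in the point array
    n: int     # number of points inside the voxel

def split_voxel(points, v, axis, pivot):
    """In-place partial sort (Quicksort-style partition) of the slice
    owned by voxel v around 'pivot' (e.g. the cell's mean value) on
    'axis'; returns the two son voxels along that axis."""
    lo, hi = v.ptr, v.ptr + v.n - 1
    while lo <= hi:
        if points[lo][axis] < pivot:
            lo += 1
        else:
            points[lo], points[hi] = points[hi], points[lo]
            hi -= 1
    return Voxel(v.ptr, lo - v.ptr), Voxel(lo, v.ptr + v.n - lo)
```

Applying the partition on both axes in turn yields the four half-size sons of the quadtree subdivision.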
This pre-processing has allowed us to obtain a processing time of 1.78 s, averaged over 20 trials on a Pentium III 1 GHz machine, for face models constituted of 10–30k polygons. The amount of overhead added by data partitioning was negligible, the pre-processing time being one order of magnitude smaller than the network configuration time.
4. From HRBF Surfaces to HRBF Meshes
The output of the HRBF network is a multi-scale continuous surface. To be visualized by graphical hardware, this surface has to be digitized, that is, converted into a multi-scale mesh. One possibility is to densely sample the surface and tessellate it; this would produce an unnecessarily dense mesh. A better result is obtained by exploiting the differential properties of the HRBF surface, producing a mesh which is denser in those regions where the geometry contains more details.

We start by sampling the reconstructed surface at the grid crossings of the first grid. These points, {V1}, constitute a first ensemble of mesh vertexes. Notice that these points are obtained as the sum of the outputs of all the four (and in general M) layers (Eq. (1)). The adequacy of the resulting mesh is evaluated by analyzing the approximation error: we make the mesh denser (in vertexes) where the approximation error is higher. To this scope, the height of the reconstructed surface is evaluated at the mid-points between two grid crossings: z_b = S(P_b), P_b = (P_{j,k} + P_{j+1,k})/2. z_b is then compared with the piecewise approximation, and the difference is computed as d_b = z_b - (S(P_{j,k}) + S(P_{j+1,k}))/2. If this difference is over threshold, the point (P_b, z_b) is added as a vertex of the model.
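A minimal sketch (ours) of this adaptive vertex insertion, for one pair of adjacent grid crossings:

```python
# Sketch (ours) of the adaptive refinement of Section 4: a mid-point
# vertex is added when the HRBF surface deviates from the piecewise-
# linear interpolation by more than a threshold.
def refine_edge(S, p1, p2, threshold):
    """S: callable evaluating the HRBF surface height at an (x, y)
    point; p1, p2: two adjacent grid crossings."""
    pb = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    zb = S(pb)                            # reconstructed surface height
    db = zb - (S(p1) + S(p2)) / 2.0       # deviation from the chord
    return (pb, zb) if abs(db) > threshold else None
```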
5. Conclusion
The HRBF model was derived in the artificial intelligence domain, where the problem of fitting a surface to range data is studied within the broad domain of multi-variate approximation (Girosi et al., 1995). The main characteristic of the model is its ability to reconstruct a 3D surface with no iteration on the data, therefore allowing fast computation of the configuration parameters. The closest approach to ours is based on stacking grids of B-splines (Lee et al., 1997). The main difference is that, in the HRBF model, the grids in the superior layers are not complete: Gaussian units are inserted in clusters where the residual is over threshold. This allows coping with range data of different densities and different detail content, and allocating units where they are most needed.
The data pre-processing presented here allows placing the data in the input array such that the points inside the receptive field of each Gaussian can be directly addressed without any sorting. This allows efficiently implementing the computation locally on the data and achieving real-time meshing on sequential machines. The price to be paid is in terms of overhead in memory allocation and computing time. Memory overhead scales with the number of voxels. As many points have to be used to produce a consistent estimate of the network parameters (Eq. 6), the memory overhead can be assumed to be less than one order of magnitude. Computing time overhead is negligible, being experimentally measured to be one order of magnitude smaller than the configuration time.
References
D'Apuzzo N., 2003: Three-dimensional human face feature extraction from multi-images, Proc. VIth Optical 3D Measurement Techniques Conference, Zurich.
Levoy M., Pulli K., Curless B., Rusinkiewicz S., Koller D.,
Pereira L., Ginzton M., Anderson S., Davis J., Ginsberg J.,
Shade J., and Fulk D., 2000: “The Digital Michelangelo
Project: 3D Scanning of Large Statues,” Proc. ACM
SIGGRAPH 2000, pp. 3-12.
Rusinkiewicz S., Hall-Holt O., Levoy M., 2002: “3D realtime model acquisition,” Proc. ACM SIGGRAPH 2002, pp.
438-446.
Pfister H., Zwicker M., Van Baar J., and Gross M., 2000:
Surfels: Surface Elements as Rendering Primitives, Proc.
ACM SIGGRAPH 2000, pp. 335-342.
Hoppe H., 1992: Surface reconstruction from unorganized
points. Proc. ACM SIGGRAPH 1992, pp. 71-78.
Remondino F., 2003: From point cloud to surface: the modeling and visualization problem, Int. Arch. Photogr., Remote Sensing and Spatial Inform. Sciences, Vol. XXXIV-5/W10, pp. 107-112.
Eck M. and Hoppe H., 1996: Automatic Reconstruction of B-spline Surfaces of Arbitrary Topological Type, Proc. ACM SIGGRAPH 1996, pp. 325-334.
Barhak J. and Fischer A. 2001: Parameterization and
Reconstruction from 3D Scattered Points Based on Neural
Network and PDE Techniques, IEEE Trans. Visual.
Computer Graphics, vol. 7(1), pp. 1-16.
Girosi F., Jones M. and Poggio T. 1995, “Regularization
theory and Neural Networks Architectures,” Neural
Computation 7, pp. 219-269.
Borghese N.A. and Ferrari S., 2000: A portable modular system for automatic acquisition of 3D objects, IEEE Trans. on Instrumentation and Measurement, vol. 49, pp. 1128-1136.
Ferrari S., Maggioni M., and Borghese N.A., 2003: Multi-Scale Approximation with Hierarchical Radial Basis Functions Networks, IEEE Trans. on Neural Networks, in press.
Borghese N.A., Ferrigno G., Baroni G., Savarè R., Ferrari S.
and Pedotti A., 1998: Autoscan, a flexible and portable
scanner of 3D surfaces. IEEE Computer Graph. Appl., vol.
18(3), pp. 2-5, May/June.
Hoare C.A.R., 1961: Algorithms 64: Quicksort,
Communications of the ACM, vol. 4(7), pp. 321.
Lee S., Wolberg G., and Shin S.Y., 1997: Scattered data interpolation with multilevel B-splines, IEEE Trans. Visualiz. and Computer Graphics, vol. 3(3), pp. 228-244.
Environmental adaptation strategies
and attitudes toward new technologies in the elderly
Maria Vittoria Giuliani, Ferdinando Fornara, Massimiliano Scopelliti, Edoardo Muffolini
Institute of Cognitive Sciences and Technologies 1
Viale Marx, 15, 00137 Rome, Italy
[email protected], [email protected]
Abstract
Decline of physical and cognitive functions in old age has
shown different trends among individuals. Searching for
predictors of a “successful ageing” has been the main goal of
researchers in this field.
Strategies of adaptation have been among the most studied psychological patterns mediating between environmental demands and the individual's response.
Brandtstadter and Renner (1990) distinguished two general coping strategies to maintain life satisfaction: assimilation, involving active modification of the environment in order to reach personal goals, and accommodation, involving a more passive acceptance of life circumstances and obstacles. Following this distinction, adaptive strategies can be placed along a continuum from the most assimilative to the most accommodative.
Referring to daily activities related to one's home, Slangen-de Kort and colleagues (1998) made a distinction between adaptation of: a) the physical environment (the most assimilative strategy), b) the social environment (divided into formal and informal help), and c) the person him- or herself (the most accommodative strategy).
Following the conceptual framework of Slangen-de Kort et al. (1998), the present study addresses a further sub-category, which is the use of technological aids as a specific assimilative strategy for adapting one's own physical environment.
In order to verify which kinds of factors are the best predictors of strategy choice, sense of competence, attitudes toward new technologies, and home modifications, a questionnaire has been designed. It includes: a) a set of eight
scenarios, each of them describing an old person (a man in the
male-version, a woman in the female-version) who finds
difficulties in coping with a specific situation; b) a set of eight
instrumental activities of daily living; for each, respondents
are asked to indicate their degree of autonomy, their ease in
performing it, and their satisfaction with the way in which it is
performed; c) a set of scales and items on home safety and
comfort, attitudes and behavioural intentions toward home and
technological modifications, and attitudes toward new
technologies; d) a set of items on home and personal
indicators.
Up to now, about half of the planned sample (about one hundred participants divided into two age groups: from 65 to 75 and over 75) has been interviewed.
1 Introduction
This contribution summarises an on-going study on
adaptation and coping strategies in later life, with
particular reference to the potential usefulness of
technological devices for older people.
Research literature on problems related to ageing has
stressed the general progressive decline of life
functions, both physical (e.g., overall health, eyesight,
hearing, motion, disposition to illness) and cognitive
(e.g., memory, attention, problem-solving and learning
abilities), as people get older.
However, the decline process shows different trends according to individual differences. Given that genetic differences alone are not able to explain the broad variety on a continuum between old people who obtain good performances in life functions and people who definitely do not, great attention has recently been paid to psychological, personal, social and environmental moderators which can predict a "successful ageing" (Rowe and Kahn, 1987; Femia, Zarit, and Johansson, 1997).
There is empirical evidence that older adults' functional status is related both to their sense of psychological well-being and to their physical well-being.
Self-efficacy (McAvay, Seeman, and Rodin, 1996), mastery and perceived control (Rodin, 1990), competence (Diehl, 1998), and adaptation and problem-solving strategies (Slangen-de Kort, Midden, and van Wagenberg, 1998) are some of the main psychological patterns previously studied as predictors of elderly well-being.
Self-efficacy is defined as an individual’s assessment of
her or his ability to perform behaviours in specific
situations (Bandura, 1977). Research literature has
shown that stronger self-efficacy beliefs are related to
maintenance of health-promoting behaviours and more positive health outcomes (McAvay et al., 1996).

Copyright © 2003, The ROBOCARE Project. Funded by MIUR L. 449/97. All rights reserved.
The constructs of both mastery and perceived control
over events imply that individuals who believe that they
can cause the events to occur (by organizing the
necessary resources or employing the appropriate
behaviours) modify the subjective meaning of
experience (Pearlin and Schooler, 1978). Mastery and
perceived control seem to be associated with better
functioning or improved outcomes in later life (Roberts,
Dunkle, and Haug, 1994).
“Everyday competence” (see Lawton, 1982; Willis,
1996) refers to one’s ability to perform a broad array of
activities which are considered essential for independent
living. In general, lower everyday competence is
associated with both lower self-esteem and life
satisfaction (Kuriansky, Gurland, Fleiss, and Cowan,
1976), with greater use of home health care services
(Wolinsky, Coe, Miller, Prendergast, Creel, and
Chavez, 1983), greater risk of hospitalization and
institutionalization (Branch and Jette, 1982) and higher
mortality (Keller and Potter, 1994). Lawton (1982)
suggested that higher competence is associated with
greater independence from the behavioural effects of
environmental press. This has been termed the “docility
hypothesis”, inferring that the lower the individual's
competence, the less the ability to adapt to
environmental demands. The study of competence in
elderly populations has often been related to the
individual’s ability to perform activities of daily living
(ADL; e.g. see Czaja, Weber, and Nair, 1993) such as
dressing, moving, shopping, using the phone, managing
money and so on.
Adaptation may occur when environmental demands
exceed the individual’s resources (Lazarus and
Folkman, 1984), when there is a lack of congruence
between the individual’s needs and life situation
(Kahana, 1982), when there is stress between the
perceived self and the perceived environment (Lawton
and Nahemow, 1973).
Lawton (1989) indicated two classes of strategies to
eliminate perceived discrepancies between the actual
and the desired course of development; the first implies
environmental “proactivity”, i.e. the tendency to adjust
life circumstances to personal preferences; conversely,
the second refers to environmental “reactivity”, i.e. the
tendency to adjust personal preferences to situational
constraints.
Brandtstadter and Renner (1990) distinguished two
general coping strategies to maintain life satisfaction:
assimilation, involving active modification of the
environment in order to reach personal goals, and
accommodation, involving a more passive acceptance
of life circumstances and obstacles. Following this
distinction, adaptive strategies can be placed along a continuum from the most assimilative to the most
accommodative ones. Some studies (Wister, 1989; Brandtstadter and Renner, 1990) showed that old people tend to shift from assimilative to accommodative strategies as age increases. In any case, the use of both these strategies was positively related to life satisfaction.
A more articulated picture, grounded in environmental psychology, was provided by Slangen-de Kort and colleagues (1998), who focused on a categorization of the object or activity that is adapted. Referring to daily activities related to one's own home, these authors made a distinction between adaptation of: a) the physical environment (i.e., modification of the home, use of assistive devices), b) the social environment (divided into formal help, e.g., paid housekeeping, and informal help, e.g., help from friends), and c) the person him- or herself (e.g., changes of behavior, "give-up" reaction).
Strategies of adaptation of the physical environment are
considered as the most assimilative and proactive,
whereas strategies of personal adaptation (particularly
the “give-up” reaction) are categorized as the most
accommodative and reactive ones.
Adaptations of the social environment can be seen as
most accommodative, since they imply giving up a goal
that is relevant to most people, that is independence or
autonomy. Requiring formal help such as public,
volunteer or paid assistance represents a more active
and goal-directed behaviour aiming at modifying the
situation, whereas relying on friends and relatives
mirrors a more dependent and accommodative choice.
Focusing on resources provided by the physical environment, some studies (Wister, 1989; Slangen-de Kort et al., 1998) investigated how the home environment can afford assimilative and proactive coping strategies in the elderly. Results showed that persons judging their home as more "adaptable" were more likely to choose an assimilative strategy than an accommodative one.
2 Aims
Following the conceptual framework of Slangen-de Kort et al. (1998), the present study addresses a further issue, which is related to new technologies. More specifically, the use of technological aids is added as a specific assimilative choice for adapting one's own physical environment. Furthermore, the investigation of attitudes and behavioural intentions toward new technologies is one of the objectives of this study.
In general, the present contribution focuses on a series of tools that are being used for on-going research on adaptation strategies in the elderly.
The main objectives of the research consist in finding answers to the following questions.
Which personal (i.e., age, gender, education),
psychological (i.e., perceived health, competence,
openness to home changes), social (i.e., household),
environmental (i.e., home safety and comfort) and
situational (i.e., typology of problems) factors are most related to the choice of adaptation strategies in different situations? Adaptation strategies vary from purely accommodative to purely assimilative, using personal, social, environmental and technological resources.
Are a higher sense of independence and sense of competence related to higher satisfaction with the outcomes of activities of daily living? Are there any differences in this respect between more cognitive and more body-motion activities?
Which personal, psychological, social and environmental factors are associated with attitudes and behavioural intentions towards modifications of one's own home, both in general and in the technological sense?
Which factors weigh most in predicting attitudes toward
new technologies?
3 Tools
Scales and items to measure each of the variables above
were assembled into a paper-and-pencil questionnaire,
which was prepared in two versions, according to
gender.
The questionnaire is made up as follows.
a) The first section comprises a set of eight
hypothetical scenarios, each of them describing an old
person who finds difficulties in coping with a specific
situation. The eight situations are the following: 1)
feeling unsafe to go to a friend’s house to play cards; 2)
having hearing difficulties in using the telephone; 3)
forgetting when to take daily medicines; 4) eyesight
difficulties in reading; 5) housekeeping; 6) getting in
and out of the bathtub; 7) fear of ill-intentioned intruders getting into the home; 8) feeling unsafe about house accidents.
The respondents are asked to suggest to the scenario’s
subject one among a few alternative solutions (five or
six), which represent adaptation strategies pertaining to
the following macro-categories (adapted from the
taxonomy of Slangen-de Kort et al., 1998): 1)
accommodation, i.e. give-up behaviour; 2) use of social
resources, searching for either 2a) formal help, i.e. from
volunteers, health-care associations, paid assistants, etc.,
or 2b) informal help, i.e., relatives, friends, neighbours,
etc.; 3) adaptation of the physical environment, either
3a) changing the spatio-physical setting, or 3b) using
technological assistive devices.
The alternative solutions vary on a continuum from
purely accommodative to purely assimilative, but are
presented in a random order in each scenario’s response
set.
b) The second section contains a set of eight
instrumental activities of daily living. Only activities
usually performed by both sexes among elderly people
were selected. Four of these activities require a cognitive effort (remembering to take a medicine, remembering to switch off the gas, managing one's own money, keeping oneself well-informed about what is happening in the world); the remaining four require an active effort (housekeeping or home maintenance, cutting toe nails, climbing or descending the stairs, kneeling or bending). Thus, the eight activities cover different problem/ability types, such as mnemonic functioning, performing complex cognitive tasks, homecare, self-care, and flexibility of body motion. For each
target activity respondents are asked to assess: 1) their
degree of autonomy on a dichotomous response scale
(by oneself/with help from others); 2) their ease in performing it on a 5-point Likert-type response scale (from "not at all" to "very much"); 3) their satisfaction with
the way in which the activity is performed on a 5-point
Likert-type response scale (from “not at all” to “very
much”).
c) The third section focuses on home environment. It
contains: 1) two short scales, measuring respectively
perceived safeness and perceived comfort of home
spaces (i.e., hall, kitchen, bathroom(s), bedroom(s),
living room) on a 5-point Likert-type response scale, and
one question about the room in which the respondent spends
most time each day; 2) a series of items measuring both
attitudes and intentions toward possible home changes
and modifications (response scales are both
dichotomous and Likert-type); 3) a scale measuring
attitudes toward new technologies by means of a 5-point
agree/disagree Likert-type response scale and finally
some items about attitudes and intention toward home
modifications in a technological sense.
d) The fourth section includes questions about: 1)
personal indicators such as socio-demographic factors
(gender, age, education, socio-economic level,
housemates, etc.), overall satisfaction with health and
satisfaction with life functions (sight, hearing,
memory, motion); 2) home indicators such as age and
type of building, number of rooms, kind of
technological appliances available.
4 Participants and procedure
The research agenda foresees about one hundred
respondents over 65 years of age. The sampling
procedure aims to get quotas as balanced as possible for
age (half of the sample aged between 65 and 75, the other
half over 75 years), gender and education. Up to now, about
half of the planned sample has been interviewed. Most
participants have been contacted and interviewed in
social centres for the elderly, in urban parks and in their
own houses (after a pre-contact). In spite of the self-report
format of the questionnaire, most copies have been filled in
by the researchers, who interviewed the participants and
recorded their responses on the questionnaire. This strategy
was adopted because it was the one preferred by many
participants (particularly the less educated ones).
5 Research agenda
Data collection should be concluded in the first half of
November 2003. Statistical processing will consist of
quantitative analyses (multivariate analyses using the
SPSS software) and more qualitative procedures
(correspondence analysis using the SPAD software).
Hence, the main outcomes of the present research are
expected before the end of 2003.
References
Bandura, A. 1977. Self-efficacy: Toward a unifying theory of behavioural change. Psychological Review, 84, 191-215.
Branch, L. C., and Jette, A. M. 1982. A prospective study of long-term care institutionalization among the aged. American Journal of Public Health, 72, 1373-1379.
Brandtstädter, J., and Renner, G. 1990. Tenacious goal pursuit and flexible goal adjustment: Explications and age-related analysis of assimilation and accommodation strategies of coping. Psychology and Aging, 5, 58-67.
Czaja, S. J., Weber, R. A., and Nair, S. N. 1993. A human factors analysis of ADL activities: A capability-demand approach. Journal of Gerontology, 48, 44-48.
Diehl, M. 1998. Everyday competence in later life: Current status and future directions. The Gerontologist, 38, 422-433.
Femia, E. E., Zarit, S. H., and Johansson, B. 1997. Predicting change in activities of daily living: A longitudinal study of the oldest old in Sweden. Journal of Gerontology, 52, 294-302.
Kahana, E. 1982. A congruence model of person-environment interaction. In M.P. Lawton, P.G. Windley, and T.O. Byerts eds., Aging and the Environment: Theoretical Approaches. Gerontological Monograph No. 7 of the Gerontological Society. New York: Springer, pp. 97-121.
Keller, B. K., and Potter, J. F. 1994. Predictors of mortality in outpatient geriatric evaluation and management clinic patients. Journal of Gerontology, 49, 246-251.
Kuriansky, J., Gurland, B., Fleiss, J. L., and Cowan, D. 1976. The assessment of self-care capacity in geriatric psychiatric patients by objective and subjective methods. Journal of Clinical Psychology, 32, 95-102.
Lawton, M. P. 1982. Time budgets of older people: A window on four lifestyles. Journal of Gerontology, 37, 115-123.
Lawton, M. P. 1985. The elderly in context: Perspectives from environmental psychology and gerontology. Environment and Behavior, 17, 501-519.
Lawton, M. P. 1989. Environmental proactivity in older people. In V.L. Bengston and K.W. Schaie eds., The course of later life: Research and reflections. New York: Springer, pp. 15-23.
Lawton, M. P., and Nahemow, L. 1973. Ecology and the aging process. In C. Eisdorfer and M.P. Lawton eds., The psychology of adult development and aging. Washington: APA, pp. 619-674.
Lazarus, R. S., and Folkman, S. 1984. Coping and adaptation. In W.D. Gentry ed., Handbook of Behavioral Medicine. New York: Guilford Press, pp. 282-325.
McAvay, G. J., Seeman, T. E., and Rodin, J. 1996. A longitudinal study of change in domain-specific self-efficacy among older adults. Journal of Gerontology, 51, 243-253.
Pearlin, L. I., and Schooler, C. 1978. The structure of coping. Journal of Health and Social Behavior, 19, 2-21.
Roberts, B. L., Dunkle, R., and Haug, M. 1994. Physical, psychological, and social resources as moderators of the relationship of stress to mental health of the very old. Journal of Gerontology, 49, 35-43.
Rodin, J. 1990. Control by any other name: Definitions, concepts, and processes. In J. Rodin, C. Schooler, and K.W. Schaie eds., Self-directedness: Cause and effects throughout the life course. Hillsdale, NJ: Erlbaum, pp. 1-17.
Rowe, J. W., and Kahn, R. L. 1987. Human aging: Usual and successful. Science, 237, 143-149.
Slangen-de Kort, Y. A. W., Midden, C. J. H., and van Wagenberg, A. F. 1998. Predictors of the adaptive problem-solving of older persons in their homes. Journal of Environmental Psychology, 18, 187-197.
Willis, S. L. 1996. Everyday cognitive competence in elderly persons: Conceptual issues and empirical findings. The Gerontologist, 36, 595-601.
Wister, A. V. 1989. Environmental adaptation by persons in their later life. Research on Aging, 11, 267-291.
Wolinsky, F. D., Coe, R. M., Miller, D. K., Prendergast, J. M., Creel, M. J., and Chavez, M. N. 1983. Health services utilization among the noninstitutionalized elderly. Journal of Health and Social Behavior, 24, 325-337.
Guests’ and caregivers’ expectancies towards new assistive technology in
nursing homes
M.V. Giuliani°, E. Muffolini°, G. Zanardi*, M. Scopelliti° & F. Fornara°
° Institute for Cognitive Science and Technology, CNR, Rome
*Consultant, Residenza Sanitaria Assistenziale "Nobile Baglioni", Villa d'Almè (Bergamo)
[email protected], [email protected], [email protected]
Abstract
The research investigated how guests (GS), internal
caregivers (IC) and external caregivers (EC) perceive the
importance of different areas of care in an RSA (nursing
home) and the contribution that could be made in those
areas by applying new technologies (NT). The data show
that the groups represent the specific areas differently: the
evaluations given by the GS underline that their main
interest is the potential recovery of a good degree of
autonomy through mobility, whereas the caregivers seem to
regard aspects connected to the other three areas as of more
consequence for a better quality of life for the guests.
Subdividing the caregivers according to their working
functions in the service (IC vs. EC) shows that the EC have
perceptions similar to those of the GS as regards the strictly
health-related and physiological aspects. Furthermore, the
tool highlights the different importance given by the two
groups of carers to the social aspects, deemed more
important to the guests' lives by the EC than by the IC.
The difference of interpretation between IC and EC
regarding the areas under study is confirmed by the result
obtained in the evaluation of technological aid in the
social-aspects dimension, where the two groups express an
evaluation in line with their respective roles: the EC judge
technological aid within the Social aspects area, that is, the
area in which they are specifically involved, as more useful.
1 Introduction
The marked ageing of the population is a widespread
demographic tendency in industrialized countries. The
result of the increase in life expectancy (Roush, 1996) is
the rising number of older people as compared to adults
and young people (Tuljapurkar, Li & Boe, 2000). As a
consequence, the management of such a trend is becoming
a key issue to address. A large number of social problems
arises around the systematic planning and arrangement of
services and infrastructures to fulfil the needs of the
elderly. The care of elderly people is unquestionably very
costly, and not only from an economic point of view: the
older people get, the more they need specific health care
services, such as nursing homes (RSA), hospitals,
treatments and medicines for chronic illness and the like.
Long-term care has probably emerged as one of the most
significant health and social policy issues, in that its
influence on the welfare state system is remarkably high.
The question of lowering social welfare costs
(Weingarten et al., 2002) is undoubtedly a central concern
in the policy of industrialized countries. To date, nursing
homes are the main solution to the problem of elderly
people who have lost their autonomy. As a consequence,
nursing homes represent the reference services for all
potential innovations which can be applied to elderly
people's care (Trabucchi, Brizioli & Pesaresi, 2002).
A new and promising direction to enhance assistance
towards elderly people with cognitive or mobility
impairments seems to be associated with the possibility of
using the advances in assistive technology also in a
domestic setting (Slangen-de Kort, Wagenberg & Midden,
1998). Up till now, the advances in technology have made
available a wide array of automatic devices, provided with
mechanisms of rising complexity and arranged to perform
different tasks.
A great challenge would be to use these innovative devices
to allow elderly people suffering from cognitive and/or
mobility impairments to continue living in their homes as
long as possible (Morini and Biocca, 2001). What still
needs to be understood is the potential impact of new
technologies (NT) in a completely different context, that is,
nursing homes, in which the final users are not only the
residents, but also the caregiving staff. The aim is to
investigate the most salient features of the health care
service in nursing homes, with particular reference to the
assessment of users’ needs and expectancies.
The environmental psychology literature has particularly
focused on elderly people's relationship with their
environment, both in nursing homes (Schulz, 1976;
Fontaine, 1982; Küller, 1991) and in domestic settings
(Christensen & Carp, 1987; Giuliani, 1991). On this point,
some studies have recently addressed the impact of
assistive technology in residential
environments, paying particular attention to social (Duffy,
2003) and psychological (Giuliani et al., forthcoming)
aspects.
2 Aims of the study
The aim of this study is the evaluation within an RSA of
possible areas for intervention using NT. Therefore, a
central concern is to analyse possible differences in
evaluation given by individuals taking part in the RSA
system. Three different groups of so-called "actors"
involved in the care service have been investigated:
1. Internal Caregivers / Social Workers (IC);
2. External Caregivers / Volunteers, Family members (EC);
3. Elderly Guests (GS).
Thus, two specific research questions have been singled out:
a. What is the evaluation of NT by the elderly residents?
b. Are there different opinions among the three groups
regarding the application of NT in the care service?
3 Materials and Methods
The study has been carried out in an RSA in a small town
in Northern Italy (Bergamo hinterland). Besides
accommodation and health services (nursing,
pharmaceutical and rehabilitation services), the RSA also
supplies social services (entertainment and socialising
activities). It
accommodates 64 people with different levels of cognitive
and physical impairments.
The sample consisted of 40 guests (34 females, 6 males;
average age: 82.3) with intact cognitive capacities (MMSE
≥ 24) and 40 caregivers divided into two sub-groups of
equal size: 20 internal caregivers (18 females, 2 males;
average age: 35.5) and 20 external caregivers (16 females,
4 males; average age: 35.5).
The two types of workers, although similar in their role of
care givers, differ notably concerning the nature of their
contribution in running the service. The internal care givers
are paid by the institution and characterised by their
socio-medical training; they are strictly organised according
to professional status, working hours and tasks (doctors,
nurses, rehabilitation experts, entertainers and socio-welfare
auxiliaries). They provide all of the basic
assistance, connected to legal obligations (health
assistance, psychomotor rehabilitation, personal hygiene
and cleanliness of the premises). They normally remain
within the nursing home services for a long period of time.
In our sample, the average period of service in elderly care
was 10.71 years.
The external care givers, on the contrary, provide voluntary
services which therefore notably differ from subject to
subject regarding working hours, skills, etc. Their
contribution is more concerned with interventions
regarding eminently personal/recreational aspects
(keeping the guests company, organization of
entertainment activities etc.). They can be members of
Volunteer Associations or are connected to the structure
through one or more family members resident in the
nursing home (Relatives), and normally remain in the
nursing home services for a shorter period of time. In our
sample, the average period of service in elderly care was
4.93 years.
We drew from the literature and from instruments in use to
evaluate the level of autonomy (the Instrumental Activities
of Daily Living scale, IADL, and the Activities of Daily
Living scale, ADL) and to focus on a series of "basic"
activities belonging to 4
areas important for the guests’ quality of life (Autonomy of
movement, Socialising, Personal hygiene, Physiological
and health needs).
A questionnaire was prepared with 20 items (5 items for
each area), each indicating a daily activity.
In the first section of the questionnaire both the GS and
caregivers (IC and EC) were asked to indicate the
importance for the guests of carrying out such activities
independently on a 5-point Likert-type scale.
In the second section they were asked to assess how much
the guests’ quality of life would improve by being
autonomous in such activities on a 5-point Likert-type
scale.
For care givers (both internal and external) a further
section was added (again on a 5-point Likert-type scale)
asking how much their “quality of caring” would improve
for each activity if assisted in their work by a technological
device.
The four areas of activity included the following items:
Autonomy of movement
1. Moving within own room
2. Moving between various rooms on the same floor
3. Going up and down the various floors in the building
4. Going to green spaces outside the building
5. Going around the village
Socialising
1. Speaking to other guests when desired
2. Speaking to people from outside the structure when
desired
3. Participating in internal ‘get-togethers’
4. Access to moments of recreation
5. Knowing what is happening in the world
Personal hygiene
1. Having a bath
2. Shaving/applying make-up
3. Going to the toilet
4. Dressing
5. Devoting oneself to interests
Physiological and health needs
1. Taking medicine
2. Informing someone if not well
3. Keeping fit
4. Eating and drinking
5. Controlling faeces and urine

4 Statistical analyses
Three analyses were performed on the data:
1. Comparison between GS and caregivers (IC+EC)
applying a t-test for independent samples. Possible
differences between people who receive and give care
were investigated independently of qualification
and/or service.
2. Comparison between GS, IC and EC: the group of
caregivers was subdivided into two homogeneous groups
reflecting the working distinctions within the RSA. An
analysis of variance (ANOVA) was applied to the groups,
with pairwise comparisons through a post-hoc test;
3. A third analysis considered only the two working
groups, IC vs. EC, through a t-test for independent
samples (a sketch of this analysis pipeline is given below).
5 Results
Comparison GS vs. care givers: significant differences
were found as regards the following sub-categories:
Importance of mobility; Improving Quality of Life: social
aspects, personal hygiene, physiological and health aspects.
The sub-category regarding the importance of mobility was
given higher scores by the GS; in the three sub-categories
regarding the improvement of quality of life, the highest
scores were given by the care givers.

Importance of mobility                                  P = 0.002    GS > care givers
Improving Quality of Life: social aspects               P = 0.0001   care givers > GS
Improving Quality of Life: personal hygiene             P = 0.0001   care givers > GS
Improving Quality of Life: physiological/health needs   P = 0.0001   care givers > GS
Comparison between GS, IC and EC: the score given by
the GS to the Importance of mobility is significantly higher
than that of the IC; IC and EC did not show a significant
difference. As regards the social aspects, both EC and GS
gave a higher score than the IC. As regards Improving the
Quality of Life, in the sub-categories "social aspects" and
"personal hygiene" both the IC and EC differ from the GS;
in the sub-category "health and physiological needs" the IC
gave a significantly higher evaluation than both the EC and GS.

Importance of mobility                                  P = 0.009    GS > IC = EC
Importance of social aspects                            P = 0.033    EC = GS > IC
Improving Quality of Life: social aspects               P = 0.0001   IC = EC > GS
Improving Quality of Life: personal hygiene             P = 0.001    IC = EC > GS
Improving Quality of Life: physiological/health needs   P = 0.0001   IC > EC = GS

In the comparison between EC and IC, the former gave a
significantly higher evaluation (P = 0.023) of the help
provided by the NT in the social aspects of the "quality of
caring".

6 Conclusion
The results obtained show that the instrument we devised
a) helps to pinpoint sensitive areas for possible
interventions; b) discriminates differing levels of interest
with respect to the roles of the actors in the nursing homes.
The GS consider the items regarding activities in the
mobility area as more relevant to their quality of life. This
supplies important information to those who have to
develop NT for homes for the elderly: such technologies
should primarily help the guests to carry out actions
connected to autonomy of movement.
In contrast, assistance devices having workers as a target
should concentrate on the other three areas (Social aspects,
Personal hygiene, and Health and physiological needs).
Improvement of these aspects, according to the care givers,
would have a more important impact on improving the
quality of life. In an RSA structure, NT applications have
to bear in mind the areas of intervention and the specific
objectives for different people.
The results obtained from the analysis distinguishing IC
from EC are of particular interest: the different roles (IC,
more closely connected to basic assistance, and EC,
oriented toward the relational/recreational) reveal a
different vision of the very essence of the support provided.
The EC regard the social aspects, which characterise their
work in the service, as more important to improving the
guests' quality of life than the IC do.
References
Cesta, A., Bahadori, S., Cortellessa, G., Grisetti, G., Giuliani, M.V., Iocchi, L., Leone, G.R., Nardi, D., Oddi, A., Pecora, F., Rasconi, R., Saggese, A., and Scopelliti, M. Forthcoming. The RoboCare project: Cognitive systems for the care of the elderly. Proceedings of the International Conference on Ageing, Disability and Independence, Washington, D.C.
Christensen, D.L., and Carp, F.M. 1987. PEQI-based environmental predictors of the residential satisfaction of older women. Journal of Environmental Psychology, 12, pp. 225-236.
Duffy, B.R. 2003. Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, pp. 177-190.
Fontaine, A. 1982. Loss of control in the institutionalized elderly. In P.A. Bell, J.D. Fischer, A. Baum and T.C. Greene (1990), Environmental Psychology. Holt, Rinehart and Winston.
Giuliani, M.V. 1991. Towards an analysis of mental representations of attachment to the home. The Journal of Architectural and Planning Research, 8 (2), pp. 133-146.
Küller, R. 1991. Environmental assessment from a neuropsychological perspective. In T. Gärling and G. Evans (eds.), Environment, cognition and action. New York: Oxford University Press, pp. 111-147.
Morini, A., and Biocca, L. 2001. Technologies to maintain people at home: Italian experiences. In International Congress on Technology and Aging, 12-14 September, Toronto, Proceedings, pp. 33-38.
Roush, W. 1996. Live long and prosper? Science, 273, pp. 42-46.
Schulz, R. 1976. Effects of control and predictability on the physical and psychological well-being of the institutionalized aged. Journal of Personality and Social Psychology, 33, pp. 563-573.
Slangen-de Kort, Y.A.W., van Wagenberg, A.F., and Midden, C.J.H. 1998. Adaptive problem solving processes of older persons in their homes. In J. Graafmans, V. Taipale, and N. Charness (eds.), Gerontechnology, a sustainable investment in the future. Studies in Health Technology and Informatics, Volume 48, pp. 340-346. Amsterdam: IOS.
Trabucchi, M., Brizioli, E., and Pesaresi, F. 2002. Residenze sanitarie per anziani. Bologna: Il Mulino.
Tuljapurkar, S., Li, N., and Boe, C. 2000. A universal pattern of mortality decline in the G7 countries. Nature, 405, pp. 789-792.
Weingarten, S.R., Henning, J.M., Badamgarav, E., Knight, K., Hasselblad, V., Gano, A., and Ofman, A. 2002. Interventions used in disease management programmes for patients with chronic illness: which ones work? Meta-analysis of published reports. BMJ, 325, pp. 913-921.
Human-Robot Interaction: How People View Domestic Robots
Maria Vittoria Giuliani, Massimiliano Scopelliti, Ferdinando Fornara, Edoardo Muffolini &
Anna Saggese
Institute of Cognitive Sciences and Technologies
Viale Marx, 15, 00137 Rome, Italy
[email protected], [email protected]
Abstract
This paper aims at identifying the main features
of people’s representations of domestic robots
and technology in general. Advances in
technology and robotics are making more
concrete the possibility of having domestic
robots performing tasks inside one’s home. To
date, a wide array of assistive robots is already
available for treatment and rehabilitation of
temporarily impaired people and for long term
care of people suffering from chronic illness. A
new challenge can be identified in the possibility
of implementing robotic devices which can go
beyond the sphere of mere health assistance and
help people in everyday activities at home. Far
from being a matter of functionality alone, the
issue of acceptability depends on the
psychological implications of human-robot
interaction.
The study explores attitudes, emotional response,
ideas and preferences expressed by people
towards technological devices and the idea of
having a robot at home. Results show that, in
general, the representation of domestic robots is
not well defined, probably because it can hardly
be associated with any kind of real experience.
People generally underestimate or overestimate
the tasks that robots can currently perform.
Nevertheless, some interesting age and gender
differences in terms of attitude, emotional
response and image of robots were found.
Implications are discussed.
1 Introduction
The decrease in childbirth and the great increase in life
expectancy represent typical demographic trends in
industrialized countries. These tendencies make more
concrete the issue regarding elderly people: the older
they grow, the more they are likely to need medical,
social and personal care services. As a consequence,
residential environments must meet specific requirements
in order to satisfy all the possible necessities of
such a population.
is undoubtedly a serious problem, because of their rising
costs: nursing home care services are probably the most
important example in terms of social costs, not only
because they are extremely expensive from an economic
point of view, but also because they determine a forced
relocation of elderly people. To be compelled to live in a
new place, completely depending on other people’s
assistance, has unquestionably a deep psychological
impact [Hormuth, 1990], probably more difficult to
assess than the economic expenditures, but definitely not
less significant. Recently, the possibility of retaining
people needing long term care in their own homes has
been debated. Elderly people undoubtedly prefer living
independently in a familiar domestic and residential
setting. However, a new series of problems arises, owing to
the shortage of residential infrastructures and facilities
and the lack of home service workers as compared to the
large number of people who need some kind of
assistance. In domestic settings, moreover, the
psychological impact of long-term home care assistance
is still far from well understood: assistance provided by
other people can exert a strong negative influence
upon final users, in that they may perceive a loss of
control over their living space and a reduced autonomy,
and they may look at home service caregivers as privacy
intruders. This condition may represent a menace to
self-esteem and to the integrity of personal identity.
An alternative solution to human home care assistance
may be represented by assistive technology. The
advances in technology and robotics make assistive
devices for domestic settings more and more available.
The term “assistive technology” refers to a broad array of
different devices which can accomplish several tasks at
home. First of all, they can be used in treatment and
rehabilitation of debilitated people who suffer from
accidents and disease which temporarily impair their
normal functioning. Second, they can be used as assistive
devices in long term care for people with chronic illness.
This type of electronic device makes it possible to
check a variety of health conditions. There are
monitoring machines capable of detecting common health
indicators, ranging from blood pressure to temperature
and weight, as well as more advanced technology which
can recognize severe breathing or heart problems and
prevent respiratory and cardiac arrest by immediately
warning a caregiver [Stewart and Kaufman, 1993];
similarly, in the event of a fall, detectors that “sense
acceleration, change of position and impact could
automatically report to a caregiver” [ibid., p. 4]. Finally,
they can be used to help ageing people with near-normal
functioning level to manage everyday tasks at home.
Smart sensors and intelligent units in telecare systems are
non-intrusive devices which can help elderly people to
maintain independence and promote perceived
self-efficacy. The sensors can provide a wide range of
domestic warnings, preventing risks such as
overflows in the bathroom and kitchen, gas leaks,
extreme room temperatures and the like [Doughty, 1999].
Despite advances in technology and robotics which make
available innovative devices for elderly people care, the
functional/practical advantage of their use (e. g. reducing
economic costs) is far from being the only issue.
Assistive technology often fails to be adopted or used
because of inadequate training in how to use it, because
all its potential benefits to people's independence are not
understood, or because its design appears cumbersome
[Elliot, 1991]. In addition, the psychological implications
of having a human caregiver or, on the other hand, a
technological device supporting one’s daily life and
activities at home are undoubtedly completely different.
Theoretical and empirical evidence is available to support
this statement. Doughty [1999] argues that technological
devices can provide continuous care, are objective in
their analysis, and never get stressed or tired. They can
foster elderly people’s independence and improve their
quality of life. Nonetheless, they are not able to
“substitute for the tender loving care that can be given
thanklessly by dedicated individuals” [ibid., p. 7].
Stewart and Kaufman [1993] claim that such activities as
eating and personal hygiene represent areas in which
technological assistance is preferred to human aid,
because perceived control over life is enhanced. Monk
and Baxter [2002] underline that one of the advantages of
the smart home can be the opportunity it gives people
experiencing loneliness to socialize through
video-conferencing facilities. However, an important issue
to be addressed has to do with improving people's
confidence in smart technology, since it often goes unused
because older people feel stigmatised: technological devices
symbolize a change in competence associated with
negative social judgement [Gitlin, 1995].
The care of elderly people is undoubtedly a complex
experience in which social, emotional and environmental
factors play a central role [Hirsch et al., 2000]. Beyond
the impact of product design on usability (too small
buttons, printed letters hard to read, etc.), an important
task is to understand what are the real needs as expressed
by final users. Ignoring this aspect would pose serious
difficulties for the adoption of potentially useful devices.
Gitlin [1995] tried to deepen the user’s perspective,
shedding light on the reasons why elderly people may
accept or reject the use of assistive technology.
Depending on personal factors, helping devices may be
viewed either as mechanisms by which to regain
independence or as threats to self-identity and social life,
regardless of users’ age and gender. Independence,
however, is not only a matter of personal ability, but
must be considered as the result of person-environment
fit [Steinfeld and Shea, 1993]. Houses are often full of
physical barriers which can hinder autonomy, but these
are frequently underestimated by elderly people. The
authors underline the importance of home modification in
order to improve people’s independence, but they also try
to evaluate the problem from a cost-benefit perspective:
elderly people often consider difficulties as conditions
they have to live with, hardly seeing good reasons for a
change at home. From a psychological point of view,
continuity in place experience is a key factor in
preserving personal identity [Breakwell, 1986].
Beyond technology related to domestic risk prevention
and aimed at increasing home livability, a greater
challenge consists in the implementation of a new
generation of technological devices which can interact
with people providing personal assistance in common
everyday activities. On this point, Mahoney [1997]
reviewed the market of rehabilitation robot products,
paying particular attention to a device designed to help
people to eat; Baltus et al. [2000] developed a project
aimed at the implementation of a mobile robot which can
accomplish different personal-aid tasks in many primary
functions, ranging from health care (data collection,
detection of health indicators, tele-medicine, etc.) to the
safeguarding of the elderly person and the monitoring of the
environment and other people inside the home. It is also a
useful device for reminders of health-related activities (e.g.,
taking a medicine) and, in general, of things to do.
Furthermore, it is able to interact with people, through a
real-time speech interface consisting of a speech
recognition and a speech synthesis system. Although its
vocabulary is not very large, it can provide information
related to different daily activities, such as the television
program and the weather forecast.
The possibility of using such a device is definitely a
further step in evolution as compared to the technology
which is commonly used in the smart home. However, it
probably implies a different kind of psychological
reaction in people, mainly because robots cannot simply be
considered assistive devices: they are also entities
to interact with in a more or less humanlike relationship.
To date, socially interactive robots can engage in a
variety of peer-to-peer relations, reacting to human
behavior or actively encouraging social interaction [Fong
and Nourbakhsh, 2003]. The interaction can also consist
of a continuous human-robot dialogue, called
“collaborative control” [Fong et al., 2003], through
which the robot asks questions to the human being in
order to get assistance and to solve problems.
Regardless of what activities robots can actually perform,
another important psychological question is how people
would react to them. Nass and Moon [2000] demonstrate
through a series of experimental studies that people may
unconsciously attribute social rules and gender
stereotypes to computers, and may engage in a humanlike
interaction which implies politeness, reciprocity and self-disclosure. Duffy [2003] argues that humans sometimes
tend to anthropomorphize inanimate entities, by
attributing cognitive or emotional states to them and by
interpreting their behaviour as if it were governed by
rational choices and consideration of their desires
[Dennett, 1996]. The author concludes that robots can be
more acceptable is they resemble human beings in some
way.
The idea of humanlike robots is constantly presented in
science fiction (movies, books, etc.), which is probably a
source of influence for people’s representation: it makes
available a twofold image of robots as helping assistants
or, on the other hand, as more frightening potential
competitors – or, even worse, overwhelming entities! The
idea of a domestic robot is probably much more difficult
to describe, because it can hardly be associated to any
kind of real experience. Nonetheless, the aim of
exploring people’s ideas about such advanced devices
would allow the designers to adjust its features to what
people need and expect. Khan [1998] developed a
questionnaire to assess people’s attitude towards getting
help from a domestic robot in everyday life activities at
home. Respondents proved to be rather positive towards
having a robot performing domestic tasks. In addition,
they felt safe about it and did not feel that it would
intrude on their privacy. However, they mostly liked having
a robot performing programmed activities rather than a
smart device which takes initiatives. Speech was found to
be the preferred way of communicating, showing again
the tendency to humanize the interaction with the robot.
The results of the study can be a useful agenda for robot
producers, who aim to fulfill the user’s needs and
expectations in terms of design and functionality
[Oestreicher et al., 1999].
Following this research line, we organized a two-stage study
in order to gain a better understanding of people's
representations of domestic robots. Focusing not only on
assistive and health-related tasks, but also on everyday
activities at home, we interviewed people of different age
groups, asking them questions partially extracted from
Khan's tool [1998]. In the pilot study [Cesta et al.,
forthcoming], we found that elderly people generally care
about a harmonious integration of robots in the
socio-physical environment of the home, asking for requirements
such as “the robot must behave in such a way as not to
frighten the pets” or “it should not be a cumbersome
obstacle”. The robot’s shape is described in a wide array of
different ways, ranging from a minimum to a maximum
degree of anthropomorphism (from “cylinder or
parallelepiped, with no reference to human beings” to “it
should be like a child, so that I will not see it as a stranger”).
Nonetheless, respondents often imagine the eventual
interaction with the robot by referring to social dynamics
which are typical of human beings (“I would like it to mind
its own business” or “I would speak to it as if it were a
person, I would never give it an order like a dictator
would”).
These data were used to develop a questionnaire, in order
to collect quantitative data and to compare the
representations of people at different stages of the life
span.
2 Method
2.1 Objectives
The aim of this study was to shed light on the ideas
people have about robots at home and the possibility of
using them for everyday domestic activities. Particular
attention is focused on age-related, gender and
educational-level differences.
2.2 Sample
We contacted a sample of 90 subjects, balanced by
gender and age group (18-25, 40-50, 65-75). People were
all urban residents, living in different neighborhoods in
Rome, and as heterogeneous as possible with respect to
educational level and familiarity with technology.
2.3 Tools
We developed a questionnaire centered on several topics
which emerged as highly important in the first study
[Cesta et al., forthcoming]. In addition to collecting
socio-demographic data, the questionnaire addresses the
following topics:
• Section one - Attitudes towards technology.
We proposed 13 5-point Likert-type items asking
people to what extent they agreed/disagreed with
sentences (from 0 = “I completely disagree” to 4 = “I
completely agree”) regarding features, vantages and
disadvantages of modern technology.
• Section two - Image of robots.
We proposed 9 categorical items focusing on people’s
preferences and expectancies about robot’s shape,
size, color, cover material, speed, etc.
• Section three – Human-robot interaction.
We proposed 15 categorical items focusing on
people’s preferences and expectancies about robot’s
59
personification (given name, etc.) and modalities of
human-robot communication and interaction.
• Section four – Activities at home.
We proposed different categorical items focusing on
the perceived capability for a robot to perform 16
everyday activities at home. We asked people to
check each activity as “impossible”, “probably in the
future”, or “currently performed”.
• Section five – Emotional response to robots.
We proposed 16 5-point Likert-type items focusing on
emotional response to domestic robots. People were
asked to check each adjective from 0 = “not at all” to
4 = “very much”.
2.4 Analyses
A Correspondence Analysis was carried out in order to
evaluate the image of robots and human-robot interaction
as described by different groups of respondents. We also
compared laypeople’s and experts’ evaluations of the
actual performability of different activities by robots.
Two separate Factor Analyses were then performed in
order to identify the dimensions underlying A) people’s
attitudes towards technological devices and B) emotional
response to domestic robots. The factors extracted were
finally used as dependent variables in a one-way
ANOVA, in order to assess the differences accounted for
by socio-demographic variables.
3 Results
The Correspondence Analysis showed very interesting
results about people’s representation of domestic robots.
Two main axes emerged as highly meaningful: the first
dimension was labelled “expressed/not expressed
preferences”; the second dimension was called
“human/not human”. Five different representations of
robots can be identified throughout the semantic space
defined by the two axes. The first cluster is mainly
characterized by indifference, as to both the robot’s
shape and human-robot interaction. A second cluster can
be identified in the semantic space close to the “non
human” pole of the second axis: distinctive features of
this representation were a “cold” image of robot and
human-robot interaction (made of plastic, standard
model, no given name, non vocal interaction). The third
cluster describes a more lively representation of robots,
with a more precise size (smaller than a human being),
bright colors, a humanlike voice, and capable of
understanding gestural communication. The fourth
cluster emerges as strongly oriented in the humanlike
direction: the robot is preferred as having a female name
and young voice, its features can be personalized and
human-robot interaction is based on language. To the
edge of the semantic space, the fifth cluster of
representations is even more characterized in terms of
anthropomorphism: robots are as big as humans, are
covered by a skin-like material, are assigned a given
name and have a male adult voice. Adults were strongly
associated with the first cluster, elderly people with the
third, young people with the fourth and, to a lesser
extent, with the fifth. No gender or educational level
differences were found.
As to the comparison between respondents’ and experts’
evaluations of actual capabilities of robots, both at the
moment and in the near future, we found that people's
representation of robots is somewhat unrealistic. On the one
hand, activities that imply objects manipulation (such as
dusting, cleaning windows, making beds, laying and
clearing the table, using the washing-machine and finding
objects) are perceived as much easier to implement by
robotic technology than they are in fact. On the other
hand, people moderately underestimate the actual
implementation of such activities as entertainment and
home safety control, which imply cognitive tasks and are
indeed already available at the moment. Nonetheless,
they accurately perceive the current availability of
robotic devices that are able to remind people of engagements
and prompt things to do. They are also realistic about the
difficulties of implementing robots that can help people
to cook and to cut their nails.
The first Factor Analysis showed that three different
dimensions can well synthesize people's attitude towards
technological devices. The first dimension refers to the
advantages provided by technology: they help people not
to get tired and not to waste time, they allow them to
perform different tasks and to be independent of the
others, etc. We labelled this factor “Benefits of
Technology”. It explains 25.3% of total variance. The
second dimension refers to perceived difficulties of use
and mistrust: technology is not easy to use, instructions
are hard to understand, and people are not mentally
stimulated by it and do not trust it. We labelled this
factor “Difficult to Use/Mistrust”. It explains 14.4% of
total variance. The third dimension refers to a general
ambivalence towards technology, in that sophisticated
electronic devices are expensive and break down too
often, even though they allow people to do things by
themselves. We labelled this factor “Ambivalence”. It
explains 10.9% of total variance.
In the second Factor Analysis we extracted two different
dimensions which refer to emotional response to
domestic robots. The first dimension refers to a negative
reaction, in that robots are perceived as dangerous, scary,
potentially out of control, cumbersome, etc. We labelled
this factor "Negative Feelings". It explains 42.4% of total
variance. The second dimension refers to a positive
reaction, which implies the representation of robots as
lively, dynamic, interesting, stimulating, etc. entities. We
labelled this factor “Positive Feelings”. It explains 17.9%
of total variance.
The ANOVA performed on the attitude factors showed a
significant effect of age group on the variables "Benefits
of Technology" (F(2, 87) = 3.06, p < .05) and "Difficult to
Use/Mistrust" (F(2, 87) = 15.01, p < .001). The Bonferroni
test (p < .05) showed that older people recognize the
benefits of technology significantly more than young
people, but they also perceive difficulty of use and
mistrust significantly more than adults and young people.
The effect of gender was also significant, both on the
variable "Difficult to Use/Mistrust" (F(1, 88) = 12.49,
p < .01) and on the variable "Ambivalence" (F(1, 88) = 4.69,
p < .05): females revealed a stronger "Difficult to
Use/Mistrust" attitude towards technology and a lower
"Ambivalence" than males.
We carried out another ANOVA on the emotional response
factors. A significant effect of age group was found on
the variable "Positive Feelings" (F(2, 87) = 4.33, p < .05).
Post hoc analysis showed that young people express a
more positive emotional response than older people
(Bonferroni, p < .05). The effect of educational level
was significant on the variable "Negative Feelings" (F(2,
87) = 4.86, p < .05): people with a lower educational level
expressed more negative feelings than people who
attended high school and people who took a degree
(Bonferroni, p < .05).
4 Discussion
The general representation of domestic robots which
emerged from this study seems to be somewhat
ambiguous. Results show that people have different
coexisting and partially overlapping ideas. This is
probably due to the lack of concrete experience with such
machines and the strong influence of science fiction,
which makes available a wide array of different
exemplars. As a consequence, people sometimes
underestimate or overestimate what activities robots can
currently perform or are going to perform in the near
future.
The general idea on technology is rather positive.
However, when speaking about a domestic robot, which
is a more specific device, the analyses become more
complex, and the global evaluation becomes much more
multi-faceted. On the one hand, young people have a
strong familiarity with technology and a friendly idea of
robots. In their representation, domestic robots are not
useful devices which can help people to do tasks at home,
but humanlike entities to interact with in leisure
situations: in fact, young people score lower on measures
of “benefits of technology” and higher on “positive
feelings” than elderly people. On the other hand, elderly
people conceive robots as merely useful devices at home,
which can help them not to waste time and not to get
tired, as shown by the analysis of variance on the
“benefit of technology” dimension. Although technology
can be useful, older people show a slight mistrust
towards machines which are likely to be hard to
understand and unsafe to some extent. Adults seem to be
by far the least polarized group in expressing ideas,
preferences, attitudes and emotional responses to the
possibility of having a robot at home. The general
representation of robots is highly characterized in terms
of “absence of preferences” as to its physical features and
type of human-robot interaction. In addition, they often
show mid-point scores on attitude and emotional
dimensions extracted from the Factor Analyses. This could
depend on a weaker involvement with technology, in that
adults probably undervalue both the stimulating/exciting
side (appreciated by young people) and the
practical/functional side (appreciated by elderly people)
of having such innovative devices at home: they
undoubtedly have a reduced amount of spare time to spend
on leisure and are also more likely to do things for
themselves rather than relying on technology. Gender and
educational level differences were shown to be far less
important.
5 Conclusions
This study represents an attempt to understand the
potential impact of the introduction of domestic robots in
people’s everyday life. The analysis of the general
attitudes towards technology and the specific
representations of such futuristic devices seemed to be a
good starting point to identify some basic acceptability
requirements which would provide robot designers and
producers with useful guidelines for further innovations.
Despite the huge advances in technology and robotics,
the psychological implications of human-robot
interactions still remain a scarcely explored field. An
inquiry into people’s representations of domestic robots
appears to be a suitable way for bridging the gap between
the implementation possibilities granted by new
technologies and the needs of final users.
References
[Baltus et al., 2000] Gregory Baltus, Dieter Fox,
Francine Gemperle, Jennifer Goetz, Tad Hirsch, Dimitris
Margaritis, Mike Montemerlo, Joelle Pineau, Nicholas
Roy, Jamie Schulte, Sebastian Thrun. Towards personal
service robots for the elderly. Workshop on Interactive
Robots and Entertainment (WIRE 2000), 2000.
[Breakwell, 1986] Glynis Breakwell. Coping with
threatened identity. Methuen, London, 1986.
[Cesta et al., forthcoming] Amedeo Cesta, Shahram
Bahadori, Gabriella Cortellessa, Giorgio Grisetti, M.
Vittoria Giuliani, Luca Iocchi, G. Riccardo Leone,
Daniele Nardi, Angelo Oddi, Federico Pecora, Riccardo
Rasconi, Anna Saggese and Massimiliano Scopelliti. The
RoboCare project. Cognitive systems for the care of the
elderly. Proceedings of the International Conference on
Ageing, Disability and Independence (forthcoming).
Washington, D.C.
[Dennett 1996] Daniel C. Dennett. Kinds of Minds. Basic
Books, New York, 1996.
[Doughty, 1999] Kevin Doughty. Can a computer be a
carer? Paper presented at "A Meeting of Minds", 9th
Alzheimer Europe Meeting & Alzheimer’s Disease
Society Conference, London, UK, 30 June – 2 July 1999.
[Duffy 2003] Brian R. Duffy. Anthropomorphism and the
social robot. Robotics and Autonomous Systems, 42: 177-190, 2003.
[Elliot, 1991] Robert Elliot. Assistive technology for the
frail elderly: An introduction and overview. NTIS, 1991.
[Fong and Nourbakhsh, 2003] Terry Fong and Illah
Nourbakhsh. Socially interactive robots. Robotics and
Autonomous Systems, 42: 139-141, 2003.
[Gitlin 1995] Laura N. Gitlin. Why older people accept
or reject assistive technology. Generations, Journal of
the American Society on Ageing 19(1): 41-46, Spring
1995.
[Hirsch et al., 2000] Tad Hirsch, Jodi Forlizzi, Elaine
Hyder, Jennifer Goetz, Jacey Stroback, Chris Kurtz. The
ELDer Project: Social and emotional factors in the design
of eldercare technologies. Proceedings of the Conference
on Universal Usability, 72-80, Arlington, Virginia, 2000.
[Hormuth, 1990] Stefan E. Hormuth. The ecology of self:
Relocation and self-concept change. Cambridge,
Cambridge University Press, 1990.
[Khan 1998] Zayera Khan. Attitude towards intelligent
service robots. IpLab, Nada, Royal Institute of
Technology, 1998.
[Mahoney 1997] Richard M. Mahoney. Robotic products
for rehabilitation: Status and strategy. Proceedings of
ICORR ’97 - International Conference on Rehabilitation
Robotics, 12-22, Bath, UK, 1997.
[Monk and Baxter 2002] Andrew F. Monk and Gordon
Baxter. Would you trust a computer to run your home?
Dependability issues in smart homes for older adults.
Proceedings of the 16th British HCI Conference, London,
British HCI Group, Sept. 2002.
[Nass and Moon 2000] Clifford Nass and Youngme
Moon. Machines and mindlessness: Social response to
computers. Journal of Social Issues, 56(1): 81-103, 2000.
[Oestreicher et al., 1999] Lars Oestreicher, Helge
Hüttenrauch and Kerstin Severinsson-Eklund. Where are
you going little robot? – Prospects on human-robot
interaction. Position paper for the CHI 99 Basic
Research Symposium, IpLab, Nada, Royal Institute of
Technology, 1999.
[Steinfeld and Shea, 1993] Edward Steinfeld and Scott
Shea. Enabling home environment: Identifying barriers to
independence. Technology and Disability, 2(4): 69-79,
1993.
[Stewart and Kaufman, 1993] Leigh M. Stewart and
Stephen B. Kaufman. High-Tech home care: Electronic
devices with implications for the design of living
environments. In American Association of Retired
Persons and Stein Gerontechnological Institute (eds.),
Life-span Design of Residential Environments for an
Ageing Population, 57-66, Washington DC, 1993.
A Cognitive System for Human Interaction with a Robotic Hand
I. Infantino(1), A. Chella(1,2), H. Džindo(1), I. Macaluso(1)
(1) ICAR-CNR, Sezione di Palermo, Viale delle Scienze, edif. 11, 90128 Palermo, Italy
(infantino, dzindo, macaluso)@pa.icar.cnr.it
(2) DINFO, Università di Palermo, Viale delle Scienze, 90128 Palermo, Italy
[email protected]
Abstract
The paper deals with a cognitive architecture for posture
learning of an anthropomorphic robotic hand. Our approach
aims to allow the robotic system to perform complex
perceptual operations, to interact with a human user and to
integrate the perceptions into a cognitive representation of
the scene and the observed actions. The anthropomorphic
robotic hand imitates the gestures acquired by the vision
system in order to learn meaningful movements, to build its
knowledge through different conceptual spaces and to
perform complex interactions with the human operator.
1 Introduction
The control of robotic systems has reached a high level of
precision and accuracy, but often the high complexity and
task specificity are limiting factors for large-scale use.
Today, robots are required to be both "intelligent" and
“easy to use”, allowing a natural and useful interaction
with human operators and users. A promising approach
towards simple robot programming is the “learning by
imitation” paradigm. Many working systems have been
proposed in the literature [2], [9], [11], [12], [16].
However, these systems, although effective, are generally
based on movement recordings obtained with gloves,
special equipment or simplified vision; moreover,
the imitation capabilities are sometimes limited to simple
mimicking of the teacher's movements. We claim that, in
order to have a system able to learn by imitation, the
system itself must be capable of deeply
understanding the perceived actions to be imitated. Therefore,
the system must be able to build an inner conceptual
representation of the learned actions. In this paper, we
present an architecture based on learning by imitation that
performs visual interaction between a human user
showing his moving hand and an anthropomorphic robotic
hand (a DIST-Hand built by GraalTech, Genova, Italy).
The core of the architecture is a rich inner conceptual
level [7] where the representation of perceptual data takes
place starting from a real time unconstrained vision system
[3], [4], [5], [10]. Our long term project goal is to build a
system that may help and collaborate with elderly and
impaired persons in everyday life (e.g. a system that helps
to pick up an object or to perform some movements). The
current system is equipped with a stereo video camera that
acquires the movements of the hand of the user, in order to
perform a direct visual control of the robot (by movement
imitation) or to interact using a given sign formalism. The
system takes as input a sequence of images corresponding
to subsequent phases of the evolution of the scene (the
movements of the human hand and their effects on the
whole scene), and it generates an output as a suitable
action performed by robotic hand, along with the
description of the scene. Such a symbolic description may
be employed to perform high-level inferences, e.g. those
needed to generate complex long-range plans of
interaction, or to perform reasoning about the user
operations. In order to test our system and to have
quantitative data on human-system interaction, we consider
a measurable experimental setup in which the user plays
the Rock-Paper-Scissors game. The system's task in this
setup is to understand the strategy of the human player.
2 System architecture
The implemented architecture is organized in three
computational areas. Fig. 1 schematically shows the
relations among them. The subconceptual area is
concerned with the low-level processing of perceptual data
coming from the sensors. We call it subconceptual because
here information is not yet organized in terms of
conceptual structures and categories. The subconceptual
area includes a 3D model of the perceived scenes. Even if
such a representation cannot be considered "low-level"
from the point of view of artificial vision, it still
remains below the level of conceptual categorization. In
the linguistic area, representation and processing are based
on the formalism of probabilistic reasoning with
Bayesian networks [13]. In the conceptual area, the data
coming from the subconceptual area are organized into
conceptual categories, which are still independent of
any linguistic characterization. The rest of the paper is
organized as follows. Section 3 provides a detailed
description of the subconceptual area. Section 4
summarizes the notion of conceptual space. Section 5
presents the linguistic area of the architecture. Section 6
describes a simple application that involves conceptual
space representation and reasoning: the human user plays
the Rock-Paper-Scissors game against the system.
Conceptual area
Action Space
Linguistic
area
(Bayesian
k)
Situation Space
Perceptual
Space
Sensory Data
Sub-
Left Image
Right Image
Image
Preprocessing
Module
I*
xpred
Fingertip
Extraction
Module
xreal
3D Coordinate
Computation
Module
(a)
X
Kalman Filter
Module
Inverse
Kinematics
Module
Initial estimates
(b)
Fig. 1. (a) The three areas of the conceptual representation and the
relations among them. (b) The posture reconstruction system and the
robotic hand.
3
The subconceptual area
As previously stated, the task of the implemented
architecture is to deeply understand the postures and
movements of the human hand. To this aim, we need an
exact 3D reconstruction of the hand in order to determine its
orientation and its position relative to other body parts.
Different methods have been proposed to capture human
hand motion. Rehg and Kanade [14] introduced the use of
a highly articulated 3D hand model for the tracking of a
human hand. Heap and Hogg [8] used a deformable 3D
hand shape model. The hand is modeled as a surface mesh
which is constructed via PCA from training examples. In
[6], Cipolla and Mendonca presented a stereo hand tracking
system using a 2D model deformable by affine
transformations. Our method uses fingertips as features,
considering each finger except the thumb as a planar
manipulator; starting from this hypothesis, we compute the
inverse kinematics to control the robotic hand. The human
hand is placed in front of the stereo pair. A black panel
is used as background and illumination is controlled.
Fingertips and wrist are extracted at each frame and their
3D coordinates are computed by a standard triangulation
algorithm. Given a hand model, inverse kinematics is used
to compute the joint angles which provide the inputs to the
64
Fig.2 The system architecture: two parallel set of procedures perform
separately the estimation of the fingertips on right and left images.
3.1
Fingertip localization and tracking
This section deals with the localization of the fingertips in
long image sequences. In order to achieve real-time
performance, state-space estimation techniques are used to
reduce the search space from the full camera view to a
smaller search window centered on the prediction of the
future position of the fingertips. In the following, a novel
algorithm is proposed to detect the fingertip position inside
the search window.
Fingertip Tracking. Here we are concerned with the
prediction of feature points in long image sequences. Since the
motion of the observed scene is continuous, we
should be able to make predictions on the motion of the
image points, at any time, on the basis of their previous
trajectories. The Kalman filter [18] is an ideal tool for such
tracking problems.
The motion of a fingertip at each time instance (frame) can
be characterized by its position and velocity. For a feature
point, define the state vector at time tk as
x(k) = [r(k)  c(k)  vr(k)  vc(k)]^T
where (r(k), c(k)) represents the fingertip pixel position at
the kth frame, and (vr(k), vc(k)) its velocity in the r and c
directions. Under the assumption that a fingertip moves
with constant translation over the long sequence and a
sufficiently small sampling interval (T), the plant equation
for the recursive tracking algorithm can be written as
x(k+1) = A x(k) + w(k)
where

    A = | 1  0  T  0 |
        | 0  1  0  T |
        | 0  0  1  0 |
        | 0  0  0  1 |
and the plant noise w(k) is assumed to be zero mean with
covariance matrix Q(k). For each fingertip, the
measurement used for the corresponding recursive tracking
filter at time tk is the image plane coordinates of the
corresponding point in the kth image. Thus the
measurement is related to the state vector as:
z(k) = H x(k) + v(k)
where

    H = | 1  0  0  0 |
        | 0  1  0  0 |
and the measurement noise v(k) is assumed to be zero
mean with covariance matrix R(k). After the plant and
measurement equations have been formulated, the Kalman
filter can be applied to recursively estimate the motion
between two consecutive frames and track the feature
point. Assuming that the trajectory of a feature point has
been established up to the kth frame of a sequence, we
describe procedures for estimating the inter-frame motion
between the kth and (k+1)th frames. First, for a fingertip,
the predicted location of its corresponding point z(k+1|k)
is computed by the KF. Subsequently, a window centered
at z(k+1|k) is extracted from the (k+1)th image and the
fingertip extraction algorithm is applied to the window to
identify salient feature points as described in the next
section. The same process is repeated for each fingertip.
The initial fingertips and wrist coordinates are computed
by assuming the hand is placed as in Fig. 3a. Under this
hypothesis it is straightforward to compute the fingertip
coordinates.
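To make the recursive tracker concrete, the following sketch implements the predict/update cycle under the plant and measurement models above (assuming NumPy; the sampling interval and the covariances Q and R are illustrative assumptions, not values from the paper):

import numpy as np

T = 1.0 / 25.0                       # sampling interval, assuming a 25 fps camera
A = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)  # constant-velocity plant matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)  # measure the (r, c) pixel position only
Q = np.eye(4) * 0.01                 # plant noise covariance (assumed)
R = np.eye(2) * 4.0                  # measurement noise covariance (assumed)

def predict(x, P):
    # H @ x_pred is z(k+1|k), the centre of the next search window
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z):
    # correct the state with the fingertip position measured in the window
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x, P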
Fingertip Extraction. This subsection describes a novel
algorithm to perform robust and fast localization of the
fingertips. Once a search window is determined for each
fingertip, the feature point is searched for within that
window. The overall shape of a human fingertip can be
approximated by a semicircle. Based on this observation,
fingertips are searched for by template matching with a set
of circular templates, as shown in Fig. 3c. Ideally, the size
of the templates should differ for different fingers and
different users. However, our experiments showed that a
fixed window size works well for various users. In our
implementation we choose, as template for normalized
correlation, a square of 20x20 pixels containing a circle
whose radius is 10 pixels. In order to find meaningful data
we first apply an adaptive threshold to the response images
to obtain distinct regions. For each correlation we select the
points with the highest matching scores (usually 2-3)
which lie in unconnected regions of the thresholded
image.
Fig. 3. (a) Initial hand configuration used to calculate the first estimation of the fingertip positions; (b) three points (yellow squares) with high correlation response form a group and provide a fingertip candidate, computed as the centroid of the group (red arrow); (c) circular templates used to search for fingertips.
Subsequently, nearby responses of different correlations are
grouped. We select the groups with the highest
number of associated points, and the centroids of such
groups are used to compute the fingertip candidates. In
order to calculate those candidates, we must combine the
information provided by each point in the group. However,
only the points corresponding to semi-circle correlation
responses are used in this stage.
Each point is associated with a vector whose direction
depends on the orientation of the corresponding semicircle (i.e. right: 0°, up: 90°, and so on) and whose modulus is
given by the correlation matching score. The direction in
which the fingertip candidate should be searched is
computed as the angle formed by the vector sum of such
vectors (see Fig. 3b). Therefore, the fingertip candidate
coordinate for each group lies on the line through the
previously computed centroid whose direction is given
by the above procedure. We use techniques of data
association based on the previous measurements and
predicted state to choose the correct fingertip among the
selected candidates.
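The extraction step can be sketched as follows (a reconstruction assuming OpenCV; the way the semicircular templates are drawn, the set of orientations and the fixed threshold are our assumptions, whereas the paper uses an adaptive threshold):

import cv2
import numpy as np

def semicircle_template(angle_deg, size=20, radius=10):
    # 20x20 template containing the half of a radius-10 circle facing angle_deg
    t = np.zeros((size, size), np.uint8)
    cv2.ellipse(t, (size // 2, size // 2), (radius - 1, radius - 1),
                0, angle_deg - 90, angle_deg + 90, 255, 2)
    return t

def fingertip_candidates(window, angles=(0, 90, 180, 270), thresh=0.6):
    # normalized correlation of every template inside the search window,
    # followed by thresholding; returns (col, row, score, angle) peaks
    peaks = []
    for a in angles:
        resp = cv2.matchTemplate(window, semicircle_template(a),
                                 cv2.TM_CCORR_NORMED)
        ys, xs = np.where(resp > thresh)
        peaks.extend((int(x), int(y), float(resp[y, x]), a)
                     for x, y in zip(xs, ys))
    return peaks

The peaks would then be grouped by proximity and combined through the vector sum described above to obtain one candidate per group.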
Three-dimensional coordinate computation. Given the
camera matrices it is possible to compute the fundamental
(or essential) matrix of the stereo rig [19]. Epipolar
geometry is used to find correspondences between two
views. In order to reduce computational efforts, matching
features are searched on the epipolar line in the search
window of the second image provided by KF by means of
normalized correlation. 3D fingertip coordinates are
computed by a standard triangulation algorithm.
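Assuming calibrated 3x4 projection matrices for the two cameras, the triangulation step might look like this sketch (OpenCV's linear triangulation is used here as a stand-in for the standard algorithm):

import cv2
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    # x_left, x_right: matched fingertip pixels (2,) in the two views;
    # returns the 3D point in the calibration reference frame
    X_h = cv2.triangulatePoints(P_left, P_right,
                                np.asarray(x_left, float).reshape(2, 1),
                                np.asarray(x_right, float).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()        # homogeneous -> Euclidean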
3.2
Inverse Kinematics
The coordinates of the fingertips and the wrist are used to
solve the inverse kinematics problem for the joint angles,
provided a kinematic model of the human hand. The model
adopted [11] is designed to remain simple enough for
inverse kinematics to be done in real-time, while still
respecting human hand capabilities.
The whole hand is modeled by a 27 dof skeleton whose
location is given by the wrist’s middle point and whose
orientation is that of the palm. The fingers (II-V) have 4
dof, namely one in abduction/adduction and three in
flexion/extension, represented as the state vector [q1, q2, q3,
q4]^T; the thumb (I) has 5 dof. We take into account static
and dynamic hand constraints [17] which allow us to
obtain meaningful configurations and reduce the number
of DOF of the model to 15. In order to solve the inverse
kinematics problem, calibration of the human hand has to
be performed to adapt to various users. This process
must be done off-line using an interactive procedure.
We use 6 frames to represent finger kinematics: Oc (the
camera frame), Oh (the hand frame, whose origin is the
wrist's middle point and whose axes are defined by the
palm orientation, with the z axis perpendicular to the palm
plane) and Ofr(I-V) (the finger-root frames, whose origin is
at the first flexion/extension joint). Under the assumption that
the hand orientation does not evolve in time, we can
compute the fixed transformation matrices Thfr(I-V) during
the initialization phase. In this way, it is possible to
express the coordinates of each fingertip in the finger-root
frame and solve for the joint state vector. For each finger
except the thumb we compute the joint variable q1(II-V) as the angle between the projection of the fingertip on
the palm plane and the x axis. By rotating the finger-root
frame around the z axis by the angle q1(II-V), each finger (II-V)
can be represented as a 3-link planar manipulator. The
joint angles q2, q3 and q4 are then found by solving the
inverse kinematics problem for such a manipulator. The
thumb joint angles are found directly by solving the
inverse kinematics problem. Under the assumption of a high
frame rate, the joint angles vary little from
time k-1 to time k. Therefore, given the fingertip
coordinates xi(k), i=I...V, each state vector qi(k) may be
computed by minimizing its distance from the previous
state vector qi(k-1), subject to
the forward kinematics equation xi(k)=k[qi(k)] and the hand
constraints, namely:
Minimize ||qi(k) – qi(k-1)||
subject to:
1. xi(k)=k[qi(k)];
2. static and dynamic finger constraints.
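This minimization can be sketched with a generic constrained solver. In the illustration below, the phalanx lengths, the joint bounds standing in for the static constraints, and the use of SciPy are all assumptions; dynamic constraints could be added as further equality constraints in the same way:

import numpy as np
from scipy.optimize import minimize

L = np.array([0.045, 0.025, 0.020])    # phalanx lengths in metres (assumed)

def fk_planar(q):
    # forward kinematics k[q] of a 3-link planar finger (q2, q3, q4)
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def solve_finger_ik(q_prev, tip_xy):
    # minimize ||q(k) - q(k-1)|| subject to fk(q(k)) = x(k) and joint limits
    res = minimize(lambda q: np.sum((q - q_prev) ** 2),
                   x0=q_prev,
                   bounds=[(0.0, np.pi / 2)] * 3,     # static limits (assumed)
                   constraints={'type': 'eq',
                                'fun': lambda q: fk_planar(q) - tip_xy})
    return res.x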
The robustness of this algorithm has been tested using
various finger configurations: even in the more complicated
case of two very close fingers against the palm background,
the system follows the correct feature. The addition of artificial
noise or the use of a lower image scale does not degrade
the performance of the algorithm.
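Putting the sketches of this section together, one per-frame iteration of the Fig. 2 pipeline could be driven as follows; locate_fingertip and to_finger_root are hypothetical stand-ins for the search-window matching of Sec. 3.1 and the finger-root frame change described above:

import numpy as np

def locate_fingertip(image, pred_rc):
    # placeholder for Sec. 3.1: template matching inside the window centred
    # on the prediction; here it simply returns the predicted position
    return np.asarray(pred_rc, float)

def to_finger_root(tip_3d, T_hfr):
    # placeholder for the fixed transformation into the finger-root frame
    p = T_hfr @ np.append(tip_3d, 1.0)
    return p[:2]

def process_frame(left, right, kf, P_left, P_right, q_prev, T_hfr):
    q = {}
    for f in kf:                                    # one filter per fingertip
        x_pred, P_pred = predict(*kf[f])            # KF prediction
        zL = locate_fingertip(left, x_pred[:2])     # left-image measurement
        zR = locate_fingertip(right, x_pred[:2])    # right-image measurement
        kf[f] = update(x_pred, P_pred, zL)          # KF correction
        tip = triangulate(P_left, P_right, zL, zR)  # 3D reconstruction
        q[f] = solve_finger_ik(q_prev[f], to_finger_root(tip, T_hfr[f]))
    return q                                        # joint angles for the hand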
4
The conceptual area
The aim of the architecture is to integrate visual perception
with knowledge representation, with particular emphasis
on man–machine interaction. Our proposal is based on the
hypothesis that a principled integration of the approaches
of artificial vision and of symbolic knowledge requires the
introduction of an intermediate representation between
these two levels [3]. Such a role is played by a conceptual
space, according to the approach proposed by Gärdenfors
[7].
4.1
Perceptual space (PS)
The perceptual space PS is part of the conceptual area of
the architecture, and it is a conceptual space in the sense of
Gärdenfors [7]; in particular, it is a metric space whose
dimensions are strictly related to the quantities
processed in the subconceptual area. By analogy with the
term pixel, we call knoxel a point in a PS. A knoxel is an
epistemologically primitive element at the considered level
of analysis. The basic blocks of our representations in PS
are geometric primitives (the joint angle values, or
superquadric parameters) describing the acquired scene. In
order to account for the dynamic aspects of actions, we
adopt a perceptual space PS in which each point
represents a whole simple motion. In this sense, the space
is intrinsically dynamic since the generic motion of an
object is represented in its wholeness, rather than as a
sequence of single, static frames. The decision of which
kind of motion can be considered simple is not
straightforward, and it is strictly related to the problem of
motion segmentation. In line with the approach described
in [5], we consider a simple motion as a motion interval
between two subsequent generic discontinuities in the
motion parameters. In the static PS mentioned above, a
moving component had to be represented as a set of points
corresponding to subsequent instants of time. This solution
does not capture the motion in its wholeness. The
implemented alternative has been previously investigated
in [5]. We adopt as the conceptual space for the
representation of dynamic scenes a dynamic space which
can be seen as an “explosion” of the static space. In this
space, each axis is split into a number of new axes, each one
corresponding to a harmonic component.
Fig. 4 is an evocative, pictorial description of this
approach. In the leftmost part of the figure, representing
the static PS, each axis corresponds to a 3D geometric
parameter; in the rightmost part of the figure, representing
the dynamic PS, each group of axes corresponds to the
harmonics of the corresponding geometric parameter. Also
in this case, a knoxel is a point in the conceptual space,
and it corresponds to the simple motion of a geometric
component.
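One concrete reading of this “explosion” (our interpretation, with an assumed number of harmonics) is to take, for each geometric parameter, the leading Fourier coefficients of its trajectory over a simple motion:

import numpy as np

def harmonic_group(param_traj, n_harmonics=4):
    # harmonic components of one geometric parameter over a simple motion:
    # one group of axes of the dynamic PS
    coeffs = np.fft.rfft(param_traj - np.mean(param_traj))
    return coeffs[:n_harmonics]

def knoxel(all_param_trajs, n_harmonics=4):
    # a point in the dynamic PS: the stacked harmonic groups of every
    # geometric parameter describing the moving component
    return np.concatenate([harmonic_group(t, n_harmonics)
                           for t in all_param_trajs])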
4.2
Situation Space (SS)
A simple motion of a component corresponds to a knoxel
in PS. Objects may be approximated by one or more
geometric primitives. Let us now consider a scene made up
of the human hand, and consider the index finger opening, as in Fig.
5. We call this kind of scene a Situation. It may be
represented in PS by the set of the knoxels corresponding
to the simple motions of its components, as in Fig. 5,
where each knoxel corresponds to a phalanx. In this case,
each knoxel corresponding to a moving phalanx of the index
has non-zero harmonic components, while the other
knoxels correspond to the phalanxes of the still fingers (the
figure depicts only some of them). Each point in the
Situation Space (SS) is a collection of points in PS. SS is a
pictorial representation of the global perceived situation.
Fig. 4. Each point of the PS represents a whole simple motion.
Fig. 5. S1 is the collection of the points in PS describing the finger movement.
Fig. 6. A1 is the collection of the points in SS describing the hand action.
4.3
Action Space (AS)
In a Situation, the motions of all of the components in the
scene occur simultaneously, i.e. they correspond to a single
configuration of knoxels in the conceptual space. To
consider a composition of several motions arranged
according to a temporal sequence, we introduce the notion
of Action, in the sense of Allen [1]. An Action
corresponds to a “scattering” of knoxels in the conceptual
space from one Situation to another Situation. We assume
that the situations within an action are separated by
instantaneous events: in the transition between two
subsequent configurations, a “scattering” of at least one
knoxel occurs. This corresponds to a discontinuity in time
that is associated with an instantaneous event. Fig. 6 shows a
simple Action performed by the human hand: the figure
shows a human hand while opening. This Action may be
represented in CS (Fig. 6) as a double scattering of the
knoxels representing the phalanxes (the figure depicts only
one of them); the knoxel representing the palm remains
unchanged. Each point in the Action Space (AS) is a
collection of situations, i.e., of points in SS, and it
represents a hand action. AS is a pictorial representation of
the action performed by the human hand.
5
Linguistic area
Long term declarative knowledge is stored in the linguistic
area. The more “abstract” forms of reasoning, which are less
perceptually constrained, are likely to be performed mainly
within this area. The elements of the linguistic area are
terms that have the role of summarizing the situations and
actions represented in the conceptual spaces previously
described, i.e., linguistic terms are anchored to the
structures in the conceptual spaces [4]. The symbolic
inferences in the linguistic area, aimed at planning and decision
making, are performed by suitable Bayesian networks. At
a given instant, the chosen decision depends, with a given
probability, on past events and executed actions. The
interactions between the human user and the robotic system,
initially random, are used to update the probability tables
of the network in order to learn suitable
strategies [13].
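To fix ideas before discussing learning, here is a minimal data-structure sketch of the hierarchy described above (an illustrative representation, not the paper's implementation): a knoxel is a point in PS, a Situation is a set of simultaneous knoxels, and an Action is a temporal sequence of Situations separated by scattering events.

from dataclasses import dataclass
from typing import Tuple

Knoxel = Tuple[float, ...]            # a point in the dynamic perceptual space

@dataclass(frozen=True)
class Situation:
    knoxels: Tuple[Knoxel, ...]       # one knoxel per moving component

@dataclass(frozen=True)
class Action:
    situations: Tuple[Situation, ...] # subsequent configurations in SS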
5.1
Learning in the architecture
The structure of the conceptual spaces allows us to manage
the learning of the link between perception and action at
different levels of representation. In the Perceptual Space,
we need to recognize and classify the motions of single
phalanxes (e.g., UpPhalanx1): this is an easy task, and the
system uses a classifier based on a perceptron neural
network. In the Situation Space, we need to classify and
recognize complex dynamic postures: the system uses a
recurrent neural network able to learn the different hand
configurations. Let us consider a set of knoxels s = {pk1,
pk2, …, pkm} corresponding to an instance of a Situation
concept C, e.g., Hand Opening. When a knoxel of s, say
pk1, has been individuated by the subconceptual area and
is presented as input to the recurrent network associated
with C, the network generates as output another knoxel of s,
say pk2. In this way, the network predicts the presence of
pk2 in SS. The expectation is considered confirmed when
the subconceptual area individuates a knoxel pk* so that
pk2 ~ pk*. If the expectation is confirmed, then the
network receives pk2 as input and generates a new
expected knoxel pk3, and so on. The network therefore
recognizes the configuration of knoxels of the associated
concept according to a recognition and expectation loop.
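A sketch of this loop follows; concept_net stands in for the trained recurrent network of a concept C (mapping a knoxel to the next expected knoxel), and the confirmation tolerance is an assumption:

import numpy as np

def recognize(concept_net, perceived, start, tol=0.1):
    # recognition-and-expectation loop: confirm each expectation against
    # the knoxels produced by the subconceptual area
    remaining = [np.asarray(k, float) for k in perceived]
    expected = concept_net(np.asarray(start, float))
    confirmed = 0
    while remaining:
        d = [np.linalg.norm(k - expected) for k in remaining]
        i = int(np.argmin(d))
        if d[i] > tol:
            break                     # expectation not confirmed: stop
        confirmed += 1
        expected = concept_net(remaining.pop(i))   # next expectation
    return confirmed                  # how much of C has been recognized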
In the Action Space, a similar mechanism is employed to
classify complex actions: when C is an Action, the
previously described sequences now refer to a succession
of different SS configurations. It should be noted that the
SS case is an example of synchronic attention, while the
AS case is an example of diachronic attention. Recurrent
neural networks make it possible to avoid an exhaustive
linguistic description of conceptual categories: in some
sense, prototype Situations and Actions arise from the
activity of the neural networks by means of a training
phase based on examples. In addition, the measure of
similarity between a prototype and a given Situation or
Action is implicit in the behavior of the network and is
determined by learning. As stated before, in the linguistic
area, where long term memory is the instrument to plan
and to decide strategies, we use suitable Bayesian
networks. The history that determines the behavior of the
system is a group of sequential action: the current decision
is dependent from past events and actions executed, with a
given probability. The interactions between human user
and robotic system, initially random, are used to update the
tables of probability of the network in order to learn
strategies of behaviors according to the Bayesian learning
algorithms [13].
6
Experiments: the case of the rock, paper, scissors game
We adopted an experimental setup that allowed us to
measure the degree of learning of the system during
human-robot interactions. In this setup, the human user plays
the Rock, Paper, Scissors game against the robotic hand
(see Fig. 7). We have chosen the RPS (Rock, Paper,
Scissors) game because it is simple and fast, and it involves
hand dexterity and strategy between two players. Moreover,
the RPS game is based on standard hand signs, and its rules
are simple and well known. Players may use any
combination of these throws at any time throughout the
match. Any throw that does not conform to the standard
hand positions, and thus cannot be deemed a rock, paper, or
scissors, is considered an illegal throw and is thus
forbidden.
6.1
Learning game behavior
The robotic system, in order to choose one of the three
game signs, uses a suitable sequential mechanism of
expectations. The recognition of a certain component of a
Situation (a knoxel in PS) elicits the expectation of
other components of the same Situation in the scene. In
this case, the mechanism seeks the corresponding
knoxels in the current PS configuration. The recognition of
a certain situation in PS can also elicit the expectation of
a scattering in the arrangement of the knoxels in the scene;
i.e., the mechanism generates the expectations for another
Situation in a subsequent PS configuration. In this way,
expectations can prefigure the situation resulting as the
outcome of an action. This implements a predictive
behavior of the robotic hand, and it represents the ability
of a player to predict the opponent's action before its
completion, using only sensorial input. For example,
when the robot recognizes a starting instance of the Paper
path situation, it immediately performs the Scissors action.
We take into account two main sources of expectations.
On the one side, expectations can be generated on the
basis of the structural information learned in the Bayesian
network: as soon as a Situation is recognized and that
situation is the precondition of an Action, the symbolic
description elicits the expectation of the effect situation.
Fig. 8 shows an example of the Bayesian network that
computes the most probable sign after two throws, using a
strategy based on the repetition of successful moves. On
the other side, expectations can also be generated by a
purely associative mechanism between situations, by means
of the previously described neural networks.
Fig. 7. Elementary postures of the RPS game executed by the human user, the robotic hand and the simulator.
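The idea behind Fig. 8 can be illustrated with a simple conditional frequency table over the opponent's last two throws; this is a deliberately simplified stand-in for the paper's Bayesian network, with all names hypothetical:

import random
from collections import Counter, defaultdict

BEATS = {'rock': 'paper', 'paper': 'scissors', 'scissors': 'rock'}

class SignPredictor:
    def __init__(self):
        # P(next opponent sign | last two opponent throws), as counts
        self.table = defaultdict(Counter)

    def update(self, last_two, next_sign):
        self.table[tuple(last_two)][next_sign] += 1

    def play(self, last_two):
        counts = self.table[tuple(last_two)]
        if not counts:                           # no evidence yet: random phase
            return random.choice(list(BEATS))
        predicted = counts.most_common(1)[0][0]  # most probable opponent sign
        return BEATS[predicted]                  # throw the sign that beats it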
6.2
Playing a game
The system has played 500 matches against a human user
who uses a defined complex strategy based on “gambit”
composition. A gambit is a series of three throws used
with strategic intent. “Strategic intent”, in this case, means
that the three throws are selected beforehand as part of a
planned sequence. There are only twenty-seven possible
gambits, but they can also be combined to form longer,
complex combination moves. The strategy followed by the
human player in Fig. 9 is represented by the
union of the gambits (PSR) and (RPR), with a random
choice to repeat the same sign or change gambit after a
stalemate or the conclusion of a set. A single game uses
a best-of-three format (max 3 sets; a set ends when a
player wins 2 throws). In the first phase of the challenge
(matches #1-#50), the system plays at random, obtaining a
success rate near 33% (a stalemate is counted as a fail). The
continuous updating of the tables of the Bayesian network
introduces the knowledge of the opponent's strategy. After
approximately 250 matches the system has completely
learned the inner behavior of the human player and has
obtained a success rate near 61.2%. The various
experiments highlighted a learning profile
characterized by a random initial phase lasting
50~75 matches, depending on the player's strategy, a second
phase with a constant converging learning rate, and a final
phase in which the system does not improve its skill.
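Continuing the sketch above, a toy driver against a fixed gambit player reproduces the qualitative shape of this learning profile (the gambit sequence and the throw count are illustrative, and the resulting numbers will not match the paper's):

player = ['paper', 'scissors', 'rock',      # gambit (PSR)
          'rock', 'paper', 'rock']          # gambit (RPR)
bot, wins, history = SignPredictor(), 0, ['rock', 'rock']
N = 3000                                    # roughly 500 short matches
for t in range(N):
    opp = player[t % len(player)]
    throw = bot.play(history)
    wins += (throw == BEATS[opp])           # the bot wins this throw
    bot.update(history, opp)
    history = [history[1], opp]
print(f"throw-win rate: {wins / N:.1%}")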
Fig. 8. An example of the Bayesian network that computes the most
probable sign after two throws using a strategy based on the repetition of
successful moves.
Match #001 (set-by-set throws, Human vs Robot): Result: Human wins; robot throw-win rate 30.77%.
Match #201 (set-by-set throws, Human vs Robot): Result: Robot wins; robot throw-win rate 55.56%.
Fig. 9. The results of 2 matches (#1 and #355 of 500) are reported. The graph reports the percentage of throws won by the robotic system in a single match. After 250 matches the system has reconstructed the behavior of the human player and has obtained a success rate near 61.2%.
References
[1] J.F. Allen, “Towards a general theory of action and time”, Artificial Intelligence, vol. 23(2), pp. 123-154, 1984.
[2] C.G. Atkeson, S. Schaal, “Learning Tasks From A Single Demonstration”, in Proc. of IEEE-ICRA 1997, pp. 1706-1712, Albuquerque, New Mexico, 1997.
[3] A. Chella, M. Frixione, S. Gaglio, “A Cognitive Architecture for Artificial Vision”, Artificial Intelligence, 89, no. 1-2, pp. 73-111, 1997.
[4] A. Chella, M. Frixione, S. Gaglio, “Anchoring symbols to conceptual spaces: the case of dynamic scenarios”, Robotics and Autonomous Systems, special issue on Perceptual Anchoring, vol. 43, no. 2-3, pp. 175-188, 2003.
[5] A. Chella, M. Frixione, S. Gaglio, “Understanding dynamic scenes”, Artificial Intelligence, no. 123, pp. 89-132, 2000.
[6] B.D.R. Stenger, P.R.S. Mendonca, R. Cipolla, “Model based 3D tracking of an articulated hand”, in Proc. CVPR'01, pp. 310-315, 2001.
[7] P. Gärdenfors, Conceptual Spaces, MIT Press/Bradford Books, Cambridge, MA, 2000.
[8] A.J. Heap, D.C. Hogg, “Towards 3-D hand tracking using a deformable model”, in Proc. 2nd International Face and Gesture Recognition Conference, pp. 140-145, Killington, Vermont, USA, October 1996.
[9] A.J. Ijspeert, J. Nakanishi, S. Schaal, “Movement imitation with nonlinear dynamical systems in humanoid robots”, in Proc. of the Intl. Conf. on Robotics and Automation (ICRA 2002), Washington, 2002.
[10] I. Infantino, A. Chella, H. Džindo, I. Macaluso, “Visual Control of a Robotic Hand”, in Proc. IROS'03, Nov. 2003.
[11] J. Lee, T.L. Kunii, “Model-based analysis of hand posture”, IEEE Computer Graphics and Applications, pp. 77-86, 1995.
[12] V. Pavlovic, R. Sharma, T.S. Huang, “Visual interpretation of hand gestures for human-computer interaction: a review”, IEEE Trans. PAMI, 19(7), pp. 677-695, 1997.
[13] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Francisco, CA, 1988.
[14] J.M. Rehg, T. Kanade, “DigitEyes: Vision-Based Hand Tracking for Human-Computer Interaction”, in Proc. of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, Austin, Texas, pp. 16-22, 1994.
[15] R. Rosales, V. Athitsos, L. Sigal, S. Sclaroff, “3D Hand Pose Reconstruction Using Specialized Mappings”, in Proc. IEEE Intl. Conf. on Computer Vision (ICCV'01), Canada, July 2001.
[16] B.D.R. Stenger, P.R.S. Mendonca, R. Cipolla, “Model based 3D tracking of an articulated hand”, in Proc. CVPR'01, pp. 310-315, 2001.
[17] Y. Wu, J.Y. Lin, T.S. Huang, “Modeling Human Hand Constraints”, in Proc. of the Workshop on Human Motion (Humo2000), Austin, TX, 2000.
[18] B.D.R. Stenger, P.R.S. Mendonca, R. Cipolla, “Model-Based Hand Tracking Using an Unscented Kalman Filter”, in Proc. British Machine Vision Conference, Manchester, UK, September 2001.
[19] O. Faugeras, Three-Dimensional Computer Vision, MIT Press, Cambridge, MA, 1993.
"!#$&%(')+*,-/.0
(12
435$67.089:;$<=>'?+*@BA-DCE-F(GH/I
JKML5NPOQ$LRLTSMUVXWYN[ZU]\5^_\Y`Nba6cMU
W=U]dHNbafegU_hi\RZ(egQjcMUOZ$klQma6hiNbegU_LnNj\@o$U_pfeg\RhiU_pfe6U]LnN$Vq
ZMU_r[\5a6pfU_etNvs u6JNwoN[dU]\5ZxnN[y
zU{NwoN[^]N[a6U]N}|m|~VM€m€|m‚Pƒ„QmhiNVMO…egN[^_†
‡-ˆ…h}N[U_^‰‹ŠŒU]QL5LTSMU&VŽZHNba6cMUb‘YcU]p5’“KMZMU_a6QmhiN|m’“U_e
” hw\5cM\RQ}•0\RpfegN
Opfe6U_egK$egQ–cMU—o$LRU]\5Zx5\˜\˜™\RL5ZMQm^_QmšmU_\YcM\5^_^{N–•0QmšmZU]x5U_QmZM\bVX•0QmZMpfU]šm^_U]QP`N[x5U_QmZHN[^_\=cM\5^_^]\˜ƒ„U]LR\5a6LTS\[V
z=U]N[^]\˜›œN[až;|Ÿ$VH€m€|n~( Pƒ
QmhiNV$O…eTN[^¡†
‡-ˆ…h}N[U_^‰:ŠŒL5\RpfeTNb‘YU_d¢’“a6hE’ŽLRZMa5’“U_e
§ ¨!#"=ª§$% &¢²Œ¬Ð¯&¬6­¬6®°¯…±…²—³9´:µ˜¶Œ·“¸¹R«³Rºb³9¸»n¼­¸&¬ž½¾­X¿_µ˜«—»ŒÀ²t®9­ºb¬f¬ž´
¹ÂÑ[È´t¬f³9­&Ê9¹Âь´5¬f¹Ž¸f·Ž¯³RÓM¬fÑbȖØ׬6ÈÑb²Œ¬f¹Â¹Â·Ž¯´=¬ ­&Ñb¸&¹Â´„²t¬6±¬:¸&¸²t¹Ž¯&ÊR¬6¬—¬
±ÇÃÖ¬f³R®°´R¯&´t¸½BÈPÑb¬ž®9¯¢ÃÖ¯&­±ž³R¸—®R½ ºb­¬—³9®9¸&¸&´P²=²Œ¬-ÃÖ¬ž¯´ŒÁ9³9ÁR³5½¹Ž®°´t·b¬ž®¾¹Â¬ž­ºŒ¯¸&¹Â¹Â´Œ³B³9Á·Â³9ÈÁ9¬f­¸¹ÆÊ9±ž®°¬ž®9´b·Â·Ž³9Èn·ÂѼ Ä
¯³9¬fÃt®9±Ç·n³9­&·Â·Â¼¬f­l±¸¸¬ž¹Ž½¾ÊR¬X­º[¸&¬ž²b²t®T®g¸XÊn®°¹Â¯³9¬¶Œ¯±f®°¸lÑt¼5Ñt®9ºŒ¹Â±f·Â®°¬·5³9³°Ã[Ãt­&®°²Œ´t³T¹ŽË½¾¹Ž®°´t·nÁ­&ÊTÑb®°¬6¯±Ç¹Â¹Â³9¬f¶t­f­$É6¹Â¸l´+¼nÑb¸²Œ¬6¬­
·Æ¹Â½B®T¸&ь¸&¬ž¯&¯³TÊR±f¬M®9­&¸&¬²Œ¬µ˜¬%«—'m¬f»±Ç¸&®°¹Â¯Ê9¬¬f´ŒÈŒ¬f¬f­­&­ž¹ŽÁRÉ6´ŒÑ[¬f¬žÈ¯&ÃÖ³9®°¯´t½BÈ
®9¯&´t¬6±Ç®°¬·Â¹Ž®°Üf¬f´tȾÈ+¹Â¯´³9ºt³R¶t¯ÈŒ­l¸¬ž´Œ¯¬f¸&­³ ­
³9îB¯³9º[³°¸&¹Æ±-­¼­¸&¬ž½=Ó
¹Â¹Â´Œ½BÁYÎÏь´¾¯&½B³T¸&ÊR³°²t¬ž¸¹Â­¹Ž½BÊTÑt®T¬ž®°¸´5¬fÑ[¸‹Èj¬ž³9¯®°ÃËÑt¸&ь¬²Œ¯&¬®°³5¯®9Ñ[¬±…¬ž²Œ½B¯&¬6ÃÖ³9³9­¯¯¸&¬½B³˜±Ç®9³9µ˜´t´b±Ç«—±Ç¬
¬ž»@¯¹Ž´Œ´wˬ6®RÈB²Œ±ž³5˱ž­³9¬¾¹“¸½B²„ÁRь³R¬ž·Â®9´Œ¹Â­&·ÁR²Œ¹Æ¹Ž¹Â­´t´Œ¬žÁ¸¬ž²Œ¯&®¬ Ä
ÁR¹ŽÊR¬¾¸®R­Ômɳ9¯‹¹Â´P¸²Œ¬
¯³9ºŒ¶b­l¸´Œ¬f­­‹®9´tÈj¯&¬f·Ž¹Æ®°ºt¹Ž·Â¹“¸l¼,³°Ã¢¸²Œ¬
½ ­¾®9(+±f*±Ç,]³RÓ ½0Ñt&¢·Ž²Œ¹Æ­¬w²t¹Ž¹Ž´Œ´tÁj±ž¯&ÁR¬6³R®9®°­&¬P·Æ­Ò³°¸&IJt®°Ñ[¸
¬ž¯&±žÃÖ®9³9´Œ¯½¾´Œ³°®°¸„´t±žº[¬P¬˜´Œ®9³°±…¸²t¹Ž¬f³RÊ9´Œ¬f·Âȼ
½Bºn­&¼¼˜¬f­l®9¸¹Â¬ž´t´t)
³9­&¹ŽÃb½B¸²Œ¹Â·Â¬®9ȯ­¹ÂÊn¼±ž¹Â­®9ȸ&Ñt¬f¶b®°½;®°ºŒ·Ð¹Âºn·Â¯¹“¼‹³9¸¹Žº[¹Â¬6´t³°­/±Ç¸…(­ž¯01¬fɍ,{®RÓºŒ­¶¹Â´Œ¸‹Á®°¸&·Æ²Œ­&³¬´n¹Â½B¶Œ½Òь¯ºb³T¬fÊ9¯X¬0³°¸&Ã[²Œ¯&¬¾³Rºb¬ .„³9¸±Ç­M¹Â¬žË´t¹Ž±ž¸&²¼
±º[ž³9¬¾½B&¢Á9²ŒÑŒ¯¬ž³9·Â¹Â¯¶Œ­&¬˜²ŒÑ[¬f¬f¹ÆȖ­È@®–º5¹Â´@¼,·Â®9¸¸¯&²Œ²ŒÁR¬B¬¬YÃÖµ³9Êg·Â®9·Â«—³T¯&¹ÂË»m¬ÇÓM¸l¹Ž´Œ¼}̗Á=³°±ž±ž±žÃ ®°³9¸…¸&¯…®9¬fÈ­&Á9¹ÂÔn´Œ³9­„Á˜¯¹Â¬f¸&¸²t6­ ³25®°¸(7%348:,]±f9<ÉX®°;¸&´=?²Œ>@A¬fº[=B¼j¬@;±f®R®°@D±´ C Ä
Úl®9E ¬68ь±Fь¸G­‹·Â¹Â9<±f;­&®T=A±f¸&®TGǹÂɸ³9¸´bˬž­:¯²Œ¬f¬f¹ÂÈ@´t¯&¬B±Ç¹Ž·Â´,¶t¸²ŒÈ¸¬¾¹Â²Œ´Œ¬¾ºtÁj®9¬ž­­&´n¬6¹ÆÊn±Ò®°¹Ž¯…¯º[±…³9¬ž²j´t²b½0®9®gÊ5´t¬f¹Â´5Ȗ³9¸f¶tɯ¯+¬fË­¹Â±Ç¹Ž­-¸&¶t²–¸¬9³˜ÉH®È±ÇÊT³9¬Ç®°Ä{·Â·Â½B¯¬f¹Ž±¬ž¹Ž¸+´t¸l¼˜¹Ž³R´Œº³°ÁbÄà É
¸8S³gTÕnU¹ÆG±B9F4Ë¢;:®9NP>V­8:¸&@t¬¾É5±Ç˷¬f¹“¸®9²B´Œ¹Ž®9´tьÁ˜ÑŒ®°·Â¹Â´t±f®TÈj¸¹Ž­&³R¬ž´t¯Ê5­¹Æ¹Ž±Ç´¾¬¾­&¯¬f³9±Çº[¶t³°¯&¹Ž¸…¸l­ ¼9HÉ°IK­&¶ŒJM¯Ê9LONP¬ž>Q¹Â·ÂN·Â;®99R´t=S±ÇG¬ N
®9L;:´tNPȖ>V8:@t¯¬fɱžË³9Á9¹Ž¸&´t² ¹“¸¹Ž®9³Rь´Pь·Âь¹Â±f¯&³R®TºŒ¸¹Ž·Â³R¬ž´t½¾­„­ H/³R´ T8­lWY¸³nXD±…JZÔTU<®9[ZÁ9>¬9@AÉ=2¸&;¯¶t@D±…C\Ô}I]·Ž³5;:®9@^È> X_¹Â´ŒJMÁ `
®9´tÈ ¶Œ´t·Ž³5®9ȹ´ŒÁtÉ-®°´tÈE±Ç³n³9Ñ[¬ž¯…®T¸¹ŽÊR¬˜¸&¯…®°´b­Ñ[³9¯&¸®°¸&¹Â³9´H!GaW?`
XDL89<;:NP>V8:@b;@_CdcQ8 Eae >@A=9Éb˹Ž¸&²˜®9ьь·Â¹Â±f®T¸¹Ž³R´³R´=ÃÖ³9¯½¾®T¸&¹Â³9´
½¾®9´t®°ÈB¹Â´5½¾¸&¬f®°´tь®°ÑŒ´t¹Â±ž´Œ¬9Á„ÉX¿]½¾»ۍ®9ÌÑwµºŒ¶ŒÀÉ5¹Â·Æ¬ÇÈÕ¹Ž´tьÁt·Â³9ɯ…±ž®T³n¸¹Ž³9³RÑ[´0¬ž¯…³9®TÃm¸&Èt¹ÂÊ9®°¬´ŒÁR·Ž³¬ž¯±f³9®°¶t·Â¹Ž­HÜ6®T¬f¸&´n¹ÂÊ5³9¹Ž´ Ä
¯³9´Œ½B¬f´R¸…­žÓ
¹¸Ž¸¹Ž³R®9µ˜¯&´t¼7­!«—®°( »j4´t,]²tÉMÈ×®g­&ÊR­&Ñt¬f¬-®9±ž±žº[¶Œ¬B¬ž¯¬f¹“®9¸l´Yь¼7ь¶t®°·Â­&¹Âь±f¬fÑt®Tȸ·Ž¹Æ¹Ž¹Ž±ž³R´˜®°´t¸&­&­!¹Â¬ž³9ÊR(´tk6¬žf­,]¯…É®°(·Æ·mgb®°¬žiÉ ¯´nÁ9h1Ên¬¾,{¹Âɯ&­&³R±f¯´Œ¬f®°­½B·Â¬¾±Ç¶Œ¬ž®9´5¬­¸­6­ ³9¬f5HÑ[½‹½B¬žºŒ¯…¹Ž®T·Â·Ž¼ ÄÄ
£ ¤=¦§M¨H©+ª§
¥
¹Æº[«±ž³°®9¬f¸­&´5²@¬f¸&®°·Â˼
¯…±…¹Ž¹Â²Y¸&´t²t±Ç³9¹Ž¯´P´@¬f®R«µY­¬6³R¶ŒÈºb·Ž³9¸&¹Â¹H¸&´=¹Æ«±ž¸&­ ³9²tº[¬‹®°³°´t·Æ¸:È,®9­»n¸Ì¼¼9¯&­¸&¬f¸&¹Ž¬ž®9Íb½¾¯±ž­f¹Â­0Ɍ®9º[·H¿¡µ˜¬žÎϹ´R´Œ«¸Á„¬ž»Œ·Â·ŽÀ¹Â¹Â´nÁ9²bÊ9¬f®9¬f´t­­-±Ç¸&¬R¹Â­ÁR¹ÂÉÁ9®°®°´Œ¸&´t¬6¹ŽÃÅÈÈ Ä
®‹±ž¬žÁ9´5¯¸&³T·Â¼‹ËÑt¹Ž´t¯&Á+¬6­´5¬f¶t´R½‹¸¬fº[ÈÒ¬ž¹Ž¯Ð´0³9¸Ã²ŒÑŒ¬¯·Â³°¹“¸¸&¬ž³9¯…¸l®T¼n¸&Ñb¶t¬-¯&¬R­¼Ӎ­Îϸ&´B¬ž½¾¸&²t­Ð¬²tь®g¯&Ê9¬6¬­¬fº[´5¬ž¸M¬fË´
³9¯¯¬ÇÔ Ä
˲t¹Ž¬,ÁR²ŒÑŒ·Ž¹Â¯&Á9¬6²5­¬f¸…´5­0¸=¸&²Œ®w¬Y¸¯®°¬žÕn·Æ®T³R¸´Œ¹Ž³9³R½Ò´t­¼}²t¹ŽÃÖÑ}³9¯Yºb¬ž±Ç¸l·ÆË®9­¬f­¬ž¹ŽÃÖ´×¼n¹Žµ˜´tÁi«—µ˜»}«—®9´t»(ÈiÉ˵˜²Œ¶Œ¹Â±…·“¸² ¹
³9¬6̗®9âÁ9­¸&¬f²Œ´Rˬ¸Y²Œ¹Æ·Ž»n±…¹Ž¸&¼²¬f­l¯¸¯®°¬ž¬f¸&½¾±Ç¶Œ¬f­¯¹ŽÊR¬„¬f¿_µ˜³9Ș´–Ì­&¹ÂµÁ9»t´ŒÀ«—Ó¢¹ŽÍb»@Ø@±ž®9¸&¬¯´5¼n¸ ¸&¹Ž²Œ´Œ®°¬fÁ˜¸´œ¸&¬f¸&´5³˜È¸&¹Æ¹Â²Œ­³9±Ç¹Â´Á9¶t²t­¹Â­¾·Ž´˜¹ÂÁ9¸¸²5²Œ²Œ¸:¬˜¬B¸¸²Œ¯&¯&¬6¬¬f±Ç´t¬f®9Ȍ´R¯¸­Ä
Ñb®
®9¯&­¬6¸f±ÇÓ¬f´RØ,¸·Ž¬:¼YÍt­´t¸®9®9¯·Ž·Â¸¼¬fÈYь¯&ь¬6¯­³°¬fÚl´5¬6¸±¸-¸&²ŒË¬Ò²Œ½¾¹Â±…®°²,¹Â´Y®9ȌÁ9Ȍ³R¯&®9¬6·Â­&­­&³°¬f­ÃX«¸&²Œ³9¬0º[¶t³nÙ­&¬Ò®9¯&³9¬ Ã
®Bµ˜«»„¸³„­¶tьÑb³R¯¸¬f·ÂȬf¯&·Â¼„Ñ[¬ž³Rь·Ž¬RÓ
£ ¤=¦§M¨H©+ª§
¥
´tÛ$Ȍ¬ž¬ž®PÁ9·Â·Â·Â®,¯¹¹Â±ž¶t«¬ž·“¯…¸³9±ž¹Žº[½B®P³°¹H¸­&¹Â¶Œ®°±f´Œ¹-®˜´t­&¬=¹¹Æ­l¶Œ¸È¬ž´˜¬ž½B·Â·Æ¹Â®½B¹½ÒьÎÏ´5¶Œ¶Œ¸&·Æ¬f·Ž­¸&·Ž³¹-·Â¹ÂÁ9­¯&¹Â¬ž³RÁ9´tºb´tÜf³9¹“®Íb¸,±f̗®T¿¡µ¸¯¹Ž¸ÊR«—¹“Í[³t»Œ±ÇÉtÀ¾¹Æ´t®°¬ž·Â²b¬9·M®wÉX­¬ž¬6®g¸ÈwÊn¸&¶³R¶Œ¸&¯&´³¬
´n·Â¬Ç¶Œ¸&½B¸&¬f¬f¯®°¯&³„¸&¶Œ±ž¯…¯&®Œ¬6ÓH­&±žÞ¬ž¶Œ´5¸&¬f¬+­¸&ȳ ¹·Æ®g­¹ÆÊ9­l³9¸¯¬ž³½BȌ¹¬fÑt­¯&±Ç³9¯¸&¹ŽÊR³°¬¸¹Ž¶tÑb´t®°·Â®+¹M±Ç¬+Ý ·Æ®9¯&­¹ÂÑ[­¹Ž³9Íb¯&±f¸®°®Tܞ¸¹Â³¾³9´t¹Ž´ ¬
Ȍ­&¹Æ¬ž­lÁ9¸¬ž·Â¹Œ½Bµ˜¹(«—½‹»Ò¶Œ·Ž±…¸&²Œ¹¬®9½BÁ9¬f¬Ç´R¸¸¸¬Ò¬¿_¹Žµ˜´0¯Ì¹Â­»Œ®°ÀÇ·ŽÓŒ¸&³-ßH·Ž¬ž¬¢´t¯Á9¬ž³9·Æ´t®°³0Üf¹Ž³Rà5´Œ¶Œ¹9¹Â´t¸¯È® ¹(ȵ˜¹Æ­&«±ž¶t»:­¬¢­¬ ¹
·Â®°¬¸&¶Œ¸&¯…¬f®Œ´tÉ5ȱǬž³R´t´0ܞ¬˜·Â³:±…­²Œ±Ç¬=³9Ñ[¬ž³:½BȬf¹b¯&ÁR¬ž³9Ên´Œ¹ÂȌ³j¬ž´ŒÈŒÜf®,¹Â®9¶Œ¯&´¬á ¹t®°¸&´t¬ž®9½B·Ž¹Æ¹[­&¹Èȹt¬ž¯¹Â·Â±ž·Â®,¬ž¯…·Â±ž¬Ç® ¸&¸&±…¬ž²Œ¯&¬ Ä
²b­®T®°¸´Œ³t´ŒÓ$³+ÎÏ´¯¹ÆÍt±Ç´Œ¬ž¬RÊnÉ9¶Ên¸³:¹Ž¬f´ŒÑt¬¢®9¯ÑŒ¸¯¹Â±ž¬f³9­&¬ž·Æ®°´5¯¸¬¢®°¸&®°³-¸¸·_¬žá ®°´ŒÑŒÜfÑt¹Ž³R¯&´Œ³¬±ž±ž´Œ¹Ž¬ž³+·b­&¯&¬ž¬6ÁR±Ç¶Œ¬f¹“´R¸¸³¬´tÑt¬ž®R·Â·Ž­l³ Ä
·]­&®9áÊn´Œ³9¹ŽºŒÜž·Â¶Œ¹ÆºŒ®°ÑŒ¹Â´t¬ÇÑ[¹_¸&³¾Ó ¸&¹ÂÊ9ȳ0¹(¶ŒÈ´Y¹(­µ˜ÊR³9«—·ÂÁ9»
¬ž¯Ñ[¬—¬ž¯®9·Â¹Â±ž·(¶ŒÑŒ´Œ¯&¬—³RÁ9½¾¬ž®9¸¸&´t³B­¹Â«³9´t³R¹mºbȌ³¹mÙ®9®°¶t¯­&¬¹Ž·Â±…¹Ž²Œ³0¬ ®°²tÁR·Ž® ¹
õøòåçûgô°âPåçò æ_(í_ãè_ê…ägðè]åçé+æò…ünùè]ý[7élæ_ó6é&õñgê…ëè_ñgêë{ïí]ä˜í_åçåòåçû5æ þg
ìT÷$élê…ûÇûgè¡í å í]ÿÇåîžélê…ð6è_ïçïæ]æ¡ðåí]íÅé ðæ]ñgê…ìgûgì°ó—åòí_øè¡ä0í]élélöÅæ]óûÇéí]ê…ôžélè]ïçð=ë{ïçäå [ñõ élgûžöÖûgí÷ógŒøúélåè(6ùŽìgélöÆó¾è_í{òê…_ïçê…åîélûgê…ëÏûó í
õ òôgåçïçéHøòô9ò…í_åçë mò ì9òû6éûÇí_æ *
71
[ѵ( l4¬ž,{«—¯&ɍÃÖ»=³9®°´b¯²t½¾È˜®R®°­½¾´tь±ž®°¯¬´n³T¼Y³°Ênù³°È¯¸¬6³9²ŒÈº[¬ž³°¯…ÃÖ¸­f³9¹ÂÓm¯±&¢®9ºbь²Œ®9ь­&¬Ò¬f·Â¹Â­f¹Â±f´tÉ°®T±ž¸¸¯&®R¹Ž¬6³R­®9Ô´t­&­­—¬f®°È˜®9´t´t®gÈBÈ=ÊT®°º[¬Ç¹Â¬žÕ·Æ²t®°Ñ[ºŒ®g¬žÊn¹Â¯·Ž¹Ž¹Â¹Ž½B³9¸l¼Y¶Œ¬ž¯…´5®°­H¸…´t³9®°È ÷
¶t³R­±ÇÑb·Â¬f®9¬f´t¯f¯É®°®°¯­&¸&¹ÂÑ[¬Ò³R¬f­M¹Ž±Ç´¸¹Æ®°²tȌ·H®T¼5¸®°´b¸®°®°¸&¯½B¬f¬´5¹Â¸&½B±Ò¹Â³9³9¬ž´,¯´n¬—Ên¹Â­¹Â®°¯&´b³RÁRȾ´Œ¹ŽÊR½B½B¬ž´@¬ž³9´5¯¸&¸¬³Y­fÉb±Çµ˜³R˽B«—²tь¬ž»@¯·Ž¬ž¬‹ÈÕ(¬ž¶ŒÓ$ÊR´tÎϬž±ž´·Â¬ž³9¯&ÑtÑ[¸®°¬f®9¯&ȹŽ¸&´5¹Æ¸&¸l±¼³ Ä
¬f®9´t´t±ÇȘ¬¾¶t³9´ÃÃÖ³9¯³9¯¬fº[­&³°¬ž¸…¬f­+´@®°±…´t²tÈ,®9´Œ³°Á9¸¬6²Œ­¬ž¯:±f®°®9´Á9¬f²b´R®°¸…ÑŒ­-Ñ[¸&¬ž²t´j®°¸‹È®°¶Œ¯¬0¬B¸&¬Ç³Õn¸&¸&²t¬f¯&¬0´tÑt®9¯&·M¬6¸&­l³ Ä
¸²Œ¬:µ˜«—»¹Ž¸­&¬ž·ŽÃlÓ
Ìw­&¹ÂÁ9´Œ¹ŽÍb±ž®9´5¸ºb³n³5­l¸¸³¸²Œ¬Ð˳9¯Ô—³R´:µ˜«—»+²b®9­¯¬f±ž¬ž´5¸&·Â¼
º[̬žÌ—¬f´Ì—ÎÐÁ9¯¹Â³9Ê9º[¬f´ ³°¸&¹Æ®9±—·Â­&±ž³×³9´5ºn¸&¼œ¬žÕn«¸­³R®°ºb´t³9Ȅ¸&¹Æ±ž«­˜³9º[±ž³9³n½BÙ¶ŒÑb\Ѭž¸&¹Ž( n:¸&¹Â,]³9ÓM´tÎÏ­f´„É:á­&®R¶t±±…¸6²;Én¸&²Œ®9¬­
³9Ȍ®9Ãm¯&¬f¬—­&¸&¹Ž²Œ¬ÇÁR¬Õn´=¸&½¾¯®9¬ž´t®°½BÚlȳ9¬ž¯Ð¸&·Â¼B²Œ­¬‹±Ç¶t¹Â¯&¬ž­&¬6¬Ç´5®°ÃÖ¸&¶Œ·Â¹Ž¹ŽÍb·bÜ6±ÃÖ®T³9¸&±…¯¢¹Â²b³9±Ç®°´=³R·Â·Ž½0³9¬fÃM´ŒÑbÁ9µ˜®°¬6¯­X«¹Ž´Œ®°»ÁÒ´t¹ÂÈB®°­—´t¯¯&Ȅ³9¬fº[ÁR®9³°®°´t¸¯…®°¹Âȷ±—¼n¬6È
±Çܞ³9¹Â´Œ®9´5Á‹­¸¬Ç³9ÈÕn´Œ¹Ž¸ÃŬ­Ä
ÃÖ¬f¯&¬f´5¸ ­¸&¯…®T¸¬žÁ9¹Â¬f­ ®9´tÈY¸¬f±…²Œ´Œ¹Æà5¶Œ¬f­ ºn¼Yь¯³TÊn¹ÂȌ¹Ž´ŒÁ®=±Ç³9½0Ä
½BÑ[¬Ç³R¸´j¹“¸¹Ž¸&³R¬f´t­­+¸&º[?³ ¬f'[ÈP¬f¯:ÃÖ³9´Œ¯:¬ž¬žË)ÕnÑ[±…¬ž²t¯®9¹Â·Ž½0·Â¬ž¬f´Œ´5ÁR¸¬f­f­ Ó=¹Â´,µ˜¸&³9²Œ¯¬ž¬
³TȌÊ9¬f¬f¯f­&¹ŽÉÁR¸&´@²Œ¬6³9­Ã¢¬µ˜±Ç³9«½0D» Ä 5
ÃÖÑb³R®°¯¯¬f¬žÈՌ®°Ë½B¹“¸ÑŒ²˜·Â¬‹³°¹Â¸´˜²Œ¬ž¸&¯-²t¬Òȳ9«½¾³R®9ºb¹Ž³´tÙ­¶ŒÃÖ³9Ñ@¯—­&µ³±ž«—±Ç¬f»m¯Ébȸ&²Œ³R¬Ò½¾¬ž®°´n¹Â´ÊnÉ(¹Ž¯®9³9­-´Œ½B±Ç³9¬f½0´R¸Ä
¹Æ­¢²Œ¹ÂÁ9²Œ·Â¼ÈŒ¼5´b®°½B¹Â± ®9´tȹŽ´t±ž·Ž¶bȬf­®9´³RьÑ[³9´Œ¬f´R¸¢¸¬f®°½=Ó
³9®RÃ[±ž±žÎÏ®´„³9¯&½B³R¬6­¶ŒÑŒ¹Æ¯Ð·Âȹ­&¬f±Ç²¾´t¶t±Ç¯&® ¯¬¬ž´5ÃÖ´5³9¶t¸Ð¯½‹Ë¸&º[²t³9¬ž¬¯¯Xԋ¬f³9·ÂËȌÃb¬ž¬—¸…¯®9®°·Ž­&¼:¯Ôn¬—Ñ[­X±ž¬ž¹Ž´t³9³R´t±žÑŒ·Ž­&·Â¶b¬9¹ÂȌÈÉ°¬ž¹ŽË´t¯²Œ¹ŽÁ^´Œ¬f5Á ¯&³9¬¢¸º²Œ®-Úl¬¬6µ˜±­¸H±Ç«—¬f¸&»Ò´t¯…®°®°±ž´b¯®°¹Â­l´ ³ Ä
Ñ[®9´Y³9¯&®°¸®°ÑŒ¸&ь¹Â·Â³9¹Æ´±ž®TÉ[¸­¹Ž¬6³R®°´=¯…±…¸&²²ŒÉt¬‹­&ь¶Œ¯&¯¬6Ê9­¬ž¬f¹Â´t·Â·Â±Ç®9¬+´t±Ç³9¬:ÃM®9­&¬ž´tÊRÈ=¬ž¯…Á9®°¶Œ·¹ÆȌ¯³9®9º[´t³°±Ç¸…¬R­¢Ó¹ÂÎÏ­—´˜½B­¶t³°¸&±…¹Ž² Ä
ÊT­&¬ž®°ÊR¸&¬ž¬f¯…ÈB®°ºn·M¼‹Ñ[¬ž¸&²Œ³9Ñt¬·Ž³R¬RºÉ$Úl®9¬f­:±Ç¸&˹ÂÊ9¬f¬·Ž·X³9®RÃm­+®R±žºn±Ç¼Y³R½B¸&²Œ½0¬
³ÊgȌ®9®°¯&¸&¹Â¬Ç¹Â´Œ¸l¼Á ¸&³°²tìÃÖ¶Œ´Œ´t¬ž¬6±¸ÈŒ¹Ž­X³R´³9ÄÃ
®9·Ž¹Ž¸&¹Â¬f­¸&³„º[¬:ь¯³TÊn¹ÂȌ¬fÈ(iÓ &¢²Œ¬:½¾®°¹Â´˜®°¹Â½³°ÃM¸&²Œ¹Æ­Ñb®°Ñ[¬ž¯—¹Æ­
¸²t³¹ŽÁR®R²ŒÈŒ·Ž¹ÂÈÁ9¯²5¬f¸­¹Ž­´ŒÁ ¸&²Œ¸¬B²Œ¬½B¯³R¬ž·Æ­®T¸-¸¯¹Ž³R¬f´t±ž¬ž­´5²t¸+¹ŽÑ0Ⱥb¬ž¬žÊR¸l¬žË·Â³9¬fь¬ž½B´„¬žµ˜´5¸…Ì-­»B³9®°Ã´tµ˜ÈB«—µ˜»«ºn»m¼ É
³R­&à5Ñ[´E¶Œ¬f¬f±Ç´5¸¹Ž²Œ¸&Íb·Â¬j±f¼9®°É¸&·Â¬6Ë·Ž¼=®°¬B½ ÃÖÃÖ³³±Ç˱Ƕb¶b³R­­:¹Â¯&´ŒÔ ³9Á´,±f³9®°¸´@Ñt²Œ®9¬
¸&ºŒ²Œ±Ç¹Ž¬¾·Â³n¹Ž¸&±Ç³R¹Â³nÑb¬f³R­=¬f¯¯È³°®°Ã0¹Â¸&´t¹Â³9¸&®°´,²t¸&¹Â¬–³9¹Ž´P´µ˜¬ž®R«—Õn­Ñ[Ñt»m·Ž¬fÓ³R±Ç¯»¸®°­ ¶Œ¸&ºt®°¹Â³9­&´t´¬ÇÈ Ä É
˲t¹Â±…²–¹Æ­:³R´Œ¬
³9â¸&²Œ¬½¾®°¹Â´P¯&¬6àR¶t¹Ž¯¬ž½B¬ž´5¸…­+ÃÖ³9¯‹¸&²Œ¬µ˜«—»
ˬ+®°¯¬+ȬžÊR¬ž·Â³9ь¹Â´ŒÁ0˹“¸²Œ¹Ž´=¸²Œ¬+«³9º[³nÙ®9¯&¬ ь¯³°Úl¬6±¸6Ó
Ñt¹Â´„¯&¬6&¢µ˜­²Œ¬f«—´R¬Ò¸»ÒÑt¸&ÃÖ®°²Œ¯Ñ[³9¬Ð¬ž½ú¸¯-®TÕ¸&¹Â­-²t³R¬—´Œ³R¯&³9­lÁ5½Ò¸…®°®°¼—´Œ´t¹ÂȌËܞÑb¬6²Œ³RÈY¹Æ¹Ž±…´5®9²‹¸Ð­®9³9ÃÖÈt³RÃ(È·Žµ·Â¯&³T¬6ÌË­&­&6­D» ¬f5H5­(¹Ž¹Ž¸&´
´j²Œ¬»»n¬f¯&¬f±¬6±Ç¸±Ç¸&¹Ž¹Â¬ž³R³9´5!´ f´ ¸3ŒË0„ÉR³9Ë˯Ԭ¬
³ÃÖRȌ¬6´}¹Â®T­±Ç¸µ¶Œ¶t¯&­«—¬6­-­-»D¸&HH²t³°ÃЬ0Ít¸´b´t²Œ®°¬ž!¬ ·ÂË;·Ž¼Rpqɸ8S¯&¹ÂTa¬f´78s´trtȌ»­+¬f;±9<®9¸G-¯&¹Ž³R¹Æь­´o¯¹Â´Œ³°g,ÚlÁ¬6±Ë¹Ž´@¸6¬YÉm¸&¸&Ȳt²t¬f¬¾®°­¸ ±Ç¯&¬6¯½B¹Â±Çºb¬ž³9¬´5¸&¸¹Â¸&Êg²t¯&®°¬6¬Y¸&­¬6¬6½B­-®°¸&¯…®9²Œ±…¹Ž´² ¬
¬fȌ·Â¬žÈŒÊ9¬ž¬f¯·Ž·Ž³R¼RÑŒÓ ½B¬ž´5¸M³°Ãb®-µ˜«—»:¸®9¯&ÁR¬Ç¸¸¬fÈ:¸&³+®—¯¬f­&¹ÆȬž´t±ž¬ÐÃÖ³R¯¸&²Œ¬
rt8189<C:>@_;NR>V8@|{}>I]G@sU>V8@^U
rt868XsG9<;:NP>V8:@
„}@D8…iLGC=AG
rt8189<C:>@_;NR>V8@
‹ 9R=S;:@^>Œ1;NR>V8@
$®9ºŒ·Ž¬‚*S5Ðٷ®R­&­&¹“Í[±ž®T¸¹Ž³R´ÈŒ¹Ž½B¬ž´b­¹Â³9´t­
­&¼­l¸¬ž½ ³R¯&Á5®°´Œ¹ÂÜf®°¸&¹Â³9´Ó̗­ÃÖ³9¯—µ˜Ì-»mÉt³R´Œ¬:³9ø²Œ¬‹±…²t®9¯®R±Ä
¸¬ž¯¹ŽÜf¹Ž´ŒÁ=ÃÖ¬6®T¸¶Œ¯&¬6­+³°Ã¸&²t¬
µ˜«—»,¹Æ­ ¸&²t¬„Ôn¹Ž´tÈj³9ñdzn³9Ñ[¬ž¯…®TÄ
¸—¹Ž³R³T´_ˎT¬f±žÊ9³9¬ž½B¯6É+Ñb®×¬ž¸&¹Žµ˜¸&¹Â³9«—´»"¸&²t±f®°®°¸´ ¸…´Œ®°Ô9³9¬6¸­—ºbÑt¬i·Â®R­±Ç¹Â¬Ò½B®9ь½B·Ž¼"³9´Œ±ÇÁ
³R´t¸&­&²Œ¹Â¬BȬf¯¯&³9¬6º[È"³°¸®9­f­ Ó
¸Ên®P²Œ¹Â¯&¬­&³RÑb´Œ¹Æ¬6­&½B±Ç­&¹Æ¶Œ¬ž®°¬6´5·-­
¸f±žÉ+®9®°­&¯­&¶t¬Y¹Æ­±…¹Â³°²´ŒÃ:Á–®R®PË­=µY²Œ¶Œ¬f´t¶t´ ±ž·“¸¬žÈ¹“¯&ÄϬf¸Ì®°®9ÁR¹Â·Ž´5¹Â¬ž´Œ¸l´5Ái¼E¸Ë®°»n´b¼¹“¸ÈE­²E¸&¬ž¹Â®P½=´t±ÇÉь³R²n½Bº[¼¬fь­±f·Ž¹Æ®°¬ž±ž¶t¸&®9¬f­&·—´Œ¬˜¬f¬f´³°­­ÄÃ
³R´Œ´³9¸l®9˱f¹“à5¸²t¶Œ­¹Ž¯¸¹Â®°´Œ´bÁ‹È¹Â¹Ž´´tÃÖÁt³9É9¯¹Â½¾´0®T¸&¸²Œ¹Ž¹Æ³R­X´¾ÑbÃÖ®°¯&Ñ[³R¬ž½)¯X˸&²Œ¬¬-ь¬ž¯´n³TÊnÊ5¹Â¹Æ¯&ȳR¬´Œ®+½B±Ç¬ž·Æ´5®9¸f­Ó­&¹“&¢Íb±f²Œ®T¹Æ­Ä
¸µ˜¹Ž³RÌ-´P»m³°ÓSÃ&¢µ˜²t¹Â«—­H²b»m®9É$­Mºn¸l¼@˲Œ³:¹ÂÁ9±Ç³R²t´t·Ž¹Â­Á9¬6²5à5¸&¶Œ¹Â´Œ¬žÁ=´t±ž¸¬f²Œ­6¬ž5¹Â¯‹¸²Œ¯¬¬ž·ÆÍt®T¯…¸­l¹Ž¸X³R´t³R­´Œ²t¬¹ŽÑj¹Æ­H˸&²b¹Ž¸&®T² ¸
˯³9¬0º[³°ÃÖ³n¸…±ž­ž¶tÉЭ+¸&²Œ³R¬Y´­&¸¬f²Œ±Ç¬„³R´t±ÇÈi³n³9³9Ñ[´Œ¬ž¬=¯…®T¹Æ¸­0¹Ž³R¸&_´ ²tŽT®°±ž¸B³9½BˬYÑ[¬Ç±Ç¸&·Æ¹Ž®9¸&­¹Â³9­´,¹ŽÃÖ¼P®°½B¸²Œ³9¬Y´tÁ
µ˜¸«—²Œ» ¬
¯…­&¼®T¸­l²Œ¸¬ž¬ž½ ¯®°ÃÖºb¬6­l®T¸¸¯¶Œ®R¯&±¬6¸¢­˜¸&¬f¿Å¸l¯&¼n½¾Ñ[­f¬Ó ®°´bÈ}±f®°Ñt®9ºŒ¹Â·Ž¹Ž¸&¹Â¬f­¾³9Ã+¯&³Rºb³9¸­…À0¹Ž´
µ˜®9¯&¬}«Ì » ½B±ž·Â¹Â³9®R­˜¯­&¬w­&¸&¹Ž²ŒÍbÃÖ¬i³±ž±Ç®°³9¶b¸&´Œ¹Â­&³9­&¬–´j¬fÈúÁR¸&¹Ž²bÊR³9®T¬ž´)¸B´"²ŒÈ¹Â¹Ž‰´¹ÁR'[²Œ¬f(+·Â¯&*1¹Ž¬fÁR4´R,R²R¸PH
¸…­:³°Ñ[¸&¸¬ž²t²Œ¯…¬ž¬­¯Ñ[±ž¬f³n±ž±Ç·Â³9¸&®R¹ÂÑ[­&Ê9¬ž­&¬6¹“¯…­@Í[®T±ž¸&³9®T¹Â³9Ã
¸´w¹Ž³R¸²Œ´t¹Ž´ ¬ ­
Ít·Â¬f¬f­·Â­¢ÈÈÉ+¬Ç¸&¸…²n®°¶t¹Â·_­@ÉnËȲŒ¬f®9¹Â·Ž·Ž¬+¹Â´Œ®RÁ×ȌÈ˯¹Ž¬f¸&­²"­¹Â´Œ¸&Á‹²t¬w¹Â´
±ÇÁR³n¯&³9¬6Ñ[®T¬ž¸¯…¬ž®T¯¢¸È¹Ž³R¬Ç´ ¸…®°®R¹Â·[­³9Ñ[¸&¬f²Œ±Ç¬f¸­¯¢¹Â¹Ž­´ Ä
­&¶Œ¬f­­&¶t±…²®9­±Ç³R½B½‹¶Œ´Œ¹Æ±ž®°¸&¹Â³9´®°´tȱdzR½0Ñt¶¸®°¸&¹Â³9´B( 0nɑ*?*,{É
ÁR(*¯&g³R,{¶ŒÓ˜Ñ+ÎÏ®°´P¯…±…¸²Œ²Œ¹“¬„¸¬fÃֱdzR¸&·Ž¶Œ·Â³T¯q¬Ë¹Â(*1´Œ01Át,{ÉÉg¯&ˬ6¬­¬6­®°ÔR¯…¬Ç±…¸²—±…²j¸³9ь¸²Œ¹Æ±ž¬„­ ¸…(+®T*134Õ,R³9³R´t¯³9½‹·Â¬f®9¼@¯&´ŒÑŒ¹Â¯´Œ³°Á Ä
¸Ñ[«²t³R³R®T­&ºb¸¬f³È®°Ù¯¹Ž¬—®°´’¯³9¬(Í*6Ñt4¹Ž´5,{¯&Ét³9¸¬ž®9Úl¬f¯´t¬f±È=­¸6¸Ó ÈÃֹ³R­¯Ð±Ç¶b¸­&²Œ­¬-­&¸³9®9½B­&Ô¬ ­¹Â®R­­&Ȍ¶ŒÈ¬f¯­¬f­®9­´t¬6È
Ⱦ¸&ˬf¹Ž±…¸&²t²Œ´Œ¹Â´„¹Âà5¸¶Œ²Œ¬6¬­
˳9&¢¯ÔE²Œ¬¾³R¸…´;®TÕµ˜³9´Œ«—³R»"½‹¼Y¹Æ­¸±ž²t³9®T´t¸Ò­Ë¸&¹Ž¸&¬¾¶ь¸¬f¯È³9Ñ[ºn³R¼­&¬B¸lÃÖ˳9¯Ò³×±ÇÁ9·Æ®9¯³9­­¶Œ¹ŽÑtÃÖ¼n­˜¹Ž´Œ³°ÁÃ
¸È²Œ¹Ž¬ Ä
½BI‚¬žG%´b@^­U¹Â>V³98´t@s6­ U„5“¿_­rt¬f81m¬ 8&$9<C:®9>ºŒ@_·Ž;:}¬ NP>V0R8:À@\Ó {}•-¬f>I]´Œ¬žG¯…@s®°U·Â>V·Â8¼
@^­U0Ñ[¬f®9®9´tÔn’È ¹Ž´Œ~_Áb4ÉU¸&N²tGI”¬ Ít{m¯­>¸ `
ÁR³9ÃH¯&³R¸&¶Œ¬fÑj®9½ ³°ÃËȳ9¹Â¯½0Ô¬f¸&´t²t­&®°¹Ž³R¸´t¹Æ­:­—®°®9¹Â±…½¾²t¹Ž­‹¬fÊ9®°¬f¸ÒÈ=±…¹Â²t´Y®°¯…¸®9²Œ±Ç¬Ò¸&¬fµ˜¯&¹Â«Üž¹Â»m´ŒÉbÁ˸&²Œ²t¬
¹Ž·Â¬:¸l¼5¸Ñ[²Œ¬¬
¸¸­&¶Œ²Œ¬f±ž¯&¬:³9¬6±Ç´t­³nÈ@¸&³9²t¯…Á9®°È¯¸—¹Â³9´t¹Â¶ŒM´®TÑ,¸–t¹Ž¶Œ³R³°¬f´Ã´tÈȱǹ¬¹Â½B½B¸¬ž¬ž¬f´t´t®°­&½­&¹Â¹Â³9³9´tË´t­ ­³R¯&¹Â³9´tÔmñÇÓX³R·Â¶tµY¶ŒÈ¯¢³R¬6¸¯&­+®°¬+Õ¸­&²Œ³9Ñb¬„´Œ¬6³R±Ç­½‹¹Ž¼Íb­¼9±ž¸&É®9¬f®0½·Ž·Â¼9²ŒÉnÃֹ¬fÃÖ¬ž³R®°¯&¯ÄÄ
®9¯±…²Œ¹Æ±ž®9·(­l¸¯&¶t±Ç¸&¶Œ¯¬ ¹Â­¢ÁR¹ŽÊR¬ž´¹Ž|´ —¹ÂÁ9¶Œ¯]¬ *9Ó
¬BÍt¸¯…³ ­l¸+±Ç·Â³n¬ž³9ÊRÑ[¬ž·H¬ž¯…¹Æ®T­:¸±Ç¬@³R´t¹Â´"±Ç¬f³9¯&´Œ¯…Ȭ6¬žÈ˜¯Ë¸&¹Ž³ ¸&²j®R¸±ž²Œ±ž³9¬„½B®°ÑŒºŒ·Â¹Â¹Â·Â­&¹“¸l² ¼˜®7³9ÃЭ&¸Ñb²Œ¬ž¬ Ä
­&±ž¼¹“Íb­l&¢±Y¸¬ž²Œ½2
¬˜½¾­rt81ÃÖ¯8a³9X^½ G%9<;:´ŒNP>V³98:¸˜@š±Ç™‘³n³9GF4Ñ[G¬žLH¯…®T˸¬¹ŽÊR¬È¹Æ­l³9¸´t¹Ž´Œ¬f­fÁRÓ ¶Œ¹Æ­Ì ²
±ž±ž³5³5³R³RÑbÑb¬f¬f¸¯¯®9®°®°­&¸&¸&Ôm¹Â¹ÂÊ9Ê9Ӝ¬,¬
Ì­&­¼¸„¼­l­¸¸¸&²Œ¬ž¬ž½>
¬¸f²Œ¯®°¹Â­¢¸&¬Ò˸&³9³9¯ÁRÔ¾¬Ç˸²Œ¬-¬ž¯-®9¯&¸&¬-³=¹Ž´5Ñ[¸¬ž¬ž¹Æ¯&¯­BÃÖ¬f³9±Ç­¯¸&³9½ ¬f½BÈ­Ñ[³R³9³R½B´t­&·Ž¬f¬Ò¼„ÈwÁR¹Â´=³°·Ž³R“à ±Çºt³n›&®°¯&³9·³RÑ[¸ºb¬ž®R³9¯…­®T¸ÔM­‹¸œ\¹ŽÊR¸&(+²t¬*4®°h1µ˜¸0,{ÓÒ«—³RÎÏÑ»(´ Ä Ó
¸¸&¢³²Œ²Œ¬X®¾¬fь¯&¸¬ž¯¬fÃÖ³9³9®9Ñ[¯½/³R¬9­&ɬf³°¹ÂÈÃд,²Œ±Ç¸&³n¹Â²Œ¬ž³R¯…¬BÑb®°ÃÖ¬f¯…³R¯±…·Ž®°²Œ·Â¸&³T¹Â±f¹ÂËÊ9®°¬:¹Â·°´Œ­¯Át¸&³9¯É(º[¶t¸³°±²Œ¸…¸¬B¶Œ­žÓ¯¸&¬X¬f&¢¯&±Ç½D
²Œ³9¬Ò´b±Çµ˜­&¬ž¬f¯«—±ž´³9»,´t¸&²ŒÈ˜Ë¬X¹Â·Ž·ŽÔn¬f·ÐÊ9´Œ¯&¬ž³T¬ž·ËÃÖ¬ž³°·Ž¯ÄÃ
¬6ÈÁ9¬+¸²t®T¸-¬f®R±…²Y¯&³Rºb³9¸¹Â´Y¸&²Œ¬‹¸&¬6®°½ ²t®9­®°º[³9¶¸-¹“¸…­¸¬f®9½
&
£ w © v y w
x
¹ÆÑtz­˜¯&¬f³5È­&®9ь¹±….„¹Ž²Œ¸&¬±Ç¬6¶Œ­Ð¸&·Ž²Œ®°¸˜´b¬=¸È„³}­&¹Ž¸ÁR²Œ¹Æ´ŒÈ¬ ¹“¬fÍ[ь´R±ž¸¯&®°¹“³RÃÖ´5Ñb¼ ¸0³5¯­ºb¬6¬Ç³ÃÖȄ¬žÈ¯­&¼j¬ž³9´b·Â³°¶±Çø&¬,¹Â¯&³9¬6¸&´t¬6­­¬6±…²Œ®°³9¯…´ŒÃű…¸&¹Æ²,àR¬ž´¶t³R¬f²t´w­˜®gÊ9µ˜®9¬ ´t«—È"®0»(­®°É¹ÂÑÁ°¹Ž¸ÄÄ
´tÑt¹“·ŽÍb¹Æ±ž±f®°®°¸&´5¹Â³9¸‹´˜ÈȬfÁ9³R¯½B¬ž¬¾®9¹Ž´³°Ãӗ­&Ì Ñb¬6Ít±Ç¯…¹Æ®°­¸—·Â¹ŽÜ6³9®Tºt¸&­&¹Â¬ž³9¯´,ÊT®TÃÖ¸³R¹Ž¯+³R´¸&²Œ¹Æ¬
­—¸&¹Ž´5²t¸&®°¬f¸f´tÉmÈ®„¬6ÈPµ˜®°«—Ñ» Ä
±f­&¹Â®°´Œ´ŒÁ9´t·Â¬„³°¸¯³9º[º[¬¾³°­¸Ò¹Â½B±f®9ь­&·Â¬
¼Y®°¯&¬f´tÁRÈ,®°¯…¸È²Œ¬6¬
ÈYÑt®9¯&­-³RÑb®
³5­ÁR¬6¬žÈ,´Œ¬f®9¯ÑŒ®9ь·Ž¹Â¯Üf³R®°®R¸&¹Â±…³9²Œ´Y¬f­+³°Ãдt¬ž¸&²Œ¬fÈ ¬
¸®9³¾ºb³Rºb¶¬‹¸0ь¸&¯&²Œ¬6¬±Ç¹Æ­¬ž¬f´n·ŽÊn¼¹Ž¯±…³9²t´t®°½0¯…®9¬f±Ç´5¸&¸0¬ž¯®°¹Âܞ´b¬fȖÈ
¹Â¹Ž´–´=¸&¸¬f¬ž¯&¯½¾½¾­Ò­¢³°³°ÃXî9¸­²Œ­¬=¶t½0¹Ž´5ь¸¬ž¸&¯¹Â³9´t´t®°­ ·
u
v
0
72
~€4UNG%I{m>I‚G%@^U>V8@sU
rt8IKIƒJM@€> E ;NR>V8@
†€G;I‡rt8IqXs8U>NR>V8@
~€4UNG%I‰ˆŠ9 E [Z>NG E NRJM9<G
†€Ga;:I~_>Œ1G
$¹ÂÁ9¶t¯&¬]*?5Hµ˜«»Y&®TÕ³R´Œ³9½Ò¼
¸½¾®9²Œ´n®°¬ž¼¸&¹Â¯:¬fÔn­f´Œ¸Óž¬f³TˆŠ®9˽¥…ž·Â¬f;Ƚ¾9<ÁRG¬X®T¯&¸³°³R¬f틺b¸³9²ŒË¸¬­¢²Œ³°¹Ž²t·Â¸&¬˜®g²tÊR¬žŸ¬-¯@_­&¯&;:³9³R…ž½Bºb³9;¬ ¸9<­G:Ôn¹Â¹Ž¯&´+´t³Rȸºb²Œ³9³°¬¸ÃM­:­&Ôn¼®R´Œ­l±³T¸¸‹¬žË½¾Ë·Â¬f­f¹ŽÈ¸& Ó ÁR²Œ&¢¬³R²Œ¶³9¬¸ à ьȱ…²t¬f·Æ®g®9Ñb¼9¯¬f¬6®R´tÈ(±È¸ÓP¹Â¬ž´Œ¯»nÁ=¹ŽÜfÑb¬¢³9¬6´@±Ç¸&¹Ž²ŒÍb¸&¬ ²Œ±ž®9µ˜¬¾·Ž·ÂË¢«—¼9É©®g»„¼=~€¹ŽNR´¸&9<²t8ˬ0@Z²Œ= ·Â¹Â¬f±…®RE ²GÈ@^¬žÈNR¯…¬f9<­&±ž;:²Œ¹ÂLO­&¹Ž>Ñ@¹ÂŒ1³9;´t³°NR>V­Ã8®°¸&@j²Œ¯¬¬¾¹Â­Ò¸ÁR®9¶t¯&Ô9­&³R¬f¬f¶Œ´„ȖѺn¸&¹Æ¼³ ­
¶b8:rt­9<81¬6C8È=>9<@DC:ÃÖ;>³9NR@_¯>V8;:±Ç@
NP>V³n8:®R³9@]­Ñ[5¬ž™ ¯…›G%®T±ÇF¸³nG%¹Ž³9L“³RÉ°Ñ[´¹Â­M¬žÓq¯…±ž—t®T³9¸³9´t¹Ž·Â³R±ž·Ž³T´„¬žË¯¹Ž´Œ¹Â´=´Œ¬fÈ+Á¡Ë˲Œ(*6¹Æ¹Ž±…j4¸&²,M²‹Ë¸&¸&²Œ²Œ¬Ò¬:¬¢±Ç½B®R³R±´t¬f¸&­&±…¹Â¹Â³9²bÈ´b®°¬f­´Œ¯“¹ÆÑ[­rt½¾¬ž8¯&­Ä` ¸¸½B²Œ²Œ¹Æ¬˜¬­&­&·Â­&¹Ž¬f³R®9®9´P½BȌ¬ž¬È¯Ð¶ŒÑt±f¯…¯&®°®T¬ž´„¸&Ä{¹Âȳ9Ȭž´¹ÂÍt´tÉ$´Œ®°Ë½B¬6Èw²Œ¹Æ¹Â±ž·Â·Â®9¬¾¬f·Ž®R·Â¹Ž¼ÒÈ´–¬ž±…¯„®’²t®9®9…ž´ŒÁ9GÁ9¬ž; ¬´5e ¸„ÈŒLO ¶ŒÈ¯&E¶Œ¹ÂG´Œ¯@^¹ÂÁ ´ŒNR9<Á,¸;:²ŒLO¸&¬>²ŒŒ1½BGa¬YC„¹Â¬f­µ˜´R­&¹Ž¸³R«—¹Ž¯´» ¬ Ó
ÃÖ®R³R±¯&¸½B¹Ž³R¬6´tȖ­-¬ÇºnÕ¼P¬f±ž¬f¶®R¸&±…¬6²wȯºn³9¼=º[³°¸&¸&²t¹Æ±¬0³9®°¸&Á9²Œ¬f¬f´5¯+¸Ò¯&¸…³R®°ºbÔ9³9¬¸&¹Æ¹Â±Ò´R¸®9³,Á9¬f®R´R±ž¸…±ž­—³9¶Œ¹Â´j´5¸0­¶t¸&²Œ±…² ¬ ̷³9´ŒÁ
˹“¸²Y¸&²Œ¬0±Ç·Æ®9­­¹ŽÍb±f®T¸&¹Â³9´=¹Â´5¸&¯³nȌ¶t±Ç¬6ȸ&³
±…²t®9¯®R±Ä
®²t˜¹ŽÁR˲®gÄ]¼YÑ[¬ž¸¯&²tÃÖ³9®T¸Ò¯½¾¸&²Œ®°´t¬
±žË¬—²Œ³R³9Ñb·Â¬f¬„¯®°¬ž¸&´b¹ÂȌ³9­:^´ œt¶ŒÓ ÑP—º[³T¬žË¹Â´Œ¬fÊ9Á¬ž¯6®É°±Ç¸²Œ³9²t¬ž¯¬ž¬ ¯¬ž®°´5¯¸‹¬-®°È´t¹ŽÃÅÈ Ä ¸­&¬ž¼¯­l¹Ž¸Üf¬ž¬Ò½ ¸²ŒÃÖ¬0¬6®TÃÖ¸³9¶Œ¯¯&½ ¬6­0³°¸&òt¸&®°¬6¸
®°½ ®°¯¬Y˯&³9¬f¯·ŽÔm¬fÉmÊT®°¸&²Œ´5¬f¸B¯&¬¾¸&³j®9¯&¸&¬B²t¬=®¸&´5¬f¶t®9½‹½ º[ˬž¯+³9¯³°Ô Ã
ÃÖ³9¬fׯ&¬f¸´5²Œ¸¢¬=Ë¢³°®g¸&²t¼¬ž­¢¯B®Ò½B¯³9¬žº[½‹³°º[¸¬ž±ž¯…®9­0´
³°¸…î°¸&ÔR²Œ¬-¬¹Â´5¸&¸&¬6³¾®°½=®9±foÓ ±Ç³R&¢¶Œ²Œ´5¬¸¸¶t²Œ´t¬:È®9¬ž±Ç¯·Â¸&¼5¹Â³9¹Â´Œ´tÁ ­ ¸®9¬žºŒ½¹Ž·Â¹Ž¸lȼҹ½B³9Ã[¬ž´t¸²Œ­&¹Ž¬³R´t­&¼­f­l¨Ó ¸&¢¬ž½ú²t¬ž¸&¼B²t¹Ž®°´b¸±Ç˷¶t¬È?¬²t5X®gÊR±ž¬³9½BÁ9¯½‹³9¶Œ¶tÑ[´Œ¬f¹Â±fÈ0®T¸¹Â´¾¹Ž³R¸&´²tÉ9¬¸¬f­&®9¼n½ ­Ä
ÃÖ®—¬6­®T¬ž¸¸¶Œ¯&³°¬ ï¹Æ¶Œ­·Ž¸&¬6²Œ­(¬ ¸²tE ®T81¸$89<¸&C:²Œ>¬@_¯&;:³RNP>Vºb8:³9@!¸­(XD½Ò9<8¶tN­l8 ¸$E 8:ÃÖ³9Lîɷ·Ž¸&³T²bË,®T¸¹Â´:¹Æ­³9ȯ…ȬÇÍt¬ž¯´t¸&¬f³È=¹Â´®9­Ä ±ž³9½BÙ³nÑb³9³5Ñ[­¹Ž¬ž¸&¯…¹Â®T³9¸&´¹ÂɌ³9´–­¼­®9¸&½B¬ž½ ³9´Œ®°Á@¯…±…¯&²Œ³Rºb¹“¸³9¬f¸±Ç­Ò¸&¶Œ¹Â¯­Ò¬+³°®°ÃÅ´t¸¬žÈ
´w¸&¬6³R®°º½ ¸®9¹Ž­&´t¹Â¬fܞ¬9È–Ó ºn¼P®
¸Ë¬ž¬¯…®9±ž±Ç®9¸X´ÒËÃÖ¶t¹“¸¯²„¸²Œ¬f¬ž®R¯Ð±…²0±ž·Â®R³9­&¸&­&²Œ¹Ž¬fÃÖ¼+¯X¹Â¸&´B²Œ¬¸&²Œ±ž¬³n³9¬f¯…´nÈÊ5¹Ž¹Â´b¯³9®T¸&´Œ¬6½BÈÒ¬žµ´5¸6«—Ó »0&¢ºt²Œ®9¬f¯&­&¬ž¬fÃÖÈÒ³9¯³9¬9´ É ¬ž±žÕ³9½B±…²b½‹®°´Œ¶ŒÁR´t¬+¹Â±f½0®T¸&¬6¹Â³9­&­´@®°ÁR½B¬f­¬f±…®9²t½0®9´Œ³R¹Â´Œ­&Á„½ ¬6¸&®9²b±…®T²=¸:³°®°¸·Â²Œ·Ž³T¬ž¯6Ë©Ó ­——Œ¸&³R²t¯¬0®„¯³9Ⱥ[¬ž³°¸¸…®9­¹Ž·Â¬f¸&ȳ
¸¢4²Œ£ ¬¢G¸l; ¼nÑb¬³986Ã[8:9<±žC³5>³R@D¯;ȌNR¹Ž>V´t8®°@+¸&¹Â®9³9­´Ò®ÑŒÃÖ¯&³R³9¯&¸&½"³±Ç³9³RÃt·_±ÇÓ$³nØ@³9¯…¬È±ž¹Â´t³9®T´t¸­&¹Ž¹Æ³RÈ´ ¬ž¯¸&²t~€®°NR9<¸8:¯@A¬Ç= Ä ®9±ž³9´t½B®°·Â½‹¼­¶Œ¹Æ­ ´t¹Â³°±fâ®T¸&¸&¹Â²t³9¬„´,ÊT­®°¬f¯¥¬ ¹Ž³R(¶t*6j­-,]Ó0¸&¬6Ø@±…²Œ¬B´Œ¹Æ¯±ž³9®°¶Œ·ÐÁRь²Œ¯&·Â³R¼˜ºŒÈ·Â¬ž¹Â½¾­¸&­+¹Â´Œ¯ÁR¬ž¶Œ·Æ®T¹Â­&¸&²@¬6È@¸l˸&³³
·ÃÖ¹³R¬f¶Œ­„¯&¸&¿_²=Èe³n¤·Ž¬f¬fE ­+Ê9¬f´Œ·(³9³°¸:ÃM¯&³9¬f¶Œ·Ž¼Œ¯À-²t³R¹Ž¬f´j¯®9®Y¯±…±ž²Œ³n¹Æ±ž³9®9¯…È·[¹Ž­´b¸&¯®T¶t¸&¹Â±Ç³9¸&´,¶Œ¯ÑŒ¬+¯¹Â³°­—¸³n±Ç³9±ž³9´b·]±Ç“Ó ¬ž¯&¢´Œ²Œ¬6È ¬ ˢȹ ®g'm¼¬ž¯¸¬ž²Œ´5¬¸¯&¸l³R¼nºbÑ[³9¬f¸­­Ò³°¬ÇÃ0Ռ±…±Ç²t³9®°½B´t½ÒÁ9¬„¶Œ´Œ¹Â´¹Æ±žÃÖ³9®T¸¯¹Ž½¾³R´E®T¸¹ŽÈ³R¬f´ÑbÓ,¬f´t̗È·Ž¹Â·´ŒÁ7¸²Œ¬³9´×±ž³9¸½0²Œ¬ Ä
˵¹Ž«—¸&²‹»m¸&:Ó ²Œ&¢¬²ŒË¢¬ ®g‹ ¼—9P¸=A²Œ;¬@^Ȍ>Œ1¬f;:±ÇNP¹Æ>V­&8:¹Ž³R@]´‹™ ­G%¼F­G%¸&L6¬f¹Â½ ´R¸¯&¹Æ­³È¯&¬6¶b®°±Ç·Â¬f¹ŽÜf­M¬f® È È˹ƭl¹Ž¸&¸²Œ¹Ž´b¹Â´Ò±¸&¸&¹Â³9²Œ´ ¬ ½Ò±ž³9¶Œ½B´Œ½‹¹Æ±ž¶Œ®T¸´t¹Ž¹Â´t±fÁ®T¸&­¹Â³9¼´­¸&}Ó ¬f½Bz—¹Â­‹¯¬f±ž±®°¸:´P±ž¶b³9­½B¬„½‹¬f¶Œ¹“¸´t²Œ¹Â¬ž±f¯0®T¸&ȹ³9¹Â¯&´@¬6±½¾¸‹®°³RÔ9¯‹¬6­—¹Â´t¶bÈ­¹Â¯&¬B¬6±³°¸ Ã
º[(*1¬Ç04¸l,]ËÓ¬žÎϬf´B´,Ñt±ž®°¬ž¯&´5¸&¹Æ¸&±Ç¯…¶Œ®°·Æ·Â®°¹Âܞ¯X¬f¥®È@rt®°ÑŒG@^ÑtNR¯&9<³5;®9LO±…>Œ1²ŒG¬6C­—­&®°¼´b­l¸È@¬ž½;ȹƭ²t¸&®g¯ÊR¹ŽºŒ¬¶Œ® ¸&¬f¯È,³9º[³R³°´Œ¸¬f¹Â±­ ¯­&³9¬f±Ç½B¸¬±ž³9²t½B®9¯½‹ÈË¢¶t´Œ®°¯¹Â±f¬¢®T³9¸´¾¹Ž³R´º[³R½¾®9¯®°È¾ÔR¬fÈ­¬6ȶt¹Æ­&±ž¬+®T³°¸¬fÃMȾ­l¸È¹ŽÁR¬f½0Ên¹Â±ž¬f¬9¯&ÁRÉ9¼ ˲t¹Ž·Â¬—¹Â´tȹŽÄ
®³99ÃÐÁ9¬f¸´R²Œ¸¬¾¿¡³°·Ž¬6¸®9²ŒÈ¬ž¬f¯+¯…¯À$³9¸º[²t³°®T¸…¸Ð­¹ÆH­H¸&¹Â²Œ´„¬B±…·Â²t¬f®°®9¯ÈŒÁ9¬ž¬¢¯+³9¹ÆÃ[­ ³R¹Ž¯&´nÁ5Ê9®°³R´Œ·ŽÊR¹Âܞ¬f¹ÂÈ´ŒÁ-¹Ž´@¸&²Œ¸&¬²Œ¬„˳9Ȍ¯¬ÇÔ Ä ¹Â´=̗¸l˱f±Ç³B³9½¾¯…È®°¹Â´Œ¹Â´YÁ¢±Ç¸&·Æ³¢®9¸­¬f­®9¬6­ž½ Ét²Œ±ž³9¬Ç¸½B¬ž¯Ñb³9³5Á9­¬f¹Ž¸&´Œ¹Â¬ž³9³R´:¶tµ­¢®°«—´t» È=±ž²t®9³9´-½Bº[³9¬ÁRȌ¬ž¹Ž´ŒÊn¬f¹Æȳ9¬f¶tÈ ­
±½Bž¹Â­&¬f¹Â½‹³9´tº[®9¬ž·Ð¯…­Xь®9¯³±Ç¸Ð±Ç¬f®9­±ž­+±ž³9ÃÖ³R¯…¯:ȹ¸´Œ²ŒÁ-¬¸&³Ë²t¸&²t³9¬—·Â¬¾È¸&¹Â¯&¬6¬6®°±½=¸&¹ÂÉH³9´bË­H²Œ³°¹Â·ŽÃ[¬
¸&²Œ¸&²Œ¬—¬·Ž¬6³°®9¸&Ȳt¬f¬ž¯f¯ Ó Ñ[¿_­³R¬f­&¬+¬fÈ,ÃÖ³9¯ºn¼Y¬žÕŒ¸&®°¬6½B®°½¥ÑŒ·Â“¬ ½0( ¬fjZ½‹k6,Öº[ÀQÓ¬ž¯…­ ³R¸&½0²b®T³R¸:Á9¬f²t´Œ®g¬žÊR³R¬0¶t¬ž­¢ÕŒ¸&®9¬6±®°¸½¾·Ž¼˜­¸&®9²Œ¯&¬„¬‹­±ž®°³9½B½0¬ Ä
²t¬f®9®°¯½=ȌËÉn®9¸²Œ¯&¬X¬ ®9¯&´t³RÈ:ºb³9±Ç¸&³R¹Æ´5±¸&®9¯³9Á9·R¬ž´5­&³°¸…­ÃŸlˢȹ ®°'m¯¬ž¬9¯É6ˬž¹Ž¸&²Œ²t¹Â·Ž¬ž¬¯¢¹Ž´Ò¹Ž´²Œ¸&¬Ç²t¸¬-¬ž¯²b³9®°ÁR¯…¬žÈ´ŒË¬f³9®9¶t¯&¬­
¦ ´˜¸&²Œ¬Ò³°¸&²t¬ž¯—²b®°´tȘ®¥{m>§UNR9>VT%JMNGC:­&¼­l¸¬ž½ ¹Â­-±Ç³R½BÑb³5­¬6È
¸
¸ºn²Œ¼ ¬
¯³9Ⱥ[¬6³°±Ç¹Æ¸&­¹Æ¹Â±X³9®9´bÁ9®°¬ž·H´5ь¸…¯­(³˱DzŒ¬f­¹Æ±…­ ²‹Ë®°¹Ž¯¸&¬Ð²P±Ç¯³R¬f½B­&Ñbь¬6·Ž¬ž±¸&¸+¬f¸&·Ž¼+³Y®°¬6¶Œ®9¸&±…³9²j´t³9³°½B¸²Œ³9¬ž¶t1¯ HM­¹Ž¹Ž´´ ȬfÊn»n¹Â¼±ž­l¬f¸­¢¬ž½"³9¯®°¹Â¯…´±…²Œ¸&²Œ¹Ž¸&¬:¬f±Ç­&¸&³°¶ŒÃŸl¯Ë¢¬H¹Æ®°­¯¬®°´+±ž³9¹Â½B´5¸&Ñb¯³R³9¯·(¸…Ñt®°¯&´5³¸±ÇÃÖ¬6¬6È®T¶Œ¸&¶t¯¬f¯&­f¬MÓ ÃÖ³R¯$±Ç·Æ®9­Ä
¸­&²Œ¹ŽÍb¹Æ­Ð±ž®°±Ç¸&·Æ¹Â®9³9­´B­M³°³°Ã(ñǭ&¼¬f´R­l¸¸¬ž¯½¾®9·Ž¹Â­Xܞ®¬6Èҷ¬f­&®9¼Ȍ­l¬ž¸¬ž¯X½¾È³n­X¬f±f­X®°´Œ´B³°ºb¸Ð¬¬žÃÖÕ¶Œ¹Â­¯¸f¸²Œ¨Ó ¬ž&¢¯Ð²Œ¯&¬—¬žÍt±ž´Œ·Â®R¬f­lÈ Ä ­&¹“ÃÖ¼n¹Â´ŒÁ„µ˜«—»(ÓbÎÏ´¸&²t¹Â­—˳R¯&Ôˬ+˹·Ž·$¯&¬žÃÖ¬ž¯—®9·ŽË¢®g¼­¸&³B¸²Œ¬
®9¯±…²Œ¹Ž¸&¬6±¸¶Œ¯&¬H³9Ãn¸&²Œ¬X˲t³9·Â¬Ðµ˜«—» ®9´tÈ ´Œ³°¸¸&³¸²Œ¬Ð®°¯…±…²Œ¹Ž¸&¬6±Ä
—
3
73
ªM«Z«S¬M­ž®°¯D±%²q®
¶
^
Ç ´?È^´?¬sÆÉÄSÊ_»ÃMÆ4²‘»
ËAÈ^´S¬sÆ2¶s«SÃ̪Z«Z«S¬M­®
Í ÆZ´SÎ^Â:ʘªM«Z«S¬M­ž®
ÄSÃZ¬s«?¶Zµ^Â?ÊɪM«A«S¬M­®
ÄSÃZ¬s«?¶Zµ^Â?ÊɪMÆ:¶MÃZ¬®
ÄSÃZ¬s«?¶Zµ^Â?ÊɪM«A«S¬M­®
Í ÆZ´SÎsÂ?ʘªMÆ?¶MÃA¬®
ÄSÃZ¬s«?¶Zµ^Â?ÊɪM«A«S¬M­®
¯D±S»ÃZ¬D±1ÏAºMÃsÆA­
³s«S¬s´Sµ_±1¶Mµ
(*4k6,
΃
( 0?3 ^0:g,
( 3S01,
·^«?¸˜¹AºD»4¼½±1¶Zµ
(*6l *1n *k6,
( 0?4,
( 0?h €0?j €0Ak6,
É É
nÉ É
tÉ
( l s3Ak6,
¾Z¿^À
ÁZ¸S¹€ÂA«S¬s´?ÃD±:«?¶
ÄZ«^ÅZÅ:ÆS¬
( 0M* €0?01,
( 0:l €0?n ^3S4,
( 3s*%,
9É
É É
( 3?3É^3?gŒÉ^3Ah1,
( 3Sl4,
( 3?j,
ÉÉ ŒÉÉ °É
( gAhnÉsgAjÉsgMk6,
( hM*°É_h?01,
( h:gtÉ^h?hɀh:jÉ_hSk6,
( jA0nÉ^j?3ŒÉ^j:g,
( gSlÉsgAnɀh?4,
( h:34,
( h?lŒÉ^h:nŒÉ^j?ɀjs*,
( jAhnÉ^j?j,
&®°ºŒ·Â¬}0M5X»n¶Œ½B½¾®°¯¼„³9ÃM±ž·Â®R­&­&¹“Í[±ž®T¸¹Ž³R´
¸¶Œ¯¬-³9Ã$¸²Œ¬+­¹Â´ŒÁR·Ž¬ ¯³9º[³°¸&¹Æ± ®°ÁR¬ž´5¸fÓHÌ"ь¯¬f±ž¹Â­&¬-±…²b®°¯…®9±¸¬ž¯¹“Ä ­&¶ŒÁ9ÁR¬f­¸­B¸²t®T¸=˲Œ¹Â·Ž¬@˹“¸² º[³°¸&²ÕŸ @_;:…ž;9<Gj®9´tÈֈŠ…ž;:9<G
Ü6¹Æ­®T¸ÑŒ¹Ž³R¯´0¬f­&³9¬ž´5ø&µ˜¬6È׫—»0¹Ž´Ë(¹Ž*6¸&²¾4,{Ó?¯¬f­&Ø@Ñb¬6¬P±¸H±ž¸&³9³‹´t­&¯&¹Â¬6Ȍ®9¬ž±¯=¸¹ŽÊR®P¬¢¸&³9¬6¯Ð®°½È¬f·Ž¹Â®9º[¯¬ž±…¯…²Œ®T¹Ž¸&¸&¹Â¬6Ê9±¬ Ä º[@D¬8N ³9ºŒE 81¸®°8¹Â9<´ŒC:¬6>@_ÈB;¹ŽN´¾GCw¸²Œ­&¬—¼¬ž­l¸Õ¬ž¬f½¾±Ç¶Œ­B¸&¹ÂÊ9³9¬f´B¯&¼j³°Ã¹Â´5­&¹Ž¸&½B¬ž¯ÑŒ¬f·Â­¬—¸&¹Â®9´Œ´tÁ,ÈB¯&·Ž¬6³5­­¶t¬f·“·Ž¸…¼B­0±Ç±f³R®°¶´ Ä
¸±ž¶Œ³9¯Ñ[¬Ò¬B®9Ë­¹“¸È²@¬ž·Â¹Â¸ºb²Œ¬f¬B¯®°¬ž¸&´n¹ÂÊnÊ9¹Ž¬:¯³9¹ŽÃдt½0¹Ž¸-¬f®°´5·Â¸·Â³T®°Ë·H±…­²t¸&®9²t´Œ¬:Á9¸&¬6¬6­-®°º5½ ¼½Bь¯&¬ž³T½ÒÊnºb¹ÆȬf¹Â¯´Œ­—Á¸&³® ь±ž³9·Â¬f½BÈ0ь¸…·Â®9¬Ç­&ՋÔÈ­žÉ9³R­&½B¶t®9±…¹Ž²¾´b­®9­ £ בG8:; 9<;LO=?!>@Art=0818³R9<¯C:Øq>@_86;W}NGÙC0JZ®°U<´t[Z>ȓ@Z=°~€ÉRNR9<ÃÖ³98¯X@Z=]½Brt³R¯&8¬ `
­®9¸&·Ž¯…·M®T¸¸¬f¬ž®9ÁR½ ¼¸º[²t¬ž®T²t¸+®gÊn±ž®9¹Â³9´˜¶Œ¯…º[­ž¬BÓ ®R¦ È´@³9ь¸&¸&²Œ¬f¬¾ÈY³9¸&¸&³²Œ¯&¬f¬f¯+³9²t¯ÁR®°®°´t´tȹŽÉÜf¬+¹Ž´j¸&²Œ¯¬f¬0®R³T±Ê9¸&¹Â¬fÊ9¯¬ Ä ­&8:¶t9<±…C²>@D®R;/­NGaC~D8 ®9ььG%9B¯³RË®R±…²Œ²Œ¬f¬f¯&­m¬‹®°¸&e ¯²Œ¬H¬0ь±Ç¯³9¬ÇÃÖ½B¬f¯&Ñ[¯¬Ç¬f¸È(¹Ž´Œ Ó Á—Œ³9¬ž¯´n¬ÇÊnՌ¹Â¯&®°³R½B´Œ½Bь·Â¬ž¬M´5¸¸—®R­¯Ô¬Ç­Ä
¸¸¬f²Œ®9¬:½¬f´5®°Ên¯…¹Â±…¯&²Œ³R¹“´Œ¸½B¬f±Ç¬ž¸&´5¶Œ¸¯®9¬f·(­Ð±…¬f²t®9®°±…´Œ²ÁR¯&¬f³R­¢ºbºn³9¼
¸¹ÂÑt´„¶Œ¸¯²Œ­&¶Œ¬-¹Â´Œ¸&¬fÁ„®9½®°´=±Ç¹Â³R´tÑbȬ6¹ÂÊn­Ð¹ÂËȶb¹Ž¸&®°² · ¶Œà5¶Œ´t¹Âь¯&¯&¬6¬6­ È®
¹Æ±¸ÊR®9¬žEºŒ¯E ¼Y·Â¬=²Œ®9¹Ž´tÁR²@Èi¬¶Œ.„´t±Ç±ž¬f®9¯±ž¸…¼9®°É(¹Â´w³R¯m¬f´5ÚtÊnW¹ÂXD¯&L³R8´Œ9a½B;NR¬ž>V8´5@w¸¾¯&ˬ6²tà5¬ž¶Œ¯¹Ž¬‹¯¬f¸­0²Œ®¬
®9±ž³9ь½Bь¯ÑŒ³R·Â®R¹Â­&±…²²P¸&¸&²Œ³j¬+¯ÁR¬ž³R³9®°¯·(ÁR®R®9´Œ­&­&¹Â¹ŽÜžÁR¬
´Œ¬f¹Ž¸È„­B¸&³T³¾Ë¹Ž´i¸fÓt¸&¢®R­²ŒÔP¬+½¾¹Â´7®9³9¹Ž´=¯…ÈȬž¹¯0'[¬f¸¯&³j¬f´t®R±Ç±¬ Ä ÊRÑt¬ž®9¯ºŒ¼
¹Â·Ž¹Ž²Œ¸&¹Â¹ÂÁ9¬f²˜­B®°¯³9¯¬
ºt¶t¯­l¬f¸à5´Œ¶Œ¬f¹Â­¯­f¬fÉtÈ(½BӖ³9̗¯¬:·“¸²Œ±Ç³R³9½0¶ŒÁRÑt²}·Ž¬žÕY®9½B±ž³n³9³9´Œ¯…ÁȹŽ¸&´b²Œ®T¬f¸&¹Â~€³9NR´Y9<8±f@Z®T= Ä
º[¯¬ž¬Ç·Â¸l¹Âˬf­¢¬ž¬f³9´@´È¸&²t¬f¬:·Ž¹Âº[Ȭž+¹ ¯…'m®T¬ž¸&¯¹Â¬žÊ9´5¬0¸®°½B´tȳȌ¯&®°¬6·Â®9¹Ž¸&±Ç¹Â¸&¬f¹Â­Ê9¬:®°Ñt¸ÑŒ¬f®9·Ž¹Â½ ¬fÈ®°ºn¯…¼¾±…²Œ¸¹“²Œ¸¬f¬:±Çµ˜¸&¶Œ«—¯¬f» ­ ²trt®g81ÊR8¬Ð9<C:º[>¬ž@_¬f;´ÒNG¬žC:Õ5¸®9¬žÑŒ´tь­&¯¹Â³RÊ9¬ž®9·Â±…¼ ²t¬f¶t­f­&Éf¬f®°È(·ÂÉ9·°®¸²Œ¸&¬Ð¯¬žÑ[´t³Rȋ­­&¸&¹Ž³TºŒË¢·Â¬X®°³R¯…ÈŒ¯&Á5­®°¸&´Œ²Œ¹Â¬Üf®°È¸&¬f¹ÂÊ9³9¬ž´t·Ž­Ä
¸¬f³¯®°¯¸&¬f¹ÂÊ9±ž¬‹³TÊ9µ¬f¯«—ÃÖ»˜¯&³R®¾½/·Â³9®°´t´Á„¶t¸&´Œ¬fь¯&½¯¬fÈь¹Æ±·Æ®°¸´¬fȹŽ´n­&Ê9¹Ž¸&³R¶t·ŽÊn®°¸&¹Â´Œ¹Â³9Á¾´¸5²Œ¹Â¬Ò´@¶b®­&®9ÈÁ9¬f¬:·Ž¹Âº³9Äà ³R®9ьь½Bь¯¬ž³R´5®9¸$±…²t³9ìf­ÈŒ¹Â­­¬f¸&¬ž¯½"¹Žºt¶¸&³ ¸&¬6º[È+¬³9ь´Œ¯¬6¬Ç­ÃÖ¬ž¹Æ­¯¯¯…¬f®TÈ:¸²Œº[¬ž¬f¯±f±Ç®°·Â¶t¬f­&®9¬¢¯fӑ®°¯z¬¹Æ­lÁR¸¬ž¯&´Œ¹ÂºŒ¬f¯¶®9¸·Ž¬f·Âȼ
®9®·Ž·Á9·Â¸&³9²tºt¬=®9·Ð®gÊTÁR®9³R¹Ž®°·Æ®°·ºŒ¹Â·Â­Ò¬„ÑŒ¯¯¬f³T­&Ên³9¹Â¶ŒÈ¯…¬6±ÇȽ¬6HЭ+¹Â¸´w³,®±ž³9¯&·Â¬6·Ž¬6®9±±¸&¸¹Â¹ŽÊ9ÊR¬f¬
·Ž¼Pµ˜®R«—±ž±Ç»w³R½B®˜ÑŒÑŒ·Ž·Æ¹Æ®°­&´² ½B¹Â´ŒÁt³9¯Ó i¬ –t¬žÕn¹ÂºŒ·Â¬9É5¯&³RºŒ¶t­¸®°´bÈB·Ž¬6­&­±Ç³9½Bь¶Œ¸®T¸¹Ž³R´t®°·bȬž½¾®°´bÈnÄ
¸¯³
³9º[±Ç³°³R¸Ñb¹Â± ¬‹®9ËÁ9¹“¬ž¸´5²Y¸¸&ȲŒ¹Â¬Ò¯&¬6ь±¯¸&³9·Â¼
ºŒ·Â¹Â¬ž´n½ Ê9³R®T·ŽÊR¸—¬f²bȄ®°Ë´t¹ŽÈY¸&²=¹Æ­¹Ž¸fÑtÓ ¯&³TÊn¹ÆȬfȺn¼¸&²Œ¬ ÎÏ´w³R¯È¬f¯‹¸³,®9´t®°·Â¼nܞ¬
¸&²Œ¬YȬfÊ9¬ž·Â³9Ñt½0¬f´5¸0³9퐳R½B¬„ÃÖ¬f®°Ä
¸­&¶Œ³9½B¯&¬6¬—­0²t³°¹Â퐸&µ˜³RÁ9«¯…®°»w½¾È­H¶t¹Â¯&´
¹Â´ŒËÁ˜²Œ¸¹Æ²Œ±…²„¬=˷®R¬—­l¸B²t®g¼9ÊR¬6¬®°¯…Á9­ž¯ÉH³9˶ŒÑ[¬¬f²tȾ®g¸&Ê9²Œ¬¬-È˯³R®g¯&ËÔ´ ­
—¹Ž´b®°·Â·Ž¼RÉm¸&²Œ¬0¸&¬6®°½ ­&¹Âܞ¬0¹Â­-º[¬f±Ç³R½B¹Ž´ŒÁ=®¯&¬f·Ž¬fÊg®9´5¸-¹Æ­&­&¶Œ¬
¹Â­´„±ž®9µ˜·Ž¬«—µ»„«—È»|¬žÊR( ¬ž3·Â,{³9ÓMь«—½B®T¬ž¸&´5²t¸6¬žÉR¯X®°¸´b²tȾ®°´„­¬f®‹Ê9¬žàR¯…¶b®°®°·´5˸&³9¹Ž¸¯®°Ô¸&­X¹ÂÊ9®9¬Èt½Bȯ&¬f¬6®R­&­­M¶t·Æ¯&®°¬¯Á9³9¬ à ®R±ž±Ç³R¯ÈŒ¹Ž´ŒÁ:¸&³B­³R½0¬ ­&Ñb¬6±Ç¹ŽÍb±ÃÖ¬6®T¸¶Œ¯&¬6­Ð³TÊR¬ž¯¸&²Œ¬-ь¶ŒºtºŒ·Ž¹Æ±ž®°Ä
*6n?nZkÒ¸&³Û0:SS0Ó©&¢²t¬f­&¬‹²Œ¹Æ­l¸³9ÁR¯®9½B­—Á9¹ÂÊ9¬
¸¸²Œ²t¬B®T¸-­¹Â¬ÇܞÕ¬Òь³°·Â¹ÂÃH±ž¹“¸&¸²t·Ž¼=¬0±Çµ˜³R´t«­&»Y¹ÂÈˬf¯¬Ò®RÈ­—¹Æ­l®„¸¹Ž´tȬ6Á9­¶Œ¹Â¹ÆÁ9­´˜²˜±…¸&²t²Œ³9³5¹Æ­±Ç¬0¬ ®°¸&ь²tь¬‹¯³9³RÑt®RÑb±…²Œ³R¬f¯­Ä ¸³R¹Ž´Œ³R·Ž´˜¼:¼9®-¬6à5®°¶t¯…­®°·ÂÃÖ¹Ž¯¸³9®T½‡
¸
Ž
¹
R
Ê
Ð
¬
¹Â´t¼È¸¹Æ®9±ž®°Ô9¸&¬:¹Â³9¹Â´5´0¸&³°³ÃŒ®R¸±ž²Œ±ž¬³9¸&¶Œ¯´5¬ž¸—´tȋ³R´Œ³°·ŽÃt¼=¸²Œ®¾¬¢Ñ[Ñt³9®°¯&Ñ[¸&¹Â¬ž³9¯…´ ­
R
³
˜
´
˜
µ
—
«
m
»
(
É
&
­
Ž
¹
t
´
ž
±
+
¬
&
¸
t
²
ž
¬
¸¸¶Œ²Œ´Œ³R­&¹Ž¸l¬¼w¸&²t¸³–®°¸
ÈȬf®9³P·´ŒË³9¹Ž¸
¸&²×®9Ȍ®jȯ·Â®9¬f¯&­ÁR­0¬=¬ÇÕ´5ь¶t·Â½‹¹Â±žº[¹“¸¬ž·Ž¯„¼w³9¸&Ã-²Œ¬¯³9ьº[¯&³°³R¸…ºŒ­ž·Âɬž½ÃÖ¯&³R½³9à ³9ø²Œ¬
²t¹ŽÁR²P´5¶t½‹º[¬ž¯‹³9Ã˳R¯&Ô­ ³9´–µ«—»mӨܬžÊR¬ž¯&¸&²Œ¬f·Ž¬6­&­fÉ
¸­&²Œ¶Œ¬ºt­Ñt¸®9®9Ñb´R¬f¸¹Â¯®9­M·¸±Ç²t³T®TÊR¸¬ž²t¯…®°®gÁ9Ê9¬¬¢³°º[재ž¸&²Œ´„¬‹±ž³9Ë´t³R­&¯&¹ÆÔÈ­¬ž¯³9¬f´YÈ0µ˜²Œ¬f«—¯&¬»¯¹Â¬ž´=ь¯¸&¬f²Œ­&¬+¬ž´5·Æ®9¸X­® ¸
±ž³n³9¯…ȹŽ´b®T¸&¹Â³9´
ÃÖ³9¯®0·Æ®°¯Á9¬-¸&¬6®°½=Ó
Ít­&³9ÊR½B¬‹¬‹¼R¬f¹Â´5®°¯…¸&­¬f¯&®9¬6´t­l¸ÈY¹Ž´Œ¸Á²5¶b¹Ž´b­­¸¹ÂÁ9²Œ²5¬0¸®9­´t³9®°´·Â¼¸&­²Œ¹Æ­¬0¹Æ¯­¬f±Ç¶t¬f­´5¬ž¸-ÃÖ¶ŒÈ·¬fÃÖÊ9³9¬ž¯ ·Â³9ÈÑt¬ž½0¯¹ÂÊ5¬f¹Â´5´Œ¸Á ­
¨ Ò Y¦”$ Óv y w
Ð
Ñ ¹Â´¸&²Œ¬:µ˜«»±ž³9½B½‹¶t´Œ¹“¸l¼RÓ
&¢²Œ¬Ít¯…­¸Ð²Œ¹Æ­l¸³9Á9¯…®°½ú¹ŽY
´ —$¹ÂÁ9¶t¯&q¬ 09®+¯&¬fÑb³R¯¸…­H³9´„±ž¬ž´5¸&¯…®°·ŽÄ
Îϼ´˜­&¹Â­¸&²Œ³°¹Æ­Ã‹­&¸&¬f²Œ±Ç¬,¸&¹Â³9Ë´˜³R¯&ËÔ¬‹­
­&³9¶Œ´E½Bµ˜½¾«®°¯»7¹ŽÜf¬+¸&²b¸&®T²Œ¸=¬0²t¯¬f®g­&Ê9¶Œ¬@·“¸…¯&­¬6³9±Ç¬žÃX´5³9¸¶Œ·Ž¼}¯-®°ºb´t¬f®9¬ž·“´ Ä ¸¹Âܞ²Œ¬6¬„ÈjÈÊn¬ž­fÊRÓY¬ž·Â³9Èь¹Æ­l½B¸¯&¬f¹ÂºŒ´R¶¸:¸³9¬fâÈPȌ®°¹Â­Ñt¸&ь¯¯&¹Žºt³5¶®9±…¸&²Œ¬6È,¬6­ž­&¥Ó ¼&¢­l¸²Œ¬ž½¾¬„­+¸¯&ˬf´t¹Ž¸&È,²j¸&¯³T¬fË¢­&Ñb®°¯…¬6Ȍ±¸­
Ñt½¾¯&®9³R¯&Ñb¼³5­³°¬6ÃȘ¸&²Œ¹Â´@¬‹±ž¸²Œ·Â®R¬B­&­&·Ž¹Ž¹ŽÍb¸&¬f±ž¯®°®°¸&¸&¹Â³9¶Œ´¶t¯&³°¬RÃHÓÒ­Ø,³R½0¬B¬+Ít˯…­l³9¸+¯Ôь­¯¬f®9­&±f¬ž±Ç´5³9¸+¯…È®¹Â´Œ­&Á0¶Œ½0¸&³ Ä ¸Ø,³j¬
±ž¬žº[´5¬ž¸&·Â¹Ž¯…¬f®°Ê9·Â¹Â¬¾Üž¬f¸Èi²t®T³9¸0´t³9¬f´Œ­B¬
¹Â´i³°Ã¢¸²Œ¸&²Œ¬Y¬·Æ®9¯&­¬6¸B®9­&¼R³9¬f´t®9­+¯­0ÃÖ³9¹Æ¯+­B¸¯²Œ®°¬¸&²Œ±…¬f²Œ¯„³R¹Â±Ç±ž·Â¬„¬f®°³°¯6à Ó
³RÑt¶Œ¯&³T¯BÊn¸¹ÆÈ®°Õ¬-³9­&´Œ¶Œ³R½B½‹½¾¼9®°Óׯ¹ŽØ,Üf¬f¬YȸȌ²Œ®T¬ž¸…´7®ŒÓ ·Â³5³RÔw®T¸
­&Ñ[¬f±Ç¹ŽÍb±Y¹Æ­­¶Œ¬6­¾®°´tÈ È®9·Ž¹Æ¹Â­Üž¸&¬Y¯¹ŽºŒµ˜¶Œ«—¸&¬f»iÈ,¹Â­&´}¼n­ÈŒ¸&¬f¼5½¾´b®°­ ½B¹Â­¹Â±=¸&²b®9®T´t¸:È7¹Â´@±Ç³9³R½B¯ÑŒÈ·Â¬f¬Ç¯—Õ}¸³=®°ÑŒ¬%ь'm·Â¬f¹Æ±ž±®T¸¹Ž¸ÊR¹Ž³R¬ž´i·Â¼˜È¯³9¬ÇÄÄ
½¾®°¹Â´t­fɰȹƭl¸¯&¹ÂºŒ¶¸¬fȋ®9ьь¯³R®R±…²Œ¬f­Á9¶t®9¯®9´R¸¬ž¬X½B³R¯&¬¯&³RºŒ¶t­¸Ä
ËÁR¯&³9³RÎϯ´Ô¶ŒÔь­7&¹Â´Œ®°¿ÖÁY´tºt®9·Ž¸¬o½0²Œ¬ž¬f0×½ ·Ž¼ Ë®R¬i¸&±ž²Œ±ž³5­&³9¶Œ­¯…¬×½BȹŽ½BË´tÁY³9®9¯¯&¸&Թ³@ܞ­,¬}¸&ь²Œ­¶Œ³R¬ºŒ½B¸·Â®R¹Â¬w­&­²ŒÔ³°¬6­0Ã
È?Ñ[¸&²Œ¬ž­¹Â¯&¬7´tÃÖ³9±Ç±Ç¯˜¬ ·Æ½0®90?­¬6­?ÈP¹Ž?Ít5ºn¬6Àȼ É ´Œ¬6­&&¢­²Œ®9¬˜´tÈ­&¬f±Ç¬%³R.„´t±ÇÈi¬f´t²Œ±Ç¹Â¼­¸&˳RÁ9¹“¸¯…²®°½>¯¬f­&¬fÑ[Êg¬f®9±·Ž¶b¸¢®T¸¸&³„¬6­B±Ç¬ž¸&´5²t¸¬˜¯®9µ˜·Ž¹Âܞ«—¬6Ȅ»}³9­´t¼¬f­­f¸&¬fÓ ½
¸®9²Œ´t¬®°·Âµ¼­«—¹Æ­˜»mÉgÑ[®°¬ž´t¯&ÃÖÈ-³9¸¯²Œ½B¬ž¬f¹Â¯È"Ñ[³T³RÊR­&¬ž¹“¸¯˜¹Ž³R¸´:²Œ¹Â¬w´t­¯¹Æ¬fȱž¬X¬ž³R´5¶Œ¸,¯µ¸…®T«—Õ³9» ´t³9·Â¹“½‹¸¬ž¼R¯…Ó‘®T¸&¢¶Œ²Œ¯&¬¬ Ë®9¯¹Ž±…¸&²Œ²=¹Ž¸&¬ž¬6´n±Ên¸¹Â¶Œ¯&¯&³R¬6´Œ­½B¹Ž¬ž´ ´5¸¸&®9¬f·m¯&½¾±…²b­=®°´Œ³9ÁRÃÒ¬f¸­²Œ¹Ž¬ž´Y¹Â¯˜®0±ž¯®9¬fÑt®9®°±Ç¸&ºŒ¹Â¹ÂÊ9·Â¹“¬-¸l¼ ³9¯³9ÈÃB¬f±ž·Ž¹Â³9º[ь¬ž¹Â¯…´Œ®TÁ Ä
( 3?n ^gS Mg€* sgA04,
( gS3 ^?
g g ½* k6,
g
74
0¤ v©+¨Òåä=¨dæ‘Òª§
&¢²Œ¬«³9º[³nÙ®°¯¬¢ÑŒ¯³°Úl¬f±Ç¸ç¹Â­HÃÖ¶Œ´tȬ6ÈBº5¼0µYÎ藫"¿Ö¸&²Œ¬Î{¸…®°·ŽÄ
¹Æ¿_®°Û´,®gˁµY¹Žg?´tgA¹ÂnA­¸&Ž¯nZ¼=k?Ž?ÃÖ³90:/¯?SéАRȌÀ&À—¶t±ž®9®°´t¸&ÈP¹Â³9´¹Æ­ÒÉ ®=藴Œ¸²Œ¹ŽÊR¯¬ž¬ž¯…¬
­¹Ž¼R¸l¼Y¬f®9®°¯´t­‹È@%¬ 'm«³9¬f¯&­&¸‹¬f®9¸&²b¯±…®T² ¸
­¸®9¯¸¬fȹŽ´|ê5®°´n¶t®°¯!¼ 0:S?3ŒÓ
&¢²Œ¬×ь¯³°Úl¬6±¸PÃÖ³±Ç¶t­&¬f­w³9´v¸&²t¬EȬžÊR¬ž·Â³9ь½B¬f´R¸–³9Ã=Ȍ¹Â­Ä
¸±ž¯&³9¹Â´5ºŒ¸&¶¯¸¹Žºt¬fÈB¶¸&­¬¼­¸&¸&³,¬f½B¸²Œ­M¬Y¹Â´B±Ç³9˽B²Œ¹Æ½B±…²B³9´}­&³°Á9ßl³5Ë¢®°®°·¢¯³°¬¢Ã-®°Á9´t¬fÈÒ´Œ¯&¬ž³R¯…ºb®T¸³9¹Ž¸&´Œ¹ÆÁj±¢®°®9Á9±Ç¬f¸&´5¹ÂÊ9¸¬­
­&®R¬ž­&¯­&Ên¹Â­¹Â¸±ž®9¬f´t­-±Ç¹Ž¬´,®9¬ž´t´nÈBÊn¹ÂÁ9¯&¶t³R¹Â´ŒÈŒ½B®9´t¬ž´5±Ç¬R¸­-ÉR­&¹Â¶t´,±…²„˲t®9­X¹Â±…²Œ²,¬f²n®9¶Œ·“¸½¾²
®°±ž´t®9­¯&¬¢½¾Ã¡®R®g±Ç¼˜¹Â·Ž¹Ž´Œ¸&¬f¹Â¬f¬f­fÈ Ó
Îϱ…´,²t®9Ñt·Ž·Â®°¬ž¯&´t¸&Á9¹Æ±Ç¬f¶t­0·Â®9¹Â´n¯fÊ9É(³9«·ÂÊ9³R¬6ºbÈP³ٹ´i®°¯¸¬Ò²Œ®°¬Y¹Â½¾È¬f­ ­&¹Â®TÁ9¸:´}­l¸³°¶tÃ-È­&¼n¼¹Â­l´Œ¸Á¬ž½¾¹Æ­­0­¶ŒÃÖ¬6³9­ ¯0®°¸´t²ŒÈ ¬
²Œ¯±f³9®°¬žº[¯¸&¬‹¬f³°¯&¸…³°³R­žÃÁ9ɬž¸&¹Ž´t²Œ´5¬ž¬0¸³9¬ž¬ž·Â¶b·Ž·Æ¹Â­ÈÁ9¬f®°¬f¯&´RÁ9·Â¸¬f¼=´5­¸&¸¬f²t­f´tÓЮ°­¸:ÎϳR´Y®9¯­0ÈÁ9³R¬f³RÑ´Œ¯„¸-¬ž¯…Ñbº[®°³5³°·m­&¸¸­&²²Œ¹ÂºŒ¬fÍt·Ž­&¼iÕn¬‹¬6¬ž®9È@ÊRÁ9¬ž¬ž®9´7´5´t¸…Ș²n­¶Œ½B±ž½¾®°³R´=®°ºŒ´tº[¹Ž·Â­f¬¬ É
ºŒ­&¹Ž¶ŒÁR¸´–® ®°Ê9´t¬fȖ¯&¼:¹Ž½B¯¬žÑŒ·Â¬ž·Â¬žÊT½B®9´R¬ž¸Ð´5¸…±…®T²b¸&®°¹Â·Â³9·Ž´P¬f´Œ³9Á9׬¢®Y¹Æ­H½‹¯&¬f¶tь·“¯¸¬f¹“Ä{­&¯&¬ž³R´5ºb¸&¬6³9ȋ¸0ºn­&¼¼:­l¸¸²Œ¬ž½¥¬—ȸ&¬ž³ Ä
±ž¯&¬6®T¸&¬+!® [sGL XD>@A=ÛG%@€F6>9a8@^I‚G@^NmÃÖ³9¯²n¶Œ½¾®9´t­žÓ
º[®9´t³°&¢È¸²}²Œ¹Â´n¬«Ê9Ñt³R¬6ºb¯&­l¸³9³9¹ŽÚl¸&Á5¬f¹Æ®T±ž±¸&¸­B¬6­®°¹Â­´b­Èi¬fß® Ê9¬žÌ—ܯ…¯®°®T¸·M¸&¹“¹ÂÍb®9³9±ž­&´b¹ÂÑ[®9®°¬f··—±Îϸ…¹Â´Œ´5­ ¹Ž¸&¸&³°¬ž¹Æ÷®T·Â¹Ž¸¸&ÁR¹Ž²ŒÊR¬ž¬B¬˜´t±žÑŒ¸&¬¯²b³9®T¯ºŒ¬f¸=·Â­&¬ž¬f¹Ž½ ´n®9Ê9¯±…³RÃÖ²Œ¯&·Ž³RÊR¬f½¬f¯­­
²ŒÊn¹Â¹Â¯&Á9³R²´Œ·Â½B¬žÊ9¬ž¬f´5·¸ÑŒ´Œ¯¬Ç³9¸lºŒË·Â¬ž³9½ ¯Ô­ž­É5³R¸&·ŽÊn³¾¹Â´Œ­&¬žÁ‹ÊRᬞ®R¯…±Ç®°¹Â·m·Ž¹Ž¯&¸&³R¹Â¬fºb­f³9ɸ&¸¹Æ³„±ž­­ÑŒ¬f´t·Æ®T­¸&³RÃÖ¯&³9¹Â¯Üž½B¬6Ȅ­¬fÃÖ³R´¯Ä
¯­&Ñb³9º[¬6±Ç³°¹Ž¸…Íb­¢±¹Ž´¸®R¸&­²ŒÔ¬:­fÉT­¸&®°³-½B¸¬²Œ¬¬ž´nьÊn¯³9¹Â¯&ºŒ³R·Â´Œ¬ž½B½;¬ž³9´5Ãm¸f±žÓ ³n³9¯…ȹŽ´b®T¸&¹Â´ŒÁ-½‹¶Œ·Ž¸&¹Âь·Â¬
&¢²Œ¬®9±Ç¸&¹ÂÊ5¹Ž¸&¹Â¬f­$ÃÖ³±Ç¶b­M³9´:¸l˳ ­&±ž¬ž´t®9¯&¹Â³R­fÉ6¸&²Œ¬Ít¯…­l¸Hº[¬ž¹Â´ŒÁ
®¯:³9Ȍº[³9³°½B¸¹Â±¬f­±Ç¸&³R¹Æ±½B¬žÑb´n³RÊn´Œ¹Â¯&¬ž³R´5´Œ¸½B­®9¬ž´t´5¸ÐÈ=¹Â±ž´„®9˯&¬ž²ŒÄ]ÁR¹Æ±…¹Ž²„ÊR¬žÈ¯…³9­Ð½B±Ç³R³°´5¸¸&¹Â±¯¹ŽºŒ¸&¬f¶Œ±…¸&²t¬ ´Œ¸&³9³0·Â³9¸ÁR²Œ¼9¬ É
²Œ¹Â±ž´t³9¬f­½B¯„¸&¹Ž½BȌ¸&¶®°³9¸¹Â´,·Â¹Ž¼w³R´Á9·Â³RÉ[¹“ÃÖ®9®9¬R·M´YšÓ ³°¬žÃ&¢´n®R²ŒÊn­&¹Ž¬˜­&¯¹Æ³9­l­&´t¸¬f¹Ž½0±Ç´Œ³RÁY¬f´t´5È}®9¸´@¹Ž­&´Y±ž¬f¬ž·ÂË´tÈ®9¬f²Œ¯&¯&¹Æ·Â¹Â±…¼˜³@²=Ñ[¹Æ¸&­„²t¬ž¯…¬f®,­­&³R¬‹²Œ´@¬6®9®°±Ç¹Â´,·Ž¸&¸&³9²²Œ¯…ÄÏ­¹Â±ž­:®9±ž³9¯&³°¬¯Ä
³RÑb¬f¯®°¸&¬¹Â´ ³9¯…Ȭž¯¸&³}¬ž´t®R±¸Y®wÑt¯&¬6ȬÇÍt´t¬fȜ˳R¯&AÔ –t³TË ³°Ã
¸³R®R¹ŽÁ9±´Œ¸&¼,Á5¹ÂÊn­—˹“¸Ñt²Œ¹Ž¯&¬6¹Â³T±…­-Ên²PÃֹƳ9ȹƯ-­‹¬Ò¸&¸&®
²t²Œ¬„¬¯&¹Æ±…±ž­&²˜®°±ž¯³9­¬0Ñ[¬ž¬„¸-³°Ãг9³°¸ÃÐòŒ±…¸¬¾²t²Œ®°¬
¬ž·Â·Æ·Âȯ¬ž¬f¬f´Œ­&¯&ÁR¬f·Â¼9¬f®9­—]Ó ¯±…ÃÖë²,³9³°¯¸&¸&¬6¸&²@²Œ®°½¾¬Ò¸&²t­‹¸&¬f¬6­&®9±…¬B²Œ´t´ŒÈP­&¬Ç³9¸&¹Æ·Ž­ÄÄ
Ëь®9·Â¯²Œ­&³T³B¹ÆÊn±…¹Ž²j½0´tÁ‹³9®9¸&Ȍ¸¹ÂȲŒÊT®T¯¬+¬f¸&·Ž¹Â­¬f´Œ­—Ê9ÁB¸&¬f²Œ·(¸²Œ¬¾³°¬:ÃHÑt®°È®9¶¬ž¯¸ÊR¸¹Â³9¬ž±ž´Œ·Â¶Œ³9³R·Æь®°½‹½B¯-¼B¬f¹Â­´RÃÖ­³9¸¶t¯³9¬f³R­Ã·Â´tȌ±Ç³R¬ž¬ž¯Ë"´Œ´Œ®9­¬fȌ³R±Ç¶Œ·Ž¸&Ên·“¬6¸…¹ÂȘ­ž´ŒÓ Á0¸³¸&³n¹Â³9½0·Æ­Ä
¸¸¹Ž³
³R&¢´×Ñ[²Œ¬ž¬ž¬=¯&´nÃÖ³9Ênь¯¹Ž¯½ ¯³°³9Úl´Œ¬f­&½B±ÇÑ[¸B¬f¬f±Ç®9´R¹Ž¹ŽÍb¸„½¾±Ò¹Â­B´×­¬f®T˯&¸¾Ên²Œ¹Æ±Ç±Ç¹Æ¯±…¬f¬f²×­®°¸&®°½‹¹Â´t´Œ¶ŒÈÁj·Ž¸&½¾¹Â®ÑŒ®°·Â¹Â¬Ô9´Œ¬:¹Ž¯¸&³9¹Æ¸&®°²Œº[·¬f³°½ ȸ…­„¬f½0®g®°Êg¯³R®9¬˜´t¹Ž·Æ­®°®°¸&ºŒºt¯…®T·Â·Ž¬¬ Ä
¸º[®R²t­¬˜®T¸,¬ž®°Ä{ºŒ­&¶t·Â¬ž¬=­¯¬6Ên¸&­˜¹Â³–±žÑŒ¬f±ž­=·Æ³9®°½B´t¸³E´ŒÑ[¹Ž´t³R¸²Œ­&Áœ¬=¬w®°­&Ë´b¶tÈ;³9±…²7¯·Â­&ȱ…­Ó²t¬f¯&¬fÊnÈÌ ¹Æ¶Œ±Ç¬6·Â¹Â­­¾´Œ¶ŒÁ×®°Ñ[´t¬ž¸&È7¯¬6Ên±…¹Â­&²Œ­&¶Œ´Œ¹Â³9Á9³R´ÁR·Ž³R¬f­Á9­&¸B¼¼9ɋ­l½B¸Ë¬ž³R½=¹Ž¯&·Â¬ ·É
·Â±ž®9¬ž³9¯&ÊR¬½B¬ž·-ь¸&²t·ÂÁ9¬Ç¬Y³RÕ®9ÃÖÑt·Â³9­f·Â·ÂìÓ·Ž®9³T´tË&¢­¹Â´Œ²ŒÃÖ³RÁ€¬@¯5–¯&¸&³R²Œ¿Öºb¹Å¬‹ÀB³9¬ž¸&¯&´n¹Æ¬f±Ên½B¹Ž­&¯¹Ž¬ž´t³9¯´ŒÈŒÊn½B¬ž¹Â¯±ž¬f¬f´R¿¡­
¹_¸—Ó ¬9±ž®R¶ŒÓ”±ž¯±ž¯&³9›¬f¹Ž¯…´R¸„ȸ¹Ž¹Â·Ž´t¼}­„ÁÒ¸&ь¹Â¸½0¶Œ³„¯…¬=­&²Œ¶Œ¹ÂÃÖÁ9¬f³R²È ¯
¼R½¾³9®T¶Œ¸…¯±…²„Ñt¹Ž³9·Âí· ¯œ5¯À ¬fHM®9¿ÖÈB¹Â¹ÆÀ½B¬ž¬´5¸&¸&¬f²Œ¯¬-¸…®°´Œ¹Â¬f´ŒË½B<­ ¬žœ5´5À¸Ò¿Ö¹Â¿Ö¹Â¹]¹ÆÓ ÀX¬9‚Ó ¸&¯…›®°·Â´t¬Ç¸…­&Ñ[­³9²t¯&®g¸-ÊR¿Ö¬:¹]Ó ®„¬9/Ó ±…²Œ›ÁR¬f¬Ç­¸­
¸ÃÖ½B³R¹Ž³R¯¬B´ µY®
¿¡¯f±Ç¹_ÓÓç³?¬9»n'[âÓ½B¬f¬:¹Ž›&¸&±…^² ÃÖ²Œ¯œ5³9¬6À½±…É$Ô,¿¡Ên¸&¸&²t¹Æ²ŒÀ¬0¬Y¬6±ž­&­³:±ž¸&'m³9³T¯&¬žÊ9¸+¬Ç1¬ Ä{œR¿ ½¾ÀǛÉ—ÁR®9³Ò±…¿Öʌ²ŒÃÖÀ0¹Â³R´Œ¯­¬1¬6œR®B®°À˯…ÉM±…®9¿Ö²E¹Â·ŽÊŒÔ
À¿Ö¹]ËÓ ¬ž¬9¹“ÕŒâÓ¸®°²=½B›µ˜·Â¹Ž³5´b¯³R®T­fÔ Ä Ó
믳TË´^œRÀÇÓ
¦ ´t±Ç¬B®9±…²t¹Ž¬fÊ9¬fȘ¸&²Œ¹Æ­+ºt®R­¹Æ±Ò³9Ñ[¬ž¯…®T¸&¹Â³9´b®°·M¬ž´nÊn¹Ž¯³9´t½0¬f´5¸fÉ
î žé¨
é ï%ððñ^òôóóaõö<÷6ö%øaùõ%úMûRüñ^û§õþýDûPøþÿ%õsûüð
¸¹Â´n¹ŽÊRÊ9¬ ¬6­l˸¹Ž®gÁ5¼R®TÓM¸&¬6»nÈB¹Â´t³9±ž´Œ¬ ·Â¼¾¸&²Œ¯¹Æ¬f­±Ç±…¬f²t´5®°¸&¯…·Â¼®9±Ç¿_¸&­¬f¬f¯&¬K¹Æ­l(¸*6¹Â4±—,m³°ÃÖÃM³R¯®0®Òµ­«—¶Œ¯»„Ê9¬f²b¼Œ®9À­É5ºb³9¬f´Œ¬ž·Â´¼
¹¸Æ®‹­ ³iÃÖ¶t¬ž¹ŽË­&¸f¶tӔ®°®°·Â¯&·Â¸&¼˜¹Æ³T±ÇËÑb·Â¬f³5¬ž­­&Ê9­&®9¬f¹ÂȌ¯fºŒÉȌ·Ž¬0¯&¸&¬6²Œ¸­&¬j³=­Ð¸&±…²Œ²Œ²b¹Æ­l¹Æ®°­¢¸¯…³9®9¹ÆÁR­&±­&¯¸¶Œ®9¬ž¬-½ ¯¹Ž¬ÇÜfÕ¬:¹Â´Öь¸·Â²Œ¹Â—±ž¬ž¹“¹Â½ ¸Á9·Ž¶Œ¼R¯ËɌ¬’®°¹Ž¸&·Ž0T²j¸&²Œºœ³R¯¬f¶Œ­&­&²ŒÁ9Ñb³T²„¬6˱¹Ž¸¸­
¸²t®T¸Ò¸&²Œ¬f¯&¬¹Â­‹´Œ³°¸B®˜ÑŒ¯¬f±ž¹Â­&¬¾¸¯&¬f´tÈj¹Ž´–¸&²Œ¹Æ­ÒÃÖ¬f®T¸¶Œ¯¬
®°´tÈ
¸­&²t¶t®T±f±Ç¸¾¬fº[­­³°ÃÖ¸¶Œ²i·Ž·Â¼„¯¬f¹Â®R½B±ÑŒ¸&¹Â·ŽÊ9¬f¬½B¬ž®°´5´t¸&È}¬6ÈȬf¹Â´·Ž¹Âºb¸&¬f²Œ¯¬+®°¸&·Æ®9¹ÂÊ9­¬=¸¼9µ˜¬6®°«—¯…»w­žÓ ²t®gÊ9¬ºb¬f¬ž´
¸¸²Œ¬ž½¾¬wÎÏ´­f±žÓÒ¸&³9²Œ½BÎ{¬+¸‹½‹²t­&²Œ¶Œ¹Â­³T´t¸&˳R¹Â±fÁ9­—®T¯…¸&¸&®°¹Â²t³9½®°´ ¸‹¹Â|´ ½BȹŽ—$¬f¯±…¬f¹ÂÁ9²t±Ç¶t®9¸:¯&´Œ±Çm¬ ¹Æ³9­0°½¾½B± ­Y½Ò˶Œ¶t¬ ´Œ­&²t¬f¹Æ±žÈ ®g®TÊ9¸ºn¬ ¹Ž³R¼ ¹Â´´n¸Ê9¹Æ²Œ¬f­+¬f­¸&±Ç­&¹Â¬w³RÁR´t®°­&­¸&¼¹Æ¬6Èn­lÈ ÄÄ
¸¬f½B²Œ¯&¬f¬6¬+¯&È@ÁR·Æ®9¹Â®9±B­´@¸¢±Ç¼R¬³R'[¬f½B®9¬6¯½‹±­¸¶Œ¹Ž¸&ÊR´Œ²t¬0¹Æ¬ž±ž¯Ë¢®°¬-¸&®g¹Â²b¼=³9®g´jÃÖÊ9³9¬³9¯+¯ ºb±ž¸¬f³5²t¬ž³R´Y®T¯¸‹ÈŒ­&¹Ž³9Ȍ´t½B³Y®°¸&¬´Œ¹Â³9³°Ë´,¸:³9¯®°¶tÔ´t­&­¬„È@¶t±Ç¬ž­³RÊR¹Â½B´Œ¬ž´,Á„½‹­¹“¶ŒÃ¢¸&´Œ¹Â¹ŽÁ°¹Ž´ ÄÄ
±f®T¸¹Ž³R´®T¸+®°·Â·_Ém¸²Œ¬0¶t­¬0³°ÃÐȌ¹Ž¯¬f±Ç¸ ±ž³9½B½‹¶Œ´t¹Â±f®T¸&¹Â³9´@®°·Â·Ž³TË­
ÃÖ³R¯­¹Â½Bь·Â¬ž¯ºŒ¶¸—­¸&¹Â·Â·%¬ 'm¬f±Ç¸&¹ÂÊ9¬ ÃÖ³9¯½¾­¢³°ÃM±Ç³n³9¯…ȹ´t®T¸¹Ž³R´Ó
ʸn¹Â¹Â±ž¬ž&¢¶ŒË7·Æ²Œ®°³°¬:¯6Ã[É°®9¸&ÃÖ´t¯²t³9®°¬½ú·Â¼¸&¯­®+¬ž¹Æ­´b±ž³°ÈB³nͳ9¹Ž¸´B¯…²ŒÈ¸¬f¹Ž²Œ´b­&¬—¬:®T¸&Èȹ³9¬6¹Æ®°´B­ÁR¹ÂÁ9Ñ[¯´B®9³9½B¹Â³°´5Ã(­¢¸Ð¯®9³°¬f·ŽÃm±ž·Â¬ž³TÊn´5˹Ž¬f¸­Ë+µ˜ÃÖÉR³9«—ȯ¹Æ»(­l®0¸Ó9¯&Á9ÎϹ´¾¬fºŒ´Œ¶Ñt¬ž¸®9¯…¬f®°¯È Ä·
º[½¾®9ь¬ ®9ь¹Ž­´t¯¶Œ³R­f¹Ž®R¸ÓH®9±…µY²ŒºŒ¬f·Â³9¬-­¢¯®9¬žºt³T´t®RÊRȾ­¬ž¬6¬%¯6È'mÉ°¸³9¬f²Œ±Ç´Y¸&¬-¹Â¬ÇÊ9®9Õ¬¯ÑŒ±…Ãַ²t³9¹Æ±Ç¹“¯¢¸¹Ž¸¬f®°±ÑŒ±Ç¸³RÑt¶Œ½0·Ž¯…¹Æ®°±ž½Ò·m®°¸&¶Œ±…¹Â²Œ´Œ³9³9´„¹Æ±ž¹Æ±Ç®°¹Â¬´¸&¹Â¹Ž³9­&´¬ž´YÊR¯&­¬ž¬6¬f¯…®°¬ž®°·Â½¹Ž·[ÜfȹŽ´Œ¸&³°³Á Ä
Ȍ¬ž·Â¹Žº[¬ž¯…®T¸¹ŽÊR¬³9¯¯&¬6®9±¸¹ŽÊR¬µ˜«—»B²b®9­X´t³°¸´t³TË®RȌ®g¼­H®:ь¯¬ÇÄ
ÃÖ±ž³R¹Â­&¯¬Hºb¸³9¯&¬f¸&²´tÈ(¸&ÉT²t­¬ ¹Â´t¸l±žË¬Ð³¾%¬ 'm®°Ñt¬f±Çь¸&¯&¹Â³5Ê9®9¬Ð±…­&²Œ³9¬6·Â¶­žÓ ¸¹Ž³R´t­²t®gÊR¬Hºb¬f¬ž´:Ñt¯&³RÑb³5­¬6È
¯¬f­&Û¬f³n®9³R¯±…Ô5²—¹Â´Œ¸lÁË®°³¸(½¾¸&²Œ®9¬¹Ž´ ½0ÃÖ¶³5¸­l¶Œ¸$¯¯&¬X¬6±ÇÑb¬ž¬f´5¯¸­&Ñ[Ȭf¬f±Ê9¸¬f¹Ž·ŽÊR³R¬fь­½B­&¬ž¬ž¬f´5½¾¸³°­mÃn¸&¸&³²tº[¬Ð¬Ðµ˜±ž³9«—´» Ä
­&rt¹ÆÈ8:¬žI©¯XD¬fÈLG<Wš®9­¢N¹Â;4½0U Ñ[³9¬ž¯&Õ¸¬f®9±Ç´5¶Œ¸¸&¸¹Â³9³9´ÑŒÓ¹Æ±ž6­ »n5ž¬ž™‘ÊR;:¬ž9P¯…=A®°G/·-~ %¬ 'mE ;:³9L¯&G+¸­­&¼²t­l¸®g¬žÊ9½¾¬@­ºb®°¬f´t¬ž´È
ȸŒ²t³9®T´Œ¸Y¬-±ž¹Ž´
®9´ ¸&²ŒÈ¬-¬f¯®9e ¬f· ±Ç¬fË´5¹“¸¸²E˳9®–¯ÔÒ·Æ®°³R¯´Á9¬˜µ˜´n«¶Œ»¾½ÒºbÃÖ³R¬f¯¯ºŒ³9¶t˹Ž·Æȯ³9¹Â´Œº[Á0³°¸­&­¼­l±ž¸®9¬žÑt½¾®T­Ä
¸…ºt±f®9®°·Ž¬,­&·Â¹ŽÔÜ6³°­ž®TËÓv¸¹ŽÑ[³RÎÏ´¬ž´E¯&ÉtÃÖ³9Ñt·Â³9¯®9´Œ½B¯Á„¸¹Ž¹Â´t±ž¸&Áw¶Œ¬ž¯·Æ®°½ %¬ ¯6'mɬf®°±Ç¹Â¶­¸&¸­&¹Â³9Ê9¶Œ´Œ¬j¬f³R­±Ç½‹³n­¼9¶b³9Ét±…¯…²Eȸ®R¹Â´t­®9ÔY®T­¸®9¹Ž±Ç³R­³n´ ­&³9¹ŽÁRÑ[¹Â´Œ´œ¬ž½B¯…®T±Ç¬ž¸³R´5¹Ž½BÊR¸—¬ÑŒ®°·Ž´t·Â¬ž³°ÈÕ Ä
±žµ³9«—M´ –b»m¹ÂžÓ ±Ç¸M&¢¯²Œ¬f¬6­&­³9¬·Â¶¸¸¯&¹Ž³R¬f´t´t­žÈŒÉT­¾²b¹Â®g½BÊ9¬Ñb³5ºb­¬f¬Y¬ž´¾±Ç·Â®9¬fȌ®9Ȍ¯„¯&±Ç¬6³R­&­&´t¬f­lȋ¸¯¹Ž®9´0¹Ž´5·Â®9¸…­Ò¯&ÁR³R¬´}­±ž¸&®9²Œ·Ž¬¬
³R~_½‹4U¼RNÓiGI2ÎÏ´7®°´tÑtÈÝ®9¯¸rt¹Â±ž81¶Œ8·Â9<®9C:¯f>ÉÐ@_;:¯&¬6NP>V­8:¬6@®°¯…±…Ȳ–¹Â½0ь¬f¶t´t­&­&²Œ¹Ž¬f³R­¾´t­
¸&³T³9Ë¢Ã:®°³9¯…¶ŒßÈ ¯Þ}¸®°GNÕG³99<´8:Ä`
®¯9=A³9ьGº[@_ь³°G¯8¸…³R­
JZ®RUÒ±…®9²Œ¯&¸&¬f¬˜¬f­f®9Óݹ½¾´nÊ9­&¢³R³9²Œ·ŽÊRÃЬ,¬f¯È ±ž³9³9º[½B¿Ö³°¬R¸ÓьÁb­—·ÂÓ¬ÇÃÖÕ³9¹Ž·ÂºŒ¸l·Â³T¶t¼×˹Ž·Æ³°È¹Ž´ŒÃ:¹Â´Œ¡Á ¸Áw²Œ~€¬,NRÑt9<®°8¸¸&®9@o¯­&³9Ôrt·Â­
·Ž81¹Â´Œ8¹Â´šÁ 9<C:>(Ëgt@_²ŒQÉ;¹ÂNh4±…G,{² C É
·Æ±ž®°¯&¯¬6Á9®9¬ž­&Ä{¹Ž´t­±žÁ9®9·Â¼˜·Ž¬±Ç®9³9­½B­¬fь½‹·Â¬ÇºŒÕj·Â#¼ ±ž(®9lÑt,]É®°ºŒ¯¬f¹Â·Â­¹“±Ç¸¶Œ¹Ž¬6¬-­+³9ºbÑ[³9¬ž¸&¯…²j®T¸¹Â¹Ž´–³R´t­­…³9ÀXßl¬fË´R¸…®9®°¯&¬B¹Â·Â­®°¹Â´t´È Ä
¹Â³R´=¯ÈŒ²t¬ž®°¯+¯…ȸ³Ë¢®°¬Ç¯Õ¬9ьtÓ ·Â³9&¢¹Ž¸Ò²n¶tÈ­¹ 'm²t¬ž¬Ç¯¸&¬ž¬f´5¯&¸Ò³RÁ9¯¬f³9´Œº[¬ž³°¹Ž¸Ò¸l¼¾±f¹Â®°­—Ñt®B®9ºŒÔ9¹Â¬ž·Ž¹Ž¼¸&¹Â¬fь­Ò¯³9¯&Ñ[¬6¬žÈ¯&¶t¸l±ž¼„¹Ž´Œ¹Ž´Á
¸²Œ¬:±Ç³5­l¸³9Ã$¸²Œ¬+³TÊ9¬f¯®9·Ž·m­¼­¸&¬f½ ȬžÊR¬ž·Â³9ь½B¬f´R¸6Ó
®9­&¶Œ¯±…¯µY²t¬˜¹“³R¸½B¯&¬f¬f±³9³T¸¯Ê9¶Œ¬¬ž¯¯Ò¬f¬ ­f.¾ÃÖÉ$¯±ž³9®°¬ž½ ·Ž¸&´5²t¸³R³9®9¶Œ¶Œ´t¯„ÁRÈײPÑ[¬ž½0¬ ¯…'[­³R¬6Ñ[¯&±¬„¬f¸&±Ç¹Â±žÊ9¸&³9¹Â¬,Ê9½B¬˜­lь¸­&¯·Â³¬Ç®°Õ,±Ç¸&¹Æ¬f®°¸&Á9·—³˜¹Â¬fÈ­„¯¬f¬f·Ž¹Â®9¹Â´Eº[·Ž¹Â¬žÜž¶t¯…¬R´t®TÉ$¸&±Ç¬f¹Â¬fÊ9´¯¬ ÄÄ
¸…·Æ®°®°¯¹Â´Á9¬@®°­&´b±fÈ®°·Â¬˜È¼n²Œ´t¬Ç¸®°¬ž½B¯³9¹Æ±EÁR¬ž´Œ¬f´n¬f³9Ê5¶t¹Â¯­„³9´Œ­&½B¼­l¬ž¸´5¬ž¸…½¾­ž­Ó ȬžÎÏ´ÊR¬ž·Â½B³9Ñ[³R¬f­È ¸i­&³°³PÃá¸&®°²Œ¯6¬ É
£
˷® ¬f²t®RȬž¯¬žGa¬j¯¢; ÃÖe ¸&³RL+²Œ̯¬P¸&rt²t½BG¬:³9@^½B¯NR¬–9<³R;­L+±ž>¸—®9Œ6GÑt±ÇC ³9®°½Bºt®°·ŽÑŒ¬jÑt·Âь¬Ç¯¯&Õ³9³5º[¸®9³°®R±…¸­²×Ô®9­:²t­®R¿_­&±Ç­=¶Œ³n½Bºb³9Ñ[¬f¬f­¬ž¬ž´¯…¸&®T²t¸ÑŒ¬–¹Ž¯ÊR¬Ç¬-¯&ÃÖ³R¬ž·Â³¯·Ž¬j¯±ž¬f®9³9È(·“Äà É
¹Â¬fÜf¯®°®°¸&¸&¹Â¹Â³9³9´´bÉHdÀ ®9( gŒ¶€É ¸&³Rh´Œ,]Ó ³9½B³9¶b­+¯&¬6±…²t®°¯Á9¹Â´ŒÁbÉ$®°¶Œ¸&³9´t³9½B³9¶t­+¸&¬ž·Â¬ž³RÑÄ
à
Figure: number of surveyed systems per year (1997-2002), grouped as a) Centralized vs. Distributed, b) Deliberative vs. Reactive, and c) Communication (Absent, Direct, Stigmergic).
The analysis based on all the MRS issues discussed above suggests that the project will deliver an intelligent system that is both robust and widely acceptable. A high-level goal of the project is to show how intelligent robotic systems, endowed with physical and functional adaptation capabilities, could represent a viable alternative to classical domotics, and how low-cost autonomous robots can increasingly perform useful tasks. Additionally, we constantly pay attention to the other side of the coin: the cognitive impact of this novel technology on people, which we study in order to avoid the rejection of the system by real users. We are also selecting the heterogeneous robotic platforms to be used in the project, and studying the problems connected to their continuous use in an environment.
Robotic path planning in partially
unknown environment
Alessandro Scalzo – Gianmarco Veruggio
CNR-ISSIA Sezione di Genova
Via De Marini, 6 – 16149 Genova (Italy)
[email protected] - [email protected]
Abstract
The problem of navigating and path planning in a partially
unknown environment involves several interesting related
topics, such as sensing, safe and reliable trajectory
generation, and localization. Many solutions have been
proposed in the literature, each of them working well in some
situations and presenting unavoidable drawbacks in
others. For this reason, a robust and reliable solution to the
general problem should be a hybrid approach, consisting in
the fusion of different robotic methodologies. In this paper
we discuss our solutions to the different sub-problems
involved in the task of navigating in unknown
environments for a mobile robot equipped with a set of
different sensors, such as ultrasonic range finders and a colour
camera. In particular, we present an application of
probabilistic maps to the problem of obstacle mapping, an
application of the Global Navigation Function algorithm to
the problem of path planning, and a Kalman Filter solution
applied to a problem of vision-based self localization, each
one integrated in a low-level and mid-level robot control system
implemented on our ATRV Jr mobile base Tango.
1
Obstacles sonar mapping
Our robot Tango (employed by the Robotlab unit in the
project Robocare) is equipped with a sonar belt, consisting
of 17 units surrounding the vehicle, polled at the rate of 5
Hz. Our approach to the problem of processing the sonar
readings is to use them to build probabilistic occupancy
maps [6][7][4][13][12][5] instead of using the raw data in
a reactive obstacle avoidance navigation algorithm, for a
number of reasons that will be explained in the
following sections. The first reason is the possibility of
using probabilistic maps to fuse sensor data acquired from
different origins, such as sonar range measures and, for
instance, information about obstacles and free space
acquired by an artificial vision system, resulting in a
homogeneous data representation. The occupancy map
obtained can then be used to generate free-space paths by a
global path planning algorithm, and/or to self-localize the
robot in the environment using matching techniques with a
priori known environmental landmarks and features, or
Simultaneous Localization And Mapping (SLAM) [4][2][3]
algorithms.

Copyright © 2003, The ROBOCARE Project
Funded by MIUR L. 449/97. All rights reserved.
The occupancy map will therefore be represented by a
discrete grid of N × M real values 0 ≤ m(x, y) ≤ 1, which
are the fuzzy occupancy values of the related cells.
This map is thus a discrete quantization of the work
space of the robot, where each square cell has a fixed side
length (usually 10 cm).
1.1
Ultrasonic data filtering
The data acquired by ultrasonic sensors are obviously
affected by measurement errors (on the order of 3 cm), but
a greater problem is that of false positive and false
negative sample data. False positive samples may be
caused by spurious echoes, whilst false negative ones may
result from specular reflections on smooth and polished
surfaces like glass, when the incident angle exceeds a limit
value (usually 10 degrees on very smooth surfaces). Thus,
the range data acquired from the ultrasonic sensors need to be
filtered to limit the effects of these kinds of errors. A possible
filtering technique is to apply a low pass filter on sensor
range measures during the process of occupancy grid
updating with new sensor data.
We can actually model the sensor response as a
probability distribution, over the sensitive sonar emission
wavefront, of the position of the obstacle that caused
the sonar reflection. This distribution can be approximated by a
pseudo-Gaussian of the following form:
P(r, δ) = K exp( −(1/2)( r²/σ_r² + δ²/σ_f² ) )    if δ < δ_M
P(r, δ) = 0                                        if δ ≥ δ_M
where δ is the angular position on the wavefront with
respect to the sonar axis, δ_M is the maximum response
angle for the sonar, r is the obstacle range measured by
the sonar, and K is a normalization constant.
When a new response at range r is received from the
sonar, the occupancy map is updated as in the following
low pass formula
if m_old(x, y) < P(r(x, y), δ(x, y)) then
    m_new(x, y) = A · m_old(x, y) + B · P(r(x, y), δ(x, y))

where 0 < A, B < 1 and A + B = 1. The previous map updating
rule introduces a low-pass filtering behaviour that makes
the system less sensitive to false positive responses from
the sonar.
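To make the update concrete, here is a minimal Python sketch of the sensor model and the low-pass rule. The constants (σ_r, σ_f, δ_M, A, B) are illustrative values of our own choosing, and we read the first argument of P as the deviation of a cell's range from the measured echo, so that the distribution peaks at the detected obstacle; the paper's implementation may differ on both points.

import numpy as np

SIGMA_R = 0.03   # assumed range spread (~3 cm measurement error)
SIGMA_F = 0.10   # assumed angular spread of the emission lobe (rad)
DELTA_M = 0.17   # assumed maximum response angle (~10 degrees)
A, B = 0.7, 0.3  # low-pass weights: 0 < A, B < 1 and A + B = 1

def sensor_model(r_dev, delta, K=1.0):
    # Pseudo-Gaussian response over the emission wavefront;
    # zero outside the maximum response angle.
    if abs(delta) >= DELTA_M:
        return 0.0
    return K * np.exp(-0.5 * ((r_dev / SIGMA_R) ** 2
                              + (delta / SIGMA_F) ** 2))

def update_cell(m_old, r_dev, delta):
    # Occupancy can only be raised by a detection, and only
    # gradually, which filters out spurious (false positive) echoes.
    p = sensor_model(r_dev, delta)
    return A * m_old + B * p if m_old < p else m_old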
1.2
Moving obstacles clearing
It is obvious that the mapping method previously described
can reliably map fixed obstacles in the environment, like
walls and furniture, but cannot delete previously detected
and mapped moving obstacles, like people and other
autonomous vehicles. It is thus necessary to introduce a
supplementary rule to clear any map cells that are no longer
occupied by obstacles. To this end, we can observe that
if a sonar sensor can tell that an obstacle is present at a
certain range, a second piece of information is implied:
no other obstacles are present in the area covered by
the sonar at a lesser range than the measured one.
We can thus use this information to
clear the cells covered by the sonar response area which is
closer than the detected range.
We can therefore define a clearing function as follows
C(r, δ) = K ( 1 − exp( −(1/2)( δ²/σ_f² ) ) )    if δ < δ_M and r < r_D
C(r, δ) = 1                                      otherwise
where r_D is the obstacle distance detected by the sonar.
The clearing function is then used in a low-pass filtering
rule analogous to the occupancy mapping one:
if m_old(x, y) > C(r(x, y), δ(x, y)) then
    m_new(x, y) = A · m_old(x, y) + B · C(r(x, y), δ(x, y))
that smoothly clears the map cells covered by the sonar
response area, before the detected range, at every map
updating cycle.
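A matching sketch of the clearing step, reusing the constants and imports of the previous fragment; the weights are again assumptions.

def clearing_model(delta, K=1.0):
    # C is close to 0 on the sonar axis (strong clearing) and tends
    # to 1 at the edges of the lobe (no clearing).
    if abs(delta) >= DELTA_M:
        return 1.0
    return K * (1.0 - np.exp(-0.5 * (delta / SIGMA_F) ** 2))

def clear_cell(m_old, cell_range, r_detected, delta):
    # Only cells closer than the detected obstacle are cleared,
    # and only gradually (false negatives are filtered out too).
    if cell_range >= r_detected:
        return m_old
    c = clearing_model(delta)
    return A * m_old + B * c if m_old > c else m_old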
1.3
Dealing with smooth surfaces
There is one more problem in the mapping method
previously described: a false negative sonar response, due
to specular reflection on smooth surfaces at large incidence
angles, can erroneously cancel obstacles from the map.
To avoid this kind of error in the mapping algorithm, one
more step has to be introduced, taking into account the
incidence angle under which the map cell was filled. This
can be done by recording, for each cell, the direction of
the sonar emission that filled the cell itself. A cell can thus
be cleared by a sonar emission with no echo only if the
sonar emission direction is similar (i.e. comprised in a
suitable range) to the one that filled the cell.
In figure 1 we can see a map recorded on line by Tango
moving in an indoor environment using the previously
described mapping algorithm.

Figure 1
2
Path planning
One of the fundamental aspects of the problem of
autonomous mobile robotics is the efficient planning of
trajectories that make the robot move in the free space,
with a sufficient margin of safety distance with respect to
the obstacles (which can be known a priori or detected on
line by the robot through its sensorial system). There are in
the literature several examples of trajectory generation
methods based on local principles of obstacle avoidance
[1], but at first sight these seem insufficient for our scopes,
because they often result in solutions that are very far from
the optimum, even in very simple cases, generating
cycles and stall situations that make the behavior of
the robot visually very different from the intelligent behavior
that we would want to achieve.
2.1
Global Navigation Function
Thus, we chose to follow alternative ways, based on global
algorithms that supply optimal solutions, but that are at the
same time optimized in the implementation to be
applicable
in real time during navigation [9][8]. The choice was
therefore the Global Navigation Function (GNF)[10], a
method that allows the generation of potential fields
absolutely free of local minima. This is achieved by
expanding a wavefront starting from the goal, on a
numeric grid that represents the work space of the robot.
Obviously, the cell grid is the occupancy map discussed in
the previous section, where the GNF algorithm diffuses the
wavefront quickly through cells whose occupancy values
are lower, and slowly where occupancy values are higher.
Over a given threshold, the wavefront doesn’t diffuse at
all. In figure 2 we can see the wavefront diffusion on a
simple map at three different times.

Figure 2
The trajectory generation is therefore computed as a raw
gradient descent of the potential function, without the risk
of being trapped in a local minimum, because the potential
function is totally free of them.
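A possible reading of the GNF expansion as a Dijkstra-like wavefront is sketched below; the cost weighting and the blocking threshold are illustrative choices of ours, not values taken from the paper.

import heapq
import numpy as np

def gnf(occupancy, goal, block=0.8, weight=10.0):
    # Expand a wavefront from the goal cell over the grid: the front
    # moves fast through free cells and slowly near obstacles, and it
    # never enters cells whose occupancy exceeds the threshold.
    n, m = occupancy.shape
    D = np.full((n, m), np.inf)
    D[goal] = 0.0
    frontier = [(0.0, goal)]
    while frontier:
        d, (i, j) = heapq.heappop(frontier)
        if d > D[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and occupancy[a, b] < block:
                nd = d + 1.0 + weight * occupancy[a, b]
                if nd < D[a, b]:
                    D[a, b] = nd
                    heapq.heappush(frontier, (nd, (a, b)))
    return D   # potential function, free of local minima by construction

A trajectory is then obtained by descending D from the robot's cell toward the goal.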
2.2
GNF interpolation
Of course, a further step is needed between the discrete
potential function generation and its gradient evaluation,
because the numerical potential function must be
interpolated to obtain a continuous potential function,
defined in the Euclidean space, that can be differentiated with
respect to the Cartesian coordinates. We chose Bezier
surfaces as interpolating functions because of their good
C1-class continuity properties.
To compute such an interpolation, a square subgrid of 16 cells
centered on the robot position is used. We can define
T = [ t³  t²  t  1 ]ᵀ      S = [ s³  s²  s  1 ]ᵀ

        ⎡ −1   3  −3   1 ⎤
M_B  =  ⎢  3  −6   3   0 ⎥
        ⎢ −3   3   0   0 ⎥
        ⎣  1   0   0   0 ⎦

        ⎡ D(i−1,j−1)  D(i−1,j)  D(i−1,j+1)  D(i−1,j+2) ⎤
Δ    =  ⎢ D(i,j−1)    D(i,j)    D(i,j+1)    D(i,j+2)   ⎥
        ⎢ D(i+1,j−1)  D(i+1,j)  D(i+1,j+1)  D(i+1,j+2) ⎥
        ⎣ D(i+2,j−1)  D(i+2,j)  D(i+2,j+1)  D(i+2,j+2) ⎦

D(r, s) = Tᵀ M_B Δ M_Bᵀ S      0 ≤ r, s ≤ 1
as the usual Bezier’s surface geometric representation,
where ∆ is the matrix of potential function values in the
16 grid cells surrounding the robot position.
The gradient of the continuous potential function D(r, s)
can be derived from the previous relations, resulting in

V_x = −∂D/∂x = −[ 3t²  2t  1  0 ] M_B Δ M_Bᵀ S

V_y = −∂D/∂y = −Tᵀ M_B Δ M_Bᵀ [ 3s²  2s  1  0 ]ᵀ
Figure 3
Figure 3 shows an example of the Bezier-interpolated
occupancy function for two concave obstacles (left), and
the related Bezier-interpolated potential function (right),
totally free of local minima even in the presence of non-convex
obstacles.
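The patch evaluation and its analytic gradient translate directly into code. The sketch below works in local patch coordinates; converting the gradient to metric units would further divide by the cell side length, which we omit.

import numpy as np

M_B = np.array([[-1.,  3., -3., 1.],
                [ 3., -6.,  3., 0.],
                [-3.,  3.,  0., 0.],
                [ 1.,  0.,  0., 0.]])

def bezier_patch(Delta, t, s):
    # D(t, s) and the descent direction (Vx, Vy) on a 4x4 subgrid
    # Delta of potential values, with 0 <= t, s <= 1.
    T  = np.array([t**3, t**2, t, 1.0])
    S  = np.array([s**3, s**2, s, 1.0])
    dT = np.array([3*t**2, 2*t, 1.0, 0.0])
    dS = np.array([3*s**2, 2*s, 1.0, 0.0])
    M  = M_B @ Delta @ M_B.T    # 4x4, shared by value and gradient
    return T @ M @ S, -(dT @ M @ S), -(T @ M @ dS)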
The trajectories generated by the GNF method satisfy
requirements of optimal trajectory in analogy with
Fermat’s Principle in geometrical optics, stating that a
geometrical ray of light travels from A to B in a medium
along the path that takes the minimum time, i.e. the path
that minimizes the curvilinear integral of the refraction
index. In our case, the analogy of the refraction index is
the value of the occupancy function in the map.
In this way, the high occupancy values in the cells
covered by obstacles and in the adjacent ones expand the
metric properties of the related space, making it more
convenient, for the cost function calculated according to
Fermat's Principle, to circumnavigate the zones with
high occupancy rather than crossing them.
2.3
Effects of partial knowledge
As we have seen, the robot fills his occupancy map on line,
during his motion in the environment while executing his
task. When this process starts, the map is initially empty,
and the robot will first try the most direct path to achieve
his given goal, discovering obstacles on his way through
the ultrasonic sensor set. This can bring the robot to
reconsider the path previously generated, and possibly
change the plan on line, because of most promising not yet
explored paths. This process can iterate several times,
resulting in a desirable emerging behaviour of environment
exploration, very useful for mapping.
When the map of the environment is complete, the GNF
will find the optimal path to the goal (unexpected moving
obstacles aside). Moreover, the map can be initialised with
a priori knowledge of fixed obstacle occupancy (such cells
can be marked "fixed" on the map to prevent deletion), in a
hybrid mapping method that easily integrates different
kinds and sources of information.
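One simple way to realize this hybrid scheme, assuming the clear_cell routine sketched earlier, is to pair the grid with a boolean mask of a-priori cells; the data layout is our own assumption.

import numpy as np

def make_hybrid_map(shape, known_obstacles):
    occupancy = np.zeros(shape)
    fixed = np.zeros(shape, dtype=bool)
    for (i, j) in known_obstacles:      # a-priori occupied cells
        occupancy[i, j] = 1.0
        fixed[i, j] = True              # marked "fixed": never cleared
    return occupancy, fixed

def clear_cell_hybrid(m_old, is_fixed, cell_range, r_detected, delta):
    if is_fixed:
        return m_old                    # a-priori obstacles survive clearing
    return clear_cell(m_old, cell_range, r_detected, delta)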
3
Self localization
As is well known, the problem of self localization is
central to autonomous mobile robotics. Our activity in this
research direction is characterized by an incremental
approach, where four sequential steps have to be
performed in order to develop a robust and accurate self
localization algorithm able to work in a non-structured
environment (at this time, only the first step is completely
realized and will be discussed in detail in this paper). The
first step consists of a localization algorithm to correct the
robot position and orientation in real time, during the robot
motion, based on fixed and recognizable landmarks
(detected by artificial vision as yellow vertical stripes
delimiting the door in figure 4) at known positions in the
work space.
Figure 4
Our algorithm is not based on triangulation but on single
landmark measures, as a Kalman Filter input [11] in order
to correct the robot position and subsequently its
orientation according to the new position computed. The
second step consists of transforming the localization
algorithm into a SLAM algorithm, where both robot and
landmarks positions belong to the status of the system [3],
and have to be estimated. In the third step easily
recognizable landmarks are removed from the
environment, and the vision system has to extract natural
features to be used in SLAM from the environment. In the
fourth step, occupancy maps built on sonar data have to be
matched to landmarks extracted by the vision system and
to new sonar readings to achieve a more accurate and
robust self localization.
3.1
Kalman filter in localization
Before proceeding in our discussion on Kalman filtering
applied to the problem of self localization, we should first
note that the relations between the variables involved in this
schema are non-linear; thus, when we say Kalman filtering,
it is implicitly meant that we are talking about the Extended
Kalman Filter (EKF).
The formulae from which the EKF descends by linearization
are the well-known linear KF relations:

µ_{i+1} = [ A µ_i + B u_i ] + K_{i+1} [ m_{i+1} − H [ A µ_i + B u_i ] ]

K_i = Σ_i Hᵗ R⁻¹

Σ_{i+1} = [ [ Σ_i + Q ]⁻¹ + Hᵗ R⁻¹ H ]⁻¹

In our case, the non-linearity of the system evolution is
contained in the first of the following equations, which is
the same as the first of the previous ones, split into two steps:

p_{i+1} = µ_i + b(µ_i, u_i)                           (prediction)

µ_{i+1} = p_{i+1} + K_{i+1} [ m_{i+1} − H p_{i+1} ]   (correction)

where

b(s, u + ξ) = ( cos θ* · cos(α + ξ_2) (v + ξ_1)
                sin θ* · cos(α + ξ_2) (v + ξ_1)
                sin(α + ξ_2) (v + ξ_1) / L ) Δt
Here s = (x*, y*, θ*) is the real status of the robot, and
µ = (x, y, θ) is the estimated status of the robot. The measure
obtained from the vision system is a relative direction φ
towards the landmark detected in the processed image. If
w = (x_w, y_w) is the position of the landmark in the
workspace, the following relation holds:
γ = φ + θ = arctan( (y_w − y) / (x_w − x) )
where γ is the absolute angle of the estimated direction of
the landmark seen by the robot. Defining our measure m as
p = ( x  y )ᵀ

d = p − w

v = −( cos γ   sin γ )ᵀ

H = ⎡  cos γ   sin γ ⎤
    ⎣ −sin γ   cos γ ⎦

m = H [ (d · v) v − d ]
we have all the quantities needed to build the Kalman filter
formulae for our problem.
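Collecting the pieces, a compact sketch of the prediction step and of the measurement construction could look as follows; the wheelbase value L is an assumed parameter, and the gain and covariance updates would follow the standard formulae given above.

import numpy as np

L_WHEELBASE = 0.5   # assumed value of the vehicle parameter L

def predict(mu, v, alpha, dt):
    # p_{i+1} = mu_i + b(mu_i, u_i), with control u = (v, alpha)
    x, y, th = mu
    return np.array([x + np.cos(th) * np.cos(alpha) * v * dt,
                     y + np.sin(th) * np.cos(alpha) * v * dt,
                     th + np.sin(alpha) * v * dt / L_WHEELBASE])

def landmark_measure(mu, w, phi):
    # Build m = H[(d.v)v - d] from the bearing phi to a landmark
    # at known position w = (x_w, y_w).
    x, y, th = mu
    gamma = phi + th
    d = np.array([x, y]) - w
    v = -np.array([np.cos(gamma), np.sin(gamma)])
    H = np.array([[ np.cos(gamma), np.sin(gamma)],
                  [-np.sin(gamma), np.cos(gamma)]])
    return H @ ((d @ v) * v - d), H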
3.2
Uncertainty covariance evolution
For the evolution of the covariance of the uncertainty
(supposed Gaussian distributed), we simply suppose that
the uncertainty increases with the robot motion, proportionally
to the linear and angular distance covered by the robot, and
is reduced by measurements according to the usual Kalman filter
covariance propagation formulae.
We can therefore suppose that, if the robot moves for a
distance δ, the increase in the position error covariance
will be

Σ_{i+1} = Tᵗ ( T Σ_i Tᵗ + k_p ⎡ δ²  0  ⎤ ) T
                              ⎣ 0   δ² ⎦

and similarly as regards the orientation.
Experimental results show that the accuracy of localization
using this technique is within 10 cm in position and 5
degrees in orientation, during the execution of navigation
tasks in an office-like environment equipped with 6
landmarks.

4
Conclusions
The navigation system made of the three components
(mapping, path planning, localization) described above
appears to be an efficient and robust middle-level control
system in a robotic architecture oriented to navigation
in partially unknown environments.
Future development of this control system will be
oriented to a stronger coupling of the existing sensors (sonars
and vision) and to the integration of new sensing and
localization systems, such as an inertial unit, an electronic
compass, GPS for outdoor employment, and a laser range finder.

References
1. J. Borenstein and Y. Koren: Potential Field Methods and Their
Inherent Limitations for Mobile Robot Navigation. Proceedings
of the IEEE International Conference on Robotics and
Automation, 1398-1404 (1991)
2. H. Choset: Topological simultaneous localization and mapping
(SLAM): Toward exact localization without explicit localization.
IEEE Transactions on Robotics and Automation, 17(2), April
2001.
3. H. Christensen and G. Zunino: SLAM in realistic environments.
In Symposium on Intelligent Robotic Systems (Toulouse, F, July
2001), M. Devy, Ed.
4. G. Dissanayake, H. Durrant-Whyte, and T. Bailey: A
computationally efficient solution to the simultaneous
localisation and map building (SLAM) problem. In ICRA'2000
Workshop on Mobile Robot Navigation and Mapping, 2000.
5. A. Elfes: Using Occupancy Grids for Mobile Robot Perception
and Navigation. IEEE Computer 22(6): 46-57 (1989)
6. J.-S. Gutmann and K. Konolige: Incremental mapping of large
cyclic environments. In Proc. of the International Symposium on
Computational Intelligence in Robotics and Automation (CIRA),
2000.
7. D. Hahnel, D. Schulz, and W. Burgard: Map building with
mobile robots in populated environments. In Proc. of the
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS),
2002.
8. D. Koditschek: Exact robot navigation by means of potential
functions: Some topological considerations. In Proc. IEEE Int.
Conf. Robotics Automat., Raleigh, N.C., Mar. 1987, pp. 1-6.
9. P. Konkimalla and S. M. LaValle: Numerical computation of
optimal navigation functions on a simplicial complex. Proc.
Workshop on the Algorithmic Foundations of Robotics (1998)
10. J. C. Latombe: Robot Motion Planning. Kluwer Academic
Publishers, MA (1991)
11. F. Marando, M. Piaggio, A. Scalzo: Real Time Self
Localization Using a Single Frontal Camera. In Symposium on
Intelligent Robotic Systems (Toulouse, F, July 2001), M. Devy,
Ed.
12. H. P. Moravec: Sensor fusion in certainty grids for mobile
robots. AI Magazine, pages 61-74, Summer 1988.
13. S. Thrun: An online mapping algorithm for teams of mobile
robots. International Journal of Robotics Research, 2001.
The Design of the Diagnostic Agent for the RoboCare Project
P. Torasso, G. Torta, R. Micalizio, A. Bosio
Dipartimento di Informatica, Università di Torino (Italy)
e-mail: {torasso,torta,micalizio,bosio}@di.unito.it
Abstract
The paper presents the software architecture of the diagnostic module in the RoboCare multi-agent system.
After a conceptualization of the RoboCare domain, we focus the discussion in particular on the intelligent monitoring functionality. The Executor Monitor,
by performing a temporal interpretation of messages
coming from the robotic agents and from the intelligent sensors, provides fault detection functionality as
well as synthesis of qualitative clues to be exploited for
fault identification.
1
Introduction
Given a system representation in terms of its components and a set of observations of the system behavior, we have a diagnostic problem when one or more
observations don’t agree with the predictions made under the assumption of normal behavior. The first step
in diagnostic problem solving is clearly the detection
of such discrepancies (fault detection) and is closely
related to system monitoring. Usually one is also interested in fault localization (i.e. which component(s)
have failed) and fault identification (i.e. which specific faults have occurred). The MBR approach to diagnosis (Model-Based Reasoning) has proved
able to deal with all these problems also in real-world applications (e.g. (Console & Dressler 1999)).
The recent advancements in technology make possible
several applications for robotics, in particular for multiagent systems (e.g. RoboCup and AAAI Mobile Robots
Competition). In a multi-agent system the diagnosis
task presents some peculiar challenges. An important
choice regards whether the diagnosis (as well as other
reasoning tasks such as planning) should be performed
in a centralized or distributed manner. A distributed
approach has the main disadvantage of requiring that
all agents have auto-diagnostic capabilities. Moreover,
a global diagnosis can be obtained only by combining
the various local diagnoses in a central place; therefore the problems of communication overhead are not
completely avoided (e.g. (Pencolé, Cordier, & Rozé
c 2003, The RoboCare Project — Funded by
Copyright °
MIUR L. 449/97. All rights reserved.
2001)). Since in most domains a multi-agent system
involves agents with (at least) some degree of autonomy, the centralized approach has to deal with incomplete models of the behavior of the agents. Moreover,
in a centralized approach the diagnostic agent has to
deal with the partial observability of the global system
since it can rely just on information provided by sensors and/or volunteered by the agents. In our work we
focus on a centralized approach which is able to take
advantage of any auto-diagnostic information that the
autonomous agents may be able to provide. Another
problem specific to multi-agent diagnosis is that, by being autonomous and possibly pursuing different goals,
the agents can give rise to interactions which result in
system failures, for example when agents competing for
a resource cause delays or when a blocked agent obstructs the way to other agents. These kinds of failures
are clearly different from those caused by the break-up of, e.g., a mechanical component of an agent, and
should be handled appropriately. RoboCare (Cesta &
Pecora 2003) is a very interesting and demanding test
bed for the development and test of diagnostic strategies able to deal with multi-agent systems. In fact, in
RoboCare, we have to face anomalies not only caused
by faults in the robotic agents, but also failures due to
critical interactions among agents. Moreover, the temporal dimension (tasks have to be concluded within a
given deadline) increases the complexity, since temporal information (including metric time) has to be dealt
with.
The paper is organized as follows. In section 2 we
describe the architecture of the Diagnostic Agent and
its interactions with other modules of the Supervisor.
Section 3 presents the domain’s conceptualization. In
section 4 we illustrate the mechanisms adopted for messages interpretation, an important step in the diagnostic problem solving since it addresses the problem of
fault detection. Section 5 provides an outline of the
simulator of the robotic agents and the intelligent sensors we have built for testing the functionality of the
diagnostic agent. Section 6 contains a discussion of the
approach including some hints on the challenges the
fault identification module has to deal with.
2
Architecture of the system
In figure 1 we show the high level architecture of the
supervisor of RoboCare highlighting the modules which
are considered part of the Diagnostic Agent. We assume
that the planner decides which agent must execute a
particular action and the (partial) ordering in which
actions must be performed. The scheduler schedules
the execution of an action ACT when all the actions
that precede ACT in the ordering have completed successfully and the preconditions of the action ACT are
satisfied. Moreover, we assume that the preconditions
of each action explicitly check the capabilities required
to the agent in order to complete the action (for example the capability related to the robot mobility for a
Go To action). Thus different tasks assigned to different
agents can be started concurrently.
The Status is the data structure through which the
monitoring and diagnostic processes publish their results to the rest of the system (in particular, to the
Planner and Scheduler). The Status is partitioned in
two sections: observed and inferred information. In
the observed side the pieces of information captured
by the intelligent sensors are represented: for example,
the position of the agents in the environment. We assume that the intelligent sensors are reliable and thus
the information in this section is considered certain.
The inferred side contains the diagnoses computed by
the diagnostic module. In general, a diagnosis represents the status of all agents in the system. For example, the agent A has mobility “BROKEN”, agent
B has mobility “OK”, . . .. However, in order to explain system failures that are caused by agents interactions (e.g. traffic), diagnoses must sometimes explicitly
mention, along with the agents’ status, which troublesome interactions are hypothesized to have taken place.
This problem makes the diagnostic reasoning of RoboCare significantly more difficult than in distributed systems, where the interactions between sub-systems are
known and predefined. In a multi-agent system we can
only make weaker assumptions about the agents interactions. Since the system addressed by the RoboCare
project evolves over time, it is important to note that
even if the published Status refers only to the current
time instant, the diagnostic module keeps an internal
track of past states falling within a temporal window of
fixed size; this is required because diagnosis of systems
which evolve over time does in fact depend on the past
observations as well as on the current ones. The Executor Monitor (EM) receives from the Scheduler a set of
tasks to be executed. These tasks are assigned by EM
to the agents. The EM then monitors the agents actions
through a network of intelligent sensors that represent
the feedback from the environment. An intelligent sensor can recognize a set of specific events; when one of
these events occurs the sensor informs the EM with a
message. Notice that also the agents could send messages such as completion or auto-diagnosis, directly to
the EM. As we will see in section 4, each message requires an interpretation phase. The interpretation of
86
Figure 1: The architecture of the Diagnostic Agent
a message could modify the observed Status. For example if the message states that an agent has successfully crossed a door, the EM modifies the position of
the agent in the Status. Moreover the interpretation of
the same message could allow us to conclude the correct completion of a Go To action previously assigned
to the agent. When an action is completed correctly
by an agent the EM informs the Scheduler and may
receive from it a new task for the same agent. If no
new task is assigned, the agent temporarily goes into an idle status. Similarly, when an action fails the EM informs the Scheduler. It is worth noting that in this case
EM performs the task of fault detection and for this
reason EM sends the set of intermediate conclusions it
has reached during the message interpretation step to
the Fault Identification module. These intermediate results are facts that abstract details of the multi-agent
system such as metric deadlines and messaging infrastructure. Since the diagnostic process is invoked with
these (abstract) facts as inputs, it is possible to plug in a diagnostic module able to handle discrete events (e.g. (Torasso & Torta 2003)) even if it is not specifically designed for multi-agent environments.
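To fix ideas, the following minimal Java sketch (purely illustrative, not the RoboCare implementation; all names, and the window size, are hypothetical) shows one way the Status just described could be represented, with an observed side, an inferred side, and a fixed-size window of past snapshots:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of the published Status: observed facts,
 *  inferred diagnoses, and a fixed-size window of past observations. */
public class Status {
    private static final int WINDOW = 10;                           // arbitrary window size
    private final Map<String, String> observed = new HashMap<>();   // e.g. "A1" -> "R2"
    private final Map<String, String> inferred = new HashMap<>();   // e.g. "A1.MOBILITY" -> "OK"
    private final Deque<Map<String, String>> past = new ArrayDeque<>();

    /** Record a certain, sensor-derived fact (observed side). */
    public void observe(String key, String value) { observed.put(key, value); }

    /** Record a diagnosis computed by the diagnostic module (inferred side). */
    public void infer(String key, String mode) { inferred.put(key, mode); }

    /** Advance one time instant: archive the current observations,
     *  discarding those older than the temporal window. */
    public void tick() {
        past.addFirst(new HashMap<>(observed));
        if (past.size() > WINDOW) past.removeLast();
    }
}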
3 A domain conceptualization
In the following we describe how the scenario foreseen
for RoboCare is modeled inside our system. This scenario includes the environment (rooms, beds, etc.), the
robotic agents and the intelligent sensors. It is worth
noting that the assumptions made here refine the description of the scenario originally introduced in (Pecora & Cesta 2002).
3.1 Domain objects
We assume that the world is formed by a set of rooms;
two rooms can be connected by one or more doors.
There are three types of rooms:
1. kitchen: where an object called trayrack (containing
the trays with patients’ meals) is located.
2. stay room: with beds for elders (we model hallways
by setting the number of beds to zero).
3. repository: where the agents go when they are idle.
Each room is ideally subdivided into a set of areas. All areas except one (i.e. the transit area) are associated
with an intelligent sensor. When an event occurs in an
area the associated intelligent sensor sends a message
to EM. In particular we associate a sensor to all objects
(beds and the trayrack) and to all doors. The area of a room can thus be subdivided as shown in figure 2.
Each dot represents a sensor monitoring a limited
area of the room. It’s worth noticing that the sensor
on the door monitors two areas: one in room R1 and
one in the adjacent room; this sensor is complex and we
will describe its function later. As already noted, the
transit area is not associated with any intelligent sensor.
The EM therefore cannot detect events in this area. For each area our model indicates the maximum number of agents that can occupy it at the same time: for all object areas and door areas we assume that this number is 1, while for transit areas it is, currently, unbounded. We also assume that sensors are completely
reliable.
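A minimal Java sketch of this area model (illustrative only; all names are hypothetical) might look as follows, with object and door areas holding at most one agent and transit areas left unbounded and sensorless:

/** Hypothetical sketch of the area model: each area has a capacity
 *  and, except for transit areas, an associated intelligent sensor. */
public class Area {
    public enum Kind { BED, TRAYRACK, DOOR, TRANSIT }

    private final String id;
    private final Kind kind;
    private final int capacity; // max agents in the area at the same time

    public Area(String id, Kind kind) {
        this.id = id;
        this.kind = kind;
        // Object and door areas hold one agent; transit areas are unbounded.
        this.capacity = (kind == Kind.TRANSIT) ? Integer.MAX_VALUE : 1;
    }

    /** Transit areas are the only ones without an intelligent sensor. */
    public boolean hasSensor() { return kind != Kind.TRANSIT; }
}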
3.2 The agents’ behavioral modes
An approach to model-based diagnosis requires a detailed description of the system to be diagnosed; in
particular, it is necessary to single out its components
and to identify the set of behavioral modes for each component, where usually one behavioral mode represents
the normal behavior and the others represent different abnormal behaviors. In most cases the abnormal
behaviors directly correspond to different faults. In
the RoboCare project we have to deal with a multi-agent system where each robotic agent has a set of capabilities that could be working properly or incorrectly. A set of behavioral modes must therefore be associated with each capability of each robotic agent. The
following table summarizes the capabilities currently
taken into account and their related behavioral modes.
Capability      Behavioral modes
COMMUNICATION   OK, TX-BROKEN, RX-BROKEN, BROKEN
MOBILITY        OK, SLOWDOWN, BROKEN
HANDLING        OK, BROKEN
Typically a diagnosis is an assignment of a behavioral
mode to each capability of all robotic agents. As we
have said, however, a diagnosis could also include one or
more predicates that indicate troublesome interactions
between agents.
Figure 2: A sample abstract room subdivision

Concerning the temporal dimension, we model time as a discrete sequence of instants, assuming that behavioral modes can evolve from one temporal point to another according to specific constraints (systems of this kind are known as time-varying systems in the MBD literature; see (Console et al. 1994)). Mode Transition Graphs (MTGs) are a simple formalism for representing which transitions between modes are allowed. Given a component c of the system’s model, MTG(c) is a directed graph (V, E) where V is the set of behavioral modes for c and E is the set of edges; the edge (m, m′) ∈ E if component c can naturally evolve from behavioral mode m to mode m′. In RoboCare we have associated an MTG with each capability of each agent. For space reasons we show only one example: figure 3 reports the MTG for the communication capability of a robotic agent.
In particular, the nominal mode OK can evolve into the modes TX-BROKEN (transmitter broken), RX-BROKEN (receiver broken), or BROKEN in a single time slot. Moreover, this MTG indicates that faults are not intermittent, since a component can either persist in its current mode (self-loops) or evolve into a new faulty behavioral mode.
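As a purely illustrative sketch (hypothetical names, not the project code), an MTG can be encoded as a directed graph over behavioral modes; here it is populated with the transitions just described for the communication capability, assuming that transitions among distinct faulty modes are omitted since the text does not specify them:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of a Mode Transition Graph: vertices are
 *  behavioral modes, edges are the allowed mode evolutions. */
public class Mtg {
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void allow(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    public boolean canEvolve(String from, String to) {
        return edges.getOrDefault(from, Set.of()).contains(to);
    }

    /** The MTG of figure 3: faults are not intermittent, so every mode
     *  has a self-loop and no faulty mode returns to OK. */
    public static Mtg communication() {
        Mtg g = new Mtg();
        for (String m : new String[] {"OK", "TX-BROKEN", "RX-BROKEN", "BROKEN"})
            g.allow(m, m);          // persistence (self-loops)
        g.allow("OK", "TX-BROKEN"); // single-slot evolutions from the nominal mode
        g.allow("OK", "RX-BROKEN");
        g.allow("OK", "BROKEN");
        return g;
    }
}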
3.3 Agents’ Actions
In the next section we will illustrate several types of
intelligent sensors, whose task is to detect changes that
occur within a limited area of the environment. Since
these environment changes are foreseeable consequences
of agents’ actions, EM can build forecasts on the effects
of the action ACT as soon as EM assigns the action
ACT to an agent A. It is worth noticing that an action must have a high granularity in order to make it possible to foresee its consequences. In other words, we assume
that the execution of each action is accomplished via an
ordered sequence of events that must occur in the environment. The intelligent sensors can exploit this knowledge for recognizing events and providing suitable messages to EM. Thanks to the high granularity of agents’
actions, the EM could perform forecasts of events that must occur within a temporal window.

Figure 3: MTG for the communication capability

However the EM
cannot be completely accurate in its temporal forecasts
since, as we said, the agents may give rise to interactions
which cause delays or failures. This temporal aspect is
important for the interpretation of sensors’ messages
that we will describe in section 4.
Let us consider the case where agent A1 (currently
in room R1) must move to a bed B5 located in room
R3. In order to reach room R3 from R1 the agent must
cross room R2; moreover there are two doors D1 and D2
between R1 and R2 and just one door between R2 and
R3. We could assume that a MOVE-TO command is
available such that we just specify the destination point
of the agents’ complex movement (i.e. bed B5 in room
R3). As will be apparent from the next section, intelligent sensors send messages to the EM as soon as some
relevant events occur (for example A1 is crossing door
D1 from R1 to R2). The EM would face significant difficulties in interpreting the flow of such messages, because it is not obvious which ones refer to a nominal execution of the given command (e.g. the EM does not know whether A1 has chosen to reach room R2 through door D1 or D2).
In order to make it possible for the EM to detect as soon
as possible potential anomalous situations, we assume
that a complex movement such as the one described
above is decomposed by the planner into several simpler steps, for example:
GoToRoom(A1, R2, D1)
GoToRoom(A1, R3, D3)
GoToObject(A1, B5)
where GoToRoom instructs an agent to move to the transit area of an adjacent room through a specified door, and GoToObject instructs an agent to move to a particular object (in this case bed B5) located in the room where the agent currently is. In conclusion, the commands are detailed enough to allow the EM to create a set of temporally ordered event forecasts for each command; in this way the EM can create expectations about which messages it should receive (at least under nominal behavior of the involved agents). For example, after issuing the command GoToRoom(A1, R2, D1), the EM expects to be notified by the sensor associated with door D1 of the event that agent A1 has crossed the door, in order to consider the command successfully executed.
This solution reduces the communication flow between the robotic agents and the supervisor since it
does not require that the agents inform the EM step by
step on their position or action status (e.g. completed,
running).
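The following Java sketch (illustrative only; names and signatures are hypothetical) shows how the EM might derive the event forecast, with its metric deadline, for a GoToRoom command, anticipating the expectation mechanism detailed in section 4:

/** Hypothetical sketch: deriving the event forecast for a command.
 *  For GoToRoom(agent, room, door) the nominal execution produces a
 *  single door-crossing event within an estimated time plus tolerance. */
public class Forecaster {
    /** An expected sensor message with a metric deadline. */
    public static class Expectation {
        final String source;   // sensor expected to report (e.g. door "D1")
        final String event;    // e.g. "through A1 from R1 to R2"
        final long deadline;   // latest acceptable arrival time

        Expectation(String source, String event, long deadline) {
            this.source = source;
            this.event = event;
            this.deadline = deadline;
        }
    }

    /** The estimated travel time ta comes from the Scheduler; delta is
     *  the tolerated delay, so the deadline is td = ta + delta from now. */
    public Expectation forGoToRoom(String agent, String from, String to,
                                   String door, long now, long ta, long delta) {
        String event = "through " + agent + " from " + from + " to " + to;
        return new Expectation(door, event, now + ta + delta);
    }
}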
The high level of granularity of atomic actions available
to the planner does not imply that the robotic agents
have no autonomy. In fact the execution of an atomic
action by a robotic agent requires, for example, that the
agent itself has a complex navigation capability inside
a specified environment and is able to negotiate with
other agents when it has to access critical resources such
as doors.
Figure 4: The trayrack sensor automaton
3.4 Intelligent Sensors
As we said before, an intelligent sensor is associated with each object area. In particular, we assume that an intelligent sensor can recognize a set of specific events that occur in its area. When an event occurs, the sensor sends a message to the EM with relevant information concerning the observed event. We have foreseen the following three types of sensors, dealing with events occurring in the different object areas: the bed area sensor, the trayrack area sensor, and the door area sensor. For the trayrack area we need to detect the following events:
- agent A1 enters the area (of the trayrack)
- agent A1 takes the meal MEAL1 from the trayrack
- agent A1 leaves the area (of the trayrack)
This sensor is described by the simple automaton of figure 4. State free indicates that the area near the trayrack is not occupied by an agent. State busy indicates that there is an agent in the area near the trayrack. Finally, state busy with meal indicates that the agent has taken a meal from the trayrack. Each edge is labeled with an event (what the agent did in the sensor area) and with a list of messages sent to the system. Notice that the list of messages can be empty; in this case the event is not observable. We assume that an agent enters an area only when it must perform some work there. For this reason, an agent in state busy cannot exit the trayrack area before it has concluded the take meal action.
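A minimal Java rendering of this automaton might look as follows (illustrative only; which transitions emit messages, and the message strings themselves, are our assumptions):

/** Hypothetical sketch of the trayrack sensor automaton of figure 4.
 *  States follow the text; event and message strings are invented. */
public class TrayrackSensor {
    enum State { FREE, BUSY, BUSY_WITH_MEAL }

    private State state = State.FREE;

    /** Process an event in the trayrack area; returns the message to
     *  send to the EM, or null when the transition emits no message. */
    public String onEvent(String event, String agent) {
        switch (state) {
            case FREE:
                if (event.equals("enter")) {
                    state = State.BUSY;
                    return "message(TRAYRACK, enter " + agent + ")";
                }
                break;
            case BUSY:
                if (event.equals("take_meal")) {   // a busy agent cannot leave
                    state = State.BUSY_WITH_MEAL;  // before taking a meal
                    return "message(TRAYRACK, take_meal " + agent + ")";
                }
                break;
            case BUSY_WITH_MEAL:
                if (event.equals("leave")) {
                    state = State.FREE;
                    return "message(TRAYRACK, leave " + agent + ")";
                }
                break;
        }
        return null; // unmodeled event: not observable
    }
}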
The following events have to be detected for the bed
area B:
- agent A enters bed area B
- agent A clears the bed B
- agent A makes up the bed B
- agent A serves a meal to the elder sitting on bed B
- agent A leaves bed area B

Figure 5: Functionality of the sensor associated to the bed area

Figure 6: The sensor of a door monitoring two areas
The door area sensor is more complex since it has
to monitor two areas instead of one. Moreover, the
events to be captured are strongly influenced by time
evolution. Figure 6 shows the sensor associated with door
D1 (represented as a dot) together with the monitored
areas in room R1 and room R2 adjacent to the door
(D1.R1 and D1.R2 respectively).
Since we want to reduce the number of messages sent
by a sensor, the door sensor recognizes the following
events:
- an agent (say A1) crosses the door, with its direction, e.g. from R1 to R2 or vice versa
- an agent (A1) stays too long in a door area
- an agent (A1) leaves a door area after staying too long in it
So if agent A1 crosses the door without problems (i.e. no traffic or faults), the sensor sends one message only. Conversely, when agent A1 tries to cross the door at the same time as another agent A2, we assume that the agents can negotiate which one must cross first; thus, one of them has to leave the occupied area, returning to the transit area of the room. The sensor also has to detect whether an agent A stays too long in an area; in fact, A could have a broken mobility capability, or the negotiation among agents using the same door could be troublesome. Figure 7 shows the automaton that describes the door sensor when an agent crosses it from R1 to R2. The dotted edges are taken when agent A1, located in area D1.R1, meets another agent A2 located in area D1.R2. This is an example of a possible critical interaction between agents, which can cause an action delay or failure: the agents negotiate which one must cross first, and the other leaves the occupied area, returning to the transit area of the room. (For the sake of simplicity we assume that a mobility slowdown alone cannot activate a stay too long message.)

Figure 7: The door sensor automaton from R1 to R2
4 Message interpretation

The EM interprets a message with respect to the current status of the system and to its current expectations. An expectation represents a system forecast about the future when no faulty behavior occurs. In other
words, the system assumes a normal behavior for each
capability of an agent, and then creates an expectation
for a particular event that must occur within a certain
time deadline. So, if an expectation is not satisfied, we presume a faulty behavior in at least one capability, or a troublesome interaction among agents. For the sake of simplicity we show the interpretation process with an example. In particular, we illustrate a set of rules that allow us to discriminate whether an action has successfully
completed or failed. An expectation is represented by
a predicate with the following template:
expect(Source, Event, TimeDeadline)
where Source indicates the source of the expected message (sensors or agents), Event is the expected event
and TimeDeadline is the metric time within which the
EM should receive the message.
Now, let us suppose that the EM issues the command GoToRoom(A1, R2, D1) to agent A1 located in room R1. This command requires agent A1 to move from room R1 to room R2 through door D1 (see figure 2). The EM knows that when agent A1 crosses door D1, its associated sensor sends a message reporting this event. Thus the EM creates the following expectation: expect(D1, through A1 from R1 to R2, td). We assume that the Scheduler module estimates the time ta needed by A1 to move from R1 to R2; ta is used for computing the time deadline td within which the action must be completed (td = ta + ∆, where ∆ represents the tolerated delay). If no problems occur, the sensor sends the expected message: message(D1, through A1 from R1 to R2). The EM interprets this message with the following general rule:
message(Source, Event) ∧ expect(Source, Event, T) ∧ relate(Event, Task) ∧ CurrentTime ≤ T ⇒
retract[expect(Source, Event, T)],
assert[TaskCompleted Task],
update[CurrentStatus]
This rule matches each received message with an expect. Since the expectation is satisfied it can be removed; moreover, the EM knows that the task GoToRoom(A1, R2, D1) is completed, so it sends a ”done” message to the Scheduler and creates a new Status snapshot where agent A1 is in room R2. If some problem occurs, the expectation could expire. Another general rule removes the expired expectation and asserts a failure that the EM sends to the Fault Identification Module (FIM):
expect(Source, Event, T) ∧ relate(Event, Task) ∧ ∼delayable(Task) ∧ CurrentTime > T ⇒
retract[expect(Source, Event, T)],
assert[TaskFailed Task]
The condition that the task is not delayable (∼delayable(Task)) has been added because in some cases the expiration of an expectation does not mean that the action failed. For example, while agent A1 is moving, its mobility could change from OK to SLOWDOWN, or the agent could encounter traffic, so that the agent spends more time than estimated moving to room R2. For some actions the EM can grant more time for the completion of the task by creating a new type of expectation:

ExpectDelay(Source, Event, TimeDeadline)
This expectation is treated similarly to the previous type. For example, when the first expectation expires, the EM creates a delayed expect for a message from the sensor associated with door D1. When the expected message arrives, the following general rule is activated:
message(Source, Event) ∧ ExpectDelay(Source, Event, T) ∧ relate(Event, Task) ∧ CurrentTime ≤ T ⇒
retract[ExpectDelay(Source, Event, T)],
assert[TaskCompletedWithDelay Task],
update[CurrentStatus]
The EM removes the delayed expect and informs the Scheduler that the task is completed; moreover, it informs the FIM that the task was completed with a delay. Finally, the EM modifies the Status. If no message is received, the delayed expectation expires. In this case the following rule is activated:
ExpectDelay(Source, Event, T) ∧ relate(Event, Task) ∧ CurrentTime > T ⇒
retract[ExpectDelay(Source, Event, T)],
assert[TaskFailed Task]
The delayed expect is removed, and the EM informs the Scheduler and the FIM that the task has failed. As said above, the agents themselves can also send messages to the EM. These messages are interpreted by an appropriate set of rules, similar to the ones used for interpreting the messages coming from the sensors. Messages from agents are relevant for ruling out the possibility that failures depend on faults in the transmission capabilities of the agents. Moreover, agent messages may provide the EM with early information about critical interactions among agents which may result in the failure of plans. Early detection of problems is quite relevant since it allows recovery actions from the Scheduler to be taken in a more timely fashion. Without agent messages the EM would still be able to reach similar conclusions by exploiting sensor messages, but at a later time, since sensors can send a message only after the problem has occurred. Moreover, the agents could send auto-diagnosis messages: in
this case the FIM combines each single message to perform a global diagnosis of the system (in a way similar to the one proposed in (Pencolé, Cordier, & Rozé
2001)).
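Since the rules above are, in the actual system, encoded in a rule engine (JESS, see section 6), the following Java sketch is only an illustrative rendering (hypothetical names throughout) of the lifecycle they describe: a message matching a live expectation completes the task, while an expired expectation either fails the task or, if the task is delayable, is downgraded to a delayed expectation with an extended deadline:

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the expectation lifecycle described by the
 *  four rules above (the actual system encodes these as JESS rules). */
public class ExpectationManager {
    static class Expect {
        String source, event, task;
        long deadline;
        boolean delayed;    // true for an ExpectDelay
        boolean delayable;  // whether a delayed expect may be granted
        Expect(String source, String event, String task,
               long deadline, boolean delayed, boolean delayable) {
            this.source = source; this.event = event; this.task = task;
            this.deadline = deadline; this.delayed = delayed;
            this.delayable = delayable;
        }
    }

    private final List<Expect> expects = new ArrayList<>();

    /** Rules 1 and 3: a message matching a live expectation
     *  completes the task (possibly "with delay"). */
    public void onMessage(String source, String event, long now) {
        for (Expect e : new ArrayList<>(expects)) {
            if (e.source.equals(source) && e.event.equals(event)
                    && now <= e.deadline) {
                expects.remove(e);
                notify(e.task, e.delayed ? "completedWithDelay" : "completed");
            }
        }
    }

    /** Rules 2 and 4: expired expectations fail the task, unless the
     *  task is delayable and not yet downgraded, in which case a
     *  delayed expectation with an extended deadline replaces it. */
    public void onTick(long now, long extension) {
        for (Expect e : new ArrayList<>(expects)) {
            if (now > e.deadline) {
                expects.remove(e);
                if (!e.delayed && e.delayable) {
                    expects.add(new Expect(e.source, e.event, e.task,
                            e.deadline + extension, true, false));
                } else {
                    notify(e.task, "failed");
                }
            }
        }
    }

    private void notify(String task, String outcome) {
        System.out.println(task + " -> " + outcome); // stand-in for Scheduler/FIM I/O
    }
}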
5 Simulator
A Simulator of the environment is a much needed part
for most complex systems: the diagnostic agent of
RoboCare is no exception. In fact, we need to test the capabilities of the diagnostic agent even in the absence of the real flow of messages coming from the robotic agents and the intelligent sensors. For this reason we have developed a high-level simulator of the environment which is able to take care of the commands (actions) sent from the EM to the robotic agents (see figure 8). The commands (codified as XML messages)
are captured by the Message Exchange Module of the
simulator and dispatched to the appropriate simulated
robotic agent. Since each action is not completed in a
single time slot, but can take several instants, it is up
to the simulator to mimic the execution of the actions.
At each time slot (time representation is discrete), the
simulator considers all the commands currently under
execution and simulates the change in the environment
(for example in a GoT o command the simulator mimics
the progress of the robotic agent towards its goal). For
this reason the simulator maintains an internal structure (the ”internal status”) where the current position
of all robotic agents is recorded as well as the status of
all other objects in the modeled domain (beds, doors,
etc.). Once the new status has been determined the
simulator has to simulate the emission of the messages
by the intelligent sensors. To this end, the simulator exploits the automata describing the behavior of each intelligent sensor in order to decide both whether a message has to be produced and, if so, which message. All
the messages produced by the simulated intelligent sensors in a given time slot are gathered by the Message
Exchange module and then sent to the EM when the
clock is advanced by one unit.
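The per-time-slot cycle just described can be summarized by the following Java sketch (illustrative only; interfaces and names are hypothetical): advance every command under execution, feed the resulting environment events to the sensor automata, and batch the produced messages for the EM:

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the simulator's per-time-slot cycle. */
public class Simulator {
    interface Command { List<String> advanceOneSlot(); }  // e.g. one GoTo step
    interface Sensor  { String react(String event); }     // one automaton step

    private final List<Command> running = new ArrayList<>();
    private final List<Sensor> sensors = new ArrayList<>();
    private long clock = 0;

    /** One discrete time slot. Returns the messages gathered for the EM. */
    public List<String> tick() {
        List<String> outbox = new ArrayList<>();
        // 1. Mimic the progress of every command under execution;
        //    each command reports the environment events it caused.
        List<String> events = new ArrayList<>();
        for (Command c : running) events.addAll(c.advanceOneSlot());
        // 2. Let each sensor automaton decide whether an event in its
        //    area produces a message, and which one.
        for (String ev : events)
            for (Sensor s : sensors) {
                String msg = s.react(ev);
                if (msg != null) outbox.add(msg);
            }
        // 3. Advance the clock; the gathered messages are then delivered.
        clock++;
        return outbox;
    }
}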
The simulator would be needed even if the RoboCare project currently had both robotic agents and intelligent sensors at its disposal. In fact, the entire system has to be tested not only under nominal conditions, but also in the presence of faults. For this reason we need a special interface that allows the human testing the system to simulate the presence of faults (fault injection). The presence of faults makes the simulation much more difficult, because the simulator has to mimic the behavior of the robotic agents under degraded capabilities. Since we have modeled a number of capabilities for each robotic agent and more than one behavioral mode for each capability, the simulator must be able to
mimic each possible combination of modeled degraded
capabilities. It is worth noting that some combinations
of faults have an impact not only on the movement and
other actions performed by the robotic agents but also
on the messages that are received and sent, since we have also taken communication problems into consideration.
Figure 8: Simulator and the Diagnostic Agent
6 Discussion and Conclusion
This paper describes an approach for monitoring and
diagnosing a multi-agent system involving a number
of robotic agents whose behavior exhibits a significant
level of autonomy. The RoboCare diagnostic agent essentially relies on a centralized approach to monitoring and diagnosis, although it is able to take advantage of auto-diagnosis information volunteered by the robots. (Other works on diagnosing multi-agent systems, e.g. (Kalech & Kaminka 2003), advocate the centralized approach as a way of reducing communication overhead, while (Pencolé, Cordier, & Rozé 2001) presents a distributed approach for the supervision of a telecommunication network, where each station computes a local diagnosis which is sent to a central supervisor that computes the global diagnosis from the set of local diagnoses.)
Given the complexity of the task, our approach is based
on the decomposition of the problem into two distinct
(but interacting) sub-tasks: (1) monitoring and fault
detection, (2) fault identification. As concerns fault detection, this activity involves the interpretation of the
messages coming from the sensors and agents and the
synthesis of qualitative contextualized descriptions of
the observed system behavior. These qualitative descriptions can be seen as intermediate conclusions that
abstract from details of the multi-agent system such
as metric deadlines and message protocols. The second subtask (fault identification) makes use of Model
Based Diagnosis techniques starting from the intermediate conclusions derived by the first subtask. Although
the focus of this paper is not on fault identification,
it is worth noting that the FIM task can be reduced
to the diagnosis of discrete time varying systems, and
therefore existing efficient techniques for diagnosing this
class of systems can be exploited. An essential feature for the RoboCare domain concerns the ability to deal with metric time, since the Scheduler reasons
about the allocation of tasks to robotic agents in terms
of metric time intervals. Therefore the EM has to be
able to monitor the temporal evolution by taking into consideration the metric duration of actions and deadlines. The introduction of the expectation mechanism
provides a way for handling metric time information
at the appropriate level. The complexity of correctly
identifying the cause of a malfunction in RoboCare (as,
in general, in multi-agent systems) is particularly high
since local and minor faults may have very severe impacts on the whole system. For example, if an agent
is blocked in a door area, it causes the failure of all agents whose tasks require crossing the same door. It
should be clear that the sooner the Planner and the
Scheduler are informed about the occurrence of a fault,
the higher the chances that they can decide on recovery actions which avoid at least the most severe consequences of the fault. The approach to fault identification currently being developed for RoboCare has
many features that make it suitable for dealing with the
problem mentioned above. In particular, the ability to reason about indirect consequences of faults as well
as critical interactions among robotic agents makes the
diagnostic agent able to single out the root causes of
malfunctions. A prototypical version of the diagnostic
agent is implemented in JDK 1.4.1 SE. Message interpretation is performed by exploiting facilities provided by the JESS package, an expert system shell that can be tightly coupled to code written in Java. Finally, we use XML technologies for the exchange of messages among the EM, sensors and robots. The diagnostic agent works in cooperation with a high-level simulator of the environment, which emulates the exchange of messages with virtual intelligent sensors and robotic agents.
References
Cesta, A., and Pecora, F. 2003. The RoboCare project: Multi-agent systems for the care of the elderly. ERCIM News 53.
Console, L., and Dressler, O. 1999. Model-based diagnosis in the real world: lessons learned and challenges remaining. In Proc. IJCAI99, 1393–1400.
Console, L.; Portinale, L.; Theseider Dupré, D.; and Torasso, P. 1994. Diagnosing time-varying misbehavior: an approach based on model decomposition. Annals of Mathematics and AI 11:381–398.
Kalech, M., and Kaminka, G. 2003. On the design of social diagnosis algorithms for multi-agent teams. In Proc. IJCAI03, 370–375.
Pecora, F., and Cesta, A. 2002. Planning and Scheduling Ingredients for a Multi-Agent System. In Proceedings of UK PLANSIG02 Workshop, Delft, The Netherlands.
Pencolé, Y.; Cordier, M.; and Rozé, L. 2001. Incremental decentralized diagnosis approach for the supervision of a telecommunication network. In Proc. DX01.
Torasso, P., and Torta, G. 2003. Computing minimum-cardinality diagnoses using OBDDs. LNCS 2821:224–238.