DEPARTAMENTO DE INGENIERÍA DE SISTEMAS Y AUTOMÁTICA
ESCUELA SUPERIOR DE INGENIEROS
UNIVERSIDAD DE SEVILLA
Cognitive Autonomous Navigation System using
Computational Intelligence techniques
Sistema Cognitivo de Navegación Autónoma basado en
Técnicas de Inteligencia Computacional
by
Nieves Pavón Pulido
Computer Engineer
DOCTORAL THESIS PROPOSAL
SUBMITTED FOR THE DEGREE OF
DOCTOR FROM THE UNIVERSIDAD DE SEVILLA
SEVILLA, 2012
Supervisors:
Dr. Joaquín Ferruz Melero, Associate Professor (Titular de Universidad)
Dr. Aníbal Ollero Baturone, Full Professor (Catedrático de Universidad)
Acknowledgements
First of all, I would like to sincerely thank Dr. Joaquín Ferruz Melero and Dr. Aníbal Ollero Baturone for giving me the opportunity to carry out this Thesis work and for helping me to achieve the proposed objectives.
From an institutional point of view, I must express my gratitude to the public administration bodies that partially funded the research work presented in this Thesis: the Junta de Andalucía and the Ministerio de Educación, Cultura y Deporte, through the ROMOCOG, ROBAIR and CROMAT projects, respectively, and the Vicerrectorado de Formación Permanente e Innovación of the Universidad de Huelva, which contributed to the development of the BENDER robotic platform by awarding several research projects for the improvement of university teaching.
The completion of this Doctoral Thesis is not only an academic milestone; it is also an important event in my personal life. After deep reflection on the long road travelled during these years (probably too many), I must acknowledge that this work has been possible thanks to many good people who have supported me, and in spite of many others who have caused me great pain, both professionally and personally. However, even those who cause you harm, sometimes irreparable, may deserve gratitude. I therefore thank some people for the pain and the injustice they inflicted, because through them I discovered an inner strength that has made me fight tirelessly despite the difficulties.
Now that this journey is over, I want to keep alive in my memory only the good things. Thus, it only remains for me to thank all those who, in one way or another (professionally or personally), have collaborated with me or accompanied me along this road.
Thanks to the members of the Grupo de Robótica, Visión y Control of the Universidad de Sevilla, especially Rafa, Víctor, Fran, Fernando and, of course, Joaquín.
Thanks to the members of the Centre of Intelligent Systems of the Universidade do Algarve, above all Dr. Antonio E. Ruano and Dr. César Texeira, who helped me so much during my stay in Portugal.
Thanks to the members of the Robotics Laboratory of the Universidad de Murcia, especially Dr. Humberto Martínez Barberá, for his patience and hospitality. Thanks also to the members of the División de Sistemas e Ingeniería Electrónica for letting me carry out some experiments within the Departamento de Tecnología Electrónica of the Universidad Politécnica de Cartagena. Very special thanks to Blasi, who gave us the opportunity to disseminate part of the results of the ROMOCOG project in the media.
Thanks to the students of my courses during the 2011-2012 academic year, who showed me their patience and kindness, encouraging me during the final stretch of this Thesis.
Thank you, dear Dr. Juan Suardíaz Muro; you are kindness embodied in body and soul.
Thank you, Sharon; you and your husband showed me, during IROS 2010 in Taipei, that I can count on good friends on the other side of the world.
Thank you, Senga and John, for your help in correcting some documents in English and, above all, for your kindness and friendliness.
Thanks to my family (Cami, José and everyone else) for their unconditional support. Thank you, Leela; there is no more affectionate and loyal being in the world.
Thank you, Dad and Mum; my failures are mine alone, but all my successes are the consequence of the upbringing and love you have given me. The great values (honesty, sincerity, and the sense of honour and responsibility) I discovered neither at school nor at university; I learned them from you, my best teachers.
Thank you, Juan Antonio, for all the support and affection you have given me and still give me, both professionally and personally.
Thanks to everyone. I hope you will still be there, because the end of this academic stage is nothing but the beginning of a new research stage, full of challenges, which I hope will allow me to give back to society what society has given me.
Abstract
Nowadays there is a growing interest in developing inexpensive mobile robots that can
efficiently carry out service tasks in indoor environments such as domestic settings,
offices, hospitals or residences for elderly and disabled people, among others.
Reducing the cost of the hardware and software architectures is essential to extend the use
of service robots in such places. However, cost reduction can increase the inaccuracy of the
results obtained when a specific task is executed, and can reduce the processing and
storage capabilities. Therefore, any robot perception and control algorithm should take
these constraints into account and be efficient from a computational point of view.
On the other hand, autonomous navigation is a key task which any mobile service robot
should efficiently accomplish regardless of the hardware cost. In fact, the success of many
other tasks such as transportation of objects and persons, surveillance, guidance or
cleaning, among others, depends on the software architecture designed to solve the
problems related to navigation.
This Thesis presents contributions in the field of autonomous navigation of mobile service
robots in unknown dynamic indoor environments. A set of techniques based on both
Computational Intelligence and classic paradigms is used to solve significant problems
related to autonomous navigation, specifically the semantic representation of the
environment, dynamic obstacle avoidance, topological map building, self-localization, path
tracking, and kinematic and dynamic behaviour modelling.
The semantic representation of the setting, which consists of a set of semantically labelled
polylines obtained from the raw data acquired by a 2D laser scanner, is used in the
implementation of several reactive algorithms (local path planning and obstacle avoidance)
based on hierarchical fuzzy systems. Such representation is also used in the topological
map building and self-localization procedures.
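As a purely illustrative sketch (the data types below are hypothetical and are not taken from the Thesis), such a description can be stored as ordered lists of labelled points, using the point classes introduced in Chapter 3 (SP, OC, IAC, OAC, FW and LW):

```python
# Minimal sketch of a semantically labelled polyline (illustrative names only).
from dataclasses import dataclass
from enum import Enum
from typing import List

class Label(Enum):
    SP = "saturated point"        # measurement beyond the scanner range
    OC = "occlusion point"        # border of an occluded area
    IAC = "inside angle corner"
    OAC = "outside angle corner"
    FW = "frontal wall"
    LW = "lateral wall"

@dataclass
class LabelledPoint:
    x: float          # metres, in the 2D laser reference system
    y: float
    label: Label

# A polyline is an ordered list of labelled points; the whole environment
# description is a list of such polylines.
Polyline = List[LabelledPoint]

corridor_wall: Polyline = [
    LabelledPoint(0.0, 1.2, Label.LW),
    LabelledPoint(3.5, 1.2, Label.OAC),
    LabelledPoint(3.5, 2.0, Label.OC),
]
```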
The topological map building method produces a hybrid map of the environment,
represented as a graph whose nodes store relevant semantic and geometric information
about representative places. Such nodes are linked by trajectories defined as lists of robot
poses, which are estimated by combining the information provided by proprioceptive and
exteroceptive sensors, specifically two encoders and a pose change virtual sensor based on
the raw data acquired by an inexpensive 2D laser scanner. Classic statistical techniques are
used to solve the data fusion problem.
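The following minimal sketch (all names are illustrative assumptions, not the Thesis implementation) shows one possible encoding of such a hybrid map, where each node memorises the semantic description of a representative place and each edge stores the fused pose trajectory that links two places:

```python
# Hypothetical encoding of the hybrid (topological and metric) map.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Pose = Tuple[float, float, float]   # (x, y, heading) estimate after data fusion

@dataclass
class PlaceNode:
    node_id: int
    pose: Pose                       # pose at which the place was memorised
    polylines: list = field(default_factory=list)  # semantic description of the place

@dataclass
class HybridMap:
    nodes: Dict[int, PlaceNode] = field(default_factory=dict)
    # Edge (i, j) -> trajectory: the list of robot poses estimated by fusing
    # the encoder odometry with the laser-based pose change virtual sensor.
    edges: Dict[Tuple[int, int], List[Pose]] = field(default_factory=dict)

    def link(self, i: int, j: int, trajectory: List[Pose]) -> None:
        self.edges[(i, j)] = trajectory
```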
Natural landmark extraction and data association are needed to implement the
aforementioned virtual sensor. In order to improve its accuracy, the natural landmark
selection procedure is enhanced by a hierarchical fuzzy system that rejects the landmarks
which do not meet several requirements defined by the user. A fuzzy-based paradigm is
used since it provides tools to handle abstract concepts which can be naturally defined and
easily related by means of fuzzy rules.
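The toy example below conveys the flavour of this idea; the membership functions, variables and threshold are invented for illustration, whereas the actual hierarchical fuzzy system is described in Chapter 4:

```python
# Illustrative (not the Thesis) fuzzy filter for candidate segment landmarks.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def landmark_confidence(rmse: float, length: float) -> float:
    """Confidence in [0, 1] for a candidate segment landmark."""
    low_error = tri(rmse, -0.05, 0.0, 0.05)      # degree to which the fit error is low
    useful_length = tri(length, 0.2, 1.0, 1.8)   # degree to which the segment is long enough
    # Rule (min as fuzzy AND): IF error IS low AND length IS useful THEN confident.
    return min(low_error, useful_length)

# Landmarks whose confidence falls below a user-defined threshold are rejected.
print(landmark_confidence(rmse=0.01, length=0.9) > 0.5)   # True -> keep the landmark
```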
A set of experiments carried out in both simulated environments and real settings
demonstrates the performance of the designed algorithms. A Pioneer robot and the
BENDER (Basic ENvironment for DEveloping Robotics systems) robot have been used
in the validation procedures in real scenarios. In particular, the development of BENDER
is addressed with the purpose of demonstrating that the proposed techniques are efficient
even when the cost of the hardware is significantly reduced and the accuracy of the
proprioceptive and exteroceptive sensors and the computational capabilities of the robot
decrease.
On the other hand, the modelling of the behaviour of a medium-size car-like mobile robot
is also addressed. The model is implemented as a set of Radial Basis Function Neural
Networks (RBFNN), optimally selected by a Multi-Objective Genetic Algorithm (MOGA).
A computationally inexpensive path tracking method based on Multi-Layer Perceptron
Neural Networks (MLPNN) is also presented. In particular, these Computational
Intelligence-based techniques demonstrate that it is possible to learn the dynamic and
kinematic behaviour of a robot with strict non-holonomic and manoeuvrability constraints
without using complex mathematical models. Experiments with the car-like autonomous
vehicle ROMEO 4R demonstrate the effectiveness of the proposed methods.
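As a hypothetical illustration of the first building block (here the centres, widths and weights are hand-picked toy values, whereas in the Thesis they are selected by the MOGA), a Gaussian RBF network computes its output as a weighted sum of radial activations:

```python
# Toy Gaussian RBF network evaluation (illustrative values only).
import numpy as np

def rbf_predict(x, centres, widths, weights, bias=0.0):
    """Evaluate y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))."""
    d2 = np.sum((centres - x) ** 2, axis=1)   # squared distances to the centres
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian activations
    return bias + phi @ weights

# Three centres in a two-dimensional input space.
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([1.0, -0.5, 0.25])
print(rbf_predict(np.array([0.2, 0.1]), centres, widths, weights))
```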
In summary, this Thesis is focused on proving that well-known Computational
Intelligence paradigms such as Fuzzy Systems, Artificial Neural Networks and
Multi-Objective Genetic Algorithms provide very useful tools to solve problems in the
Robotics context. Such techniques can be successfully combined with classic methods to
solve the whole autonomous navigation problem, even for inexpensive service robots.
Summary
Nowadays, there is a growing interest in the development of inexpensive service robots that can carry out diverse tasks in indoor environments (homes, offices, hospitals or residences for the elderly and people with disabilities, among others). For such robots to be progressively and naturally introduced into these places, it is essential to reduce the cost of both the hardware and the software architectures. This cost reduction can negatively affect the results obtained when the robot executes certain tasks. Therefore, the designed algorithms have to take these drawbacks into account and must be efficient from a computational point of view.
On the other hand, autonomous navigation is a key task that any mobile service robot must carry out effectively, regardless of its economic cost. In fact, the success of numerous applications (transportation of objects or people, surveillance, guidance or cleaning, among others) depends on the software architecture designed to solve the problems related to navigation.
This Thesis presents contributions in the field of autonomous navigation of mobile service robots in unknown dynamic indoor environments. To solve the problems related to autonomous navigation, a set of techniques based on classic paradigms and on Computational Intelligence is proposed. Specifically, the semantic representation of the environment, mobile obstacle avoidance, topological map building, localization, path tracking and the modelling of the kinematic and dynamic behaviour of a robot are addressed.
The semantic description of the scenario, which consists of a set of semantically labelled polylines obtained from the data acquired by a 2D laser scanner, is used as input to several reactive navigation algorithms (local path planning and obstacle avoidance) that implement hierarchical fuzzy systems. This semantic representation is also used in the building of topological maps and in the localization process with respect to such maps.
The proposed topological map building technique generates a hybrid map of the environment, consisting of a graph whose nodes store relevant semantic and geometric information about representative places. The nodes are linked by trajectories defined as lists of robot poses. These poses are estimated by combining the information extracted from both proprioceptive and exteroceptive sensors (two optical encoders and a pose change virtual sensor based on the data provided by an inexpensive 2D laser scanner, respectively). Probabilistic techniques are used to solve the data fusion problem.
The implementation of the pose change virtual sensor depends on the natural landmark extraction and data association processes. It is therefore necessary to improve the accuracy of this sensor as far as possible. The selection of natural landmarks has a great influence on this process, so it has been improved by means of a hierarchical fuzzy system that rejects those landmarks that do not meet certain requirements specified by the user. Fuzzy Logic is used because it provides suitable means for defining abstract concepts and relating them to each other.
To verify the effectiveness of the proposed techniques, a set of experiments carried out in both simulated and real environments is presented. The Pioneer and BENDER (Basic ENvironment for DEveloping Robotics systems) robots are the basic platforms chosen to validate the designed software architecture in real scenarios. It is worth mentioning that the BENDER robot has been designed specifically in the context of this Thesis, with the purpose of proving that the proposed techniques are efficient even when the cost of the hardware architecture is considerably reduced and the accuracy of the sensors, as well as the computational capability of the onboard computer, decreases.
On the other hand, the modelling of the behaviour of a medium-size vehicle is also considered in this Thesis. The model of this robot is implemented as a set of Radial Basis Function Neural Networks optimally selected by a Multi-Objective Genetic Algorithm. In addition, the problem of explicit path tracking is also addressed by means of a computationally efficient algorithm based on a Multi-Layer Perceptron Neural Network. These Computational Intelligence-based methods demonstrate that it is possible to learn the dynamic and kinematic behaviour of a robot with non-holonomic and manoeuvrability constraints without the need for complex mathematical models. The experiments that validate these algorithms have been carried out on the ROMEO 4R vehicle.
In summary, this Thesis focuses on proving that Computational Intelligence-based paradigms (such as Fuzzy Systems, Artificial Neural Networks and Multi-Objective Genetic Algorithms) are particularly useful for solving problems in the context of Robotics. Moreover, these techniques can easily be combined with conventional methods to solve the autonomous navigation problem for affordable service robots.
Contents
Chapter 1___________________________________________________________________ 29
1.1 Motivation and objectives. ........................................................................................................... 29
1.2 Outline and main contributions. ................................................................................................. 32
1.3 Framework. .................................................................................................................................... 33
Chapter 2 __________________________________________________________________ 35
2.1 Service robots. ............................................................................................................................... 36
2.2 Autonomous navigation for service robotics. .......................................................................... 38
2.3 Environment perception and representation for autonomous navigation. ......................... 39
2.3.1 Type of sensors used in autonomous navigation. ........................................................ 39
2.3.2 Environment representation from raw data acquired by a 2D laser scanner. ......... 43
2.3.3 Representation of the environment from data acquired by other sensors............... 52
2.4 Map building and localization. .................................................................................................... 53
2.4.1 Hybrid SLAM (metric and topological). ........................................................................ 54
2.4.2 Fuzzy logic-based localization and map building. ........................................................ 70
2.5 Data association............................................................................................................................. 71
2.5.1 Data association without previous feature extraction. ................................................ 73
2.5.2 Data association with previous feature extraction. ...................................................... 74
2.6 Exploration of the environment, path planning and path tracking. ..................................... 75
2.7 Conclusion...................................................................................................................................... 77
Chapter 3 __________________________________________________________________ 79
3.1 Introduction. .................................................................................................................................. 79
3.2 Describing the environment using semantically labelled polylines. ...................................... 80
3.2.1 Detection of saturated points (SP). ................................................................................ 82
3.2.2 Detection of occlusion points (OC). .............................................................................. 83
3.2.3 Detection of inside and outside angle corners.............................................................. 85
3.2.4 Detection of frontal and lateral walls. ............................................................................ 87
3.3 Reactive navigation algorithm. .................................................................................................... 89
3.3.1 Computation of traversable segments. .......................................................................... 89
3.3.2 Generation of feasible safe paths from the set of traversable segments. ................. 91
3.3.3 Reactive navigation method based on fuzzy techniques. ............................................ 95
3.4 Experiments and results. ............................................................................................................ 102
3.4.1 Validating the algorithm that extracts the set of semantically labelled
polylines. ........................................................................................................................... 102
3.4.2 Validating the autonomous navigation system. .......................................................... 109
3.5 Conclusion.................................................................................................................................... 113
Chapter 4 _________________________________________________________________ 115
4.1 Generation of a topological map using a semantic polygonal description of the free
space. ............................................................................................................................................. 116
4.1.1 Semantic description of the free space. ....................................................................... 117
4.1.2 Topological map building. ............................................................................................. 126
4.2 Pose change estimation using the semantic polygonal representation of the
environment. ................................................................................................................................ 138
4.2.1 Implementation of a pose change virtual sensor based on the semantic
polygonal description of the environment. ................................................................. 139
4.3 EKF filtering for pose estimation. ........................................................................................... 153
4.3.1 Obtaining the output of the pose change virtual sensor by using statistical
methods............................................................................................................................. 154
4.3.2 Implementation of the EKF. ......................................................................................... 157
4.4 Experiments and results. ............................................................................................................ 161
4.4.1 Testing the exploration algorithm. ................................................................ 162
4.4.2 Testing the pose estimator. ............................................................................ 163
4.4.3 Testing the topological map building method. ........................................... 173
4.4.4 Testing the topological map building method in a real environment with the
BENDER robot. .............................................................................................. 175
4.5 Conclusion. ................................................................................................................................... 177
Chapter 5 _________________________________________________________________ 179
5.1 Introduction.................................................................................................................................. 180
5.2 Car-like service robots. ............................................................................................................... 181
5.2.1 Modelling a car-like vehicle using a set of Radial Basis Function neural
networks. ........................................................................................................................... 182
5.2.2 Selection of the best collection of neural networks that solves a problem
using a MOGA. ................................................................................................................ 186
5.2.3 A comparison between the RBF neural network-based models of the drive,
steering and heading systems and their respective linear models............................. 206
5.3 An Artificial Neural Network-based method for path tracking. .......................................... 209
5.3.1 The classic Pure Pursuit algorithm. ............................................................................... 209
5.3.2 An Artificial Neural Network-based path tracker. ..................................................... 211
5.4 Conclusion. ................................................................................................................................... 218
Chapter 6 _________________________________________________________________ 221
6.1 BENDER: A low-cost hardware and software architecture................................................. 221
6.1.1 Hardware architecture. .................................................................................................... 222
6.1.2 Software architecture. ...................................................................................................... 223
6.2 Main features of the proposed architecture............................................................................. 230
6.3 Computational cost analysis. ...................................................................................................... 231
6.3.1 Computational cost obtained in the remote computer. ............................................. 232
6.3.2 Computational cost obtained in the onboard computer. .......................................... 235
6.4 Conclusion. ................................................................................................................................... 238
Chapter 7 _________________________________________________________________ 239
7.1 Conclusions and contributions. ................................................................................................. 239
7.2 Dissemination of the results. ..................................................................................................... 241
7.3 Future work. ................................................................................................................................. 242
Appendix I ________________________________________________________________ 245
Appendix II _______________________________________________________________ 253
References ________________________________________________________________ 265
List of Figures
Figure 2-1. Software architecture for autonomous navigation based on independent software components. . 39
Figure 2-2. Parameter configuration of a Sick 2D laser scanner................................................................................ 43
Figure 2-3. Example of segmentation with three obstacles and two saturated areas. ............................................ 44
Figure 2-4. Example of hierarchical hybrid map with three levels: Semantic, topological and metric. .............. 55
Figure 2-5. Example of the accessibility, divisibility and connectivity properties that hold in a roadmap. ....... 64
Figure 2-6. Examples of roadmaps: Visibility graph, GVD and Silhouette method, respectively....................... 65
Figure 2-7. Example of space partition when S contains two points. ...................................................................... 66
Figure 2-8. Example of space partition when S contains three points. .................................................................... 66
Figure 2-9. Example of triangulations. Only the triangulation on the right fulfils the Delaunay condition. ..... 67
Figure 2-10. Delaunay triangulation and its dual Voronoi diagram. ......................................................................... 67
Figure 2-11. Example of generalized Voronoi graph in a setting with three obstacles. ........................................ 69
Figure 2-12. Environment perception in two consecutive instants. ......................................................................... 72
Figure 2-13. Alignment of segments for the data association process using common reference system. ......... 73
Figure 3-1. Hokuyo URG-04LX laser device features. ............................................................................................... 81
Figure 3-2. Global segmentation process based on searching semantic features. .................................................. 82
Figure 3-3. Example of saturated areas extraction after applying Algorithm 3-1. .................................................. 82
Figure 3-4. Sequence of polylines obtained after executing Algorithm 3-1 ............................................................. 83
Figure 3-5. Saturated areas associated to several objects that generate occlusions and the obtained OC
points. ................................................................................................................................................................ 84
Figure 3-6. Detection of an outside angle corner OAC according to the slope sign change. .............................. 86
Figure 3-7. Detection of an inside angle corner IAC according to the slope sign change. ................................... 86
Figure 3-8. List of polylines obtained after executing the algorithms that detect several features. ..................... 87
Figure 3-9. Scenario where points labelled as FW and LW are used to enable an exploration maneuver in a
closed corridor. ................................................................................................................................................ 89
Figure 3-10. Feasible traversable segments found in an environment described using polylines consisting
of SP, OC, OAC and IAC points. ................................................................................................................ 90
Figure 3-11. Calculated segments after executing Algorithm 3-5. ............................................................................. 91
Figure 3-12. Example that shows how to take into account the triangle condition. .............................................. 92
Figure 3-13. Clustering middle points using several uniformly distributed angular regions. Orange-colored
points are wrongly clustered. ......................................................................................................................... 93
Figure 3-14. Detecting convex points from the robot point of view. ...................................................................... 94
Figure 3-15. Dynamic generation of angular clusters. The clusters bounded by two of the intervals do not
enclose any middle point. Another interval defines a cluster that encloses ten middle points, a
further cluster contains eight middle points and, finally, the last interval forms a cluster with
three middle points. ....................................................................................................................................... 94
Figure 3-16. Straight lines obtained after grouping the set of middle points taking into account several
dynamically calculated angular intervals. ..................................................................................................... 95
Figure 3-17. Fuzzy system designed to select the suitable route according to several conditions defined by
the user's high-level application. .................................................................................................................. 96
Figure 3-18. Fuzzy system inputs and outputs. Figures (a), (b) and (c) show the relation between the
inputs EFFORT and DISTANCE and the output MERITFACTOR when the number of
points is 0, 0.5 and 1, respectively. Figure (d) shows the relation between the input
NUMBEROFPOINTS and the output PROBABILITY when EFFORT and DISTANCE are
0.5. ...................................................................................................................................................................... 97
Figure 3-19. Fuzzy sets used by the route selection fuzzy system. (a) Fuzzy variable
NUMBEROFPOINTS. (b) Fuzzy variable EFFORT. (c) Fuzzy variable DISTANCE. (d)
Fuzzy variable MERITFACTOR. ................................................................................................................ 97
Figure 3-20. Fuzzy system designed to select the suitable velocity according to the probability of collision,
distance to the closest feature and target point. ........................................................................................ 99
Figure 3-21. Fuzzy sets used by the velocity selection fuzzy system. (a) Fuzzy variables COLLISION and
TARGET. (b) Fuzzy variable MINDISTANCE. (c) Fuzzy variable SPEED. .................................. 100
Figure 3-22. Example of situation where it is necessary to reduce the velocity. ................................................... 100
Figure 3-23. Fuzzy system inputs and outputs. Figures (a), (b) and (c) show the relation between the
inputs TARGET and COLLISION and the output SPEED when MINDISTANCE is 0, 0.5
and 1, respectively. Figure (d) shows the relation between the input MINDISTANCE and the
output SPEED when TARGET and COLLISION are 0.5. ................................................................. 101
Figure 3-24. Example of extraction of a semantic polygonal description of the environment. The number
of points that should be scanned during each stage is highlighted. ..................................................... 103
Figure 3-25. Correspondences between raw points and the polyline segment that represents them. .............. 104
Figure 3-26. First experiment in a simulated environment. ...................................................................................... 106
Figure 3-27. Second experiment in a simulated environment. ................................................................................. 106
Figure 3-28. Third experiment in a simulated environment. .................................................................................... 107
Figure 3-29. Experiment carried out in a real environment. (a) Area of the laboratory. (b) Raw data
acquired by the 2D laser device. (c) Non-simplified representation of the environment. (d)
Simplified representation of the environment. (e) RMSE analysis. ..................................................... 108
Figure 3-30. The robot navigates through the main corridor of the hospital floor plan. ................................... 110
Figure 3-31. Results obtained after executing the algorithms proposed to navigate safely. ............................... 111
Figure 3-32. Diagram of the experiment carried out in a real environment. ......................................................... 112
Figure 3-33. Results obtained after executing the proposed algorithms to navigate safely in a real
environment. Pictures (a) represent the set of the polylines together with the calculated feasible
routes. Pictures (b) show what the robot sees. Pictures (c) show a view of the robot navigating
through the setting. ....................................................................................................................................... 112
Figure 4-1. Closed polygon obtained from the semantic description of the setting. ........................................... 119
Figure 4-2. (a) Convex point with respect to the robot position. (b) Concave point with respect to the
robot position. ............................................................................................................................................... 120
Figure 4-3. Example of Algorithm 4-1 execution. There are four convex sub-polygons in level 0 and one
convex sub-polygon in level 1. ................................................................................................................... 120
Figure 4-4. Set of feasible exploration paths obtained from the relationship between centroids. .................... 121
Figure 4-5. Algorithm that obtains the appropriate path by taking into account the output of the fuzzy
system and the decision of the human user. ............................................................................................ 123
Figure 4-6. Fuzzy system for the selection of exploration paths. (a) Fuzzy set for the variable that defines
the length of the path. (b) Fuzzy set for the variable that characterizes the angle defined by the
line that joins the centroid and the origin and the X axis of the robot reference system. (c)
Fuzzy set for the output variable MFactor. (d) Inputs and output of the fuzzy system. (e)
Distribution of the values calculated by the fuzzy system within the universe of discourse of
the input variables. ........................................................................................................................................ 124
Figure 4-7. Set of feasible exploration paths. The green path has the worst merit factor, since its length is
the shortest and its angle is 135º. The black path has the best merit factor, since its length is
the longest and its angle is the closest to 90º. ......................................................................................... 125
Figure 4-8. Sequence of stages that update the topological map by taking into account the free space
semantic description. ................................................................................................................................... 126
Figure 4-9. Example of domestic setting. (a) The robot is located at the main corridor. Several
intersections define significant places that should be remembered. (b) Semantic polygonal
representation of the environment at time t. ............................................................................................ 128
Figure 4-10. Example of environment whose free space is represented by a set of semantically labelled
convex sub-polygons. Sub-polygons 2, 4 and 5 are exploration sub-polygons since they contain
at least one exploration edge and their level is L0. An angular interval whose bounds are 80º
and 110º is used to select traversable edges according to the notion of perpendicularity. Only
sub-polygons 2 and 4 contain traversable edges that form an angle with the X-axis of the laser
reference system within the interval. ........................................................................................................ 129
Figure 4-11. Example of domestic setting. (a) Partitions of the closed polygon obtained from the set of
semantically labelled polylines. (b) Set of convex sub-polygons with labelled edges. Exploration
edges are blue colored. ................................................................................................................................. 130
Figure 4-12. First level sub-polygon with a diagonal inside the interval. ................................................................ 131
Figure 4-13. (a) Candidate nodes located at the centres of the intersections. (b) Candidate nodes located at
the centres of the intersections when the heading and the robot position change. .......................... 132
Figure 4-14. (a) Example of a non-valid candidate node when none of the first three requirements is met.
(b) Example of a non-valid candidate node when the fourth condition is not met. ........................ 133
Figure 4-15. Candidate nodes calculated according to TraEdge edges and diagonal segments. ........................ 134
Figure 4-16. The robot is inside the circle defined by the first candidate node. When the real node is
created the area around the robot is memorized. The memorized set of polylines is used when
the node is revisited. ..................................................................................................................................... 135
Figure 4-17. Example of topological map building. (a) Description of the free space; sub-polygons and
candidate nodes are displayed. (b) Simulated environment from which the topological map is
being built. ...................................................................................................................................................... 136
Figure 4-18. Example of topological map building. (a) Description of the free space; sub-polygons and
candidate nodes are displayed. The first node is located at (0,0). Sub-polygons 5 and 9 generate
the same candidate node. (b) Simulated environment from which the topological map is being
built. The candidate nodes are located at the intersections. .................................................................. 137
Figure 4-19. Example of topological map building. (a) Description of the free space using sub-polygons.
There are three real nodes (red) and three candidate nodes (dark gray). (b) Simulated
environment from which the topological map is being built. ............................................................... 138
Figure 4-20. Changes in the set of polylines during a time interval. ........................................................................ 140
Figure 4-21. Feasible association of segments between two consecutive instants. ............................................... 140
Figure 4-22. Stable segments generated after relating OAC and IAC points. ....................................................... 142
Figure 4-23. Example of natural landmarks represented as segments obtained at two consecutive instants.
Raw data at consecutive instants are slightly different. ......................................................................... 143
Figure 4-24. Fuzzy system used to calculate the confidence degree of a segment. In the example, the
fuzzy system obtains the confidence value for a pair of matched segments. .................................... 144
Figure 4-25. Hierarchical fuzzy system that calculates the confidence of a segment. .......................................... 145
Figure 4-26. Fuzzy types for the hierarchical fuzzy system that calculates the confidence of a segment
(natural landmark). ........................................................................................................................................ 146
Figure 4-27. Two segments from two consecutive instants are matched. The matched segment is taken as
the reference system in order to estimate the change in position of the robot. ............................... 147
Figure 4-28. Calculation of the variation in robot heading using two matched segments from two
consecutive instants. .................................................................................................................................... 147
Figure 4-29. (a) 2D laser reference systems at two consecutive times. (b) Ends of the segment according to
the 2D laser reference system at the first time. (c) Ends of the segment according to the 2D
laser reference system at the second time. ............................................................................................... 148
Figure 4-30. Estimation of the change in the position of the robot relative to the robot reference system
using translation and rotation operations. ............................................................................................... 149
Figure 4-31. Relation between the position of the robot according to the robot reference system and the
rest of the points that define the landmarks, using the 2D laser distance measurements. .............. 150
Figure 4-32. Application of a fuzzy system used to calculate the final unreliability degree of a pair of
matching segments, which makes it possible to reject non-valid pose change estimates. .............. 151
Figure 4-33. Fuzzy system that calculates the unreliability degree of a segment of a pair of matching
segments by taking into account its confidence degree and the eSi value. ........................................... 152
Figure 4-34. Fuzzy type that allows the defuzzification of the output Unreliability Degree. ............................. 152
Figure 4-35. General diagram of the estimation of the pose of the robot. ............................................................ 154
Figure 4-36. Parameters of the kinematic model of a differential drive system robot. ....................................... 158
Figure 4-37. Obtaining the current state and the covariance matrix using an EKF. ............................................ 160
Figure 4-38. Autonomous exploration experiment. (a) The robot is located at the initial position. It starts
to advance to the point represented by the position of goal centroid. (b) The robot is following
the secondary corridor that intersects with the main corridor. (c) The robot reaches the main
corridor. (d) The robot has turned to the left and it is following the main corridor. ....................... 162
Figure 4-39. Detail of the semantic representation of the free space at a given time. There are five
exploration sub-polygons that define the exploration areas located at the intersections of the
main corridor. The exploration sub-polygon 5 defines the end of the corridor. ............................. 163
Figure 4-40. Simulated world for the experiment. (a) Ground truth trajectory followed by the robot and
segment that represents the object in the simulated setting. (b) Relative position of the segment
from the robot point of view while the robot follows the trajectory. (c) Detail of the set of
relative positions of the segment while the robot is approaching the segment. ................................ 164
Figure 4-41. Representation of the covariance of the distances to the end points of the segment when
such segment is sensed along the trajectory. Since the Hokuyo 2D laser scanner is being
simulated, the error is 0.01 m if the distance value is in the interval [0.01, 1] meters, and 1% of
the measurement if the distance value is greater than 1 meter. ........................................................... 165
Figure 4-42. Trajectory reconstructed from the set of relative segments versus the ground truth trajectory. 165
Figure 4-43. Components of the main diagonal of the covariance matrices, shown in panels (a), (b)
and (c). ............................................................................................................................................................ 167
Figure 4-44. Off-diagonal covariance components of the covariance matrices that provide information
about the correlation between variables, shown in panels (a), (b) and (c). ........................................ 168
Figure 4-45. Topological map built while the robot navigates along the main corridor. (a) Only dead-
reckoning is used. (b) Only measurements from the virtual sensors are used if they are
available. ......................................................................................................................................................... 170
Figure 4-46. Results obtained for a trajectory estimate. (a) Estimation by using dead-reckoning only. (b)
Estimation by using the pose virtual sensor (when available). (c) Distances between the real and
estimated positions of the robot, when only dead-reckoning or only the virtual sensor (when
available) are used. ........................................................................................................................................ 171
Figure 4-47. (a) Trajectory followed by the robot while navigating through the simulated environment of
the hospital. (b) Map of the environment built while the robot navigates along the corridor.
The estimated trajectory is also represented. ............................................................................................ 172
Figure 4-48. Number of estimated landmarks for each sample of the trajectory; when the number of
landmarks is zero, only dead-reckoning is used. ...................................................................................... 172
Figure 4-49. Hospital map used for the topological map building. ......................................................................... 173
Figure 4-50. Topological map building obtained after applying the technique proposed in section 4.1.2. ..... 174
Figure 4-51. Experiment carried out in the main corridor of a building. The semantic polygonal
representation of the setting is highlighted. While BENDER approaches the end of the
corridor, a topological map whose nodes represent intersections (doors) is generated. Only
valid candidate nodes are transformed into real nodes. ........................................................................ 176
Figure 4-52. Experiment that validates the map building method in presence of mobile obstacles. (a)
BENDER detects candidate nodes for each intersection (door). (b) A person walks near the
robot. Non-valid candidate nodes are generated. (c) The person is not close to the robot and
only the valid candidate node is transformed into a real node. ............................................................ 177
Figure 5-1. Service robots. (a) Robots with a differential drive system. (b) Autonomous wheelchair for
transportation of people. ............................................................................................................................. 182
Figure 5-2. Bicycle-based kinematic model of a mobile car-like robot................................................................... 182
Figure 5-3. Front and side view of the ROMEO 4R. ................................................................................................ 183
Figure 5-4. Model of the vehicle using black boxes. .................................................................................................. 184
Figure 5-5. RBF neural network. ................................................................................................................................... 185
Figure 5-6. Basic structure of a classic genetic algorithm.......................................................................................... 188
Figure 5-7. Chromosome obtained from a record of a training data set. ............................................................... 190
Figure 5-8. Distributed software architecture that implements the MOGA. ........................................................ 191
Figure 5-9. Neural-based steering model. .................................................................................................................... 192
Figure 5-10. Log files for the steering training data set. ............................................................................................ 193
Figure 5-11. Closed-loop tester...................................................................................................................................... 194
Figure 5-12. Comparison between experimentally measured curvature (solid red line) and generated by the
RBF neural network-based model (dashed blue line) (Part 1). ............................................................. 195
Figure 5-13. Comparison between experimentally measured curvature (solid red line) and generated by the
RBF neural network-based model (dashed blue line) (Part 2). ............................................................. 196
Figure 5-14. Root Mean Squared Error (RMSE) obtained after executing the collection of experiments. ..... 196
Figure 5-15. Neural-based drive system model. .......................................................................................................... 197
Figure 5-16. Learning the behaviour of the drive system. (a) Patterns for learning; (b) patterns for testing;
(c) patterns for validating the model. ......................................................................................................... 198
Figure 5-17. Closed-loop tester...................................................................................................................................... 199
Figure 5-18. Set of centres. ............................................................................................................................................. 199
Figure 5-19. Comparison between the experimentally measured velocity (dashed blue line) and the
velocity generated by the RBF neural network-based model (solid red line) (Part 1). ..................... 200
Figure 5-20. Comparison between the experimentally measured velocity (dashed blue line) and the
velocity generated by the RBF neural network-based model (solid red line) (Part 2). ..................... 201
Figure 5-21. Results obtained after testing the closed-loop model. ........................................................................ 201
Figure 5-22. Black box model of the heading system. ............................................................................................... 202
Figure 5-23. Experimentally measured heading, velocity and curvature. ............................................................... 202
Figure 5-24. Black box model of the heading system. ............................................................................................... 203
Figure 5-25. Comparison between experimentally measured heading (dashed blue line) and simulated
heading generated by the RBF neural network-based model of the heading system (solid red
line) (Part 1). ................................................................................................................................................... 204
Figure 5-26. Comparison between experimentally measured heading (dashed blue line) and simulated
heading generated by the RBF neural network-based model of the heading system (solid red
line) (Part 2). .................................................................................................................................................. 205
Figure 5-27. Results obtained after testing the closed-loop model. ........................................................................ 205
Figure 5-28. Kinematic model and first order dynamic models. ............................................................................. 206
Figure 5-29. First order dynamic models. .................................................................................................................... 206
Figure 5-30. Steering System. Comparison between experimental data, simulation results from the RBF
neural network-based model and simulation results from the linear model. ..................................... 207
Figure 5-31. Drive system. Comparison between experimental data, simulation results from the RBF
neural network-based model and simulation results from linear model. ............................................ 208
Figure 5-32. Heading system. Comparison between experimental data, simulation data from the RBF
neural network-based model and simulation data from the linear model system. ............................ 209
Figure 5-33. A piece of the path represented by a set of consecutive goal points. .............................................. 212
Figure 5-34. Set of five goal points represented in polar form in the robot reference system. ......................... 212
Figure 5-35. Set of feasible manoeuvres for obtaining the training data set. The minimum length of the
curvature radius is 2.5 meters for the robot ROMEO 4R. ................................................................... 213
Figure 5-36. MLP neural network architecture used for path tracking. ................................................................. 213
Figure 5-37. Diagram of the algorithm used to validate the performance of the proposed ANNPT. ............. 215
Figure 5-38. Set of experiments carried out to demonstrate that the ANNPT has a better performance
than the Pure Pursuit controller. ................................................................................................................ 216
Figure 5-39. Validation of the ANNPT using the robot ROMEO 4R. The red dotted line represents the
path followed by the robot when the ANNPT is executed. The black line represents the
original path to be tracked. .......................................................................................................................... 217
Figure 6-1. The BENDER service robot. .................................................................................................................... 222
Figure 6-2. Distributed software architecture. Red lines represent the information shared between high-
level and low-level processes. Black lines represent the information shared between processes
of the same level. .......................................................................................................................................... 223
Figure 6-3. BENDER differential drive system. ........................................................................................................ 224
Figure 6-4. Drive system software component. .......................................................................................................... 225
Figure 6-5. Software component that allows the robot to acquire raw data from the Hokuyo 2D laser
device............................................................................................................................................................... 227
Figure 6-6. Software component that controls the voice synthesis device. ........................................................... 227
Figure 6-7. Software component that controls the gyratory servo platform. ........................................................ 228
Figure 6-8. Detailed diagram of the high-level software architecture. .................................................................... 230
Figure 6-9. Topological map built during the experiments for measuring the computational cost. ................. 232
Figure 6-10. Computational cost for the semantic polygonal representation task in the remote computer. .. 233
Figure 6-11. Computational cost for the data association task. ............................................................................... 233
Figure 6-12. Computational cost for the self-localization task in the remote computer. .................................... 234
Figure 6-13. Computational cost for the topological map building task in the remote computer. ................... 234
Figure 6-14. Computational cost for the exploration task based on the free space semantic description
executed in the remote computer............................................................................................................... 235
Figure 6-15. Computational cost for the semantic polygonal representation task in the onboard computer. 236
Figure 6-16. Computational cost for the data association task in the onboard computer. ................................. 236
Figure 6-17. Computational cost for the self-localization task in the onboard computer. ................................. 237
Figure 6-18. Computational cost for the topological map task in the onboard computer. ................................ 237
Figure 6-19. Computational cost for the exploration task in the onboard computer. ......................................... 237
Figure 6-20. Computational cost for YARP communications. ................................................................................ 238
List of Tables
Table 2-1. Significant tasks that service robots should carry out according to the IEEE Robotics and
Automation Society. ....................................................................................................................................... 37
Table 2-2. Set of sensors frequently used in mobile robots for autonomous navigation. ..................................... 40
Table 3-1. Semantic labels for the representative points that bound the segments. .............................................. 82
Table 3-2. Rule base with NUMBEROFPOINTS, EFFORT and DISTANCE as inputs, and
MERITFACTOR as output. ......................................................................................................................... 98
Table 3-3. Rule base with inputs COLLISION and TARGET and SPEED as output. .................................... 101
Table 3-4. Rule base with inputs VEL and MINDISTANCE and final SPEED as output. ............................ 101
Table 3-5. Computational complexity of the algorithms used to obtain the semantic polygonal
representation of the environment. ........................................................................................................... 103
Table 4-1. Conditions for assigning labels to edges. .................................................................................................. 118
Table 4-2. Rule base with inputs PLenght and DirChange and MFactor as output. ........................................... 124
Table 4-3. Rule base with inputs RMSE0 and RMSE1 and PointConfidence as output. ................................... 146
Table 4-4. Rule base with inputs P0CONFIDENCE and P1CONFIDENCE and SEGMConfidence as
output. ............................................................................................................................................................. 146
Table 4-5. Rule base with inputs SegConf and RMSE eSi and Ud (Unreliability Degree) as output. ................ 152
Table 5-1. Set of desired steering commands. ............................................................................................................. 192
Table 5-2. Sequence of patterns from a training data set with N=M=15. ............................................................. 193
Table 5-3. Neural network parameters. ........................................................................................................................ 195
Table 5-4. Best neural network parameters. ................................................................................................................ 199
Table 5-5. Neural network parameters. (Part 1) ......................................................................................................... 203
Table 5-6. Neural network parameters. (Part 2). ........................................................................................................ 204
Table 5-7. Collection of tests carried out to demonstrate the performance of the proposed ANNPT in a
simulated environment. ................................................................................................................................ 215
Table 6-1. Characteristics of the computers used in the experiments. A Linux operating system is used in
both computers. ............................................................................................................................................ 231
Table 6-2. Selected tasks for measuring the computational cost. ............................................................................ 231
List of Algorithms
Algorithm 3-1. Extracting saturated points. .................................................................................................................. 83
Algorithm 3-2. Extracting occlusion points. ................................................................................................................. 84
Algorithm 3-3. Extracting candidate outside angle corners........................................................................................ 85
Algorithm 3-4. Simplifying a polyline. ............................................................................................................................ 88
Algorithm 3-5. Generating feasible traversable segments. .......................................................................................... 90
Algorithm 4-1. Generating convex sub-polygons. ..................................................................................................... 119
No victory without suffering.
J.R.R. Tolkien
It's the job that's never started as takes longest to
finish.
J.R.R. Tolkien
Chapter 1
Introduction.
This Thesis presents contributions in the field of autonomous navigation of mobile service
robots in unknown dynamic indoor environments. Computational Intelligence-based
techniques are used to solve significant problems related to autonomous navigation,
specifically the semantic representation of the environment, dynamic obstacle avoidance,
topological map building, self-localization, path tracking and kinematic and dynamic
behaviour modelling.
The proposed methods have been validated on well-known robotic platforms such as a Pioneer
robot and the car-like autonomous vehicle ROMEO 4R. However, these robotic platforms
are relatively expensive, and their economic cost makes it difficult to introduce them in a
real domestic setting. Therefore, the development of the inexpensive mobile service robot
prototype BENDER (Basic ENvironment for DEveloping Robotics systems) has also
been addressed, with the purpose of demonstrating that the proposed techniques remain
efficient even when the cost of the hardware is significantly reduced and the accuracy of
the proprioceptive and exteroceptive sensors and the computational capabilities of the robot
decrease.
This chapter firstly presents the motivation and the main objectives of the research carried
out. Then, the outline and main contributions of the Thesis are presented. Finally, the
framework in which the work has been developed is described.
1.1 Motivation and objectives.
Nowadays, there is a growing interest in developing inexpensive indoor mobile service
robots that can efficiently perform service tasks for human users in a domestic environment.
In general, reducing the cost of hardware devices tends to increase the inaccuracy of the
results obtained when a specific task is executed. Moreover, processing and
storage capabilities may also be reduced. Therefore, any algorithm that allows the robot to
carry out a task should be very efficient from a computational point of view.
Autonomous navigation is a key task which any mobile service robot should efficiently
accomplish. In fact, the success of many other tasks such as transportation of objects and
persons, surveillance, guidance or cleaning, among others, depends on the software
architecture designed to solve the problems related to navigation. This software
architecture should consist of a set of components that allow the robot to acquire
information from the environment and to process such information with the purpose of
obtaining a suitable representation of the setting. Simultaneous localization and map
building could be addressed by another component that takes into account the obtained
representation of the scenario. Once the robot has acquired a certain degree of semantic
knowledge about the space, it could plan trajectories to reach goal places or to interact with
the environment and human users as required.
Many researchers have partially contributed to solving the whole autonomous navigation
problem in different environments (both indoor and outdoor) by using heterogeneous
strategies [1]. Many of these strategies are based on solutions that use elements from
Control Theory [2], [3], Graph Theory and Topology [4], [5], [6], [7], [8] or Probability
Theory [9], [10], [11], among many others. Computational and Artificial Intelligence-based
techniques have also been used [12], [13], [14], [15], [16], [17], albeit to a lesser extent, even
though they have yielded promising results in other fields of Engineering [18],
[19], [20], [21].
On the other hand, the experiments that prove the effectiveness of the methods proposed
by these researchers are carried out using both simulated environments and robots, and
well-known commercial robotic platforms whose cost is usually high. A wide range of
exteroceptive and proprioceptive sensors can also be used. The more accurate the sensors
are, the more reliable the obtained results and the more expensive the robotic platform.
Therefore, it is necessary to balance both issues if service robots are to be introduced in
real domestic environments.
The main objective of this Thesis work is to obtain a set of Computational
Intelligence-based techniques that allow an inexpensive mobile service robot to
navigate autonomously and smartly through an unknown dynamic indoor
environment. In particular, this aim can be met if the following sub-goals are fulfilled:
• Obtaining a low-level semantic representation of indoor settings from data
acquired by using a low-cost 2D laser device (the use of this kind of exteroceptive
sensor keeps the necessary balance between accuracy and cost). A set of algorithms,
whose computational cost should be low, will calculate this environment
description, which will consist of semantic and geometric elements. This hybrid
representation will allow the robot to store a model of the world combining metric
information with simple semantic concepts, similar to those used by humans.
• Obtaining a semantic description of the free space by considering the hybrid
representation of the environment. The description will consist of a set of convex
sub-polygons whose edges will be semantically labelled. This classification of the free
space will allow the robot to determine which places are potentially explorable. The
robots Pioneer and BENDER will be used to validate the proposed algorithms in a real
environment.
• Designing a set of computationally efficient algorithms to accomplish the tasks
related to exploration and obstacle avoidance in an unknown and dynamic
environment. The intelligent behaviour of the robot will be implemented by using a
hierarchical fuzzy system whose rule base will be designed by a human expert. The
robots Pioneer and BENDER will be used to validate the proposed technique in a real
setting.
• Developing a method for topological map building of the environment that
takes into account the semantic representation of the free space. The
topological map will be implemented as a graph where each node will store
relevant semantic and geometric information about a specific place. Such nodes will
be linked by paths defined as lists of robot poses. Each robot pose estimate will be
calculated by combining the information acquired by proprioceptive and exteroceptive
sensors with a method based on an Extended Kalman Filter (EKF). A procedure based
on a hierarchical fuzzy system, whose rule base will be defined by a human expert, will
enhance the natural landmark extraction procedure. The robots Pioneer and
BENDER will be used to validate the proposed method in a real scenario.
• Modelling the behaviour of a medium-size car-like mobile robot by using a set
of Radial Basis Function Neural Networks (RBFNN), optimally selected by a
Multi-Objective Genetic Algorithm (MOGA), and developing an efficient and
computationally inexpensive path tracking method using a Multi-Layer
Perceptron Neural Network (MLPNN). In particular, these Computational
Intelligence-based techniques will demonstrate that it is possible to learn the dynamic
and kinematic behaviour of a robot with strict non-holonomic and manoeuvrability
constraints without using complex mathematical models. The robot ROMEO 4R will be
used to validate the proposed methods.
• Designing and building the mobile service robot BENDER, taking into
account that the cost of hardware devices should be significantly reduced. In addition,
an open and distributed software architecture will provide the necessary services to
validate the set of algorithms developed in this work. Furthermore, a simulation
framework based on the open source tool Player/Stage [22] will be developed to
improve the validation process.
1.2 Outline and main contributions.
This Thesis consists of seven chapters complemented with two appendixes. A summary of
the contents of the chapters is described as follows:
In Chapter 2, the problem of autonomous navigation of mobile service robots in
unknown dynamic indoor environments is presented. In particular, the need for a set of
techniques that allow a mobile robot to acquire information from the environment
with the purpose of obtaining an appropriate representation of the scenario is considered,
since from this representation the mobile robot can be aware of the distribution of the free
space and can accomplish other complex tasks such as map building, self-localization,
path planning and path tracking. The state of the art on autonomous navigation is
reviewed, mainly taking into account the related work in the fields of environment
representation, metric and topological map building and self-localization. Computational
Intelligence-based methods applied to Mobile Robotics are also outlined.
In Chapter 3, a set of computationally inexpensive algorithms, which allow the mobile
robot to obtain a semantic polygonal representation of the environment from 2D laser raw
data, is described in detail. Specifically, relevant features such as corners or occlusions are
detected and properly represented by low-level semantic tags. A method to generate a set
of safe paths for the robot is presented. The paths are built from clusters of midpoints of
the traversable segments defined by the semantic tags. A hierarchical fuzzy system that
selects the suitable route to follow according to the constraints given by the designer is
implemented by using the XFuzzy 3.0 tool. Another fuzzy controller calculates the
suitable velocity while the robot navigates. In order to validate the performance of the
proposed techniques, a set of experiments is carried out, both in a simulated environment
and in real settings, using a Pioneer robot and BENDER (the low-cost robotic platform
built in this work).
In Chapter 4, a set of techniques that allow a mobile service robot to build a hybrid
(metric and topological) map is proposed. This hybrid map consists of a set of topological
nodes that contain semantic polygonal descriptions of places. The nodes are
interconnected by links that store the path between them. The algorithm that detects
possible areas where a node should be created uses a semantic description of the free
space. This description is calculated by using the semantic representation of the
environment proposed in Chapter 3. In addition, exploration is addressed by using a
hierarchical fuzzy system that selects an exploration route by taking into account the
semantic description of the free space. The trajectory tracked by the robot while it is
exploring the setting and building the map is calculated by taking into account the
information acquired by proprioceptive and exteroceptive sensors. In particular, the path
between topological nodes is estimated by calculating a sequence of poses which are
obtained after fusing the data provided by a dead reckoning method and the data acquired
by a virtual sensor based on a 2D laser device. Data fusion is carried out by using an
Extended Kalman Filter. The selection of features for data association is improved by means of
a hierarchical fuzzy system. Experiments carried out in a simulated environment and in real
settings using the Pioneer and BENDER robots validate the proposed methods.
In Chapter 5, a method for mobile robot model learning based on Computational
Intelligence techniques is presented. Specifically, the behaviours of the drive system and the
steering system of a medium-size car-like robot are modelled by using a hierarchical system
consisting of a set of RBF neural networks. The selection of the RBF neural network
parameters is carried out by means of a Multi-Objective Genetic Algorithm (MOGA).
Furthermore, the path tracking problem is addressed by using an MLP neural network-based
technique. Both proposed methods are validated in experiments carried out with the
autonomous car-like vehicle ROMEO 4R.
In Chapter 6, the hardware architecture of the low-cost service robot BENDER is
detailed. Moreover, the software architecture based on a set of distributed components that
share the information by means of the YARP middleware is also described in depth. A
comparative study between BENDER and other commercial robots demonstrates that it is
possible to reduce the cost of the robotic platform while maintaining some significant
features.
In Chapter 7, the main conclusions and contributions of this Thesis are discussed.
Furthermore, the future work is summarized.
1.3 Framework.
This Thesis has been developed within the following research projects that have provided
the necessary funding and equipment:
CROMAT: Coordinación de RObots Móviles Aéreos y Terrestres. Funded by the
Dirección General de Investigación (DPI2002-04401-C03-03). The main objective of this
research project was the development of new methods and techniques for the cooperation
of aerial and ground mobile robots.
ROBAIR: Robótica y fiabilidad aérea. Funded by the Spanish Research and Development
Program (DPI2008-03847). The objective of this project is the research and development
of new methods and technologies to increase the safety and reliability mainly in Aerial
Robotics. The project includes three main topics: safe and reliable controlled platforms,
multi-UAV safe and reliable cooperation and integration of the aerial robots with the
ground infrastructure.
ROMOCOG: The project ROMOCOG (Sistema automático de adquisición de habilidades
cognitivas en entornos dinámicos mediante percepción multimodal, TEP-4479) is financed
by the Andalusian autonomous administration (Junta de Andalucía). Its main goals are the
development of tools and methods to provide high level cognitive functions for robots in
order to increase their flexibility, their tolerance to changes in the environment and their
ability to interact with persons.
ROMOCOG II: The Project ROMOCOG II (TEP-6412) is an extension of the project
ROMOCOG. It is financed by the Andalusian autonomous administration (Junta de
Andalucía). Its main goals are the improvement of the results obtained by the project
ROMOCOG and the development of new extensions related to the enhancement of the
cognitive capabilities of a service robot.
Furthermore, it is worth mentioning that part of this Thesis was also developed during a
stay at the Centre of Intelligent Systems (University of Algarve), in Portugal (from October
15th, 2006 to January 15th, 2007). This centre is a multidisciplinary research and
development unit whose main objective is to promote fundamental and applied research in
Computational Intelligence and supporting technologies, in particular in the areas of
computing and information systems, control, optimization, biomedical signal processing,
cognitive neuroscience and economics by means of applications that use re-configurable,
high-performance computing systems based on homogeneous, heterogeneous, and
distributed architectures.
All we have to decide is what to do with the time that is
given us.
J.R.R. Tolkien
Chapter 2
Autonomous navigation of mobile service robots: State of the Art.
Nowadays, technology is crucial in our lives. There are many machines that
facilitate everyday tasks at home. In the industrialized world, it is easy to find a
computer, generally equipped with Internet access, that can handle all kinds of
information (personal, economic, recreational or related to health, among others).
Furthermore, in the last decade, the use of tiny electronic devices with the ability to stay
connected and share information anywhere and at any time has grown exponentially.
Thanks to the development of technology, important services for human beings such as
public and private transport systems have been improved. In particular, such services are
faster, safer and more comfortable.
In the industrial context, there is also a high degree of automation and it is possible to find
robots that perform repetitive tasks in a very efficient way. However, despite this enormous
technological growth in diverse fields, it is difficult to find specific intelligent autonomous
service robots that efficiently work in a domestic setting or in public environments such as
hospitals, schools, offices or museums. There exist some expensive prototypes that can
perform complex tasks, but they are not commercially available products. On the other
hand, commercial service robots usually provide very reduced features.
Currently, several of the main challenges that robotic researchers face are related to the safe
autonomous navigation of mobile service robots. The comprehension of the environment
and the ability to interact with the user in a natural manner are also abilities that service
robots should acquire.
Autonomous navigation does not only mean that a mobile robot is capable of moving
through the setting while avoiding obstacles in a reactive way; it is also necessary that the
robot can build a map by considering the unchanging elements of an unknown dynamic
semi-structured environment. In addition, the robot should be capable of localizing itself
within the map, recognizing and learning places and intelligently accomplishing the
assigned tasks.
For all the above reasons, this chapter describes those works that are of interest in the field
of service robot design and development. In particular, two significant issues are
highlighted: degree of autonomy and cost of production. In addition, the state of the art
related to autonomous navigation in dynamic semi-structured indoor settings is analyzed in
depth. This analysis is mainly focused on the study of techniques that allow a service robot
to explore indoor environments, build hybrid (metric and topological) maps of indoor
settings and localize itself by using maps.
2.1 Service robots.
The International Federation of Robotics (IFR) defines a service robot as follows: “A service
robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of
humans and equipment, excluding manufacturing operations” [23]. Thus, a service robot, as
opposed to an industrial robot, should have a greater degree of autonomy and intelligence
and an enhanced ability of natural interaction with human beings [24].
In general, a service robot consists of a mobile platform equipped with sensors that acquire
information from the environment, and could also be equipped with mechanisms that
allow it to handle objects properly.
Service robots could carry out a wide range of jobs, for instance, domestic assistance
(cleaning, surveillance or recreational tasks, among others), agricultural tasks, autonomous
transportation of objects and people (in urban settings, hospitals, shopping centres or
offices), or supporting the user when a dangerous job must be carried out. Table
2-1 summarizes the most significant applications of service robots according to the
information provided by the IEEE Robotics and Automation Society.
The economic cost of the robot and its hardware and software architectures generally
depend on the set of tasks that the robot must execute. However, a service robot should be
able to adapt to multiple environments [25] and to perform heterogeneous missions [26].
In this Thesis, the applications of interest are those related to personal assistance in
dynamic semi-structured indoor settings, such as homes, hospitals and offices, among
others. In these cases, a service robot should be capable of executing the following main
tasks: safe autonomous navigation, including map building and localization, and natural
interaction with the user.
Another important issue is related to the economic cost of service robots. In particular,
service robots should be affordable from an economic point of view while maintaining
their features. Therefore, researchers should carry out an additional effort in order to
design inexpensive fully functional service robots [27].
Application
- Agriculture and ranching.
- Domestic tasks.
- Recreational and educational tasks.
- Rehabilitation and Medicine.
- Personal assistance and robotic guides.
- Rescue tasks and explosive deactivation.
- Surveillance and monitoring.
- Help for elderly and handicapped people.
Table 2-1. Significant tasks that service robots should carry out according to the IEEE
Robotics and Automation Society.
Moreover, Ambient Intelligence (AmI) is currently a very significant issue [28], [29].
Therefore, new research lines have emerged, focused on including inexpensive service
robots in an intelligent ambient with the purpose of carrying out assistance tasks for human
beings, in particular for the elderly [30] and for handicapped people [31], [32].
A service robot, both in a conventional environment and in an intelligent ambient, should
consist of a basic set of hardware components (sensors and actuators), and a software
architecture, distributed if possible [33], that makes it possible to acquire information from the
sensors and to carry out control tasks. Furthermore, the set of software applications should
meet certain objectives related to quality parameters, according to the main principles of
Software Engineering [34].
2.2 Autonomous navigation for service robotics.
As mentioned in the preceding sections, a service robot that executes personal assistance
tasks in domestic settings, hospitals or offices is usually composed of a mobile
platform equipped with sensors, actuators and possibly a mechanism to handle objects. In
addition, a robotized wheelchair can also be considered a personal service robot, since it
allows people with reduced mobility to be transported.
Regardless of the specific task assigned to the robot (objects or people transport, guiding in
a building, surveillance services or home cleaning, among others), autonomous navigation
is one of the main problems that must be solved.
A mobile robot navigates in an autonomous way if it is capable of moving through the
environment, in a safe manner, from a start position to a goal location without direct
human intervention. Meeting this objective involves fulfilling a set of sub-objectives such
as:
• Environment perception and representation.
• Map building according to the selected setting description, and localization within the
map. Many works address the Simultaneous Localization and Mapping (SLAM)
problem.
• Path planning. The robot should be able to calculate a safe path that connects
two places.
• Following the planned path, appropriately avoiding static and dynamic obstacles.
On the other hand, the software architecture for the autonomous navigation system should
consist of a set of hierarchically connected individual components. Thus, the software stack
should be composed of lower level components that offer services to higher level
components. Connection between components could be solved by using suitable
communication tools (middleware), see Figure 2-1.
Using a distributed software architecture provides multiple advantages. In particular, it is
very easy to include new components that can be executed both inside the robot PC
controller and remotely. This enhances the computational capacity of low-cost robots
whose hardware features are reduced.
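As a purely schematic illustration of this component-based approach (the actual architecture of this work is based on the YARP middleware and is described in Chapter 6), the following Python sketch connects a low-level driver component to a higher-level component through a middleware-like channel; all names and data in it are hypothetical.

```python
import queue
import threading

class Channel:
    """Stands in for a middleware connection between two software components."""
    def __init__(self):
        self._q = queue.Queue()

    def write(self, msg):
        self._q.put(msg)

    def read(self, timeout=2.0):
        return self._q.get(timeout=timeout)

def laser_driver(out, n_scans):
    # Low-level component: publishes (fake) range scans, in meters.
    for _ in range(n_scans):
        out.write([1.0, 1.2, 0.9])

def obstacle_monitor(inp, n_scans):
    # Higher-level component: consumes scans and extracts a simple feature.
    for _ in range(n_scans):
        scan = inp.read()
        print("closest obstacle at", min(scan), "m")

channel = Channel()
consumer = threading.Thread(target=obstacle_monitor, args=(channel, 3))
consumer.start()
laser_driver(channel, 3)
consumer.join()
```

Because the components only interact through the channel, the consumer could equally run on the onboard computer or on a remote one, which is precisely the property exploited to extend the computational capacity of low-cost robots.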
Once the robot is able to navigate autonomously, it is ready for the execution of other
tasks that require the robot to move from one place to another. These tasks could be
implemented by using new software modules that can be easily included in the distributed
software architecture. In the following sections, relevant contributions in environment
perception and representation, SLAM, path planning and path tracking problems are
analyzed. These tasks are the main components of any autonomous navigation system, as
Figure 2-1 shows.
2.3 Environment perception and representation for autonomous navigation.
Accuracy and reliability are essential in autonomous navigation systems; specifically, a
suitable perception and representation of the environment is crucial for obtaining both of
them. In order to meet this objective, it is necessary to properly select the set of sensors
which the robot should be equipped with, while considering different and even opposite
aspects such as reliability and economic cost. In addition, once the robot acquires certain
information about the environment, a good high level representation of such information
should be found with the purpose of describing and classifying the objects that compose
the setting. Then, different methods can be designed to execute several actions, such as
obstacle avoidance, map building, localization, and safe path planning and tracking, by using
this high-level representation.
2.3.1 Type of sensors used in autonomous navigation.
Human beings have a multitude of senses. Sight, hearing, taste, smell and touch are the five
traditionally recognized, and they allow people to be aware of the world around them
(exteroceptive senses). Furthermore, human beings can also obtain information about their
internal state (proprioceptive senses).
Figure 2-1. Software architecture for autonomous navigation based on independent
software components.
There are many differences between human beings and robots, so the latter are currently
very far from achieving near-human sensorial proficiency. Robots can also sense
information from the world by using exteroceptive sensors not necessarily similar to the
human senses. On the other hand, information about their internal state (much more
limited than in humans) is acquired by using proprioceptive sensors. Table 2-2 shows a list
of sensors widely used in robots, highlighting how they work and how suitable they are
according to the environment (indoor, outdoor or both).
Exteroceptive sensors

Type: Proximity
- Ultrasonic devices (e/i): distance measurement based on ultrasound by using time-of-flight techniques.
- Infrared devices (i): distance measurement based on infrared light, using similar principles as ultrasonic devices but more accurate due to the nature of infrared light.
- 2D laser scanner (e/i): distance measurement by using Light Detection and Ranging (LIDAR); a one-dimensional set of distance measurements is provided.
- 3D laser scanner (e/i): distance measurement by using Light Detection and Ranging (LIDAR); a bi-dimensional set of distance measurements is provided.

Type: Contact
- Bumper (e/i): obstacles are detected by means of direct contact.

Type: Light
- Monocular vision (e/i): visual information acquisition by using a single camera; omnidirectional systems make it possible to obtain 360-degree images of the environment.
- Stereo vision (e/i): visual information acquisition by using two cameras; it provides the same functionality as a monocular system, but it also has the ability to calculate distances and can be used for 3D reconstruction.

Proprioceptive sensors
- Encoder (e/i): velocity estimation by transforming the angular position of an axis into a digital code.
- Gyroscope (i): heading estimation according to the principle of angular momentum conservation.
- Accelerometer (e/i): acceleration estimation.
- Inertial Measurement Unit (IMU) (e/i): velocity, heading and gravitational force estimation by using a combination of accelerometers and gyroscopes.
- Global Positioning System (GPS) (e): absolute position estimation.

e/i = exterior and interior || e = exterior || i = interior
Table 2-2. Set of sensors frequently used in mobile robots for autonomous navigation.
Exteroceptive sensors provide information about the environment around the robot. When
this information is processed, an abstract description of the elements of the environment
can be obtained. Such a description is usually used as the input of a procedure that triggers
a set of specific actions with the purpose of accomplishing a given task. Together
with the conceptual description of the scenario, the information about the internal state of
the robot, acquired from proprioceptive sensors, is also significant, since it facilitates
autonomous navigation tasks such as localization, mapping, path planning or path tracking
[35].
1) Exteroceptive sensors.
Both ultrasonic and infrared detectors are usually less expensive than 2D and 3D laser
scanners. However, the accuracy of laser devices is much higher. Nowadays, the economic
cost of 2D laser scanners has been significantly reduced, so they are very appropriate for
acquiring distance information, mainly in indoor settings, due to the trade-off they offer
between reliability, accuracy and cost. In addition, some researchers have devoted
efforts to obtaining less expensive 3D laser devices by using 2D laser scanners assembled
on a motorized platform [36]. A 3D laser device provides information that makes it possible
to reconstruct the scenario in 3D. However, a 2D laser scanner only provides information
about the distance and the partial shape of the objects at a given height.
Bumpers acquire information from the environment by contact. They are very popular in
commercial cleaning robots. In fact, the robotic vacuum cleaners sold at shopping centres
move at a relatively low speed, so obstacle detection can be made by contact without damaging
furniture. Some enhanced models use proximity sensors in order to minimize the impact of
the contact between the robot and the surrounding objects. Using low cost sensors reduces
the manufacturing cost of the product, but performance is also reduced. This is the reason
why some researchers are trying to develop new algorithms that allow these robots to
accomplish more complex tasks [37].
Although proximity sensors are very useful for distance measurement, information about
shape, texture and colour needs to be sensed by using other kinds of sensors. In fact, human
beings and most mammals use the information provided by their sense of sight to
accomplish most tasks. Thus, many researchers try to design algorithms that provide this
ability to robots, and many techniques developed in the Artificial Vision field are being
applied in Robotics. A good example of the combination of both fields is the Open Source
Computer Vision Library, or simply OpenCV [38], which was used to implement the vision
algorithms of the robot Stanley, the winner of the DARPA Grand Challenge in 2005 [39],
[40].
On the other hand, many researchers use multimodal perception in order to optimize the
results and to obtain more reliable solutions [41]. Many applications use different
techniques to combine data provided by proximity sensors, such as laser and ultrasonic
devices, with visual information acquired by monocular or stereo vision systems.
2) Proprioceptive sensors.
As mentioned above, proprioceptive sensors make it possible to estimate the internal state of the
mobile robot. In the autonomous navigation problem, such internal state is given by the
pose (position in a 2D plane and heading) and the velocity with respect to a global
reference system at each sample time. The robot needs to be aware of the current state for
predicting its future state and generating the necessary control commands to accomplish
the navigation tasks, such as map building, localization, obstacle avoidance and path
planning and tracking.
Wheeled robots are usually equipped with encoders for velocity estimation, gyroscopes
and/or compasses for heading measurement, IMUs for measuring both inclination and
heading, and accelerometers for acceleration estimation (see Table 2-2). Odometry (also
known as dead reckoning) is a process that uses the information provided by this kind of
sensor to obtain the robot pose estimate at time $t$. This estimate is the composition of a
set of consecutive relative motion estimates. Although odometry provides good short-term
accuracy, is inexpensive and allows very high sampling rates, the accumulation of errors causes
a large global pose error which increases proportionally with the distance travelled by the
robot [42].
In general, dead reckoning is based on simple calculations that use the information
acquired from the encoders and, additionally, from gyroscopes or IMUs, among others.
Only in outdoor environments is it possible to use data from GPS, which provides
an absolute position estimate, also affected by errors.
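As an illustration of this composition of consecutive relative motion estimates, the following Python sketch implements the classic odometry update for a differential-drive robot; the wheel base and the encoder increments are hypothetical values, not parameters of the robots used in this work.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """One dead reckoning update for a differential-drive robot.

    d_left, d_right: distance travelled by each wheel since the last
    sample (m); wheel_base: distance between the drive wheels (m).
    """
    ds = (d_right + d_left) / 2.0             # linear displacement
    dtheta = (d_right - d_left) / wheel_base  # heading change
    # Midpoint approximation of the arc travelled during the sample period.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta

# The global pose is the composition of consecutive relative estimates,
# which is why encoder errors accumulate with the distance travelled.
pose = (0.0, 0.0, 0.0)
for dl, dr in [(0.10, 0.12), (0.10, 0.12), (0.11, 0.09)]:
    pose = odometry_step(*pose, dl, dr, wheel_base=0.4)  # hypothetical wheel base
print(pose)
```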
Using encoders for velocity measurement is simple, but it suffers from both systematic
errors (wheels with slightly different diameters, bad wheel alignment or a low sampling
rate of the encoders) and non-systematic errors (displacements on non-levelled floors or over
unexpected objects, wheel slipping or over-acceleration due to external forces). All
these effects are highly undesirable in autonomous navigation, particularly in map building
and localization activities. For this reason, it is very common to combine the information
provided by proprioceptive and exteroceptive sensors in order to minimize the error in
pose estimation by considering the uncertainty of each source of information.
In this Thesis, a set of experiments has been carried out with the mobile robots Pioneer
and BENDER (see Chapter 6), equipped with the following sensors:
• Exteroceptive: a 2D laser scanner attached to a motorized platform that can turn
the device around its vertical axis. Thus, the robot can easily obtain information about
objects located both in front of and behind it.
• Proprioceptive: two encoders, each of them associated with one of the wheels of the
drive system. Additionally, a low-cost digital compass can also be used in the BENDER
robot for heading measurement.
In the following sections, a set of classic methods for building compact representations of
the environment from the raw data sensed by a 2D laser scanner is reviewed.
Techniques that obtain the environment description by fusing the information provided by
several sources are also summarized.
2.3.2 Environment representation from raw data acquired by a 2D laser scanner.
A 2D laser scanner can be considered a LIDAR system, which measures the distance
from the emitter to the target surface by using a laser beam (see Figure 2-2). Such distance is
calculated by measuring the delay between the laser beam emission and the detection of the
reflected signal (time of flight). This kind of sensor is characterized by the following
parameters:
• Maximum distance measurement or maximum radius ($r_{max}$).
• Angular resolution ($\Delta\theta$).
• Measurement area, a semi-circle ($\theta_{max}$).
Figure 2-2. Parameter configuration of a Sick 2D laser scanner.
Therefore, the information provided by a 2D laser scanner consists of a sequence of
pairs $(r_i, \theta_i)$ at each time $t$, where $r_i$ is the distance between the laser device
and the point located at $\theta_i$ degrees inside the area covered by the device. Figure 2-2
shows the parameters used by a Sick laser scanner, configured with a given maximum radius
$r_{max}$ (in meters), angular resolution $\Delta\theta$ and measurement area $\theta_{max}$.
In this example, the vector of values obtained at time $t$ is composed of one measurement per
angular step, where sample $r_i$ represents the distance to a point on the surface of an object
located at $\theta_i$ degrees inside the semi-circle covered by the laser device.
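For instance, a minimal Python sketch that converts such a polar scan into Cartesian points could be the following; the starting angle and the angular resolution are hypothetical values, not those of a particular device.

```python
import math

def scan_to_points(ranges, theta_min=0.0, dtheta=math.radians(0.5)):
    """Convert one 2D laser scan from polar to Cartesian coordinates.

    ranges: list of distances r_i; the i-th sample was taken at
    theta_i = theta_min + i * dtheta (both values are illustrative).
    """
    return [(r * math.cos(theta_min + i * dtheta),
             r * math.sin(theta_min + i * dtheta))
            for i, r in enumerate(ranges)]

points = scan_to_points([2.0, 2.01, 1.98, 5.6])
```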
1) Segmentation.
Before using an algorithm that obtains a specific geometric model of the environment, it is
necessary to divide the set of raw data acquired by the 2D laser scanner into several clusters
in which adjacent samples have similar distances $r_i$. This process is known as segmentation. In
addition, a previous filtering step for removing outliers is generally necessary before
segmentation.
Figure 2-3 shows an example of segmentation where it is possible to detect three groups of
adjacent points. Group 1 holds the points that describe a part of object 1 together with a
saturated zone (an area free of objects in the range of the sensor). Group 2 stores the
points that represent the frontal area of the obstacle. Finally, group 3 contains the points
that represent two other saturated areas and a corner of object 3.
Several classic segmentation algorithms can be found in the literature. In particular, the
Successive Edge Following (SEF) algorithm is one of the simplest methods used for this
purpose.
Figure 2-3. Example of segmentation with three obstacles and two saturated areas.
The SEF algorithm consecutively analyzes the set of raw data in polar form and generates a
new cluster when the difference in distance between two adjacent points is greater than a
heuristically defined threshold [43].
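A minimal Python sketch of the SEF idea follows; the jump threshold is a hypothetical value that, as noted above, must be tuned heuristically.

```python
def sef_segmentation(ranges, threshold=0.2):
    """Successive Edge Following: open a new cluster whenever the jump in
    distance between two adjacent samples exceeds the threshold."""
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > threshold:
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)
    return clusters  # each cluster is a list of sample indices

print(sef_segmentation([2.0, 2.05, 2.1, 4.8, 4.85]))  # -> [[0, 1, 2], [3, 4]]
```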
Another classic algorithm is known as Iterative End-Point Fit (IEPF). This method is
based on a recursive technique that detects groups defined by a start point and a
final point. For each group, the farthest point from the straight line that contains the
segment defined by the start and final points is searched for, and the distance between the
point and the straight line is computed. If the distance is greater than a heuristically defined
threshold, such a point is taken as the final point of one cluster and the start point of the next.
The algorithm is recursively executed over each obtained cluster [44].
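This recursive splitting can be sketched in a few lines of Python; the threshold is again a hypothetical, heuristically chosen value.

```python
import math

def point_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def iepf(points, threshold=0.1):
    """Iterative End-Point Fit: recursively split a cluster at the point
    farthest from the chord joining its first and last points."""
    if len(points) < 3:
        return [points]
    dists = [point_line_distance(p, points[0], points[-1]) for p in points]
    k = max(range(len(points)), key=dists.__getitem__)
    if dists[k] <= threshold:
        return [points]
    # The farthest point ends one sub-cluster and starts the next one.
    return iepf(points[:k + 1], threshold) + iepf(points[k:], threshold)

corner = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(iepf(corner))  # splits at the corner point (2, 0)
```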
In [45], some techniques common within the artificial vision community are modified and
applied to the segmentation of the raw data provided by a 2D laser scanner. Scale space
theory (a formal method for handling structures at different scales) makes it possible to represent
laser information in multi-scale space; this representation is robust to the noise inherently
present in the laser scans. In particular, a modified adaptive smoothing algorithm is
proposed and applied to laser range data within a modified scale space framework. The
method translates a line model mask over the range data to smooth and segment them at
the same time. Then, lines can be extracted from the segments by using a standard fitting
algorithm. However, this adaptive line-model method requires the presence of line
segments in the environment. If circular segments are found and their radius is large, they
can be considered as line segments, although the extracted lines from these non-line
segments turn out not to be position invariant; this can be considered as a drawback. A
fitting error measurement for each segment is then needed to filter segments with large
error. Another problem is that the intersection of two nearly parallel lines is very sensitive
to the robot position; therefore, these estimated intersection points would have a large
error. Thus, it is necessary that information about parallelism is attached to the extracted
line intersections as attributes to avoid calculating numerically unstable intersections.
In [46] an adaptation of the Mean Shift [47] clustering algorithm for data provided by a 2D
laser scanner is proposed. A robust technique based on the multimodal probability density
function of the residual error is used to segment the set of raw data into clusters and to
estimate the segment parameters that approximate such clusters. Once segmentation is
carried out, the representative parameters for each detected group are calculated. These
parameters define a set of features that can be used by other high level control algorithms.
Some segmentation methods directly model the obtained clusters by using a set of
parameters that define representative segments or straight lines as an approximation to the
raw data. Other research works show that it is possible to select geometric models based on
curves [48], mainly in outdoor environments where the shape of the objects cannot be
approximated only by straight lines or polygons.
2) Geometric models based on straight lines.
Indoor settings are often considered to be semi-structured scenarios that can be reliably
modelled by using polygons and polylines. This is one of the reasons why the
representations based on straight lines are selected for many applications. Furthermore, line
extraction techniques are usually simple, they can be used in real time, and the resulting
abstract representation of the environment is generally accurate and compact. Specifically,
lines or segments are considered as features that other high level algorithms (data
association, free space detection, map building or localization, among others), use as inputs.
In [49], the raw data provided by a 2D laser scanner are filtered and a segmentation
technique similar to the SEF method is carried out. Then, an exhaustive empirical
evaluation of six classic line extraction algorithms is performed. In particular, the analyzed
methods are: Split and Merge, Line Regression, Incremental algorithm or Line Tracking,
Random Sample Consensus (RANSAC), Hough transform and Expectation-Maximization
(EM) algorithm.
A brief description of each classic method analyzed in [49] is presented in the following
subsections. Advantages and drawbacks of each technique are also highlighted.
a) Split and Merge algorithm.
This technique was presented for the first time in [50]. It takes a set of raw data and a
heuristically defined threshold $T$ as inputs, and obtains as output a set of straight lines that
models the environment. The threshold defines the condition used to search for new lines. The
algorithm is divided into two stages: the split phase and the merge phase.

In the split phase, two end points $p_s$ and $p_e$ are selected and the straight line
that crosses both points is calculated. Next, the farthest point $p_f$ from the straight
line is searched for. If the distance between $p_f$ and the line is greater than $T$, two new
subgroups, whose end points are $(p_s, p_f)$ and $(p_f, p_e)$ respectively, are
generated. The procedure is repeated over both clusters. This recursive process finishes
when no point exceeds $T$.
In the merge phase, consecutive groups are analyzed. If two adjacent clusters are
represented by two lines very close to each other, both groups are combined and a new
straight line that crosses their end points is calculated. Then, the farthest point from the
segment is searched for. If the distance between this point and the segment is below $T$, the
clusters are merged and, consequently, the new representative straight line is accepted. Clusters
that are represented by very short segments are also rejected. Once the split and merge
procedures are finished, the parameters of the straight lines are calculated and used as
representative features for the clusters.
The main advantage of this technique is its simplicity and accuracy in very structured
environments with very precise measurement data. However, in more complex and noisy
scenarios, this method is less robust. The threshold $T$ should be adequate for the working
conditions, and the nature of the data must be considered when making this heuristic
selection. Furthermore, only the end points of each generated group are taken into account, so
Furthermore, only the end points of each generated group are taken into account, so
outliers could lead to large segmentation errors and, consequently, the geometric model of
the environment could be unreliable.
b) Line regression algorithm.
This algorithm was initially proposed in [51] for accomplishing localization tasks based on
maps. In this research work the segmentation of the raw data from the 2D laser scanner is
addressed by deciding on a measure of model fidelity which is applied to adjacent groups
of measurements. Due to outliers, the extraction process includes a subsequent matching
step where segments which belong to the same landmark are to be merged while keeping
track of those which originate from distinct features. This is done by an Agglomerative
Hierarchical Clustering (AHC) algorithm with a Mahalanobis distance matrix. In particular,
the AHC algorithm uses a “bottom-up” strategy where each observation starts in its own
cluster and pairs of clusters are merged in higher hierarchy levels. The extraction process
yields both a collection of line segments and the associated covariance matrices.
Summarizing, the key idea of the Linear Regression technique is inspired by the Hough
Transform method; the algorithm first transforms the line extraction problem into a search
problem in model space (line parameter domain), and then applies the AHC algorithm to
construct adjacent line segments [49]. For a set of $N$ raw data points acquired by the 2D laser
scanner, a sliding window of size $W$, where $W < N$, is defined. For each window, a line
fitting the data is calculated. Then, a line fidelity array is computed, where each item is
the sum of the Mahalanobis distances between three adjacent windows. The AHC
algorithm obtains line segments by scanning the fidelity array for consecutive
elements whose values are less than a heuristically defined threshold. Finally, the
overlapping segments are merged and the line parameters are recomputed for each segment.
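Several of the methods reviewed in this section need to compute the parameters of the line that best fits a cluster. A common choice, sketched below in Python using the normal form of equation (2-1), is the orthogonal (total) least-squares fit; the closed-form expression for the normal angle is standard, but this snippet is only an illustration and not the implementation used in [51].

```python
import math

def fit_line_polar(points):
    """Orthogonal least-squares fit of a line in the normal form
    x*cos(alpha) + y*sin(alpha) = rho."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)  # direction of the normal
    rho = mx * math.cos(alpha) + my * math.sin(alpha)
    if rho < 0:  # keep rho non-negative so the representation is unique
        rho, alpha = -rho, alpha + math.pi
    return rho, alpha

print(fit_line_polar([(0.0, 1.0), (1.0, 1.02), (2.0, 0.98)]))  # ~ (1.0, pi/2)
```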
A disadvantage of this method is that its implementation is complex. Furthermore, the
selection of the window size is very dependent on the environment and has a great
influence on the algorithm performance.
c) Incremental algorithm or Line Tracking.
This technique is based on incrementally calculating a regression line over a set of points.
When a new point is added, the distance from the point to the last estimated regression line
is obtained. If such distance is greater than a heuristically defined threshold, the point
becomes the end point of a new segment; otherwise, the point is added to the
cluster represented by the estimated regression line, and the parameters of such line are
recomputed.
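A minimal Python sketch of this incremental strategy follows; it reuses the orthogonal fit from the previous sketch (repeated here so that the snippet is self-contained), and the threshold is a hypothetical value.

```python
import math

def fit_line_polar(points):
    # Orthogonal least-squares fit in normal form (as in the previous sketch).
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    return mx * math.cos(alpha) + my * math.sin(alpha), alpha

def line_tracking(points, threshold=0.1):
    """Grow a segment point by point; open a new one when the candidate
    point lies too far from the current regression line."""
    segments, current = [], list(points[:2])
    for p in points[2:]:
        rho, alpha = fit_line_polar(current)
        d = abs(p[0] * math.cos(alpha) + p[1] * math.sin(alpha) - rho)
        if d > threshold:
            segments.append(current)
            current = [current[-1], p]  # the break point starts the new segment
        else:
            current.append(p)
    segments.append(current)
    return segments
```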
The implementation of this algorithm is simple, and this can be considered its main
advantage. However, its robustness partly depends on the complexity of the environment
[43], and this is clearly a disadvantage.
d) RANSAC algorithm.
The RANSAC method [52] is an iterative technique that estimates the parameters of a
mathematical model from a set of observed data which contains outliers. Its approach is
the opposite to that of conventional smoothing techniques, since rather than using as much
of the data as possible to obtain an initial solution and then attempting to eliminate the
invalid data points, it uses an initial data set (with the minimum amount of elements), and
adds consistent data to this set if possible. It is a non-deterministic algorithm in the sense
that it generates a reasonable result only with a certain probability, which increases as more
iterations are allowed. The RANSAC algorithm has been frequently used in Artificial
Vision for obtaining representative information in bidimensional images. It can be
considered a general clustering technique that can be used in many scenarios if the feature
model is known.
A straight line can be used as fitting model. In this case, the RANSAC method randomly
takes two points from the set of data. Then, a consensus group is created from this pair of
points. Such group consists of the points that meet the following condition: they are close
to the straight line that contains the segment bounded by the initial pair of points. The
distance threshold that defines if a point is or is not near the line must be heuristically
defined when the algorithm starts. If the number of points that meet the condition is
greater than a threshold (also heuristically defined), the straight line is fitted again by taking
into account all the points. The process is repeated by randomly selecting a new pair of
points.
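The following Python sketch extracts the consensus set for a single line hypothesis; the number of iterations and both thresholds are hypothetical, problem-specific values.

```python
import math
import random

def ransac_line(points, iters=200, dist_thresh=0.05, min_inliers=10):
    """Keep the two-point line hypothesis with the largest consensus set."""
    best_inliers = []
    for _ in range(iters):
        (ax, ay), (bx, by) = random.sample(points, 2)
        norm = math.hypot(bx - ax, by - ay)
        if norm == 0.0:
            continue  # degenerate pair of identical points
        inliers = [(px, py) for px, py in points
                   if abs((by - ay) * px - (bx - ax) * py
                          + bx * ay - by * ax) / norm <= dist_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # A full extractor would now re-fit the line to all inliers, remove
    # them from the data set and repeat the search for further lines.
    return best_inliers if len(best_inliers) >= min_inliers else None
```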
The main advantage of the RANSAC method is its ability to provide a robust estimation of
the model parameters with a high degree of accuracy even when a significant number of
outliers are present in the data set. The main disadvantage of the RANSAC technique is that
there is no upper bound on the time it takes to compute the parameters. If the number of
iterations is limited, the solution obtained may not be optimal, or even good. The
RANSAC method tries to offer a trade-off between time consumed and accuracy. Another
drawback is that problem-specific thresholds need to be set.
e) Hough transform.
This technique is also widely used in the Artificial Vision context. It is used for pattern
recognition in images [53]. It requires that the desired features be specified in some
parametric form. In the standard Hough transform, patterns are geometric forms such as
circles or lines. A generalized Hough transform can be employed in applications where a
simple analytic description of features is not possible. In particular, searching lines is the
objective of the standard Hough transform when the set of raw data are provided by a 2D
laser scanner.
In order to detect lines, it is necessary to specify which parameters define a line. One of the
representations is the slope-intercept form, $y = mx + b$, where $m$ is the slope and $b$ is
the $y$ component of the point where the line intersects the Y-axis. However, this
representation is not very stable: as lines get more vertical, the magnitudes of $m$ and $b$
grow towards infinity. In order to avoid these problems, the normal representation of a line
is selected:

\rho = x \cos\theta + y \sin\theta \qquad (2-1)

where $\rho$ is the length of the normal from the origin to the line, and $\theta$ is the angle of
the normal with respect to the X-axis. Any straight line can then be represented as a pair
$(\rho, \theta)$ in the parametric space.
The set of samples acquired by a 2D laser scanner can be represented by a set of
points $(x_i, y_i)$ in a Euclidean plane, where $x_i = r_i \cos\theta_i$ and
$y_i = r_i \sin\theta_i$. This set of points can be transformed into a set of sinusoidal
curves in the parametric space by using the parametric transformation defined by
equation (2-1):

\rho = x_i \cos\theta + y_i \sin\theta \qquad (2-2)

The points where the sinusoidal curves intersect in the parametric space correspond to the
parameters $(\rho, \theta)$ of all the feasible straight lines that can be formed.

To apply the standard Hough transform technique, the parametric space must be
discretized by defining the intervals $\Delta\rho$ and $\Delta\theta$. Then, a bidimensional
matrix $A$ of accumulation cells can be generated. Each cell corresponds to a straight line
whose parameters are $(\rho_i, \theta_j)$, where $i$ is the number of the row and $j$ is the
number of the column.
For each point $(x_k, y_k)$, a line equation is evaluated according to the parameters of the
corresponding cell. If the point is a solution of the equation, the number of votes is
increased for that cell. A very high number of votes means that many points belong to the
same straight line, whose parameters are defined by the position of the cell inside the
matrix $A$.
Once all the elements from the initial set of points have been processed, the cell with the highest number of votes is chosen. If a heuristically defined minimum threshold is exceeded, the points represented by the straight line corresponding to the cell are used to fit the straight line again, and the result is stored. Points correctly represented by a stored straight line are eliminated from the initial set. The process is repeated for the remaining points [49] until the set is empty.
The straight lines that represent a high number of points are taken as environment
descriptors.
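A minimal sketch of the voting scheme follows, again assuming an (N, 2) array of laser points; the cell resolutions and the minimum number of votes stand for the heuristically defined values discussed above:

```python
import numpy as np

def hough_lines(points, rho_res=0.02, theta_res=np.deg2rad(1.0), min_votes=25):
    """Standard Hough transform over 2D laser points; resolutions are illustrative."""
    thetas = np.arange(0.0, np.pi, theta_res)
    rho_max = np.hypot(points[:, 0], points[:, 1]).max()
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)   # accumulation matrix A
    for x, y in points:
        # Equation (2-2): each point votes along a sinusoid in (rho, theta) space.
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        rows = np.digitize(rho, rhos) - 1
        valid = (rows >= 0) & (rows < len(rhos))
        acc[rows[valid], np.arange(len(thetas))[valid]] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    if acc[i, j] < min_votes:
        return None
    return rhos[i], thetas[j], acc[i, j]   # (rho, theta) of the strongest line
```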
One of the main disadvantages of this technique is related to the selection of the size and resolution of the matrix that stores the number of votes; both have an influence on the results obtained by the method. Another drawback is that the Hough transform does not take noise or uncertainty into account when the line parameters are estimated.
f) Expectation-Maximization algorithm.
The Expectation-Maximization (EM) algorithm [54] is based on an iterative probabilistic
technique frequently used for problems where certain variables are unknown (hidden). In
particular, the EM algorithm is used to find the maximum likelihood parameters of a
statistical model in cases where the equations cannot be solved directly. Typically these
models involve latent or hidden variables in addition to unknown parameters and known
data observations. Finding a maximum likelihood solution requires taking the derivatives of the likelihood function with respect to all the unknown values and simultaneously solving the resulting equations. In general, in statistical models with latent variables, this is not possible. Instead, the result is normally a set of interlocking equations in which the solution to the parameters requires the values of the hidden variables and vice versa; substituting one set of equations into the other generates an unsolvable equation.
The EM method tries to approximately calculate the parameters of a probabilistic model
that depends on non-observable variables by using maximum likelihood (ML) estimators.
Each EM iteration alternates between performing an expectation (E) step and a maximization (M) step. The expectation step calculates the expectation of the log-likelihood evaluated by using the current estimate of the parameters. The maximization step computes the parameters maximizing the expected log-likelihood found in the expectation step. The parameter estimates are used to determine the distribution of the hidden variables in the next E step. The convergence of the algorithm is ensured, because the likelihood increases in each iteration.
One of the main disadvantages of the EM method is that the selection of the initial values of the algorithm is difficult. Moreover, the algorithm may converge to a local maximum of the likelihood function, a situation that is not desirable for many applications.
In general, this technique is used for classification and pattern recognition tasks in the
scope of Artificial Vision. However, it can be used for solving other problems. For
example, in [55] the EM algorithm is used for 3D reconstruction.
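As an illustrative toy example (not the 3D reconstruction application of [55]), the following sketch applies EM to a two-component 1D Gaussian mixture, where the hidden variable is the component that generated each sample:

```python
import numpy as np

def em_gmm_1d(x, n_iters=50):
    """EM for a two-component 1D Gaussian mixture (illustrative toy problem)."""
    mu = np.array([x.min(), x.max()])        # crude initialization (see text:
    sigma = np.array([x.std(), x.std()])     # results depend on initial values)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iters):
        # E step: responsibilities = posterior probability of each component.
        lik = np.stack([pi[k] / (np.sqrt(2 * np.pi) * sigma[k])
                        * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                        for k in range(2)])
        resp = lik / lik.sum(axis=0)
        # M step: re-estimate parameters maximizing the expected log-likelihood.
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return mu, sigma, pi
```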
3) Geometric models based on curves.
In many complex indoor environments it is possible to find furniture or other objects
whose shapes cannot be easily represented by straight lines. Some research works use
setting descriptions based on other geometric models, such as curves. These
representations are also interesting because they can be applied in non-structured outdoor
scenarios.
In fact, service robots may be required to operate both in indoor and partially outdoor settings; intelligent wheelchairs are an example of mobile robots that can be used in both kinds of environments. In this case, a hybrid technique that represents the setting not only by means of line segments, but also by using curves, can be very useful.
In [56] a method for describing the environment by using lines and curves is developed, based on the adaptive estimation of the curvature. A break-point detection method [57] is used for the segmentation stage. The groups of points generated in the segmentation step are split again by using a criterion based on the curvature of the elements found in each cluster of points [48].
In [58] the representation of complex geometric shapes is carried out by using spline curves. The raw data obtained from a 2D laser scanner at time $t$ are first segmented by means of a basic data clustering procedure. A secondary segmentation process analyzes the relative positions of consecutive points: if such points are near each other, it is considered that they represent the same object of the environment. The shape of an object can be complex and may contain corners, which are detected at a previous stage of the algorithm. Finally, the object is modelled by using a cubic spline curve. In [59] this kind of representation is used for developing a new SLAM method.
2.3.3 Representation of the environment from data acquired by
other sensors.
As Table 2-2 shows, there are many alternative exteroceptive sensors that could be used for environment perception instead of a 2D laser scanner. As mentioned in the preceding subsections, a 2D laser scanner provides accurate information about the distance of the objects at a specified height. Furthermore, the shape of the objects can be estimated at such height. The reduction of the economic cost of these sensors allows them to be used in low-cost robots, since the trade-off between cost and features is maintained. However, the nature of the information provided by a 2D laser scanner makes it unable to capture certain relevant aspects of the environment such as colour, texture or the 3D shape of the objects, among others.
For many researchers, using 3D information about the scenario is essential, mainly in 3D reconstruction or for a more realistic comprehension of the environment. For these purposes, 3D laser scanners could be used. This kind of laser device is usually very expensive; therefore, some researchers have adapted 2D laser scanners by using servomotors [36] or have modified the typical setup of the device on the robot in order to obtain vertical scans of the environment. These solutions are not free of disadvantages. For example, a 2D laser scanner attached to a platform that rotates around a horizontal axis generates a set of horizontal data lines in which some areas are scanned with a greater density than others. This can lead to distortion effects when the point cloud is processed [60]. Another problem arises when this kind of sensor is used: the high computational cost of the procedures for processing the set of sensed data. On the other hand, although a representative bidimensional range image is obtained, the qualitative information is missing (i.e. illumination, colour or texture, among others). In order to obtain this kind of information, an artificial vision system is necessary.
Monocular vision systems have been used as the only sensor input of some navigation
methods. However, using exclusively a monocular camera for sensing the environment is
not always enough. In [61] some disadvantages of using monocular cameras are highlighted; in particular, those related to the scale factor indeterminacy. To overcome
this limitation, the researchers use a method that combines the data acquired by a
monocular camera and a 3D laser scanner. Other authors use hybrid perception systems
based on a 2D laser scanner and a monocular camera [62], [63], [64], [65], [66]. Combining
the information from monocular cameras with data provided by proprioceptive sensors is also a very useful choice.
Multimodal perception by using 2D laser devices and monocular cameras can be useful for reducing the economic cost of the robot from a hardware point of view. However, the computational cost of the algorithms that process the data is usually higher than for systems that only use one kind of sensor (for example, a 2D laser scanner). In addition, the combined calibration of the sensors may give rise to problems that should be taken into account.
After analyzing the set of available exteroceptive sensors, the classic techniques for extracting useful information from raw data, and the computational cost of the algorithms needed for autonomous navigation, some decisions can be taken. In particular, in this
Thesis, the set of algorithms that implement the modules of the navigation software
architecture proposed in Figure 2-1 only use data acquired by a low-cost 2D laser scanner,
specifically the Hokuyo URG-04LX [67]. Noisy data are filtered by using low complexity
methods. A semantic representation of the environment is obtained from the filtered data
and used for map building, localization, obstacle avoidance and local path planning.
2.4 Map building and localization.
As Figure 2-1 shows, the output of the software module “Environment Perception and
Representation” is the input of the “SLAM” component. This module is designed to solve
the following problem: The mobile robot should be capable of building a map and
localizing itself in an unknown scenario starting from a known initial position. In order to
accomplish this task, the robot will use an environment description obtained from the data
acquired from the sensors and properly combined and processed.
If the robot has access to a previously defined map, the “SLAM” module could be
substituted by a “Localization” component. In this case, the problem is addressed by
estimating the pose of the robot according to the information provided by the set of
sensors. A local description of the environment is obtained from this information and such
description is matched with the data provided by the map.
In general, a probabilistic approach is applied to solve both the SLAM and the simpler
localization problem. The position of the robot in the map is considered as a random
variable with an associated probabilistic distribution, and the position of the landmarks are
also random variables (in the SLAM problem). The parameters that define such
distributions provide information about the reliability of the localization and the SLAM
procedures. Other approaches based on Computational Intelligence techniques are also
useful for addressing these issues.
State-of-the-art SLAM and localization algorithms are described in the following subsections, highlighting the advantages and disadvantages of each of the analyzed classic methods.
2.4.1 Hybrid SLAM (metric and topological).
As mentioned before, SLAM can be defined as the technique that “addresses the problem of a
robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a
map thereof, and at the same time it wishes to localize itself using its map” [68].
If a suitable solution is found for the SLAM problem, or for localization (if the map is
available), an autonomous service robot will be able to execute tasks in which safe
navigation in an environment shared with humans is essential. Many advances have been
achieved in the metric SLAM approach. An interesting extension to this approach is to
build not only a geometric map of the environment but also a cognitive map that would
allow the robot to be aware of the surrounding world in a similar way as human beings are.
Topological and semantic map building can help to meet this objective. In fact, some
researchers have successfully proposed solutions based on hybrid maps.
A hybrid map is usually composed of a hierarchy of maps with a number of levels. Semantic
maps are found at the highest level. Here, a meaning is assigned to the objects or abstract
entities stored in lower level maps. The topological level connects different representative locations of the global scenario. In general, a topological map is a graph that
connects different local metric maps, placed at the lowest hierarchy level. Metric maps
store information about the geometry of relevant places or natural landmarks of the setting.
Figure 2-4 shows a diagram where a hybrid map is defined by three level sub-maps:
semantic, topological and metric.
In semi-structured indoor environments (offices, hospitals or homes), human beings usually classify each room according to its function, and such rooms are usually connected by means of corridors. For example, Figure 2-4 shows a real domestic setting with two bedrooms, two bathrooms, a kitchen, a living-room and a garden. The type of objects placed in each room provides information about its use. If a mobile robot is required to interact with this kind of setting in a similar way as a human being would do, it is essential that the robot acquires skills such as learning, object recognition or autonomous exploration abilities, among others, by using techniques based on active perception. The ability to easily build extensible and reliable compact conceptual or cognitive maps is also required.
Figure 2-4. Example of hierarchical hybrid map with three levels: Semantic, topological and
metric.
In these cases, the classic metric SLAM approaches, although insufficient by themselves,
are particularly useful for implementing the local maps stored at the lowest level of the
hierarchical global map.
1) Classic solution for the metric SLAM problem.
As local metric maps are essential components of global hybrid maps, several classic
techniques that solve the metric SLAM problem are analyzed in this sub-section, focusing
on the solutions proposed to the problem of maintaining a suitable representation of the
uncertainty.
While a mobile robot explores an unknown environment, it uses its exteroceptive and proprioceptive sensors to acquire information about its internal state and about the environment. With the set of data provided by the proprioceptive sensors, the robot could
estimate its pose by using a model (dead reckoning). The accuracy of the estimate will
depend on the accuracy of the sensors. However, even when they are very accurate,
measurements are affected by errors that propagate to the estimation results. Thus, errors
need to be estimated and properly represented by using the uncertainty concept. Pose
estimation can also be enhanced by considering the measurements provided by
exteroceptive sensors which are not perfectly accurate either. Therefore, uncertainty is
present in both motion models and measurement models. Such uncertainty should be
explicitly considered. Classic methods use a probabilistic framework to meet this goal.
Feature extraction from the environment is also a significant task. As the measurement
model is usually affected by errors, it is necessary to define methods that take into account
such problems. In this case, using a probabilistic framework is again considered as the
classic solution.
In [1] an extensive review of the SLAM problem from a probabilistic point of view is
presented. After more than a decade of intense research, most proposed solutions are
based on the Bayesian probabilistic estimation framework.
a) Bayesian filtering.
Bayes’ rule is considered the paradigm of probabilistic inference. The Bayesian
interpretation of probability can be seen as an extension of logic that enables reasoning
with propositions whose truth or falsity is uncertain. In order to evaluate the probability of
a hypothesis, the Bayesian theory specifies some prior probability, which is properly
updated when new relevant data are observed.
Bayes’ theorem was enunciated by Thomas Bayes in 1763 and determines the conditional probability of a random event $A$ when another related event $B$ is known. The following equation summarizes Bayes’ law:

$P(A \mid B) = \dfrac{P(B \mid A)\,P(A)}{P(B)}$ (2-3)
The Bayesian interpretation of probability is particularly useful for making estimates based on prior subjective knowledge, since such estimates can be revised in the light of empirical evidence. Both the SLAM and localization problems can then be addressed by using the Bayesian probabilistic framework. In particular, the Bayes filter is the simplest method that can be used for solving the SLAM and localization problems. The Bayes filter extends Bayes’ rule to temporal estimation problems by using a recursive estimator that calculates a sequence of posterior probability distributions over data that cannot be directly observed, for example a map. The equation that solves the SLAM problem by using a Bayes filter, assuming that the Markov hypothesis is met in a static world, is as follows:
$p(x_t, m \mid z_{1:t}, u_{1:t}) = \eta\, p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t)\, p(x_{t-1}, m \mid z_{1:t-1}, u_{1:t-1})\, dx_{t-1}$ (2-4)

where $x_t$ represents the current state, $m$ is the map, $z_{1:t}$ is the set of measurements gathered until time $t$, $u_{1:t}$ is the collection of control commands executed until time $t$, $\eta$ is a normalization factor, $x_{t-1}$ is the previous state, $z_t$ is the set of measurements obtained at time $t$ and $u_{1:t-1}$ are the control commands executed until time $t-1$.
The problem can be recursively solved if the data provided by the sensors together with their probability, and the probability function for the previous sample time, are available. In order to evaluate the filter, two probabilistic models are necessary: $p(x_t \mid x_{t-1}, u_t)$ (motion model) and $p(z_t \mid x_t, m)$ (perception model), which are obtained from the odometric information and from the data provided by the sensors, respectively.

The motion model determines the effect of the control command $u_t$ over the state $x_t$; that is, it specifies the probability of reaching the current state after applying the command $u_t$ over the previous state $x_{t-1}$. The perception model (also known as measurement model) describes how the measurements are generated for the state $x_t$ inside the map $m$ from a probabilistic point of view.
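A minimal discrete (histogram) instance of equation (2-4) can clarify how the two models interact. The sketch below assumes a hypothetical 1D circular corridor of ten cells whose map marks the cells containing a door; the motion and sensor probabilities are illustrative values:

```python
import numpy as np

# Map m: cells of a circular corridor that contain a door (hypothetical landmark).
m = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])

def predict(belief, u):
    """p(x_t | x_{t-1}, u_t): shift by u cells with some probability of slip."""
    return 0.8 * np.roll(belief, u) + 0.1 * np.roll(belief, u - 1) \
         + 0.1 * np.roll(belief, u + 1)

def update(belief, z):
    """p(z_t | x_t, m): the door detector is assumed right 90% of the time."""
    likelihood = np.where(m == z, 0.9, 0.1)
    posterior = likelihood * belief
    return posterior / posterior.sum()      # eta in equation (2-4)

belief = np.full(10, 0.1)                   # uniform prior: position unknown
for u, z in [(1, 0), (1, 0), (1, 1)]:       # move one cell, sense door/no-door
    belief = update(predict(belief, u), z)
print(belief.argmax())                      # most likely cell after three steps
```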
The Bayes filter cannot be directly implemented in the general case. However, if some assumptions hold, a number of specific Bayesian methods can be implemented, such as the Kalman Filter (KF), Hidden Markov Models (HMM), Dynamic Bayesian Networks or Partially Observable Markov Decision Processes (POMDPs).
In [69], Sebastian Thrun develops a complete analysis of Bayesian methods in the scope of
Probabilistic Robotics.
b) SLAM algorithms based on the Extended Kalman Filter (EKF).
The Extended Kalman Filter (EKF) is one of the most popular methods used in SLAM. The EKF is an extension for non-linear systems of the Kalman Filter (KF) [70], developed by Rudolf E. Kalman in 1960. The KF was designed with the purpose of identifying the non-observable state of a linear dynamic system in the presence of additive white noise. It is particularly useful because the error feedback gain $K$ can be optimally selected if the variances of the noises that affect the system are known. The EKF can be applied to non-linear systems, since it linearizes their equations around an estimate of the current mean and covariance. In the case of well defined transition models, the EKF can be considered the standard in the theory of nonlinear state estimation.
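A generic EKF iteration can be sketched as follows; the motion and measurement models f and h, their Jacobians F and H, and the noise covariances Q and R are placeholders to be supplied for a concrete system:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, f, F, h, H, Q, R):
    """One EKF iteration: f/h are the nonlinear motion and measurement models;
    F/H return their Jacobians at the current estimate; Q/R are the process
    and measurement noise covariances (all supplied by the caller)."""
    # Prediction: propagate the mean through f, linearize around the estimate.
    mu_bar = f(mu, u)
    Fx = F(mu, u)
    Sigma_bar = Fx @ Sigma @ Fx.T + Q
    # Update: the Kalman gain weights the measurement innovation.
    Hx = H(mu_bar)
    S = Hx @ Sigma_bar @ Hx.T + R
    K = Sigma_bar @ Hx.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Hx) @ Sigma_bar
    return mu_new, Sigma_new
```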
One of the pioneering works that used a formal technique to address the sensor fusion problem by considering its stochastic nature is presented in [71], which focuses on the
stochastic aspects of data fusion in multisensory systems. In this case, the authors use a
unique reference frame wherein all object frames, including the robot, are known. A
formalism for manipulating uncertain data which reformulates the KF with the purpose of
directly taking into account spatio-temporal correlations is developed.
Along the same lines, in [72] data provided by a CCD camera and a 2D laser scanner are fused
by using an EKF. The proposed technique allows to estimate the position of a set of
natural landmarks together with the robot state. A map is then built by using this
information. Dependencies between position estimates are approximately represented by
means of correlation matrixes. Such matrixes provide information about the relation
between diverse entities of the environment and help to know if the robot has already
visited a place.
In [73], a robust SLAM algorithm based on EKF is also proposed. The robot used in the
experiments is the Nomadic SuperScout service robot equipped with ultrasonic sensors. The
authors also define a method for failure recovering which increases the accuracy and
reliability of the EKF-based technique.
The description of features or natural landmarks in the environment is an important issue
in the EKF-based SLAM. As mentioned in the preceding subsections, the setting is often
modelled geometrically by using points and line segments [74]. Some works propose the
use of spline curves to define the features that will be used as landmarks in EKF-based
SLAM [75].
The EKF-based SLAM methods present some disadvantages which are highlighted in the
analysis carried out in [9]. The problems are related to the Gaussian nature of the noise
used by the EKF and the linearization of non-linear models. Errors in the measurements and in the robot internal state estimate do not usually have a Gaussian distribution, so using the mean and the covariance is not always the best way to represent the uncertainty. These factors have an influence on the way that a probability distribution is projected over a sequence of movements and measurements through time, and hence on how errors are accumulated.
Several researchers have developed methods that improve the performance of EKF-based SLAM algorithms in terms of robustness and efficiency. In particular, SLAM techniques based on the Unscented Kalman Filter (UKF) are an alternative to EKF-based SLAM methods. The UKF extends the KF to non-linear systems by using a set of samples that parameterize the mean and the covariance of a probability distribution, which is not necessarily Gaussian [76].
An empirical validation of the UKF applied to the SLAM problem is carried out in [77]. In
this work, the robot is intended to explore a large scale outdoor environment. On the other
hand, the results obtained in [78] highlight that there are two key limitations of the UKF-based SLAM method. Such drawbacks are related to the high computational complexity
and the inconsistencies that appear in the state estimate. In particular, the authors
demonstrate that the observability properties of the linear regression-based system model
employed in the UKF play a key role in determining the filter consistency. If these
properties do not agree with those of the underlying nonlinear system, consistency cannot
be guaranteed. The first mentioned problem is solved by using a new sampling strategy that
minimizes the linearization error with constant computational complexity. This means that
the global computational complexity is reduced to the level of the classic EKF. The second
mentioned problem (that is, the improvement of the state estimate consistency), is solved
by using a new algorithm named Observability-Constrained Unscented Kalman Filter (OCUKF).
Another way of reducing the computational complexity of EKF-based SLAM algorithms is proposed in [79]. In this work, a SLAM technique based on the Divide and Conquer paradigm reduces the computational complexity of each step to $O(n)$; therefore, the global computational cost of the procedure is $O(n^2)$. On the other hand, the linearization error is bounded, since the proposed method works over local maps which are subsequently grouped in a global map.
Some researchers have proposed the use of other stochastic techniques with the purpose of
solving some problems of the EKF-based SLAM methods, mainly related to the
computational complexity and the representation of the uncertainty by using Gaussian
probabilistic distributions. Such alternative approaches are outlined in the next subsection.
c) SLAM algorithms based on the Extended Information Filter (EIF).
The Sparse Extended Information Filter (SEIF) can be considered as an alternative to the
use of the EKF in the SLAM problem. In this case, the covariance matrix and the state
vector (used by the EKF), are substituted by the information matrix and the information
vector, respectively.
Assume that $x_t$ is the pose of the robot at time $t$, and $N$ is the number of landmarks in the environment. Assume that $m_n$ is the $n$-th landmark in the scenario, where $n$ belongs to the interval $[1, N]$. Then, it is possible to describe the environment by means of the state vector $y_t = (x_t, m_1, \dots, m_N)^T$, where the pose of the robot is treated as another feature. Obviously, it is not possible to know the state directly, but a probabilistic estimation of it can be made. From a Bayesian point of view, the objective is to find a posterior distribution that represents the state by taking into account the set of measurements, the previous state and the control commands. In an EKF-based method, a Gaussian distribution, whose parameters are the mean $\mu$ and the covariance $\Sigma$, is used for representing the posterior. On the other hand, the information matrix $\Omega$ and the information vector $\xi$ are used instead with Extended Information Filters. The information matrix is the inverse of the covariance matrix, $\Omega = \Sigma^{-1}$, and the information vector can be calculated as $\xi = \Sigma^{-1}\mu$. By using these equations, the posterior distribution can be described as follows:

$p(y_t \mid z_{1:t}, u_{1:t}) \propto \exp\left(-\frac{1}{2}\, y_t^{T}\, \Omega_t\, y_t + y_t^{T}\, \xi_t\right)$ (2-5)

where $z_{1:t}$ is the measurement set and $u_{1:t}$ is the set of control commands.
In [80] the SEIF is developed as an approximation to the EIF. The SEIF maintains a sparse representation of the dependencies between the environment features by using a normalized information matrix. This matrix, in contrast to the covariance matrix in an EKF, stores high values in the elements corresponding to pairs of landmarks that are geometrically close to each other. However, when two features are far from each other, the link between them is weak, so the value that relates them in the information matrix is also low, and even close to zero. The SEIF-based SLAM technique uses this property, and only the pairs of landmarks placed sufficiently close together have a value greater than zero within the information matrix. Thus, such a matrix is sparse and can be stored by using a linear data structure that can be scanned in linear (and not quadratic) time. In addition, it is possible to carry out updates in constant time, independently of the number of landmarks involved in the procedure.
In [11] an overview of the algorithms designed with the purpose of reducing the computational complexity of the SLAM process is presented. A hybrid method named Combined Filter SLAM (CF-SLAM) is proposed. This technique uses the EKF and the EIF together with a Divide and Conquer strategy. The objective is to build local submaps which are combined within a global map. The set of local submaps, each of constant size, is built by using EKFs. Each submap only stores a set of landmarks and the final pose of the robot. The uncertainty is defined in terms of a Gaussian distribution by using the information matrix and the information vector. In order to join all the submaps, the EIF is used. The position of the robot within the global map can be estimated from its position in the current local submap. This technique reduces the computational complexity of each step of the algorithm and, consequently, the global computational cost of the process.
d) FastSLAM algorithm.
Another alternative for the representation of the probability distribution is the Particle Filter (PF). In this case, the posterior is represented by using a set of samples named particles. Each sample or particle consists of a pair $(x^{[i]}, w^{[i]})$, where $x^{[i]}$ is the state and $w^{[i]}$ is a weight factor that measures how significant the particle is. This shows that it is possible to describe the uncertainty by using distributions that are not necessarily Gaussian. One of the disadvantages of this method is the difficulty of selecting the number of particles necessary for solving a given problem. A low number of particles minimizes convergence time but can lead to erroneous results, whereas a high number of particles leads to a more accurate solution but requires a significant computational effort. The algorithms that use a PF must implement the prediction and update stages of the filter. Several methods allow this problem to be solved, and they have been used in very diverse fields [81].
One of the most popular metric SLAM methods based on a hybrid technique that
combines particle filters and EKFs is the FastSLAM algorithm [10]. It was initially
proposed with the purpose of providing an alternative to EKF-based SLAM methods and
thus, solving the problems related to the computational complexity and the need of using
non-Gaussian distributions for representing the uncertainty. Historically, FastSLAM 1.0
was the first version of this family of algorithms. The FastSLAM 2.0 version introduced
several major modifications in order to improve the previous version.
The original algorithm, which is described in depth in Montemerlo’s PhD thesis [82], tries to solve the problems derived from using the EKF-based SLAM methods by factorizing the posterior into the product of the posterior probabilistic distribution of the trajectory followed by the robot, and the posterior distributions of the landmarks conditioned to such trajectory estimation:

$p(s^t, \Theta \mid z^t, u^t, n^t) = p(s^t \mid z^t, u^t, n^t) \prod_{n} p(\theta_n \mid s^t, z^t, u^t, n^t)$ (2-6)

where $s^t$ is the path followed by the robot up to the current time, $\Theta$ is the map, $z^t$ is the set of the current observations, $u^t$ is the set of control commands, $n^t$ describes the data associations made between the observations and the features present in the map and $\theta_n$ refers to each feature.
The FastSLAM method does not estimate the robot pose at time $t$, but the full path followed by the robot until the current time. In order to calculate the probabilistic distribution of the trajectories followed by the robot, a particle filter is used. A desirable property of the PF is that the amount of computation required for each incremental update is constant, independently of the path length. Furthermore, this kind of filtering technique allows non-Gaussian distributions, which can be easily derived from non-linear models, to be manipulated.
On the other hand, the remaining distributions involved in the factorization procedure, corresponding to each specific feature or landmark, are calculated by using low-dimensional EKFs, since each individual EKF only estimates a single feature. Moreover, each EKF is conditioned by the robot pose; thus, each particle has its own set of EKFs. If $M$ is the number of particles and $K$ is the number of features, $M \cdot K$ is the total number of EKFs present in the algorithm.
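The data layout of the filter can be sketched as follows; the structure (an illustration, not any particular FastSLAM implementation) simply reflects the fact that every particle carries a pose hypothesis, an importance weight and its own set of low-dimensional landmark EKFs:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Landmark:
    mu: np.ndarray        # 2D landmark position estimate (one low-dimensional EKF)
    Sigma: np.ndarray     # 2x2 covariance of that single landmark

@dataclass
class Particle:
    pose: np.ndarray                       # hypothesis of the robot path endpoint
    weight: float = 1.0                    # importance factor of the hypothesis
    landmarks: dict[int, Landmark] = field(default_factory=dict)

# M particles, each carrying its own per-feature EKFs: with K features the
# filter maintains M * K small EKFs in total, as noted above.
M = 100
particles = [Particle(pose=np.zeros(3)) for _ in range(M)]
```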
2) Computational Intelligence applied to metric SLAM.
Although the Bayesian approaches to the SLAM problem are the most popular and they
are widely developed, there are alternative SLAM methods based on Computational
Intelligence techniques, such as Fuzzy Systems, Artificial Neural Networks (ANNs) and
Genetic Algorithms (GAs). This subsection outlines several significant contributions to the
autonomous navigation field from the Computational Intelligence perspective.
In [83] a real application that uses a Fuzzy System together with a GA to implement a SLAM method is proposed. Such a method is named Fuzzy-Evolutionary SLAM and transforms the SLAM problem into a global optimization problem where the goal function analyzes the quality of the robot pose with respect to a local map within a partially built global map of the setting. The GA is used to find the optimal robot pose. The knowledge about the problem domain is pre-processed by a fuzzy system. This allows the GA to evolve towards a specific region of the search space and, consequently, the process is accelerated. In particular, the fuzzy system models the uncertainty in odometry by using a knowledge base that is useful for making predictions about the robot pose after executing a control command. The uncertainty is described by a fuzzy system based on samples, where each sample is a hypothesis about the observed world. The hypothesis processing is carried out by the GA by means of the natural selection principle, which leads to selecting the best individuals and offers an iterative solution to the data association problem. In contrast to the resampling techniques found in particle filters, the GA carries out the resampling process in an elitist way, since it does not permit several copies of the same individual to be present in the next generation. The selection procedure (based on crossover and mutation operations) generates a set of samples where each element represents a unique hypothesis about the world. An Island Genetic Algorithm (IGA) is selected to represent the population. This method has an inherent ability to maintain and process multiple hypotheses. In particular, this characteristic is used to solve the loop closure problem.
In [15] the authors study the complexity and diversity of indoor environments and their influence on the SLAM problem. An approach based on the use of Fuzzy Self-Organizing Neural Networks (Fuzzy SONN) is proposed. In this case, the map building process is equivalent to generating a neural network whose structure and parameters have to be learnt. Although the obtained results are promising, the researchers leave many future research lines open with the purpose of enhancing the proposed technique.
In [84] a partially fuzzy solution to the SLAM problem is presented. The evolution of the
robot motion is modelled by taking into account the data provided by the odometric
system when the control commands are applied. The non-linear process and observation
models are formulated as pseudo-linear models approximated by local linear models
according to the Takagi-Sugeno (TS) fuzzy approach. The combination of these local state
estimates provides a global state estimation. A linear discrete Kalman filter is used to
provide local estimates of the robot and the landmark locations for each local linear model
defined in the TS fuzzy model. The proposed algorithm shows that it is possible to
implement the Kalman filter without using Jacobian matrixes for linearization.
In [16] the authors highlight some disadvantages of the EKF related to the need of previously having some knowledge about the covariance matrixes of both the process model and the measurement model. Inaccurate knowledge of these matrixes can diminish the long-term performance of the filter. In order to solve these problems, an adaptive neurofuzzy Kalman filtering algorithm is proposed. Such a filter attempts to estimate the elements of the measurement noise covariance matrix at each sampling instant when a measurement update step is carried out. The neurofuzzy-based supervision of the EKF algorithm is carried out with the aim of reducing the mismatch between the theoretical and the actual covariance of the innovation sequences. The parameters of the neurofuzzy system are learned offline by using particle swarm optimization in the training step. The proposed scheme is developed under the assumption that the a priori knowledge about the process noise covariance matrix is accurate enough. However, there are circumstances where this knowledge is inaccurate. Therefore, the choice of the basis on which such a matrix should be adapted and how to configure an efficient neurofuzzy system for such a situation is still a challenge.
3) Topological SLAM.
Since the late eighties, a great research effort has been devoted to the development of
efficient algorithms to solve the problem of metric SLAM in indoor and outdoor
environments. Currently, a number of research lines related to this problem are still open,
and the development of hybrid strategies (cognitive, semantic and topological) is becoming
very relevant.
From the topological point of view, the construction of a map of the environment is not intended to provide a metric representation of the setting, but to identify specific places connected in some way to each other. Topological maps are generally described with graphs where nodes represent specific locations within the environment, and arcs stand for the possibility of navigating between such locations. The description of the local environment stored in each node is frequently of a geometric nature. If this is the case, the topological map connects local metric maps, possibly created with the classic
SLAM techniques summarized in the preceding section. This hierarchical structure makes
the global SLAM problem more tractable from a computational point of view, even in very
large environments.
The concept of a topological map is not new, mainly in the context of path planning, which implies the definition of a starting point, or initial pose, the goal point and the set of intermediate steps needed to reach such a goal safely. Robot navigation safety can be assessed through the detection of the free space. In this respect, in addition to feature-based metric SLAM strategies, there also exist many approaches that use occupancy grid maps as the environment representation. In this kind of map the workspace is divided into cells with a certain probability of being occupied [85]. The main problem of this representation is the need for a large storage space, thereby leading to remarkably non-compact implementations. A frequently used kind of topological map is called a roadmap (RM) [8].
From an algebraic point of view, a set of one-dimensional curves is considered a roadmap if the following properties hold for every pair of points $q_{start}$ and $q_{goal}$ in the free space $Q_{free}$:

• Accessibility: There exists a path from $q_{start}$ to some point $q'_{start}$ within the RM.

• Divisibility: There exists a path that allows to connect some point $q'_{goal}$ within the RM to the point $q_{goal}$ in $Q_{free}$.

• Connectivity: There exists a path within the RM linking $q'_{start}$ and $q'_{goal}$.
Figure 2-5 shows an example of an RM, since it is possible to reach any goal point using the
roadmap. In [86] the most relevant types of roadmaps are described in depth: visibility
maps, deformation- or retract-based maps (deformation retracts, retract-like structures and
piecewise retracts) and silhouette methods, see Figure 2-6.
Figure 2-5. Example of the accessibility, divisibility and connectivity properties that hold in
a roadmap.
Figure 2-6. Examples of roadmaps: Visibility graph, GVD and Silhouette method,
respectively.
In the visibility maps, the nodes are the vertexes of the polygons that define the objects present in the setting. Deformation or retraction-based maps are based on the skeletonization of the free space; the Generalized Voronoi Diagram (GVD) is the most relevant example of this kind of roadmap. Finally, silhouette maps are built by the repeated projection of a hyperplane on the multidimensional free space of the robot. As the hyperplane sweeps such space, the so-called critical points are marked over the objects that define the occupied zones of the setting [87], [88], [89]. If the workspace is two-dimensional, the hyperplane becomes a line, and the critical points are defined by their Cartesian coordinates $x$ and $y$.
a) GVD-based topological SLAM.
The generalized Voronoi graphs are widely used for the construction of topological maps, and are based on the concept of the Voronoi diagram [90].
Voronoi Diagrams (also called Thiessen Polygons or Dirichlet Tessellations) are a geometric construction that allows partitions of the Euclidean space to be built given a set of control points. Such partitions are created by drawing the perpendicular bisectors of the segments that join pairs of control points. The intersections of such bisectors determine a set of polygons. Each polygon defines a bidimensional environment around a control point; the perimeter of the polygons is equidistant to the closest pair of control points, and defines their influence radius.
Let $S$ be a discrete topological set of points in a Euclidean space. For every point $p$ there exists at least another point in $S$ that is the closest to $p$. Let $S$ be composed of only two points, as shown in Figure 2-7. In this simple case the space can be divided into two regions whose frontier is the perpendicular bisector of the segment that joins both points. All the points located to the right of such frontier are closer to one of the points, while the rest are closer to the other.

If a new point is added to $S$, each pair of points is joined by a segment, and its perpendicular bisector is calculated. The resulting three bisectors allow a partition of the space to be built where each control point is associated to an area called a Voronoi cell (see Figure 2-8).
In the general case, $S$ contains $n$ points; for every point $p_i$ in $S$, $V(p_i)$ is the Voronoi cell or region associated to such point, so any point within it is closer to $p_i$ than to any other point in $S$. The set of all Voronoi cells is the Voronoi Diagram. The segments that define each cell are called edges, the points where two or more edges intersect are called vertexes, and the points in $S$ are the generator points. In the following, the properties of Voronoi Diagrams are summarized:
Figure 2-7. Example of space partition when S contains two points.
Figure 2-8. Example of space partition when S contains three points.
• Two points $p_i$ and $p_j$ are neighbours if they are separated by an edge. Such an edge is the perpendicular bisector of the segment that joins both points.

• A vertex is a point equidistant to three generator points $p_i$, $p_j$ and $p_k$; thus, it is the intersection point of three edges.

• In general, a Voronoi cell is a convex polygon or an unbounded region.

• A Voronoi cell is unbounded if its generator point belongs to the convex hull of the point cloud contained in $S$.

• No generator point can lie within the circle centered in a Voronoi vertex and passing through three generator points.
This last property is related to the very important duality between Voronoi diagrams and the Delaunay triangulation. The relationship between both concepts is particularly interesting for the topological SLAM problem, because the free space can be partitioned into a set of disjoint cells by applying this kind of triangulation. A Delaunay triangulation is a net of triangles that joins a cloud of points $S$. Every triangle in the net verifies the Delaunay property, i.e., its circumscribed circumference does not contain any vertex of another triangle (see Figure 2-9). Once the Delaunay triangulation has been obtained, the Voronoi diagram can be traced by simply joining the circumcenters. Every circumcenter is a vertex of the diagram, and the segments that join them are the edges of the Voronoi cells or regions (see Figure 2-10).
Figure 2-9. Example of triangulations. Only the triangulation on the right fulfils the
Delaunay condition.
Figure 2-10. Delaunay triangulation and its dual Voronoi diagram.
Both the Delaunay triangulation and the Voronoi diagram are well-known Computational
Geometry techniques, and are used in a wide variety of problems. Many researchers have
focused on the improvement of the computational cost of the algorithms used to generate
these tessellations, [91], [92], [93].
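Both constructions are directly available in common computational geometry libraries; the following sketch uses scipy.spatial to build the Delaunay triangulation of a random point cloud and its dual Voronoi diagram:

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

points = np.random.default_rng(0).uniform(0, 10, size=(20, 2))  # generator points

tri = Delaunay(points)       # net of triangles satisfying the Delaunay property
vor = Voronoi(points)        # its dual: cells, edges (ridges) and vertices

# In general position, each Voronoi vertex is the circumcenter of a Delaunay
# triangle, and each Voronoi ridge separates two neighbouring generator points.
print(len(tri.simplices), len(vor.vertices))
print(vor.ridge_points[:3])  # pairs of neighbouring generator points
```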
A generalization of the Voronoi diagram, the Generalized Voronoi Diagram, has been frequently used in topological SLAM research because of its good properties.
The first applications of GVDs to topological mapping were developed by Howie Choset [4], [5], [94], [95], [96], [6]. In the experiments, the robot is equipped with a number of range sensors to measure distances to the environment obstacles. The exploration of the environment allows the construction of a graph embedded in the free space. Such a graph amounts to a topological map with the properties of a roadmap that can be used for path planning.
While Voronoi diagrams are built from a set of generator points, the extended definition of the GVD takes into account that real obstacles are not isolated points. Thus, a generalized Voronoi region $F_i$ is defined as the closure of the set of points nearest to a specific obstacle $C_i$:

$F_i = cl\{ q \in Q_{free} : d_i(q) \le d_j(q)\ \forall j \ne i \}$ (2-7)

where $d_i(q)$ is the distance from $q$ to obstacle $C_i$. The basic building block of the graph is the set of points equidistant to two obstacles in the setting, $C_i$ and $C_j$. This set can be formally defined as $S_{ij} = \{ q \in Q_{free} : d_i(q) - d_j(q) = 0 \}$; the set can be further restricted to those points that are closer to both obstacles than to any other obstacle in the environment. This locus is named a face, and can be defined as:

$F_{ij} = \{ q \in S_{ij} : d_i(q) \le d_h(q)\ \forall h \ne i, j \}$ (2-8)

Finally, the GVD is the union of all faces:

$GVD = \bigcup_{i} \bigcup_{j \ne i} F_{ij}$ (2-9)
The definition of the GVD can be extended to multidimensional spaces. It can be shown that the GVD partitions the free space into regions such that the points inside each region $F_i$ are closer to the obstacle $C_i$ than to any other obstacle in the environment. If the graph is built on a plane, the faces $F_{ij}$ are called edges. Such edges end in the meeting points, or points equidistant to three or more objects. Finally, the boundary points are defined as those whose distance to the nearest object is zero (see Figure 2-11).
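On an occupancy grid, the faces of equation (2-8) can be approximated with per-obstacle distance transforms. The following sketch (a grid approximation, not Choset's incremental construction) keeps the free cells whose two nearest obstacles are nearly equidistant; the tolerance is a heuristic value in cells:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def grid_gvd(occupancy, tol=1.0):
    """Approximate GVD on an occupancy grid (1 = obstacle, 0 = free)."""
    obstacles, n = label(occupancy)             # connected obstacle components
    assert n >= 2, "need at least two obstacle components"
    # Distance from every cell to each obstacle C_i: d_i(q) in eqs. (2-7)/(2-8).
    dists = np.stack([distance_transform_edt(obstacles != i)
                      for i in range(1, n + 1)])
    dists_sorted = np.sort(dists, axis=0)
    equidistant = (dists_sorted[1] - dists_sorted[0]) < tol
    return equidistant & (occupancy == 0)       # approximate faces of the GVD
```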
Figure 2-11. Example of generalized Voronoi graph in a setting with three obstacles.
In [97] a procedure for building the GVD is described in detail. The GVD of a previously
unknown environment is obtained with the help of range sensors and a set of control laws
that define the exploration procedure. Previously visited places are recognized, and the
problem of loop closing is solved from a topological point of view. On the other hand, in
later works, the GVD generation techniques have been extended beyond the point-like
robot to models that include both robot position and heading [98] or even robots with a
specific shape [99].
b) Hierarchical topological SLAM.
In [100] a multilevel, multiresolution topological map construction technique named hierarchical ATLAS is presented. It is specifically designed for robots which work in large scale environments. The hierarchical ATLAS is structured in two levels: the high level map connects local, feature-based submaps by using a topological strategy based on the GVD. This kind of environment representation allows a metric description based on probabilistic techniques to be used locally, without the need for global data correlation. Thus, the exploration and map building activities can be easily extended to large scale environments with uncertainty.
Hierarchical maps basically exploit the advantages of graphs as a compact, node-based global representation of the environment, while local, feature-based maps allow a landmark-based signature of relevant places to be identified.
One of the first hierarchical representations was proposed by Kuipers in [7], where the
concept of hybrid map construction was defined. In his work on Semantic Hierarchy of Spatial
Representations a robust, qualitative method of exploration, map building and navigation in
large scale environments is envisioned. In contrast to many techniques that build a
topological representation from a pre-existent geometric description, the proposed method
directly defines places and paths from the interaction with the environment, and uses them
to build a topological network. Finally, the geometrical knowledge is integrated in this
structure. In [101], the method is extended; local metric maps are stored for places, and
passage zones or gateways are defined to interconnect them. Such gateways are the basis of
the topological map.
In [102] a metric-topological SLAM method is presented that uses a Bayesian probabilistic
framework to properly consider measurement uncertainty. The proposed technique is
based on the reconstruction of the robot path in a hybrid continuous-discrete state space
that combines topological and geometric representations. In addition, a unified Bayesian
inference method based on an analytical formulation of the problem is defined to take into
account uncertainty in large scale settings efficiently.
Currently, a great number of researchers have adopted this hybrid and hierarchical
structure for map representation and building, both indoors and outdoors [103], [104],
[105], [106].
2.4.2 Fuzzy logic-based localization and map building.
In the preceding pages, the metric, topological and hybrid approaches to the localization and map building problems have been summarized. In the case of metric SLAM, Bayesian techniques have been reviewed, with a brief reference to some methods based on Computational Intelligence concepts. On the other hand, Graph Theory and Computational Geometry can be applied to the topological SLAM problem. This section is devoted to describing some approaches based on the application of Fuzzy Systems to robotics.
In [107] Fuzzy Systems are applied to the SLAM problem using ultrasonic sensors. The
proposed method focuses on sensor data analysis and on the reconstruction of the outline
of the surrounding objects. As the setting is only partially observable, object reconstruction
is performed by using the few available measurements. Thus, there is an uncertainty
associated to the localization process that can be modelled with fuzzy sets. The
membership grades to the fuzzy sets represent not only the uncertainty, but also the
similarity degree when two object outlines are compared. This simplifies the merging of
similar outlines. A polygonal approximation of the environment is built using data from the
ultrasonic sensors. The position and orientation uncertainties of the basic segments that
partially define an object are modelled with fuzzy sets. These segments are called basic
fuzzy segments. The basic fuzzy segments that define a specific side of the same object are
clustered and combined to finally obtain a single fuzzy segment that partially represents the
outline of the object. The uncertainty is propagated from the basic fuzzy segments to the
fuzzy description of the object outline. This fuzzy description can be used both in the map
construction and localization problems.
In [108] the problem of robot localization with respect to a known map using
measurements from ultrasonic sensors is also addressed. Fuzzy sets are used to model the
sensors, while robot position and heading are estimated by fuzzy triangulation.
In [13] the problem of robot navigation in an unknown setting is addressed with fuzzy approaches. A fuzzy cell map is built as the environment representation, and path planning is solved by applying the A* algorithm on such a map. Ultrasonic and infrared sensors are fused by a virtual sensor and used for environment perception. In a second step, the results of [107] are extended to the construction of maps based on fuzzy segments. An additional extension to fuzzy segment map building techniques is described in [109], where a 2D laser scanner is used for perception.
The advantage of representing uncertainty with fuzzy systems stems from the possibility of
defining fuzzy sets for sensor modelling, including certain characteristics that could hardly
be described by more conventional techniques.
Finally, in [12] some interesting approaches to autonomous robot navigation are presented
that address the problems of map building, localization, obstacle avoidance and path
tracking among others with fuzzy system-based techniques.
2.5 Data association.
All the reviewed localization and map-building methods depend on a correct data association between the robot perception at a given time and the elements included in the map. Such elements may consist of artificial or natural landmarks. As the robot navigates through the environment, the objects in it are sensed from different points of view; some of them cease to be observed, while others begin to fall within the measurement range of the robot (see Figure 2-12). The ability to detect which elements are new and which are simply being observed from a different position is essential in order to perform the SLAM activity. This process is named data association or matching.
Solving this problem is also fundamental for robot position, heading and speed estimation
in the absence of odometry, and for local map registration when maps are built by different
robots in a multi-robot environment, among other problems.
Figure 2-12 shows an example of an environment representation based on polylines, or sets of connected segments, obtained from data sensed by a 2D laser scanner. At time $t$, the robot perceives two different objects whose outlines are described by two separate polylines.
Figure 2-12. Environment perception in two consecutive instants.
At time $t+1$, the robot has changed its position and heading slightly by $(\Delta x, \Delta y, \Delta\theta)$, so it can perceive not only the mentioned objects (now closer to it), but also a third object placed further away. In addition, one of the segments of the polyline that describes the object placed to the left of the vehicle is now longer than at time $t$.
In a local data association approach, given two sets of features sensed at consecutive instants, it is necessary to define a measure of similarity that makes it possible to find out which of them are new, which have been previously sensed, and which old features have to be updated in the map. It is frequently possible to use the current position estimate to express both sets of features with respect to a common reference system in order to align them. Figure 2-13 illustrates the data association problem for the features obtained in the example depicted in Figure 2-12.
In the global data association problem there is no information about the robot pose change
between the sensing of both sets of features. Many approaches are based on data
correlation in order to quantify the similarity of both sets.
Figure 2-13. Alignment of segments for the data association process using common
reference system.
The data association problem can be addressed with or without an initial stage where relevant features are extracted. Independently of the data association method, this first stage, if present, deeply influences the map building process, and also many other key tasks needed for autonomous robot navigation:

• Robust robot position, heading and velocity estimation in order to correct errors in odometry or replace it if unavailable.

• Fusion of individual local maps in order to generate a global map.

• Loop closing in SLAM tasks, which includes determining whether a place has been previously visited or not, and properly updating the map by taking this information into account.

• Object recognition, in order to provide the ability to build cognitive maps of the environment with semantic information.
The following subsections review some relevant research works in the field of data
association, with and without feature extraction.
2.5.1 Data association without previous feature extraction.
Some authors perform the data association process without any previous feature extraction
stage. For example, in [110] a data association method is proposed based on a geometric
hashing approach that uses invariant Euclidean features in order to associate an input scan
to a set of reference scans without initial alignment. The technique is applied to the global
localization problem, and takes into account curved elements. The raw scan obtained from
the 2D laser device is directly used as input for the matching stage.
The application of the Iterative Closest Point (ICP) algorithm [111] and its variants to solve the data association problem is frequent in the bibliography. Given two sets of 2D points $P$ and $Q$ obtained from two scans, the ICP algorithm can be used to align them. For each point in $P$, its closest neighbour in $Q$ is determined and chosen as the matching point. Next, the rotation matrix $R$ and the translation vector $t$ are calculated from the $N$ point-to-point associations using the least squares method to minimize the distance between matching points. The intermediate solution $(R, t)$ is applied to minimize the pose error between scans $P$ and $Q$. The procedure is repeated until the residual error is below a threshold $\varepsilon$ or the number of iterations is greater than a maximum $N_{max}$.
The disadvantages of the ICP method are its high computational cost and the possibility
that the algorithm converges towards a local minimum. Because of them, some researchers
have proposed improvements of the ICP algorithm, see [112] for example.
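As an illustration of the basic loop, the following minimal Python sketch (ours, not taken from the cited works) aligns two 2D point sets; the closest-neighbour search is brute force for clarity, and the rigid transform is obtained with the classic SVD-based least squares solution:

    import numpy as np

    def icp_2d(P, Q, max_iter=50, tol=1e-6):
        # Aligns point set P (N x 2) onto Q (M x 2); returns rotation R and
        # translation t such that P @ R.T + t approximates Q.
        R_total, t_total = np.eye(2), np.zeros(2)
        prev_err = np.inf
        for _ in range(max_iter):
            # 1. Association: brute-force closest neighbour in Q for each point of P.
            dists = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
            matches = Q[dists.argmin(axis=1)]
            # 2. Least-squares rigid transform (Arun/Kabsch method via SVD).
            mu_p, mu_q = P.mean(0), matches.mean(0)
            H = (P - mu_p).T @ (matches - mu_q)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:            # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_q - R @ mu_p
            # 3. Apply the intermediate solution and check the residual error.
            P = P @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = np.linalg.norm(P - matches, axis=1).mean()
            if abs(prev_err - err) < tol:       # converged (possibly to a local minimum)
                break
            prev_err = err
        return R_total, t_total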
In [113] an alternative method named Polar Scan Matching (PSM) is proposed. It belongs
to the class of point-to-point matching algorithms that work with 2D laser scanner data in
polar form. The coordinate system is centred on the rotation axis of the device's mirror,
with its X axis aligned with the zero angle measurement. The set of points, expressed as
polar pairs, is ordered by increasing angle. The newly sensed scan is compared to the
reference scan using an association rule based on the angular coordinate, or bearing, so no
search for matching points in a Cartesian reference system is needed, thereby reducing
both the complexity and the computational cost of the method.
2.5.2 Data association with previous feature extraction.
In many data association approaches, a feature extraction procedure is first applied on the
raw data obtained directly from the sensors. Then, a feature-based association algorithm
tries to align features extracted at different times.
In [114] a navigation experiment is presented where landmarks detected in the
environment are used to correct the odometry localization errors. A segment-based
environment representation is proposed to generate a global map. The data obtained by a
2D laser scanner are associated to segments in the global map by aligning individual points
to them. The displacement between sensed and map matching points is assumed to be
small.
In [115] a data association method is proposed for corner and straight line features
obtained from raw 2D laser scanner data. The association constraints for the set of
detected straight lines are applied in a specific order. First, the lengths of the detected (local)
and map (global) segments are compared. Then, the set of local lines is expressed with
respect to the global reference system by using the position and heading estimation
obtained from a hybrid localization system. After that, the heading angles of the local
straight lines relative to the X axis are compared to the angles of the global lines. In
addition, a maximum distance constraint is enforced between potentially matching lines.
Finally, in order to associate two lines, the local line must be placed within the limits of its
potentially matching global line. Corners are associated in a similar way. The matching
features are then used in the hybrid localization algorithm, whose results are compared
with those of a localization approach based solely on odometry.
In [116] a global localization method based on local map matching is presented. Each local
map consists of sets of aligned points built through an adaptive clustering technique. The
environment is described with segments generated from such groups of points. Some
segments are complete, because they represent occlusion-free real objects, while others are
incomplete, because they only partially describe some real elements. Complete segments are
defined by both endpoints, a central point, their orientation and their length, which is
invariant with respect to the point of view. Only complete segments are used in the data
association algorithm to generate a single global map from several local maps. Such maps
are ordered counterclockwise, and the data association algorithm is executed. As the data
association is only carried out on complete segments, its efficiency is improved.
In [107], [13] and [109] data association techniques based on the fusion of fuzzy segments are
described. A set of membership functions models not only the position and orientation
uncertainty of the segment, but also the reliability of segment-to-segment association.
2.6 Exploration of the environment, path planning and
path tracking.
The main goal of autonomous navigation is to provide the robot with the ability to move
safely through the environment. In this way, the robot will be able to perform other higher-level tasks, for example transportation of objects, surveillance or guidance support to
persons, among others.
Once the robot has gained access to a map, paths can be planned on it in order to move
from one place to another. Path planning is a widely researched subject, see for example
[117], [118], [119], [120], [121], [122], [123], [124]. Any path planning strategy has to
consider the possibility of finding unexpected mobile or fixed obstacles that must be
avoided, or of receiving new orders from the user that would compel the robot to change
plans. In these cases, techniques based on path deformation are usually useful and
compatible with real time operation. Fixed and mobile obstacle avoidance has also been the
subject of intense research efforts, some of them with remarkable success [125].
Once the path is planned, the robot can use path tracking algorithms in order to follow the
specified route. In fact, explicit path tracking can be considered the last component that
the robot has to execute during the autonomous navigation process, once a planned path
is available.
Usually, path tracking algorithms take as input an explicit path, and generate as output a list
of curvature references that the robot should execute in order to approximately reach the
sequence of poses that compose the specified route. Perhaps the simplest technique that
solves this problem is the Pure Pursuit algorithm, described in depth in [126]. This
algorithm needs as input a list of 2D positions that define the path with respect to a global
reference system. It also needs as input the current position of the robot. At each control
cycle, the algorithm chooses, among all the positions that define the path, a specific one
known as the goal point, placed at a distance L from the robot, called the lookahead
distance. The curvature command γ is calculated as the curvature that the robot should
apply to reach the goal point following an arc of circumference tangent to its current
trajectory:

γ = (2/L²)·x   (2-10)

where x is the X coordinate of the goal point relative to the robot-based reference system.
The factors x and 2/L² can be interpreted as the error signal and the gain of the control
law, respectively, as shown in [127]. The parameter L allows the gain of the control law to
be modified. A long lookahead will lead to a gentler sequence of curvature commands, at
the expense of a lower tracking precision. With a short lookahead, the tracking errors can
be reduced, but at the cost of a greater control effort; moreover, a sufficiently low
lookahead may render the control loop unstable. In addition, the non-holonomic
restrictions present in most mobile robots, as well as their manoeuvrability limits, will be
incompatible with some of the curvature commands generated by the pure pursuit
algorithm, so these commands will not be directly applicable.
On the other hand, this method is exclusively based on simplistic geometric calculations, so
the drive and steering dynamics are ignored and the path tracking performance will not be
optimal.
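For illustration, a minimal Python sketch of a single pure pursuit control cycle is given below. It is a sketch of the classic technique rather than the implementation of [126]: the robot heading is assumed to be measured from the global X axis, and the lateral offset of the goal point in the robot frame plays the role of x in equation (2-10).

    import math

    def pure_pursuit_step(path, pose, L):
        # path: list of (x, y) waypoints in the global frame.
        # pose: (x, y, heading) of the robot; L: lookahead distance.
        px, py, th = pose
        # Choose the goal point: first waypoint at least L away from the robot.
        goal = next(((x, y) for x, y in path
                     if math.hypot(x - px, y - py) >= L), path[-1])
        # Express the goal point in the robot-based reference system.
        dx, dy = goal[0] - px, goal[1] - py
        lat = -math.sin(th) * dx + math.cos(th) * dy   # lateral offset (error signal)
        # Equation (2-10): curvature command with gain 2/L^2.
        return (2.0 / L ** 2) * lat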
As a conclusion, a path tracking algorithm that leads to imprecise results will have a
negative impact on the overall navigation method, because the planned path will not be
correctly followed, and this will in turn affect the performance of the rest of the navigation
modules.
Some improvements have been proposed to overcome the limitations of the Pure Pursuit
controller. For example, in [128] a curve is used to characterize the piece of path between
the robot and the goal point. Other improvement approaches focus on techniques to
optimize the setting of the controller’s parameters [129]. Some research works also take
into account vehicle dynamics [130]. In this case, a sufficiently accurate mathematical
model is needed.
On the other hand, if the robot is simultaneously building a map, navigating and localizing
itself within the environment, it is necessary to consider whether robot guidance will be
autonomous, partially autonomous or teleoperated. Many research SLAM experiments are
indeed implemented by manually driving the robot. However, it should be possible to let
the robot make autonomous decisions about which places to explore. An intermediate
strategy is to allow the robot to explore the environment under some previously defined
general objectives, while also providing the possibility for a user to specify a new behaviour
or decision if he or she deems it necessary. These are the principles of the exploration
method developed in this thesis.
Finally, the problem of autonomous robot navigation is harder in cluttered environments
with many potentially occluding objects that are either mobile or can easily be manually
moved. This kind of setting is still challenging when the goal is to robustly build a hybrid
topological and metric map. In order to solve these problems, cognitive and semantic
components have been proposed to complement hybrid maps. This kind of representation
tries to include high-level concepts by clustering topological map nodes and enhancing the
map with descriptions closer to natural human language, as well as the set of
potential actions to execute as a function of the detected objects and the specified goal
[131].
2.7 Conclusion.
It can be concluded from the review of the most relevant approaches to autonomous robot
navigation in unknown environments that there exist a number of known techniques that
rather successfully solve partial problems such as SLAM, exploration, path planning and
path tracking.
Most of the reviewed approaches focus on the implementation of a specific component
within the overall architecture of autonomous navigation. In addition, researchers do not
usually consider the economic cost as a limiting factor for the development of experiments.
On the other hand, the computational cost of some approaches is high, so they are not
useful for real-time operation unless some additional efforts are devoted to the
optimization of the processing hardware.
This thesis tries to address the autonomous robot navigation problem in a unified manner.
A semantically labelled polygonal representation of the environment is the basis of a
topological map building method composed of nodes which represent selected zones of
the setting and contain metric information about them. Exploration is addressed with a
hybrid approach, in which the robot makes autonomous decisions as long as the user does
not issue any command. Finally, some contributions are made to the problems of path
tracking and kinematic and dynamic robot modelling.
All the algorithms are experimentally validated using real robots: a commercial robotic
platform (Pioneer) and a low-cost service robot prototype, BENDER, designed and built in
the context of this Thesis.
It's a dangerous business, Frodo, going out your door.
You step onto the road, and if you don't keep your feet,
there's no knowing where you might be swept off to.
J.R.R. Tolkien
Chapter 3
Reactive navigation using a description of the environment based on a
set of semantically labelled polylines.
In this chapter, a reactive navigation method for mobile service robots that work in an
unknown dynamic semi-structured environment is presented. This environment is
represented by a description based on semantically labelled polylines.
The information provided by a 2D laser scanner is used to build an environment
description based on line segments. The segments that belong to the same polyline are
connected by points that are characterized using low level semantic labels, which describe
particular spatial features, such as inside and outside angle corners, occlusion areas or
frontal and lateral walls, among others.
Next, a set of traversable segments is obtained by taking into account the semantically
labelled polygonal representation, after classifying the free space.
Finally, a set of feasible local safe paths is calculated by clustering and geometrically
approximating the set of middle points obtained from the traversable segments. The
algorithms designed to achieve these tasks are computationally efficient and they are
particularly appropriate when a low cost robot works in real time.
3.1 Introduction.
In this chapter, several algorithms that use the information sensed by a 2D laser scanner as
input are proposed. The output consists of a description of the environment based on line
segments. Such line segments are connected by semantically labelled points. In Chapter 2, a
summary about well-known methods that extract line segments from 2D laser data is
presented [67]. Specifically, these techniques are analyzed in detail in [49], which shows an
interesting comparison from a computational point of view.
Most existing methods shown in the literature are focused on estimating the shape of the
objects by using geometric elements such as lines and, to a lesser extent, curves [48], [58],
[75]. However, most of the algorithms mentioned do not take into account low level
semantic features that allow the connections between segments to be distinguished.
Some researchers use several elements such as corners, virtual corners and edges that seem to
acquire a certain semantic significance, as described in [56], [132]. Both works are focused
on extracting corners and a set of line and curve segments from which a set of parameters
is determined and included in classic metric SLAM algorithms. Nevertheless, they do not
clearly explain how to use such features, together with the geometric representation of a
setting, in real applications such as SLAM, reactive navigation, obstacle avoiding or path
planning, particularly when the environment is dynamic. The significance of features
extraction from a semantic point of view is not taken into account either when topological
maps are built.
In this Thesis, it is empirically demonstrated that the semantic classification of the
connections between segments facilitates many tasks, such as path planning, data
association or obstacle avoidance, among others, since it is possible to find correspondences
between concepts with a specific meaning (inside and outside angle corners or walls, for
instance) and their spatial location in the mobile robot reference system. In this way, the
set of polylines can easily be transformed into a structure of semantic labels that will be
used by computationally inexpensive navigation algorithms.
The following sections describe the set of algorithms that make it possible to obtain the
mentioned environment description. An efficient reactive navigation method based on that description
is also explained. All the proposed techniques have been validated in both a simulated
framework and the real robots Pioneer and BENDER. This last robot is described in detail
in Chapter 6.
3.2 Describing the environment using semantically
labelled polylines.
As mentioned before, the environmental information is sensed by using a 2D laser scanner;
specifically, a Hokuyo URG-04LX device whose main features are described in Appendix I.
Such information consists of a sequence of pairs (d, φ) measured at time t, where d is the
distance from the laser sensor to a point located at angle φ inside the circle swept by the
laser beam, see Figure 3-1. These data are written in polar form according to the laser
reference system.
From this representation, it is intuitively clear that if the differences in distance between a
set of consecutive samples are lower than a threshold, those samples probably define the
shape of some object.
Figure 3-1. Hokuyo URG-04LX laser device features.
Furthermore, if all the samples within a given angular interval [φa, φb] have distance values
equal to the highest range value M provided by the sensor, the area of the environment
located between φa and φb does not contain any obstacle within the measurement range of
the laser device. This area can be considered a saturated area (in which the laser device
provides the maximum range value for all the points), and it is bounded by the points
(M, φa) and (M, φb).
Taking into account the above definition, the global area sensed by the sensor can be
divided into a set of saturated and non-saturated subareas. Each saturated area is
represented by a segment whose ends are labelled as saturated points (SP). Saturated
segments do not represent real objects in the environment. In fact, it is not known what
can be found behind these segments unless the robot explores new regions following that
direction. However, if saturated areas are described using segments enclosed by labelled
points, we can easily unify the semantic representation system of the environment.
Non-saturated areas are associated with different real objects. Since there are occlusions
between elements found in the setting, all these areas should be analyzed and split again, if
necessary. Each non-saturated area is described by means of a polyline whose ends are
labelled as occlusion points (OC). The rest of the points included in the polyline are labelled as
inside angle corners (IAC) and/or outside angle corners (OAC). Table 3-1 summarizes the
set of semantic features used to label the representative points found during the
segmentation process. Figure 3-2 outlines this process.
The classic SEF technique [43], explained in Chapter 2, has been taken as the foundation of
several algorithms described in this Thesis. These algorithms obtain the successive
segmentations of the subareas according to the corresponding labels. The methods inspired
by the SEF technique are very suitable for real-time applications due to their simplicity and
their low computational cost.
Label                         Description
SP (Saturated point)          An end of a saturated area.
OC (Occlusion)                The first or the last position belonging to a set of points that describe a non-saturated area.
OAC (Outside angle corner)    A point inside a non-saturated area that defines an outside angle (convex) corner.
IAC (Inside angle corner)     A point inside a non-saturated area that defines an inside angle (concave) corner.
Table 3-1. Semantic labels for the representative points that bound the segments.
Figure 3-2. Global segmentation process based on searching semantic features.
The main drawbacks are noisy data and outliers. However, these problems can be solved
by using appropriate mean and median-based filters without significantly increasing the
global computational cost. Apart from these filters, a set of algorithms that simplify the set
of polylines and improve the quality of the semantic representation of the setting has been
developed, and it remains effective even in highly complex environments. The effectiveness of these
algorithms has been empirically tested using both a simulated framework and several real
robots in unknown dynamic semi-structured indoor settings. A wide range of real tests
have been carried out using both a Pioneer commercial robot and the low cost robot
BENDER.
3.2.1 Detection of saturated points (SP).
Let P be a sequence of N points in polar form at time t, where each pair Pi = (di, φi) is the
i-th sample and M is the highest range value provided by the 2D laser scanner. Algorithm
3-1 details the process that obtains the set of saturated areas. Figure 3-3 shows an example
of the application of Algorithm 3-1.
Figure 3-3. Example of saturated areas extraction after applying Algorithm 3-1.
Algorithm 3-1. Extracting saturated points.

Algorithm ExtractingSaturatedPoints (List P):
  i ← 0;
  while (i < N) do
    if (Pi.d = M) then
      Add Pi to SPs;
      i ← i + 1;
      while ((i < N) and (Pi.d = M)) do
        i ← i + 1;
      endwhile;
      Add Pi-1 to SPs;
    else
      i ← i + 1;
    endif;
  endwhile;
  return (List SPs);
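For reference, a direct Python transcription of Algorithm 3-1 could read as follows (a sketch; the function and variable names are ours, not part of the thesis software):

    def extract_saturated_points(scan, M):
        # scan: list of (d, phi) pairs sorted by angle; M: maximum range value.
        # Returns the points that bound each saturated area, as in Algorithm 3-1.
        sps = []
        i, n = 0, len(scan)
        while i < n:
            if scan[i][0] == M:              # first point of a saturated run
                sps.append(scan[i])
                while i < n and scan[i][0] == M:
                    i += 1
                sps.append(scan[i - 1])      # last point of the run
            else:
                i += 1
        return sps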
In this case, the robot senses three saturated areas located between several objects found in
the scenario. Each area is bounded by two saturated points SPi,j, where i represents the
number of the saturated area and j can be zero for the first saturated point or one for the
last one. From this representation, an ordered sequence of polylines is obtained according
to the angular value of each point. Polylines whose points are labelled as SP are named
saturated polylines, and they consist of only one segment. Figure 3-4 shows the list of
polylines after executing Algorithm 3-1. Such a sequence consists of three elements, where
each item is a polyline whose index identifies it inside the set. Each saturated polyline
consists of one saturated segment whose ends are its two saturated points SPi,0 and SPi,1.
3.2.2 Detection of occlusion points (OC).
Once the saturated areas have been obtained, the rest of the areas are considered
non-saturated. Each non-saturated area is not associated with a single object in the setting, since
there are occlusions between these objects.
Figure 3-4. Sequence of polylines obtained after executing Algorithm 3-1.
83
Chapter 3. Reactive navigation using a description of the environment based on a set of semantically labelled polylines
For this reason, we need to calculate the occlusion points (OC). In order to achieve this
task, Algorithm 3-2 is executed. As Figure 3-5 shows, there are two non-saturated areas.
These areas do not represent only two objects, since some objects partially hide others.
Algorithm 3-2 splits each non-saturated area, and new subareas delimited by two occlusion
points are generated. Three subareas are obtained from one of the non-saturated areas,
whereas the other generates only one subarea. Each resulting subarea, enclosed by a pair of
occlusion points, represents the visible part of a single object.
Algorithm 3-2. Extracting occlusion points.

Algorithm ExtractingOcclusionPoints (P-SPs, Um):
  i ← 1; N ← length(P-SPs);
  while (i < N) do
    dif ← |Pi-1.d - Pi.d|;
    if (dif > Um) then
      Add Pi-1 and Pi to OcPs;
    endif;
    i ← i + 1;
  endwhile;
  return (List OcPs);
Figure 3-5. Non-saturated areas associated with several objects that generate occlusions and
the obtained OC points.
In short, the full area is divided into seven subareas after executing both algorithms. The
three saturated areas are represented by the saturated polylines shown in Figure 3-4. One
of the non-saturated subareas is represented by a polyline that consists of only one segment.
However, the remaining subareas cannot be described using merely a line segment;
therefore, it is necessary to implement an algorithm that detects inside and outside angle
corners.
3.2.3 Detection of inside and outside angle corners.
The objects shown in Figure 3-5 share a common feature: all of them include an outside
angle (convex) corner (OAC). OAC points are convex from the robot point of view. On
the other hand, concave points are named inside angle corners (IAC).
An OAC point is detected when there is a slope sign change in the distance function d(φ)
at a given sample, where φ is the angle and d is the distance from the laser to the sensed
point, see Figure 3-6. Algorithm 3-3 detects this kind of corner using a subarea as input.
IAC points are detected using the same technique proposed in Algorithm 3-3. In this case,
the distance increases until a maximum value is reached. At this point, the sign of the
slope of the distance function d(φ) changes, so this location is considered an IAC point,
as Figure 3-7 shows.
Algorithm 3-3. Extracting candidate outside angle corners.

Algorithm ExtractingCandidateOutsideAngleCorners (Zi, u):
  ini ← -1;
  end ← -1;
  m ← ((Pu.d - P0.d) < 0);
  for k ← u + 1 to N
    if ((Pk.d - Pk-u.d) > 0) then
      if (m = true) then
        end ← k;
        if ((ini <> -1) and (end <> -1)) then
          if (search outside angle corner) then
            Search the point (with index between ini and end) whose distance is minimal;
          endif;
          Add point to ListPossibleOAC;
          ini ← -1;
          end ← -1;
        endif;
      endif;
      m ← false;
    else
      ini ← k - u;
      m ← true;
    endif;
  endfor;
  return (ListPossibleOAC);
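As a complementary illustration, the slope-sign-change idea behind the corner detectors can be condensed into the following Python sketch (our simplified formulation, with a hypothetical window parameter u and noise tolerance eps):

    def candidate_corners(d, u=2, eps=1e-3):
        # d: list of range values ordered by angle; u: comparison window;
        # eps: noise tolerance. A local minimum of d(phi) suggests an OAC
        # (convex) corner; a local maximum suggests an IAC (concave) corner.
        oac, iac = [], []
        for k in range(u, len(d) - u):
            left = d[k] - d[k - u]           # slope before the sample
            right = d[k + u] - d[k]          # slope after the sample
            if left < -eps and right > eps:
                oac.append(k)                # distance minimum -> OAC
            elif left > eps and right < -eps:
                iac.append(k)                # distance maximum -> IAC
        return oac, iac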
Figure 3-6. Detection of an outside angle corner OAC according to the slope sign change.
Figure 3-7. Detection of an inside angle corner IAC according to the slope sign change.
The sequence of IAC points is merged with the list of OAC points, and the resulting
sequence is sorted according to the angular value of each point. The set of IAC and OAC
points is inserted between the corresponding occlusion points. The result is a semantically
labelled polyline which appropriately describes an area of the environment. All the points
included in a polyline are expressed in the coordinate system of the robot.
Any polyline can be defined as a set of segments, where each segment is defined by its two
end points, the number of the polyline and the number of the segment inside it. Each pair
of consecutive segments is therefore defined by a triplet of points, where each point carries
the appropriate label that describes it: SP, OC, OAC or IAC. For instance, according to this
syntax, the representative polyline of one of the objects in the example consists of two
segments whose shared point is labelled as OAC. However, such a point may be wrongly
labelled, since it is not a real convex corner. This happens because the corner detection
algorithm has found a slope sign change at the point where the laser beam grazes the
object.
In fact, if a line is traced between the mislabelled point and its preceding neighbour, and
another line is drawn from the mislabelled point to its following neighbour, the angle
formed by both lines has a value close to 180 degrees.
As mentioned before, due to noisy raw data and outliers, the use of filters is needed in real
scenarios. However, filtering does not eliminate the noise problem completely, so the
proposed corner detection algorithms could produce wrong corners in some
circumstances. A computationally efficient simplification algorithm (Algorithm 3-4) has
been implemented to solve this problem. This algorithm considers that each corner point is
an end of two consecutive segments; if both segments form an angle close to 180 degrees,
the corner is eliminated. Field tests empirically prove the effectiveness of this algorithm.
Figure 3-8 shows the set of polylines obtained after detecting the IAC and OAC points for
the scenario described in Figure 3-5.
3.2.4 Detection of frontal and lateral walls.
Once the set of simplified polylines is obtained, the points that represent frontal and lateral
walls are added to the semantic description of the environment.
Figure 3-8. List of polylines obtained after executing the algorithms that detect several
features.
Algorithm 3-4. Simplifying a polyline.

Algorithm SimplifyingPolyline (P, u):
  simplified ← true;
  while (simplified)
    simplified ← false;
    i ← 0;
    while ((i < N-2) and (simplified = false))
      d1 ← EuclideanDistance(P.pi, P.pi+1);
      d2 ← EuclideanDistance(P.pi+1, P.pi+2);
      d3 ← EuclideanDistance(P.pi, P.pi+2);
      if ((|(d1 + d2) - d3| < u) and (P.pi+1 is labelled as IAC or OAC)) then
        Eliminate point P.pi+1;
        N ← N - 1;
        simplified ← true;
      endif;
      i ← i + 1;
    endwhile;
  endwhile;
  return (P);
The new labels are FW (frontal wall) and LW (lateral wall):
• Any point is labelled as FW if two conditions are met. The first condition states that the perpendicular line defined by the robot position and that point should cut a segment inside the polyline. The second condition states that this perpendicular line should form an angle with the X axis of the reference system whose value lies inside a given interval.
• Any point is labelled as LW if the first condition mentioned above is met and the perpendicular line forms an angle with the X axis of the reference system whose value lies outside that interval.
The representative points FW and LW are particularly interesting in several circumstances.
For instance, when the robot is located in a closed area, it is possible to decide a control
action by taking into account the triangle whose vertices are the set of FW and LW points.
Figure 3-9 shows a potential scenario as observed by the robot. In Figure 3-9 (a) there is an
object at the end of the corridor and in Figure 3-9 (b) the same corridor is empty. In both
cases the setting consists of a frontal wall and two lateral walls that compose a closed
corridor. Therefore, there are two LW points and one FW point that form the triangle
whose circumscribed circumference could be used to enable the manoeuvre that allows the
robot to explore and leave the closed area efficiently. On the other hand, the relation
between the vertices of the triangle is the same in both Figure 3-9 (a) and Figure 3-9 (b),
although the set of polylines describing the environment is different. Therefore, the triangle
could also be used to help the robot to recognize the place, even when a small object has
been introduced in the setting.
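For instance, the circumscribed circumference of that triangle can be obtained with the classic circumcentre formula; the following Python sketch (ours, not part of the thesis software) computes its centre and radius from the FW and LW vertices:

    def circumcircle(a, b, c):
        # Centre and radius of the circle through the three triangle vertices.
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if d == 0:
            raise ValueError("collinear points")
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        r = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
        return (ux, uy), r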
Figure 3-9. Scenario where points labelled as FW and LW are used to enable an exploration
maneuver in a closed corridor.
In the following sections, we describe how to use the set of labelled polylines to solve
several problems related to autonomous navigation in an unknown dynamic semi-structured environment.
3.3 Reactive navigation algorithm.
The representation of the environment described above can be used to implement high
level algorithms that work efficiently in real time. In this section, a reactive navigation
method based on the semantically labelled polygonal representation of the environment is
described. The proposed technique calculates a set of traversable segments using the points
labelled as OC and OAC. For each traversable segment, the middle point is computed.
Finally, all the middle points are clustered and approximated using line segments. These
line segments locally define feasible safe paths (or routes) that the robot could follow. The
selection of the best route is carried out by a high level algorithm that depends on the
specific objectives of the final application.
3.3.1 Computation of traversable segments.
Once the set of polylines that describe the environment is calculated, it is possible to obtain
information about the available free space through which the robot can safely navigate.
This section addresses this issue; specifically, the algorithm that calculates safe traversable
segments is detailed. The environment shown in Figure 3-10 represents a typical setting
sensed at time t. The dotted lines define several traversable segments obtained after
relating pairs of labelled points that belong to different polylines.
Figure 3-10. Feasible traversable segments found in an environment described using
polylines consisting of SP, OC, OAC and IAC points.
Some elements, such as frontal and lateral walls (FW and LW) and IAC points, are not
taken into account to calculate this kind of segment. The IAC points are non-convex from
the robot point of view; therefore, instead of using IAC points, any neighbouring OC or
OAC point will be selected. On the other hand, FW and LW points are also considered
non-convex points, so their adjacent OC or OAC points are selected instead. Finally, any
saturated polyline will be considered a traversable segment, since its only segment, whose
ends are labelled as SP, defines an exploration border at time t. In short, the proposed
technique only takes into account the OC and OAC points to calculate the set of segments
that can be safely crossed.
Algorithm 3-5 calculates the set of feasible traversable segments using pairs of points
labelled as OC and OAC.
Algorithm 3-5. Generating feasible traversable segments.

Algorithm GeneratingFeasibleTraversableSegments (List of polylines P):
  for i ← 0 to N
    for j ← i+1 to N
      for k ← 0 to Pi.Npoints
        for l ← 0 to Pj.Npoints
          if ((Pi.pk is labelled as OC or OAC) and (Pj.pl is labelled as OC or OAC)) then
            Generate a feasible traversable segment enclosed by the points Pi.pk and Pj.pl;
          endif;
        endfor;
      endfor;
    endfor;
  endfor;
  return (List Pass_S);
Figure 3-11 shows the results obtained after executing Algorithm 3-5 using as input the list
of polylines which represents the environment shown in Figure 3-10. The traversable
segments calculated by this algorithm must be evaluated and filtered, since the ultimate list
of traversable segments should consist only of those segments that can be safely crossed.
In order to achieve this goal, the following conditions should hold (a sketch of these filters
is given after the list):
• The length of any traversable segment should be greater than a given threshold. This threshold can be calculated by taking into account the size of the robot.
• Features very distant from each other define very long segments. These segments are not interesting (they are usually redundant), and they are not taken into account. The threshold that discriminates these segments is heuristically defined. Any segment whose length is greater than this threshold is eliminated.
• If a traversable segment cuts any polyline which is part of the environment description, such a traversable segment is not considered safe and it is also eliminated.
• Let O be the origin point of the robot coordinate system and let s be a traversable segment delimited by the points A and B. If any point of any polyline that describes the environment lies inside the triangle with vertices O, A and B, the segment s is also eliminated, see Figure 3-12.
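The following Python sketch illustrates the length thresholds and the triangle condition described above (the polyline-intersection test is omitted for brevity; names and parameters are illustrative assumptions):

    import math

    def point_in_triangle(p, a, b, c):
        # Same-side sign test: p is inside (or on) triangle abc if the three
        # cross products share a sign.
        def cross(o, u, v):
            return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
        s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

    def keep_segment(a, b, polylines, min_len, max_len, origin=(0.0, 0.0)):
        # Length thresholds: robot size (min) and redundancy heuristic (max).
        length = math.dist(a, b)
        if not (min_len <= length <= max_len):
            return False
        # Triangle condition: no polyline vertex inside triangle (origin, a, b).
        return not any(point_in_triangle(p, origin, a, b)
                       for poly in polylines for p in poly)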
3.3.2 Generation of feasible safe paths from the set of
traversable segments.
Once the set of traversable segments is obtained, the middle points of the segments are
calculated. Only middle points located farther than a given distance from any segment that
belongs to the set of polylines are taken into account.
Figure 3-11. Calculated segments after executing Algorithm 3-5.
Figure 3-12. Example that shows how to take into account the triangle condition.
Next, the set of valid middle points is clustered. Each cluster is represented by means of a
line. Both the clusters and the lines are updated in each iteration. The reactive navigation
algorithm selects only one line at a time using a fuzzy system, according to several
conditions defined by a high level software component. A goal point is obtained from the
selected line. That point is used as input by a classical pure pursuit algorithm. If there are
conditions in the environment that would lead the robot to a collision when it tries to
reach the calculated target point, a collision avoidance algorithm is triggered. The following
subsections explain how to address all these issues in detail.
1) Clustering the set of middle points.
The measurement range of the laser beam limits how far the objects can be in order to be
detected. The region covered by the laser beam usually spans 180 degrees, although typical
Hokuyo devices can cover 240 and even 270 degrees. As the experiments have been
carried out using a Hokuyo URG laser device, an arc of 240 degrees could be sensed.
However, this range has been reduced to 180 degrees to maintain compatibility with
other laser devices. Therefore, the set of polylines obtained from raw data describes what
the robot "sees" inside a semi-circle whose radius is at most 4 meters. All the middle
points obtained from the traversable segments are placed inside this semi-circle. In order to
cluster the set of middle points, the semi-circle is split into several sectors. Figure 3-13 shows
a partition of this semi-circle using five angular regions of the same size. Each region is
delimited by two angular values, and each middle point is written in polar form. All the
middle points whose angular values fall inside a given interval belong to the angular region
defined by that interval. Therefore, all the middle points of a specific region form a cluster.
Figure 3-13. Clustering middle points using several uniformly distributed angular regions.
Orange-colored points are wrongly clustered.
However, a static division of the semi-circle is not reliable enough, because some interesting
semantic features of the environment description are not taken into account. For example,
Figure 3-13 shows that several middle points fall in the same cluster even though, according
to the shape of the polygonal representation of the environment, they are wrongly
clustered. This happens because there are several labelled points that are convex from the
robot point of view, and these points define significant exploration areas such as corridors,
for instance. It is therefore more suitable to use the angles of these points as the boundaries
of the intervals that define the angular clusters.
This technique defines the set of angular regions dynamically; therefore, the number and
size of the clusters will depend on the scenario. In order to calculate the set of convex
points, a closed polygon is generated from the set of polylines that describe the
environment. A sorted list of convex points from the robot point of view is obtained.
Consecutive pairs of these points are taken to form the intervals that define each cluster.
Figure 3-14 shows the set of convex points from the robot point of view that corresponds
to the scenario shown in Figure 3-13. Finally, the intervals obtained in that example are
shown in Figure 3-15.
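A minimal Python sketch of this dynamic angular clustering is shown below (ours; it assumes the boundary angles of the convex points are already sorted):

    import math
    from collections import defaultdict

    def cluster_middle_points(points, boundary_angles):
        # points: (x, y) middle points in the robot frame.
        # boundary_angles: sorted angles (radians) of the convex points that
        # bound the dynamically generated angular regions.
        clusters = defaultdict(list)
        for x, y in points:
            phi = math.atan2(y, x)                    # polar angle of the point
            for lo, hi in zip(boundary_angles, boundary_angles[1:]):
                if lo <= phi < hi:                    # point falls in [lo, hi)
                    clusters[(lo, hi)].append((x, y))
                    break
        return clusters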
If there are no middle points enclosed by a given angular region, the cluster defined by its
interval is not taken into account in the path generation algorithm. If the environment is
represented by only one semantic polyline, the setting in front of the robot is closed, and
this area does not need to be explored. Figure 3-15 shows five angular clusters. Such
clusters will be used to generate a set of feasible safe paths at time t, as described in the
following subsection.
Figure 3-14. Detecting convex points from the robot point of view.
Figure 3-15. Dynamic generation of angular clusters. Two of the clusters do not enclose
any middle point; of the remaining intervals, one encloses ten middle points, another
contains eight middle points and the last one forms a cluster with three middle points.
2) Generating feasible safe paths.
This subsection explains how to generate feasible safe paths, from which a goal point will
be selected to be followed by the robot, using the set of angular clusters described above.
Each angular cluster encloses a set of middle points that can be approximated by a straight
line whose parameters are calculated by an algorithm whose main steps are as follows (see
the sketch after this list):
• A pair of points is taken from the cluster. The parameters of the straight line that joins them are obtained.
• For all the points enclosed by the cluster, the perpendicular distance from each point to the straight line is calculated. The root mean squared error (RMSE) is computed from these distances. This procedure is repeated for each possible pair of points.
• The most representative straight line is selected as the one with the lowest RMSE value.
• Finally, for each cluster, another straight line is added; it joins the origin point, where the robot is located, and the representative line of the cluster.
Figure 3-16 shows the set of feasible paths or routes obtained after executing this algorithm
over the scenario shown in Figure 3-11. The representative straight lines of the clusters
provide local paths or routes that the robot can use in order to safely explore a given area
of the free space. The final selection of a specific route will depend on a high level
algorithm where several conditions defined by the user will be taken into account.
3.3.3 Reactive navigation method based on fuzzy techniques.
In order to carry out the experiments that validate the algorithms proposed in this Thesis, a
high level software component, which allows the robot to navigate safely, has been
designed. Taking into account a set of conditions defined by the user, this software
component analyzes the set of feasible routes calculated by the algorithm described above.
According to these conditions, a merit factor value is assigned to each straight line. The line
with the highest merit factor is then selected. A goal point that belongs to the selected line
and is located at a given distance from the robot is then followed by the robot using the
well-known pure pursuit technique [3]. If the steering command calculated by the pure
pursuit algorithm leads the robot to a potential collision, an emergency algorithm is
triggered. Moreover, velocity control is also carried out using a fuzzy-based method. The
execution of all these algorithms ensures that the robot will navigate safely through any
environment, even in the presence of dynamic obstacles.
Figure 3-16. Straight lines obtained after grouping the set of middle points taking into
account several dynamically calculated angular intervals.
1) Selecting the suitable route using a fuzzy system.
As mentioned before, the user of the high level application can select a set of features that
will be taken into account to select the most suitable route. The selected route has to meet
the following requirements: the number of middle points represented by the straight line
associated with the cluster should be maximum, the robot heading change required to
follow a new route (effort) should be minimum, and the length of the straight line that
represents the cluster should be maximum. This strategy maximizes the exploration
perspectives and minimizes the heading changes as much as possible. To reach this target,
a fuzzy system has been designed using the XFuzzy 3.0 tool [14]. For each cluster, the
following features are taken into account: the number of middle points in the cluster, the
length of the representative straight line of the cluster, and the angle formed by the Y axis
of the robot reference system and the segment that joins the origin of this reference system
with the projection of the farthest middle point over the representative line of the cluster.
Each value is normalized to the interval [0, 1] to facilitate the fuzzy system design. Figure
3-17 shows a diagram of the implemented fuzzy system.
The output of the fuzzy system is a merit factor value. The selected route is the one with
the maximum merit factor value. Table 3-2 shows the rule base whose inputs are
EFFORT, DISTANCE and NUMBEROFPOINTS and whose output is MERITFACTOR.
Figure 3-18 graphically shows the output of the rule base using some representative inputs.
The three inputs and the output are fuzzy variables whose types are trapezoidal fuzzy sets.
Figure 3-19 shows the distribution of the fuzzy values in the universe of discourse for each
fuzzy type.
Using a fuzzy system to select the suitable route provides several advantages: the user can
not only define the set of requirements to achieve, but can also give a different significance
to each feature by adjusting the fuzzy sets or by changing, erasing or adding any rule.
Moreover, this way of selecting the route is closer to human reasoning, because semantic
concepts such as NEAR, FAR, LOW or HIGH, among others, are used.
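As an illustration of the kind of inference involved, the following simplified Python sketch evaluates a merit factor from only two of the three inputs, using trapezoidal memberships and a weighted-average defuzzification; the partitions and rules are illustrative assumptions, not the actual XFuzzy 3.0 design:

    def trapmf(x, a, b, c, d):
        # Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear shoulders.
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    # Hypothetical partitions of the normalized [0, 1] universe.
    EFFORT   = {"LOW": (-1, 0, 0.2, 0.4), "MEDIUM": (0.2, 0.4, 0.6, 0.8),
                "HIGH": (0.6, 0.8, 1.0, 2)}
    DISTANCE = {"NEAR": (-1, 0, 0.2, 0.4), "MEDIUM": (0.2, 0.4, 0.6, 0.8),
                "FAR": (0.6, 0.8, 1.0, 2)}
    MERIT    = {"VERYLOW": 0.1, "LOW": 0.3, "MEDIUM": 0.5, "HIGH": 0.7,
                "VERYHIGH": 0.9}                # singleton output centres

    # A few illustrative rules in the spirit of Table 3-2 (not the full base).
    RULES = [(("LOW", "FAR"), "HIGH"), (("LOW", "NEAR"), "MEDIUM"),
             (("HIGH", "FAR"), "LOW"), (("HIGH", "NEAR"), "VERYLOW")]

    def merit_factor(effort, distance):
        # Inference with min as AND and weighted-average defuzzification.
        num = den = 0.0
        for (e_lbl, d_lbl), out in RULES:
            w = min(trapmf(effort, *EFFORT[e_lbl]),
                    trapmf(distance, *DISTANCE[d_lbl]))
            num += w * MERIT[out]
            den += w
        return num / den if den else 0.0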
Figure 3-17. Fuzzy system designed to select the suitable route according to several
conditions defined by the user in the high level application.
Figure 3-18. Fuzzy system inputs and outputs. Panels (a), (b) and (c) show the relation
between the inputs EFFORT and DISTANCE and the output MERITFACTOR when the
number of points is 0, 0.5 and 1, respectively. Panel (d) shows the relation between the
input NUMBEROFPOINTS and the output MERITFACTOR when EFFORT and
DISTANCE are 0.5.
Figure 3-19. Fuzzy sets used by the route selection fuzzy system. (a) Fuzzy variable
NUMBEROFPOINTS. (b) Fuzzy variable EFFORT. (c) Fuzzy variable DISTANCE. (d)
Fuzzy variable MERITFACTOR.
NUMBEROFPOINTS  DISTANCE   EFFORT: VERYLOW   LOW        MEDIUM    HIGH      VERYHIGH
LOW             NEAR               LOW       VERYLOW    VERYLOW   VERYLOW   VERYLOW
LOW             MEDIUM             MEDIUM    LOW        LOW       VERYLOW   VERYLOW
LOW             FAR                HIGH      HIGH       MEDIUM    LOW       VERYLOW
MEDIUM          NEAR               MEDIUM    LOW        LOW       VERYLOW   VERYLOW
MEDIUM          MEDIUM             HIGH      MEDIUM     MEDIUM    LOW       LOW
MEDIUM          FAR                VERYHIGH  HIGH       MEDIUM    MEDIUM    LOW
HIGH            NEAR               MEDIUM    MEDIUM     LOW       LOW       VERYLOW
HIGH            MEDIUM             HIGH      HIGH       MEDIUM    LOW       LOW
HIGH            FAR                VERYHIGH  VERYHIGH   HIGH      MEDIUM    LOW
Table 3-2. Rule base with NUMBEROFPOINTS, EFFORT and DISTANCE as inputs,
and MERITFACTOR as output (cell entries are MERITFACTOR values).
2) Selecting the suitable velocity using a fuzzy system.
As mentioned before, if the selected goal point leads the robot to a collision, an emergency
algorithm is triggered. Several conditions are taken into account to execute this algorithm:
the distance from the robot to any labelled feature belonging to the set of polylines is
considered, and the angle that the feature point forms with the Y axis of the robot
reference system is also used. Moreover, the velocity is adjusted according to the situation.
In order to select the suitable speed, a fuzzy-based algorithm has been implemented. In the
experiments, a minimum distance threshold (in meters) has been chosen.
First, the distance from each point belonging to the set of polylines to the robot is
calculated. Next, the closest distance to the robot is checked. If this distance is shorter than
the threshold and the point is located inside a given circular sector, the emergency
algorithm calculates a steering command that allows the robot to turn around using a very
low constant speed in the tests. The ends of the angular interval that defines this sector
depend on the kind of labelled point; for example, the interval used when the point is
labelled as IAC differs from the one used when the point is labelled as LW.
If the emergency algorithm is not being executed, it is still necessary to decide which
velocity is more appropriate. The fuzzy-based system shown in Figure 3-20 allows the
robot to select a proper velocity. It should be noticed that the fuzzy system consists of two
rule bases hierarchically connected. The output SPEED can be VERYLOW, LOW,
MEDIUM, HIGH or VERYHIGH. The first subsystem calculates the velocity (internal
variable SPEED) according to the inputs COLLISION and TARGET. COLLISION is
the angle defined by the Y axis of the robot reference system and the line that joins the
closest labelled point of a polyline with the robot. On the other hand, TARGET is the
angle defined by the Y axis of the robot reference system and the line that joins the goal
point with the robot. If there is no selected route whose goal point could be followed, the
value of this angle is zero.
The fuzzy set that defines the type of the fuzzy variables COLLISION and TARGET is
shown in Figure 3-21 (a). On the other hand, the types TMinDistance and TSpeed define
the type of the fuzzy variables MINDISTANCE and SPEED, respectively, see Figure 3-21
(b) and (c).
Figure 3-20. Fuzzy system designed to select the suitable velocity according to the
probability of collision, distance to the closest feature and target point.
Figure 3-21. Fuzzy sets used by the velocity selection fuzzy system. (a) Fuzzy variables
COLLISION and TARGET. (b) Fuzzy variable MINDISTANCE. (c) Fuzzy variable
SPEED.
Both rule bases have been designed to reduce the velocity properly when the robot is
approaching an obstacle. Furthermore, the fuzzy algorithm adjusts the speed according
to the steering command obtained from the pure pursuit algorithm. In order to do this, the
goal point and the closest labelled point detected by the robot are related. For example, if
there is a corner located in front of the robot slightly to the right (RIGHT would be the
selected fuzzy value), and the goal point is located to the right of the corner
(RIGHTRIGHT, for instance), the speed should be reduced sufficiently (LOW), because
the robot has to track a sharp curve to reach the goal point and, as it follows this curve, it
approaches the obstacle, see Figure 3-22. Table 3-3 details the set of rules that generates
the output SPEED using COLLISION and TARGET as inputs. All the inputs and the
output are normalized to the interval [0, 1]. Table 3-4 shows the set of rules that generates
the final output SPEED using as inputs MINDISTANCE and the output of the subsystem
whose rules are detailed in Table 3-3.
Figure 3-22. Example of situation where it is necessary to reduce the velocity.
COLLISION \ TARGET   RIGHTRIGHT  RIGHT     LEFT      LEFTLEFT
RIGHTRIGHT           VERYLOW     LOW       HIGH      VERYHIGH
RIGHT                LOW         MEDIUM    HIGH      VERYHIGH
LEFT                 VERYHIGH    HIGH      MEDIUM    LOW
LEFTLEFT             VERYHIGH    HIGH      LOW       VERYLOW
Table 3-3. Rule base with inputs COLLISION and TARGET and SPEED as output.
VEL \ MINDISTANCE   VERYSHORT  SHORT     MEDIUM   LONG      VERYLONG
VERYLOW             VERYLOW    VERYLOW   LOW      LOW       MEDIUM
LOW                 LOW        LOW       LOW      MEDIUM    MEDIUM
MEDIUM              LOW        LOW       MEDIUM   MEDIUM    MEDIUM
HIGH                LOW        MEDIUM    HIGH     HIGH      HIGH
VERYHIGH            LOW        MEDIUM    HIGH     VERYHIGH  VERYHIGH
Table 3-4. Rule base with inputs VEL and MINDISTANCE and final SPEED as output.
Note that the first fuzzy subsystem provides a speed value by taking into account only
the position of the goal point and the closest labelled point. However, the distance between
the robot and this labelled point is also relevant, so the second subsystem readjusts the
velocity according to this distance. Figure 3-23 graphically shows the behaviour of the full
system when MINDISTANCE is 0, 0.5 and 1, as well as the changes in SPEED according
to MINDISTANCE when COLLISION and TARGET are 0.5.
Figure 3-23. Fuzzy system inputs and outputs. Panels (a), (b) and (c) show the relation
between the inputs TARGET and COLLISION and the output SPEED when
MINDISTANCE is 0, 0.5 and 1, respectively. Panel (d) shows the relation between the
input MINDISTANCE and the output SPEED when TARGET and COLLISION are 0.5.
3.4 Experiments and results.
In this section, a set of experiments and results is presented. These tests validate the
technique used to extract the set of semantically labelled polylines, the method that
calculates the list of traversable segments and, finally, the fuzzy algorithms that allow the
robot to navigate. In order to carry out the mentioned experiments, a simulator and two
robotic platforms have been used. The simulator is based on the Player/Stage tool. With
respect to the tests performed in real unknown dynamic semi-structured environments,
both a commercial Pioneer robot and the low cost robot BENDER have been used.
The results obtained from these tests demonstrate that the algorithms work properly in real
time regardless of the robotic platform. Given that the low level software architecture has
been implemented using distributed components (which use the middleware YARP [133]
to exchange data), any higher level algorithm (in this case, the algorithm that obtains the
semantic polygonal representation of the setting), can interact with any low level software
component transparently. Therefore, the higher level components do not need to know if
they are working with a simulated or a real robot. This methodology can be used for both
the refinement process and the final experiments.
3.4.1 Validating the algorithm that extracts the set of
semantically labelled polylines.
In this section, the computational complexity of the designed algorithms is analyzed. In
addition, the reliability of the results obtained by the algorithm that calculates the set of
semantically labelled polylines is also evaluated.
1) Computational cost of the algorithms.
Since the classic SEF technique [43] is the foundation of the algorithms that extract the
semantic polygonal description, the computational cost of such algorithms is low. This is
empirically demonstrated after measuring the time required for obtaining the semantic
representation of the setting when different computers are used. In particular, such results
are analyzed in depth in Chapter 6.
On the other hand, analyzing the computational complexity of the designed algorithms
makes it possible to compare them with other methods. In particular, the computational
cost of several classic methods that obtain a line-based description of indoor settings is
presented in [49]. In that work, the number of points in an input scan and the number of
extracted line segments are respectively named N and S. The same nomenclature is used in
this Thesis; therefore, the proposed algorithms can easily be compared with other classic
methods. Furthermore, other parameters specific to the analyzed algorithms are also used.
In particular, M is the number of points that belong to the set of saturated areas and NC is
the number of corners.
Table 3-5 shows the complexity of the designed algorithms by considering the defined
parameters. Figure 3-24 illustrates an example where the computational complexity
associated with the set of algorithms which obtain the non-simplified representation of the
setting is calculated. Specifically, two saturated areas and three non-saturated areas are
identified. Additionally, six occlusion points and two corners are detected. Once this
description is calculated, the procedure that simplifies the set of polylines scans the NC
detected corners. Finally, each line segment is analyzed in order to find frontal and
lateral walls. As the algorithms are sequentially executed, the global computational
complexity can be calculated as N + 2(N - M) + NC + S, which is linear in the number of
scan points.
Algorithm                              Complexity
Saturated points extraction            N
Occlusion points extraction            N - M
OAC and IAC points extraction          N - M
Polyline simplification                NC
Frontal and lateral walls detection    S
Table 3-5. Computational complexity of the algorithms that allow the semantic polygonal
representation of the environment to be obtained.
Figure 3-24. Example of extraction of a semantic polygonal description of the environment.
The number of points that should be scanned during each stage is highlighted.
2) Reliability of the semantic polygonal representation of the environment.
The reliability of the obtained results has been measured by using a method based on the
RMSE calculated by taking into account the length of the perpendicular lines that join all
the raw points with their corresponding segments of a polyline.
As mentioned before, points sensed by the 2D laser device are expressed in polar form.
Each raw point P(i) is represented by the segment bounded by points P(j) and P(k) if
j ≤ i ≤ k, see Figure 3-25. The reliability of this association is estimated
using a measurement of distance error (ED) defined by the following equation:

ED(i) = d⊥(P(i), [P(j), P(k)])   (3-1)

where ED(i) is the length of the perpendicular line that joins P(i), whose polar coordinates
are (r(i), θ(i)), with the segment enclosed by the labelled points P(j) and P(k),
expressed in polar form by the pairs (r(j), θ(j)) and (r(k), θ(k)) respectively.

For each polyline, the global RMSE is calculated using the following equation:

RMSE = sqrt((1/Nr) · Σ(i=1..Nr) ED(i)²)   (3-2)

where S is the number of segments that form the polyline and Nr is the number of raw
points represented by that polyline (the sum runs over the Nr raw points associated with
the S segments).
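To make the reconstructed measures concrete, the following sketch (illustrative, not the thesis code) computes the perpendicular distance ED(i) after converting the points to Cartesian form, and the RMSE of one polyline; the clamping of the projection to the segment bounds is an assumption:

import math

def perpendicular_distance(p, a, b):
    # Distance ED(i) from raw point p to the segment [a, b], all in
    # Cartesian coordinates (x, y).
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                  # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len2
    t = max(0.0, min(1.0, t))            # clamp to the segment bounds
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def polyline_rmse(raw_points, segments):
    # segments[i] is the (a, b) segment that represents raw_points[i],
    # mirroring the association j <= i <= k described in the text.
    errors = [perpendicular_distance(p, a, b)
              for p, (a, b) in zip(raw_points, segments)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))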
Next, a representative set of results is presented, both using the Player/Stage-based
simulator and the real robots Pioneer and BENDER in unknown real environments. For
each experiment the following graphical results are shown:
• A robot view in a static 2.5D scenario if the experiment is carried out in a simulated environment.
• A real environment picture if the experiment is carried out in a real setting.
• The graphical output obtained by the software component that generates the semantic polygonal description of the environment.
• The graphical representation of the RMSE values obtained after matching raw points and polylines.
Figure 3-25. Correspondences between raw points and the polyline segment that represents
them.
Results in the simulated environment.
As described in Appendix I, a Player/Stage-based simulator of the robot BENDER has
been developed. The following experiments are carried out using this simulated framework.
The floor plan of a hospital area is used as the environment.
First experiment.
Figure 3-26 shows the simulated robot BENDER in one of the rooms of the hospital floor
plan. The environment is described by six semantically labelled polylines numbered from 0.
Figure 3-26-(a) describes the association between the segments of each polyline and the
obstacles found in the scenario. Figure 3-26-(b) shows a top view of the area where the
robot is located. Figure 3-26-(c) describes the set of polylines whose segments approximate
the environment. Each segment is enclosed by the representative points labelled as OC
(green colour), OAC (blue colour), IAC (red colour) and FW and LW (black colour). The
segments are red colored and the raw data are represented using the blue colour. Figure
3-26-(d) shows the RMSE calculated using equation (3-2) for each polyline.
The RMSE value of the polylines that describe saturated areas is considered null,
since the robot cannot know if there are objects located beyond the measurement range of
the laser beam. The set of polylines reliably represents the objects found in the
environment acquired by the inexpensive Hokuyo URG 2D laser device. The highest RMSE
value does not exceed 0.025 meters; therefore, every raw point is well represented by its
corresponding segment.
Second experiment.
Figure 3-27 (a) and Figure 3-27 (b) show that the robot is located in a closed area of the
hospital floor plan. The robot can “see” an IAC point on the right. On the left, in front of
the robot, a part of the environment is hidden by a wall. The setting is represented by
three polylines.
Figure 3-27 (c) illustrates how polylines reliably describe the sensed environment and even
small-sized corners are detected. This is confirmed by the RMSE bar chart that shows the
RMSE calculated for each polyline (see Figure 3-27 (d)).
[Figure 3-26 panels (a)-(d); panel (d) is an RMSE bar chart (metres) for polylines P0-P5.]
Figure 3-26. First experiment in a simulated environment.
[Figure 3-27 panels (a)-(d); panel (d) is an RMSE bar chart (metres) for polylines P0-P2.]
Figure 3-27. Second experiment in a simulated environment.
Third experiment.
This experiment analyzes how the algorithm works when the robot is located at one of the main
corridors of the hospital floor plan, see Figure 3-28 (a) and Figure 3-28 (b).
Given that other small-sized corridors and several rooms converge in the main corridor,
the robot detects a great number of occlusions and the number of generated polylines is
quite high in this case, see Figure 3-28 (c). Polylines P0, P6, P10 and P16 represent
saturated areas, so their RMSE values are null, as Figure 3-28 (d) shows.
The RMSE values calculated for polylines P13 and P14 are the highest error values
obtained in this experiment. The part of the environment described by polyline P14 could
generate two polylines, taking into account that a corner hides a small area of the setting.
However, this occlusion is not detected because the heuristically defined segmentation
threshold is greater than the distance between the two consecutive points that would define
that occlusion, see Algorithm 3-2. This means that the number of polylines obtained at a
given time depends on this threshold to a large extent. Therefore, it is necessary to select a
suitable value of this threshold to balance reliability and the number of representative
polylines.
[Figure 3-28 panels (a)-(d); panel (d) is an RMSE bar chart (metres) for polylines P0-P19.]
Figure 3-28. Third experiment in a simulated environment.
Results in a real environment.
This section shows the results obtained when the algorithm that calculates the
semantic polygonal representation of the environment is executed in a real environment
using a real robot. Figure 3-29 clearly demonstrates the significance of applying the
selection and simplification algorithms mentioned in section 3.2.4, especially when the real
setting is complex and noisy. The section of the laboratory where the experiment was
carried out is shown in Figure 3-29 (a). The raw data sensed by the Hokuyo URG 2D laser
device is shown in Figure 3-29 (b). Both the commercial Pioneer robot and the robotic
platform BENDER use the same 2D laser device; therefore, the model of the robot used
in these experiments is not actually relevant.
[Figure 3-29 panels (a)-(e); panel (e) is an RMSE bar chart (metres) comparing the non-simplified and simplified representations for polylines P0-P15.]
Figure 3-29. Experiment carried out in a real environment. (a) Area of the laboratory. (b)
Raw data acquired by the 2D laser device. (c) Non-simplified representation of the
environment. (d) Simplified representation of the environment. (e) RMSE analysis.
In these tests, the most significant question is the reliability of the designed
algorithms when the robot has to semantically represent a real environment at a given
instant in real time.
Comparing the results obtained when the simplification of the polylines is (and is not)
carried out is particularly important. Therefore, Figure 3-29 (c) shows the results obtained
when the simplification algorithm is not executed whereas Figure 3-29 (d) demonstrates
that the setting is better represented if the set of polylines is simplified. For example, the
door of the laboratory is represented by using a set of adjacent corners when the
simplification algorithm is not executed, but such description is not correct. This problem
is solved after simplifying the set of polylines.
Finally, Figure 3-29 (e) shows the comparative RMSE results obtained in both cases: the
simplified representation is red colored and the non-simplified representation is blue
colored. As observed, simplification does not have a great influence on the representation
accuracy. This means that the set of polylines properly approximates the environment even
if the polygonal description is simplified.
Note that polylines P11, P12 and P15 (which consist of a great number of points subject to
simplification) generate RMSE values slightly higher than those calculated from the set of
non-simplified polylines. However, the number of segments is reduced; balancing both
results, the simplified option is selected.
3.4.2 Validating the autonomous navigation system.
A set of representative results obtained after executing the set of algorithms that allow the
robot to navigate safely is presented in this section. First, the algorithm that calculates the
set of traversable segments is executed. Next, only the middle points that accomplish the
conditions described in section 3.3.1 are taken into account. This set of middle points is
clustered by taking into account several features found in the environment as detailed in
section 3.3.2. Finally, after calculating a straight line that fits each group of clustered
middle points, a fuzzy algorithm is executed to select the route that satisfies the
requirements defined by the user (as section 3.3.3 explains). A point of this line is selected
to be followed by the robot, using the pure pursuit method.
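For reference, a minimal sketch of the pure pursuit law follows, assuming the selected point is already expressed in the robot frame with the Y-axis pointing forward (the convention used elsewhere in this Thesis); the velocity value is illustrative:

def pure_pursuit(goal_x, goal_y, linear_velocity):
    # Returns (v, w): velocity commands that steer the robot along a
    # circular arc through the goal point. The lateral offset is the X
    # coordinate under the assumed Y-forward convention.
    lookahead2 = goal_x ** 2 + goal_y ** 2
    if lookahead2 == 0.0:
        return 0.0, 0.0                  # already at the goal point
    curvature = 2.0 * goal_x / lookahead2
    return linear_velocity, linear_velocity * curvature

v, w = pure_pursuit(goal_x=0.4, goal_y=1.5, linear_velocity=0.3)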
An emergency algorithm is triggered if the robot is being led towards a collision while it is
following this point. This algorithm uses the semantic tags of the polygonal description to
make decisions that avoid collisions. Another fuzzy algorithm selects a suitable velocity by
taking into account the environment representation at a given time and the position of the
robot in the setting.
1) Navigating through the simulated environment.
The hospital floor plan has also been selected to carry out the experiments that validate the
autonomous navigation system. In order to demonstrate the performance of all the
algorithms described in section 3.3, a sequence of representative figures has been recorded
while the robot was navigating through the environment. The path tracked by the robot in
the Stage simulator is shown together with the semantic polygonal description of the
setting and the selected route to follow.
In this experiment, the mobile robot is located at the main corridor of the hospital floor
plan, see Figure 3-30. Note that there are a large number of intersections and rooms that
could be explored. Using the fuzzy reactive navigation algorithm proposed in section 3.3.3
the robot selects the path that accomplishes the requirements defined by the user.
In short, the robot navigates through the corridor, see Figure 3-30 (a). At the end of the
corridor the robot turns to the right, as Figure 3-30 (b) shows. As the room is represented
by a set of polylines that do not generate traversable segments, the area is considered
closed. Therefore, the robot decides to go to the end of the room. If corners or walls are
too close to the robot, the emergency algorithm that allows the robot to avoid collisions is
triggered. The velocity is modified according to the circumstances by executing the fuzzy-based algorithm described in section 3.3.3.
Figure 3-30. The robot navigates through the main corridor of the hospital floor plan.
Figure 3-31. Results obtained after executing the algorithms proposed to navigate safely.
Once the closed room has been explored, the robot returns to the main corridor, see
Figure 3-30 (c). As shown in Figure 3-30 (d), Figure 3-30 (e) and Figure 3-30 (f), the robot
navigates through the corridor again, but in the opposite direction. Figure 3-31 shows the
set of middle points and their corresponding representative straight lines according to the
sequence shown in Figure 3-30.
2) Navigating through a real environment.
In this section, an experiment carried out in an unknown semi-structured dynamic indoor
scenario is described. Figure 3-32 shows a diagram of this experiment. Figure 3-33
illustrates the result of the experiment described in the diagram shown in Figure 3-32. The
robot starts to navigate close to the door of a room, see Figure 3-33 (i). When it goes out to
the corridor, it decides to turn to the right, according to the route whose merit factor is the
highest when the fuzzy-based algorithm, described in section 3.3.3, is executed. There are
two persons in the corridor. One of them goes right next to the robot, see Figure 3-33 (ii).
The robot detects the obstacle but it can go on navigating safely. On the other hand, the
second person stays in the middle of the corridor, located in front of the robot.
As the distance between the robot and this person becomes smaller than the 0.5 meter
threshold, the emergency algorithm is triggered and the robot turns around, see Figure 3-33
(iii). The safety velocity (0.07 m/s) is selected. Once the person has been avoided, the
robot goes through the corridor again, but in the opposite direction. Two persons are now
next to the door. When the robot is close to these persons, it decides to go into the room
again, see Figure 3-33 (iv).
Figure 3-32. Diagram of the experiment carried out in a real environment.
Figure 3-33. Results obtained after executing the proposed algorithms to navigate safely in
a real environment. Pictures (a) represent the set of the polylines together with the
calculated feasible routes. Pictures (b) show what the robot sees. Pictures (c) show a view
of the robot navigating through the setting.
3.5 Conclusion.
In this chapter, a collection of techniques that allow a service mobile robot to navigate in
unknown dynamic semi-structured indoor settings is presented. The environment is
represented by a description based on semantically labelled polylines. The labels used for
describing significant features found in typical indoor settings are saturated points (SP),
occlusion points (OC), inside angle corners (IAC), outside angle corners (OAC), frontal
walls (FW) and lateral walls (LW). All the points that define the edges of the polylines are
identified by one of these labels.
In addition, a list of feasible traversable segments is obtained after classifying the free space
by taking into account the semantically labelled polygonal representation. Then a set of
feasible local safe paths is calculated by clustering and geometrically approximating the set
of middle points obtained from the traversable segments. The algorithms designed to
achieve these tasks are computationally efficient and they are particularly appropriate when
a low cost robot works in real time.
Finally, the reactive navigation method has been tested both in a simulated framework and
using the real robots Pioneer and BENDER. An emergency algorithm has also been
implemented in order to avoid collisions, since the environment is dynamic. Furthermore,
the selection of the route according to several heuristically defined objectives is carried out
by using a fuzzy system. The velocity of the robot is also controlled using another fuzzy
system that properly generates the speed by taking into account potential collisions when a
route is selected.
The main advantage of the proposed semantic representation of the environment is that
the computational cost is very low. This is mandatory when a low-cost robot must work in
real time. The storage needs are minimized because the description of
the setting is very compact. In addition, using semantic tags that conceptually define the
meaning of representative points or features facilitates the design of higher level control
algorithms, in particular, those based on fuzzy techniques.
It is not our part to master all the tides of the world,
but to do what is in us for the succour of those years
wherein we are set, uprooting the evil in the fields that
we know, so that those who live after may have clean
earth to till.
J.R.R. Tolkien

Chapter 4
Hybrid map building, localization and exploration
This chapter describes a set of techniques that allow a robot to build a hybrid
(metric and topological) map. The trajectory tracked by the robot is computed at
the same time as the hybrid map is built. This trajectory is calculated by taking
into account the information acquired by proprioceptive (dead reckoning) and
exteroceptive sensors (a virtual sensor that calculates the robot pose change by using the
raw data acquired by a 2D laser scanner as input). Since some errors may arise due to the
intrinsic inaccuracy of the 2D laser device, the uncertainty in measurements is modelled by
means of Gaussian distributions. In particular, an error propagation algorithm is used to
calculate the uncertainty of the measurements provided by the pose virtual sensor
according to the known inaccuracy of the 2D laser scanner. Data fusion is carried out by
using a technique based on an Extended Kalman Filter (EKF).
As self-localization highly depends on the natural landmark detection and data association
procedures, the technique that extracts such natural landmarks from the semantic
polygonal representation of the environment has also been enhanced. In particular, natural
landmarks are segments whose ends are semantically labelled points of the polygonal
description of the setting, and a segment is considered to provide a suitable natural
landmark if it consists of two representative, properly labelled points. The semantic
representative points are obtained after executing the algorithms that generate the set of
semantically labelled polylines. Thus, modelling the inaccuracies that arise during the
execution of such algorithms in a conventional way is not a trivial issue, since they are
based on a hybrid (geometric and semantic) representation of the environment, and some
parameters, which specify the performance of the mentioned algorithms, are heuristically
defined. Therefore, it is not a simple task to model the uncertainty by using conventional
statistical distributions. Moreover, using such uncertainty to take decisions about which
segments should represent valid landmarks is also difficult. These drawbacks can be
overcome by using a fuzzy-based method, since it is possible to design a set of rules that
select such landmarks according to certain requirements heuristically defined by
the user. The hierarchical fuzzy system that accomplishes this task is fully described
in this chapter.
Autonomous exploration is also addressed by means of exploration algorithms that allow
the robot to be partially guided by a human user. To achieve these objectives the
environment representation described in Chapter 3 is used again as input of a hierarchical
fuzzy system that determines the route to follow according to the rules defined by the user.
Several experiments performed using both the simulation framework and the real robots
Pioneer and BENDER in real scenarios demonstrate the advantages of the proposed
methods.
4.1 Generation of a topological map using a semantic
polygonal description of the free space.
Researchers usually address map building, exploration and path planning separately; some
of these methods are summarized in section 2.4. Many simultaneous localization and map
building experiments are actually carried out by tele-operating the robot. In general,
building a metric map is the main purpose of these approaches, and relevant semantic
aspects related to the spatial distribution of the objects in the environment are not taken
into account. For instance, geometric features such as points, straight lines and curves are
not usually linked to specific semantic meanings and abstract concepts that provide a high
level awareness about the environment. The proposed solutions are usually focused on
achieving a high degree of geometric accuracy. However, in this Thesis, the robot is not
just required to be capable of building an accurate map from a geometric point of view. In
addition, the robot should acquire some degree of semantic awareness about the spatial
distribution of the scenario (and, consequently, about the free space). Using this
knowledge, the robot should navigate through the environment, determine its position
within a topological map built by itself and take intelligent decisions about its own
behaviour according to the needs of human users.
Generally speaking, topological map building methods address the problem by generating a
graph whose nodes are connected by links. These nodes usually store local metric submaps and information about the environment that can be used by the robot in order to
recognize the place in the future. In most cases, these techniques only deal with the map
generation issues, while the problems related to autonomous exploration, path planning or
obstacle avoidance are addressed separately. In this Thesis, a solution that simultaneously
deals with both the topological map building and the autonomous exploration of the
environment is proposed. Furthermore, interaction with human users is also a significant
issue that is also taken into account, since the human user can easily help the robot to
explore the environment using simple natural high-level commands. In order to achieve the
objectives related to autonomous map building and exploration, a set of algorithms have
been implemented. As mentioned before, these algorithms use the semantically labelled
polygonal description of the environment explained in depth in Chapter 3. From this
description it is possible to determine which areas should or should not be explored.
Moreover, using these semantic features facilitates the implementation of algorithms which
allow the robot to carry out some recognition tasks such as identifying known places and
remembering how to go from one place to another.
4.1.1 Semantic description of the free space.
The exploration and topological map building problems can be simultaneously addressed if
a suitable representation of the free space is available. In this work, this representation is
directly obtained from the environment description detailed in Chapter 3.
In order to semantically characterize the free area around the robot, a closed polygon is
calculated using the set of labelled polylines. This polygon is simple (no two non-adjacent
edges intersect), irregular and generally non-convex (at least one of the internal angles is
greater than 180 degrees, so there are external diagonal lines), and it can be divided into a
set of convex sub-polygons.
To obtain a semantic representation of the free space, the collection of convex sub-polygons
that define such space can be extended by adding features that identify
what kind of sub-polygons represent different parts of the free area from a conceptual
point of view. For instance, if a sub-polygon consists of segments from the same polyline,
this sub-polygon defines a closed area, and there are not any regions to explore from it. On
the other hand, if a sub-polygon includes an edge defined by the last OC point of
polyline k and the first OC point of polyline k+1, there is a traversable segment that
could lead the robot to a new unexplored part of the environment. Taking these
considerations into account, the method proposed in this work identifies different regions
of the free space from a cognitive point of view and associates semantic tags to the edges
of the convex sub-polygons. In fact, the edges of these sub-polygons are labelled using
abstract concepts related to exploration. Specifically, four kinds of edges are considered:
exploration, back, environment and traversable edges:
• An exploration edge (ExpEdge) is defined by two points P(i) and P(j) that fulfill the following condition: P(i) is the final occlusion point of polyline k and P(j) is the initial occlusion point of polyline k+1. These edges usually represent borders, since the robot does not know which objects are behind them. Therefore, the area beyond these edges could be explored.
• A back edge (BckEdge) is defined as a segment with an end point on the origin of the robot reference system and the other on either the first point of the first segment in the first polyline, or the last point of the last segment in the last polyline.
• An environment edge (EnvEdge) is defined as a segment included in any of the polylines that describe the environment. Those edges represent part of the objects found in the setting and their ends are labelled as IAC, OAC, OC, FW or LW as defined in Chapter 3.
• A traversable edge (TraEdge) is defined as a border shared by two adjacent convex sub-polygons.
Table 4-1 shows the requirements that should be fulfilled in order to assign a label to a
given edge. Figure 4-1 shows an example of environment description from which the
closed polygon is generated. Some examples of occlusion points (green), saturated points
(yellow) and frontal and lateral wall points (black) are shown.
1) Generation of convex sub-polygons.
The space enclosed inside the non-convex polygon describes the free space around the
robot at time t. This space is divided into a set of convex sub-polygons by using Algorithm
4-1.
The input P is formally defined as a list of labelled points in the Euclidean space:
P = {P(0), P(1), ..., P(N-1)}, where N is the number of vertices of
the polygon, and P(i) = (xi, yi, Lbi); xi, yi and Lbi are respectively the x coordinate, y coordinate and label of
point i obtained from the list of polylines. Any edge of the polygon is defined as the pair
{P(i), P(j)}, where j = i + 1 if the condition i < N - 1 is met and j = 0 otherwise.
To obtain the set of convex sub-polygons, it is necessary to detect which points of
the polygon are convex according to the robot position O = (xrobot, yrobot); this position is always
considered as a convex point for convenience. This facilitates the design of the proposed
algorithms.
Type and formal definition / condition for labelling:
Exploration edge: ExpEdge = {P(i), P(j)} / P(i) = (xi, yi, Lbi), P(j) = (xj, yj, Lbj) and Lbi = OC, Lbj = OC, P(i) ∈ Polyline k and P(j) ∈ Polyline k+1.
Back edge: BckEdge = {P(i), P(j)} / P(i) = (xi, yi, Lbi), P(j) = (xj, yj, Lbj) and P(i) = O or P(j) = O / O = (xrobot, yrobot).
Environment edge: EnvEdge = {P(i), P(j)} / P(i) = (xi, yi, Lbi), P(j) = (xj, yj, Lbj) and {P(i), P(j)} are the bounds of a segment inside the polyline k.
Traversable edge: TraEdge = {P(i), P(j)} / P(i) = (xi, yi, Lbi), P(j) = (xj, yj, Lbj) and none of the above conditions are met.
Table 4-1. Conditions for assigning labels to edges.
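A small sketch of the labelling conditions of Table 4-1 follows; the tuple layout of the points (x, y, label, polyline index) and the helper flag are assumptions made for illustration:

ORIGIN = "O"  # marker used for the robot position

def label_edge(p_i, p_j, is_polyline_segment):
    xi, yi, lb_i, k_i = p_i
    xj, yj, lb_j, k_j = p_j
    if lb_i == "OC" and lb_j == "OC" and k_j == k_i + 1:
        return "ExpEdge"   # occlusion points of consecutive polylines
    if lb_i == ORIGIN or lb_j == ORIGIN:
        return "BckEdge"   # one end on the robot reference origin
    if is_polyline_segment and k_i == k_j:
        return "EnvEdge"   # segment of a polyline (part of an obstacle)
    return "TraEdge"       # border shared by adjacent sub-polygons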
Algorithm 4-1. Generating convex sub-polygons.

Algorithm GeneratingConvexSubPolygons (P):
  N ← Number of elements inside P;
  for i ← 0 to N used[i] ← false; endfor;
  level ← 0;
  while (∃ k / used[k] = false)
    j ← 0;
    while (j < N)
      if (P[j] is convex) then
        Add P[j] to sub-polygon S;
        j++;
        enc ← false;
        while ((j < N) AND (not enc))
          if (P[j] is not convex) then
            if (not used[j]) then
              Add P[j] to sub-polygon S; used[j] ← true;
            endif;
            j++;
          else
            enc ← true;
          endif;
        endwhile;
        if ((j < N) AND (enc)) then Add P[j] to sub-polygon S; endif;
        if ((j ≥ N) AND (not enc)) then Add P[0] to sub-polygon S; endif;
      else
        j++;
      endif;
    endwhile;
    level++;
    Add S to LSubP; Erase used non-convex points; Re-label convex points;
    if (there are not any convex points left)
      for k ← 0 to N used[k] ← true; endfor;
    endif;
  endwhile;
  return (LSubP);
Figure 4-1. Closed polygon obtained from the semantic description of the setting.
On the other hand, the pair of triangles T1 = {P(i-1), P(i), O} and T2 = {P(i), P(i+1), O}
for each point P(i) are calculated, see Figure 4-2. A point is convex if the following
condition is satisfied:

α1 + α2 ≤ 180°   (4-1)

where α1 and α2 are the angles associated to vertex P(i) in triangles 1 and 2, respectively.
For instance, according to condition (4-1), the point in Figure 4-2 (a) is convex, while the
point in Figure 4-2 (b) is concave.
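The following sketch implements the convexity test exactly as reconstructed in condition (4-1), so the triangle construction and the 180-degree bound should be read as assumptions:

import math

def angle_at(vertex, p, q):
    # Angle at `vertex` between the rays vertex->p and vertex->q (radians).
    ax, ay = p[0] - vertex[0], p[1] - vertex[1]
    bx, by = q[0] - vertex[0], q[1] - vertex[1]
    return abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by))

def is_convex(prev_pt, pt, next_pt, robot):
    alpha1 = angle_at(pt, prev_pt, robot)   # angle at P(i) in triangle 1
    alpha2 = angle_at(pt, robot, next_pt)   # angle at P(i) in triangle 2
    return alpha1 + alpha2 <= math.pi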
In each iteration, Algorithm 4-1 searches for a set of convex points. Concave points located
between two adjacent convex points P(i) and P(j) are added to a new sub-polygon.
Furthermore, P(i) and P(j) are also included in this sub-polygon. In each iteration, a set of
convex sub-polygons is generated. Concave points are eliminated from the original set P,
and the remaining points are consecutively connected again. The process is repeated until
there are not any convex points except point O. When Algorithm 4-1 is executed, some
convex sub-polygons could have only two points; such sub-polygons are really segments.
This situation happens if there are not any concave points between two adjacent convex
points. In this case, those sub-polygons are not taken into account. Figure 4-3 shows an
example of the execution of Algorithm 4-1.
Figure 4-2. (a) Convex point with respect to the robot position. (b) Concave point with
respect to the robot position.
Figure 4-3. Example of Algorithm 4-1 execution. There are four convex sub-polygons in
level 0 and one convex sub-polygon in level 1.
Each convex sub-polygon has a level number according to the iteration of the algorithm in
which the sub-polygon was obtained, as each iteration is associated to a level l. A
relationship degree between sub-polygons can be established by taking into account such
levels. In fact, a pair of sub-polygons are relatives if there is a path that allows the robot to
go from one sub-polygon to the other intersecting only traversable edges. A sub-polygon
is another sub-polygon's parent if they share a traversable edge and their levels are l and
l+1 respectively. If both of them belong to the same level they are considered as
“brothers”. Moreover, each convex sub-polygon has a centroid. The robot can navigate
through the free space by following the paths defined by the set of feasible trajectories that
link the centroids, see Figure 4-4.
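An illustrative sketch of these relationships follows; the attribute names, the use of the vertex average as a simple stand-in for the centroid, and the representation of edges as frozensets are assumptions:

from dataclasses import dataclass

@dataclass
class SubPolygon:
    level: int
    vertices: list          # (x, y) tuples
    traversable_edges: set  # frozensets of two (x, y) vertex tuples

def centroid(sub):
    xs = [v[0] for v in sub.vertices]
    ys = [v[1] for v in sub.vertices]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def relationship(a, b):
    # Two sub-polygons are related only through a shared traversable edge.
    if not (a.traversable_edges & b.traversable_edges):
        return None
    if abs(a.level - b.level) == 1:
        return "parent/child"
    if a.level == b.level:
        return "brothers"
    return None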
Abstract semantic concepts about the free space distribution can be generated by taking
into account the set of labelled convex sub-polygons. For example, if the level of a
sub-polygon is zero and it has at least one exploration edge, the area of the free space
represented by that sub-polygon can be considered an interesting place to explore, that is,
an exploration area. If several sub-polygons accomplish these conditions, the sub-polygon
whose centroid is closer to the robot could be explored first. Similarly, if a sub-polygon
consists only of traversable edges, it represents a part of the free space that connects
different exploration areas. Other examples illustrating how to acquire high level
abstract concepts from this labelled representation of the free space could actually be
found. Furthermore, using these abstract concepts, a human user could easily decide about
the behaviour of the robot by providing additional information to the exploration
algorithms in real time.
Figure 4-4. Set of feasible exploration paths obtained from the relationship between
centroids.
2) Exploration algorithm using the semantic representation of the free space.
Exploring could be formally defined as searching with the purpose of discovery. Therefore,
any exploration technique should be based on seeking exploration areas in the free space.
In this subsection a method that allows the robot to safely and autonomously explore the
environment is detailed. External commands from a human user can also be taken into
account if required, so that he or she can partially guide the robot using natural interaction.
The description of the free space explained in subsection 4.1.1-1 provides a collection of
interconnected convex sub-polygons that represent different free areas. Note that the
sub-polygons that include exploration edges characterize regions that could be explored by
crossing such edges, so the centroid of one of these sub-polygons could be taken as a
representative goal point useful to explore unknown places. Many sub-polygons consisting
of at least one exploration edge can be calculated at time t, so it is necessary to implement
an algorithm for selecting the most appropriate exploration area according to some
conditions. For example, if there are two feasible exploration areas, the first one in front of
the robot and the second one located on the right, and it is not desirable that the robot
abruptly changes its heading, the sub-polygon that describes the free area in front should
be selected. However, if the place has already been visited or a human user orders the
robot to turn to the right, the sub-polygon that represents the second free area should be
chosen. In addition, if it is required that the robot navigates safely, the algorithm should
take into account both fixed and mobile obstacles in order to allow the robot to avoid
them. In this work, a method that generates some feasible exploration paths has been
designed. A specific path is selected by using a fuzzy system that calculates a merit factor
associated to each feasible path. This algorithm is similar to that described in section 3.3.3,
but it considers a higher level abstract description of the environment, since it takes into account
not only low level semantic features but also how these features are related to each other to
describe different free space areas.
In the experiments, the robot navigates through indoor settings, so it usually goes through
corridors. The robot could change its direction when it finds an intersection or a
room it should go into. This decision can be taken autonomously or guided by a
human using natural interaction. If the human user orders the exploration of a specific area, this
order prevails over any autonomous decision. However, if there are not any user
commands, the algorithm selects the path with the best merit factor calculated by the fuzzy
system. If several paths accomplish the condition proposed by the user, the path that leads
to the closest sub-polygon with exploration edges is selected. Figure 4-5 outlines the whole
process, which starts when the polygon generated as the result of linking all the adjacent
polylines is obtained, and finishes when the path is selected.
Figure 4-5. Algorithm that obtains the appropriate path by taking into account the output
of the fuzzy system and the decision of the human user.
In order to design the fuzzy system rule base, the normalized full length of the path and the
normalized value of the direction DirChange of any sub-polygon that contains at least one
exploration edge are considered. The direction DirChange is calculated as the angle formed
by the X-axis of the robot coordinate system and the line that joins the origin of the robot
reference system and the centroid. In order to normalize the full length of the path, the
minimum and maximum length of all the feasible routes are considered. The result is a
number in the interval [0, 1]. The direction DirChange is normalized taking into account the
interval [0, 180] degrees. The result is also a number in the interval [0, 1].
Figure 4-6 (d) shows the fuzzy system inputs and output. The distributions of the fuzzy
variables are detailed in Figure 4-6 (a), Figure 4-6 (b), and Figure 4-6 (c). The distribution
of the output values according to the inputs is detailed in Figure 4-6 (e). Finally, the rule
base is shown in Table 4-2. Note that the merit factor is very high if the path is very long
and DirChange is very close to 90 degrees. The merit factor is very low when the path is
very short and DirChange is close to 0 or 180 degrees. The remaining possibilities are
covered by the rest of the rules.
Figure 4-6. Fuzzy system for the selection of exploration paths. (a) Fuzzy set for the
variable that defines the length of the path. (b) Fuzzy set for the variable that characterizes
the angle defined by the line that joins the centroid and the origin and the X axis of the
robot reference system. (c) Fuzzy set for the output variable MFactor. (d) Inputs and
output of the fuzzy system. (e) Distribution of the values calculated by the fuzzy system
within the universe of discourse of the input variables.
                      PLength
DirChange     VERYLOW    LOW        MEDIUM     HIGH       VERYHIGH
VERYLOW       LOW        LOW        HIGH       VERYHIGH   VERYHIGH
LOW           VERYLOW    LOW        HIGH       HIGH       HIGH
MEDIUM        VERYLOW    VERYLOW    MEDIUM     MEDIUM     MEDIUM
HIGH          VERYLOW    VERYLOW    LOW        LOW        MEDIUM
VERYHIGH      VERYLOW    VERYLOW    LOW        LOW        LOW
Table 4-2. Rule base with inputs PLength and DirChange and MFactor as output.
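As an illustration of how such a rule base can be evaluated, the following hand-rolled Mamdani-style sketch uses uniformly spaced triangular membership functions and weighted-average defuzzification; the exact fuzzy sets of Figure 4-6 are not reproduced, so the shapes and centres are assumptions:

LEVELS = ["VERYLOW", "LOW", "MEDIUM", "HIGH", "VERYHIGH"]
CENTRES = dict(zip(LEVELS, [0.0, 0.25, 0.5, 0.75, 1.0]))

# Rows: DirChange level; columns: PLength level (Table 4-2).
RULES = {
    "VERYLOW":  ["LOW", "LOW", "HIGH", "VERYHIGH", "VERYHIGH"],
    "LOW":      ["VERYLOW", "LOW", "HIGH", "HIGH", "HIGH"],
    "MEDIUM":   ["VERYLOW", "VERYLOW", "MEDIUM", "MEDIUM", "MEDIUM"],
    "HIGH":     ["VERYLOW", "VERYLOW", "LOW", "LOW", "MEDIUM"],
    "VERYHIGH": ["VERYLOW", "VERYLOW", "LOW", "LOW", "LOW"],
}

def memberships(x):
    # Triangular membership of x in each of the five levels.
    return {lvl: max(0.0, 1.0 - abs(x - CENTRES[lvl]) / 0.25)
            for lvl in LEVELS}

def merit_factor(plength, dirchange):
    # plength, dirchange: normalized inputs in [0, 1].
    mu_p, mu_d = memberships(plength), memberships(dirchange)
    num = den = 0.0
    for d_level, row in RULES.items():
        for p_level, out_level in zip(LEVELS, row):
            w = min(mu_d[d_level], mu_p[p_level])   # rule firing strength
            num += w * CENTRES[out_level]
            den += w
    return num / den if den > 0.0 else 0.0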
Figure 4-7 shows an example of the behaviour of the algorithm that selects the appropriate
exploration path. If the commands issued by a human user are being considered, orders
such as “GO RIGHT”, “GO LEFT” and “GO STRAIGHT” will be taken into account. In
fact, in the experiments, the human user can use this kind of command or remotely select
the goal centroid by using an appropriate graphical user interface (GUI). This simplifies the
interaction between the human user and the robot.
Figure 4-7 illustrates an example where there are two feasible exploration areas on the left,
one exploration area in front of the robot and one exploration area on the right.
Figure 4-7. Set of feasible exploration paths. The green path has the worst merit factor
since its length is the shortest and the angle is 135º. The black path has the best merit
factor, since its length is the longest and the angle is the closest to 90º.
If the user orders “GO RIGHT” at time t, then the black path would be selected and the
position of the goal centroid relative to the global reference system would be stored as
target. A human user command is considered as totally executed when the robot
approximately reaches the goal centroid, for example at time t + Δt. If the user sends a
new command in the interval (t, t + Δt), such command will not have any effect. If there
is not any available command, the robot takes decisions autonomously according to the
output of the fuzzy system. If a different behaviour is required, the fuzzy system rule base
could be easily modified without changing the rest of the software component that
implements the exploration task. This is clearly an advantage of the approach.
In the experiments, unless the human user sends a specific command, the robot tries to
find exploration areas located at a long distance while changing its direction smoothly. This
leads the robot to explore corridors and big rooms first. Additional constraints are
introduced to reduce the number of feasible paths while the topological map is built. In
particular, the nodes in the map are used to determine which places have been visited and
to accept or reject those paths if necessary. On the other hand, the robot is in a dynamic
environment, where mobile obstacles should be avoided if required. The emergency
algorithm proposed in section 3.3.3 is also used to ensure that the robot navigates safely,
either autonomously or partially guided by a human user.
As fully explained in the following section, when the map building algorithm detects a
place that can be considered as a candidate node of the map, the exploration algorithm also
uses the position of such node as a possible goal point.
4.1.2 Topological map building.
In this section, an intelligent technique that uses the representation of the free space
described in section 4.1.1 is presented. The method builds a topological map of the
environment by analyzing the set of sub-polygons and taking into account features such as
level numbers, edge labels and relationship degree, among others, to determine where map
nodes should be located. The topological map is described by a graph whose nodes store
local semantic polygonal descriptions of the environment according to the representation
proposed in Chapter 3.
A node is defined as a set of data that describes a place visited by the robot at time t.
Each node consists of a point in the Euclidean space whose Cartesian coordinates in
the global coordinate system are (xn, yn). The global coordinate system is defined as
coincident with the robot reference system at the initial time. Information about neighbouring
nodes and the trajectories to reach them is saved. The set of semantically labelled polylines
that represents the sensed environment (in front of the robot and behind it) is also stored.
The proposed method allows the robot to determine which places should be remembered.
When the robot approaches these places, a “candidate node”, defined as a point
(xc, yc) in the global reference system, is generated. If the robot is close enough to a
candidate node while exploring and a set of requirements are met, the node is transformed
into a “real node”. Then, the topological map is updated, see Figure 4-8.
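An illustrative sketch of the data stored by real and candidate nodes follows; the field names are assumptions, not the thesis implementation:

from dataclasses import dataclass, field

@dataclass
class MapNode:
    x: float                    # position in the global reference system
    y: float
    polylines: list             # semantic polylines sensed when the node was created
    neighbours: dict = field(default_factory=dict)  # node id -> stored trajectory

@dataclass
class CandidateNode:
    x: float
    y: float
    created_at: float           # time stamp, used when merging close candidates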
The trajectories that allow the robot to go from one node to another should be accurately
calculated; therefore, trajectory estimation is a key issue in the proposed algorithm. In fact,
calculating the path that joins two neighbouring nodes could be carried out using simple
odometry. However, odometry based on data acquired by proprioceptive sensors is sensitive
to accumulative errors. The semantic polygonal representation of the environment is taken
into account to solve this problem.
Figure 4-8. Sequence of stages that update the topological map by taking into
account the free space semantic description.
Pose change estimation is carried out by using both measurements from proprioceptive
sensors (encoders) and the environment descriptions obtained at consecutive times
t-1 and t. Natural landmarks represented as segments are extracted from the setting
representation and used for data association. A hierarchical fuzzy system improves
the data association method by selecting only natural landmarks that meet certain
requirements. This issue is fully addressed in section 4.2.1-2.
In addition, the uncertainty in measurements acquired by the 2D laser scanner is also
considered and an algorithm based on an Extended Kalman Filter (EKF) is used for data
fusion, see section 4.3. Moreover, when a node is revisited, the stored environment
description is used to correct the global trajectory followed by the robot (if necessary) by
using the data alignment and association proposed in section 4.2.1-1.
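For reference, a minimal EKF predict/update sketch in the spirit of the fusion scheme described here is shown below; the direct pose observation model and the noise matrices are placeholder assumptions, not the tuned filter of section 4.3:

import numpy as np

def ekf_step(x, P, u, z, Q, R):
    # One cycle on the pose state x = [x, y, theta].
    # u: odometry increment (dx, dy, dtheta) in the robot frame.
    # z: absolute pose measured by the laser-based virtual sensor.
    c, s = np.cos(x[2]), np.sin(x[2])
    # Predict: compose the current pose with the odometry increment.
    x_pred = x + np.array([c * u[0] - s * u[1],
                           s * u[0] + c * u[1],
                           u[2]])
    F = np.array([[1, 0, -s * u[0] - c * u[1]],
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # Update: the virtual sensor observes the pose directly (H = I);
    # angle wrapping of the innovation is omitted for brevity.
    H = np.eye(3)
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred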
1) Generation of candidate nodes.
Indoor settings usually consist of features such as walls, columns and furniture, among
others, often distributed in parallel or at right angles. In general, the main structure of these
scenarios is a set of corridors that lead to furnished rooms or to other corridors (see a
simple example in Figure 4-9). When a human being walks through a domestic setting, a
hospital or an office building, he or she usually pays attention to places that are subject to
exploration. For example, if someone follows a corridor and a door is opened on the right, he
or she could determine that there is a place that must be remembered because there is an
intersection, and the space behind the door leads to another part of the environment (see
Figure 4-9 (a)). The human being could easily go back to a place in the future if the place
and its relation with the rest of the areas are memorized.
In general, the trajectory followed by people when they walk through a corridor is parallel
to its main walls. When people arrive at an intersection they can decide to go into a room
or to follow a new perpendicular corridor. In this case, they usually turn
approximately 90 degrees. If a room is closed, the place should be memorized before
leaving it, and the trajectory would be resumed.
In short, if the robot is required to build a topological map as a human being would do, it
should use the knowledge about geometric relations such as perpendicularity, distance, or
heading between the main components of a typical indoor environment. However, unlike
human beings, the robots used in the experiments only have a 2D laser device to sense the
environment, so their awareness about the whole setting is clearly limited. As the example
in Figure 4-9 shows, the robot just “sees” a part of the global setting at time t, whose
semantic polygonal representation is shown in Figure 4-9 (b).
Figure 4-9. Example of domestic setting. (a) The robot is located at the main corridor.
Several intersections define significant places that should be remembered. (b) Semantic
polygonal representation of the environment at time t.
In this instance the environment consists of several rooms and secondary corridors on the
right and on the left of the main one. Intersections are then located at areas where rooms
and secondary corridors join the main corridor. Knowing if a place should be considered
significant is one of the main issues that must be addressed when the robot is required to
build a topological map. In this work, the robot can determine which places are subject to
be memorized as it is navigating by taking into account the semantic description of the free
space obtained from the semantic polygonal representation of the environment. Feasible
significant places are represented by “candidate nodes” selected by using heuristic rules
inspired by the human behaviour mentioned above.
The proposed method allows the robot to detect significant exploration areas located at
both sides by taking into account the semantic description of the free space. The collection
of convex sub-polygons that represent free regions of the environment at time t is
analyzed and used to create “candidate nodes” that will be transformed into “real nodes”
according to some requirements. First, zero level sub-polygons are considered, since they
can directly define exploration areas. Next, higher level sub-polygons are taken into
account, because they can lead the robot to exploration areas represented by their zero
level relatives.
a) Candidate nodes obtained from zero level exploration sub-polygons.
In order to calculate a collection of candidate nodes at time t, the zero level sub-polygons
that meet the following conditions are considered, see Figure 4-10:
Figure 4-10. Example of environment whose free space is represented by a set of
semantically labelled convex sub-polygons. Sub-polygons 2, 4 and 5 are exploration
sub-polygons since they contain at least one exploration edge and their level is L0. An interval
[αmin, αmax] whose values are respectively [80, 110] degrees is used to select traversable edges
according to the notion of perpendicularity. Only sub-polygons 2 and 4 contain traversable
edges that form an angle with the X-axis of the laser reference system within the interval.
• At least, the sub-polygon contains one exploration edge.
• None of its edges is labelled as BckEdge.
• The sub-polygon has a traversable edge whose direction, defined by the angle α between the edge and the X-axis of the robot reference system, lies inside the interval [αmin, αmax], where αmin and αmax are heuristically defined.
Zero level sub-polygons that accomplish the first condition are named exploration
sub-polygons. Only exploration sub-polygons that meet the rest of the requirements are taken
into account to estimate the position of candidate nodes. The rest of the exploration areas,
which are not located to the right or to the left of the robot, are filtered out (remember that
perpendicularity is a common geometric characteristic found in indoor environments). This
means that only exploration areas that generate intersections (according to the notion of
perpendicularity) will be taken into account.
Figure 4-11 shows an example that clarifies the candidate node selection algorithm. As Figure
4-11 (a) shows, the robot senses a great part of the main corridor. Rooms located on the
left are partially perceived, and the secondary corridor on the right is also partially
represented. The farthest intersection is not sensed at time t, so a saturated polyline
describes this area. Figure 4-11 (b) shows how the closed polygon can be divided into a set
of convex sub-polygons. Only four zero level sub-polygons have exploration edges.
Figure 4-11. Example of domestic setting. (a) Partitions of the closed polygon obtained
from the set of semantically labelled polylines. (b) Set of convex sub-polygons with labelled
edges. Exploration edges are blue colored.
In order to select which sub-polygons should generate candidate nodes, the values αmin
and αmax are defined. For example, if the values 80 and 110 degrees are respectively used, only the
sub-polygons located on the left will be taken into account; this setting for the interval of
acceptable angles is intended to select exploration paths perpendicular to the current one;
however, in this case it would seem intuitively unacceptable not to consider an exploration
path leading to the right. If the interval [αmin, αmax] is extended, the sub-polygon located
on the right would also be considered, but this would lead as well
to the selection of exploration paths too far from perpendicularity. This situation should be
avoided because finding intersections is the main purpose of this method, and the notion
of intersection is directly related to the notion of perpendicularity.
A more sensible improvement of the path selection method can be achieved if
sub-polygons of levels other than zero are also considered. Note that the exploration area located
on the right is described by a zero level sub-polygon whose traversable edge links the last
occlusion point of the first polyline and the first occlusion point of the third polyline; such
segment is also a traversable edge of a level 1 sub-polygon. This traversable segment can
only be reached through the traversable segment whose ends are the last occlusion point
(OC) of the first polyline and the first outside angle corner (OAC) of the third polyline.
Moreover, this segment forms an angle close to 90 degrees relative to the robot reference
system, so it can be used to detect the first and nearest intersection. A candidate node
would be defined even with narrow [αmin, αmax] intervals centered on 90 degrees,
thereby achieving a more satisfactory result. This approach is described in detail
in subsection 4.1.2-1-b.
b) Candidate nodes obtained from non-zero level sub-polygons.
As mentioned before, once zero level sub-polygons have been analyzed, their ancestors at
higher levels should also be considered. The direction of the diagonal lines between vertices
(except the position where the robot is located), as well as of the edges labelled as TraEdge of
such sub-polygons, is checked against the limits defined by the interval [αmin, αmax]. If there
are several diagonal lines or edges whose angle is within the interval, the line whose
angle is closest to 90 degrees is selected. Figure 4-12 shows an example where a level one
sub-polygon located to the right can generate a candidate node. Note that in Figure 4-11
the robot detects an exploration area behind an object partially hidden by an occlusion. The
free space around this object is enclosed by the level one sub-polygon. The robot should
navigate through this part of the environment before approaching the exploration area.
c) Selecting candidate nodes and calculating their representative positions.
As a human being would do, if the robot goes through a corridor or a room while
executing the exploration algorithm proposed in section 4.1.1-2, the Y-axis of the robot
reference system will usually tend to be parallel to the walls that delimit such place. When
the Y-axis of the robot is parallel enough, perpendicular corridors and rooms are detected
as feasible exploration areas, see Figure 4-13 (a) and Figure 4-13 (b).
The places where these perpendicular regions converge are intersections. The robot can
determine the position of these intersections before reaching them by generating candidate
nodes.
Figure 4-12. First level sub-polygon with a diagonal inside the interval [αmin, αmax].
Figure 4-13. (a) Candidate nodes located at the centres of the intersections. (b) Candidate
nodes located at the centres of the intersections when the heading and the robot position
change.
Moreover, the environment sensed by the robot changes while the robot is advancing;
therefore, the list of candidate nodes calculated at time t should be updated at time t+1,
and so on. Consequently, it is necessary to maintain a set of candidate nodes
over time, each of them defined by their position relative to the global reference system.
This position is calculated by taking into account the robot pose in the global reference
system. This pose is updated by using the EKF-based procedure described in 4.3.
At time t, a set of sub-polygons that can generate candidate nodes is obtained. In order to
filter out non-valid candidate nodes and to calculate the position of the valid ones, only
specific edges or diagonal lines of the sub-polygons that generate them are considered,
according to the following stages:
• For each sub-polygon that can generate a candidate node, a segment (edge or diagonal line) is selected as explained in the preceding section. The length of this segment must be lower than a heuristically defined threshold. This limits the number of candidate nodes that will be taken into account; very distant features will not be considered. In the experiments, this threshold is set to two meters.
• For each selected segment, its middle point M and the straight line r that is parallel to the segment and crosses the origin of the robot reference system are calculated. The segment perpendicular to r that includes M intersects r in a point Q, see Figure 4-14. This point will define the candidate node position in the global reference system if several requirements are met:
Figure 4-14. (a) Example of a non-valid candidate node when none of the first three
requirements are accomplished. (b) Example of a non-valid candidate node when the
fourth condition is not met.
- 1. The point must lie within one of the convex sub-polygons. Otherwise, the candidate node is considered non-valid.

- 2. The length of the perpendicular segment should not exceed a fixed boundary (two meters in the experiments). This condition limits the number of candidate nodes: very distant exploration areas located at both sides of the robot are not considered, since closer areas should be explored first.

- 3. The perpendicular segment must not cross any segment included in the set of polylines that describes the environment. Figure 4-14 (a) shows an example in which this condition and the two conditions mentioned above are not met.

- 4. The half-line that starts at the candidate point and contains the middle point of the segment must not intersect any segment included in the set of polylines that represents the setting; otherwise, if it intersects one of them, the distance between the candidate point and the intersection point must not exceed a fixed threshold, also set heuristically in the experiments. In this way, small spaces are not considered as exploration places and the map is simplified, see Figure 4-14 (b).

- The coordinates of the point are transformed from the robot reference system to the global reference system.

- Finally, if an old candidate node calculated at time $t$ and stored in the candidate set is very close to the position of a new candidate node at time $t+\Delta t$, the old candidate node is eliminated from the set, and the position of the new one is updated to the average position of both candidates.
Figure 4-15 shows an example of candidate nodes obtained after analyzing the set of edges
and diagonal lines and taking into account the above mentioned conditions.
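As an illustration of the filtering stages above, the following Python sketch implements the segment-length, perpendicular-distance and crossing checks. It is a minimal approximation under assumed names and defaults (both limits set to the two-metre values quoted above), and it omits the sub-polygon containment test of condition 1 and the half-line test of condition 4 for brevity; it is not the thesis implementation.

```python
import numpy as np

MAX_SEG_LEN = 2.0    # threshold on the traversable segment length (thesis: 2 m)
MAX_PERP_LEN = 2.0   # threshold on the perpendicular segment length (thesis: 2 m)

def _cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def _orient(u, v, w):
    return np.sign(_cross(v - u, w - u))

def _intersects(p, q, a, b):
    """Proper intersection test between segments pq and ab."""
    return (_orient(p, q, a) != _orient(p, q, b)
            and _orient(a, b, p) != _orient(a, b, q))

def candidate_node(seg, env_segments):
    """Candidate-node position generated by a traversable edge or diagonal
    `seg` (two end points in the robot reference system), or None if one
    of the implemented validity conditions fails.  `env_segments` is the
    list of polyline edges that describe the environment."""
    p0, p1 = (np.asarray(p, dtype=float) for p in seg)
    if np.linalg.norm(p1 - p0) > MAX_SEG_LEN:
        return None                    # segment too long: distant feature
    mid = 0.5 * (p0 + p1)
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    # Foot of the perpendicular from the midpoint onto the parallel line
    # that crosses the origin of the robot reference system.
    q = np.dot(mid, d) * d
    if np.linalg.norm(mid - q) > MAX_PERP_LEN:
        return None                    # condition 2: area too far sideways
    for a, b in env_segments:
        if _intersects(mid, q, np.asarray(a, float), np.asarray(b, float)):
            return None                # condition 3: perpendicular blocked
    return q
```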
2) Updating the topological map.
A candidate node can be transformed into a real node when the robot gets closer than an
heuristically set threshold to it (0.5 meters for the experiments). The position of the real
node will be the position of the robot in the global reference system when the robot
reaches the circle whose centre is the position of the candidate node and the radius is .
Figure 4-15. Candidate nodes calculated according to TraEdge edges and diagonal
segments.
In order to create the new real node, the following additional condition should be met: no other existing real node can be closer than this distance. Furthermore, the real node stores the set of semantically labelled polylines that describes the environment in front of and behind the robot at time $t$, see Figure 4-16. In the experiments, the robot BENDER is equipped with a servo that turns the 2D laser device (see Chapter 6), which facilitates the task of acquiring the environmental information behind the robot. This is particularly useful in loop closure, because the decision about whether a real node is really being revisited should not depend on the robot's point of view. Both the estimation of the global pose of the robot and the information about the environment stored in a particular real node are taken into account to close loops.
The global trajectory followed by the robot while building the map can be corrected when a node is revisited, by comparing the set of polylines sensed at time $t$ with the set of polylines stored in the revisited node. In order to do this, both sets of polylines are properly aligned, and the technique described in subsection 4.2.1-2 (based on geometric transformations) makes it possible to obtain how much the robot is misplaced with respect to the real node.
Figure 4-16. The robot is inside the circle defined by the first candidate node. When the real
node is created the area around the robot is memorized. The memorized set of polylines is
used when the node is revisited.
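A minimal sketch of this candidate-to-real-node promotion could look as follows; the function and parameter names are illustrative assumptions, with the 0.5-metre radius taken from the text.

```python
import numpy as np

NODE_RADIUS = 0.5   # candidate circle radius used in the experiments

def update_topology(robot_xy, candidates, real_nodes, min_node_sep=0.5):
    """Promote candidate nodes to real nodes: a candidate becomes a real
    node (placed at the current robot position) when the robot enters
    its circle and no existing real node is closer than `min_node_sep`
    (an assumed name for the separation threshold)."""
    robot_xy = np.asarray(robot_xy, dtype=float)
    remaining = []
    for c in candidates:
        if np.linalg.norm(robot_xy - np.asarray(c, float)) <= NODE_RADIUS:
            if all(np.linalg.norm(robot_xy - n) > min_node_sep
                   for n in real_nodes):
                real_nodes.append(robot_xy.copy())
        else:
            remaining.append(c)
    return remaining, real_nodes
```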
The first node of the map, located at position (0, 0), is created at the beginning of the map building process. When any other node is created and added to the map, a neighbourhood relationship with the previously visited one is established. The trajectory that joins both nodes is then saved. If the robot approaches an area represented by an existing real node, it is considered that the node is being revisited. If there is no neighbourhood relationship between the previous node and the revisited one, this relationship is established and the trajectory that joins both nodes is stored. Figure 4-17 (a) illustrates an example of a topological map obtained from the simulated environment shown in Figure 4-17 (b).
The robot is located in a corridor where several rooms converge. First, the initial node is created at position (0, 0). Three intersections are detected, and each candidate node represents an intersection. The first candidate node is generated from a first level sub-polygon. As Figure 4-17 (a) illustrates, the selected diagonal from which the position of the candidate node is calculated is represented by a red line. The second candidate node is generated from a traversable edge of a first level sub-polygon (specifically, sub-polygon 19), see the blue line in Figure 4-17 (a). Finally, the third candidate node is obtained from the traversable edge of a zero level sub-polygon. This edge is represented by a green line in Figure 4-17 (a).
Figure 4-17. Example of topological map building. (a) Description of the free space; sub-polygons and candidate nodes are displayed. (b) Simulated environment from which the topological map is being built.
As Figure 4-18 (a) shows, the robot starts at the position (0, 0), where the first node is created. Three intersections are detected in the simulated environment (see Figure 4-18 (b)); therefore, three candidate nodes are generated. Sub-polygons 5 and 9 allow the robot to identify the same candidate node from different edges, because both positions are very close (in the experiments, if two edges define two feasible nodes separated by less than 0.5 meters, they are fused to create a unique node). The final position of the candidate node is the mean value of the positions obtained from both sub-polygons. The dotted line represents the trajectory followed by the robot.
When the robot has advanced some meters, each candidate node has been transformed into a real node. Specifically, Figure 4-19 (a) shows an example of a topological map consisting of three real nodes and several candidate nodes located in front of the robot. For each real node, the environment around the robot is memorized by storing the polygonal representation of the setting located in front of and behind it. The trajectory that joins two neighbours is represented by a red line. The simulated environment through which the robot navigates is depicted in Figure 4-19 (b).
Figure 4-18. Example of topological map building. (a) Description of the free space; sub-polygons and candidate nodes are displayed. The first node is located at (0,0). Sub-polygons 5 and 9 generate the same candidate node. (b) Simulated environment from which the topological map is being built. The candidate nodes are located at the intersections.
Figure 4-19. Example of topological map building. (a) Description of the free space using
sub-polygons. There are three real nodes (red) and three candidate nodes (dark gray). (b)
Simulated environment from which the topological map is being built.
Note that the geometric relation between nodes is a key issue when a topological map is being built. Therefore, the trajectories that join two nodes should be accurately estimated. In this thesis, odometry-based techniques are used for this purpose due to their simplicity and low computational cost. In addition, the information from proprioceptive sensors is complemented by the semantic representation of the environment to improve the estimation of the robot pose. The following section details the proposed method that allows the robot to localize itself within the topological map. In particular, pose estimation involves accomplishing several tasks: natural landmark extraction, data association and EKF-based data fusion.
4.2 Pose change estimation using the semantic
polygonal representation of the environment.
Position, heading and velocity estimation can be carried out using well-known procedures. In particular, classic odometry-based methods are simple and computationally inexpensive. They usually use the information obtained from proprioceptive sensors such as encoders, compasses or gyroscopes, among others. The pose of the robot is calculated by integrating velocity and heading measurements over time, relative to a starting position. In this case, the accuracy of the proprioceptive sensors is critical. However, even when very accurate devices are used, proprioceptive sensor-based odometry has several long-term drawbacks. Both systematic errors (bad wheel alignment, different wheel diameters or the discrete and limited resolution of the encoders, among others) and non-systematic errors (skids and slides) will affect individual estimates. As the process integrates new relative estimations with the past ones, the global error is cumulative. On the other hand, if the robot is not equipped with proprioceptive sensors, this kind of odometry cannot be applied unless a theoretical model of the vehicle is used. This solution usually leads to higher errors, since there is no feedback from the robot. In order to solve these problems, exteroceptive sensors should also be used.
In the experiments, two differential drive robots (Pioneer and BENDER) that use encoders to estimate the velocity of each wheel have been used. The change in heading is obtained as a function whose inputs are the angular velocities of the wheels. Finally, the change in the position of the robot is calculated using a model of the vehicle whose inputs are the global heading and the linear velocity. As odometry based only on the information from proprioceptive sensors usually leads to unacceptable cumulative errors, the robot pose change is also estimated from the measurements obtained by a virtual sensor. This virtual sensor obtains the robot pose change through data association between the sets of polylines obtained at consecutive instants. Natural landmarks used in the proposed data association technique are represented by segments. Once these segments are properly calculated, they can be used to estimate the robot pose change. Finally, a data fusion procedure can be used to combine the measurements obtained from odometry and from the designed virtual sensor.
4.2.1 Implementation of a pose change virtual sensor based on
the semantic polygonal description of the environment.
This section describes a technique for robot pose change estimation that uses the setting representation described in Chapter 3. The main objective is to design a pose virtual sensor. A procedure for natural landmark extraction is designed to meet this goal. Natural landmarks are represented by segments whose end points are considered stable features. Next, the segments extracted at consecutive instants $t$ and $t+\Delta t$ and properly associated are used to calculate a set of robot pose change estimates at time $t+\Delta t$.
1) Segment association.
Consider that the robot senses a part of the setting at time $t$. This area is described by a set of semantically labelled polylines, see Figure 4-20 (a). At time $t+\Delta t$, the robot obtains another semantic polygonal representation, see Figure 4-20 (b). Using both descriptions, an algorithm that associates the edges of both sets of polylines that describe the same objects can be implemented. For a small $\Delta t$, the data association task is relatively straightforward.
Figure 4-20. Changes in the set of polylines during the interval $[t, t+\Delta t]$.
But, even in this case, and in addition to their change in relative position and orientation, the sets of polylines could slightly differ in the number of vertices, and some point labels could be different. For instance, if the robot detected an occlusion point at time $t$, it might perceive an outside angle corner at time $t+\Delta t$ instead, since the point of view is different and the perception of the object represented by the polyline is better (see the points included in a circle in Figure 4-21). This situation demonstrates that occlusion points are not stable, because they represent the transition between different objects in the scenario. On the other hand, points labelled as IAC and OAC define features of the same objects that are invariant to translation, rotation and point of view. Therefore, only these points should be used to define segments that could be associated between consecutive instants. Note that the length of the segments bounded by IAC and OAC points is constant through time. In short, a polyline is generally defined as a sequence of segments whose end points are IAC, OAC, LW and FW points. Only the first and last segments in the polyline contain OC points. The segments defined by IAC and OAC points maintain not only their length but also the relation with their adjacent neighbours through time, so this kind of segment can be considered a natural landmark, which is always located at the same place if a global reference system is used.
Figure 4-21. Feasible association of segments between consecutive instants $t$ and $t+\Delta t$.
In order to find matches between the polygonal representations obtained at consecutive instants, the set of labelled polylines obtained at time $t+\Delta t$ should first be aligned with the list of polylines obtained at time $t$. Only segments of the polylines bounded by stable points (IAC and OAC) are considered to accomplish this task. Segments bounded by stable points included in the set of polylines acquired at time $t$ are stored in a first list; segments from the set of polylines obtained at time $t+\Delta t$ (also bounded by stable points) are stored in a second list.

One segment is taken from the first list and one segment is taken from the second one. All the segments belonging to the first list are recalculated using the segment taken from that list as a reference system, and all the segments included in the second list are recalculated using the segment taken from the second list as a reference system. Both collections of segments are then compared using a Euclidean distance function. Specifically, two segments are associated if the following condition is met:

$(d_0 \leq \varepsilon_1) \wedge (d_1 \leq \varepsilon_1) \wedge (\Delta l \leq \varepsilon_2)$ (4-2)

where $d_0$ and $d_1$ are the Euclidean distances between the corresponding end points of the segments, $\Delta l$ is the absolute difference between the lengths of the segments, and $\varepsilon_1$ and $\varepsilon_2$ are two heuristically defined thresholds. This procedure is repeated for all the possible alignments.
The accuracy of each feasible alignment is represented by a factor $F$. This factor (measured as a percentage) is proportional to the number of segments that meet the conditions described in (4-2):

$F = 100 \cdot \dfrac{n_{assoc}}{n_{total}}$ (4-3)

where $n_{assoc}$ is the number of associated segments and $n_{total}$ is the total number of candidate segments. Only the alignments that exceed a minimum percentage given by the designer are taken into account. The best alignment is the one with the maximum $F$ value. If no alignment exceeds the minimum percentage, the robot cannot use the information of the environment in the interval $[t, t+\Delta t]$, and the pose estimation depends only on classic odometry using information from proprioceptive sensors. If there is a valid alignment, a collection of natural landmarks (represented by segments) is obtained by relating all the OAC and IAC points found in the set of polylines to each other. For example, Figure 4-22 (a) shows that it is possible to combine the OAC and IAC vertices to generate six segments that will be used as natural landmarks. As Figure 4-22 (b) illustrates, only two IAC points and two OAC points are considered because there are only three associated polyline segments after alignment. The new OAC point obtained at time $t+\Delta t$ is not taken into account in the data association process because it was an occlusion point at time $t$.
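A compact sketch of the association test (4-2) and the alignment score (4-3) is shown below; the threshold defaults are illustrative assumptions, and both segments are presumed to have already been recalculated in the same candidate reference system.

```python
import numpy as np

def associated(seg_a, seg_b, eps_pt=0.3, eps_len=0.2):
    """Equation (4-2): two segments match when both end-point distances
    and the length difference stay below heuristic thresholds."""
    a0, a1 = (np.asarray(p, dtype=float) for p in seg_a)
    b0, b1 = (np.asarray(p, dtype=float) for p in seg_b)
    d0 = np.linalg.norm(a0 - b0)
    d1 = np.linalg.norm(a1 - b1)
    dlen = abs(np.linalg.norm(a1 - a0) - np.linalg.norm(b1 - b0))
    return d0 <= eps_pt and d1 <= eps_pt and dlen <= eps_len

def alignment_score(segs_t, segs_t1):
    """Equation (4-3): percentage of segments from time t that can be
    associated with some segment from time t + dt."""
    matched = sum(any(associated(s, r) for r in segs_t1) for s in segs_t)
    return 100.0 * matched / max(len(segs_t), 1)
```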
Figure 4-22. Stable segments generated after relating OAC and IAC points.
The matching segments between instants $t$ and $t+\Delta t$ can be used as an absolute reference to estimate how the pose of the robot changes. The inaccuracy of this estimate is lower than the errors obtained when only the information acquired from proprioceptive sensors is used. Obviously, the more accurate the representation of the environment is, the more correct the estimation of the change in the pose of the robot will be.
2) Pose change estimation using a set of matching segments.
It is important to mention that, even when the exteroceptive sensor is quite reliable, some errors may arise due to the intrinsic imprecision of the perception device and to the inaccuracies introduced by the algorithms that generate the semantic polygonal representation of the setting. The intrinsic inaccuracy of the perception device can be addressed by using statistical techniques that define the uncertainty of the measurements according to probability distributions; in fact, section 4.3 addresses this problem in depth. However, it is difficult to model in a conventional way the inaccuracies that result from the execution of the algorithms that generate the semantic polygonal description of the environment, since such algorithms work with a hybrid (geometric and semantic) representation of the setting, and some parameters that define the performance of the mentioned algorithms are heuristically defined.
Due to these inaccuracies, some segments describe the environment more precisely than others. This issue should be taken into account when such segments are used to estimate the robot pose change. Only those segments (natural landmarks) that meet certain requirements should be considered for data association. The mathematical definition of such requirements can be a difficult task. However, they can be conceptually defined in terms of how confident a segment is, according to how accurately such a segment semantically represents the connection between two stable features of the environment. Fuzzy systems are a suitable Computational Intelligence-based method for solving this problem, since fuzzy rule bases can be easily designed by using labels that represent high-level concepts. Thus, a method for the detection of unsuitable segments based on a hierarchical fuzzy system is proposed. The use of this fuzzy system improves both the natural landmark extraction and the data association procedures.
a) Fuzzy system-based rejection of unreliable natural landmarks represented as
segments.
Once the alignment between two sets of segments obtained at consecutive instants is calculated, it is necessary to reject every pair of matching segments that does not meet certain requirements defined by the designer. These requirements are related to the accuracy of the polygonal representation of the environment. Specifically, the RMSE measurement described in section 3.4.1 is used as input of a hierarchical fuzzy system that implements such requirements. The output of the fuzzy system (a confidence value) represents the reliability of each segment of a given pair. Only those pairs whose components have a confidence that exceeds a heuristically defined threshold are used in the pose change estimation procedure.
Figure 4-23 shows how the robot acquires information from the 2D laser device at consecutive instants $t$ and $t+\Delta t$.
Figure 4-23. Example of natural landmarks represented as segments obtained at instants $t$ and $t+\Delta t$. Raw data at consecutive instants are slightly different.
Raw data sensed at times $t$ and $t+\Delta t$ are slightly different; therefore, although the polygonal representations consist of the same stable features both at time $t$ and at time $t+\Delta t$, the positions of the stable features depend on the positions of the raw points that define such features. Since these positions will be used for the robot pose change estimation, it is necessary to ensure that the error in their calculation is low enough.
Assume that a segment (natural landmark) at time $t$ is defined by two end points, and that its matching segment at time $t+\Delta t$ is defined by two other end points. The end points at time $t$ are included in a semantic polyline that describes a portion of the environment sensed by the 2D laser device at time $t$; the polyline that describes the same portion of the setting at time $t+\Delta t$ includes the end points of the matching segment. In order to calculate the confidence degree of the pair of matching segments, the hierarchical fuzzy system shown in Figure 4-24 is used. The accuracy of the four end points is estimated by taking into account the RMSE obtained by comparing the edges of the corresponding polylines to the set of raw data that they represent, see section 3.4.1.

Summarizing, a feature point is reliable if the edges that share such a point accurately fit the raw data; therefore, the RMSE value associated with each edge should be low enough. A segment is reliable if its end points are also reliable. Finally, a pair of segments calculated at times $t$ and $t+\Delta t$ is reliable if the confidence of each segment exceeds a given threshold.
Figure 4-25 shows how the hierarchical fuzzy system that calculates the confidence degree of a segment is implemented using the XFuzzy 3.0 tool. A fuzzy rule base implemented by the module "basepointconfidence" obtains the confidence of a stable point (IAC or OAC) from the inputs RMSE0 and RMSE1.
Figure 4-24. Fuzzy system that calculates the confidence degree of a segment. In the example, the fuzzy system obtains the confidence value for a pair of matching segments.
Figure 4-25. Hierarchical fuzzy system that calculates the confidence of a segment.
As a segment is bounded by two stable points, the confidence of the first end point is obtained using rmsep0pastseg and rmsep0nextseg as inputs. The confidence of the second end point is obtained using rmsep1pastseg and rmsep1nextseg as inputs. In general, rmsepipastseg represents the RMSE calculated after comparing the edge whose last end point is the point that defines the natural landmark to the set of raw data described by such an edge. On the other hand, rmsepinextseg represents the RMSE obtained after comparing the edge whose first end point is that same point to the set of raw data approximated by such an edge. Once the confidences of the end points of the segment that describes a natural landmark are obtained, these values are used as inputs of the second-level module "basesegmentconfidence". The inputs P0CONFIDENCE and P1CONFIDENCE are in the interval $[0, 1]$, where $0$ means "null confidence" and $1$ means "total confidence". The rule base implemented by the module "basesegmentconfidence" also generates a confidence value in the interval $[0, 1]$ as final output.
The fuzzy type TRMSE defines the membership functions that allow the fuzzification of the inputs of the "basepointconfidence" module, see Figure 4-26 (a). These inputs are normalized to the interval $[0, 1]$. The RMSE normalization associated with an edge is carried out by considering a minimum and a maximum value, both fixed in the experiments. If one of the end points of the segment that defines a landmark is the first or the last point of an edge whose RMSE is higher than the maximum value, then the segment is not taken into account.

Figure 4-26 (b) illustrates the fuzzy type TCONFIDENCE for the confidence values. This set of membership functions allows the fuzzification and the defuzzification of the inputs and of the output of the module "basesegmentconfidence".
Table 4-3 shows the implementation of the rule base for the module
“basepointconfidence”. Table 4-4 describes the distribution of rules for the module
“basesegmentconfidence”.
(a)
(b)
Figure 4-26. Fuzzy types for the hierarchical fuzzy system that calculates the confidence of
a segment (natural landmark).
| RMSE0 \ RMSE1 | VERYLOW  | LOW    | MEDIUM  | HIGH    | VERYHIGH |
| VERYLOW       | VERYHIGH | HIGH   | HIGH    | LOW     | LOW      |
| LOW           | HIGH     | HIGH   | MEDIUM  | MEDIUM  | LOW      |
| MEDIUM        | HIGH     | MEDIUM | MEDIUM  | LOW     | VERYLOW  |
| HIGH          | LOW      | MEDIUM | LOW     | LOW     | VERYLOW  |
| VERYHIGH      | LOW      | LOW    | VERYLOW | VERYLOW | VERYLOW  |

Table 4-3. Rule base with inputs RMSE0 and RMSE1 and PointConfidence as output.
| P0CONFIDENCE \ P1CONFIDENCE | VERYLOW | LOW    | MEDIUM | HIGH     | VERYHIGH |
| VERYLOW                     | VERYLOW | VERYLOW | LOW   | LOW      | LOW      |
| LOW                         | VERYLOW | LOW    | LOW    | MEDIUM   | MEDIUM   |
| MEDIUM                      | LOW     | LOW    | MEDIUM | MEDIUM   | HIGH     |
| HIGH                        | LOW     | MEDIUM | MEDIUM | HIGH     | VERYHIGH |
| VERYHIGH                    | LOW     | MEDIUM | HIGH   | VERYHIGH | VERYHIGH |

Table 4-4. Rule base with inputs P0CONFIDENCE and P1CONFIDENCE and SEGMConfidence as output.
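To make the inference concrete, the sketch below is a simplified stand-in for the "basesegmentconfidence" module: it assumes evenly spaced triangular membership functions (the actual TCONFIDENCE shapes are defined in XFuzzy), applies min/max Mamdani inference over the Table 4-4 rule base, and defuzzifies as a weighted mean of label centres.

```python
import numpy as np

LABELS = ["VERYLOW", "LOW", "MEDIUM", "HIGH", "VERYHIGH"]
CENTERS = np.linspace(0.0, 1.0, 5)   # assumed evenly spaced label centres

def fuzzify(x):
    """Triangular membership degrees of a normalized input in [0, 1]."""
    mu = np.clip(1.0 - 4.0 * np.abs(x - CENTERS), 0.0, None)
    return dict(zip(LABELS, mu))

# Rule base of Table 4-4 (rows: P0CONFIDENCE, columns: P1CONFIDENCE).
SEG_RULES = {
    "VERYLOW":  ["VERYLOW", "VERYLOW", "LOW", "LOW", "LOW"],
    "LOW":      ["VERYLOW", "LOW", "LOW", "MEDIUM", "MEDIUM"],
    "MEDIUM":   ["LOW", "LOW", "MEDIUM", "MEDIUM", "HIGH"],
    "HIGH":     ["LOW", "MEDIUM", "MEDIUM", "HIGH", "VERYHIGH"],
    "VERYHIGH": ["LOW", "MEDIUM", "HIGH", "VERYHIGH", "VERYHIGH"],
}

def segment_confidence(p0_conf, p1_conf):
    """Confidence of a segment from the confidences of its end points."""
    mu0, mu1 = fuzzify(p0_conf), fuzzify(p1_conf)
    num = den = 0.0
    for li in LABELS:
        for j, lj in enumerate(LABELS):
            w = min(mu0[li], mu1[lj])              # rule activation degree
            num += w * CENTERS[LABELS.index(SEG_RULES[li][j])]
            den += w
    return num / den if den > 0 else 0.0

# Example: two fairly reliable end points yield a high segment confidence.
print(segment_confidence(0.8, 0.7))   # ~0.75
```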
b) Pose change estimation using pairs of matching segments from consecutive instants.
Once the unreliable features have been removed from the set of pairs of matching segments by using the hierarchical fuzzy system described above, the change in the pose of the robot relative to the selected pairs of segments can be calculated. This procedure is equivalent to computing the apparent motion of the associated segments relative to the robot, see Figure 4-27.
Assuming that a segment is perceived at times $t$ and $t+\Delta t$, the robot pose change between both instants can be obtained by taking the segment as the reference system. First, the change in heading of the segment is calculated, as if the robot remained motionless and the segment changed its position, see Figure 4-28.
Figure 4-27. Two segments from instants $t$ and $t+\Delta t$ are matched. The segment is taken as the reference system in order to estimate the change in position of the robot.
Figure 4-28. Calculation of the variation in robot heading using two matched segments from instants $t$ and $t+\Delta t$.
In this case, the change in the heading of the robot is the opposite of the change in the heading of the segment relative to the robot between $t$ and $t+\Delta t$. Next, the variation in the position of the robot in a Euclidean plane can be calculated by executing a set of geometric transformations where the segment defines the coordinate system. The selection of the $x$ axis and the $y$ axis of the segment-based reference system is determined by the order of the end points of such a segment. If $p_0$ and $p_1$ are the ends of the segment, both points can be written in polar form according to the reference system defined by the 2D laser device. Point $p_1$, with coordinates $(d_1, \varphi_1)$, is placed after point $p_0$, with coordinates $(d_0, \varphi_0)$, if $\varphi_1 > \varphi_0$.
Figure 4-29 (a) shows an example where the robot is located in front of a segment at time $t$. Then the robot moves slightly to the left, and the new pose of the robot at time $t+\Delta t$ is also represented. The robot reference systems at instants $t$ and $t+\Delta t$ are described by the black and red coordinate systems, respectively. The segment is bounded by points $p_0$ and $p_1$. Figure 4-29 (b) shows both points in polar form with respect to the 2D laser device reference system at time $t$. Figure 4-29 (c) shows both points in polar coordinates with respect to the 2D laser device reference system at time $t+\Delta t$. According to the values of the polar angles, point $p_0$ is placed before $p_1$, so the $x$ axis of the segment-based reference system is obtained by extending the half line aligned with the segment whose origin is the point $p_0$. The $y$ axis is defined as the line perpendicular to the $x$ axis that contains the point $p_0$. Once the reference system based on the segment is defined, it is possible to calculate the position of the robot according to the segment-based coordinate system both at time $t$ and at time $t+\Delta t$, since this reference system does not change during the interval $[t, t+\Delta t]$. The position of the robot with respect to the segment-based reference system is calculated by considering the triangle formed by the robot position and the two end points of the segment.
Figure 4-29. (a) 2D laser reference systems at times $t$ and $t+\Delta t$. (b) Ends of the segment according to the 2D laser reference system at time $t$. (c) Ends of the segment according to the 2D laser reference system at time $t+\Delta t$.
As Figure 4-30 shows, it is possible to determine the position of the robot using the equations:

$x_R = d_0 \cos\alpha, \qquad y_R = d_0 \sin\alpha$ (4-4)

where $(x_R, y_R)$ are the Cartesian coordinates of the robot position according to the segment-based coordinate system, $l$ is the distance from point $p_0$ to point $p_1$, and the angle $\alpha$ is calculated by applying the sine theorem:

$\dfrac{l}{\sin(\varphi_1 - \varphi_0)} = \dfrac{d_1}{\sin\alpha}$ (4-5)

The value of $\alpha$ is obtained from equation (4-5):

$\alpha = \arcsin\!\left(\dfrac{d_1 \sin(\varphi_1 - \varphi_0)}{l}\right)$ (4-6)
If the segment stays at the same position over time and the robot coordinates at times $t$ and $t+\Delta t$ are calculated according to the segment-based reference system, it is possible to estimate the change in the pose of the robot during the interval $[t, t+\Delta t]$. In order to calculate such coordinates, the following translation operation is made:

$(x', y') = (x_{t+\Delta t} - x_t,\; y_{t+\Delta t} - y_t)$ (4-7)

where $(x', y')$ are the coordinates of the translated point according to the segment-based reference system. A rotation by $\beta$, the angle between the segment and the $x$ axis of the robot reference system at time $t$, allows the value of the position change to be calculated in the reference system defined by the 2D laser device at time $t$:

$\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} x' \\ y' \end{pmatrix}$ (4-8)
Figure 4-30. Estimation of the change in the position of the robot relative to the robot reference system at time $t$ using translation and rotation operations.
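The same pose change can be obtained with a direct rigid-transform formulation, shown below as a Python sketch. It is equivalent in spirit to the translation and rotation steps of equations (4-7) and (4-8) but works with Cartesian end points; the names are illustrative, and this is not the thesis code.

```python
import numpy as np

def pose_change_from_segment(p0_t, p1_t, p0_t1, p1_t1):
    """Pose change of the robot between t and t+dt from one matched
    segment, taking the (static) segment as reference system.  End
    points are Cartesian coordinates in the laser frame of each instant."""
    p0_t, p1_t, p0_t1, p1_t1 = (np.asarray(p, dtype=float)
                                for p in (p0_t, p1_t, p0_t1, p1_t1))

    def to_segment_frame(p0, p1):
        # Segment frame: origin at p0, x axis along the segment.
        x = (p1 - p0) / np.linalg.norm(p1 - p0)
        y = np.array([-x[1], x[0]])
        R = np.stack([x, y])          # rows: frame axes in laser coords
        return R, p0

    R_t, o_t = to_segment_frame(p0_t, p1_t)
    R_1, o_1 = to_segment_frame(p0_t1, p1_t1)
    # Laser origin (robot) expressed in the common segment frame.
    r_t = R_t @ (-o_t)
    r_1 = R_1 @ (-o_1)
    # Heading change: apparent rotation of the segment, sign reversed.
    ang = lambda v: np.arctan2(v[1], v[0])
    dtheta = ang(p1_t - p0_t) - ang(p1_t1 - p0_t1)
    # Position change expressed in the robot frame at time t.
    dx, dy = R_t.T @ (r_1 - r_t)
    return dx, dy, dtheta
```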
c) Fuzzy system for the rejection of unreliable pose change estimates.
Each pair of matching segments provides a robot pose change estimate. Ideally, all these pose change estimations should be identical. However, due to inaccuracies in the data association procedure, slightly different estimations may be obtained. If one of the estimations is very different from the rest, such an estimation should be rejected. A procedure that detects such erroneous estimates has been designed. Specifically, a fuzzy-based technique that uses the output of the fuzzy system described in section 4.2.1-2-a has been designed. As mentioned before, the use of a fuzzy system for the detection of unreliable data makes it possible to heuristically define rules in terms of semantic concepts that are difficult to model by conventional methods.
Assume that a number of pose change estimations are available, obtained from the pairs of matching segments at times $t$ and $t+\Delta t$. Figure 4-31 shows an example. The coordinates of the stable features are represented according to the laser reference system at time $t$. The point where the robot is located at time $t$ is the coordinate origin, and the point where the robot is located at time $t+\Delta t$ is defined by the position change $(\Delta x, \Delta y)$. The following system of equations defines the relation between this point and the rest of the points:

$(x_i - \Delta x)^2 + (y_i - \Delta y)^2 = d_i^2, \qquad i = 1, \ldots, n$ (4-9)

where $(x_i, y_i)$ are the coordinates of the stable feature points and $d_i$ are the corresponding 2D laser distance measurements at time $t+\Delta t$. Each pair of segments that represents the same natural landmark provides a value of $(\Delta x, \Delta y)$; as mentioned before, ideally all such values should be the same.

Figure 4-31. Relation between the position of the robot according to the robot reference system at $t$ and the rest of the points that define the landmarks, using the 2D laser distance measurements.
The position change $(\Delta x, \Delta y)$ obtained from any pair of matching segments would ideally be a solution of equations (4-9), but in practice this does not happen. Therefore, the RMSE (Root Mean Squared Error), computed using the following equation, should be considered:

$e = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left( \sqrt{(x_i - \Delta x)^2 + (y_i - \Delta y)^2} - d_i \right)^{2}}$ (4-10)

If the pose change estimation is calculated from a pair of matching segments that represent the same natural landmark $S_i$, the RMSE is named $e_{S_i}$. If $e_{S_i}$ is high and the confidence degree obtained by the hierarchical fuzzy system described in Figure 4-25 for each component of the pair of matching segments is not high enough, such a pair should be rejected. Another fuzzy system has been designed to carry out this procedure by estimating the unreliability degree of each segment that defines the pair.
Assume that the confidence degrees of the two segments of a pair are known, that $e_{S_i}$ is the RMSE according to equation (4-10), and that a threshold is heuristically defined. All the pairs of matching segments whose RMSE exceeds this threshold are rejected. The rest of the pairs are analyzed by using the fuzzy system shown in Figure 4-32. The confidence degrees of both segments and the normalized $e_{S_i}$ value are used as inputs. The normalization process is carried out by taking the threshold as the maximum value. In this case, the outputs represent the unreliability degrees of the robot pose change estimate obtained from the pair of segments whose confidence degrees were calculated by using the fuzzy rule base defined in Table 4-4. In the experiments, a pose change estimate is considered valid if both unreliability degrees are lower than a limit heuristically defined by the designer.

Figure 4-32. Application of a fuzzy system that calculates the final unreliability degree of a pair of matching segments. This allows non-valid pose change estimates to be rejected.
The fuzzy system used in Figure 4-32 has been implemented by using the XFuzzy 3.0 tool. Figure 4-33 shows the inputs and the output of the rule base. Figure 4-34 illustrates the fuzzy type that allows the defuzzification of the output Ud. Table 4-5 shows the rule base that obtains the Ud (Unreliability Degree) value from the inputs SegConf (segment confidence) and RMSE eSi. The fuzzy type for the variable SegConf is TCONFIDENCE and the fuzzy type for the variable RMSE eSi is TRMSE (see Figure 4-26).
Figure 4-33. Fuzzy system that calculates the unreliability degree of a segment of a pair of
matching segments by taking into account its confidence degree and the eSi value.
Figure 4-34. Fuzzy type that allows the defuzzification of the output Unreliability Degree.
| RMSE eSi \ SegConf | VERYLOW      | LOW          | MEDIUM   | HIGH    | VERYHIGH    |
| VERYLOW            | HIGH         | MEDIUM       | LOW      | VERYLOW | VERYVERYLOW |
| LOW                | HIGH         | MEDIUM       | LOW      | VERYLOW | VERYLOW     |
| MEDIUM             | VERYHIGH     | HIGH         | HIGH     | MEDIUM  | LOW         |
| HIGH               | VERYVERYHIGH | VERYHIGH     | HIGH     | HIGH    | MEDIUM      |
| VERYHIGH           | VERYVERYHIGH | VERYVERYHIGH | VERYHIGH | HIGH    | MEDIUM      |

Table 4-5. Rule base with inputs SegConf and RMSE eSi and Ud (Unreliability Degree) as output.
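The consistency check can be sketched as follows: compute the RMSE of equation (4-10) for a pose change hypothesis against the laser ranges to the stable points, discard pairs above the hard threshold, and then query the fuzzy rule base for each segment of the pair. The threshold values and the `unreliability` callable below are illustrative placeholders for the XFuzzy module, not thesis values.

```python
import numpy as np

def esi_rmse(dx, dy, feature_pts, ranges):
    """Equation (4-10): RMSE between the distances predicted by a pose
    change hypothesis (dx, dy) and the measured ranges to the stable
    feature points (coordinates expressed in the frame at time t)."""
    pts = np.asarray(feature_pts, dtype=float)
    pred = np.linalg.norm(pts - np.array([dx, dy]), axis=1)
    return float(np.sqrt(np.mean((pred - np.asarray(ranges, float)) ** 2)))

def estimate_is_valid(esi, conf_a, conf_b, unreliability,
                      e_max=0.5, ud_max=0.4):
    """Rejection logic described in the text.  `unreliability(e, conf)`
    stands in for the Table 4-5 rule base; e_max and ud_max are
    illustrative thresholds (the thesis sets them heuristically)."""
    if esi > e_max:
        return False                       # hard rejection on the RMSE
    e_norm = esi / e_max                   # normalization used as input
    return (unreliability(e_norm, conf_a) <= ud_max
            and unreliability(e_norm, conf_b) <= ud_max)
```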
4.3 EKF filtering for pose estimation.
The localization problem involves answering the following question: where is the robot? Instead of maintaining a single hypothesis about where in the world the robot might be, probabilistic localization maintains a probability distribution over the space of all such hypotheses. In fact, the challenge is to maintain a position probability density over all possible robot poses. This density can have different forms according to the available information about the robot's position, such as a uniform density distribution (when the robot has no idea about its position), a multimodal density distribution (several places should be taken into account) or a unimodal density distribution (the robot is highly certain about its position).
In this work, the problem is to know how the pose of the robot varies in the interval $[t, t+\Delta t]$, where $\Delta t$ is small enough. For this purpose, the semantic polygonal representation of the environment is considered together with the measurements acquired by the encoders and the kinematic model of the robot. In this case, the robot can be quite certain about its pose; therefore, a unimodal density distribution (specifically, a Gaussian distribution) is used for representing the uncertainty of the pose change.
The technique used for combining the measurements provided by both exteroceptive and proprioceptive sensors is based on the EKF. The EKF is a Kalman filtering variant that can be used on nonlinear systems. The Kalman Filter (KF) is a recursive Bayesian method for the estimation of the state of a dynamical system in the presence of Gaussian noise [70], [134]. It simultaneously maintains estimates of both the state vector and the error covariance matrix. Thus, the output of the KF is a Gaussian probability density function (PDF) defined by this mean and covariance. In the context of localization, the KF output is a distribution of likely robot positions instead of a single position estimate.
A general diagram of the technique that estimates the pose of the robot at time $t+\Delta t$, using the virtual sensor measurement together with the information obtained from proprioceptive sensors (encoders), is shown in Figure 4-35. While the robot navigates, the polygonal representation of the environment can include stable features such as IAC and OAC points. If these kinds of labelled points are found in the description of the setting, the robot can use the virtual sensor. However, if appropriate features are not detected, the virtual sensor is inactive. In this case, only the information provided by the proprioceptive sensors is taken into account, together with the model of the robot.
Figure 4-35. General diagram of the estimation of the pose of the robot.
If this information is not available either, only the model of the robot is used to estimate the pose, considering that the reference inputs are ideally reached. The algorithm that implements the EKF-based filter proposed in this work is adapted to the described circumstances. The virtual sensor provides a set of estimates of the relative pose change of the robot according to the set of detected natural landmarks. In order to use this information in the EKF-based algorithm, it is necessary to combine all the relative pose change estimates provided by the virtual sensor. Subsection 4.3.1 explains the statistical technique used for this purpose. Finally, subsection 4.3.2 describes both the model of the robot and the global behaviour of the filter.
4.3.1 Obtaining the output of the pose change virtual sensor by
using statistical methods.
If a set of valid natural landmarks is available, the virtual sensor provides a set of valid pose change estimates. As mentioned above, if there are no available landmarks (the set is empty), the robot can only use the information from proprioceptive sensors to estimate its pose change; specifically, the robot pose change estimate obtained from odometry at time $t+\Delta t$ is stored. If the set is not empty, it is necessary to obtain a unique representative value of the set, which will be combined with the odometry estimate by using the EKF-based algorithm.
Although the pose change estimates available at time $t+\Delta t$ have been considered as valid estimations from a conceptual point of view, all of them are affected by the uncertainty of the 2D laser scanner measurements. Moreover, it is necessary to select a function that combines all the pose change estimations into a single value, which is actually the value that the virtual sensor should provide as the measurement used in the update stage of an EKF-based method. This function should take into account the uncertainty of each estimate. In particular, the uncertainty of each individual pose change estimate is represented by using a Gaussian distribution. The parameters of the Gaussian distribution are calculated by propagating the uncertainty of the 2D laser raw measurements. This procedure (detailed in Appendix II) is summarized as follows.
Since any individual pose change estimate is obtained from a pair of matching segments whose end points are the individual raw points, in polar form, sensed by the 2D laser scanner, it is possible to define the estimate as a function of an input vector $\mathbf{x}$:

$(\Delta x, \Delta y, \Delta\theta) = f(\mathbf{x})$ (4-11)

where the end points of the pair of matching segments are, in polar form:

$p_1 = (d_1, \varphi_1)$ (4-12)

$p_2 = (d_2, \varphi_2)$ (4-13)

$p_3 = (d_3, \varphi_3)$ (4-14)

$p_4 = (d_4, \varphi_4)$ (4-15)

The uncertainty is modelled by using Gaussian distributions, so each individual estimate has an associated Gaussian distribution. A method for uncertainty propagation, fully described in Appendix II, is used to calculate the covariance matrix of each estimate. In particular, only the uncertainty of the range inputs $d_j$ is considered, modelled with Gaussian distributions with covariance values $\sigma_{d_j}^2$, respectively. Furthermore, it is assumed that the distance measurements are not correlated. Note that only errors in the range component are considered to be significant [49], [51], so no uncertainty is estimated for the angular component $\varphi$.
The uncertainty present in the set of inputs could be easily propagated to the set of outputs if both sets were related by a linear function and the input errors followed a Gaussian distribution. While the second requirement is assumed to be met, the function that relates inputs and outputs is non-linear. However, the uncertainty in the outputs can still be approximately computed by linearization:

$C_{out} = J_f\, C_x\, J_f^{T}$ (4-16)

where the covariance matrix of the input $\mathbf{x}$ is defined by the following equation:

$C_x = \mathrm{diag}\!\left(\sigma_{d_1}^2, \sigma_{d_2}^2, \sigma_{d_3}^2, \sigma_{d_4}^2\right)$ (4-17)

The Jacobian of $f$ is defined as follows:

$J_f = \left[ \dfrac{\partial f_i}{\partial x_j} \right]$ (4-18)
After properly expanding equation (4-16), the following set of equations allows the terms of the covariance matrix of the pose change estimate to be obtained:

$\sigma_{\Delta x}^{2} = \sum_{j=1}^{4} \left( \dfrac{\partial f_1}{\partial d_j} \right)^{2} \sigma_{d_j}^{2}$ (4-19)

$\sigma_{\Delta y}^{2} = \sum_{j=1}^{4} \left( \dfrac{\partial f_2}{\partial d_j} \right)^{2} \sigma_{d_j}^{2}$ (4-20)

$\sigma_{\Delta\theta}^{2} = \sum_{j=1}^{4} \left( \dfrac{\partial f_3}{\partial d_j} \right)^{2} \sigma_{d_j}^{2}$ (4-21)

$\sigma_{\Delta x \Delta y} = \sum_{j=1}^{4} \dfrac{\partial f_1}{\partial d_j} \dfrac{\partial f_2}{\partial d_j}\, \sigma_{d_j}^{2}$ (4-22)

$\sigma_{\Delta x \Delta\theta} = \sum_{j=1}^{4} \dfrac{\partial f_1}{\partial d_j} \dfrac{\partial f_3}{\partial d_j}\, \sigma_{d_j}^{2}$ (4-23)

$\sigma_{\Delta y \Delta\theta} = \sum_{j=1}^{4} \dfrac{\partial f_2}{\partial d_j} \dfrac{\partial f_3}{\partial d_j}\, \sigma_{d_j}^{2}$ (4-24)

In particular, the covariances of the components $\Delta x$, $\Delta y$ and $\Delta\theta$ are defined by equations (4-19), (4-20) and (4-21) respectively, and the correlations between $\Delta x$ and $\Delta y$, between $\Delta x$ and $\Delta\theta$, and between $\Delta y$ and $\Delta\theta$ are defined by equations (4-22), (4-23) and (4-24), respectively. The equations that define the terms of the Jacobian of $f$ are fully detailed in Appendix II.
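The propagation in equations (4-16) to (4-24) can be reproduced numerically with a finite-difference Jacobian, as in this sketch; here `f` stands for the virtual-sensor function of equation (4-11) restricted to the four (noisy) range inputs, and all names are illustrative.

```python
import numpy as np

def propagate_cov(f, d, var_d, eps=1e-6):
    """First-order propagation C_out = J C_x J^T of equation (4-16).
    `f` maps the four ranges to (dx, dy, dtheta); `var_d` holds the
    per-beam range variances; angles are treated as noise-free, as
    assumed in the text."""
    d = np.asarray(d, dtype=float)
    y0 = np.asarray(f(d), dtype=float)
    J = np.zeros((y0.size, d.size))
    for j in range(d.size):             # finite-difference Jacobian (4-18)
        step = np.zeros_like(d)
        step[j] = eps
        J[:, j] = (np.asarray(f(d + step), dtype=float) - y0) / eps
    C_x = np.diag(var_d)                # uncorrelated ranges, eq. (4-17)
    return J @ C_x @ J.T                # eq. (4-16)
```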
Finally, the pose change estimation provided by the virtual sensor, $\hat{z}_{vs}$, is calculated as the weighted mean of the mean components $\mu_i$ of all the Gaussian distributions, where each $\mu_i$ value is obtained from a pair of matching segments. The inverse of each individual covariance matrix $C_i$ is taken as weight:

$\hat{z}_{vs} = \left( \sum_i C_i^{-1} \right)^{-1} \sum_i C_i^{-1} \mu_i$ (4-25)

The covariance matrix $C_{vs}$ is calculated by using the following equation:

$C_{vs} = \left( \sum_i C_i^{-1} \right)^{-1}$ (4-26)
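Equations (4-25) and (4-26) amount to an information-weighted average of the individual Gaussian estimates; a minimal numpy sketch:

```python
import numpy as np

def fuse_estimates(mus, covs):
    """Inverse-covariance weighted mean (4-25) and fused covariance
    (4-26) of the individual pose change estimates."""
    infos = [np.linalg.inv(np.asarray(C, dtype=float)) for C in covs]
    C_vs = np.linalg.inv(sum(infos))                       # eq. (4-26)
    z_vs = C_vs @ sum(I @ np.asarray(m, dtype=float)
                      for I, m in zip(infos, mus))         # eq. (4-25)
    return z_vs, C_vs
```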
4.3.2 Implementation of the EKF.
As Figure 4-35 shows, the algorithm that implements the EKF-based filter consists of two stages: prediction and update. In the prediction phase, the angular and linear velocities $\omega$ and $v$ are obtained using the data acquired by the encoders. A discrete approximation of the classic kinematic model of a tricycle is used to obtain the position at time $t+\Delta t$ when the pose at time $t$ is available:

$x_{t+\Delta t} = x_t + v\,\Delta t\,\cos\theta_t, \qquad y_{t+\Delta t} = y_t + v\,\Delta t\,\sin\theta_t$ (4-27)

In addition, the heading is calculated using the angular velocity $\omega$ as input by means of the following equation:

$\theta_{t+\Delta t} = \theta_t + \omega\,\Delta t$ (4-28)

where the time elapsed during the interval $[t, t+\Delta t]$ is $\Delta t$.
Both the Pioneer and BENDER robots used in the experiments are equipped with encoders that estimate the number of times per second that each wheel turns. Knowing the diameter of the wheel, the distance covered during a second can be measured. As both robots have a differential drive system, each wheel is connected to its own motor and encoder.
The linear and angular velocities $v$ and $\omega$ are obtained using the independent measurements obtained from each wheel as inputs:

$v = \dfrac{v_r + v_l}{2}, \qquad \omega = \dfrac{v_r - v_l}{b}$ (4-29)

where $v_r$ and $v_l$ are the linear velocities of the right and left wheels respectively ($v_r = r\,\omega_r$ and $v_l = r\,\omega_l$, with $\omega_r$ and $\omega_l$ the angular velocities of the right and left wheels and $r$ the wheel radius), and $b$ is the distance between the wheels. According to Figure 4-36, the linear and angular velocities are related by the radius $R$:

$v = \omega\, R$ (4-30)

Equation (4-30) could be rewritten taking into account that the curvature $\kappa$ is defined as the inverse of the radius $R$.
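A direct transcription of equations (4-29) and (4-30) for a differential drive might read as follows; the parameter names (`wheel_radius`, `wheel_base`) are assumptions, not thesis notation.

```python
def body_velocities(w_r, w_l, wheel_radius, wheel_base):
    """Wheel angular velocities to linear/angular body velocities."""
    v_r, v_l = wheel_radius * w_r, wheel_radius * w_l   # wheel linear speeds
    v = 0.5 * (v_r + v_l)                               # eq. (4-29)
    w = (v_r - v_l) / wheel_base
    # Turning radius R = v / w, i.e. v = w * R (eq. 4-30);
    # the curvature is its inverse: kappa = w / v.
    return v, w
```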
The EKF-based filter uses the measurements $v$ and $\omega$ to estimate the pose of the robot at time $t+\Delta t$ from the pose obtained at time $t$ by applying equations (4-27) and (4-28). The inputs $v$ and $\omega$ are calculated using the values acquired from the encoders by applying equations (4-29) and (4-30). The uncertainty in the measurements is represented by a Gaussian distribution with mean $(v, \omega)$ and covariance $Q$:
$Q = \begin{pmatrix} \sigma_v^2 & 0 \\ 0 & \sigma_\omega^2 \end{pmatrix}$ (4-31)

where $\sigma_v^2$ is the covariance associated to $v$ and $\sigma_\omega^2$ is the covariance associated to $\omega$. The state of the robot at time $t$ is represented by the vector $X_t$:

$X_t = (x_t, y_t, \theta_t)^{T}$ (4-32)

Assume that $X_t$ is the state of the robot at time $t$.
Figure 4-36. Parameters of the kinematic model of a differential drive system robot.
Then, the EKF-based algorithm predicts the state of the robot $\hat{X}_{t+\Delta t}$ at time $t+\Delta t$ by applying the inputs $u_t = (v, \omega)^T$ to the kinematic model described in (4-27) and (4-28):

$\hat{X}_{t+\Delta t} = f(X_t, u_t)$ (4-33)

As the dynamical system (4-33) is non-linear, it is necessary to linearize the equations about the current estimate. The Jacobian of $f$ with respect to the state is $F_t$:

$F_t = \dfrac{\partial f}{\partial X}\bigg|_{X_t, u_t}$ (4-34)

On the other hand, the Jacobian of $f$ with respect to the input is $G_t$:

$G_t = \dfrac{\partial f}{\partial u}\bigg|_{X_t, u_t}$ (4-35)

If the matrix $G_t$ is combined with the matrix $Q$, the matrix $Q_t$ is obtained:

$Q_t = G_t\, Q\, G_t^{T}$ (4-36)

The covariance $P_{t+\Delta t}^{-}$ associated to $\hat{X}_{t+\Delta t}$ is then calculated by using the following equation:

$P_{t+\Delta t}^{-} = F_t\, P_t\, F_t^{T} + Q_t$ (4-37)

where $F_t$ coincides with the identity matrix and $P_t$ is the covariance associated to the state $X_t$.
Once the prediction phase is finished, the update stage is carried out. If the environment provides a polygonal representation that includes stable labelled points (IAC and OAC), it is possible to obtain a measurement (the pose change) from the virtual sensor described in section 4.2.1, together with its covariance matrix:

$z_{t+\Delta t} = \hat{z}_{vs}$ (4-38)

$R_{t+\Delta t} = C_{vs}$ (4-39)

This information is used to calculate the filter gain $K_{t+\Delta t}$ that will be used for updating the pose change obtained in the prediction stage. As pose change measurements are being used, the equation that computes $K_{t+\Delta t}$ is defined as follows:
$K_{t+\Delta t} = P_{t+\Delta t}^{-}\, H^{T}\, S^{-1}$ (4-40)

where $H$ is the identity matrix that represents the Jacobian of the state with respect to the measurement vector, and $S^{-1}$ is the inverse of the matrix $S$:

$S = H\, P_{t+\Delta t}^{-}\, H^{T} + R_{t+\Delta t}$ (4-41)

The current state estimation $X_{t+\Delta t}$, together with its covariance matrix $P_{t+\Delta t}$, is obtained by using $K_{t+\Delta t}$ to update $\hat{X}_{t+\Delta t}$ and $P_{t+\Delta t}^{-}$ from the obtained pose change measurements:

$X_{t+\Delta t} = \hat{X}_{t+\Delta t} + K_{t+\Delta t}\left(z_{t+\Delta t} - \hat{z}_{t+\Delta t}\right)$ (4-42)

$P_{t+\Delta t} = \left(I - K_{t+\Delta t}\, H\right) P_{t+\Delta t}^{-}$ (4-43)

where $\hat{z}_{t+\Delta t}$ is the pose change predicted in the prediction stage. If there are no stable features detected by the virtual sensor at time $t+\Delta t$ in the setting, the current estimated state is directly taken from the prediction:

$X_{t+\Delta t} = \hat{X}_{t+\Delta t}$ (4-44)

$P_{t+\Delta t} = P_{t+\Delta t}^{-}$ (4-45)
Figure 4-37 outlines the full process (prediction and update).
Figure 4-37. Obtaining the current state $X_{t+\Delta t}$ and the covariance matrix $P_{t+\Delta t}$ using an EKF.
The following possibilities are taken into account:

- If there are no available measurements of $v$ and $\omega$ from the encoders, the reference inputs (velocity and curvature) are used in the prediction stage.
- If there are no available measurements from the virtual sensor, the update stage is not executed.
- If there are no available measurements, neither from proprioceptive nor from exteroceptive sensors, then the model of the robot is used and $v$ and $\omega$ are calculated using this model.
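The sketch below shows one plausible concrete form of a filter iteration with the structure of Figure 4-37 and the fallback logic listed above. The Jacobians are the standard ones for the unicycle model and are not copied from the thesis; for simplicity, the virtual-sensor measurement is treated here as a direct noisy observation of the new pose.

```python
import numpy as np

def ekf_step(X, P, v, w, dt, z=None, R=None,
             Q_vw=np.diag([0.01, 0.01])):
    """One prediction/update cycle.  X = (x, y, theta); (v, w) are the
    fused encoder velocities (or the reference inputs when encoders are
    unavailable); z/R is the optional virtual-sensor measurement.
    Q_vw is an illustrative input-noise covariance, not a thesis value."""
    x, y, th = X
    # Prediction with the discrete kinematic model (4-27)/(4-28).
    X_pred = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])               # Jacobian wrt the state
    G = np.array([[dt * np.cos(th), 0.0],
                  [dt * np.sin(th), 0.0],
                  [0.0,             dt]])          # Jacobian wrt the input
    P_pred = F @ P @ F.T + G @ Q_vw @ G.T          # cf. (4-36)/(4-37)
    if z is None:                                  # BLIND1: no update stage
        return X_pred, P_pred
    S = P_pred + R                      # identity measurement Jacobian
    K = P_pred @ np.linalg.inv(S)       # gain, cf. (4-40)/(4-41)
    X_new = X_pred + K @ (np.asarray(z, dtype=float) - X_pred)
    P_new = (np.eye(3) - K) @ P_pred    # cf. (4-42)/(4-43)
    return X_new, P_new
```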
The pose obtained from the output of the EKF-based filter is labelled taking these circumstances into account. If the pose is calculated as the combination of the measurement obtained from the encoders and the measurement generated by the virtual sensor, such a pose is labelled as NON-BLIND. The rest of the possible labels are: BLIND1 (there are no available measurements from the virtual sensor), BLIND2 (there are no available measurements from the proprioceptive sensors) and DOUBLE-BLIND (both situations occur simultaneously). The trajectory followed by the robot from the preceding node is stored when a new topological node is created or when the current node is being revisited and the preceding and current nodes are not neighbours. The output of the EKF-based filter provides the pose of the robot over time, and the sequence of these poses makes up such a trajectory. The labels NON-BLIND, BLIND1, BLIND2 and DOUBLE-BLIND can be used to estimate how confident the position of a node is. If most poses of the trajectory that leads the robot from one node to another are labelled as NON-BLIND, such a trajectory is considered more confident than a trajectory whose poses are mostly labelled with some kind of BLIND label. If a node is created after following a confident trajectory, the position of the node can also be considered confident.

The effectiveness of the proposed techniques is demonstrated in the following section, which describes a set of experiments carried out both in simulated and in real environments.
4.4 Experiments and results.
This section details the set of experiments carried out both in simulated and real environments. The main goal of these tests is to demonstrate that the proposed techniques are efficient and provide good results in most cases. Several sub-sections describe different experiments: first, the exploration algorithm is tested; next, the technique that stores the trajectory followed by the robot is verified; and, finally, a global experiment for topological map building is presented.
4.4.1 Testing the exploration algorithm.
Several tests have been carried out to demonstrate the performance of the exploration technique proposed in section 4.1.1-2. Figure 4-38 shows one of these experiments. The robot is located at an initial position of the simulated map of a hospital. The robot should follow the secondary corridor that intersects with the main corridor. When the intersection with the main corridor is reached, the robot should turn to the left and then explore the main corridor. Figure 4-38 (a) shows the simulated environment on the left. A section of this environment where the robot is located at a given time is shown on the right. The goal centroid selected by the exploration algorithm is also indicated. In this case, the furthest exploration sub-polygon located in front of the robot is selected by the fuzzy system. Figure 4-38 (b) and Figure 4-38 (c) show the robot following the secondary corridor, since the selected exploration sub-polygon is still located in front. Finally, Figure 4-38 (d) shows that the robot has turned to the left and is following the main corridor. Figure 4-39 shows a detail of the semantic polygonal representation of the setting and how the free space is divided into a set of labelled convex sub-polygons.
(a)
(b)
(c)
(d)
Figure 4-38. Autonomous exploration experiment. (a) The robot is located at the initial position. It starts to advance towards the point represented by the position of the goal centroid. (b) The robot is following the secondary corridor that intersects with the main corridor. (c) The robot reaches the main corridor. (d) The robot has turned to the left and is following the main corridor.
Figure 4-39. Detail of the semantic representation of the free space at time $t$. There are five exploration sub-polygons that define the exploration areas located at the intersections of the main corridor. The exploration sub-polygon 5 defines the end of the corridor.
This partition allows the robot to acquire abstract information about the environment and facilitates the process of deciding where to go. Furthermore, a human user could indicate which direction to follow by using natural commands such as "GO STRAIGHT" or "GO TO RIGHT", among others.
4.4.2 Testing the pose estimator.
The method that estimates the pose of the robot by using the techniques proposed in section 4.2 has been tested both in a simulated and in a real environment. First, a statistical analysis of the virtual sensor performance is carried out in subsection 4.4.2-1. Next, subsection 4.4.2-2 describes a set of experiments (in a simulated and in a real setting) where the robot accurately follows a trajectory by using the information provided by both proprioceptive and exteroceptive sensors, properly combined by an EKF-based method.
1) Statistical analysis of the virtual sensor performance.
As fully explained in subsection 4.2.1, the robot uses a virtual sensor to estimate its pose change at time $t+\Delta t$. The virtual sensor provides a pose change measurement computed from several points obtained from the data acquired by a 2D laser scanner. These points are the end points of a pair of matching segments. In addition, for each pose change estimate, its uncertainty is calculated by propagating the uncertainty of the end points that define the pair of matching segments. Since the virtual sensor measurements will be used by an EKF-based method, the uncertainty is modelled by using Gaussian distributions. In order to prove that the method for propagating the uncertainty is suitable in this context, an experiment in a simulated environment has been carried out. It is fully described as follows.
Assume that a simulated world consists of a unique object represented by a segment bounded by two end points, which are considered stable features. The simulated robot is equipped with a simulated 2D laser scanner that can sense such an object. The measurements obtained from the 2D laser scanner are affected by Gaussian white noise, which has been modelled by taking into account the features of the Hokuyo device fully described in Appendix II, see Figure 4-40. The robot is initially located at (0, 0) and its heading is also zero. A set of curvature and velocity references is then applied. In particular, the robot follows the trajectory described in Figure 4-40 (a), which allows the robot to approach the object.
(a)
(b)
(c)
Figure 4-40. Simulated world for the experiment. (a) Ground truth trajectory followed by
the robot and segment that represents the object in the simulated setting. (b) Relative
position of the segment from the robot point of view while the robot follows the trajectory.
(c) Detail of the set of relative positions of the segment while the robot is approaching the
segment.
From the robot point of view, the environment at time $t$ is described by a segment whose end points are represented in polar coordinates according to the robot reference system. The end points of a pair of matching segments at times $t$ and $t+\Delta t$ are used to estimate the pose change of the robot at time $t+\Delta t$. Figure 4-40 (b) shows how the segment is perceived by the robot while the trajectory shown in Figure 4-40 (a) is followed. Due to the noise affecting the range component, see Figure 4-41, the end points of the segment are not always located at the expected positions while the robot advances towards the object, see details in Figure 4-40 (c). Therefore, the set of pose change estimations calculated by taking into account the position of a pair of matching segments at consecutive instants is affected by an error, see Figure 4-42. The uncertainty of the pose change estimation at time $t+\Delta t$ is then modelled by using a Gaussian distribution whose covariance can be estimated by applying the method proposed in section 4.2.1.
Figure 4-41. Representation of the covariance of the distances to the end points of the segment when such a segment is sensed along the trajectory. Since the Hokuyo 2D laser scanner is being simulated, the error is 0.01 meters if the distance value is in the interval [0.01, 1] meters and 1% of the measurement if the distance value is greater than 1 meter.
Figure 4-42. Trajectory reconstructed from the set of relative segments versus the ground
truth trajectory.
Thus, for each pose change estimate along the trajectory, there is a calculated covariance matrix that provides information about the accuracy of such an estimation. In order to prove that each covariance matrix really represents the uncertainty of the corresponding pose change estimate, a Monte Carlo method has been implemented by using the Matlab environment.
The experiment (where the robot follows the ground truth trajectory shown in Figure 4-40 (a)) has been repeated $R$ times. For each repetition $r$ of the experiment, the list of pose change estimates consists of $K$ elements, and the sampling time is 0.015 seconds. The list of the pose change values corresponding to the ground truth trajectory is denoted by $Z_{gt}$. Then, for each $r$, an error matrix can be calculated:

$E_r = Z_r - Z_{gt}$ (4-46)

where $Z_r$ is the list of pose change estimates obtained during repetition $r$. The matrix $E$ can be defined as follows:

$E = \left( E_1, E_2, \ldots, E_R \right)$ (4-47)

The size of the matrix is $K \times R$, and each element is a vector $e_{k,r} = (e_{\Delta x}, e_{\Delta y}, e_{\Delta\theta})$ that represents the error obtained for each component of the pose change estimate at time $t_k$ during the experiment $r$; thus, every row of the matrix represents the errors obtained for all the experiments at time $t_k$.

The vector of covariance matrixes that represent the uncertainty associated to each pose change estimate along the trajectory is defined by the following equation:

$C = \left( C_1, C_2, \ldots, C_K \right)$ (4-48)

where each component is defined as follows:

$C_k = \begin{pmatrix} \sigma_{\Delta x}^{2} & \sigma_{\Delta x \Delta y} & \sigma_{\Delta x \Delta\theta} \\ \sigma_{\Delta x \Delta y} & \sigma_{\Delta y}^{2} & \sigma_{\Delta y \Delta\theta} \\ \sigma_{\Delta x \Delta\theta} & \sigma_{\Delta y \Delta\theta} & \sigma_{\Delta\theta}^{2} \end{pmatrix}$ (4-49)

The following equations allow the terms of the matrix (4-49) to be obtained:

$\sigma_{\Delta x}^{2} = E\!\left[ (e_{\Delta x} - \bar{e}_{\Delta x})^{2} \right]$ (4-50)

$\sigma_{\Delta y}^{2} = E\!\left[ (e_{\Delta y} - \bar{e}_{\Delta y})^{2} \right]$ (4-51)

$\sigma_{\Delta\theta}^{2} = E\!\left[ (e_{\Delta\theta} - \bar{e}_{\Delta\theta})^{2} \right]$ (4-52)

$\sigma_{\Delta x \Delta y} = E\!\left[ (e_{\Delta x} - \bar{e}_{\Delta x})(e_{\Delta y} - \bar{e}_{\Delta y}) \right]$ (4-53)

$\sigma_{\Delta x \Delta\theta} = E\!\left[ (e_{\Delta x} - \bar{e}_{\Delta x})(e_{\Delta\theta} - \bar{e}_{\Delta\theta}) \right]$ (4-54)

$\sigma_{\Delta y \Delta\theta} = E\!\left[ (e_{\Delta y} - \bar{e}_{\Delta y})(e_{\Delta\theta} - \bar{e}_{\Delta\theta}) \right]$ (4-55)

where $\bar{e}_{\Delta x}$, $\bar{e}_{\Delta y}$ and $\bar{e}_{\Delta\theta}$ represent the mean values of the errors in the estimation of each component of the pose change obtained over all the experiments at time $t_k$, and $E[\cdot]$ is the mathematical expectation. Figure 4-43 and Figure 4-44 show a comparison between the set of empirical covariance matrixes and the set of calculated covariance matrixes for each pose change estimate at time $t_k$.
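In numpy, the empirical covariances of equations (4-48) to (4-55) reduce to one sample covariance per time step over the repetitions; a self-contained sketch with synthetic data (the simulator outputs are not reproduced here):

```python
import numpy as np

def empirical_covariances(estimates, ground_truth):
    """Per-time-step sample covariance of the estimation error.
    `estimates` has shape (R, K, 3): R repetitions, K time steps,
    components (dx, dy, dtheta); `ground_truth` has shape (K, 3)."""
    errors = np.asarray(estimates, float) - np.asarray(ground_truth, float)
    return np.array([np.cov(errors[:, k, :], rowvar=False)
                     for k in range(errors.shape[1])])

# Synthetic check: 1000 repetitions, 50 steps, 1 cm / 0.01 rad noise.
rng = np.random.default_rng(0)
gt = np.zeros((50, 3))
est = gt + rng.normal(scale=0.01, size=(1000, 50, 3))
print(empirical_covariances(est, gt).shape)   # -> (50, 3, 3)
```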
(a)
(b)
(c)
Figure 4-43. Components of the main diagonal of the covariance matrixes. (a) Component $\sigma_{\Delta x}^{2}$. (b) Component $\sigma_{\Delta y}^{2}$. (c) Component $\sigma_{\Delta\theta}^{2}$.
(a)
(b)
(c)
Figure 4-44. Off-diagonal covariance components of the covariance matrixes that provide information about the correlation between variables. (a) Component $\sigma_{\Delta x \Delta y}$. (b) Component $\sigma_{\Delta x \Delta\theta}$. (c) Component $\sigma_{\Delta y \Delta\theta}$.
A vector of calculated covariance matrixes is obtained in each experiment. The mean of these vectors is calculated and used in the comparative study. In particular, Figure 4-43 (a), Figure 4-43 (b) and Figure 4-43 (c) show the components of the main diagonal of the covariance matrixes ($\sigma_{\Delta x}^{2}$, $\sigma_{\Delta y}^{2}$ and $\sigma_{\Delta\theta}^{2}$) at each time $t_k$. On the other hand, Figure 4-44 (a), Figure 4-44 (b) and Figure 4-44 (c) provide information about the correlation between the variables $\Delta x$, $\Delta y$ and $\Delta\theta$ at each time $t_k$. It can be observed that both the empirical and the calculated covariance values decrease as the robot advances towards the object represented by the segment. This result is expected because the simulated Hokuyo 2D laser scanner provides measurements whose error is 1% of the measurement when the distance is greater than 1 meter, while for lower distances (in the interval [0.01, 1] meters) the error is 0.01 meters. The standard deviation of the distance to each end point of the segment is calculated from the maximum error specified in the 2D laser device manual.
2) Trajectory estimation.
This subsection details two experiments, carried out in the simulated environment, which demonstrate the benefits of using the EKF-based method to estimate the trajectory followed by the robot by combining the measurements provided by the encoders with those provided by the designed pose virtual sensor.
The first experiment proves that trajectory estimation based on dead-reckoning leads to
poor long-term results. When the same experiment is repeated by using the measurements
provided by the virtual sensor whenever they are available, the obtained results are clearly
better. The second experiment shows the results obtained when the EKF-based method
that combines both measurements is used.
a) Comparison of dead-reckoning and virtual sensor based pose estimation.
This subsection analyzes the disadvantages of using dead-reckoning for pose estimation. In
addition, it shows that it is possible to improve the odometry procedure if a pose virtual
sensor, based on the data acquired by a 2D laser scanner, is used.
Although the test has been carried out in the Player/Stage simulator, Gaussian white noise has been added to the velocity and curvature measurements to reproduce the effects which are usually present in real proprioceptive sensors such as encoders. Noise has been modelled by two zero-mean Gaussian distributions with fixed variances for the curvature and velocity measurements, respectively.
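As an illustration, the following sketch shows how such noise can be injected into the simulated readings; the standard deviations are placeholder values, not the variances actually used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(42)

SIGMA_CURVATURE = 0.01  # placeholder standard deviation (1/m)
SIGMA_VELOCITY = 0.02   # placeholder standard deviation (m/s)

def noisy_odometry(v_true, k_true):
    """Corrupt the simulated velocity and curvature with zero-mean
    Gaussian white noise, emulating real encoder imperfections."""
    return (v_true + rng.normal(0.0, SIGMA_VELOCITY),
            k_true + rng.normal(0.0, SIGMA_CURVATURE))

print(noisy_odometry(0.5, 0.1))
```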
The experiment has been repeated twice, see Figure 4-45. First, only dead-reckoning has been used, as Figure 4-45 (a) shows. Next, only the measurements provided by the virtual sensor have been taken into account when available, see Figure 4-45 (b). The map of the hospital has been used to carry out both tests. Specifically, the robot goes along the corridor until it finds the last room located on the right. The topological map is built while the robot navigates through the corridor. Most of the intersections have been properly detected in both experiments. The trajectories that link the topological nodes (represented by circles) associated with such intersections are erroneously estimated from a certain node onwards when only dead-reckoning is used, and the error grows as the robot approaches the end of the corridor. However, such trajectories are correctly estimated when only the measurements acquired by the virtual sensor are taken into account.

When the experiment is carried out by considering only the data provided by the virtual sensor, measurements may be unavailable due to the specific structure of the environment; in these cases, dead-reckoning must be used instead.
Figure 4-45. Topological map built while the robot navigates along the main corridor. (a)
Only dead-reckoning is used. (b) Only measurements from the virtual sensors are used if
they are available.
In particular, the estimated trajectory consists of a large set of samples, most of which have been measured by using the pose virtual sensor; the rest have been calculated by using dead-reckoning (since valid natural landmarks were not found). This means that most of the poses have been labelled with the tag "NON-BLIND". Figure 4-46 shows a comparison between the ground truth trajectory and the trajectory estimated by using dead-reckoning, see Figure 4-46 (a), and by using the pose virtual sensor, see Figure 4-46 (b).
The experiment demonstrates that the robot estimates its pose more accurately when measurements acquired by the virtual sensor are used. Figure 4-46 (c) shows the deviation of the estimated pose from the ground truth trajectory for both the dead-reckoning-based method and the method based on the measurements from the virtual sensor. This deviation is calculated as the Euclidean distance between the real position of the robot and the position where the robot believes itself to be, for each sample of the trajectory.
b) Trajectory estimation with the EKF-based method.
Although the pose virtual sensor provides much more accurate measurements than dead-reckoning for pose estimation, the information provided by proprioceptive sensors should not be completely rejected.
Figure 4-46. Results obtained for a trajectory estimate. (a) Estimation by using dead-reckoning only. (b) Estimation by using the pose virtual sensor (when available). (c) Distances between the real and estimated positions of the robot, when only dead-reckoning or only the virtual sensor (when available) is used.
In fact, if no natural landmarks are available (for example, when the robot navigates through a corridor bounded by two long walls), it is not possible to use the virtual sensor. In this case, the only way to estimate the pose is to use odometry based on the encoder measurements. The proposed EKF-based method is suitable for combining both sources of information, while keeping the uncertainty of the process updated.
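A minimal sketch of such a predict/update cycle is shown below, assuming the odometric motion model for the prediction and a full-pose measurement from the virtual sensor for the update; it is a generic EKF outline rather than the exact formulation used in the thesis.

```python
import numpy as np

def wrap(a):
    """Keep an angle within (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, k, dt, Q):
    """Prediction with the odometric model: the pose advances with
    velocity v and curvature k during the sampling period dt."""
    th = x[2]
    x = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), v * k * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Update with a full pose measurement z from the virtual sensor
    (identity observation model); skipped when the robot is BLIND."""
    y = z - x
    y[2] = wrap(y[2])
    K = P @ np.linalg.inv(P + R)
    x = x + K @ y
    x[2] = wrap(x[2])
    return x, (np.eye(3) - K) @ P

x, P = np.zeros(3), np.eye(3) * 1e-3
x, P = ekf_predict(x, P, v=0.5, k=0.1, dt=0.015, Q=np.eye(3) * 1e-5)
x, P = ekf_update(x, P, z=np.array([0.008, 0.0, 0.001]), R=np.eye(3) * 1e-4)
print(x)
```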
The following experiment shows the benefits of using the EKF-based method. The map of the hospital has been used again, see Figure 4-47 (a). The robot navigates along the main corridor from left to right. When the robot approaches the end of the corridor, it turns around and navigates from right to left. Some intersections (not previously detected) are then added. Topological nodes that represent the detected intersections are recognized when the robot navigates again along the corridor (loop closure), see Figure 4-47 (b). The trajectory consists of a set of samples (poses), most of which have been estimated by using the combined information provided by dead-reckoning and the virtual sensor (NON-BLIND), and the rest have been estimated by using dead-reckoning only. Figure 4-48 shows the number of landmarks detected for each sample of the trajectory.
The experiment shows that the EKF-based method provides accurate results for pose estimation and consequently for trajectory estimation between nodes. Furthermore, as mentioned in subsection 4.1.2, the robot can correct its pose when a node is revisited. In the example, when the robot navigates along the corridor again (from right to left), it can use the information stored in the recognized nodes to correct the odometric deviations.
Figure 4-47. (a) Trajectory followed by the robot while navigating through the simulated environment of the hospital. (b) Map of the environment built while the robot navigates along the corridor. The estimated trajectory is also represented.
Figure 4-48. Number of estimated landmarks for each sample of the trajectory; when the
number of landmarks is zero, only dead-reckoning is used.
4.4.3 Testing the topological map building method.
This subsection shows the results obtained after executing the algorithm that allows the
robot to build a topological map of an unknown scenario.
The simulated environment represented by the hospital map is used to show the
performance of this algorithm. The robot has used the exploration technique proposed in
section 4.1.1-2. In order to facilitate the loop closure task, a human user has specified the
natural command “GO TO LEFT” while the robot was exploring the unknown setting, see
Figure 4-49. When the robot finds a revisited node, it closes the loop and uses the stored
representation of the environment to correct its position if necessary. Note that the
trajectories that join two adjacent nodes have been calculated from the sequence of poses
provided by the EKF-based pose estimator. As shown in Figure 4-50, the ground truth
trajectory is represented by using a dotted black line.
The experiment shows that even when the robot navigates through the setting without
closing a loop for a long time, the pose estimate is accurate, since the natural landmarks
found in the environment are used to improve the odometry results. This ensures that the
position of the nodes is also accurate enough, even when they have not been revisited yet.
Furthermore, the robot can recognize a place just by comparing its position with the
position of the nodes in a global reference system. However, the set of polylines stored in
each node is used to ensure that a place is properly identified.
Figure 4-49. Hospital map used for the topological map building.
Figure 4-50. Topological map obtained after applying the technique proposed in section 4.1.2.
The portion of the environment and the corresponding free space that the robot has labelled using the proposed semantic representation is shown in gray colour. In order to facilitate the comprehension of the generated map, Figure 4-50 only represents the areas detected in front of the robot; however, for each node in the map, the robot stores the set of polylines that describes the scenario located both in front of and behind it. Nodes are identified by a consecutive number (index) assigned as they are created; the index that corresponds to the first node is zero. In the example shown in Figure 4-50, the robot is revisiting one of the first created nodes. As the first node is located at the start position, the uncertainty of this node is also zero.
When a node is revisited, some decisions can be taken. If the node was created at the beginning of the experiment, then its index is very low. Assume that the robot revisits a node whose index is $i$, and that the last node visited before the revisit is identified with index $j$, where $j \gg i$. In this case, the position of node $i$ is more reliable than the position of the robot, because, due to odometric errors, the uncertainty in the robot position is higher than the uncertainty of the position of node $i$.

If the difference between both positions is very high, the position of the robot should be corrected according to the position of node $i$ using the information stored in it. However, if the node being revisited (with index $i$) and the previously visited node (with index $j$) were created close together in time, the position of the robot is at least as reliable as the node position. In this case, both positions could be modified considering the uncertainty of both poses, as the sketch below illustrates.
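The following self-contained Python sketch summarizes this decision rule. The thresholds, the scalar uncertainty stand-in and the inverse-uncertainty weighting are illustrative assumptions, not the exact mechanism implemented in the thesis.

```python
from dataclasses import dataclass
import numpy as np

AGE_GAP = 10        # illustrative threshold on the index difference j - i
MAX_DEVIATION = 0.5 # illustrative threshold on the position error (meters)

@dataclass
class Node:
    index: int
    pose: np.ndarray      # (x, y, theta) in the global reference system
    uncertainty: float    # scalar stand-in for the pose covariance

def on_node_revisited(robot_pose, robot_unc, revisited: Node, previous: Node):
    """Apply the decision rule described above when node 'revisited'
    (index i) is reached again and 'previous' (index j) was the last
    node visited before the revisit."""
    i, j = revisited.index, previous.index
    if j - i > AGE_GAP:
        # Node i was created much earlier, so its position is more
        # reliable than the robot's: snap the robot to it if they differ.
        if np.linalg.norm(robot_pose[:2] - revisited.pose[:2]) > MAX_DEVIATION:
            robot_pose = revisited.pose.copy()
    else:
        # Comparable reliability: blend both poses, weighting each one
        # by the inverse of its uncertainty.
        w = revisited.uncertainty / (robot_unc + revisited.uncertainty)
        robot_pose = w * robot_pose + (1 - w) * revisited.pose
        revisited.pose = robot_pose.copy()
    return robot_pose

start = Node(0, np.zeros(3), 0.0)   # first node: zero uncertainty
pose = on_node_revisited(np.array([0.6, 0.1, 0.05]), 0.4,
                         start, Node(9, np.ones(3), 0.3))
print(pose)  # fully corrected towards the start node, whose uncertainty is zero
```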
In fact, maintaining a suitable measurement of the confidence of the full map is still a challenge and, as described at the end of this dissertation (in the "Future work" section), a new technique is being developed to properly update the position of the nodes according to several circumstances.
In short, this work demonstrates that compact topological maps built according to the proposed technique allow the robot to remember significant places accurately by using the information stored in the revisited nodes, even when no pose corrections are made. In fact, if odometry is enhanced by using the information provided by the designed virtual sensor, a place can be recognized by simply comparing the robot and node positions. The place identification is guaranteed by aligning the semantic polygonal description of the setting at time $t$ with the semantic polygonal representation stored in the revisited node.
4.4.4 Testing the topological map building method in a real
environment with the BENDER robot.
The whole navigation system has been tested using the BENDER robot. Two experiments
are shown in this subsection. Both experiments have been completely recorded in a video,
but only a few representative frames have been selected to illustrate the obtained results.
Specifically, the robot is located in a corridor of a building, along which several doors are found. The exploration activity is carried out by executing the algorithm explained in section 4.1.1. Figure 4-51 shows how the environment is represented by using the semantic polygonal description detailed in Chapter 3. The intersections are detected before reaching them, and several candidate nodes are generated. Only the candidate nodes that really represent intersections are transformed into real nodes.
On the other hand, an experiment is carried out while a person walks near the robot, see Figure 4-52. Several candidate nodes are then generated while the robot is located close to the mobile obstacle; however, once the mobile obstacle disappears, only a valid candidate node is transformed into a real node. Although the tests demonstrate that the method is relatively robust in the presence of mobile objects, the generation of candidate nodes due to such mobile obstacles is an undesirable situation that will be avoided in future work, as explained in Chapter 7.
Both tests demonstrate that BENDER provides inaccurate pose measurements when dead-reckoning is used, since the encoders are not accurate enough and the interaction with the terrain also has an influence, among other causes. Such measurements are combined with the information obtained from the virtual sensor in order to improve the results; however, the robot does not detect valid natural landmarks along the corridor and only dead-reckoning is taken into account. As observed, the heading parameter is quite affected by dead-reckoning inaccuracies. However, the intersections are properly remembered and the calculated distance between topological nodes is correct. Furthermore, if adjacent nodes are close to each other, the robot can easily correct its position when they are consecutively revisited by matching the stored semantic polygonal representation against the current set of semantically labelled polylines.
Figure 4-51. Experiment carried out in the main corridor of a building. The semantic polygonal representation of the setting is highlighted. While BENDER approaches the end of the corridor, a topological map whose nodes represent intersections (doors) is generated. Only valid candidate nodes are transformed into real nodes.
Figure 4-52. Experiment that validates the map building method in presence of mobile
obstacles. (a) BENDER detects candidate nodes for each intersection (door). (b) A person
walks near the robot. Non-valid candidate nodes are generated. (c) The person is not close
to the robot and only the valid candidate node is transformed into a real node.
4.5 Conclusion.
In this chapter, a set of techniques that allow a robot to build a hybrid (metric and
topological) map while exploring the environment has been presented. All the proposed
algorithms use the semantic polygonal description of the setting detailed in Chapter 3.
From this representation, a labelled collection of convex sub-polygons that divide the full
free space into significant free sub-areas is obtained. The list of sub-polygons is directly
used by the exploration algorithm, which selects the best route according to the output of a
fuzzy system.
The topological map is defined as a set of interconnected nodes whose links store the
trajectories that allow the robot to go from one node to another one. This trajectory is
estimated using the sequence of poses generated by an EKF-based method that combines
the pose of the robot calculated by using the information acquired from the encoders and
the pose of the robot computed by using the information provided by a virtual sensor. This
virtual sensor generates a pose change estimate by associating natural landmarks obtained
after aligning the set of semantically labelled polylines detected at consecutive instants.
These natural landmarks are segments whose ends are IAC and OAC points. In addition,
the robot pose change is modelled as a random variable whose uncertainty is properly
estimated by means of an uncertainty propagation method.
Modelling in a conventional way the inaccuracies which arise during the execution of the algorithms that generate the semantic description of the setting is not a trivial issue, and it is not a simple task to model the uncertainty by using conventional statistical distributions. Therefore, the selection of valid natural landmarks is improved by using a hierarchical fuzzy system that rejects non-valid natural landmarks according to several rules heuristically defined by the user.
Both the exploration and map building methods have been validated using a simulated
framework and using the robots Pioneer and BENDER in a real unknown indoor setting.
The experiments demonstrate the effectiveness of the designed methods.
May it be a light to you in dark places, when all other
lights go out.
J.R.R. Tolkien
Chapter 5
Computational Intelligence techniques for robot model learning and
path tracking.
This chapter describes a method for mobile robot model learning based on Computational Intelligence techniques. This approach is particularly useful for robots with highly non-linear models.
Previous chapters show that the availability of a simulation framework is very helpful to
validate the navigation algorithms before using real platforms such as Pioneer or
BENDER. In this thesis, the Player/Stage-based simulation framework has been used for
development. However, the experiments demonstrate that the performance of the
algorithms is not the same when real robots are used, because simulation frameworks are
generally not close enough to reality. On the other hand, it is sometimes necessary to
predict the behaviour of the vehicle before taking a decision. A simulation software
component based on Computational Intelligence techniques for robot modelling by using
real data could facilitate the execution of the prediction stage.
All the navigation process stages have been addressed using Computational Intelligence
methods in previous sections, except modelling and path tracking. This chapter deals with
these issues. In this case, the real robot ROMEO 4R has been used in the experiments.
Whereas both the Pioneer and BENDER robots use a differential drive system, the
ROMEO 4R vehicle has an Ackerman configuration, so stricter non-holonomic and
manoeuvrability constraints have to be considered. In this case, designing an artificial
neural network-based model of the vehicle is particularly interesting because it is necessary
to take into account not only the drive system but also the steering system.
The kinematic/dynamic behaviour of ROMEO 4R is learnt using a set of neural networks
optimally selected by a multi-objective genetic algorithm; only a limited set of features is set
up by a human expert.
In addition, a neural network-based path tracking method is also proposed. Both modelling
and path tracking techniques are compared to classic algorithms.
5.1 Introduction
Sometimes, it is necessary to predict the behaviour of the vehicle before taking a decision.
A simulation software component based on Computational Intelligence techniques to learn
robot models from real data could facilitate the execution of the prediction stage. In
particular, Neural Networks have become very commonly used for learning the behaviour
of highly non-linear systems [135]. Specifically, Radial Basis Function (RBF) nets are a good choice because their training is fast and they can easily be turned into a fuzzy system. In general, human experts are usually needed to select the training data sets and to configure the training parameters. Finding an accurate model based on neural systems may require a highly time-consuming effort. This is a major drawback of the approach that often leads to choosing classic methods to obtain models, even though they may be less accurate. Therefore, it is very important to find a method to reduce the time spent by experts in model design. Using genetic algorithms (GA) can be an interesting option, because the execution of a GA that obtains an optimal neural model does not require very high expertise. In particular, the Multi-Objective Genetic Algorithm (MOGA) has proved to be a very effective approach [19], because several simultaneous objectives can be easily defined.
When the behaviour of a system must be learnt by a set of RBF neural networks [136], it is
very important to provide a set of objectives and goals related to the accuracy and
complexity of the neural-based model. Then, it is necessary to find an appropriate set of
inputs, and determine how many lags must be taken into account to learn such behaviour.
Instead of requiring an expert for this task, the MOGA could also be used to define those
key design points, as well as the suitable number of hidden neurons and training
parameters.
This chapter shows how an optimal RBF Neural Network (RBFNN)-based model of the kinematic and dynamic behaviour of a robot is obtained using the MOGA designed at the Centre of Intelligent Systems (CIS) of the University of Algarve. The method has been tested using the autonomous car-like robot ROMEO 4R.
On the other hand, path tracking is the last step of the chain in autonomous navigation
systems, although not the least important. A high accuracy is required in this stage to
ensure that the robot is suitably located at a position of the environment according to the
computations made by the rest of the components of the navigation system. The path
tracking activity involves the generation of curvature references according to the kinematic
relations defined by the lateral error from the robot to the goal point, the error in heading
and the robot curvature [137]. Many different methods have been implemented including
linear and nonlinear control laws. The Pure Pursuit method [126], [2], General Predictive
Control applied to vehicle guidance [138], the use of fuzzy logic for the selection of goal
points [17] and hierarchical fuzzy systems applied to visual path tracking [139] are some
examples. Most path tracking methods, mainly those involving predictive approaches [138],
need a mathematical model of the system to calculate the curvature and velocity references.
The success of the method depends on the accuracy of the chosen model, but obtaining a
mathematical model with enough accuracy is usually difficult when the system is highly
non-linear. In addition, many path-tracking techniques are based on the selection of a
single goal point; however, a human driver actually considers the whole piece of path
located ahead of the vehicle, within some distance limits. Therefore, several goal points
could be used to describe a piece of the path in order to compute the steering command
needed to follow the given trajectory.
This chapter also describes how the path tracking problem is solved by implementing a controller based on a Multi-Layer Perceptron (MLP) neural network. The proposed algorithm is tested using the robot ROMEO 4R [140]. A comparative study between the
results provided by the proposed method and the results obtained after executing the
classic pure pursuit algorithm is also presented.
5.2 Car-like service robots.
Small and medium-sized wheeled service robots, such as robotic vacuum cleaners (Roomba), assistant robots (PeopleBot) or remote presence systems (Texai), are usually fitted with differential drive systems, see Figure 5-1 (a). Although they also have non-holonomic constraints, their manoeuvrability limitations are less restrictive than in car-like vehicles; they can even turn around over a single point if needed, so changing direction is simple. The dimensions and weight of this kind of robot are also small, and they generally navigate on reasonably flat and smooth floors, so their dynamic behaviour is very simple
and, in general, dynamic modelling may not be necessary. On the other hand, car-like
robots are particularly appropriate to carry out many service applications such as
transportation of people in indoor or outdoor settings, like shopping centres, hospitals, or
museums. They can be used for the support of elderly or handicapped persons and can
easily be driven in manual mode if required, see Figure 5-1 (b), because most people are
familiar with car driving. An accurate modelling is important for simulation and
implementation of planning and path tracking algorithms in car-like vehicles. This chapter
addresses the modelling and path tracking problems with a neural net-based approach, and
shows its applicability for a car-like vehicle provided with the necessary set of
proprioceptive and exteroceptive sensors.
Figure 5-1. Service robots. (a) Robots with a differential drive system. (b) Autonomous wheelchair for transportation of people.
5.2.1 Modelling a car-like vehicle using a set of Radial Basis
Function neural networks.
The kinematics of a car-like autonomous robot with Ackermann steering is usually described with the so-called bicycle model:

$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = v\kappa$    (5-1)

where $(\dot{x}, \dot{y}, \dot{\theta})$ is the derivative of the pose of the vehicle, $v$ is the velocity and $\kappa$ is the curvature of the trajectory, see Figure 5-2.

If the model (5-1) holds for the mobile robot, the angle of the steering wheel controls the size of the circle followed by the vehicle if a constant velocity is held; the radius of this circle is $r$, and $\kappa = 1/r$. The state variables are the pose $(x, y, \theta)$, the velocity $v$ and the curvature $\kappa$. The desired velocity reference $v_d$ and the steering command $\kappa_d$ are the system inputs; low-level controllers are responsible for tracking such references. A purely kinematic model is usually not sufficiently accurate, and the dynamics of the vehicle has to be modelled to some extent.

Figure 5-2. Bicycle-based kinematic model of a mobile car-like robot.
The simplest and most frequently used improvement of (5-1) to take vehicle dynamics into account is to include a linear, first-order model of the steering wheel subsystem with a time constant $\tau$ and a pure delay $T_d$:

$\tau\,\dot{\kappa}(t) + \kappa(t) = \kappa_d(t - T_d)$    (5-2)
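A minimal simulation of (5-1) together with the steering dynamics (5-2) can be written as follows; the time constant and pure delay are placeholder values, since the identified parameters of ROMEO 4R are not reproduced here.

```python
import numpy as np

DT = 0.015    # integration step (s)
TAU = 0.5     # placeholder steering time constant (s)
DELAY = 0.12  # placeholder pure delay (s)

def simulate(kappa_ref, v, steps):
    """Integrate the bicycle model (5-1) with the first-order steering
    subsystem (5-2); kappa_ref is the reference as a function of time."""
    n_delay = int(round(DELAY / DT))
    buffer = [0.0] * (n_delay + 1)   # FIFO queue implementing the pure delay
    x = y = th = kappa = 0.0
    poses = []
    for i in range(steps):
        buffer.append(kappa_ref(i * DT))
        kd = buffer.pop(0)                 # delayed steering reference
        kappa += DT * (kd - kappa) / TAU   # first-order steering dynamics (5-2)
        x += DT * v * np.cos(th)           # bicycle kinematics (5-1)
        y += DT * v * np.sin(th)
        th += DT * v * kappa
        poses.append((x, y, th))
    return poses

print(simulate(lambda t: 0.15 if t < 2.0 else 0.25, v=1.0, steps=400)[-1])
```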
In this thesis, the car-like robot ROMEO 4R (see Figure 5-3), designed at the University of
Seville [140], has been used in the experiments whose results are detailed in the last sections
of this chapter.
The steering and the velocity of the robot can be controlled by a human driver (manual
mode), or by a controller (automatic mode). In indoor environments, a gyroscope, which
measures heading changes, is commonly used together with encoders that offer useful
information about the velocity of the wheels; GPS data are also available for outdoor
experiments. By integrating the odometric equation (5-1), a new pose can be computed by
taking into account the real curvature and the current velocity and heading.
ROMEO 4R makes use of a PC-compatible computer as embedded controller. The PC is
equipped with data acquisition cards to read the information from sensors, which is
processed and properly recorded.
The ROMEO 4R basic control software has been designed as a layered system where the
lower modules offer services to the higher ones. The lowest layer provides functions to
access sensing information and send reference commands to the hardware controllers of
the vehicle. Therefore, all the algorithms implemented to carry out high-level control tasks
use the services supplied by the mentioned module. Finally, communication problems are
solved using YARP.
Figure 5-3. Front and side view of the ROMEO 4R.
A log file can be generated both when the vehicle is autonomously controlled and when it is under manual control. This file holds all the necessary information to analyse the experiment, and it is particularly useful for obtaining training data sets.

Designing a good model from real data is usually hard; although mathematical models can be inaccurate, they are often used for this purpose because they are easily implemented. However, Computational Intelligence techniques have proved to be very useful and even more accurate when the behaviour of highly non-linear systems must be learnt. In this Thesis, a hierarchical neural network-based system has been used to describe the dynamic-kinematic model of a car-like robot instead of using a mathematical approach. Comparative results show that this choice is more appropriate than a conventional mathematical solution.
A neural network-based kinematic/dynamic model of the robot.
The dynamic and kinematic behaviour of the robot ROMEO 4R has been modelled using a set of interconnected black boxes that contain Radial Basis Function (RBF) neural networks, see Figure 5-4. Each neural network (black box) approximates a non-linear function that defines the behaviour of one component of the kinematic/dynamic model of the vehicle (Steering, Velocity and Heading), using recorded real data as training data set.
Figure 5-4. Model of the vehicle using black boxes.
As Figure 5-5 shows, a RBF neural network typically has three layers: an input layer, a hidden layer (where each neuron is activated by using a radial basis activation function), and a linear output layer. Therefore, it can be considered as a linear combination of radial basis functions:

$y_j = b_j + \sum_{i=1}^{h} w_{ij}\, z_i$    (5-3)

where $y_j$ is the value generated by the $j$-th output neuron, $w_{ij}$ is the weight of the connection between the hidden neuron $i$ and the output neuron $j$, $b_j$ is a threshold or bias, and $z_i$ is the output generated by the $i$-th hidden neuron. In addition, any output from a hidden neuron is obtained using the following equation:

$z_i = \phi\!\left(\sqrt{\textstyle\sum_{k=1}^{n} (x_k - c_{ik})^2 / \sigma_i^2}\right)$    (5-4)

where $x_k$ is the $k$-th component of the input vector, $c_{ik}$ is a component of the vector centre of the radial basis function and $\sigma_i$ is the width of this function. In general, the function $\phi$ is Gaussian:

$\phi(r) = e^{-r^2}$    (5-5)

On the other hand, training a RBF neural network typically consists of two phases; first, the widths and centres of the Gaussian activation functions of the hidden layer are obtained, and then the weights of the output layer are determined.
Figure 5-5. RBF neural network.
This can be justified by considering the different nature of the non-linear hidden neurons
versus the linear output neurons.
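The following self-contained sketch illustrates such a network and its two-phase training; here the centres are simply sampled from the training points and a common width is used, which are simplifying assumptions, while the output weights are solved by linear least squares.

```python
import numpy as np

def rbf_forward(X, centres, sigmas, weights, bias):
    """RBF output of equations (5-3)-(5-5): Gaussian hidden units
    followed by a linear output layer."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Z = np.exp(-(d / sigmas) ** 2)   # hidden-layer outputs, phi(r) = exp(-r^2)
    return Z @ weights + bias        # linear combination plus bias (5-3)

def train_rbf(X, y, n_hidden, rng=np.random.default_rng(0)):
    """Two-phase training: (1) fix centres and widths, (2) solve the
    linear output weights by least squares."""
    centres = X[rng.choice(len(X), n_hidden, replace=False)]
    sigmas = np.full(n_hidden, X.std() + 1e-6)
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Z = np.hstack([np.exp(-(d / sigmas) ** 2), np.ones((len(X), 1))])
    sol, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return centres, sigmas, sol[:-1], sol[-1]

# Toy usage: learn a one-dimensional non-linear map.
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0])
c, s, w, b = train_rbf(X, y, n_hidden=10)
print(np.sqrt(np.mean((rbf_forward(X, c, s, w, b) - y) ** 2)))
```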
In order to train the whole system proposed in Figure 5-4, the components Steering, Velocity
and Heading are considered independently. Each subsystem receives a set of inputs and
generates a single output according to the non-linear behaviour of the system, which can be
approximated by a RBF neural network. Each RBF neural network is separately trained
using real data obtained from the information stored in several log files recorded while a
human driver drives the vehicle. The records related to the pose and the velocity of the
vehicle, such as the heading , the steering and the velocity
and the desired steering
and velocity commands
are taken into account, whereas the rest of records stored
in the log file are ignored. Since sensors are sampled with a configurable period, a time
mark is also available.
In order to acquire enough information about the dynamic behaviour of the robot, the
black boxes are built as feedback systems that use past outputs as inputs. Steering and
Velocity modules also depend on their past outputs because of the dynamic behaviour of
the system. Due to the fact that every box is implemented as a closed-loop system, the
neural networks must be trained with a training data set that covers a wide range of
heterogeneous references, including the higher and lower limits of each component, in
order to obtain a stable model. Furthermore, the design is based on MISO (Multiple Input
Single Output) structures, since training and validation processes are easier to carry out and
better results are usually obtained in both training time and accuracy.
5.2.2 Selection of the best collection of neural networks that
solves a problem using a MOGA.
Neural networks have proved to be very efficient tools for learning the behaviour of highly
non-linear systems [135]. However, reaching accurate results depends on many issues
related to training and testing data sets, training parameters and the expertise of those
involved in the training process. Even experts can find serious problems when they train a
neural network that must be executed within a feedback system, because classic open-loop
testing data sets are not enough to prove the suitable performance of the system; instead,
closed-loop tests should be used to verify the accuracy of the model.
Given that the selection of an optimal neural network can be considered a relatively classic
optimization problem, many researchers use Genetic Algorithms for this purpose,
particularly those allowing the specification of several objectives. These genetic algorithms
are called Multi-Objective Genetic Algorithms (MOGA) [141].
1) Definition of MOGA.
When an optimization problem has to be solved by considering several goals, the objectives under consideration can easily conflict with each other. The optimization of a particular solution with respect to a single goal could lead to unacceptable results with respect to the other objectives. Thus, an appropriate solution to a multi-objective problem is to explore a set of solutions, each of which satisfies the targets at an acceptable level without being dominated by any other solution. In fact, the ultimate aim of a multi-objective optimization algorithm is the identification of solutions in the Pareto optimal set, which can be defined as the collection of all feasible solutions in the solution space that are non-dominated with respect to each other.
In real-life problems, Pareto optimal sets are actually preferred to single solutions because
the final result is usually a trade-off. For instance, accuracy in results and minimum number
of hidden neurons could be two conflicting goals that should be satisfied. Probably,
increasing the number of hidden neurons enhances the accuracy. But if there are
computational constraints related to storage space, it is necessary to give up accuracy in
favour of a decrease of the neural network dimensions. The set of all the Pareto optimal
solutions in the objective space is called the Pareto front (PF).
For many problems, the number of Pareto optimal solutions is huge (even infinite); so, due to its size, identifying the entire Pareto optimal set is nearly impossible. A more practical approach to multi-objective optimization is to analyze a set of solutions (the best-known Pareto set) that represents the Pareto optimal set or PF as well as possible. In short, a multi-objective optimization algorithm should meet some requirements: the best-known Pareto front should be as close as possible to the true Pareto front, solutions in the best-known Pareto set should be uniformly distributed and diverse over the Pareto front, and the best-known Pareto front should capture the whole spectrum of the Pareto front.
On the other hand, Genetic Algorithms (GA), first introduced by Holland and his
colleagues in the 1960s and 1970s [142], are inspired by the evolutionist theory that
explains the origin of species. In nature, weak and unfit species within their environment
are faced with extinction by natural selection. The strong ones have greater opportunities
to pass their genes to future generations by means of reproduction and to become
dominant in their population. Sometimes, random mutations occur in genes. These
alterations could provide additional advantages or drawbacks. Unsuccessful mutations are
eliminated by natural selection. In short, GA belongs to the larger class of evolutionary
algorithms which generate solutions to optimization problems using techniques inspired by
natural evolution such as inheritance, mutation, selection and crossover. In GA, a
population is a collection of chromosomes or individuals defined by genes that control
several features of the chromosome. For a specific optimization problem, a chromosome
generally encodes a unique solution within the solution space using an array of bits, or
even an array of other types and structures. A typical GA requires a genetic representation
of the solution domain and a fitness function to evaluate individual solutions. In general,
the fitness function is defined over the genetic representation and measures the quality of
the represented solution. Once the genetic representation and the fitness function are
defined, a GA proceeds to initialize (usually randomly) a population of solutions
(generation), and then to improve it through repetitive application of the mutation,
crossover, inversion and selection operators. This process is repeated until a termination
condition is reached, such as finding a solution that satisfies some criteria, or reaching a
maximum number of generations, among others. Figure 5-6 shows an outline of the
structure of a basic GA.
A generic, single-objective GA can easily be modified to find a set of multiple non-dominated solutions in a single run. The ability of GA to concurrently search different regions of the solution space S makes it possible to find a varied set of solutions for problems with non-convex, discontinuous, and multi-modal solution spaces.
Figure 5-6. Basic structure of a classic genetic algorithm.
In fact, the crossover operator of GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions in unexplored parts of the Pareto front. Therefore, genetic algorithms are a very popular heuristic approach to multi-objective design and optimization problems. The first multi-objective GA, called vector evaluated GA (or VEGA), was proposed by Schaffer [143]. Later, several multi-objective GAs (with different fitness assignment procedures, elitism or diversification approaches, among others) have been developed.
2) Implementation of the MOGA.
The MOGA used in this work, designed and implemented at the Centre of Intelligent Systems (CIS) of the University of Algarve [144], [145], is based on the algorithm developed by Fonseca and Fleming in [146], which was the first multi-objective genetic algorithm that explicitly combined Pareto-based niching (collaboration between populations [147]) and ranking techniques to encourage the search toward the true Pareto front while maintaining diversity in the population.

In short, the MOGA implemented at the CIS is mainly designed to obtain the collection of the best RBF neural networks that solve a problem related to system modelling, where the inputs consist of a set of lags of the variables that define the system. Every configuration of a neural network is represented by a chromosome, and the MOGA selects the best inputs, the best number of lags and the best number of neurons according to several objectives defined by the user.
Figure 5-7 shows an example of a feasible training record. For each input, five lags are considered. The chromosome specifies which lagged inputs are selected to compose the training record. In the example, the first item of the record is the first lag of the first input, the second item is the second lag of the second input, the third item is the fourth lag of the second input, and so on. The first item of the chromosome determines the number of items or genes (16 in the example). The second item defines the number of hidden neurons that are used. The rest of the items are the indexes of the selected lags. The MOGA will decide which lags are better and how many of them are necessary according to the objectives set up at initialization, as well as how many hidden neurons and inputs are needed. The designer should provide the boundaries of the parameters included within the chromosome (RBF neural network), such as the minimum and maximum number of hidden neurons, the maximum number of lags and the maximum number of inputs, among others. In addition, the parameters related to the MOGA, such as the maximum number of generations, the maximum number of individuals of the population and the desired objectives, should also be defined.
Figure 5-7. Chromosome obtained from a record of a training data set.
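The following sketch shows how a chromosome with this layout could be decoded into a training record; the flat layout of the candidate lagged inputs and the field names are assumptions made for illustration.

```python
import numpy as np

def decode_chromosome(chrom, lagged_inputs):
    """Decode a chromosome following the layout described for Figure 5-7:
    chrom[0] = number of genes, chrom[1] = number of hidden neurons,
    chrom[2:] = indexes of the selected lagged inputs. 'lagged_inputs'
    is a flat array holding all candidate lags of all inputs."""
    n_genes = chrom[0]
    n_hidden = chrom[1]
    selected = chrom[2:2 + n_genes]
    return n_hidden, lagged_inputs[selected]

# Toy example: 3 inputs x 5 lags = 15 candidate lagged values.
lagged = np.arange(15, dtype=float)
chrom = np.array([4, 8, 0, 6, 8, 12])   # 4 genes, 8 hidden neurons
n_hidden, record = decode_chromosome(chrom, lagged)
print(n_hidden, record)
```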
Specifically, the optimization objectives are: minimum norm of linear parameters (weights),
minimum root mean square error (RMSE) in training and testing stages and minimum RBF
complexity. The RBF complexity is measured by using the following equation:
(5-6)

where $h$ is the number of hidden neurons, $n$ is the number of inputs, $s$ is the number of spreads or widths of the Gaussian radial functions and $w$ is the number of linear weights.
Furthermore, different priorities are assigned to the objectives. A higher priority indicates
preference in relation to other objectives with lower priority.
In general, the execution of a multi-objective genetic algorithm is costly in terms of computational time. For this reason, an implementation based on a master-slave distributed architecture [20], [21] has been selected for the MOGA.
The training algorithm runs as follows. First, the master stores information about
chromosomes into a shared database. Concurrently, each worker takes out one
chromosome from the database and trains a RBF neural network with the early stopping
method, which is sent back to the database. MOGA operations (selection, recombination
and mutation) are implemented in the master to obtain a new generation. This process is repeated as many times as specified by the designer. Next, a validation test is carried out using an open-loop validation data set obtained from the vehicle, but not included in the previously used training data set. Several neural networks accomplishing the objectives and restrictions are obtained at the end of this stage. This collection of RBF neural network-based models consists of non-dominated (Pareto-optimal) individuals, taking into consideration the
predefined goals and priorities. Finally, closed-loop tests are also carried out in order to
prove the suitable performance of the obtained model. Figure 5-8 outlines the master-slave
distributed training algorithm.
3) Optimizing the kinematic/dynamic model of a car-like robot using the MOGA.
As mentioned in section 5.2.1, the car-like vehicle ROMEO 4R is modelled using several Radial Basis Function neural networks [136] according to the diagram shown in Figure 5-4. The best collection of neural networks that meets the architecture and performance requirements detailed in section 5.2.2-2 is selected using the MOGA.
The following subsections explain how suitable sets of real training data are obtained from
the mobile robot and how tests of the models are carried out. First, some neural networks
are trained and tested with the MOGA to find the best steering model, next a velocity
model is also obtained and, finally, using the outputs from the other models as inputs, the
behaviour of the heading is learnt.
a) Steering system model.
Guiding a car-like vehicle involves sending the steering command $\kappa_d$ to a controller on each control cycle. In order to reach the reference curvature, the steering wheel angle is set to an appropriate value by the PID-based low-level controller.
Figure 5-8. Distributed software architecture that implements the MOGA.
Due to system dynamics, the real curvature $\kappa$ is not immediately equal to the desired one, $\kappa_d$. The desired steering command is only reached after a certain lag, so for steering system modelling a set of past values of $\kappa_d$ and $\kappa$ should be considered as inputs. In addition, the output obtained at time $t$ also depends on the set of past values of curvature references and the set of past values of real, measured curvatures, see the Steering box in Figure 5-4.

The training pattern generation process should be carried out by taking into account that a set of sufficiently representative training patterns should be selected, and over-fitting effects should be avoided. Consequently, in order to obtain a significant training data set, a human driver carried out a wide set of manoeuvres, covering the ROMEO 4R full range of movements, while sensor data were sampled and recorded at a fixed period of a few milliseconds. The steering command is always within a bounded interval for ROMEO 4R, since the vehicle has a minimum curvature radius.

From this information, a set of training patterns was obtained, including delayed data covering approximately one second and a half. This involved the generation of a large number of lagged real steering values and lagged desired steering values.
Each pattern defines the set of inputs and the output that the RBF neural network should learn during the learning process. As Figure 5-9 shows, the output $\kappa_t$ is obtained using a number of past inputs ($\kappa_{d,t-1}$, $\kappa_{d,t-2}$, ..., $\kappa_{d,t-N}$) and a number of past outputs ($\kappa_{t-1}$, $\kappa_{t-2}$, ..., $\kappa_{t-M}$). Table 5-2 shows a sequence of patterns extracted from the training data set obtained after driving the vehicle with the set of desired steering commands shown in Table 5-1, where $N$ and $M$ are 15.

Figure 5-9. Neural-based steering model.
t     0     1     2     3     4     5     6     7     …
κ_d   0.15  0.15  0.15  0.15  0.25  0.25  0.27  0.30  …

Table 5-1. Set of desired steering commands.
t    κ_t    κ_d(t-1)  κ_d(t-2)  κ_d(t-3)  …  κ_d(t-15)   κ(t-1)  κ(t-2)  κ(t-3)  …  κ(t-15)
0    0      0         0         0         …  0           0       0       0       …  0
1    0      0.15      0         0         …  0           0       0       0       …  0
2    0.05   0.15      0.15      0         …  0           0       0       0       …  0
3    0.07   0.15      0.15      0.15      …  0           0.05    0       0       …  0
4    0.10   0.25      0.15      0.15      …  0           0.07    0.05    0       …  0
5    0.12   0.25      0.25      0.15      …  0           0.10    0.07    0.05    …  0
6    0.15   0.27      0.25      0.25      …  0           0.12    0.10    0.07    …  0
7    0.16   0.30      0.27      0.25      …  0           0.15    0.12    0.10    …  0
…    …      …         …         …         …  …           …       …       …       …  …

Table 5-2. Sequence of patterns from a training data set with N = M = 15.
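The construction of such lagged patterns can be sketched as follows; the function reproduces the layout of Table 5-2 (desired-steering lags followed by real-steering lags, zero-padded at the start) using the toy sequences of Tables 5-1 and 5-2.

```python
import numpy as np

def build_patterns(kd, k, n_lags=15):
    """Build NARX-style training patterns: the target is k[t] and the
    inputs are the n_lags past values of the desired (kd) and real (k)
    steering, padded with zeros before the start of the sequence."""
    X, y = [], []
    for t in range(len(k)):
        row = [kd[t - lag] if t - lag >= 0 else 0.0
               for lag in range(1, n_lags + 1)]
        row += [k[t - lag] if t - lag >= 0 else 0.0
                for lag in range(1, n_lags + 1)]
        X.append(row)
        y.append(k[t])
    return np.array(X), np.array(y)

kd = np.array([0.15, 0.15, 0.15, 0.15, 0.25, 0.25, 0.27, 0.30])
k  = np.array([0.0, 0.0, 0.05, 0.07, 0.10, 0.12, 0.15, 0.16])
X, y = build_patterns(kd, k)
print(X.shape, y[:4])   # each row reproduces one line of Table 5-2
```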
b) Testing the steering system model.
As Figure 5-10 (a) illustrates, two paths were recorded to obtain both the training and the validation data sets for the steering system model shown in Figure 5-9, while a human driver was driving the ROMEO 4R vehicle. Figure 5-10 (b) shows a comparison between the desired steering command and the real measured curvature obtained when the paths were followed. The information, stored in two log files, was used to extract a training data set by including 15 lags for the desired steering inputs and 15 lags for the real steering inputs.

Figure 5-10. Log files for the steering training data set.
A set of goals or constraints was defined as training objectives for the MOGA: maximum training RMSE, maximum testing RMSE, maximum norm of weights and maximum RBF complexity (see equation (5-6)).
After the training and open-loop validation procedures using the test and validation data sets, the MOGA selected a collection of non-dominated Pareto optimal RBF neural network-based models that met the given objectives. However, open-loop trials are not enough to assess the performance of the system. Therefore, an algorithm was implemented to test the closed-loop performance of each RBF neural network-based model. A set of desired steering commands with appropriate delays was used directly as the corresponding inputs, while the outputs of the neural network generated the rest of the inputs by closing the loop, see Figure 5-11. Once the RBF neural networks belonging to the Pareto optimal collection had been tested using this algorithm, the RBF model whose parameters are shown in Table 5-3 was selected. These parameters are the selected lags, the number of hidden neurons, the number of inputs, the set of centres of the Gaussian functions calculated by the MOGA algorithm, the set of spreads of these functions and the set of weights of the linear sub-network. Each connection between an input and a hidden neuron is associated with a Gaussian function whose centre and spread were fitted during the training process. The hidden neurons, connected to a single output neuron, are defined by a set of weights, also fitted during the MOGA procedure.

Summarizing, the selected RBF has a reduced number of input and hidden neurons and a single output neuron. Due to the low complexity of the network, its computational cost is low during the working stage.
Figure 5-11. Closed-loop tester.
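The closed-loop tester can be sketched as follows; here 'model' stands for any trained network evaluated on a pattern row, and the linear stand-in at the end is only there to make the example runnable.

```python
import numpy as np

def closed_loop_test(model, kd_sequence, n_lags=15):
    """Closed-loop tester of Figure 5-11: the desired steering commands
    are fed with their delays, while the past outputs that the network
    needs are taken from its own previous predictions."""
    kd_hist = [0.0] * n_lags
    k_hist = [0.0] * n_lags       # starts from rest; filled with predictions
    outputs = []
    for kd in kd_sequence:
        x = np.array(kd_hist + k_hist)
        k_pred = float(model(x))
        outputs.append(k_pred)
        kd_hist = [kd] + kd_hist[:-1]     # shift in the newest command
        k_hist = [k_pred] + k_hist[:-1]   # feed back the prediction
    return np.array(outputs)

# Toy stand-in for the trained RBF model: a fixed linear map.
model = lambda x: 0.9 * x[15] + 0.1 * x[0]
print(closed_loop_test(model, [0.15] * 20)[:5])
```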
Lag position    1   2   3   4   5   6   7   …   20
Selected lag    24  16  13  12  10  4   0   …   0

Hidden neurons: 5

Centres, spreads and weights of the Gaussian units:
-0.2594, 0.2287, 0.4075, 0.6529, 0.1992, -0.4142, 0.2633, 0.8306, 0.5277,
-0.2189, -0.922, 0.2521, 0.6375, 0.9569, -0.2963, 1.2197, 0.619, 0.7256,
-1.1535, -0.0556, 0.2882, -0.5469, 0.109, 0.4829, 0.1244, -0.8161, -1.0272,
-0.1214, 0.604, -1.5501, 0.9862, 1.1042, 0.9852, -1.1928, 0.8661, -0.4182,
-0.0969, 0.3488, 0.2963, -0.4673

Bias = 0.3912

Table 5-3. Neural network parameters.
In order to assess the performance of the RBF neural network-based steering system model, the algorithm shown in Figure 5-11 is used to obtain the curvature output predicted from the input steering commands recorded during the experiments. In Figure 5-12 and Figure 5-13, the real curvatures obtained while the human driver was driving are compared to those obtained by executing the closed-loop tester.
Figure 5-12. Comparison between the experimentally measured curvature (solid red line) and that generated by the RBF neural network-based model (dashed blue line) (Part 1). RMSE per trial: (1) 5.6174e-4; (2) 5.762e-4; (3) 12.0e-4; (4) 85.0e-4; (5) 30.0e-4; (6) 6.3144e-4.
Figure 5-13. Comparison between the experimentally measured curvature (solid red line) and that generated by the RBF neural network-based model (dashed blue line) (Part 2). RMSE per trial: (7) 20.0e-4; (8) 7.7264e-4; (9) 25.0e-4; (10) 6.3690e-4; (11) 20.0e-4.
The distribution of the Root Mean Squared Error (RMSE) is shown in Figure 5-14, together with the main statistical parameters; the mean RMSE over the eleven trials is around 20e-4, with a dispersion dominated by the fourth trial.

Figure 5-14. Root Mean Squared Error (RMSE) obtained after executing the collection of experiments.
As Figure 5-12 and Figure 5-13 show, the behaviour of the RBF neural network-based model is accurate enough in most of the trials. The highest error is found in the fourth trial. As Figure 5-12 (4) shows, the curvature reference (or steering command) changes very fast, and the real system is not capable of reacting so rapidly. For a good controller, a low control effort is a desired feature, so a well-designed controller would probably not change the steering command so fast. However, this possibility could be taken into account with an additional training effort that included training patterns with faster changes of the steering command.

Although the stability of the selected neural model is not strictly guaranteed, it has proved to be stable and accurate in a wide range of tests. On the other hand, the RBF neural network-based model needs only simple arithmetic calculations, so its computational cost is very low.
c) Drive system model.
Both the position and the heading of the vehicle depend on its velocity. Although many control approaches assume a constant velocity to simplify the problem, the variation in velocity should be considered in order to obtain a high-quality model of the robot. As Figure 5-4 shows, the Heading black box uses both the real steering command and the real velocity as inputs. Consequently, a drive system model is needed that calculates the real velocity by taking the desired velocity command into account. In this case, the training data set has also been recorded from real experiments under the control of a human driver; different velocity values have been used in order to obtain information about the dynamic behaviour of the drive system. As Figure 5-15 shows, several lagged past outputs and several lagged desired velocity commands have been extracted from the log files and included in the data set.
Figure 5-15. Neural-based drive system model.
d) Testing the drive system model.
In order to obtain a RBF neural network-based model, a set of acceleration patterns is needed. These patterns are obtained from the desired and real velocity values and their respective lags.

In general, the slope of the terrain has a significant influence on the drive system behaviour; therefore, acquiring information about the terrain features would be necessary for a more exact model. As this disturbance cannot currently be measured, its effects can be noticed in some experiments where the model achieves a lower than expected performance. Anyhow, despite these inaccuracies, a drive system model can be successfully obtained from experimental data.

Figure 5-16 (a) shows some examples of the records used during the training process, obtained in several acceleration trials. In Figure 5-16 (b), the slope of the road influences the acceleration pattern. For this reason, the vehicle keeps increasing its velocity after the desired value is reached, and an underdamped response is obtained. This behaviour depends on the slope of the terrain and the low-level controller dynamics.
The data set used during the training process consisted of a collection of patterns with current and lagged values of the desired and measured velocities. As in the steering system model, the MOGA optimization process provided the most appropriate RBF neural network-based model that met the design objectives.

The objectives for the MOGA are the same as those used with the steering model: maximum RMSE in training, maximum RMSE in testing, maximum norm of weights and maximum RBF complexity. After the training and open-loop validation procedures using the test and validation data sets, the MOGA selected a collection of non-dominated Pareto optimal RBF neural network-based models that met the design objectives.
Figure 5-16. Learning the behaviour of the drive system. (a) Patterns for learning; (b) patterns for testing; (c) patterns for validating the model.
The RBF neural network-based model of the drive system was also tested in closed loop,
like the steering system model. A set of desired velocity commands with appropriate delays
was used directly as the corresponding inputs, while the output of the neural network
generated the rest of inputs by closing the loop, see Figure 5-17. Once the RBF neural
networks belonging to the Pareto optimal collection were tested using this algorithm, the
model whose parameters are detailed in Table 5-4 and whose centres are shown in Figure
5-18 was selected.
Figure 5-17. Closed-loop tester.
Selected lags: 29, 28, 25, 24, 23, 22, 21, 20, 19, 17, 16, 15, 14, 13, 9, 8, 7, 3, 2
Spreads: 5.6122, 3.8664, 3.5331, 7.1537, 0.457
Weights: 8.8371, -2.1676, 2.1871, -6.034, 246e6
Bias = 1.2721

Table 5-4. Best neural network parameters.

Figure 5-18. Set of centres.
The records in the structure detailed in Table 5-4 provide information about the selected RBF neural network-based model of the drive system. As with the steering system model, the described parameters are the selected lags, the number of hidden neurons, the number of inputs, the set of centres of the Gaussian functions calculated by the MOGA algorithm, the set of spreads of these functions and the set of weights of the linear sub-network. The selected RBF net consists of one input neuron per selected lag, one hidden neuron per listed spread and a single output neuron.
In order to demonstrate that the RBF neural network-based model was able to simulate the
behaviour of the drive system, the input velocity commands recorded during the manual
mode experiments were applied to the neural net, as Figure 5-17 shows. In Figure 5-19 and
Figure 5-20 the real velocities measured during the experiments are compared to those
obtained by executing the closed-loop tester.
(1) RMSE = 80.0e-4
(2) RMSE = 202.0e-4
(3) RMSE = 124.0e-4
(4) RMSE = 97.0e-4
(5) RMSE = 257.0e-4
(6) RMSE = 203.0e-4
Figure 5-19. Comparison between the experimentally measured velocity (dashed blue line)
and the velocity generated by the RBF neural network-based model (solid red line) (Part 1).
(7) RMSE = 141.0e-4
(8) RMSE = 183.0e-4
(9) RMSE = 187.0e-4
(Legend: V real, V estimated, V desired.)
Figure 5-20. Comparison between the experimentally measured velocity (dashed blue line)
and the velocity generated by the RBF neural network-based model (solid red line) (Part 2).
The distribution of the Root Mean Squared Error (RMSE) is shown in Figure 5-21,
together with the main statistical parameters. The mean value is around 0.0164, with a
standard deviation of 0.0057.
In general, good results are obtained, but several issues deserve attention.
Note that in Figure 5-19 and in Figure 5-20 trials 1 to 9 show the real behaviour of the
drive system response under a step change in the desired velocity, as well as the estimated
output velocity. For trials 6 to 9, the model is unable to predict the underdamped response
of the system. This behaviour is quite different from that obtained in trials 1 to 5, and is
caused by the slope of the road, as mentioned before. In order to learn this behaviour
properly, other sensing information would be needed, such as the tilt angle of the vehicle
measured by an inclinometer.
Statistical values: Mean = 0.0164, σ = 0.0057, Median = 0.0183.
Figure 5-21. Results obtained after testing the closed-loop model.
e) Heading system model.
Since the results from the steering and drive models are available, the heading system
dynamic model can be designed as a black box whose inputs are the desired curvature, the
desired velocity and several past heading outputs, see Figure 5-22. A number of lagged real
velocity, lagged curvature and lagged heading values are needed as inputs. The output is the
heading of the vehicle at the current time. Both velocity and steering angles are the
simulated values obtained respectively by the models Steering and Velocity from the desired
curvature and velocity commands. As in the preceding sub-models, data recorded under
the control of a human driver have been used to obtain the training patterns. A wide range
of manoeuvres were carried out to completely cover the range of possible data patterns.
Figure 5-23 shows graphically the heading, velocity and curvature measured and stored in
the log file from which the learning data set were generated.
Figure 5-22. Black box model of the heading system.
Figure 5-23. Experimentally measured heading, velocity and curvature.
f) Testing the heading system model.
As in the preceding sub-models, the behaviour of the heading system was modelled using a RBF neural network. A collection of non-dominated Pareto optimal RBF neural network-based models was obtained by executing the MOGA according to the same objectives
specified in sections a) and c). In addition to the open-loop tests carried out using the
validation learning data set, a closed-loop tester was also executed, and the RBF neural
network-based model whose parameters are shown in Table 5-5 and Table 5-6 was finally
selected by the designer. The selected RBF net consists of 16 input neurons, 6 hidden neurons and one output neuron.
The closed-loop testers for the steering and drive system models were executed and their
outputs stored to be later used as inputs of the closed-loop heading system model tester.
The outputs of the heading system were used as lagged inputs of the model, see Figure
5-24.
Figure 5-24. Black box model of the heading system.
Lags: 29, 26, 24, 22, 21, 20, 19, 17
Centers: 1.2957, 1.2366, 0.2956, 0.2375, 1.0763, 1.52, 0.8668, 1.1115, 0.9468, 0.7248, 0.3687, 0.4016, 0.697, 0.0922, -0.167, -0.274, 0.0419, 1.4657, 0.4157, 0.5187, 0.5521, 0.7997, 0.3492, 0.295, -0.043, 0.4771, 0.6044, 0.4543, 0.8923, -2.179, -1.904, -0.926, 0.7381, 1.0359, 1.0626, 0.6991, 0.8675, 1.0013, 0.7094, 0.6202, 0.2594, 0.026, 0.0194, 0.4347, 0.1019, 1.412, 1.0504, 0.5012
Spreads: 15.4982, 6.7624, 7.051, 9.8995, 5.3742, 4.7238
Bias: -3.063
Weights: 12.260, 18.027, -12.48, -6.835, -3.451, -1.125
Table 5-5. Neural network parameters. (Part 1)
Lags: 16, 13, 11, 10, 9, 5, 4, 2
Centers: 0.6844, 0.6236, 0.4071, 5.5179, 6.6415, 7.4768, 6.8666, 6.26, -0.301, -0.548, -0.627, 6.1229, 0.7959, -0.344, -0.451, -0.231, -0.226, -0.617, -0.755, 4.8827, 2.3072, 0.8232, 0.4561, 0.6444, -0.614, 0.1502, 0.7398, -0.779, -1.256, -1.508, -0.072, 0.903, 0.4175, 0.185, 0.4536, -7.354, -7.534, -7.790, -8.293, -7.877, 0.3142, 0.0231, -0.335, 0.1211, 0.6826, 0.754, 0.6481, 0.091
Table 5-6. Neural network parameters. (Part 2)
The records in the structure detailed in Table 5-5 and Table 5-6 define the RBF neural
network-based model selected for the heading system. As with the steering system and the drive system models, the parameters are the selected lags, the number of hidden
neurons, the number of inputs, the sets of centres and spreads of the Gaussian functions,
and the set of weights of the linear sub-network.
In order to demonstrate that the RBF neural network-based model was able to simulate the
behaviour of the heading system, the input velocity and steering commands recorded
during the manual mode experiments were applied to the neural net, as Figure 5-24 shows.
In Figure 5-25 and Figure 5-26 the real heading values measured during the experiments
are compared to those obtained by executing the closed-loop tester.
(1) RMSE = 71.0e-4
(2) RMSE = 18.0e-4
(3) RMSE = 47.0e-4
(4) RMSE = 17.0e-4
Figure 5-25. Comparison between experimentally measured heading (dashed blue line) and
simulated heading generated by the RBF neural network-based model of the heading
system (solid red line) (Part 1).
(5) RMSE = 75.0e-4
(6) RMSE = 54.0e-4
(7) RMSE = 88.0e-4
(Legend: real, estimated.)
Figure 5-26. Comparison between experimentally measured heading (dashed blue line) and
simulated heading generated by the RBF neural network-based model of the heading
system (solid red line)(Part 2).
The distribution of the Root Mean Squared Error (RMSE) is shown in Figure 5-27,
together with the main statistical parameters. The mean value is around 0.0057, with a
standard deviation of 0.0029. In general, good results are obtained, but several issues deserve attention. Figure 5-25 (1) and Figure 5-25 (2) show the behaviour of
the heading model when the vehicle is driven around a complete circle. As shown, the RBF
neural network-based model is very accurate in this case. In the test shown in Figure 5-25
(3) the performance of the model is lower, because of the very fast changes in the desired
curvature. This behaviour was not included in the learning data set used for training the
steering system model. Further training of the neural network including additional patterns
with faster changes in the steering command should be carried out, in order to improve the
results.
Statistical values: Mean = 0.0057, σ = 0.0029, Median = 0.0062.
Figure 5-27. Results obtained after testing the closed-loop model.
On the other hand, the velocity also has a strong influence on the performance of the heading model. Therefore, inaccurate results in the drive system model will lead to errors in the heading system model. As mentioned before, the drive system model could be improved by generating a new training data set with additional sensor inputs to measure the slope of the terrain. This improvement would also lead to better results for the heading.
5.2.3 A comparison between the RBF neural network-based
models of the drive, steering and heading systems and their
respective linear models.
In this section, the results obtained by using the RBF neural network–based models for the
steering, drive and heading systems are compared to those provided by the well-known
kinematic model given by (5-1) and the first order dynamic models for the steering and
drive systems shown in Figure 5-29 (a) and in Figure 5-29 (b). A Simulink-based diagram
that implements the mentioned models is included in Figure 5-28. In order to compare the
results from the RBF neural network–based heading model with those provided by the mathematical kinematic model, the same sequence of real velocity and steering angle values are used as inputs.
Figure 5-28. Kinematic model and first order dynamic models.
(a) Steering Model
(b) Drive Model
Figure 5-29. First order dynamic models.
As Figure 5-30 shows, the experiments demonstrate that the results obtained from the RBF
neural network-based model of the steering system are in most cases better than those
obtained from the first-order linear model. Specifically, in the second experiment, see
Figure 5-30 (2), a better performance of the linear model is obtained because the RBF
neural network-based model was not trained for fast changes in the curvature command.
This could be easily solved by retraining the neural network with additional patterns that
take into account such working conditions.
Figure 5-30. Steering System. Comparison between experimental data, simulation results
from the RBF neural network-based model and simulation results from the linear model.
On the other hand, Figure 5-31 shows a better behaviour of the RBF neural network-based
model for the drive system, even in the third and fourth experiments, where the
underdamped behaviour is present, see Figure 5-31 (3) and Figure 5-31 (4). Note that in
those cases more sensor inputs would be needed in order to properly learn the
performance of the drive system, as previously explained.
In short, using a RBF neural network-based model of the drive system is advantageous
even under unmodelled disturbances. In these cases, the RBF neural network-based model
could be easily retrained using a more complete training pattern set with additional inputs,
if necessary. However, the linear models cannot be easily improved unless major changes
are undertaken. Finally, Figure 5-32 shows the comparison between the RBF neural
network-based model and the first-order linear model of the heading system.
Figure 5-31. Drive system. Comparison between experimental data, simulation results from the RBF neural network-based model and simulation results from the linear model.
Figure 5-32. Heading system. Comparison between experimental data, simulation data from the RBF neural network-based model and simulation data from the linear model.
In this case, the first, second and third experiments, see Figure 5-32 (1), Figure 5-32 (2) and
Figure 5-32 (3), demonstrate that the RBF neural network-based model is better than the
linear model. Note that the difference is particularly significant in the third test.
5.3 An Artificial Neural Network-based method for
path tracking.
As mentioned in section 5.1, path tracking could be considered the last step of the chain of
activities in robot navigation, but not the least important. In general, any path tracking
controller uses an explicit path to follow as input and returns the sequence of curvature
references (steering commands) that the robot has to execute in order to accurately reach
the positions that form the path. The simplest technique that solves this problem is
possibly the Pure Pursuit algorithm [126]. This method is summarized in the following subsection.
5.3.1 The classic Pure Pursuit algorithm.
The classic Pure Pursuit algorithm proposed by Amidi [126] uses an explicit path as input, normally stored as a set of positions in a global reference system, together with the current position of the robot. One of these positions is selected to be the goal point $g$ at every control interval $t$. The distance between the position of the robot and the goal point is named "lookahead", or simply $L$. The steering command $\gamma$ (the curvature reference that the robot should apply in order to describe a circumference arc to reach the goal point) is calculated according to the following equation:

$$\gamma = \frac{2\,x_g}{L^2} \qquad (5-7)$$

where $x_g$ is the X-coordinate of the point $g$ in the robot reference system. The terms $x_g$ and $2/L^2$ can be interpreted as the error signal and the gain of the control law, respectively [127].
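A minimal C++ sketch of Equation (5-7) follows, assuming that the goal point has already been expressed in the robot reference system with the X-axis as the lateral direction; the function name and the degenerate-case guard are illustrative.

    #include <cmath>

    // Pure Pursuit steering (Eq. 5-7): curvature = 2 * x_g / L^2, where x_g is
    // the lateral coordinate of the goal point in the robot reference system
    // and L is the lookahead distance to that point.
    double purePursuitCurvature(double lateral, double forward)
    {
        const double L2 = lateral * lateral + forward * forward;  // squared lookahead
        if (L2 < 1e-9) return 0.0;  // goal point on top of the robot: no command
        return 2.0 * lateral / L2;  // curvature reference
    }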
The parameter $L$ allows the gain of the control law to be modified. The steering control will be smoother at the expense of worse tracking accuracy when the lookahead is longer; conversely, if the lookahead is shorter, the tracking errors can be reduced, but the steering commands increase and the robot's motion can become unstable. Moreover, due to non-holonomic and manoeuvrability constraints, not all the steering commands can be applied.
On the other hand, this method is exclusively based on geometric computations.
Furthermore, the dynamic behaviour of the steering and drive systems is not considered, so
errors in tracking can be expected. This could have a very negative effect on the global
navigation algorithm, since if the path planned by a previous component is not properly
followed, localization or mapping tasks would not be correctly carried out.
Some improvements of the Pure Pursuit Controller have been developed. In [128], a curve
is fitted to the piece of the path between the position of the robot and the goal point.
Other works focus on enhancing the techniques that allow the controller parameters to be properly adjusted [129]. The dynamic behaviour of the robot is considered in some
works [130], but it is necessary to have an accurate mathematical model of this behaviour.
In this Thesis, the navigation problem is globally addressed as a set of software
components where the path tracking module is located at the end of the chain. Specifically,
this component is implemented by using a Computational Intelligence-based approach, in
particular a supervised Artificial Neural Network (ANN), as it is explained in detail in the
following sub-section.
The proposed technique allows better results to be obtained than with the classic Pure Pursuit algorithm while maintaining a relatively low computational cost, as the experiments
demonstrate both in simulated and in real environments. Furthermore, the explicit
mathematical models of the drive and steering systems are not necessary, since the ANN is
capable of directly learning the dynamic and kinematic behaviour of the robot. Then, only
the set of equations that define the behaviour of the ANN are needed.
5.3.2 An Artificial Neural Network-based path tracker.
This sub-section describes the proposed Artificial Neural Network-based path tracker
(ANNPT). The advantages of this method are demonstrated by executing the implemented
algorithm in a simulated environment and carrying out real experiments on the real non-holonomic robot ROMEO 4R.
As in the Pure Pursuit method, the objective of the ANNPT is to obtain the appropriate steering command (or curvature reference) $\gamma(t)$, by taking into account the position of the robot and the sequence of points that form the path which should be tracked.

A simple ANN-based approach would be one that calculates the output $\gamma(t)$ from a set of inputs (such as the current position of the robot $P(t)$ and the selected goal point $g$ located at a given distance $L$) by using a supervised Artificial Neural Network (ANN). However, if only $P(t)$ and $g$ are taken into account, the dynamic behaviour of the robot is not considered. Furthermore, when a human being drives a vehicle, he or she usually allows for a piece of the path in front. Therefore, when the steering command is calculated at time $t$, not only a goal point, but a set of consecutive goal points that define such a piece of the path should be considered. In addition, if the dynamic behaviour of the robot is to be taken into account, at least the curvature at $t-1$ must also be used as input.
The following equation defines the function that provides the steering command to reach a part of the explicit path according to the current robot pose, the last measured curvature and the set of goal points that define the piece of the path:

$$\gamma(t) = f\left(P(t), \theta(t), \gamma(t-1), g_1, \dots, g_n\right) \qquad (5-8)$$

where $\gamma(t)$ is the steering command, $P(t)$ is the position of the robot at time $t$, $\gamma(t-1)$ is the curvature measured at $t-1$, $g_1, \dots, g_n$ are the points that represent the piece of the path selected as objective (see Figure 5-33), and $\theta(t)$ is the heading of the robot at time $t$.

This function can be easily approximated by using a supervised ANN such as the well-known Multilayered Perceptron (MLP) model [18].
Figure 5-33. A piece of the path represented by a set of consecutive goal points.
1) Training stage.
Before including the ANNPT into the global navigation architecture, it is necessary to train
the MLP using a set of patterns wide enough to describe the behaviour of the robot
according to the inputs specified in (5-8). In order to facilitate this task, the function can be
transformed into another one where the set of goal points, which describe the piece of the
path to follow, are expressed in polar form in the robot reference system. In this case, the
position and the heading of the robot are not necessary, see Figure 5-34:
(5-9)
In the experiments, a simulated framework and the real robot ROMEO 4R have been
used. First, the path tracker was tested using the simulated framework that reproduces the
ROMEO 4R dynamic and kinematic behaviour. The set of training patterns was obtained
after manually driving the simulated robot. A wide range of manoeuvres were recorded.
Next, the real robot ROMEO 4R was used for testing. In this case, only the kinematic model was considered in the training process, in order to reduce the set of manoeuvres to be applied by the human driver and to simplify the experiments. Under this simplification, the input $\gamma(t-1)$ is not necessary.
Figure 5-34. Set of five goal points represented in polar form in the robot reference system.
The training data set consists of a wide range of simple curves (arcs) that the vehicle could potentially follow, built by sweeping the entire interval of possible steering commands. A set of points is calculated from each simple curve. The first point is located at an initial distance $d_{min}$ and the last point is located at a final distance $d_{max}$. Each training record is represented by the vector:

$$v = \left[\gamma(t-1), \rho_1, \alpha_1, \dots, \rho_n, \alpha_n \mid \gamma(t)\right] \qquad (5-10)$$

where $d_{min} \le \rho_i \le d_{max}$. Since $d_{min}$ and $d_{max}$ are parameters that should be heuristically defined by the designer, several pairs of $d_{min}$ and $d_{max}$ values must be taken into account when the training data set is being built. This allows the lookahead boundaries to be changed without retraining the neural network, see Figure 5-35. Furthermore, the velocity could be used as an additional input when the speed has to change during navigation. In all the experiments, the velocity has been considered constant for simplicity; however, testing the algorithm when the velocity changes during the test only requires training the neural network using new patterns that take these variations into account. The architecture of the MLP neural network used to implement the ANNPT is shown in Figure 5-36.
Figure 5-35. Set of feasible manoeuvres for obtaining the training data set. The minimum turning radius is 2.5 meters for the robot ROMEO 4R.
Figure 5-36. MLP neural network architecture used for path tracking.
Figure 5-35 illustrates an example where two training patterns are obtained. For instance, the number of inputs of the ANN is 9. The first pattern is generated taking into account that one input is the curvature $\gamma(t-1)$ and the rest of the inputs are the four points in polar form $(\rho_1, \alpha_1), \dots, (\rho_4, \alpha_4)$. The output of the neural network is the curvature $\gamma(t)$. The second pattern is generated considering again the input $\gamma(t-1)$ and another set of four points in polar form. The output is also $\gamma(t)$. The indexes that define the number of points between two consecutive selected goal points are heuristically selected.
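The following C++ sketch illustrates how one such training record could be generated from a constant-curvature arc, assuming the robot at the origin heading along the Y-axis and at least two goal points; the arc sampling and the function name are illustrative, not the exact procedure used to process the recorded manoeuvres.

    #include <cmath>
    #include <vector>

    // Build one training record (Eq. 5-10) from a constant-curvature arc.
    // The robot sits at the origin heading along +Y; points on the arc between
    // dMin and dMax are expressed in polar form (rho, alpha).
    std::vector<double> makeTrainingRecord(double curvature, int nPoints,
                                           double dMin, double dMax)
    {
        std::vector<double> record;
        record.push_back(curvature);  // input gamma(t-1): curvature being executed
        for (int k = 0; k < nPoints; ++k) {
            const double s = dMin + k * (dMax - dMin) / (nPoints - 1);  // arc length
            double x, y;
            if (std::fabs(curvature) < 1e-9) { x = 0.0; y = s; }  // straight line
            else {
                const double r = 1.0 / curvature;        // turning radius
                x = r * (1.0 - std::cos(s * curvature)); // lateral offset
                y = r * std::sin(s * curvature);         // forward offset
            }
            record.push_back(std::hypot(x, y));  // rho: distance to the goal point
            record.push_back(std::atan2(x, y));  // alpha: angle from the heading axis
        }
        record.push_back(curvature);  // target output gamma(t)
        return record;
    }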
The training process has been carried out using the Neural Network MATLAB Toolbox. The Levenberg-Marquardt training algorithm has been selected to adjust the sets of weights $W$ and $w$.
2) Working stage.
Once the training stage is completed, the sets of weights $W$ and $w$ are used in the working stage according to the following equation:

$$y = g\left(\sum_{j=1}^{H} W_j \, f\left(\sum_{i=1}^{N} w_{ji}\, x_i\right)\right) \qquad (5-11)$$

where $y$ is the output of the ANN, $H$ is the number of hidden neurons, $N$ is the number of input neurons, $W_j$ is the weight of the link between the hidden neuron $j$ and the output neuron, $w_{ji}$ is the weight of the link between the input neuron $i$ and the hidden neuron $j$, $x_i$ is the $i$-th item of the input vector, and $f$ and $g$ are activation functions. In this case, the activation function $f$ is the sigmoid function and $g$ is the identity:

$$f(a) = \frac{1}{1 + e^{-a}} \qquad (5-12)$$
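A minimal C++ sketch of Equations (5-11) and (5-12) is given below. Bias terms are omitted for brevity (they could be folded in as a constant input), which is a simplification rather than the exact formulation used in the experiments.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Sigmoid activation of the hidden layer (Eq. 5-12).
    double sigmoid(double a) { return 1.0 / (1.0 + std::exp(-a)); }

    // Forward pass of the MLP (Eq. 5-11) with a linear (identity) output unit.
    double mlpOutput(const std::vector<std::vector<double>>& w,  // w[j][i]: input i -> hidden j
                     const std::vector<double>& W,               // W[j]: hidden j -> output
                     const std::vector<double>& x)               // input vector
    {
        double y = 0.0;
        for (std::size_t j = 0; j < W.size(); ++j) {
            double a = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)
                a += w[j][i] * x[i];
            y += W[j] * sigmoid(a);  // g is the identity, so no outer transform
        }
        return y;
    }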
3) Experiments.
This sub-section shows several experiments carried out to demonstrate the efficiency of the
proposed path tracking method. First, the technique has been validated in a simulated
framework where the kinematic and dynamic behaviour of the robot ROMEO 4R has been
modelled. The models have been used for both obtaining the set of training patterns and
testing the trained ANNPT. Finally, a simplified version of the ANNPT (only considering
the kinematic behaviour of the real robot ROMEO 4R) has also been validated. As
mentioned before, the simplified version of the ANNPT just requires using the set of goal
points as inputs; therefore, the set of necessary manoeuvres that cover all the possible
curves that the robot could follow is reduced and becomes easier to obtain.
a) Results obtained using the simulated framework.
The kinematic behaviour of the ROMEO 4R robot is modelled using equation (5-1). In
addition, the dynamic model obtained for the experiments takes into account that the
steering system has a pure delay, so the vehicle starts to follow the simple curve when the
real curvature reaches the desired steering command, and not before. This important issue has been taken into account when generating the training data set.
Different ANNPTs were tested taking into account different parameters: number of goal
points, size of the selected piece of the path and number of hidden neurons, among others.
Furthermore, each experiment was carried out using a different constant velocity;
therefore, the neural network, with parameters specified by the designer, was retrained
taking into account diverse training data sets obtained from the set of manoeuvres
executed with different velocities.
Once the neural network was validated using a set of testing patterns consisting of simple
curves, a set of more complex paths was utilized to test the ANN-based path tracking
controller. This validation algorithm is outlined in Figure 5-37.
Table 5-7 shows the eight tests that were carried out. In tests 1, 3, 4, 5 and 6 the applied
velocity is the typical speed used by handicapped people while driving a wheelchair. On the
other hand, a very slow velocity is used in tests 7 and 8. Finally, test 2 is carried out to
demonstrate that the behaviour of the proposed method is suitable even when the velocity
is moderately increased. The results obtained by the proposed method are compared to the
results obtained after executing the classic Pure Pursuit algorithm for the same paths and
the same velocities, see Figure 5-38.
Figure 5-37. Diagram of the algorithm used to validate the performance of the proposed ANNPT.
Test   Velocity (m/s)   Velocity (km/h)
1      2.78             10
2      11.11            40
3      2.78             10
4      2.78             10
5      2.78             10
6      2.78             10
7      1.11             4
8      1.11             4
Table 5-7. Collection of tests carried out to demonstrate the performance of the proposed
ANNPT in a simulated environment.
(Panels: Test 1 to Test 8.)
Figure 5-38. Set of experiments carried out to demonstrate that the ANNPT has a better
performance than the Pure Pursuit controller.
As Figure 5-38 shows, the heading and position of the robot with respect to the explicit path to follow are different in each experiment. If the direction of the robot coincides with the direction of the first positions of the path, the manoeuvrability constraints do not have any effect at this point, as Tests 1 and 5 demonstrate. However, if the direction of the robot is very different from, or even opposite to, the direction of the initial piece of the path (pay attention to Test 6), the result calculated by the MLP Neural Network must be limited to the interval $[\gamma_{min}, \gamma_{max}]$. In this case $\gamma_{max} = 0.4$ m$^{-1}$ and $\gamma_{min} = -0.4$ m$^{-1}$, since the minimum turning radius is 2.5 meters. In these cases, the robot describes a circumference until its direction is approximately parallel to the direction of the piece of the path that it is required to follow. Even when the robot is located at a position which is distant from the path (pay attention to Test 7), the ANNPT gives suitable steering commands to reach the path and properly track it.
All the tests demonstrate that the proposed Neural Network-based technique provides
more accurate results than the basic Pure Pursuit controller, while maintaining a low
computational cost. This is an advantage because the robot must interact with the
environment in real time. Note that the ANNPT is the last software component that must
be executed and it must share the available time at every control interval with the rest of
modules.
b) Results obtained using the real robot ROMEO 4R.
The ANNPT has also been validated in the real robot ROMEO 4R. As mentioned before,
a simplified version of the neural network-based controller has been used. In this case, the previous curvature reference is not taken into account, but good results are also obtained even though the set of manoeuvres recorded to generate the training data set consists of fewer patterns. Figure 5-39 shows an example of a path tracked using the ANNPT. In particular, the MLP neural network has only three hidden neurons. Only 8 goal points have been selected to describe the piece of the path to follow during a control interval, so 16 inputs are needed. The full size of the neural network is therefore only a few tens of weights.
Figure 5-39. Validation of the ANNPT using the robot ROMEO 4R. The red dotted line
represents the path followed by the robot when the ANNPT is executed. The black line
represents the original path to be tracked.
The distance between a pair of consecutive goal points and the selected lookahead boundaries $d_{min}$ and $d_{max}$ are fixed by the designer; all the distances are expressed in meters.
As Figure 5-39 illustrates, the steering commands provided by the ANNPT allow the real
robot ROMEO 4R to track the path with low error.
5.4 Conclusion.
A global strategy to find a good model of a real car-like vehicle has been proposed. The
MIMO (Multiple Input, Multiple Output) system that describes the behaviour of the robot
has been divided into several MISO systems whose outputs are used as inputs by other
modules. This technique facilitates the use of neural networks that model the behaviour of
each component.
Specifically, a RBF neural network architecture is used in this work since the training and testing processes are faster than for other kinds of neural networks. A MOGA has been utilized for searching the non-dominated Pareto-optimal collection of RBF neural network-based models of the steering, drive and heading systems.
The distributed software architecture developed at the CIS of the University of Algarve for
the implementation of the MOGA provides a nearly-automatic method of finding a
suitable neural network according to the design goals.
In short, the experiments demonstrate that the RBF neural network-based models perform
better than other well-known, frequently used linear models. Moreover, even in those cases
where the performance is not as good as expected, RBF neural network-based models
could be trained again in order to accomplish new goals. In particular, the inaccuracy of some results for the drive block illustrates the need for information about important
disturbances that affect the behaviour of the vehicle. This problem could be solved by
adding new sensors to the vehicle and redefining the training data set used in tests.
On the other hand, the main drawback of the approach is the need to manually set up
some of the MOGA parameters, although this process is much easier than finding the best neural network structure by experience and trial and error.
Finally, many systems show a highly non-linear dynamic behaviour and suitable
mathematical models can be hard to obtain, so using models based on training data sets is
frequently the best solution and possibly the only one.
Furthermore, in this chapter an ANN-based path tracking method is also proposed. The
results obtained both in simulation and in the real robot ROMEO 4R demonstrate that a
Computational Intelligence-based path tracking technique provides advantages if compared
to the classic Pure Pursuit method. The results obtained are better and the computational
cost is not significantly higher than in the well-known geometric algorithm. Furthermore,
unlike more complex methods that take into account the dynamic behaviour of the vehicle,
no complex mathematical models are necessary when the ANNPT is used. The controller has proven to be flexible, and it could be used with any similar robot by simply acquiring a new training data set and readjusting the weights of the neural network.
And its object is Art not power, sub-creation not
domination and tyrannous re-forming of Creation.
J.R.R. Tolkien
Chapter 6
BENDER: A low-cost architecture for Service Robotics.
This chapter describes the hardware and software architectures of the low cost service robot BENDER (Basic ENvironment for DEveloping Robotic systems), developed within the scope of this Thesis as a basic platform for experimentation. The BENDER service robot is compared to other robots on the market and the relationship between benefits and cost is analyzed. It can be concluded that BENDER retains many of such benefits and its cost is much lower than that of other existing commercial platforms.
The proposed software architecture is open source, and it consists of a set of distributed
processing elements that interact using the YARP middleware. This provides interesting
advantages, since it is possible to decide which processes should be executed on board the robotic platform and which should be remotely executed, if necessary. An additional
advantage of the architecture is that it is very easy to integrate the robot in an ambient
intelligence environment, since the software architecture is easily expandable. Furthermore,
the software architecture could be executed on the robot BENDER or on any other similar
mobile robot by adapting its low-level software components.
6.1 BENDER: A low-cost hardware and software
architecture.
A service robot designed to aid people should meet several functional requirements. The
robot must be capable of navigating through a dynamic indoor setting with specific
geometrical features. Therefore, it must acquire the information from the environment,
process this information and act in an appropriate way according to the needs of the user,
all in real time. On the other hand, easy human-machine interaction is also a significant
issue to consider, in order to make it accessible to a general audience not necessarily
familiar with Robotics. Finally, the total cost of the robot must be low, so that it is
affordable for low-budget applications. BENDER (see Figure 6-1) meets such requirements. It is a small wheeled robot with a differential drive system, with a weight of approximately kg and a height of cm.
Figure 6-1. The BENDER service robot.
In addition, it is equipped with a low cost 2D laser device (Hokuyo URG-04LX) and a low
cost stereo vision system consisting of two Logitech AF cameras with an integrated "pan &
tilt & zoom" (PTZ) system. Specifically, the laser sensor has been placed on a rotating
platform driven by a servo motor that allows the device to be rotated
degrees. This is
particularly useful for the topological map building experiments described in section 4.1,
because when a new topological node is defined, it is necessary to store the semantic
polygonal description of the environment located both in front and behind the robot in
order to cover
degrees.
The software architecture is open and distributed; therefore, it is possible to extend the
functionality of the robot through the design and implementation of new components and,
if necessary, easily integrate it in a ubiquitous environment using wireless interfaces.
6.1.1 Hardware architecture.
The hardware components of the BENDER robot are enumerated in Table AI-1 in
Appendix I, including the individual cost of each element. In addition, BENDER is
compared to the most popular commercial robots that are currently available. A detailed
study of costs between these commercial platforms and the proposed low-cost robot is
presented in Table AI-2 in Appendix I, considering the features provided by each one. As
it is demonstrated, most commercial robotic platforms used for research tasks or as service robots have adequate technical features, but their cost is very high. Low-cost robots are less expensive, but they provide less functionality than BENDER. Furthermore, some commercial robots are meant for a specific purpose, so their capabilities cannot easily be extended.
It is important to mention that although the BENDER robot is equipped with a differential drive system (whose hardware components are detailed in Figure AI-3 in Appendix I), the low level software architecture allows the position, heading and velocity to be calculated by taking the curvature and the velocity references as inputs. These references are described by using the pair $(\gamma, v)$, where $\gamma$ is the desired curvature and $v$ is the desired velocity. This facilitates the implementation of high level components that can be applied to both differential-drive and car-like robots.
The set of low-level software components (algorithms that directly interact with the
hardware devices) must be executed on the onboard computer of the robot; however, the
high-level software components (algorithms that implement the behaviour of the robot or
allow to carry out specific tasks), could be executed either on the onboard computer or on
a remote computer using a wireless connection and the YARP middleware.
6.1.2 Software architecture.
In this section, the software architecture implemented in the BENDER service robot is
described in depth. The set of algorithms that compose such architecture can be grouped
in two categories: low-level and high-level software architectures, see Figure 6-2. The low-level software architecture consists of a set of algorithms that solve the problems related to hardware interaction and raw data input from sensors. The high-level software architecture
consists of the software components that allow the robot to execute high-level tasks, such
as setting description, map building, self-localization, exploration or interaction with
human users.
Figure 6-2. Distributed software architecture. Red lines represent the information shared
between high-level and low-level processes. Black lines represent the information shared
between processes of the same level.
All the modules or components of the global architecture have been designed as individual
processing units that share data using a middleware framework, specifically, the YARP
network.
1) Low-level software architecture.
In this section, the software modules that make possible the interaction between the high-level software components and the hardware devices are described. These algorithms are executed on the onboard computer of BENDER, which runs a Linux operating system.
The experiments proposed in this Thesis are based on the information provided by the
drive system and the 2D laser device. Therefore, the software modules that control both
hardware elements are described in the following sub-sections.
a) Drive system software controller.
As mentioned before, the BENDER service robot is equipped with a differential drive
system, see Figure 6-3. The robot can change its position and heading when the drive
system receives the desired velocities of the right and left wheels as input commands.
However, in order to maintain the compatibility with car-like vehicles, the software
component designed to control the drive system uses the inputs
, where
is the
desired curvature and
is the desired velocity, with the purpose of obtaining the pose of
the robot
as output, where
is the position and is the heading of the
robot according to the robot reference system at time .
As Figure 6-3 shows, the position of the robot $(x, y)$ is defined as the centre of the rear shaft. The parameter $b$ defines the length of the rear shaft. The steering command $\gamma$ (also named desired curvature) is defined as the inverse of the turning radius $r$.
Figure 6-3. BENDER differential drive system.
In order to transform the reference $(\gamma, v)$ into a pair $(v_L, v_R)$, where $v_L$ is the desired velocity for the left wheel and $v_R$ is the desired velocity for the right wheel, the following equations are used:

$$\omega = \gamma\, v, \qquad v_L = v - \frac{\omega\, b}{2}, \qquad v_R = v + \frac{\omega\, b}{2} \qquad (6-1)$$

where $\omega$ is the angular velocity and $b$ is the length of the rear shaft.
As detailed in Table AI-1, the drive system is equipped with a hardware controller that
facilitates the interaction with the motors and the encoders. This controller is connected to
the onboard computer by using a standard serial port connector. The controller datasheet specifies the protocol that allows commands to be sent to the drive system and data to be received from it. In particular, it is possible to send two bytes that respectively define the velocity of
the right and the left wheels. Furthermore, two encoders can be read to estimate the
velocity of each wheel. Once these data are available, a simple odometry module calculates
the linear velocity of the robot and the change in heading and position. The information
provided by the digital compass can also be taken into account for heading estimation.
Figure 6-4 shows the software component that obtains the pose of the robot from
the desired curvature and velocity. Since these commands are usually generated by other
higher level software modules and the output is also normally used by higher level
algorithms, the YARP network is used to send and receive both the commands and the
outputs. In particular, from the drive system software component point of view, the YARP
port “/bender/references/receiver” is used to receive the steering and velocity commands and
the port “/bender/odometry/sender” is used to send the pose of the robot. The information
shared by means of the YARP ports has a specific format given by the YARP interfaces defined in Appendix I, specifically in Code A1-1.
Figure 6-4. Drive system software component.
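The following C++ sketch outlines how this component could open its two YARP ports. The Bottle-based message layout is only an assumption made for illustration, since the actual data format is given by the YARP interfaces in Code A1-1 of Appendix I; the accessor names follow recent YARP releases (asFloat64/addFloat64).

    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>

    int main()
    {
        yarp::os::Network yarp;  // initialize the YARP network

        // Port where (curvature, velocity) references arrive from higher levels.
        yarp::os::BufferedPort<yarp::os::Bottle> refs;
        refs.open("/bender/references/receiver");

        // Port where the estimated pose (x, y, theta) is published.
        yarp::os::BufferedPort<yarp::os::Bottle> odom;
        odom.open("/bender/odometry/sender");

        while (true) {
            yarp::os::Bottle* cmd = refs.read();  // blocking read of a reference
            if (cmd == nullptr) break;            // port closed
            const double curvature = cmd->get(0).asFloat64();
            const double velocity  = cmd->get(1).asFloat64();
            (void)curvature; (void)velocity;      // wheel commands would be sent here

            yarp::os::Bottle& pose = odom.prepare();
            pose.clear();
            pose.addFloat64(0.0);  // x (placeholder for the odometry estimate)
            pose.addFloat64(0.0);  // y
            pose.addFloat64(0.0);  // theta
            odom.write();
        }
        return 0;
    }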
The software component Drive System is implemented as a process that consists of two
threads: Send Command and Odometry. These threads directly interact with the MD25
hardware controller. This device is a dual motor driver designed for use with the EMG30
motors. These motors are used in the BENDER drive system.
The MD25 protocol defines that it is necessary to send two signed bytes to change the velocity of each wheel. The minimum velocity is 0 and the maximum velocity is 127 (if the robot is going forward) or -128 (when the robot is going backward). As mentioned above, in order to maintain compatibility with car-like robots, the thread Send Command receives a pair $(\gamma, v)$ from the YARP port "/bender/references/receiver" and transforms these values into a suitable pair of bytes that represent the velocities of the left and right wheels. Once the velocities for the right and the left wheels have been calculated according to Equation (6-1), the bytes are obtained by scaling each wheel velocity linearly into the signed byte range:

$$byte = \mathrm{round}\left(127\, \frac{v_{wheel}}{v_{max}}\right) \qquad (6-2)$$

where $v_{max}$ is the maximum value that the velocity can reach; its value has been determined empirically.
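Both transformations can be sketched in C++ as follows, under the assumptions stated above: a linear scaling into the signed byte range, with $b$ denoting the length of the rear shaft.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Eq. (6-1): convert a (curvature, velocity) reference into wheel velocities,
    // where b is the length of the rear shaft (distance between the wheels).
    void wheelVelocities(double curvature, double v, double b,
                         double& vLeft, double& vRight)
    {
        const double omega = curvature * v;  // angular velocity
        vLeft  = v - omega * b / 2.0;
        vRight = v + omega * b / 2.0;
    }

    // Eq. (6-2), sketched: scale a wheel velocity into the signed byte expected
    // by the MD25 controller. The saturation is an assumption.
    std::int8_t velocityToByte(double vWheel, double vMax)
    {
        double scaled = 127.0 * vWheel / vMax;
        scaled = std::max(-128.0, std::min(127.0, scaled));  // saturate to byte range
        return static_cast<std::int8_t>(std::lround(scaled));
    }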
On the other hand, the thread Odometry acquires the information from the encoders as two
long integer values which are used, together with the wheel radius, to calculate the velocity of
each wheel. The difference between the velocities of both wheels makes it possible to estimate the change in heading. This value could optionally be combined with the estimation provided
by the digital compass. Furthermore, the previous pose is stored and used to compute the
current pose of the robot. This pose is shared with the rest of components by sending the
result through the port “/bender/odometry/sender”.
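A minimal C++ sketch of the pose integration that such an Odometry thread could perform is shown below; the Euler integration step and the structure names are assumptions, not the exact computation used by the component.

    #include <cmath>

    struct Pose { double x, y, theta; };

    // Differential-drive odometry step: integrate the pose from the measured
    // wheel velocities over a sampling period dt, with b the wheel separation.
    Pose odometryStep(const Pose& p, double vLeft, double vRight,
                      double b, double dt)
    {
        const double v     = (vRight + vLeft) / 2.0;  // linear velocity
        const double omega = (vRight - vLeft) / b;    // rate of change of heading
        Pose next;
        next.theta = p.theta + omega * dt;
        next.x     = p.x + v * std::cos(next.theta) * dt;
        next.y     = p.y + v * std::sin(next.theta) * dt;
        return next;
    }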
b) Software controller for exteroceptive sensors.
As mentioned before, the robot BENDER is equipped with several low-cost sensors that
allow the robot to acquire the necessary information from the environment to carry out the
navigation tasks proposed in this Thesis.
In particular, the Hokuyo 2D laser device is particularly relevant for the experiments, since
several high level software components use the semantic polygonal representation of the
environment obtained from its raw data. In order to facilitate the implementation of the
software module that acquires the information sensed by the Hokuyo device, the hokuyoaist driver from Player has been used. Therefore, it is also necessary to execute the
Player server on the onboard computer. A process takes the data from the Player server and sends them through the YARP port "/bender/laser2d/sender". The process provides data with a frequency of $1/T$, where $T$ is the sampling period used in the experiments, see Figure 6-5.
c) Software controller for voice synthesis.
Natural interaction with human users is a key issue in service robotics. Therefore, the
BENDER service robot is equipped with a software component that allows it to answer the user by using natural language. The festival driver provided by the Player environment is used for this purpose. Figure 6-6 shows a diagram where the behaviour of the
software component is detailed.
Figure 6-5. Software component that allows the robot to acquire raw data from the Hokuyo
2D laser device.
Figure 6-6. Software component that controls the voice synthesis device.
d) Software component for controlling the 2D laser device gyratory platform.
As shown in Figure 6-1, the robot is also equipped with a servomotor that makes it possible to rotate the 2D laser device if necessary. Figure 6-7 shows the diagram of the software module that
controls this platform. A rotation command is received from the YARP port
/bender/servo/receiver. Then the Main thread sends such rotation command to the hardware
servo controller according to the protocol defined by the device. A serial port connection is
used to carry out this task. Once the servo has executed the task, an answer is sent to the
YARP network by means of the port /bender/servo/sender.
2) High-level software architecture.
In this section, the set of software modules that compose the high-level software
architecture are described.
These algorithms can be executed remotely or on the onboard computer under a Linux
operating system. Figure 6-2 shows that the high-level software architecture consists of the
following set of components: “Setting description” module, “Topological SLAM” module,
“Exploration” module and “Human interaction” module. These components interact with
the low level modules by using the YARP network.
“Setting description” acquires raw data from the 2D laser device by using a YARP
connection with the port /bender/laser2d/sender. To do this, the "Setting description"
module creates the YARP port /bender/laser2d/receiver. Furthermore, the information
provided by the “Drive system” low level component is read from a connection with the
port /bender/odometry/sender. The port /bender/odometry/receiver is also created by the “Setting
description” component. Then, this high level module calculates the semantic polygonal
representation of the environment at time using the algorithms detailed in section 3.2.
Figure 6-7. Software component that controls the gyratory servo platform.
This description, together with the representation of the free space described in section
4.1.1, is used by the high level components “Topological SLAM” and “Exploration”.
The “Topological SLAM” component builds a topological map of the setting and allows
the robot to estimate its pose by using the techniques proposed in sections 4.1.2 and 4.2.
Furthermore, this component interacts with the servo that rotates the 2D laser device
platform when it is necessary to obtain a description of the environment located behind the
robot. The YARP ports used for this interaction are /bender/withservo/sender and
/bender/withservo/receiver.
A connection is made between the ports /bender/withservo/sender and /bender/servo/receiver; another connection is made between the ports /bender/servo/sender and /bender/withservo/receiver. The first connection allows the "Topological SLAM" module to send a command to rotate the gyratory platform where the 2D laser device is placed. Once the platform has been rotated, the "Servo controller" low level software module sends an acknowledgement (ACK) message to "Topological SLAM" through the second connection. The high level module stores the semantic polygonal description calculated from the 2D laser raw data and sends another command (by means of the first connection) to rotate the gyratory platform again. When the platform is at the original position, the "Servo controller" module sends an ACK message (through the second connection) to indicate that the robot can go on navigating.
The “Exploration” component allows the robot to plan a local path according to the free
space description. The selection of a goal point to follow could also be based on the
selection of a possible node. In this case, the information provided by the “Topological
SLAM” component is used by the “Exploration” module. Moreover, a human user can
specify a command to select a route, as it is explained in section 4.1.1-2, so the modules
“Exploration” and “Human interaction” are also connected. The steering and velocity
commands are calculated and sent to the low level hardware architecture by using the
connection between the YARP ports /bender/references/sender and /bender/references/receiver.
Finally, “Human interaction” allows to graphically display the results obtained by the rest
of modules. In addition, this high level component sends strings to the “Voice Synthesis”
low level component by using a YARP connection defined by the ports /bender/say/sender
(created by the “Human interaction” component), and /bender/say/receiver. These strings are
played back using the Festival Speech Synthesis System. Therefore, a human user could
easily interact with the service robot in a natural way.
Figure 6-8 shows a more detailed diagram of the high level architecture and the relationship
between high and low-level software components by using YARP ports.
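The port pairs described above could be wired programmatically as in the following C++ sketch; the same connections can equally be established with the yarp command-line tool.

    #include <yarp/os/Network.h>

    int main()
    {
        yarp::os::Network yarp;

        // Wire the high-level components to the low-level ones as described above.
        yarp::os::Network::connect("/bender/laser2d/sender",    "/bender/laser2d/receiver");
        yarp::os::Network::connect("/bender/odometry/sender",   "/bender/odometry/receiver");
        yarp::os::Network::connect("/bender/references/sender", "/bender/references/receiver");
        yarp::os::Network::connect("/bender/withservo/sender",  "/bender/servo/receiver");
        yarp::os::Network::connect("/bender/servo/sender",      "/bender/withservo/receiver");
        yarp::os::Network::connect("/bender/say/sender",        "/bender/say/receiver");
        return 0;
    }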
Figure 6-8. Detailed diagram of the high-level software architecture.
6.2 Main features of the proposed architecture.
The hardware architecture could be extended by adding new devices. For example, a ring
of ultrasonic devices could easily be added. The low level software architecture should also
be extended in this case. Using the YARP network facilitates this task. A new low-level
software component would be created, and the results provided by the new device would
be sent through a new YARP port. Other high level components could also be added to
acquire and process the information provided by the device. The objects shared by using
the YARP connection should also be properly defined. This demonstrates that one of the
main features of the proposed architecture is that it can be easily extended if required.
Furthermore, the high level software architecture is completely independent of the low
level software architecture.
This is clearly an advantage because the high level algorithms can be easily executed in any
robot, or even in a simulated platform, controlled by a low level software architecture that
provides the same YARP ports used for communication.
In fact, in this Thesis the high level software architecture detailed in Figure 6-8 has been
tested in the BENDER robot, in a Pioneer robot and in a Player/Stage simulation
environment. The YARP interface classes used in the experiments and the description of
the configuration files for the simulated environment are described in Appendix I.
6.3 Computational cost analysis.
This section analyzes the high-level software architecture from the point of view of the
processing cost. As mentioned above, the components of the high-level architecture can be
executed either in the onboard computer or remotely. Some experiments have been
performed by executing the algorithms of the high-level software architecture in both the
BENDER onboard computer (OnBC) and a remote computer (RemC) whose
characteristics are described in Table 6-1. The same onboard computer has also been used
in the Pioneer platform.
These experiments demonstrate that the algorithms proposed in this Thesis are efficient
from a computational cost point of view. In order to reach this conclusion, a set of tests
for measuring the computational cost have been carried out. The time consumed by the
tasks shown in Table 6-2 has been separately measured in both computers.
The next subsections show the results obtained after executing each task in the remote computer and in the onboard computer. The time spent by YARP communications and the relation between the number of polylines, the number of stable features and task performance are also considered.
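The per-task times could be measured as in the following C++ sketch; the helper is illustrative, not the instrumentation actually used in the experiments.

    #include <chrono>

    // Measure the wall-clock time, in seconds, consumed by one task in a
    // control cycle; any of the tasks in Table 6-2 could be timed this way.
    template <typename Task>
    double secondsSpent(Task&& task)
    {
        const auto t0 = std::chrono::steady_clock::now();
        task();  // e.g. the SemDescription or Asoc routine
        const auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    }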
Figure 6-9 shows the topological map built during the experiment; the necessary processing
is carried out first in the remote computer and then in the onboard computer. The robot
navigates through the main corridor of a building (from left to right). Several rooms are
located on the left and several secondary corridors are located on the right of the main
corridor.
Computer    Onboard computer                        Remote computer
Model       Asus Eee PC 1008HA Notebook             Sony VAIO VPCF13E1E
Processor   Intel Atom N280 1.66 GHz. Cache 512K.   Intel® Core™ i5-460M 2.53 GHz. Cache 3M.
            1 Core. 2 Threads.                      2 Cores. 4 Threads.
Memory      1 GB                                    4 GB
Table 6-1. Characteristics of the computers used in the experiments. A Linux operating
system is used in both computers.
Task             Description
SemDescription   Obtaining the semantic polygonal description.
Asoc             Data association.
Loc              Localization.
Map              Updating the topological map.
Exp              Generating curvature and velocity references for exploration.
Figure 6-9. Topological map built during the experiments for measuring the computational
cost.
During the experiment, the robot detected the topological nodes shown in Figure 6-9. For each control cycle, the robot acquires the raw data set from the 2D laser device, obtains the semantic representation of the environment, carries out the data association process, updates the topological map and calculates the new curvature and velocity commands according to the exploration algorithm. All these tasks (detailed in Table 6-2) are consecutively repeated in every control cycle.
6.3.1 Computational cost obtained in the remote computer.
This subsection describes the results obtained after executing the above-mentioned tasks in the remote computer, whose features are detailed in Table 6-1. Figure 6-10 shows the time
consumed by the task SemDescription that obtains the semantic description of the setting
(black thin columns). The number of polylines (blue trace), and the number of stable points
(red trace) obtained from this description are also represented. The main Y-axis (on the left) represents the number of polylines and the number of stable points. The secondary Y-axis represents the time specified in seconds. The X-axis represents the number of control cycles repeated during the experiment. Note that the execution time of SemDescription generally remains below one millisecond, well under the time elapsed between two consecutive control cycles.
Figure 6-11 shows the time consumed by the data association task. This is the task with the
highest computational cost. This cost strongly depends on the number of stable points
used to obtain the segments that define natural landmarks.
(Chart: execution time of SemDescription in seconds, together with the number of polylines, NPOL, and the number of stable points, for each control cycle.)
Figure 6-10. Computational cost for the semantic polygonal representation task in the
remote computer.
(Chart: execution time of Asoc in seconds and number of stable points for each control cycle.)
Figure 6-11. Computational cost for the data association task.
If the number of stable points is higher, the segment alignment and association process is
more expensive from the point of view of the processing cost. In general, the time required
for the data association task (Asoc) (black columns) is generally lower than 250 ms. The
association task first carries out an alignment and then the data association algorithm. If
there is not a valid alignment, the association is not carried out and the robot only uses the
information provided by the propioceptive sensors. In this case, the time required for the
data association task is lower, since part of the association algorithm is not executed.
Once the data association task finishes, the task that estimates the pose of the robot (Loc) is executed. Figure 6-12 shows its execution time (black columns) and the number of available stable points (NumberStablePoints) for each control cycle. If there are no stable features, only the odometry calculated from the encoder data is used. In this case, the computational cost is lower, since the EKF algorithm executes only the prediction stage. On the other hand, if there are stable features that have been properly associated, the virtual sensor provides an estimate of the robot pose change. Finally, this estimate is used by the EKF algorithm to execute the update stage. In this case, the computational cost slightly increases, see Figure 6-12. The time required to estimate the pose remains, in any case, well below one millisecond.
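For reference, a generic EKF recursion of the kind used by this task (prediction from odometry, update from the virtual sensor estimate) can be written as follows; the notation is the standard one and not necessarily that of Chapter 4:

$\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_k), \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k$ (prediction)

$K_k = P_{k|k-1} H_k^{\top} \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1}$

$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - h(\hat{x}_{k|k-1}) \right), \qquad P_{k|k} = (I - K_k H_k)\, P_{k|k-1}$ (update)

When no stable features are associated, only the prediction stage is executed, which explains the lower computational cost observed in those cycles.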
Figure 6-12. Computational cost for the self-localization task in the remote computer (black columns: Loc execution time in seconds; trace: NumberStablePoints; X-axis: number of control cycle).
Figure 6-13 shows the computational cost for the task that builds the topological map (Map), as well as the number of polylines (NPOL). There is a clear relation between the number of polylines and the time required to execute the task. This occurs because the set of possible topological nodes is calculated from the semantic description of the free space. This description consists of the set of convex polygons obtained by taking into account the set of labelled polylines that describe the setting. The computational cost is, again, generally well below one millisecond.
Finally, Figure 6-14 shows the computational cost for the task that selects a suitable route by taking into account the free space semantic description (Exp), together with the number of polylines (NPOL). The number of routes increases with the complexity of the scenario.
Figure 6-13. Computational cost for the topological map building task in the remote computer (black columns: Map execution time in seconds; trace: NPOL; X-axis: number of cycles).
Figure 6-14. Computational cost for the exploration task based on the free space semantic description executed in the remote computer (black columns: Exp execution time in seconds; trace: NPOL; X-axis: number of control cycle).
If the free space representation is obtained from a semantic polygonal description of the setting with a large number of polylines, the time required to execute the exploration task is slightly higher. Note that the time required to execute this task remains within a few milliseconds.
6.3.2 Computational cost obtained in the onboard computer.
This subsection describes the results obtained after executing the tasks mentioned in section 6.2 in the onboard computer, whose features are detailed in Table 6-1. The topological map built by the robot during the experiment is the same as the map built for the test in the remote computer, see Figure 6-9. Therefore, the same analysis is presented as follows.
Figure 6-15 shows, for each control cycle, the computational cost of the task that builds the semantic polygonal description of the setting (SemDescription), the number of polylines (NPOL) and the number of stable points detected while the robot navigates through the environment (NumberStablePoints). It can be seen that the time required for executing this task is generally of a few milliseconds. This value is higher than the one obtained when the task is executed in the more powerful remote computer; however, the performance is still appropriate, and it demonstrates that the computational cost of the algorithms that allow the robot to accomplish this task is low.
Figure 6-16 shows, for each control cycle, the computational cost of the data association task (ASOC) and the number of detected stable points (NumberStablePoints). In this case, the time required for executing this task generally remains moderate. However, if the number of stable points is high and there are many possible natural landmarks that have to be aligned and associated, the time required for this operation can be significantly higher (more than one second).
Figure 6-15. Computational cost for the semantic polygonal representation task in the onboard computer (black columns: SemDescription execution time in seconds; traces: NPOL and NumberStablePoints; X-axis: number of control cycle).
Figure 6-16. Computational cost for the data association task in the onboard computer (black columns: ASOC execution time in seconds; trace: NumberStablePoints; X-axis: number of control cycle).
This is clearly a problem that should be taken into account, because the duration of the control cycle should be kept stable and below a maximum limit that can be estimated experimentally. Executing the data association process remotely or increasing the processing capabilities of the onboard computer are two feasible solutions. The optimization of the algorithm that executes the data association task is a third option that will be considered in future works. The use of the YARP network facilitates data sharing between software components; therefore, executing the computationally expensive processes remotely is a good solution if a high bandwidth WiFi connection is available. This avoids increasing the cost of the service robot: any personal computer can be used as the remote computer by simply installing the open-source YARP software, as sketched below.
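The sketch assumes the port names of Appendix I and standard YARP calls; the component logic itself is omitted.

#include <yarp/os/Network.h>
#include <yarp/os/Port.h>
using namespace yarp::os;

int main() {
    Network yarp;   // joins the YARP network (a YARP name server must be running)

    // On the robot: a port that publishes the raw 2D laser data.
    Port laserOut;
    laserOut.open("/bender/laser2d/sender");

    // The receiver side runs on any personal computer with YARP installed;
    // the connection is routed through the WiFi network by YARP.
    Network::connect("/bender/laser2d/sender", "/bender/laser2d/receiver");

    // Every control cycle, a yarping_laser_player object (see Appendix I)
    // would be written to the port:
    //   yarping_laser_player msg; msg.set_laser_player(scan); laserOut.write(msg);
    return 0;
}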
Figure 6-17 shows the computational cost of the self-localization task (Loc) and the number of detected stable points (NumberStablePoints) for each control cycle. In this case, the time required for accomplishing the task remains very low, even when the number of stable points increases. This can be considered a good result.
Figure 6-17. Computational cost for the self-localization task in the onboard computer (black columns: Loc execution time in seconds; trace: NumberStablePoints; X-axis: number of control cycle).
Figure 6-18 shows, for each control cycle, the computational cost of the topological map building task (Map) and the number of polylines (NPOL). The time required for executing this task generally remains below one millisecond. As can be observed, there is a strong relation between the time consumed by the task and the number of polylines that must be processed to accomplish it. Figure 6-19 shows that the computational cost of the exploration task is also low (it does not exceed a few tens of milliseconds), even with the lower processing capability of the onboard computer. Finally, an experiment has been carried out to measure the time consumed when data are shared by using the YARP network. The results demonstrate that this time is also very low, see Figure 6-20.
Figure 6-18. Computational cost for the topological map task in the onboard computer (black columns: Map execution time in seconds; trace: NPOL; X-axis: number of control cycle).
Figure 6-19. Computational cost for the exploration task in the onboard computer (black columns: Exp execution time in seconds; trace: NPOL; X-axis: number of control cycle).
Figure 6-20. Computational cost for YARP communications (series: LASER and POSE transmission times in seconds; X-axis: number of control cycle).
In this test, the low level software components DriveSystem and 2DLaser send information to the high level software components by using the corresponding YARP connections. The raw data provided by the 2D laser device are sent by the 2DLaser component every control cycle. The time required for this task (LASER in Figure 6-20) is generally well below one millisecond. The graph associated with the POSE task shows that the time required by the YARP network to send the robot pose is also very small. Some spurious high values can be observed; however, the number of such spurious values is very low, and they are generally related to the behaviour of the WiFi connection that the YARP network uses.
6.4 Conclusion.
In this chapter, a low cost architecture for Service Robotics has been proposed and validated. Specifically, the robotic platform BENDER has been designed and built, and it has been used to carry out the experiments presented in this Thesis. The BENDER robot is clearly inexpensive compared to commercial robots, while offering broadly the same features.
Even the most subtle spiders may leave a weak thread.
J.R.R. Tolkien
Chapter 7
Conclusions and future work.
This chapter presents the Thesis conclusions and a number of foreseeable future developments. A summary of the main contributions and an analysis of the achieved objectives are first described. Next, future research activities to extend the work presented in this Thesis are detailed.
7.1 Conclusions and contributions.
The development of inexpensive indoor service mobile robots that can efficiently perform service tasks for human users in domestic environments is currently attracting growing interest. Cost reduction of robotics hardware is a critical issue that must be considered in order to successfully introduce real robots in real domestic scenarios. However, decreasing the hardware cost normally reduces the accuracy of the results obtained when a specific task is executed, since it is necessary to reduce the cost of important hardware components such as proprioceptive and exteroceptive sensors and computers. Therefore, requirements related to computational efficiency should be taken into account when robot perception and control algorithms are designed and implemented.
In particular, autonomous navigation is a key task which any mobile service robot should
efficiently accomplish, since the success of many other tasks depends on the capabilities of
the robot to move across a dynamic scenario.
Designing an efficient software architecture to solve the autonomous navigation problem
involves implementing several software components that accomplish a set of significant
subtasks.
In this Thesis, a set of Computational Intelligence-based techniques that solve several
problems related to autonomous navigation have been proposed, designed and validated.
Specifically, these techniques meet the objectives defined in Chapter 1. The main
contributions of this work are summarized as follows:
• A software component that builds a low level semantic representation of indoor settings in real time from 2D laser scanner raw data has been implemented. This
software component consists of a set of computationally inexpensive algorithms that
obtain a semantically labelled polygonal description of the environment; specifically, a
list of polylines formed by segments whose end points are tagged by using low level
semantic labels. These labels allow the robot to distinguish relevant features such as
saturated areas, occlusions, inside and outside angle corners and descriptive points that
represent frontal and lateral walls.
While many algorithms that describe the setting by using geometric elements do not include semantic information in the environment representation, the setting description technique proposed in this work takes these abstract concepts into account. This is an advantage because the robot acquires a certain degree of abstract awareness of the surrounding space. This knowledge is particularly useful when the behaviour of the robot must be controlled or when the robot must make decisions considering human behaviour.
• A software component that provides a semantic description of the free space
considering the semantically labelled polygonal representation of the environment has
been developed. The description consists of a set of convex sub-polygons whose edges
are semantically labelled using tags that determine if a sub-area is potentially explorable.
• Using both semantic descriptions (of the environment and of the free space), a hierarchical fuzzy system selects a safe path to follow. The set of feasible paths can be obtained by two different methods. The first method analyzes the relations between semantic labels. From these relations, a set of traversable segments is calculated. The middle points of these traversable segments are properly grouped and
approximated by straight lines that determine safe routes to follow. The second
method is an enhanced version of the first one. In this case, the routes are defined by
the centroids of the convex sub-polygons that represent sub-areas of the free space.
• A hierarchical fuzzy system has been implemented to allow the robot to avoid mobile obstacles. The control of the velocity parameter is provided by another fuzzy system.
• A method for topological map building that takes into account the semantic representation of the free space has also been designed, implemented and validated. The topological map is defined as a graph whose nodes store significant semantic and geometric information of specific places. Nodes are linked by paths defined as a list of robot poses. Each robot pose is estimated by combining the information acquired by proprioceptive and exteroceptive sensors using a method based on an Extended Kalman Filter (EKF). Natural landmark selection is enhanced by using a hierarchical fuzzy system. Such natural landmarks are used in a data association procedure that estimates the robot pose change between two consecutive instants.
This method for topological mapping makes it possible to optimize the description of the trajectory followed by the robot. Only relevant places are remembered, together with the trajectories that link such places; therefore, only a reduced group of semantic features together with their positions is stored, and data storage and processing requirements are low.
• The software architecture has been validated using a simulation framework based on the open source Player/Stage tool. Furthermore, a set of experiments has been carried
the open source Player/Stage tool. Furthermore, a set of experiments have been carried
out using a commercial robotic platform (Pioneer) and the low-cost service robot
BENDER, specifically designed and built for this Thesis. In particular, experiments
carried out using BENDER demonstrate that the software components proposed in
this work provide good results even when the hardware cost is reduced.
• Finally, the robot modelling and path tracking issues have also been addressed. A global
strategy to find a good model of a real car-like vehicle has been proposed. The
designed method uses a set of Radial Basis Function Neural Networks (RBFNN),
optimally selected by a Multi-Objective Genetic Algorithm (MOGA). Furthermore, the
development of an efficient and computationally inexpensive path tracking method
using a Multi-layer Perceptron Neural Network (MLPNN) has also been presented.
These Computational Intelligence-based techniques prove that it is possible to learn the dynamic and kinematic behaviour of a robot with strict non-holonomic and manoeuvrability constraints without using complex mathematical models. The real robot ROMEO 4R has been used to validate such techniques.
As a final remark, it is worth mentioning that this Thesis demonstrates that well-known
Computational Intelligence paradigms such as Fuzzy Systems, Artificial Neural Networks
and Multi-Objective Genetic Algorithms provide very useful tools to solve problems in the
Robotics context.
7.2 Dissemination of the results.
Several international and national publications have been presented during the
development of this Thesis. In the following, a list of the published results arranged by
publication date is included:
• Pavón N., Ferruz J., Ollero A., “Navegación autónoma basada en la
representación del entorno mediante polilíneas etiquetadas semánticamente”.
ROBOT 2011. Robótica Experimental. Sevilla (Spain). Noviembre 2011, pp. 1 – 6.
• Pavón N., Ferruz J., Ollero A., “Describing the environment using semantic
labelled polylines from 2D laser scanned raw data: Application to autonomous
navigation”. IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS 2010), Taipei (Taiwan), September 2010, pp. 3257 – 3262.
• Pavón N., Ferruz J., "BENDER 3.0, una Plataforma Robótica Remota para
Aplicaciones Docentes: Aplicación a Programación Concurrente". Revista
Iberoamericana de Automática e Informática Industrial (RIAI). CEA-IFAC, Vol. 7, Num. 1,
January 2010, pp. 54-63.
• Pavón N., Falcón C., Ferruz J., “BENDER: An open, low–cost robot architecture
for educational and research purposes”, Robotica 2010, 10th Conference on
Mobile Robots and Competitions. Leiria (Portugal), March 2010.
• Pavón N., Ferruz, J., “B.EN.DE.R. 2.0: Basic ENvironment for DEveloping
Robotic software: Application to educational purposes”. IEEE International
Conference on Mechatronics, ICM 2009. Málaga (Spain), April 2009, pp 1 – 6.
• Pavón N., Ferruz J., Ruano A.E., Ollero A. “Using a genetic algorithm to obtain a
neural network-based model of a real autonomous vehicle”. IEEE International
Symposium on Industrial Electronics, ISIE 2008. Cambridge (United Kingdom),
July 2008, pp 929 – 934.
• Pavón N., Ferruz, J., "Taller de Programación Concurrente: Diseño de software
para robots basado en hilos Posix". XXVIII Jornadas de Automática. Huelva
(Spain). Year 2007.
• Pavón N., Sánchez O., Ollero A. “MLP Neural Networks to Control a Mobile
Robot Using Multiple Goal Points”. International Symposium on Robotics and
Applications ISORA, World Automation Congress WAC 2004, Seville (Spain), July
2004.
• Pavón N., Sánchez O., Ollero A. "Mobile robot path tracking using a RBF neural
network". Neural Network Engineering Experiences - EANN'03, pp 364-371.
Torremolinos, Málaga (Spain), September 2003.
• Pavón N., Sánchez O., Ollero A. "NeurofuzzyMT: Neuro - Fuzzy modelling tool.
Herramienta de modelado neuroborroso. Aplicaciones a Robótica". XI
Congreso Español sobre Tecnologías y Lógica Fuzzy (ESTYLF). León (Spain).
Year 2002.
Furthermore, it is worth mentioning that the BENDER robotic platform was awarded the Prodel Award by CEA-IFAC during the conference XXVIII Jornadas de Automática in 2007.
7.3 Future work.
Future research efforts will consist of extensions of the work presented in this Thesis, as well as the exploration of new problems raised while developing the proposed techniques.
Regarding the topological mapping and self-localization problems, several future works are
proposed:
• Nodes are linked by trajectories consisting of robot poses. Such poses are estimated by combining data from proprioceptive and exteroceptive sensors. In particular, an EKF-based method is used for data fusion. An error propagation-based technique makes it possible to estimate the covariance corresponding to virtual sensor measurements obtained from
raw data acquired by the 2D laser scanner. The uncertainty of such raw data is
estimated by considering the error information offered by the manufacturer. This
procedure could be enhanced by using real sensed data which would be used to obtain
an uncertainty model of the 2D laser scanner.
• The natural landmarks extraction method could also be improved by training the
hierarchical fuzzy system with real data.
• It would be interesting to compare the results obtained by the EKF with another Computational Intelligence-based technique that would manage the uncertainty by using fuzzy confidence degree values. The new hierarchical fuzzy system
should be adjusted using real training data to improve the rule bases designed by the
expert.
• Maintaining a suitable measurement of the confidence of the full map is still a
challenge. This problem could be solved by using fuzzy systems designed to properly
update the position of the nodes and the pose of the robot according to several
circumstances related to the age of the node and the set of relevant features found
along the trajectory that links two adjacent nodes.
• Filtering of mobile and small objects will also be addressed to enhance the performance
of the algorithm that detects possible nodes.
Regarding the inexpensive robotic platform BENDER, several improvements are
currently being developed:
• In order to reduce the economic cost of the robot, the 2D laser device will be
substituted with a hybrid device based on two webcams and a low cost 3D depth
sensor (Kinect).
• A more powerful drive system (with more accurate encoders) will allow the robot to
carry heavier loads and navigate more precisely.
Finally, new challenges related to the improvement of the scenario representation and interpretation have emerged during the development of this Thesis. Moreover, interaction with human users is another issue that will be addressed:
• The proposed representation of the environment will be enhanced by using features
obtained from the artificial vision system. As mentioned above, this artificial vision
system will consist of a low cost stereo pair based on webcams and a 3D depth sensor
based on the Kinect device.
• Methods for 3D reconstruction of the setting and object recognition will also be
designed. Computational efficiency will be a key aspect; therefore, efforts will be aimed
at the development of efficient algorithms by using hardware architectures, based on
FPGA (Field Programmable Gate Array) and GPU (Graphics Processing Unit).
• The interaction with human users will be improved by including techniques that allow
the robot to understand simple voice commands and gestures.
Appendix I
In this appendix, several issues related to the software architecture proposed in Chapter 6 are addressed. In particular, the formal definition of the software architecture is given. The YARP interfaces needed for communication between software components are also described.
Furthermore, the configuration of the Player/Stage-based simulated environment used in
the experiments is detailed.
AI-1) Definition of the software architecture.
The low level software architecture is described by using a set of graphic diagrams whose
meaning is explained in this section.
a) The representation of a multithreaded independent software component is shown in
Figure AI-1. A synchronous thread is a thread of execution that is periodically repeated,
whereas an asynchronous thread cyclically executes a task, but each iteration only starts
after an asynchronous signal is received.
b) The representation of the communication between a software component and the Player
server is shown in Figure AI-2. A configuration file is needed to load the Player drivers.
The interaction between the software architecture and the Hokuyo 2D laser device is
carried out by using the driver “hokuyoaist”.
Figure AI- 1. Graphical items that formally define a low level software component.
Figure AI- 2. Graphical items that formally describe the connection between a software
component and the Player server.
AI-2) YARP interfaces.
This section shows the main YARP interfaces used in the proposed software architecture.
When a connection between two YARP ports is made, it is necessary to define a class
whose attributes store the information that will be sent and received by using the YARP
network. The interfaces of the necessary classes designed for the software architecture are
listed as follows.
• Connection between /bender/references/sender and /bender/references/receiver.
• Connection between /bender/odometry/sender and /bender/odometry/receiver.
C++ Code A - 1

#include <yarp/os/Portable.h>
#include <yarp/os/ConnectionReader.h>
#include <yarp/os/ConnectionWriter.h>
using namespace yarp::os;

typedef struct{
    double x, y, heading, v;
}TPose;

typedef struct{
    double v, g;                       // Velocity and curvature references
}TReferences;

class yarping_position2d: public Portable{
private:
    TPose pose;
public:
    void set_pose(TPose pose);
    TPose get_pose(void);
    virtual bool write(ConnectionWriter& connection);
    virtual bool read(ConnectionReader& connection);
};
// ---------------------------------------------------------------------------
class yarping_references: public Portable{
private:
    TReferences ref;
public:
    void init_ref(void);
    void set_ref(TReferences ref);
    TReferences get_ref(void);
    virtual bool read(ConnectionReader& connection);
    virtual bool write(ConnectionWriter& connection);
};
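A possible implementation of the read/write pair declared above for yarping_position2d is sketched next. It assumes the classic YARP streaming calls appendDouble/expectDouble and is an illustration, not the actual code of the architecture.

bool yarping_position2d::write(ConnectionWriter& connection){
    // Serialize the four fields of TPose in a fixed order.
    connection.appendDouble(pose.x);
    connection.appendDouble(pose.y);
    connection.appendDouble(pose.heading);
    connection.appendDouble(pose.v);
    return !connection.isError();
}

bool yarping_position2d::read(ConnectionReader& connection){
    // Deserialize the fields in the same order used by write().
    pose.x = connection.expectDouble();
    pose.y = connection.expectDouble();
    pose.heading = connection.expectDouble();
    pose.v = connection.expectDouble();
    return !connection.isError();
}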
• Connection between /bender/laser2d/sender and /bender/laser2d/receiver.
C++ Code A - 2

const double PRECISION_LASER_HOKUYO = 360.0/1024.0;
#define MAX_MEASURE_DATA_VALUES 682

typedef struct{
    double m_sec;                            // Timestamp (seconds)
    double m_nsec;                           // Timestamp (nanoseconds)
    int dataLength;                          // Data length
    int initialAngle;                        // Initial angle (degrees)
    double resolutionAngle;                  // Angle resolution
    double data[MAX_MEASURE_DATA_VALUES];    // Raw data (meters)
    double range;                            // Range
    double dataStdDev;                       // Standard deviation of the raw data (variance)
    int status;                              // Status variable
}TLaser;
// ----------------------------------------------------------------------------
class yarping_laser_player: public Portable{
private:
    TLaser laser_player;
public:
    void set_laser_player(TLaser laser_player);
    TLaser get_laser_player(void);
    virtual bool write(ConnectionWriter& connection);
    virtual bool read(ConnectionReader& connection);
};
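To relate this structure to the semantic description pipeline, each raw range can be converted into a Cartesian point in the laser reference frame. The helper below is written only for illustration; the names and the assumption that resolutionAngle is expressed in degrees are ours.

#include <cmath>

struct TPoint2D { double x, y; };

// Illustrative helper (not part of the original architecture): converts the
// i-th beam of a TLaser scan into a 2D point in the laser reference frame.
TPoint2D beamToPoint(const TLaser& scan, int i) {
    const double DEG2RAD = 3.14159265358979323846 / 180.0;
    double bearing = (scan.initialAngle + i * scan.resolutionAngle) * DEG2RAD;
    TPoint2D p;
    p.x = scan.data[i] * std::cos(bearing);  // data[i] is the range in meters
    p.y = scan.data[i] * std::sin(bearing);
    return p;
}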
AI-3) Configuration files for the designed Player/Stage-based simulated environment.
This section shows the configuration files used in the Player/Stage-based simulated
environment. The definition of the world is stored in the bender.world file. The definition of
the simulated BENDER robot is stored in the bender.inc file. The Hokuyo 2D laser device is
also simulated, and its configuration is stored in the hokuyo.inc file.
1) The bender.cfg file.
driver
(
name "stage"
provides ["simulation:0"]
plugin "stageplugin"
worldfile "bender.world"
)
driver
(
name "stage"
provides [ "6665:position2d:0" "6665:laser:0" "6665:sonar:0"]
model "bendercito"
)
driver
(
name "festival"
provides ["speech:0"]
)
2) The bender.world file.
include "map.inc"
include "bender.inc"
include "hokuyo.inc"
resolution 0.005
paused 0
size [20 20]
window
(
size [1400.000 700.000]
scale 52.592
# window_size/map_size = 1000/20
show_data 1
)
floorplan
(
name "cave"
size [20.000 15.000 1.000]
bitmap "img/hospital.png"
)
bender
(
name "bendercito"
pose [-7.980 2.788 0 -1.998]
hokuyolaser()
)
3) The hokuyo.inc file.
define hokuyolaser laser
(
range_min 0.0
range_max 5.6
fov 240.0
samples 681
color "blue"
pose [0.11 0 0 0]
size [0.05 0.05 0.07]
)
4) The map.inc file.
define floorplan model
( color "gray30"
boundary 1
gui_nose 0
gui_grid 0
gui_movemask 0
gui_outline 0
gripper_return 0
fiducial_return 0
laser_return 1
ranger_return 1
)
5) The bender.inc file.
define base position
(
  color "yellow"        # Default color.
  drive "diff"          # Differential steering model.
  gui_nose 1            # Draw a nose on the robot so we can see which way it points
  obstacle_return 1     # Can hit things.
  laser_return 1        # reflects laser beams
  ranger_return 1       # reflects sonar beams
  blobfinder_return 1   # Seen by blobfinders
  fiducial_return 1     # Seen as "1" fiducial finders
  localization "odom"
  localization_origin [0 0 0 0]       # Start odometry at (0, 0, 0, 0).
  odom_error [ 0.01 0.01 0.01 0.01 ]  # Odometry error or slip in X, Y and Theta
)
define bender base
(
size [0.30 0.30 0.45]
blocks 4
# Front -------------------block
(
points 9
point[0] [0.150 0.000]
point[1] [0.145 0.040]
point[2] [0.125 0.080]
point[3] [0.090 0.110]
point[4] [0.001 0.110]
point[5] [0.000 -0.110]
point[6] [0.090 -0.110]
point[7] [0.125 -0.080]
point[8] [0.145 -0.040]
)
# Rear ---------------------block
(
points 9
point[0] [0.000 0.150]
point[1] [-0.060 0.140]
point[2] [-0.110 0.110]
point[3] [-0.140 0.060]
point[4] [-0.150 0.000]
point[5] [-0.140 -0.060]
point[6] [-0.110 -0.110]
point[7] [-0.060 -0.140]
point[8] [0.000 -0.150]
)
# Wheel 1 ----------------------block
(
points 4
point[0] [0.085 0.12]
point[1] [0.085 0.14]
point[2] [0.005 0.14]
point[3] [0.005 0.12]
color "gray15"
)
# Wheel 2 ----------------------block
(
points 4
point[0] [0.085 -0.14]
point[1] [0.085 -0.12]
point[2] [0.005 -0.12]
point[3] [0.005 -0.14]
color "gray15"
)
mass 7.5
drive "diff"
)
AI-4) Hardware description of BENDER.
This section shows the description and cost of the hardware elements of the robot
BENDER, see Table AI-1. Furthermore, a comparative study about the functionality and
cost of commercial robots and the BENDER robot is also detailed, see Table AI-2. Figure
AI-3 shows the main components of the drive system.
Description | Cost (€)
Differential drive system consisting of two 12 V EMG30 motors with encoders, an MD25 hardware controller (dual 12 V, 2.8 A H-bridge motor drive) and two wheels | 161,15
Two AF Logitech cameras with independent incorporated PTZ | 270
Hokuyo URG-04LX 2D laser device | 1500
Hitec servo motor | 15,35
Servo motor controller | 42,45
Tilt Compensated Compass Module CMPS09 | 52,75
Onboard computer: Asus Eee PC 1008HA notebook, Intel Atom N280 1.66 GHz, 1 GB RAM, 250 GB hard disk, Wi-Fi, long life battery (6 hours), weight 1.1 kg | 350
Battery: 12 V and 7 Ah | 27,83
Chassis components | 75
TOTAL | 2.494,53
Table AI-1. Description and cost of the hardware elements of the robot BENDER.
Robot | Estimated Cost
Wifibot | 2700 €
Roomba | 525 €
PeopleBot | 36995 $
GhiaBot | 38630 $
Pioneer 3-AT | 39980 $
Pioneer P3-DX | 17185 $
AmigoBot | 3495 $
PatrolBot | 48995 $
Seekur Jr. | 53490 $
BENDER | 2495 €
(The per-robot feature marks of the original table could not be reliably recovered from the transcription and are omitted.)
P: Number of processing units / R: Computer performance (1=low…5=high) / W: Wifi / E: Encoders / I: Infrared devices / U: Ultrasonic devices / L: 2D Laser device / MV: Monocular vision system / SV: Stereo vision system / G: Gyroscope or digital compass / GPS: Global Positioning System / Ex: Extension possibilities / T: Tele-operated / PTZ: Pan & Tilt & Zoom / Pu: Purpose (GP=General purpose, SP=Specific purpose) / × = Standard version / o = Optional
Table AI-2. Comparative study about functionality and cost of commercial robots and the BENDER robot.
Figure AI-3. Drive system components of the BENDER robot.
Appendix II
This appendix includes the full derivation of the equations that make it possible to estimate the covariance matrix of the pose change obtained from the virtual sensor described in Chapter 4.
AII-1) Definition of the problem.
As explained in Chapter 3, the virtual sensor provides a pose change estimate by using the set of segments associated between two consecutive instants. Chapter 4 describes a statistical technique that obtains the final pose change estimation at the current time, taking into account all the pose change estimates for each associated segment. This final pose change estimation depends on the uncertainty of each individual pose change estimate. The uncertainty is represented by a Gaussian distribution.
In order to calculate the parameters of the Gaussian distributions associated with each individual pose change estimation, the uncertainty in the raw measurements obtained by a 2D laser scanner device is considered. Then, the results are applied to the Hokuyo 2D laser device used by the robots Pioneer and BENDER in the experimental work of this Thesis.
In particular, this 2D laser model provides 682 measurements per scan that cover a 240° field of view. According to the specifications, the maximum and minimum measurable distances are 5.6 and 0.02 meters, respectively. The angular resolution is approximately 0.36° (360°/1024 steps) and the scanning time is 100 ms per scan. The accuracy is ±10 mm when the distance is in the interval [0.02 m, 1 m], and the maximum error is 1% of the measurement in any other case.
Every single raw measurement is a 2D point in polar form (ρ, θ), relative to the laser device reference system. Usually, only errors in the range component ρ are considered to be significant [49], so in the following no uncertainty will be estimated for θ.
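From the accuracy figures above, a simple piecewise model of the standard deviation of the range $\rho$ can be assumed; this is an illustrative model consistent with the manufacturer specifications, not necessarily the exact model fitted in this Thesis:

$\sigma_\rho(\rho) = \begin{cases} 0.01\ \mathrm{m}, & 0.02\ \mathrm{m} \le \rho \le 1\ \mathrm{m} \\ 0.01\,\rho, & 1\ \mathrm{m} < \rho \le 5.6\ \mathrm{m} \end{cases}$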
The virtual sensor (see Chapter 4) computes a pose change estimate by using the information provided by a set of matching segments obtained from raw 2D laser scanner data. Thus, the uncertainty of every individual pose change estimation will be estimated by taking into account the measurement uncertainties of this input device and propagating them to the incremental pose estimation results.
As Figure AII - 1 shows, the end points of the segments are directly selected among the raw points measured by the laser device; therefore, the pose change estimation uncertainty can be calculated by propagating the uncertainty in the distance to the end points of the segment at both consecutive instants.
In general, the pose change estimate will be a non-linear function whose inputs are the polar coordinates of the segment end points at the two consecutive instants, since at each instant the segment is bounded by two measured points. Only the uncertainty of the range inputs will be considered; it is modeled with Gaussian distributions with the corresponding covariance values. Furthermore, it is assumed that the distance measurements are not correlated.
The uncertainty present in a set of inputs can be easily propagated to the output if both are related by a linear function and input errors follow a Gaussian distribution. If the function is non-linear, the uncertainty in the outputs can still be approximately computed by linearization.
In general, consider that $\mathbf{y} = f(\mathbf{x})$ is a set of $m$ functions of $n$ inputs, where every single function $f_i$ can be linearized by using the Taylor approximation:

$f_i(\mathbf{x}) \approx \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j}\, x_j + c_i$   (AII- 1)

where $i$ is in the interval $[1, m]$ and $c_i$ is a constant value that does not contribute to the error in $f_i$. In matrix notation, the following equation can be defined:

$\begin{pmatrix} f_1(\mathbf{x}) \\ \vdots \\ f_m(\mathbf{x}) \end{pmatrix} \approx \begin{pmatrix} \partial f_1/\partial x_1 & \cdots & \partial f_1/\partial x_n \\ \vdots & \ddots & \vdots \\ \partial f_m/\partial x_1 & \cdots & \partial f_m/\partial x_n \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} + \begin{pmatrix} c_1 \\ \vdots \\ c_m \end{pmatrix}$   (AII- 2)

Therefore, the equation (AII- 2) can be simply written as:

$\mathbf{y} \approx J\,\mathbf{x} + \mathbf{c}$   (AII- 3)
Figure AII - 1. Pose change estimation from an associated segment between consecutive instants. (a) Geometric description of the parameters involved in the computation of position change. (b) Geometric description of the parameters involved in the computation of heading change.
where $J$ is the Jacobian matrix, $\mathbf{x}$ is the input vector and $\mathbf{c}$ is a vector of constant values that does not contribute to the error in $\mathbf{y}$. If each input $x_j$ has an error defined by its variance $\sigma_j^2$ and there is correlation between the inputs, the input covariance matrix can be defined by the following equation:

$C_x = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{pmatrix}$   (AII- 4)

The covariance of $\mathbf{y}$ can be approximately computed as follows:

$C_y \approx J\, C_x\, J^{\top}$   (AII- 5)

If there is no correlation between inputs:

$C_x = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2)$   (AII- 6)

$C_{y,kl} \approx \sum_{j=1}^{n} \frac{\partial f_k}{\partial x_j}\, \frac{\partial f_l}{\partial x_j}\, \sigma_j^2$   (AII- 7)
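As a small illustration of equations (AII- 5) to (AII- 7), the following generic routine propagates uncorrelated input variances through a given Jacobian; the names and dimensions are invented for the example.

#include <vector>

// Computes Cy = J * diag(var) * J^T for a Jacobian J of size m x n and a
// vector of n uncorrelated input variances (equation (AII- 7) componentwise).
std::vector< std::vector<double> >
propagate(const std::vector< std::vector<double> >& J,
          const std::vector<double>& var) {
    const size_t m = J.size(), n = var.size();
    std::vector< std::vector<double> > Cy(m, std::vector<double>(m, 0.0));
    for (size_t k = 0; k < m; ++k)
        for (size_t l = 0; l < m; ++l)
            for (size_t j = 0; j < n; ++j)
                Cy[k][l] += J[k][j] * J[l][j] * var[j];
    return Cy;
}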
The components of the pose change estimate obtained from a single matching pair of segments form a function whose inputs are the four uncertain range measurements:
(AII- 8)
Then, this function can be linearized by using the Taylor approximation, where the Jacobian is defined as:
(AII- 9)
Then the covariance matrix for the pose change estimate can be written as follows:
(AII- 10)
where the input vector is defined as follows:
(AII- 11)
The covariance matrix for the input is:
(AII- 12)
In equation (AII- 12) the covariance terms have been renamed so as to allow writing the expressions in a more compact indexed form. According to equation (AII- 7), the covariance matrix can be defined as follows:
(AII- 13)
Taking into account that the cross-covariance terms are considered small enough to be neglected, it is possible to calculate each component of the matrix by using the following equations:
(AII- 14)
(AII- 15)
(AII- 16)
(AII- 17)
(AII- 18)
(AII- 19)
AII-2) Calculation of partial derivatives.
In this section, the components of the Jacobian are calculated. In particular, only the partial derivatives with respect to the four inputs with uncertainty are considered.
AII-2.1) Partial derivatives for the change in heading.
As Figure AII - 1 (b) shows, the change in heading of the robot is equal to the opposite of the apparent change of heading of the segment between the two consecutive instants; denoting by $\alpha_k$ the heading of the segment at instant $k$:

$\Delta\theta = -(\alpha_{k+1} - \alpha_k)$   (AII- 20)

The partial derivatives of the change in heading with respect to the four uncertain range inputs are:
(AII- 21)
(AII- 22)
(AII- 23)
(AII- 24)
The angles $\alpha_k$ and $\alpha_{k+1}$ between the segment and the X axis of the laser reference system at the two consecutive instants are calculated by using the following equations:
(AII- 25)
(AII- 26)
Furthermore, the equations (AII- 25) and (AII- 26) can be rewritten as follows:
(AII- 27)
(AII- 28)
where the auxiliary terms are defined considering that:
(AII- 29)
(AII- 30)
The derivatives of these angles with respect to the uncertain inputs can be calculated by using the following equations:
(AII- 31)
(AII- 32)
(AII- 33)
(AII- 34)
(AII- 35)
(AII- 36)
Finally, these derivatives can be obtained by using the following simplified equations:
(AII- 37)
(AII- 38)
(AII- 39)
(AII- 40)
Taking into account equations (AII- 21) to (AII- 24) and equations (AII- 37) to (AII- 40), the highlighted components of the Jacobian (see (AII- 9)) shown in Figure AII - 2 are calculated.
Figure AII - 2. Components of the Jacobian related to the change in heading.
AII-2.2) Partial derivatives for the change in position.
As Figure AII - 1 (a) shows, the change in position is calculated from the position estimates obtained by considering a pair of segments associated at consecutive instants as reference systems (both segments represent the same real segment in the global reference system). Once the positions of the robot with respect to each segment-based reference system are calculated, a geometric transformation is carried out in order to express the position of the robot at the current instant in the robot reference system of the previous instant. The following equation summarizes this last step:
(AII- 41)
where the additional terms involved are calculated as follows:
(AII- 42)
The equation (AII- 41) can be rewritten as follows:
(AII- 43)
From (AII- 42) and (AII- 43), the following equations are obtained:
(AII- 44)
(AII- 45)
Using a trigonometric identity, the equation (AII- 44) can be simplified as follows:
(AII- 46)
On the other hand, the equation (AII- 45) can also be simplified as follows:
(AII- 47)
According to Figure AII - 3, the corresponding components of the Jacobian should be calculated.
In general, it is necessary to calculate the partial derivatives of both position components with respect to each of the four uncertain inputs. The following equations allow computing these partial derivatives:
(AII- 48)
Figure AII - 3. Components of the Jacobian related to the change in position.
The terms in (AII- 48) can be obtained as follows:
(AII- 49)
where the derivatives of the intermediate terms can be expanded as follows:
(AII- 50)
In order to fully expand the preceding equations, it is necessary to calculate several additional partial derivatives. The last four of these terms are already known, since they are calculated by using the equations (AII- 37) and (AII- 38).
The involved angles are defined by the following equations (using the Sine Theorem):
(AII- 51)
where the auxiliary length can be expressed as follows (by using the Cosine Theorem):
(AII- 52)
Then, the corresponding partial derivatives can be calculated as follows:
(AII- 53)
Since some terms do not depend on the corresponding inputs, their partial derivatives vanish; the remaining derivative can be calculated by using the following equation:
(AII- 54)
The same procedure can be applied in order to calculate the remaining partial derivatives:
(AII- 55)
(AII- 56)
The rest of the partial derivatives are calculated as follows:
(AII- 57)
(AII- 58)
(AII- 59)
(AII- 60)
where the remaining terms are calculated by using the following equations:
(AII- 61)
(AII- 62)
Finally, the terms of the Jacobian highlighted in Figure AII - 3 can be calculated as follows, by defining the appropriate auxiliary terms:
(AII- 63)
(AII- 64)
(AII- 65)
(AII- 66)
(AII- 67)
(AII- 68)
(AII- 69)
(AII- 70)
References
[1]
S. Thrun, “Robotic Mapping: A Survey,” Pittsburgh, 2002.
[2]
A. Ollero and O. Amidi, “Predictive path tracking of mobile robots. Application to
CMU Navlab,” in Fifth international conference on advanced robotics, Italy, 1991.
[3]
A. Ollero and G. Heredia, “Stability analysis of mobile robot path tracking,” in
IEEE/RSJ International Conference on Intelligent Robots and Systems., Pittsburgh, PA ,
USA, 1995.
[4]
H. Choset and J. Burdick, “Sensor based planning. I. The generalized Voronoi
graph,” in IEEE International Conference on Robotics and Automation, 1995.
[5]
H. Choset and J. Burdick, “Sensor based planning. II. Incremental construction of
the generalized Voronoi graph,” in IEEE International Conference on Robotics and
Automation, 1995.
[6]
H. Choset and K. Nagatani, “Topological simultaneous localization and mapping
(SLAM): toward exact localization without explicit localization,” IEEE Transactions on
Robotics and Automation, vol. 17, no. 2, April 2001.
[7]
B. Kuipers and Y. Byun, “Robot Exploration and Mapping Strategy Based on a
Semantic Hierarchy of Spatial Representations,” Journal of Robotics and Autonomous
Systems, vol. 8, p. 47–63, 1991.
[8]
J. Latombe, Robot Motion Planning, Boston: Kluwer Academic Publishers, 1991.
[9]
T. Bailey, J. Nieto, J. Guivant, M. Stevens and E. Nebot, “Consistency of the EKFSLAM Algorithm,” in IEEE/RSJ International Conference on Intelligent Robots and Systems,
Beijing, 2006.
[10] M. Montemerlo and S. Thrun, “Simultaneous Localization and Mapping with
Unknown Data Association Using FastSLAM,” in IEEE lnternsfiond Conference on
Robotics an Automation, Taipei, 2003.
[11] C. Cadena and J. Neira, “SLAM in O(log n) with the Combined Kalman Information Filter,” in IEEE/RSJ International Conference on Intelligent Robots and Systems,
265
References
St. Louis, USA, 2009.
[12] D. Driankov and A. Saffiotti, Fuzzy Logic Techniques for Autonomous Vehicle
Navigation, VIII ed., D. Driankov and A. and Saffiotti, Eds., Springer Verlag, 2001,
p. 391.
[13] H. Martínez-Barberá, M. Zamora and A. Skarmeta, “Navigation of a mobile robot
with Fuzzy Segments.,” in IEEE International Conference on Fuzzy Systems, Melbourne,
2001.
[14] I. Baturone, F. Moreno-Velo, S. Sanchez-Solano and A. Ollero, “Automatic design of
fuzzy controllers for car-like autonomous robots,” IEEE Transactions on Fuzzy Systems,
vol. 12, no. 4, pp. 447- 465, August 2004.
[15] X. Dai, B. Hao and L. Shao, “Self-Organizing Neural Networks for Simultaneous
Localization and Mapping of Indoor Mobile Robots,” in First International Conference on
Intelligent Networks and Intelligent Systems, Wuhan, 2008.
[16] A. Chatterjee and F. Matsuno, “A Neuro-Fuzzy Assisted Extended Kalman FilterBased Approach for Simultaneous Localization and Mapping (SLAM) Problems,”
IEEE Transactions on Fuzzy Systems, vol. 15, no. 5, pp. 984 - 997, October 2007.
[17] A. Rodríguez-Castaño, G. Heredia and A. Ollero, “Fuzzy path tracking and position
estimation of autonomous vehicles using differential GPS,” Mathware & Soft
Computing, vol. 2, no. 3, pp. 257 - 264, 2000.
[18] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain
Mechanisms, Washington: Spartan Books, 1962.
[19] T. Murata and H. Ishibuchi, “MOGA: multi-objective genetic algorithms,” in IEEE
International Conference on Evolutionary Computation, 1995.
[20] P. Ferreira, A. Ruano and C. Fonseca, “Genetic assisted selection of RBF model
structures for greenhouse inside air temperature prediction,” in IEEE Conference on
Control Applications, Instanbul (Turkey), 2003.
[21] C. Teixeira, W. Pereira, A. Ruano and M. Ruano, “Multi-objective genetic algorithm
applied to the structure selection of RBFNN temperature estimators,” Adaptive and
Natural Computing Algorithms, pp. 506-509, 2005.
[22] R. Vaughan, “Massively multi-robot simulation in stage,” Swarm Intell, vol. 2, p. 189–
208, 22 August 2008.
266
Cognitive Autonomous Navigation System using Computational Intelligence techniques
[23] IFR, “IFR International Federation of Robotics,” 2010. [Online]. Available:
http://www.ifr.org/home/. [Accessed September 2010].
[24] G. Bekey and J. Yuh, “The Status of Robotics,” IEEE Robotics & Automation
Magazine, vol. 15, no. 1, pp. 80 - 86, March 2008.
[25] W. Gruver, “Intelligent robotics in manufacturing, service, and rehabilitation: an
overview,” IEEE Transactions on Industrial Electronics, vol. 41, no. 1, pp. 4-11, February
1994.
[26] R. Schraft, E. Degenhart, M. Hagele and M. Kahmeyer, “New robot applications in
production and service,” in Proceedings of the IEEE/Tsukuba International Workshop on
Advanced Robotics. 'Can Robots Contribute to Preventing Environmental Deterioration?',
Tsukuba, 1993.
[27] P. Ibach, N. Milanovic, J. Richling, V. Stantchev, A. Wiesner and M. Malek, “CERO:
CE RObots community,” IEE Proceedings Software, vol. 152, no. 5, pp. 210 - 214,
October 2005.
[28] E. Aarts, “Ambient intelligence: a multimedia perspective,” IEEE Multimedia, vol. 11,
no. 1, pp. 12-19, March 2004.
[29] C. Ramos, J. C. Augusto and D. Shapiro, “Ambient Intelligence—the Next Step for
Artificial Intelligence,” IEEE Intelligent Systems, vol. 23, no. 2, pp. 15-18, March 2008.
[30] M. Kim, S. Kim, S. Park, M.-T. Choi, M. Kim and H. Gomaa, “Service robot for the
elderly.,” IEEE Robotics & Automation Magazine, vol. 16, no. 1, pp. 34-45, March 2009.
[31] C. Balaguer, A. Gimenez, A. Huete, A. Sabatini, M. Topping and G. Bolmsjo, “The
MATS robot: service climbing robot for personal assistance,” IEEE Robotics &
Automation Magazine, vol. 13, no. 1, pp. 51 - 58, March 2006.
[32] R. Jackson, “Robotics and its role in helping disabled people,” Engineering Science and
Education Journal, vol. 2, no. 6, pp. 267 - 272, December 1993.
[33] H. Utz, S. Sablatnog, S. Enderle and G. Kraetzschmar, “Miro - middleware for
mobile robot applications,” IEEE Transactions on Robotics and Automation, vol. 18, no.
4, pp. 493 - 497, August 2002.
[34] A. Iborra, D. Caceres, F. Ortiz, J. Franco, P. Palma and B. Alvarez, “Design of
service robots,” IEEE Robotics & Automation Magazine, vol. 16, no. 1, pp. 24-33, 16
March 2009.
[35] A. Ndjeng, D. Gruyer and S. Glaser, “Improvement of the Proprioceptive-Sensors
based EKF and IMM Localization,” in 11th International IEEE Conference on Intelligent
Transportation Systems, Beijing, 2008.
[36] A. Harrison and P. Newman, “High Quality 3D Laser Ranging Under General
Vehicle Motion,” in International Conference of Robotics and Automation (ICRA), Pasadena,
2008.
[37] L. Erickson, J. Knuth, J. O'Kane and S. LaValle, “Probabilistic localization with a
blind robot,” in IEEE International Conference on Robotics and Automation (ICRA).,
Pasadena, 2008.
[38] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV
Library, Sebastopol: O'Really, 2008.
[39] W. D. Jones, “Hard drive - robotic cars impress in rough road race,” IEEE Spectrum,
vol. 42, no. 12, p. 13, December 2005.
[40] G. Bradski and A. Kaehler, “Robot-Vision Signal Processing Primitives [Applications
Corner],” IEEE Signal Processing Magazine, vol. 25, no. 1, 2008.
[41] X. Zhao, Q. Luo and B. Han, “Survey on robot multi-sensor information fusion
technology,” in 7th World Congress on Intelligent Control and Automation, Chongqing,
2008.
[42] J. Borenstein, “Internal Correction of Dead-reckoning Errors With the Compliant
Linkage Vehicle,” Journal of Robotic Systems, vol. 12, no. 4, pp. 257-273, April 1995.
[43] A. Siadat, A. Kaske, S. Klausmann, M. Dufaut and R. Husson, “An Optimized
Segmentation Method for a 2D Laser-Scanner Applied to Mobile Robot Navigation,”
in Proceedings of the 3rd IFAC Symposium on Intelligent Components and Instruments for Control
Applications, 1997.
[44] R. Duda and P. Hart, Pattern Classification and Scene Analysis, New York: John
Wiley & Sons, Inc., 1973.
[45] F. Tang, M. Adams, J. Ibanez-Guzman and W. Wijesoma, “Pose invariant, robust
feature extraction from data with a modified scale space approach,” in IEEE
International Conference on Robotics and Automation., 2004.
[46] R. Martinez-Cantin, J. Castellanos, J. Tardos and J. Montiel, “Adaptive Scale Robust
Segmentation for 2D Laser Scanner,” in IEEE/RSJ International Conference on Intelligent
Robots and Systems, Beijing, 2006.
[47] Y. Cheng, “Mean Shift, Mode Seeking, and Clustering,” IEEE Trans. Pattern Anal.
Machine Intell., vol. 17, no. 8, pp. 790-799, August 1995.
[48] P. Nuñez, R. Vazquez-Martin, J. del Toro, A. Bandera and F. Sandoval, “Feature
extraction from laser scan data based on curvature estimation for mobile robotics,” in
IEEE International Conference on Robotics and Automation, Orlando, Florida, 2006.
[49] V. Nguyen, A. Martinelli, N. Tomatis and R. Siegwart, “A comparison of line
extraction algorithms using 2D laser rangefinder for indoor mobile robotics,” in
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.
[50] T. Pavlidis and S. Horowitz, “Segmentation of Plane Curves,” IEEE Transactions on
Computers, Vols. C-23, no. 8, pp. 860 - 870, 1974.
[51] K. O. Arras and R. Siegwart, “Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building,” in Proc. of SPIE, Mobile Robotics XII, Pittsburgh,
1997.
[52] M. Fischler and R. Bolles, “Random sample consensus: A paradigm for model fitting
with application to image analysis and automated cartography,” Communications of the
ACM, vol. 24, no. 6, p. 381–395, 1981.
[53] R. O. Duda and P. Hart, “Use of the Hough Transformation to detect lines and curves
in pictures,” Comm. ACM, vol. 15, no. 1, pp. 11-15, January 1972.
[54] A. Dempster, N. Laird and D. Rubin, “Maximum likelihood from incomplete data via
the EM algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 39,
no. 1, pp. 1-38, 1977.
[55] S. Thrun, C. Martin, L. Yufeng, D. Hahnel, R. Emery-Montemerlo, D. Chakrabarti
and W. Burgard, “A Real-Time Expectation Maximization Algorithm for Acquiring
Multi-Planar Maps of Indoor Environments with Mobile Robots,” IEEE Transactions
on Robotics and Automation, vol. 20, no. 3, pp. 433 - 443, June 2004.
[56] P. Núñez, R. Vázquez-Martín, J. del Toro, A. Bandera and F. Sandoval, “Natural
landmark extraction for mobile robot navigation based on an adaptive curvature
estimation,” Robotics and Autonomous Systems, vol. 56, no. 3, pp. 247 - 264, 2008.
[57] G. Borges and M. Aldon, “Line extraction in 2D range images for mobile robotics,” Journal of
Intelligent and Robotic Systems, pp. 267 - 297, 2004.
[58] L. Pedraza, G. Dissanayake, J. Valls Miró, D. Rodríguez-Losada and F. Matía, “BS-SLAM: Shaping the world,” in Proceedings of Robotics: Science and Systems, Atlanta, 2007.
[59] L. Pedraza, D. Rodríguez-Losada, P. San Segundo and F. Matía, “Building maps of
large environments using splines and geometric analysis,” in IEEE/RSJ International
Conference on Intelligent Robots and Systems, Nice, France, 2008.
[60] A. Mandow, J. Martínez, A. Reina and J. Morales, “Fast range-independent spherical
subsampling of 3D laser scanner points and data reduction performance evaluation
for scene registration,” Pattern Recognition Letters, vol. 31, pp. 1239-1250, 2010.
[61] P. Newman, D. Cole and K. Ho, “Outdoor SLAM using visual appearance and laser
ranging,” in IEEE International Conference on Robotics and Automation, Orlando, Florida,
2006.
[62] A. Tapus, S. Heinzer and R. Siegwart, “Bayesian Programming for Topological
Global Localization with Fingerprints,” in IEEE International Conference on Robotics and
Automation, New Orleans, 2004.
[63] M. Tomono, “Environment modeling by a mobile robot with a laser range finder and
a monocular camera,” in Proceedings of the IEEE Workshop on Advanced Robotics and its
Social Impacts, 2005.
[64] J.-H. Kim, J.-S. Oh and J.-H. Kim, “Urban Driving System for UGV using Laser
Scanner and Vision,” in ICCAS-SICE, Fukuoka, 2009.
[65] D. Gálvez, K. Sjö, C. Paul and P. Jensfelt, “Hybrid Laser and Vision Based Object
Search and Localization,” in IEEE International Conference on Robotics and Automation,
Pasadena, 2008.
[66] F. Fang, X. Ma, X. Dai and K. Qian, “A new multisensor fusion SLAM approach for
mobile robots,” Journal of Control Theory and Applications, vol. 7, no. 4, pp. 389 - 394,
2009.
[67] L. Kneip, F. Tache, G. Caprari and R. Siegwart, “Characterization of the compact
Hokuyo URG-04LX 2D laser range scanner,” in IEEE International Conference on
Robotics and Automation, Kobe, 2009.
[68] S. Thrun, M. Jefferies and W. Yeap, “Simultaneous Localization and Mapping.
Robotics and Cognitive Approaches to Spatial Mapping,” in Springer Tracts in Advanced
Robotics, vol. 38/2008, Springer Berlin / Heidelberg, 2008, pp. 13-41.
[69] S. Thrun, Probabilistic Robotics, Massachusetts: MIT Press, 2006.
[70] R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,”
Transactions of the ASME - Journal of Basic Engineering, vol. 82, pp. 35 - 45, 1960.
[71] P. Moutarlier and R. Chatila, “Stochastic multisensory data fusion for mobile robot
location and environment modelling,” in 5th International Symposium on Robotics Research,
Tokio, 1989.
[72] J. Castellanos, J. Martinez, J. Neira and J. Tardos, in IEEE International Conference on
Robotics and Automation, Leuven, 1998.
[73] G. Zunino and H. Christensen, “Simultaneous localization and mapping in domestic
environments,” in International Conference on Multisensor Fusion and Integration for Intelligent
Systems, 2001.
[74] C. Connette, O. Meister, M. Hagele and G. Trommer, “Decomposition of line
segments into corner and statistical grown line features in an EKF-SLAM
framework,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, San
Diego, CA, 2007.
[75] L. Pedraza, D. Rodriguez-Losada, F. Matia, G. Dissanayake and J. Miro, “Extending
the Limits of Feature-Based SLAM With B-Splines,” IEEE Transactions on Robotics,
vol. 25, no. 2, pp. 353 - 366, April 2009.
[76] S. Julier, J. Uhlmann and H. Durrant-Whyte, “A new method for the nonlinear
transformation of means and covariances in filters and estimators,” IEEE Transactions
on Automatic Control, vol. 45, no. 3, pp. 477 - 482, March 2000.
[77] R. Martinez-Cantin and J. Castellanos, “Unscented SLAM for large-scale outdoor
environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems,
2005.
[78] G. Huang, A. Mourikis and S. Roumeliotis, “On the complexity and consistency of
UKF-based SLAM,” in IEEE International Conference on Robotics and Automation, Kobe,
2009.
[79] L. M. Paz, J. D. Tardós and J. Neira, “Divide and Conquer: EKF SLAM in O(n),”
IEEE Transaction on Robotics, vol. 24, no. 5, pp. 1107 - 1120, 2008.
[80] S. Thrun, Y. Liu, D. Koller, A. Ng, Z. Ghahramani and H. Durrant-Whyte,
“Simultaneous localization and mapping with sparse extended information filters,”
International Journal of Robotics Research, vol. 23, pp. 693 - 716, 2004.
[81] S. Arulampalam, S. Maskell, N. Gordon and T. Clapp, “A tutorial on particle filters
for on-line nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal
Processing, vol. 50, no. 2, pp. 174-188, 2002.
[82] M. Montemerlo, “FastSLAM: A Factored Solution to the Simultaneous Localization
and Mapping Problem With Unknown Data Association,” Ph.D. dissertation, Robotics
Institute, Carnegie Mellon University, Pittsburgh, PA, 2003.
[83] M. Begum, G. Mann and R. Gosine, “A Fuzzy-Evolutionary Algorithm for
Simultaneous Localization and Mapping of Mobile Robots,” in IEEE Congress on
Evolutionary Computation, Vancouver, 2006.
[84] C. Pathiranage, K. Watanabe and K. Izumi, “A solution to the SLAM problem based
on fuzzy Kalman filter using pseudolinear measurement model,” in SICE Annual
Conference, Takamatsu, 2007.
[85] A. Elfes, “Using occupancy grids for mobile robot perception and navigation,” IEEE
Computer, vol. 22, no. 6, pp. 46 - 57, June 1989.
[86] H. Choset, K. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki and S.
Thrun, Principles of Robot Motion. Theory, Algorithms, and Implementations,
Massachusetts: MIT Press, 2005.
[87] J. Canny, “Constructing roadmaps of semi-algebraic sets I: Completeness,” Artificial
Intelligence, vol. 37, pp. 203 - 222, 1988.
[88] J. Canny, “Computing roadmaps of general semi-algebraic sets,” The Computer Journal,
vol. 35, no. 5, pp. 504 - 514, 1993.
[89] J. Canny and M. Lin, “An opportunistic global path planner,” Algorithmica, vol. 10, pp.
102 - 120, 1993.
[90] F. Aurenhammer, “Voronoi diagrams - A survey of a fundamental geometric
structure,” ACM Computing Surveys, vol. 23, pp. 345 - 405, 1991.
[91] P. Cignoni, C. Montani and R. Scopigno, “DeWall: A fast divide and conquer
Delaunay triangulation algorithm in E^d,” Computer-Aided Design, vol. 30, no. 5, pp. 333
- 341, April 1998.
[92] B. Žalik, “An efficient sweep-line Delaunay triangulation algorithm,” Computer-Aided
Design, vol. 37, no. 10, pp. 1027 - 1038, September 2005.
[93] M. Löffler and J. Snoeyink, “Delaunay triangulation of imprecise points in linear time
after preprocessing,” Computational Geometry, vol. 43, no. 3, April 2010.
[94] K. Nagatani and H. Choset, “Toward robust sensor based exploration by
constructing reduced generalized Voronoi graph,” in IEEE/RSJ International
Conference on Intelligent Robots and Systems, 1999.
[95] H. Choset, I. Konukseven and J. Burdick, “Mobile robot navigation: issues in
implementing the generalized Voronoi graph in the plane,” in IEEE/SICE/RSJ
International Conference on Multisensor Fusion and Integration for Intelligent Systems,
Washington, DC, 1996.
[96] K. Nagatani, H. Choset and S. Thrun, “Towards exact localization without explicit
localization with the generalized Voronoi graph,” in IEEE International Conference on
Robotics and Automation, 1998.
[97] H. Choset and K. Nagatani, “Topological Simultaneous Localization and Mapping
(SLAM): Toward Exact Localization Without Explicit Localization,” IEEE
Transactions on Robotics and Automation, vol. 17, no. 2, pp. 125 - 137, April 2001.
[98] H. Choset and J. Yeong Lee, “Sensor-Based Construction of a Retract-Like Structure
for a Planar Rod Robot,” IEEE Transactions on Robotics and Automation, vol. 17, no. 4,
pp. 435 - 449, August 2001.
[99] H. Choset and J. Yeong Lee, “Sensor-Based Exploration for Convex Bodies: A New
Roadmap for a Convex-Shaped Robot,” IEEE Transactions on Robotics, vol. 21, no. 2,
pp. 240 - 247, April 2005.
[100] B. Lisien, D. Morales, D. Silver, G. Kantor, I. Rekleitis and H. Choset, “The
hierarchical atlas,” IEEE Transactions on Robotics, vol. 21, no. 3, pp. 473 - 481, June
2005.
[101] B. Kuipers, J. Modayil, P. Beeson, M. MacMahon and F. Savelli, “Local metrical and
global topological maps in the hybrid spatial semantic hierarchy,” in IEEE
International Conference on Robotics and Automation, New Orleans, 2004.
[102] J. Blanco, J. Fernández-Madrigal and J. González, “Toward a Unified Bayesian
Approach to Hybrid Metric–Topological SLAM,” IEEE Transactions on Robotics, vol.
24, no. 2, pp. 259 - 270, April 2008.
[103] F. Ferreira, I. Amorim, R. Rocha and J. Dias, “T-SLAM: Registering Topological and
Geometric Maps for Robot Localization in Large Environments,” in IEEE
International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul,
2008.
[104] F. Werner, F. Maire, J. Sitte, H. Choset, S. Tully and G. Kantor, “Topological SLAM
using Neighbourhood Information of Places,” in IEEE/RSJ International Conference on
Intelligent Robots and Systems, St. Louis, 2009.
[105] R. Luo, H. Min and Y. Shao, “A New Hierarchical Simultaneous
Localization and Mapping Method Based on Local Shape Context of Environment,”
in Proceedings of the Ninth International Conference on Machine Learning and Cybernetics,
Qingdao, 2010.
[106] R. Iser and F. M. Wahl, “Building Local Metrical and Global Topological Maps Using
Efficient Scan Matching Approaches,” in IEEE/RSJ International Conference on
Intelligent Robots and Systems, Nice, 2008.
[107] J. Gasós and A. Martín, “Mobile Localization using Fuzzy Maps,” Lecture Notes in
Artificial Intelligence.
[108] K. Demirli and I. Türksen, “Sonar based mobile robot localization by using fuzzy
triangulation,” Robotics and Autonomous Systems, vol. 33, no. 2-3, pp. 109-123, 30
November 2000.
[109] D. Herrero-Pérez and H. Martínez-Barberá, “Indoor Fuzzy Self-Localization using,”
vol. 1, no. 1, September 2007.
[110] M. Tomono, “A scan matching method using Euclidean invariant signature for global
localization and map building,” in IEEE International Conference on Robotics and
Automation, New Orleans, 2004.
[111] P. Besl and N. McKay, “A method for registration of 3-D shapes,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 14, pp. 239-256, 1992.
[112] W. Tingting, Z. Chongwei, B. Wei and W. Mulan, “Mobile robot simultaneous
localization and mapping in unstructured environments,” in 2nd International Asia
Conference on Informatics in Control, Automation and Robotics, Wuhan, 2010.
[113] A. Diosi and L. Kleeman, “Laser scan matching in polar coordinates with application
to SLAM,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.
[114] I. Cox, “Blanche - An Experiment in Guidance and Navigation of an Autonomous
Robot Vehicle,” IEEE Transactions on Robotics and Automation, vol. 7, no. 2, pp. 193 -
204, April 1991.
[115] F. Tan, J. Yang, J. Huang, W. Chen, W. Wang and J. Wang, “A corner and straight
line matching localization method for family indoor monitor mobile robot,” in IEEE
International Conference on Information and Automation, Harbin, 2010.
[116] X. Zezhong, L. Jilin and X. Zhiyu, “Map building and localization using 2D range
scanner,” in IEEE International Symposium on Computational Intelligence in Robotics and
Automation, 2003.
[117] N. Sariff and N. Buniyamin, “An Overview of Autonomous Mobile Robot Path
Planning Algorithms,” in 4th Student Conference on Research and Development, Selangor,
2006.
[118] R. Inigo and D. Alley, “Path planning algorithms for mobile robots,” in 5th IEEE
International Symposium on Intelligent Control, 1990.
[119] D. Zhu and J.-C. Latombe, “New heuristic algorithms for efficient hierarchical path
planning,” IEEE Transactions on Robotics and Automation, vol. 7, no. 1, pp. 9 - 20,
February 1991.
[120] V. Lumelsky, “A comparative study on the path length performance of maze-searching
and robot motion planning algorithms,” IEEE Transactions on Robotics and
Automation, vol. 7, no. 1, pp. 57 - 66, February 1991.
[121] K.-C. Fan and P.-C. Lui, “Solving find path problem in mapped environments using
modified A* algorithm,” IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no.
9, September 1994.
[122] G. Jan, K. Y. Chang and I. Parberry, “Optimal Path Planning for Mobile Robot
Navigation,” IEEE/ASME Transactions on Mechatronics, vol. 13, no. 4, pp. 451 - 460,
August 2008.
[123] W. Chung, S. Kim, M. Choi, J. Choi, H. Kim, C.-b. Moon and J.-B. Song, “Safe
Navigation of a Mobile Robot Considering Visibility of Environment,” IEEE
Transactions on Industrial Electronics, vol. 56, no. 10, pp. 3941 - 3950, October 2009.
[124] J. Antich, A. Ortiz and J. Minguez, “A Bug-inspired algorithm for efficient anytime
path planning,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, St.
Louis, 2009.
[125] J. Minguez, J. Osuna and L. Montano, “A "divide and conquer" strategy based on
situations to achieve reactive collision avoidance in troublesome scenarios,” in IEEE
International Conference on Robotics and Automation, 2004.
[126] O. Amidi, “Integrated Mobile Robot Control,” Robotics Institute, Carnegie Mellon
University, Pittsburgh, PA, 1990.
[127] J. Morales, J. Martínez, M. Martínez and A. Mandow, “Pure-Pursuit Reactive Path
Tracking for Nonholonomic Mobile Robots with a 2D Laser Scanner,” EURASIP
Journal on Advances in Signal Processing, pp. 1-10, 2009.
[128] S. Singh and D. Shin, “Position Based Path Tracking for Wheeled Mobile Robots,” in
IEEE Conference on Intelligent Robot Systems, Tsukuba, 1989.
[129] A. Ollero, A. García-Cerezo and J. Martínez, “Fuzzy supervisory path tracking of
mobile robots,” Control Engineering Practice, vol. 2, no. 2, pp. 313-319, April 1994.
[130] D. Shin, S. Singh and J. Lee, “Explicit path tracking by autonomous vehicles,” Robotica,
vol. 10, pp. 539-554, 1992.
[131] A. Nüchter and J. Hertzberg, “Towards semantic maps for mobile robots,” Robotics
and Autonomous Systems, vol. 56, pp. 915 - 926, 2008.
[132] P. Nunez, R. Vazquez, J. del Toro, A. Bandera and F. Sandoval, “A Curvature based
Method to Extract Natural Landmarks for Mobile Robot Navigation,” in IEEE
International Symposium on Intelligent Signal Processing, Alcalá de Henares, 2007.
[133] P. Fitzpatrick, G. Metta and L. Natale, “Towards long-lived robot genes,” Robotics and
Autonomous Systems, vol. 56, no. 1, pp. 29-45, 31 January 2008.
[134] P. Maybeck, “The Kalman filter: An introduction to concepts,” in Autonomous Robot
Vehicles, I. Cox and G. Wilfong, Eds., New York: Springer-Verlag, 1990.
[135] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., New Jersey:
Prentice-Hall, 1999.
[136] P. Ferreira and A. Ruano, “Exploiting the separability of linear and nonlinear
parameters in radial basis function networks,” in Adaptive Systems for Signal Processing,
Communications, and Control Symposium, 2000.
[137] A. Ollero, Robótica. Manipuladores y robots móviles, Barcelona: Marcombo Boixareu
Editores, 2001.
[138] J. Ferruz and A. Ollero, “Visual Generalized Predictive Path Tracking,” in Proceedings
of the 5th International Congress on Advanced Motion Control, AMC'98, 1998.
[139] A. Ollero and G. Heredia, Integrated Computer-Aided Engineering, vol. 6, no. 4, pp.
289 - 301, 1999.
[140] A. Ollero, B. Arrue, J. Ferruz, G. Heredia, F. Cuesta, J. López and C. Nogales,
“Control and Perception Components for Autonomous Vehicle Guidance.
Application to the Romeo Vehicles,” Control Engineering Practice, vol. 7, no. 10, pp.
1291 - 1299, 1999.
[141] A. Konak, D. W. Coit and A. E. Smith, “Multi-objective optimization using genetic
algorithms: A tutorial,” Reliability Engineering and System Safety, vol. 91, no. 9, pp.
992-1007, 2006.
[142] J. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor: University of
Michigan Press, 1975.
[143] J. Schaffer, “Multiple objective optimization with vector evaluated genetic
algorithms,” in Proceedings of the International Conference on Genetic Algorithms and
Their Applications, 1985.
[144] I. Griffin and P. Fleming, “Multi-objective evolutionary computing solutions for
control and system identification,” in Intelligent Control Systems using Computational
Intelligence Techniques, vol. 70, IEE Control Engineering Series, 2005.
[145] A. Ruano, P. Ferreira and C. Fonseca, “An overview of nonlinear identification and
control with neural networks,” in Intelligent Control Systems using Computational Intelligence
Techniques, vol. 70, IEE Control Engineering Series, 2005.
[146] C. Fonseca and P. Fleming, “Multiobjective genetic algorithms,” in IEE Colloquium
on Genetic Algorithms for Control Systems Engineering, London, 1993.
[147] J. Horn, “The nature of niching: Genetic Algorithms and the evolution of optimal,
cooperative populations,” Ph.D. dissertation, University of Illinois at Urbana-Champaign,
1997.