Institute of Measurement Science
Faculty of Aerospace Engineering
Federal Armed Forces University Munich
Research on
Robot Vision and Intelligent Robots
Approaches · Results · Publications
Synopsis
Vision and the realization of intelligent robots that can see have been an important area of research at the
Institute of Measurement Science since 1977. The program has two goals: first, to gain a basic understanding of vision and intelligence in general; second, to develop better equipment and better methods for
the realization of intelligent vision-guided robots. Such equipment and methods will enable advanced
robots of the future to: drive autonomous vehicles fast and safely in traffic; maintain and repair equipment; and survive and continue to perform autonomously a variety of demanding tasks in an unpredictable and changing environment.
The research within the Institute of Measurement Science has focused on the following areas:
- Architecture and realization of multi-processor systems for real-time recognition of dynamic scenes
- Motion stereo for distance measurement and spatial interpretation of image sequences
- Recognition, classification, and tracking of objects in dynamic scenes
- Intelligent vehicles and vision-based robots capable of learning
From the beginning two principles have determined the orientation of the work. One, machine vision must
always be done in real time, keeping pace with changes in the environment. Two, all research results are
tested and demonstrated in practical real-world experiments. Some of these experiments (e.g., docking maneuvers of an air cushion vehicle, the landing approach of an airplane, and autonomous driving on highways), realized in collaboration with the Institute of System Dynamics of our university, are described in [Dickmanns, Graefe 88b].
One of the experiments, the demonstration of autonomous road-following at a speed of 96 km/h as early as 1987, has drawn national and international attention. This demonstration set a world record for autonomous road vehicles. Instrumental to this success were two key elements developed by the Institute of Measurement Science: the robot vision system BVV 2, and the extremely fast, efficient, and robust feature extraction based on controlled correlation. Together, they allowed such fast and reliable image interpretation that the driving speed of the vehicle was limited exclusively by the power of its engine, not by the vision system.
In recent years the work has involved problems related to autonomous mobility in open terrain, in
buildings, and in complex traffic situations on public roads (recognition of traffic situations in real time).
Currently our research focuses on vision-based robots capable of learning. The long-term goal is a robot with a practical, action-oriented (rather than theoretical) intelligence such as may be found in animals.
Institut für Meßtechnik · UniBw München · 85577 Neubiberg · Germany
15.12.96
Head of Institute: Prof. Dr. Volker Graefe
Phone: (+49 89) 6004-3590 or -3587
Fax: (+49 89) 6004-3074
e-mail: [email protected]
Hardware for Robot Vision
Vision System Architecture
From the start, our approach to high-speed image interpretation has been to give the vision system an internal structure that matches the structure of the robot-vision task. With this approach, small sets of loosely coupled standard microprocessors yield better system performance than expensive supercomputers, which, despite their extremely high performance in other applications, are generally inadequate for this particular type of task. Moreover, our systems can easily be programmed in high-level languages. A wide-band video bus and an independent system bus enable all processors in the system to operate simultaneously, without being delayed by communication bottlenecks.
This hardware structure forms an excellent basis for particularly powerful "object-oriented" vision
systems as introduced in [Graefe 89b, d; 91]. In such systems, hardware and software are structured according to the physical objects that are visible in a robot's environment and relevant to its operation.
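To make the structural idea concrete, here is a minimal sketch in Python; all names are illustrative, and Python threads merely stand in for the dedicated microprocessors of the actual BVV systems. One loosely coupled worker is assigned to each relevant physical object, a shared frame source stands in for the wide-band video bus, and a result queue stands in for the system bus.

```python
# Illustrative sketch only: one worker per relevant physical object, a
# shared frame source (the "video bus"), a result queue (the "system bus").
import queue
import threading

import numpy as np


def grab_frame(t):
    """Stand-in for the video bus: every worker sees the same frame."""
    rng = np.random.default_rng(t)
    return rng.integers(0, 256, size=(240, 320), dtype=np.uint8)


def object_worker(name, window, system_bus, frames):
    """One loosely coupled processor: watches a single object's window."""
    for frame in frames:
        y0, y1, x0, x1 = window
        roi = frame[y0:y1, x0:x1]
        # Placeholder "measurement": mean brightness of the object's window.
        system_bus.put((name, float(roi.mean())))


system_bus = queue.Queue()
frames = [grab_frame(t) for t in range(3)]
workers = [
    threading.Thread(target=object_worker,
                     args=(name, window, system_bus, frames))
    for name, window in [("lane_marker", (180, 240, 0, 320)),
                         ("lead_vehicle", (80, 160, 120, 200))]
]
for w in workers:
    w.start()
for w in workers:
    w.join()
while not system_bus.empty():
    print(system_bus.get())
```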
Realized Vision Systems
Four generations of such robot vision systems were developed according to these ideas. The oldest one,
conceived in 1977 and equipped with now obsolete 8-bit processors, was nevertheless able to control an
unstable mechanical system by vision only [Haas 82], [Graefe 83a]. The third generation, equipped with
32-bit processors, is about 1000 times more powerful [Graefe 90]. It has proven sufficient for performing
in real time all the image processing necessary for understanding entire road traffic situations as required
by an automobile operating autonomously in high-speed highway traffic [Graefe 92c; 93]. A newer compact system, based on the Intel 960 CA processor and contained in a standard PC, has similar performance [Graefe, Meier 93], [Meier 93].
Video Cameras
The dynamic range of solid-state video cameras is not sufficient to cope with the enormous range of illumination differences occurring in dynamic scenes [Graefe, Kuhnert 88b]. We have, therefore, investigated methods for letting the vision system control the camera's "electronic shutter" directly in a situation-dependent way, thus extending the effective dynamic range by several orders of magnitude [Huber 93], [Graefe, Albert 95].
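As an illustration of the control principle (not the institute's actual controller), the sketch below adjusts the exposure time multiplicatively so that the mean brightness of a task-relevant image region approaches a set point; the interface and all constants are assumptions.

```python
# Hedged sketch of situation-dependent shutter control: brightness is
# measured only in the task-relevant region, and the exposure time is
# corrected multiplicatively (brightness scales roughly with exposure).
import numpy as np


def next_exposure(exposure_s, roi, target=128.0,
                  min_exp=1e-5, max_exp=1e-2):
    """One control step; roi is the relevant image region (uint8)."""
    mean = max(float(np.mean(roi)), 1.0)  # avoid division by zero
    new_exp = exposure_s * target / mean
    return float(np.clip(new_exp, min_exp, max_exp))


# Example: a dark relevant region drives the exposure time up.
roi = np.full((64, 64), 30, dtype=np.uint8)
print(next_exposure(1e-3, roi))  # -> about 4.3e-3 s
```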
Motion Stereo and Spatial Interpretation of Image Sequences
Basic Idea
Measuring the distance to an external object is an important task occurring in various forms in the field
of robotics. We developed a novel method for this purpose, based upon motion stereo. It has significant advantages over traditional methods, such as laser range finding and parallax stereo, especially for mobile robots [Graefe 90b]. In this method, the velocities with which selected features move in an image, the so-called image velocities, are used directly as a basis for measuring distance.
Experiments proved that a relatively small motion may suffice for measuring distances with errors of less
than 1%, without even having to calibrate the single camera being used.
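The underlying geometry can be illustrated with a deliberately simplified special case: a camera translating with known speed V along its optical axis, and a feature whose image coordinate x is measured from the focus of expansion. The pinhole relation x = f·X/Z then gives the image velocity ẋ = x·V/Z, hence Z = x·V/ẋ. Note that the focal length cancels, which is consistent with the fact that no camera calibration is needed. The sketch below is this special case only, not the institute's general method.

```python
# Minimal motion-stereo sketch under a simplifying assumption: camera
# translation with known speed along the optical axis. From x = f*X/Z it
# follows that xdot = x*V/Z, so Z = x*V/xdot; the focal length f cancels.
def distance_from_image_velocity(x, xdot, speed):
    """x: image coordinate (pixels, from the focus of expansion),
    xdot: image velocity (pixels/s), speed: camera speed (m/s)."""
    return x * speed / xdot


# A feature at 50 px drifting outward at 2.5 px/s, camera moving at 10 m/s:
print(distance_from_image_velocity(50.0, 2.5, 10.0))  # -> 200.0 m
```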
Implementation and Test
Feature displacements and image velocities occurring in practical applications are usually very small.
Localizing features in the image with subpixel resolution, i.e., with a precision far exceeding the spatial resolution of the CCD camera used, is a prerequisite for applying such methods. To this end, we developed real-time techniques based on recursive estimation and demonstrated their suitability in real-world experiments with the camera and the vision system mounted on an indoor vehicle, and also in an automobile [Huber, Graefe 93a]. Errors below 1% of the true distance were achieved even in such relatively unstructured environments [Huber, Graefe 90; 91], [Huber 93].
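One common way to obtain such subpixel positions is sketched below: fit a parabola through the matching scores at the integer peak and its two neighbors, then smooth the resulting displacements with a simple recursive filter. This only illustrates the principle; the institute's actual recursive estimators were more elaborate.

```python
# Hedged sketch: parabola fit for subpixel peak localization, plus a
# first-order recursive image-velocity estimate.
def subpixel_peak(s_left, s_peak, s_right):
    """Offset of the parabola vertex from the integer peak, in (-0.5, 0.5)."""
    denom = s_left - 2.0 * s_peak + s_right
    if denom == 0.0:
        return 0.0
    return 0.5 * (s_left - s_right) / denom


class RecursiveVelocity:
    """v_k = v_{k-1} + alpha * ((x_k - x_{k-1}) / dt - v_{k-1})."""
    def __init__(self, alpha=0.3):
        self.alpha, self.x, self.v = alpha, None, 0.0

    def update(self, x, dt):
        if self.x is not None:
            self.v += self.alpha * ((x - self.x) / dt - self.v)
        self.x = x
        return self.v
```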
Object Recognition in Dynamic Scenes
Concept for Feature Extraction
During the first step of image processing, feature extraction, we prefer procedures similar to those operating in the receptive fields of organic vision systems, rather than relying upon time-consuming low-level operations involving entire images, such as smoothing and segmentation procedures.
Controlled Correlation
One such method for feature extraction, controlled correlation, was developed at the Institute of Measurement Science as the result of a systematic investigation of methods for fast and, especially, robust feature extraction [Kuhnert 84; 86a; 88]. It can be implemented very efficiently, allowing dynamic scenes
to be analyzed in real time using standard microprocessors. Correlation is a flexible and robust, but
computationally expensive method for feature extraction. What makes controlled correlation nevertheless
suitable for real-time applications, even without employing special hardware, is that only selected relevant
sections of the image are processed, and that a sparsely populated ternary mask is used as a discrete
reference function.
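The sketch below illustrates the two ingredients named above with toy values: a sparsely populated ternary mask and a small, prediction-controlled search window. A real implementation would additionally exploit the zero entries to skip most operations entirely; the mask and window here are invented for illustration.

```python
# Illustrative sketch of controlled correlation: a ternary mask (-1/0/+1)
# is matched only inside a small search window around the predicted
# feature position.
import numpy as np


def controlled_correlation(image, mask, center, search=4):
    """Return the best-matching position of `mask` near `center`."""
    mh, mw = mask.shape
    cy, cx = center
    best_score, best_pos = -np.inf, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            patch = image[y:y + mh, x:x + mw].astype(np.int32)
            score = int(np.sum(patch * mask))
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score


# Toy example: a vertical-edge mask applied near a predicted position.
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 200                       # bright region right of column 32
mask = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=np.int32)
print(controlled_correlation(img, mask, center=(20, 28)))
```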
The "world record runs" of 1987 (96 km/h on an Autobahn and 40 km/h on unmarked campus roads with
severe shadows) constituted a crucial test of the method of controlled correlation.
Texture and Color Features
We are investigating methods for utilizing texture and color in addition to the primarily used edge and corner features [Tsinas, Meier, Efenberger 93]. They have proven particularly advantageous not only in totally
unstructured environments, as encountered, e.g., in off-road driving [Liu, Wershofen 92], but also for the
recognition of signaling lights in road traffic scenes [Tsinas 95; 96].
Application of Model Knowledge
According to the object-oriented approach [Graefe 89b], knowledge related to objects visible in the scene
may be utilized for maximizing the robustness and reliability of feature extraction. Good results may thus
be obtained even under very difficult conditions, such as shadows, reflections on a wet road, or a textured
background.
Primarily, we use 2-D object models for this because of their versatility and efficiency. They model the
visual appearance of physical objects. 2-D models can be simple form models, but in difficult scenes
models based on symmetry and on statistics of feature distributions have proven effective [Graefe 93],
[Regensburger 94].
Model knowledge relating to the distance dependency of optical imaging has also been utilized, for recognizing other vehicles following the robot's own vehicle on highways. A simple method was developed for creating a size-normalized 2-D object representation based on a distance-dependent subsampling of images. In combination with an adaptation of the feature extraction to the distance-dependent contrast, it greatly facilitates object recognition. It also leads to very short execution times of less than 1 ms [Efenberger 96],
[Efenberger, Graefe 96], [Graefe, Efenberger 96a].
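A hedged sketch of the subsampling idea follows (constants and names invented for illustration): since an object's image size scales roughly with the reciprocal of its distance, sampling its image window with a step proportional to its image size, i.e., inversely proportional to distance, yields a patch of fixed pixel size regardless of range.

```python
# Distance-dependent subsampling toward a size-normalized 2-D patch.
import numpy as np


def size_normalized_patch(image, top_left, distance_m,
                          ref_distance_m=50.0, out_size=16):
    """Resample the object window to out_size x out_size pixels.
    At ref_distance_m the sampling step is 1 pixel; nearer objects
    (which appear larger) are sampled more coarsely."""
    step = ref_distance_m / distance_m
    y0, x0 = top_left
    ys = (y0 + step * np.arange(out_size)).astype(int)
    xs = (x0 + step * np.arange(out_size)).astype(int)
    ys = np.clip(ys, 0, image.shape[0] - 1)
    xs = np.clip(xs, 0, image.shape[1] - 1)
    return image[np.ix_(ys, xs)]


img = np.arange(256 * 256, dtype=np.uint16).reshape(256, 256)
near = size_normalized_patch(img, (10, 10), distance_m=25.0)   # step 2
far = size_normalized_patch(img, (10, 10), distance_m=100.0)   # step 0.5
print(near.shape, far.shape)  # both (16, 16)
```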
An Example: Obstacle Recognition
Recognizing obstacles within a vehicle's intended path is a key problem for mobile robots. It has been a
focus of our work since 1987, especially with regard to autonomous road vehicles. In principle, it is more difficult to recognize obstacles than the road itself: pre-knowledge is generally available regarding the appearance of the road and may be utilized for recognizing it in the image, whereas there is generally no pre-knowledge available regarding the shape of obstacles or their movements.
Our approach to obstacle recognition comprises an object detector and a separate tracking and classification process. Several copies of the latter may be active simultaneously for tracking more than one object.
Additional processes, such as estimating the state of motion of an obstacle, may be added [Graefe 90a].
Detection distances of up to 340 m for smaller objects, and about 700 m for larger ones, were demonstrated [Solder 92], [Solder, Graefe 93]. A generic 2-D model of an obstacle is used for recognizing shadows and other causes of false alarms, and for tracking physical objects in the image. This approach has led to
an exceptional degree of robustness, allowing vehicles to be tracked reliably even in dense city traffic
with frequent lane changes [Graefe 93], [Regensburger 94], [Regensburger, Graefe 94].
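Structurally, the detector/tracker division described above can be summarized as in the following sketch; the detection and tracking logic are placeholders, and only the process structure (one detector, one tracker copy per confirmed object) follows the text.

```python
# Structural sketch: a detector proposes candidates; one tracker instance
# is spawned per object and updated every frame.
class Tracker:
    def __init__(self, obj_id, position):
        self.obj_id, self.position = obj_id, position

    def update(self, frame):
        # Placeholder: a real tracker re-localizes the object's features.
        return {"id": self.obj_id, "position": self.position}


def detect_candidates(frame):
    # Placeholder detector: would scan the intended path for new objects.
    return [(120, 160)] if frame == 0 else []


trackers, next_id = [], 0
for frame in range(3):
    for pos in detect_candidates(frame):
        trackers.append(Tracker(next_id, pos))   # spawn a tracker copy
        next_id += 1
    print(frame, [t.update(frame) for t in trackers])
```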
Autonomous Vehicles
Automatic Co-Pilot for Road Vehicles
The potential of robot vision technology to contribute to improved traffic safety and to relieve car drivers in the future was investigated within the framework of the EUREKA project PROMETHEUS. In collaboration with other institutes of the university, the Institute of Measurement Science investigated the technology for an automatic copilot intended to warn the driver of imminent danger, or even to drive automatically during monotonous periods [Graefe 95a]. Truck drivers, especially, could benefit greatly from such a copilot, which would alleviate their tiresome and hazardous occupation. [Graefe, Kuhnert 88b / 92] and
[Graefe 92b, c; 93] give an overview of research on autonomous road vehicles.
Recognition of Traffic Situations
To facilitate the recognition of complex traffic situations while driving on a highway, methods were developed for detecting and classifying those objects that are relevant in such situations: lane markers [Tsinas, Graefe 92], [Wershofen 92]; obstacles and vehicles in front [Graefe 90a],
[Regensburger 93], [Solder 92], [Solder, Graefe 93]; traffic signs; passing vehicles [Graefe, Jacobs 91];
and vehicles approaching from behind [Efenberger et al. 92], [Efenberger 96]. The individual recognition
modules were tested in highway scenes [Graefe 93].
Mobility in Open Terrain
In order to realize autonomous mobility off roads, such as on dirt roads or in open terrain, various types of objects, such as furrows, the edges of fields and meadows, and vehicle tracks, must be recognized.
Experiments conducted in such scenes have shown that the approaches to feature extraction and environmental modeling as developed at the Institute of Measurement Science can be employed for autonomous
vehicles in unstructured environments, too [Liu, Wershofen 92]. Recognition of objects that could be
obstacles, such as trees and rocks, was also demonstrated [Efenberger 96], [Efenberger, Graefe 96].
Neural Networks, Genetic Algorithms, and Fuzzy Logic
In order to assess their potential for computer vision and for the control of intelligent robots, certain unconventional programming approaches were investigated experimentally. These investigations included the utilization of learning for the detection and classification of objects in image sequences and for the user-friendly generation of controllers for an autonomous vehicle.
Neural Networks
Neural networks are well-known structures capable of learning; the backpropagation method is often used
for training them. Such a network was implemented; after being trained by a suitably developed process, it was able to recognize vehicles in images of road scenes [Blöchl, Tsinas 92].
Subsequently, a set of networks was trained to recognize the lanes of the road in the image. Both modules may be coupled within one system, enabling the driving lane to be tracked, and any obstacles on it to be detected, in real time [Tsinas, Graefe 93].
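For readers unfamiliar with the technique, here is a minimal backpropagation example in Python/NumPy; the toy features and labels are invented and merely stand in for the image-derived inputs used in the cited work.

```python
# Minimal backpropagation sketch: one hidden layer, sigmoid activations,
# cross-entropy gradient at the output, full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                        # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # toy labels

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    h = sigmoid(X @ W1 + b1)          # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) / len(X)             # output-layer error
    d1 = (d2 @ W2.T) * h * (1 - h)    # backpropagated hidden-layer error
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

print("training accuracy:", float(((p > 0.5) == y).mean()))
```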
Genetic Algorithms
Darwin's theory of evolution is the origin of algorithms that imitate the evolutionary process: the "fittest individuals" of a population survive. Various problems were addressed in experiments with these algorithms, such as the visual recognition of the sides of a die, the transformation of sensor data from a color camera (RGB values) into hue, saturation, and intensity, and the coding of data [Tsinas, Dachwald 94]. The wide spectrum of possible applications and the observed learning behavior show the potential of genetic algorithms. Furthermore, we developed a method by which neural networks and genetic algorithms can be combined in order to utilize their different advantages simultaneously.
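The principle can be illustrated with a bare-bones genetic algorithm; the fitness function below is a toy stand-in, whereas the cited experiments used task-specific fitness measures (e.g., the quality of an RGB-to-HSI mapping).

```python
# Bare-bones genetic algorithm: selection of the fittest, crossover,
# mutation, repeated over generations.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.3, -0.7, 0.5, 0.1])

def fitness(genome):
    return -np.sum((genome - target) ** 2)   # higher is better

pop = rng.normal(size=(30, 4))
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]  # the fittest survive
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 4)
        child = np.concatenate([a[:cut], b[cut:]])    # crossover
        child += rng.normal(scale=0.05, size=4)       # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(g) for g in pop])]
print(best)  # close to `target` after evolution
```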
Fuzzy Logic
We applied methods of fuzzy logic to develop controllers for an autonomous vehicle quickly and easily. The controllers developed by this process proved robust and reliable over the vehicle's entire speed range.
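A minimal sketch of such a fuzzy controller follows, with an invented rule base and constants (the institute's actual controllers are not reproduced here): triangular membership functions for the speed error and centroid defuzzification of the throttle command.

```python
# Toy fuzzy speed controller: fuzzification with triangular membership
# functions, three rules, centroid defuzzification.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_throttle(speed_error_kmh):
    """speed_error = desired - actual speed; returns throttle in [-1, 1]."""
    rules = [
        (tri(speed_error_kmh, -40, -20, 0), -0.8),  # error negative -> brake
        (tri(speed_error_kmh, -10, 0, 10), 0.0),    # error zero -> hold
        (tri(speed_error_kmh, 0, 20, 40), 0.8),     # error positive -> accelerate
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(fuzzy_throttle(5.0))  # ~0.27: mild acceleration
```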
Intelligent Mobile Robots
Basic Concept
Based on the assumption that the intelligence of living beings has evolved from a cooperation of vision,
motion control, and adaptation to the environment, we are investigating the foundations of such a cooperation within mobile robots. Our long-term goal is to build intelligent robots.
By intelligence we mean here a practical intelligence of the kind that enables, for instance, animals and
small children to orient themselves in their environment and to move in a purposeful and goal-directed
fashion. What we do not mean is the totally different kind of intelligence that enables human experts, for
example, to prove theorems, or to play chess.
Intelligence implies the ability to recognize situations in a dynamic environment on the basis of sensory information, in addition to the ability to learn, i.e., to acquire or expand knowledge and skills through interaction with the environment. It may be anticipated that mobile robots endowed with such intelligence should display, in their respective domains, an adaptability similar to that of living beings.
An intermediate goal on the way to realizing such robots is a mobile robot able to navigate intelligently in natural environments, e.g. the networks of passageways of ordinary office or factory buildings.
Behavior-based Navigation
Two concepts for landmark-based robot navigation were implemented and studied: one coordinate-based, the other behavior-based [Kuhnert 90a, b], [Graefe, Wershofen 91], [Albert, Meier 92]. The basic
principle of behavior-based navigation (and of behavior-based motion control in general) is the achievement of a desired task by activating an appropriate sequence of elementary behavior patterns. Examples
of such behavior patterns are "following a hallway", "turning at an intersection", or "moving towards a
landmark".
According to our concept of behavior-based navigation and motion control, a situation module is at the core of the system [Graefe, Wershofen 92a; 93b]. Its task is the recognition and assessment of the current situation and the selection of an adequate behavior in real time while the robot is moving. Key to our concept is thus abundant and timely information about the environment, as supplied primarily by a dynamic vision system [Graefe 92a, b], [Bischoff et al. 96a, b]. Knowledge about the static
characteristics of the environment is represented in the form of an attributed topological map. It contains
not only data on the topology of the network of passageways, but also on approximate distances and
angles, on the visual appearance and locations of landmarks, and on certain behavior patterns that are
associated with particular locations (e.g. "slow down here" or "keep to the right here").
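The sketch below shows one plausible encoding of such an attributed topological map (all contents invented for illustration): a graph whose nodes and edges carry approximate distances and angles, landmark descriptions, and behavior patterns bound to particular locations.

```python
# Toy attributed topological map: places (nodes) connected by passageways
# (edges), both carrying attributes as described in the text.
attributed_map = {
    "nodes": {
        "hall_A": {"landmarks": ["fire_door"], "behavior": "keep_right"},
        "junction_1": {"landmarks": ["pillar"], "behavior": "slow_down"},
        "lab_entrance": {"landmarks": ["glass_door"], "behavior": None},
    },
    "edges": [
        # (from, to, approx. length in m, approx. turn angle in degrees)
        ("hall_A", "junction_1", 18.0, 0.0),
        ("junction_1", "lab_entrance", 7.5, 90.0),
    ],
}

def neighbors(topo_map, node):
    """Passageways leaving `node`, with their attributes."""
    return [(b, length, angle)
            for a, b, length, angle in topo_map["edges"] if a == node]

print(neighbors(attributed_map, "junction_1"))
```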
This behavior-based approach, which was partly inspired by biological models, was found to be more flexible and more practical than the coordinate-based one. It is largely independent of accurate sensors and
calibrated cameras; also, an attributed topological map is much easier to generate than an accurate
geometrical map [Wershofen 96], [Wershofen, Graefe 96]. Modularity is another advantage of our
behavior-based approach; it makes a behavior-based robot a good basis for studying various aspects of
machine intelligence, including learning, in real-world experiments.
Vision-based Learning Robots
As a first step toward the realization of an intelligent mobile robot, the acquisition of knowledge regarding the topology and geometry of a network of passageways by a learning mobile robot was demonstrated [Wershofen, Graefe 93a, c], [Wershofen 96]. A behavior-based robot proved especially suitable for such investigations
because of its modularity and the nature of its internal knowledge representation. During an exploration
run the robot generated an attributed topological map of a network of passageways. After that it was able
to navigate autonomously in the explored environment.
Another example of a vision-based learning robot is a manipulator that is able to grasp objects in 3-D space without any knowledge of the size of its arm or of the internal and external camera parameters [Graefe, Ta 95]. It uses a novel method for stereo vision, termed "object- and behavior-oriented robot vision" [Graefe 95b]. Key to this method is a direct transition from image coordinates to the control word space of the robot. This eliminates cumbersome computations, such as inverse kinematics and inverse perspective, which would require detailed and accurate knowledge of numerous system
parameters.
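To illustrate the flavor of such a direct transition (not the published method itself), the sketch below fits a plain least-squares linear map from observed image coordinates of the gripper to the control words that produced them; a target can then be approached by passing its image coordinates through the fitted map, with no kinematics and no camera parameters.

```python
# Heavily hedged sketch: learn image -> control-word mapping from samples
# of the robot observing its own gripper; all data here are simulated.
import numpy as np

rng = np.random.default_rng(2)

# Training samples: control words u (3 joints) and where the gripper then
# appeared in both images (u_left, v_left, u_right, v_right).
U = rng.uniform(-1, 1, size=(50, 3))
A_true = rng.normal(size=(3, 4))
F = U @ A_true + 0.01 * rng.normal(size=(50, 4))   # simulated observations

# Fit the inverse mapping, image features -> control words.
M, *_ = np.linalg.lstsq(F, U, rcond=None)

target_features = np.array([0.2, -0.1, 0.15, -0.05])  # target in both images
control_word = target_features @ M
print(control_word)
```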
A similar approach may be utilized for the navigation of vision-based mobile robots [Graefe 97].
Approaches to object recognition utilizing machine learning were developed by [Efenberger 96]. On the basis of the previously mentioned size-normalized 2-D object representation, he succeeded in
building a knowledge base of object descriptions by machine learning and in recognizing previously seen
objects when they appeared again, either in laboratory scenes or in natural outdoor-scenes [Efenberger,
Graefe 96].
Machine learning is of great practical relevance. It makes it possible to introduce an intelligent learning robot into a new operating environment with relatively little effort. Moreover, such a robot adapts itself automatically to changes in its internal characteristics (caused, e.g., by degradation of its parts or by maintenance activities) as well as to changes in its environment.
Key Publications
Efenberger, W.; Graefe, V. (1996): Distance-invariant Object Recognition in Natural Scenes. Proceedings, IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS '96. Osaka, pp 1433-1439.
Graefe, V. (1989b): Dynamic Vision Systems for Autonomous Mobile Robots. Proc. IEEE/RSJ International Workshop on Intelligent Robots and Systems, IROS '89. Tsukuba, pp 12-23.
Graefe, V. (1992a): Vision for Autonomous Mobile Robots. Proceedings, IEEE Workshop on Advanced
Motion Control. Nagoya, pp 57-64.
Graefe, V. (1992c): Visual Recognition of Traffic Situations by a Robot Car Driver. Proceedings, 25th
ISATA; Conference on Mechatronics. Florence, pp 439-446.
Graefe, V. (1993): Vision for Intelligent Road Vehicles. Proceedings, IEEE Symposium on Intelligent
Vehicles. Tokyo, pp 135-140.
Graefe, V. (1994): Echtzeit-Bildverarbeitung für ein Fahrer-Unterstützungssystem zum Einsatz auf
Autobahnen. Informationstechnik und Technische Informatik 1/94, Sonderheft Robotik, pp 16–24.
Graefe, V. (1995b): Object- and Behavior-oriented Stereo Vision for Robust and Adaptive Robot Control. International Symposium on Microsystems, Intelligent Materials, and Robots, Sendai,
pp 560-563.
Graefe, V.; Kuhnert, K.-D. (1992): Vision-based Autonomous Road Vehicles. In I. Masaki (Ed.):
Vision-based Vehicle Guidance. Springer-Verlag, pp 1-29.
Huber, J.; Graefe, V. (1991): Quantitative Interpretation of Image Velocities in Real Time. IEEE
Workshop on Visual Motion. Princeton, pp 211-216.
Kuhnert, K.-D. (1986a): A Model-driven Image Analysis System for Vehicle Guidance in Real Time.
Proceedings, Second International Electronic Image Week. CESTA, Nice, pp 216-221.
Regensburger, U.; Graefe, V. (1994): Visual Recognition of Obstacles on Roads. Proceedings,
IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, IROS '94. München,
pp 980-987. Also in V. Graefe (ed.): Intelligent Robots and Systems. Amsterdam: Elsevier, 1995, pp
73-86.
Solder, U.; Graefe, V. (1993): Visual Detection of Distant Objects. IROS '93. Yokohama,
pp 1042-1049.
Wershofen, K. P., Graefe, V. (1992): An Intelligent Autonomous Vehicle Guided by Behavior-based
Navigation. IFToMM-jc International Symposium on Theory of Machines and Mechanisms. Nagoya,
pp 244-249.
Wershofen, K. P., Graefe, V. (1993b): A Vision-based Mobile Robot as a Test Bed for the Study of
Learning. WWW on Learning and Adaptive Systems. Nagoya, pp 110-117.
Dissertations
Efenberger, W. (1996): Zur Objekterkennung für Fahrzeuge durch Echtzeit-Rechnersehen.
Haas, G. (1982): Meßwertgewinnung durch Echtzeitauswertung von Bildfolgen.
Huber, J. (1993): Beiträge zur Gewinnung und zur räumlichen Deutung von Bildfolgen in Echtzeit.
Kuhnert, K.-D. (1988): Zur Echtzeit-Bildfolgenanalyse mit Vorwissen.
Meier, H. (1993): Zum Entwurf eines PC-basierten Multiprozessorsystems für die Echtzeit-Verarbeitung monochromer und farbiger Videobildfolgen.
Regensburger, U. (1994): Zur Erkennung von Hindernissen in der Bahn eines autonomen Straßenfahrzeugs durch maschinelles Echtzeitsehen.
Solder, U. (1992): Echtzeitfähige Entdeckung von Objekten in der weiten Vorausschau eines Straßenfahrzeugs.
Tsinas, L. (1996): Zum Einsatz von Farbinformation beim maschinellen Erkennen von Verkehrssituationen.
Wershofen, K. P. (1996): Zur Navigation sehender mobiler Roboter in Wegenetzen von Gebäuden –
Ein objektorientierter verhaltensbasierter Ansatz.
Selected Publications
- - - 1982 - 1986 - - -
Haas, G. (1982): Meßwertgewinnung durch Echtzeitauswertung von Bildfolgen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Graefe, V. (1983a): A Preprocessor for the Real-time Interpretation of Dynamic Scenes. In T. S. Huang
(ed.): Image Sequence Processing and Dynamic Scene Analysis, Springer-Verlag, pp 519-531.
Graefe, V. (1983c): On the Representation of Moving Objects in Real-time Computer Vision Systems.
In A. G. Tescher (ed.): Applications of Digital Image Processing VI. Proceedings of the SPIE, Vol.
432, pp 129-132.
Haas, G.; Graefe, V. (1983): Locating Fast-moving Objects in TV Images in the Presence of Motion
Blur. In A. Oosterlinck and A. G. Tescher (eds.): Applications of Digital Image Processing V.
Proceedings of the SPIE, Vol. 397, pp 440-446.
Graefe, V. (1984): Two Multi-processor Systems for Low-level Real-time Vision. In J. M. Brady et al.
(eds.): Robotics and Artificial Intelligence, Springer-Verlag, pp 301-308.
Kuhnert, K.-D. (1984): Towards the Objective Evaluation of Low-level Vision Operators. In T. O'Shea
(ed.): ECAI 84, Proc. Sixth European Conference on Artificial Intelligence, Pisa, p 657.
Kuhnert, K.-D.; Zapp, A. (1985): Wissensgesteuerte Bildfolgenauswertung zur automatischen Führung von Straßenfahrzeugen in Echtzeit. In H. Niemann (Ed.): Mustererkennung 1985. Informatik
Fachberichte 107, Springer-Verlag, pp 102-106.
Kuhnert, K.-D. (1986a): A Model-driven Image Analysis System for Vehicle Guidance in Real Time.
Proceedings, Second International Electronic Image Week. CESTA, Nice, pp 216-221.
Kuhnert, K.-D. (1986b): A Vision System for Real-time Road and Object Recognition for Vehicle
Guidance. In W. J. Wolfe, N. Marquina (eds.): Mobile Robots. Proceedings of the SPIE, Vol. 727,
pp 267-272.
Kuhnert, K.-D. (1986c): Comparison of Intelligent Real-time Algorithms for Guiding an Autonomous
Vehicle. In L. O. Hertzberger (ed.): Proceedings, Intelligent Autonomous Systems, Amsterdam.
- - - 1988 - - -
Dickmanns, E. D.; Graefe, V. (1988a): Dynamic Monocular Machine Vision. Int. J. Machine Vision
and Applications. Vol. 1 (1988), pp 223-240.
Dickmanns, E.D.; Graefe, V. (1988b): Applications of Dynamic Monocular Machine Vision. Int. J.
Machine Vision and Applications. Vol. 1 (1988), pp 241-261.
Graefe, V.; Kuhnert, K.-D. (1988a): A High-speed Image Processing System Utilized in Autonomous
Vehicle Guidance. Proc. IAPR Workshop on Computer Vision. Tokyo, pp 10-13.
Graefe, V.; Kuhnert, K.-D. (1988b): Towards a Vision-based Robot with a Driver's License. Proceedings, IEEE/RSJ International Workshop on Intelligent Robots and Systems, IROS '88. Tokyo, pp
627-632. Reprinted in I. Masaki (ed.): Vision-based Vehicle Guidance. Springer-Verlag (1992), pp
1-29.
Graefe, V.; Regensburger, U.; Solder, U. (1988): Visuelle Entdeckung und Vermessung von Objekten
in der Bahn eines autonom mobilen Systems. In H. Bunke et al. (Eds.): Mustererkennung 1988. Informatik-Fachberichte 180, Springer, pp 312-318.
Kuhnert, K.-D. (1988): Zur Echtzeit-Bildfolgenanalyse mit Vorwissen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Kuhnert, K.-D.; Graefe, V. (1988): Vision Systems for Autonomous Mobility. Proceedings, IEEE/RSJ
International Workshop on Intelligent Robots and Systems, IROS '88. Tokyo, pp 477-482.
- - - 1989 - - -
Graefe, V. (1989a): A Flexible Semi-automatic Program Generator for Dynamic Vision Systems.
Proceedings, International Workshop on Industrial Applications of Machine Intelligence and Vision
– MIV 89. Tokyo, pp 100-105.
Graefe, V. (1989b): Dynamic Vision Systems for Autonomous Mobile Robots. Proc. IEEE/RSJ International Workshop on Intelligent Robots and Systems, IROS '89. Tsukuba, pp 12-23.
Graefe, V. (1989d): A Processing Architecture for Sensor Fusion in Vision-guided Mobile Robots.
International Advanced Robotics Programme – Proceedings of the First Workshop on Multi-sensor
Fusion and Environment Modelling. Toulouse, LAAS.
Kuhnert, K.-D. (1989a): Sensor Modeling as Basis of Subpixel Image Processing. In J. Duvernay (ed.):
Image Processing III. Proceedings of the SPIE, Vol. 1135.
Kuhnert, K.-D. (1989b): Real-time Suited Road Border Recognition Utilizing a Neural Network
Technique. Proceedings, IEEE/RSJ International Workshop on Intelligent Robots and Systems,
IROS '89. Tsukuba, pp 358-365.
- - - 1990 - - -
Graefe, V. (1990a): An Approach to Obstacle Recognition for Autonomous Mobile Robots. IEEE/RSJ
International Workshop on Intelligent Robots and Systems, IROS '90. Tsuchiura, pp 151-158.
Graefe, V. (1990b): Precise Range Measurement by Monocular Stereo Vision. Japan-USA Symposium
on Flexible Automation. Kyoto, pp 1321-1324.
Graefe, V. (1990c): The BVV Family of Robot Vision Systems. In O. Kaynak (ed.): Proceedings of the
IEEE Workshop on Intelligent Motion Control. Istanbul, pp IP55-IP65. Reprinted in I. Masaki (ed.):
Vision-based Vehicle Guidance. Springer-Verlag (1992), pp 1-29.
Huber, J.; Graefe, V. (1990): Subpixelauflösung und Bewegungsstereo für die räumliche Deutung von
Bildfolgen. Fachgespräch Autonome Mobile Systeme. Karlsruhe, pp 173-184.
Kuhnert, K.-D. (1990a): Fusing Dynamic Vision and Landmark Navigation for Autonomous Driving.
IEEE/RSJ International Workshop on Intelligent Robots and Systems, IROS '90. Tsuchiura, pp 113-119.
Kuhnert, K.-D. (1990b): Dynamic Vision Guides the Autonomous Vehicle ATHENE. Japan-USA
Symposium on Flexible Automation. Kyoto, pp 507-510.
Kuhnert, K.-D.; Wershofen, K. P. (1990): Echtzeit-Rechnersehen auf der Experimental-Plattform
ATHENE. Fachgespräch Autonome Mobile Systeme. Karlsruhe, pp 59-68.
Regensburger, U.; Graefe, V. (1990): Object Classification for Obstacle Avoidance. Proceedings of the
SPIE Symposium on Advances in Intelligent Systems. Boston, pp 112-119.
Solder, U.; Graefe, V. (1990): Object Detection in Real Time. Proceedings of the SPIE Symposium on
Advances in Intelligent Systems. Boston, pp 104-111.
- - - 1991 - - -
Blöchl, B.; Behrends, J.-U. (1991): Link-Verbindung eines Multiprozessor-Bildverarbeitungssystems
mit einem Transputercluster. TAT'91. Aachen, September.
Graefe, V. (1991): Robot Vision Based on Coarsely-grained Multi-processor Systems. In R. Vichnevetzky, J.J.H. Miller (eds.): Proc. IMACS World Congress. Dublin, pp 755-756.
Graefe, V.; Fleder, K. (1991): A Powerful and Flexible Co-Processor for Feature Extraction in a Robot
Vision System. International Conference on Industrial Electronics, Control Instrumentation and
Automation (IECON '91). Kobe, pp 2019-2024.
Graefe, V.; Jacobs, U. (1991): Detection of Passing Vehicles by a Robot Car Driver. IEEE/RSJ
International Workshop on Intelligent Robots and Systems IROS '91. Osaka, pp 391-396.
Graefe, V.; Wershofen, K. P. (1991): Robot Navigation and Environmental Modelling. International
Advanced Robotics Programme – Proceedings, Second Workshop on Multi-sensor Fusion and
Environment Modelling. Oxford, September.
Huber, J.; Graefe, V. (1991): Quantitative Interpretation of Image Velocities in Real Time. IEEE
Workshop on Visual Motion. Princeton, pp 211-216.
- - - 1992 - - -
Albert, M.; Meier, H. (1992): Dynamisches Rechnersehen zur verhaltensbasierten Landmarkennavigation. 8. Fachgespräch über Autonome Mobile Systeme. Karlsruhe, pp 15-33.
Blöchl, B., Tsinas, L. (1992): Object Recognition in Traffic Scenes by Neural Networks. International
Conference on Artificial Neural Networks. Brighton, pp 1671-1674.
Efenberger, W.; Ta, Q.; Tsinas, L.; Graefe, V. (1992): Automatic Recognition of Vehicles Approaching from Behind. Proc., IEEE Symposium on Intelligent Vehicles '92. Detroit, pp 57-62.
Graefe, V. (1992a): Vision for Autonomous Mobile Robots. Proceedings, IEEE Workshop on Advanced
Motion Control. Nagoya, pp 57-64.
Graefe, V. (1992b): Driverless Highway Vehicles. Proceedings, International Hi-Tech Forum. Osaka,
pp 86-95.
Graefe, V. (1992c): Visual Recognition of Traffic Situations by a Robot Car Driver. Proceedings, 25th
ISATA; Conference on Mechatronics. Florence, pp 439-446.
Graefe, V.; Kuhnert, K.-D. (1992): Vision-based Autonomous Road Vehicles. In I. Masaki (Ed.):
Vision-based Vehicle Guidance. Springer-Verlag, pp 1-29.
Liu, F. Y.; Wershofen, K. P. (1992): An Approach to the Robust Classification of Pathway Images for
Autonomous Mobile Robots. Proceedings, IEEE International Symposium on Industrial Electronics.
Xian, pp 390-394.
Solder, U. (1992): Echtzeitfähige Entdeckung von Objekten in der weiten Vorausschau eines Straßenfahrzeugs. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr
München.
Tsinas, L., Graefe, V. (1992): Automatic Recognition of Lanes for Highway Driving. IFAC Conference
on Motion Control for Intelligent Automation. Perugia, pp 295-300.
Wershofen, K. P. (1992): Real-time Road Scene Classification Based on a Multiple-lane Tracker.
Proceedings, International Conference on Industrial Electronics, Control, Instrumentation and Automation (IECON '92). San Diego, pp 746-751.
Wershofen, K. P., Graefe, V. (1992): An Intelligent Autonomous Vehicle Guided by Behavior-based
Navigation. IFToMM-jc International Symposium on Theory of Machines and Mechanisms. Nagoya,
pp 244-249.
- - - 1993 - - -
Blöchl, B. (1993): Fuzzy Control in Real Time for Vision-guided Autonomous Mobile Robots. In E. P.
Klement, W. Slany (Ed.): Fuzzy Logic in Artificial Intelligence. Berlin: Springer, pp 114-125.
Blöchl, B.; Tsinas, L. (1993): A Simulated Neural Network for Object Recognition in Real Time. (5th
Irish Conference on AI). In K. Ryan, F. E. Sutcliffe (eds): AI and Cognitive Science '92. Berlin:
Springer, pp 303-306.
Graefe, V. (1993): Vision for Intelligent Road Vehicles. Proceedings, IEEE Symposium on Intelligent
Vehicles. Tokyo, pp 135-140.
Graefe, V.; Meier, H. (1993): A PC-based Multi-processor Robot Vision System. International Conference on Industrial Electronics, Control, Instrumentation and Automation (IECON '93). Maui, pp 1430-1435.
Graefe, V.; Wershofen, K. P.; Huber, J. (1993): Dynamic Vision for Precise Depth Measurement and
Robot Control. In Braggings, D. W.: Computer Vision for Industry. SPIE, Vol 1989. München, pp
146-155.
Huber, J. (1993): Beiträge zur Gewinnung und zur räumlichen Deutung von Bildfolgen in Echtzeit.
Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Huber, J.; Graefe, V. (1993): Motion Stereo for Mobile Robots. IEEE International Symposium on
Industrial Electronics ISIE '93. Budapest, pp 263-270.
Meier, H. (1993): Zum Entwurf eines PC-basierten Multiprozessorsystems für die Echtzeit-Verarbeitung monochromer und farbiger Videobildfolgen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Solder, U.; Graefe, V. (1993): Visual Detection of Distant Objects. IROS '93. Yokohama,
pp 1042-1049.
Tsinas, L.; Graefe, V. (1993): Coupled Neural Networks for Real-time Road and Obstacle Recognition
by Intelligent Road Vehicles. International Conference on Neural Networks, Nagoya, pp 2081-2084.
Tsinas, L.; Meier, H.; Efenberger, W. (1993): Farbgestützte Verfolgung von Objekten mit dem PC-basierten Multiprozessorsystem BVV 4. In S. J. Pöppl, H. Handels (Eds.): Mustererkennung 1993;
Reihe Informatik aktuell. Springer, pp 741-748.
Wershofen, K. P., Graefe, V. (1993a): Ein verhaltensbasierter Ansatz zur Steuerung sehender mobiler
Roboter. Fachtagung intelligente Steuerung und Regelung von Robotern; VDI-Berichte Nr. 1094.
VDI-Verlag, pp 441-450.
Wershofen, K. P., Graefe, V. (1993b): A Vision-based Mobile Robot as a Test Bed for the Study of
Learning. WWW on Learning and Adaptive Systems. Nagoya, pp 110-117.
- - - 1994 - - -
Graefe, V. (1994): Echtzeit-Bildverarbeitung für ein Fahrer-Unterstützungssystem zum Einsatz auf
Autobahnen. Informationstechnik und Technische Informatik 1/94, Sonderheft Robotik, pp 16–24.
Regensburger, U. (1994): Zur Erkennung von Hindernissen in der Bahn eines autonomen Straßenfahrzeugs durch maschinelles Echtzeitsehen. Dissertation, Fakultät für Luft- und Raumfahrttechnik
der Universität der Bundeswehr München.
Regensburger, U.; Graefe, V. (1994): Visual Recognition of Obstacles on Roads. Proceedings,
IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, IROS '94. München,
pp 980-987. Also in V. Graefe (ed.): Intelligent Robots and Systems. Amsterdam: Elsevier, 1995, pp
73-86.
Tsinas, L.; Dachwald, B. (1994): A Combined Neural and Genetic Learning Algorithm. IEEE Conference on Evolutionary Computation, Orlando, pp 770-774.
- - - 1995 - - -
Graefe, V. (1995a): Merkmalsextraktion, Objekterkennung, Echtzeit-Bildverarbeitungssysteme. In H.-H. Nagel (Ed.): Sichtsystemgestützte Fahrzeugführung und Fahrer-Fahrzeug-Wechselwirkung. Sankt
Augustin: Infix, pp 121-192.
Graefe, V. (1995b): Object- and Behavior-oriented Stereo Vision for Robust and Adaptive Robot Control. International Symposium on Microsystems, Intelligent Materials, and Robots, Sendai,
pp 560-563.
Graefe, V.; Albert, A. (1995): Automatic, Situation-dependent Control of the Sensitivity of a Camera
for a Vision-guided Mobile Robot. Reports on Researches and Developments; Foundation for Promotion of Advanced Automation Technology 1995, pp 17-20.
Graefe, V.; Ta, Q. (1995): An Approach to Self-learning Manipulator Control Based on Vision.
IMEKO International Symposium on Measurement and Control in Robotics, ISMCR '95. Smolenice,
pp 409-414.
Regensburger, U.; Graefe, V. (1995): Visual Recognition of Obstacles on Roads. In V. Graefe (ed.):
Intelligent Robots and Systems. Amsterdam: Elsevier, pp 73-86.
- - - 1996 - - -
Bischoff, R.; Graefe, V.; Wershofen, K. P. (1996a): Combining Object-Oriented Vision and Behavior-Based Robot Control. Robotics, Vision ... for Industrial Automation. Ipoh, pp 222-227.
Bischoff, R.; Graefe, V.; Wershofen, K. P. (1996b): Object-Oriented Vision for Behavior-Based Robots. In D. Casasent (ed.): Intelligent Robots and Computer Vision XV, Proceedings of the SPIE, Vol 2904. Boston, pp 278-289.
Efenberger, W. (1996): Zur Objekterkennung für Fahrzeuge durch Echtzeit-Rechnersehen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Efenberger, W.; Graefe, V. (1996): Distance-invariant Object Recognition in Natural Scenes. Proceedings, IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS '96. Osaka, pp 1433-1439.
Graefe, V.; Efenberger, W. (1996a): Distance-Invariant Object Recognition in Natural Scenes. Robotics Adria Alps Danube. Budapest, pp 609-614.
Graefe, V.; Efenberger, W. (1996b): A Novel Approach for the Detection of Vehicles on Freeways by
Real-Time Vision. International Symposium on Intelligent Vehicles. Tokyo, pp 363-368.
Tsinas, L. (1996): Zum Einsatz von Farbinformation beim maschinellen Erkennen von Verkehrssituationen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr
München.
Tsinas, L.; Graefe, V. (1996): Real-Time Recognition of Signaling Lights in Road Traffic. Proceedings, IAPR Workshop on Machine Vision Applications '96. Tokyo, pp 71-74.
Wershofen, K. P. (1996): Zur Navigation sehender mobiler Roboter in Wegenetzen von Gebäuden –
Ein objektorientierter verhaltensbasierter Ansatz. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.
Wershofen, K. P.; Graefe, V. (1996): Situationserkennung als Grundlage der Verhaltenssteuerung
eines mobilen Roboters. In Schmidt, G., Freyberger, F. (Hrsg.): Autonome Mobile Systeme. Springer,
pp. 170-179.
- - - 1997 / Under Preparation - - -
Graefe, V. (1997): Robot Vision Without Calibration. XIV IMEKO World Congress. Tampere, 6/97.