Mobile robotic systems need to perceive their surroundings in order to act independently. In this work, a perception framework is developed that interprets the data of a binocular camera and transforms it into a compact, expressive model of the environment. This model enables a mobile system to move in a targeted way and to interact with its surroundings. It is further shown how the developed methods provide a solid basis for technical assistive aids for visually impaired people.
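The binocular (stereo) camera mentioned above recovers depth from disparity between the two views. A minimal sketch of that relationship, with illustrative focal length and baseline values that are not taken from the work itself:

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.
# The focal length (pixels), baseline (meters), and disparities below are
# hypothetical example values, not parameters of the described framework.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return depth in meters for a disparity given in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with larger disparity lies closer to the camera.
near = depth_from_disparity(40.0)   # 700 * 0.12 / 40 = 2.1 m
far = depth_from_disparity(10.0)    # 700 * 0.12 / 10 = 8.4 m
```

Applying this per pixel to a disparity map yields the dense 3D data from which a compact environment model can be built.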
This work develops a motion planner that compensates for deficiencies of perception modules by exploiting the reactive capabilities of the vehicle. The work analyzes the uncertainties present and defines driving objectives together with constraints that ensure safety. The resulting problem is solved in real time in two distinct ways: first with nonlinear optimization, and second by framing it as a partially observable Markov decision process and approximating the solution with sampling.
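The second strategy, approximating a partially observable problem by sampling, can be illustrated with a toy example: represent the belief over an obstacle's position by samples and score each candidate action by its expected cost over those samples. The cost terms, dynamics, and numbers below are invented for illustration, not the formulation used in the work:

```python
import random

# Toy sampling-based approximation of planning under partial observability:
# the belief over an obstacle's longitudinal position is a set of samples,
# and each candidate acceleration is scored by its average cost over them.
random.seed(0)

def expected_cost(action, belief_samples, dt=1.0, ego_pos=0.0, ego_speed=10.0):
    total = 0.0
    for obstacle_pos in belief_samples:
        ego_next = ego_pos + (ego_speed + action * dt) * dt
        gap = obstacle_pos - ego_next
        collision_cost = 1000.0 if gap < 5.0 else 0.0  # hard safety penalty
        comfort_cost = action ** 2                     # prefer mild inputs
        progress_cost = -0.1 * ego_next                # reward making progress
        total += collision_cost + comfort_cost + progress_cost
    return total / len(belief_samples)

# Belief: an obstacle roughly 14 m ahead, position uncertain.
belief = [random.gauss(14.0, 1.0) for _ in range(200)]
actions = [-4.0, -2.0, 0.0, 2.0]  # candidate accelerations in m/s^2
best = min(actions, key=lambda a: expected_cost(a, belief))
```

With the uncertainty taken into account, the sampled costs favor braking (`best` is the strongest deceleration here), whereas planning on the mean obstacle position alone would accept a riskier action.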
Multi-camera systems are being deployed in a variety of vehicles and mobile robots today. To eliminate the need for cost- and labor-intensive maintenance and calibration, continuous self-calibration is highly desirable. In this book we present such an approach for the self-calibration of multi-camera systems for vehicle surround sensing. In an extensive evaluation, we assess our algorithm quantitatively using real-world data.
This work describes an approach to lane-precise localization on current digital maps. A particle filter fuses data from production-vehicle sensors such as GPS, radar, and camera. Evaluations on more than 200 km of data show that the proposed algorithm can reliably determine the current lane. Furthermore, a possible architecture for an intuitive route guidance system based on Augmented Reality is proposed, together with a lane-change recommendation for unclear situations.
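The core idea of particle-filter lane determination can be sketched as filtering the vehicle's lateral position and reading off the lane it implies. The lane width, noise levels, and single GPS-like measurement model below are illustrative assumptions, not the sensor fusion actually proposed:

```python
import math
import random

# Sketch: a particle filter over the vehicle's lateral position, updated with
# simulated noisy lateral measurements; the estimated lane is the one that
# most particles fall into. All parameters here are hypothetical.
random.seed(1)
LANE_WIDTH = 3.5   # meters, assumed
NUM_LANES = 3

def lane_of(y):
    """Map a lateral position in meters to a lane index 0..NUM_LANES-1."""
    return max(0, min(NUM_LANES - 1, int(y // LANE_WIDTH)))

def particle_filter_lane(measurements, n=500, meas_sigma=1.5):
    particles = [random.uniform(0.0, NUM_LANES * LANE_WIDTH) for _ in range(n)]
    for z in measurements:
        # Predict: small random lateral drift per step.
        particles = [p + random.gauss(0.0, 0.2) for p in particles]
        # Update: weight each particle by its likelihood under the measurement.
        weights = [math.exp(-0.5 * ((p - z) / meas_sigma) ** 2) for p in particles]
        # Resample proportionally to the weights.
        particles = random.choices(particles, weights=weights, k=n)
    # Estimate: the most frequent lane among the particles.
    counts = [0] * NUM_LANES
    for p in particles:
        counts[lane_of(p)] += 1
    return counts.index(max(counts))

# Vehicle driving in the middle lane (lane 1, center at 5.25 m).
zs = [random.gauss(5.25, 1.0) for _ in range(20)]
estimated_lane = particle_filter_lane(zs)
```

In the actual system, the likelihood step would fuse several sensors (GPS, radar, camera-detected lane markings) rather than a single lateral measurement.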
In motion planning for automated vehicles, thorough consideration of uncertainty is crucial to facilitate safe and convenient driving behavior. This work presents three motion planning approaches targeted at the predominant uncertainties in different scenarios, along with an extended safety verification framework. The approaches consider uncertainties from imperfect perception, occlusions, and limited sensor range, as well as those in the behavior of other traffic participants.
This work presents a behavior planning algorithm for automated driving in urban environments, which are uncertain and dynamic by nature. The algorithm explicitly considers prediction uncertainty (e.g., different intentions), perception uncertainty (e.g., occlusions), and the uncertain, interactive behavior of the other agents. Simulating the most likely future scenarios makes it possible to find an optimal policy online, enabling non-conservative planning under uncertainty.
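Evaluating policies against the most likely scenarios can be reduced to a small expected-cost computation: each scenario is a hypothesized intention of another agent with an associated probability, and each ego policy is scored by its probability-weighted cost. The scenarios, policies, and cost table below are invented for illustration:

```python
# Toy scenario-weighted policy evaluation. An unprotected-turn-like situation:
# the other agent either yields or goes; the ego either proceeds or waits.
# Probabilities and costs are hypothetical example values.

scenarios = {"yield": 0.7, "go": 0.3}        # other agent's intention: P
cost = {                                      # cost[policy][scenario]
    "proceed": {"yield": 1.0, "go": 50.0},    # fast, but risky if the agent goes
    "wait":    {"yield": 5.0, "go": 5.0},     # safe, but loses time
}

def expected_cost(policy):
    return sum(p * cost[policy][s] for s, p in scenarios.items())

best_policy = min(cost, key=expected_cost)
# proceed: 0.7 * 1.0 + 0.3 * 50.0 = 15.7;  wait: 5.0 -> "wait" wins here.
```

The non-conservative aspect comes from the probabilities: if observed behavior raises P(yield) high enough, "proceed" becomes optimal instead of always defaulting to the cautious policy.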
This work proposes novel approaches to object tracking in challenging scenarios such as severe occlusion, deteriorated vision, and long-range multi-object reidentification. All of these solutions rely solely on image sequences captured by a monocular camera and require no additional sensors. Experiments on standard benchmarks demonstrate state-of-the-art performance. Because all the presented approaches are designed for efficiency, they run at real-time speed.
Human Recognition in Unconstrained Environments provides a unique picture of the complete 'in-the-wild' biometric recognition processing chain, from data acquisition through detection, segmentation, encoding, and matching, to reactions against security incidents.

Coverage includes:
- Data hardware architecture fundamentals
- Background subtraction of humans in outdoor scenes
- Camera synchronization
- Biometric traits: real-time detection and data segmentation
- Biometric traits: feature encoding and matching
- Fusion at different levels
- Reaction against security incidents
- Ethical issues in non-cooperative biometric recognition in public spaces

With this book readers will learn how to:
- Use computer vision, pattern recognition, and machine learning methods for biometric recognition in real-world, real-time settings, especially those related to forensics and security
- Choose the biometric traits and recognition methods best suited to uncontrolled settings
- Evaluate the performance of a biometric system on real-world data

The book presents a complete picture of the biometric recognition processing chain, ranging from data acquisition to the reaction procedures against security incidents; provides the specific requirements and issues behind each typical phase of developing a robust biometric recognition system; and contextualizes the ethical and privacy issues behind the development of a covert recognition system that can be used for forensics and security activities.
This book constitutes the refereed proceedings of the First Pacific Rim Symposium on Image and Video Technology, PSIVT 2006, held in Hsinchu, Taiwan, in December 2006. The 76 revised full papers and 58 revised poster papers cover a wide range of topics in image, video, and multimedia technology, from both technical and artistic perspectives and addressing both theoretical and practical issues.