Many robotics researchers consider high-level vision algorithms too computationally expensive for use in robot guidance. This book introduces the reader to an alternative approach to perception for autonomous, mobile robots. It explores how to apply methods of high-level computer vision and fuzzy logic to the guidance and control of a mobile robot. The book presents a knowledge-based approach to vision modeling for robot guidance, in which advantage is taken of constraints imposed by the robot's physical structure, the tasks it performs, and the environments in which it works. These constraints make high-level computer vision algorithms such as object recognition fast enough for real-time navigation. The text presents algorithms that exploit these constraints at all levels of vision, from image processing to model construction, matching, and shape recovery. These algorithms are demonstrated in the navigation of a wheeled mobile robot.
This textbook offers a tutorial introduction to robotics and computer vision that is light and easy to absorb. The practice of robotic vision involves the application of computational algorithms to data, and over the fairly recent history of the fields of robotics and computer vision a very large body of algorithms has been developed. However, this body of knowledge is something of a barrier for anybody entering the field, or even deciding whether to enter it: What is the right algorithm for a particular problem? And, importantly, how can I try it out without spending days coding and debugging it from the original research papers? The author has maintained two open-source MATLAB Toolboxes for more than 10 years: one for robotics and one for vision. The key strength of the Toolboxes is that they provide a set of tools that allow the user to work with real problems, not trivial examples. For the student, the book makes the algorithms accessible, the Toolbox code can be read to gain understanding, and the examples illustrate how it can be used: instant gratification in just a couple of lines of MATLAB code. The code can also be the starting point for new work, for researchers or students, by writing programs based on Toolbox functions or modifying the Toolbox code itself. The purpose of this book is to expand on the tutorial material provided with the Toolboxes, add many more examples, and weave this into a narrative that covers robotics and computer vision separately and together, hopefully inspiring up-and-coming researchers. The author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by the real problems the author has observed over many years as a practitioner of both robotics and computer vision. Written in a light but informative style, the book is easy to read and absorb, and includes many MATLAB examples and figures. It is a real walk through the fundamentals of light and color, camera modeling, image processing, feature extraction, and multi-view geometry, bringing it all together in a visual servo system. “An authoritative book, reaching across fields, thoughtfully conceived and brilliantly accomplished.” (Oussama Khatib, Stanford)
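To make the "couple of lines of MATLAB" claim concrete, here is a minimal sketch using the Robotics Toolbox (an illustration assuming the Toolbox is installed and on the MATLAB path; the function and variable names follow Toolbox releases 9/10 and may differ in other versions):

    mdl_puma560           % script that defines the robot object p560 and poses qz, qn
    T = p560.fkine(qn)    % forward kinematics: end-effector pose at the nominal pose qn
    p560.plot(qn)         % render the arm in that configuration

Three lines suffice to load a classic six-axis arm model, compute its forward kinematics, and visualize the result.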
Most industrial robots today have little or no sensory capability. Feedback is limited to information about joint positions, combined with a few interlock and timing signals. These robots can function only in an environment where the objects to be manipulated are precisely located in the proper position for the robot to grasp (i.e., in a structured environment). For many present industrial applications, this level of performance has been adequate. With the increasing demand for high-performance sensor-based robot manipulators in assembly tasks, this challenge can only be met through the consideration of: 1) efficient acquisition and processing of internal/external sensory information; 2) utilization and integration of sensory information from various sensors (tactile, force, and vision) to acquire knowledge in a changing environment; 3) exploitation of inherently parallel robotic algorithms and efficient VLSI architectures for robotic computations; and finally 4) system integration into a working, functioning robotic system. This is the intent of the Workshop on Sensor-Based Robots: Algorithms and Architectures, namely to study the fundamental research issues and problems associated with sensor-based robot manipulators and to propose approaches and solutions, from various viewpoints, for improving present-day robot manipulators in the areas of sensor fusion and integration, sensory information processing, and parallel algorithms and architectures for robotic computations.
This book develops the core system science needed to enable the development of complex industrial Internet of Things/manufacturing cyber-physical systems (IIoT/M-CPS). Gathering contributions from leading experts in the field with years of experience in advancing manufacturing, it fosters a research community committed to advancing research and education in IIoT/M-CPS and to translating applicable science and technology into engineering practice. Presenting the current state of IIoT and the concept of cybermanufacturing, this book is at the nexus of research advances from the engineering and computer and information science domains. Readers will acquire the core system science needed for the transformation to cybermanufacturing, spanning the full spectrum from ideation to physical realization.
This book provides an overview of model-based environmental visual perception for humanoid robots. The visual perception of a humanoid robot creates a bidirectional bridge connecting sensor signals with internal representations of environmental objects. The objective of such perception systems is to answer two fundamental questions: What is it, and where is it? To answer these questions via the sensor-to-representation bridge, coordinated processes are conducted to extract and exploit cues matching the robot's internal representations to physical entities. These include sensor & actuator modeling, calibration, filtering, and feature extraction for state estimation. This book discusses the following topics in depth:

• Active Sensing: Robust probabilistic methods for optimal, high-dynamic-range image acquisition that are suitable for use with inexpensive cameras. This enables reliable sensing in the arbitrary environmental conditions encountered in human-centric spaces. The book quantitatively shows the importance of equipping robots with dependable visual sensing.

• Feature Extraction & Recognition: Parameter-free edge extraction methods based on structural graphs that represent geometric primitives effectively and efficiently. This is done by eccentricity segmentation, which provides excellent recognition even on noisy, low-resolution images. Stereoscopic vision, Euclidean metrics, and graph-shape descriptors are shown to be powerful mechanisms for difficult recognition tasks.

• Global Self-Localization & Depth Uncertainty Learning: Simultaneous feature matching for global localization and 6D self-pose estimation is addressed by a novel geometric and probabilistic concept based on the intersection of Gaussian spheres. The path from intuition to the closed-form optimal solution determining the robot location is described, including a supervised learning method for depth uncertainty modeling based on extensive ground-truth training data from a motion capture system.

The methods and experiments are presented in self-contained chapters with comparisons to the state of the art. The algorithms were implemented and empirically evaluated on two humanoid robots, ARMAR III-A & B. The work's robustness, performance, and derived results received an award at the IEEE conference on humanoid robots, and the contributions have been utilized in numerous visual manipulation tasks demonstrated at distinguished venues such as ICRA, CeBIT, IAS, and Automatica.
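As a rough illustration of the sphere-intersection idea, consider range-based localization against known landmarks. The sketch below is a generic linear least-squares formulation, not the authors' closed-form Gaussian-sphere solution; the landmark coordinates, true position, and noise level are all hypothetical (MATLAB R2016b+ for implicit expansion):

    L = [0 0 0; 4 0 0; 0 4 0; 0 0 4];          % known landmark positions (one per row)
    p_true = [1; 2; 0.5];                      % unknown robot position (ground truth)
    r = sqrt(sum((L - p_true').^2, 2)) ...
        + 0.01*randn(4,1);                     % noisy range measurements
    % Subtracting the first sphere equation ||x - L1||^2 = r1^2 from the others
    % linearizes the system: 2*(Li - L1)*x = ||Li||^2 - ||L1||^2 - (ri^2 - r1^2)
    A = 2 * (L(2:end,:) - L(1,:));
    b = sum(L(2:end,:).^2, 2) - sum(L(1,:).^2) - (r(2:end).^2 - r(1)^2);
    p_est = A \ b                              % least-squares position estimate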
This book constitutes the refereed proceedings of the 14th Iberoamerican Congress on Pattern Recognition, CIARP 2009, held in Guadalajara, Mexico, in November 2009. The 64 revised full papers presented together with 44 posters were carefully reviewed and selected from 187 submissions. The papers are organized in topical sections on image coding, processing and analysis; segmentation, analysis of shape and texture; geometric image processing and analysis; analysis of signal, speech and language; document processing and recognition; feature extraction, clustering and classification; statistical pattern recognition; neural networks for pattern recognition; computer vision; video segmentation and tracking; robot vision; intelligent remote sensing, imagery research and discovery techniques; intelligent computing for remote sensing imagery; as well as intelligent fusion and classification techniques.
This book reports recent advances in the use of pattern recognition techniques for computer and robot vision. The sciences of pattern recognition and computational vision have been inextricably intertwined since their early days, some four decades ago, with the emergence of fast digital computing. All computer vision techniques can be regarded as a form of pattern recognition in the broadest sense of the term. Conversely, if one looks through the contents of typical international pattern recognition conference proceedings, it appears that the large majority (perhaps 70-80%) of all pattern recognition papers are concerned with the analysis of images. In particular, these sciences overlap in areas of low-level vision such as segmentation, edge detection, and other kinds of feature extraction and region identification, which are the focus of this book.
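As a concrete instance of low-level vision as pattern recognition, edge detection reduces to matching each pixel neighborhood against gradient templates. Below is a minimal Sobel sketch in base MATLAB (a generic illustration, not code from the book; the file name and threshold are hypothetical choices):

    im = double(imread('scene.png'));          % load image (hypothetical file)
    if ndims(im) == 3, im = mean(im, 3); end   % collapse RGB to grayscale
    Gx = conv2(im, [-1 0 1; -2 0 2; -1 0 1], 'same');   % horizontal gradient
    Gy = conv2(im, [-1 -2 -1; 0 0 0; 1 2 1], 'same');   % vertical gradient
    mag = sqrt(Gx.^2 + Gy.^2);                 % gradient magnitude
    edges = mag > 0.25 * max(mag(:));          % binary edge map at chosen threshold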