Vision and Navigation: The Carnegie Mellon Navlab
Charles E. Thorpe
Springer US, Apr 30, 1990 - Computers - 370 pages
Mobile robots are playing an increasingly important role in our world. Remotely operated vehicles are in everyday use for hazardous tasks such as charting and cleaning up hazardous waste spills, construction of tunnels and high-rise buildings, and underwater inspection of offshore oil drilling platforms. A whole host of further applications, however, awaits robots capable of autonomous operation with little or no intervention by human operators. Such robots of the future will explore distant planets, map the ocean floor, study the flow of pollutants and carbon dioxide through our atmosphere and oceans, work in underground mines, and perform other jobs we cannot even imagine; perhaps even drive our cars and walk our dogs.

The biggest technical obstacles to building mobile robots are vision and navigation: enabling a robot to see the world around it, to plan and follow a safe path through its environment, and to execute its tasks. At the Carnegie Mellon Robotics Institute, we are studying these problems both in isolation and by building complete systems. Since 1980, we have developed a series of small indoor mobile robots, some experimental and others for practical applications. Our outdoor autonomous mobile robot research started in 1984, navigating through the campus sidewalk network using a small outdoor vehicle called the Terregator. In 1985, with the advent of DARPA's Autonomous Land Vehicle Project, we constructed a computer-controlled van with onboard sensors and researchers. In the fall of 1987, we began development of a six-legged Planetary Rover.
Color Vision for Road Following
Explicit Models for Robot Road Following