Three-Dimensional Machine Vision
A robot must perceive the three-dimensional world if it is to be effective there. Yet recovering 3-D information from projected images is difficult, and it remains the subject of basic research. Alternatively, one can use sensors that provide three-dimensional range information directly. The technique of projecting light stripes started to be used in industrial object-recognition systems as early as the 1970s, and time-of-flight laser-scanning range finders became available for outdoor mobile robot navigation in the mid-eighties. Once range data are obtained, a vision system must still describe the scene in terms of 3-D primitives such as edges, surfaces, and volumes, and recognize objects of interest. Today, the art of sensing, extracting features, and recognizing objects by means of three-dimensional range data is one of the most exciting research areas in computer vision.

Three-Dimensional Machine Vision is a collection of papers dealing with three-dimensional range data. The authors are pioneering researchers: some are founders of the field, and others are bringing new excitement to it. I have tried to select milestone papers, and my goal has been to make this book a reference work for researchers in three-dimensional vision. The book is organized into four parts: 3-D Sensors, 3-D Feature Extraction, Object Recognition Algorithms, and Systems and Applications.

Part I includes four papers which describe the development of unique, capable 3-D range sensors, as well as discussions of optical, geometrical, electronic, and computational issues. Mundy and Porter describe a sensor system based on structured illumination for inspecting metallic castings. To achieve high-speed data acquisition, it uses multiple light stripes with wavelength multiplexing. Case, Jalkio, and Kim also present a multi-stripe system and discuss various design issues in range sensing by triangulation.
The numerical stereo camera developed by Altschuler, Bae, Altschuler, Dijak, Tamburino, and Woolford projects space-coded grid patterns which are generated by an electro-optical programmable spatial light modulator. Kanade and Fuhrman present a proximity sensor using multiple LEDs which are conically arranged. It can measure both distance and orientation of an object's surface.
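The triangulation principle behind the light-stripe sensors described above can be illustrated with a minimal sketch. This is not taken from any of the book's papers; the geometry (a pinhole camera at the origin, a stripe projector offset by a known baseline and tilted by a known angle) and all names are illustrative assumptions.

```python
import math

def stripe_depth(x_img, f, baseline, alpha):
    """Depth Z of a lit surface point by light-stripe triangulation.

    Illustrative geometry (an assumption, not the book's): pinhole
    camera at the origin with optical axis along Z and focal length f;
    the projector sits at lateral offset `baseline` and emits a beam
    tilted by `alpha` radians toward the camera axis.  The lit point
    is observed at image coordinate x_img (same units as f).

    Beam:       X = baseline - Z * tan(alpha)
    Projection: x_img = f * X / Z
    Solving both for Z gives the returned expression.
    """
    return baseline * f / (x_img + f * math.tan(alpha))

# Example: a 10 mm lens, a 0.1 m baseline, a beam parallel to the
# optical axis (alpha = 0), and a stripe seen at x_img = 1 mm put the
# surface 1 m away.
z = stripe_depth(0.001, 0.01, 0.1, 0.0)  # -> 1.0 (metres)
```

Multi-stripe systems such as those of Mundy and Porter or Case, Jalkio, and Kim apply this relation once per stripe, after first identifying which stripe produced each image point (by wavelength multiplexing or coding).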