Wöhler | 3D Computer Vision | E-Book | www.sack.de

E-book, English, 382 pages

Series: X.media.publishing

Wöhler 3D Computer Vision

Efficient Methods and Applications
2nd edition, 2013
ISBN: 978-1-4471-4150-1
Publisher: Springer
Format: PDF
Copy protection: 1 - PDF watermark




This indispensable text introduces the foundations of three-dimensional computer vision and describes recent contributions to the field. Fully revised and updated, this much-anticipated new edition reviews a range of triangulation-based methods, including linear and bundle adjustment based approaches to scene reconstruction and camera calibration, stereo vision, point cloud segmentation, and pose estimation of rigid, articulated, and flexible objects. Also covered are intensity-based techniques that evaluate the pixel grey values in the image to infer three-dimensional scene structure, and point spread function based approaches that exploit the effect of the optical system. The text shows how methods which integrate these concepts are able to increase reconstruction accuracy and robustness, describing applications in industrial quality inspection and metrology, human-robot interaction, and remote sensing.
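The triangulation-based methods mentioned above all build on the pinhole camera model covered in Chapter 1. As a minimal illustrative sketch (not the book's code; the focal length, principal point, and scene points below are invented for illustration), perspective projection with an intrinsic matrix K can be written as:

```python
import numpy as np

def project(K, X):
    """Project 3D points X (N, 3), given in the camera frame,
    to pixel coordinates (N, 2) under the pinhole model."""
    x = (K @ X.T).T               # homogeneous image coordinates
    return x[:, :2] / x[:, 2:3]   # perspective division by depth

# Illustrative intrinsic parameters: focal length f (pixels),
# principal point (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

X = np.array([[0.0, 0.0, 2.0],    # point on the optical axis
              [0.5, 0.25, 2.0]])  # off-axis point at the same depth
print(project(K, X))  # projects to (320, 240) and (520, 340)
```

Stereo vision and bundle adjustment then recover depth by relating such projections across two or more calibrated views.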

Christian Wöhler is Professor of Image Analysis at the Department of Electrical Engineering and Information Technology of TU Dortmund, Germany. His scientific interests are in the domains of computer vision, photogrammetry, remote sensing, and pattern classification, with applications in various fields including machine vision, robotics, advanced driver assistance systems, and planetary science.




Further Information & Material


1;Preface;6
2;Acknowledgements;9
3;Contents;11
4;Part I: Methods of 3D Computer Vision;16
4.1;Chapter 1: Triangulation-Based Approaches to Three-Dimensional Scene Reconstruction;17
4.1.1;1.1 The Pinhole Model;17
4.1.2;1.2 Geometric Aspects of Stereo Image Analysis;20
4.1.2.1;1.2.1 Euclidean Formulation of Stereo Image Analysis;20
4.1.2.2;1.2.2 Stereo Image Analysis in Terms of Projective Geometry;22
4.1.2.2.1;1.2.2.1 Definition of Coordinates and Camera Properties;22
4.1.2.2.2;1.2.2.2 The Essential Matrix;23
4.1.2.2.3;1.2.2.3 The Fundamental Matrix;24
4.1.2.2.4;1.2.2.4 Projective Reconstruction of the Scene;25
4.1.3;1.3 The Bundle Adjustment Approach;28
4.1.4;1.4 Geometric Calibration of Single and Multiple Cameras;29
4.1.4.1;1.4.1 Methods for Intrinsic Camera Calibration;29
4.1.4.2;1.4.2 The Direct Linear Transform (DLT) Method;30
4.1.4.3;1.4.3 The Camera Calibration Method by Tsai (1987);33
4.1.4.4;1.4.4 The Camera Calibration Method by Zhang (1999a);34
4.1.4.5;1.4.5 The Camera Calibration Toolbox by Bouguet (2007);37
4.1.4.6;1.4.6 Self-calibration of Camera Systems from Multiple Views of a Static Scene;37
4.1.4.6.1;1.4.6.1 Projective Reconstruction: Determination of the Fundamental Matrix;37
4.1.4.6.2;1.4.6.2 Metric Self-calibration;40
4.1.4.6.2.1;The Basic Equations for Self-calibration and Methods for Their Solution;41
4.1.4.6.3;1.4.6.3 Self-calibration Based on Vanishing Points;43
4.1.4.7;1.4.7 Semi-automatic Calibration of Multiocular Camera Systems;44
4.1.4.7.1;1.4.7.1 The Calibration Rig;45
4.1.4.7.2;1.4.7.2 Existing Algorithms for Extracting the Calibration Rig;46
4.1.4.7.3;1.4.7.3 A Graph-Based Rig Extraction Algorithm;47
4.1.4.7.3.1;Outline of the Rig Finding Algorithm;47
4.1.4.7.3.2;Definition of the Graph;49
4.1.4.7.3.3;Extraction of Corner Candidates;49
4.1.4.7.3.4;Candidate Filter and Graph Construction;50
4.1.4.7.3.5;Non-bidirectional Edge Elimination;50
4.1.4.7.3.6;Edge Circle Filter;51
4.1.4.7.3.7;Edge Length Filter;51
4.1.4.7.3.8;Corner Enumeration;52
4.1.4.7.3.9;Notch Direction Detector;52
4.1.4.7.3.10;Rig Direction;52
4.1.4.7.4;1.4.7.4 Discussion;52
4.1.4.8;1.4.8 Accurate Localisation of Chequerboard Corners;53
4.1.4.8.1;1.4.8.1 Different Types of Calibration Targets and Their Localisation in Images;54
4.1.4.8.2;1.4.8.2 A Model-Based Method for Chequerboard Corner Localisation;57
4.1.4.8.3;1.4.8.3 Experimental Evaluation;60
4.1.4.8.4;1.4.8.4 Discussion;65
4.1.5;1.5 Stereo Image Analysis in Standard Geometry;66
4.1.5.1;1.5.1 Image Rectification According to Standard Geometry;66
4.1.5.2;1.5.2 The Determination of Corresponding Points;69
4.1.5.2.1;1.5.2.1 Correlation-Based Blockmatching Stereo Vision Algorithms;70
4.1.5.2.2;1.5.2.2 Feature-Based Stereo Vision Algorithms;71
4.1.5.2.2.1;General Overview;71
4.1.5.2.2.2;A Contour-Based Stereo Vision Algorithm;73
4.1.5.2.3;1.5.2.3 Dense Stereo Vision Algorithms;79
4.1.5.2.4;1.5.2.4 Model-Based Stereo Vision Algorithms;80
4.1.5.2.5;1.5.2.5 Spacetime Stereo Vision and Scene Flow Algorithms;81
4.1.5.2.5.1;General Overview;81
4.1.5.2.5.2;Local Intensity Modelling;83
4.1.6;1.6 Resolving Stereo Matching Errors due to Repetitive Structures Using Model Information;88
4.1.6.1;1.6.1 Plane Model;90
4.1.6.1.1;1.6.1.1 Detection and Characterisation of Repetitive Structures;90
4.1.6.1.2;1.6.1.2 Determination of Model Parameters;91
4.1.6.2;1.6.2 Multiple-plane Hand-Arm Model;93
4.1.6.3;1.6.3 Decision Feedback;93
4.1.6.4;1.6.4 Experimental Evaluation;95
4.1.6.5;1.6.5 Discussion;101
4.2;Chapter 2: Three-Dimensional Pose Estimation and Segmentation Methods;102
4.2.1;2.1 Pose Estimation of Rigid Objects;102
4.2.1.1;2.1.1 General Overview;103
4.2.1.1.1;2.1.1.1 Pose Estimation Methods Based on Explicit Feature Matching;103
4.2.1.1.2;2.1.1.2 Appearance-Based Pose Estimation Methods;104
4.2.1.1.2.1;Methods Based on Monocular Image Data;105
4.2.1.1.2.2;Methods Based on Multiocular Image Data;106
4.2.1.2;2.1.2 Template-Based Pose Estimation;107
4.2.2;2.2 Pose Estimation of Non-rigid and Articulated Objects;110
4.2.2.1;2.2.1 General Overview;110
4.2.2.1.1;2.2.1.1 Non-rigid Objects;110
4.2.2.1.2;2.2.1.2 Articulated Objects;112
4.2.2.2;2.2.2 Three-Dimensional Active Contours;117
4.2.2.2.1;2.2.2.1 Active Contours;117
4.2.2.2.2;2.2.2.2 Three-Dimensional Multiple-View Active Contours;118
4.2.2.2.3;2.2.2.3 Experimental Results on Synthetic Image Data;120
4.2.2.3;2.2.3 Three-Dimensional Spatio-Temporal Curve Fitting;122
4.2.2.3.1;2.2.3.1 Modelling the Hand-Forearm Limb;122
4.2.2.3.2;2.2.3.2 Principles and Extensions of the CCD Algorithm;124
4.2.2.3.2.1;Step 1: Learning Local Probability Distributions;125
4.2.2.3.2.2;Step 2: Refinement of the Estimate (MAP Estimation);127
4.2.2.3.3;2.2.3.3 The Multiocular Extension of the CCD Algorithm;129
4.2.2.3.3.1;Step 1: Extraction and Projection of the Three-Dimensional Model;129
4.2.2.3.3.2;Step 2: Learning Local Probability Distributions from all Nc Images;129
4.2.2.3.3.3;Step 3: Refinement of the Estimate (MAP Estimation);129
4.2.2.3.4;2.2.3.4 The Shape Flow Algorithm;130
4.2.2.3.4.1;Step 1: Projection of the Spatio-Temporal Three-Dimensional Contour Model;131
4.2.2.3.4.2;Step 2: Learn Local Probability Distributions from all Nc Images;132
4.2.2.3.4.3;Step 3: Refine the Estimate (MAP Estimation);132
4.2.2.3.5;2.2.3.5 Verification and Recovery of the Pose Estimation Results;133
4.2.2.3.5.1;Pose Verification;133
4.2.2.3.5.2;Pose Recovery on Loss of Object;134
4.2.3;2.3 Point Cloud Segmentation Approaches;135
4.2.3.1;2.3.1 General Overview;136
4.2.3.1.1;2.3.1.1 The k-Means Clustering Algorithm;136
4.2.3.1.2;2.3.1.2 Agglomerative Clustering;136
4.2.3.1.3;2.3.1.3 Mean-Shift Clustering;137
4.2.3.1.4;2.3.1.4 Graph Cut and Spectral Clustering;137
4.2.3.1.5;2.3.1.5 The ICP Algorithm;138
4.2.3.1.6;2.3.1.6 Photogrammetric Approaches;139
4.2.3.2;2.3.2 Mean-Shift Tracking of Human Body Parts;139
4.2.3.2.1;2.3.2.1 Clustering and Object Detection;139
4.2.3.2.2;2.3.2.2 Target Model;140
4.2.3.2.3;2.3.2.3 Image-Based Mean-Shift;141
4.2.3.2.4;2.3.2.4 Point Cloud-Based Mean-Shift;141
4.2.3.3;2.3.3 Segmentation and Spatio-Temporal Pose Estimation;142
4.2.3.3.1;2.3.3.1 Scene Clustering and Model-Based Pose Estimation;143
4.2.3.3.2;2.3.3.2 Estimation of the Temporal Pose Derivatives;144
4.2.3.4;2.3.4 Object Detection and Tracking in Point Clouds;147
4.2.3.4.1;2.3.4.1 Motion-Attributed Point Cloud;147
4.2.3.4.2;2.3.4.2 Over-Segmentation for Motion-Attributed Clusters;148
4.2.3.4.3;2.3.4.3 Generation and Tracking of Object Hypotheses;149
4.3;Chapter 3: Intensity-Based and Polarisation-Based Approaches to Three-Dimensional Scene Reconstruction;151
4.3.1;3.1 Shape from Shadow;151
4.3.1.1;3.1.1 Extraction of Shadows from Image Pairs;152
4.3.1.2;3.1.2 Shadow-Based Surface Reconstruction from Dense Sets of Images;154
4.3.2;3.2 Shape from Shading;155
4.3.2.1;3.2.1 The Bidirectional Reflectance Distribution Function (BRDF);156
4.3.2.2;3.2.2 Determination of Surface Gradients;160
4.3.2.2.1;3.2.2.1 Photoclinometry;160
4.3.2.2.2;3.2.2.2 Single-Image Approaches with Regularisation Constraints;162
4.3.2.3;3.2.3 Reconstruction of Height from Gradients;165
4.3.2.4;3.2.4 Surface Reconstruction Based on Partial Differential Equations;167
4.3.3;3.3 Photometric Stereo;170
4.3.3.1;3.3.1 Photometric Stereo: Principle and Extensions;170
4.3.3.2;3.3.2 Photometric Stereo Approaches Based on Ratio Images;172
4.3.3.2.1;3.3.2.1 Ratio-Based Photoclinometry of Surfaces with Non-uniform Albedo;173
4.3.3.2.2;3.3.2.2 Ratio-Based Variational Photometric Stereo Approach;174
4.3.4;3.4 Shape from Polarisation;175
4.3.4.1;3.4.1 Surface Orientation from Dielectric Polarisation Models;175
4.3.4.2;3.4.2 Determination of Polarimetric Properties of Rough Metallic Surfaces for Three-Dimensional Reconstruction Purposes;178
4.4;Chapter 4: Point Spread Function-Based Approaches to Three-Dimensional Scene Reconstruction;183
4.4.1;4.1 The Point Spread Function;183
4.4.2;4.2 Reconstruction of Depth from Defocus;184
4.4.2.1;4.2.1 Basic Principles;184
4.4.2.2;4.2.2 Determination of Small Depth Differences;188
4.4.2.3;4.2.3 Determination of Absolute Depth Across Broad Ranges;191
4.4.2.3.1;4.2.3.1 Definition of the Depth-Defocus Function;192
4.4.2.3.2;4.2.3.2 Calibration of the Depth-Defocus Function;192
4.4.2.3.2.1;Stationary Camera;192
4.4.2.3.2.2;Moving Camera;193
4.4.2.3.3;4.2.3.3 Determination of the Depth Map;194
4.4.2.3.3.1;Stationary Camera;194
4.4.2.3.3.2;Moving Camera;195
4.4.2.3.4;4.2.3.4 Estimation of the Useful Depth Range;197
4.4.3;4.3 Reconstruction of Depth from Focus;198
4.5;Chapter 5: Integrated Frameworks for Three-Dimensional Scene Reconstruction;200
4.5.1;5.1 Monocular Three-Dimensional Scene Reconstruction at Absolute Scale;201
4.5.1.1;5.1.1 Combining Motion, Structure, and Defocus;202
4.5.1.2;5.1.2 Online Version of the Algorithm;203
4.5.1.3;5.1.3 Experimental Evaluation Based on Tabletop Scenes;203
4.5.1.3.1;5.1.3.1 Evaluation of the Offline Algorithm;204
4.5.1.3.1.1;Cuboid Sequence;207
4.5.1.3.1.2;Bottle Sequence;207
4.5.1.3.1.3;Lava Stone Sequence;208
4.5.1.3.2;5.1.3.2 Evaluation of the Online Algorithm;209
4.5.1.3.3;5.1.3.3 Random Errors vs. Systematic Deviations;210
4.5.1.4;5.1.4 Discussion;212
4.5.2;5.2 Self-consistent Combination of Shadow and Shading Features;213
4.5.2.1;5.2.1 Selection of a Shape from Shading Solution Based on Shadow Analysis;214
4.5.2.2;5.2.2 Accounting for the Detailed Shadow Structure in the Shape from Shading Formalism;217
4.5.2.3;5.2.3 Initialisation of the Shape from Shading Algorithm Based on Shadow Analysis;218
4.5.2.4;5.2.4 Experimental Evaluation Based on Synthetic Data;220
4.5.2.5;5.2.5 Discussion;221
4.5.3;5.3 Shape from Photopolarimetric Reflectance and Depth;222
4.5.3.1;5.3.1 Shape from Photopolarimetric Reflectance;224
4.5.3.1.1;5.3.1.1 Global Optimisation Scheme;225
4.5.3.1.2;5.3.1.2 Local Optimisation Scheme;227
4.5.3.2;5.3.2 Estimation of the Surface Albedo;228
4.5.3.3;5.3.3 Integration of Depth Information;229
4.5.3.3.1;5.3.3.1 Fusion of SfPR with Depth from Defocus;230
4.5.3.3.2;5.3.3.2 Integration of Accurate but Sparse Depth Information;231
4.5.3.4;5.3.4 Experimental Evaluation Based on Synthetic Data;233
4.5.3.5;5.3.5 Discussion;238
4.5.4;5.4 Stereo Image Analysis of Non-Lambertian Surfaces;239
4.5.4.1;5.4.1 Iterative Scheme for Disparity Estimation;242
4.5.4.2;5.4.2 Qualitative Behaviour of the Specular Stereo Algorithm;245
4.5.5;5.5 Combination of Shape from Shading and Active Range Scanning Data;246
4.5.6;5.6 Three-Dimensional Pose Estimation Based on Combinations of Monocular Cues;249
4.5.6.1;5.6.1 Photometric and Polarimetric Information;250
4.5.6.2;5.6.2 Edge Information;251
4.5.6.3;5.6.3 Defocus Information;252
4.5.6.4;5.6.4 Total Error Optimisation;252
4.5.6.5;5.6.5 Experimental Evaluation Based on a Simple Real-World Object;253
4.5.6.6;5.6.6 Discussion;255
5;Part II: Application Scenarios;256
5.1;Chapter 6: Applications to Industrial Quality Inspection;257
5.1.1;6.1 Inspection of Rigid Parts;258
5.1.1.1;6.1.1 Object Detection by Pose Estimation;258
5.1.1.1.1;Comparison with Other Pose Estimation Methods;260
5.1.1.2;6.1.2 Pose Refinement;262
5.1.1.2.1;Comparison with Other Pose Refinement Methods;266
5.1.2;6.2 Inspection of Non-rigid Parts;267
5.1.3;6.3 Inspection of Metallic Surfaces;270
5.1.3.1;6.3.1 Inspection Based on Integration of Shadow and Shading Features;271
5.1.3.2;6.3.2 Inspection of Surfaces with Non-uniform Albedo;271
5.1.3.3;6.3.3 Inspection Based on SfPR and SfPRD;273
5.1.3.3.1;6.3.3.1 Results Obtained with the SfPR Technique;274
5.1.3.3.2;6.3.3.2 Results Obtained with the SfPRD Technique;277
5.1.3.4;6.3.4 Inspection Based on Specular Stereo;280
5.1.3.4.1;6.3.4.1 Qualitative Discussion of the Three-Dimensional Reconstruction Results;280
5.1.3.4.2;6.3.4.2 Comparison to Ground Truth Data;282
5.1.3.4.3;6.3.4.3 Self-consistency Measures for Three-Dimensional Reconstruction Accuracy;283
5.1.3.4.4;6.3.4.4 Consequences of Poorly Known Reflectance Parameters;285
5.1.3.5;6.3.5 Inspection Based on Integration of Photometric Image Information and Active Range Scanning Data;287
5.1.3.6;6.3.6 Discussion;289
5.2;Chapter 7: Applications to Safe Human-Robot Interaction;291
5.2.1;7.1 Vision-Based Human-Robot Interaction;291
5.2.1.1;7.1.1 Vision-Based Safe Human-Robot Interaction;292
5.2.1.2;7.1.2 Pose Estimation of Articulated Objects in the Context of Human-Robot Interaction;295
5.2.1.2.1;7.1.2.1 The Role of Gestures in Human-Robot Interaction;296
5.2.1.2.2;7.1.2.2 Recognition of Gestures;296
5.2.1.2.3;7.1.2.3 Including Context Information: Pointing Gestures and Interactions with Objects;297
5.2.1.2.4;7.1.2.4 Discussion in the Context of Industrial Safety Systems;298
5.2.2;7.2 Object Detection and Tracking in Three-Dimensional Point Clouds;299
5.2.3;7.3 Detection and Spatio-Temporal Pose Estimation of Human Body Parts;301
5.2.4;7.4 Three-Dimensional Tracking of Human Body Parts;304
5.2.4.1;7.4.1 Image Acquisition;304
5.2.4.2;7.4.2 Data Set Used for Evaluation;305
5.2.4.3;7.4.3 Fusion of the ICP and MOCCD Poses;307
5.2.4.4;7.4.4 System Configurations Regarded for Evaluation;309
5.2.4.4.1;Configuration 1: Tracking Based on the MOCCD;309
5.2.4.4.2;Configuration 2: Tracking Based on the Shape Flow Method;309
5.2.4.4.3;Configuration 3: ICP-Based Tracking;309
5.2.4.4.4;Configuration 4: Fusion of ICP and MOCCD;310
5.2.4.4.5;Configuration 5: Fusion of ICP, MOCCD, and SF;310
5.2.4.5;7.4.5 Evaluation Results;310
5.2.4.6;7.4.6 Comparison with Other Methods;314
5.2.4.7;7.4.7 Evaluation of the Three-Dimensional Mean-Shift Tracking Stage;316
5.2.4.8;7.4.8 Discussion;318
5.2.5;7.5 Recognition of Working Actions in an Industrial Environment;318
5.3;Chapter 8: Applications to Lunar Remote Sensing;321
5.3.1;8.1 Three-Dimensional Surface Reconstruction Methods for Planetary Remote Sensing;321
5.3.1.1;8.1.1 Topographic Mapping of the Terrestrial Planets;321
5.3.1.1.1;8.1.1.1 Active Methods;321
5.3.1.1.2;8.1.1.2 Shadow Length Measurements;322
5.3.1.1.3;8.1.1.3 Stereo and Multi-image Photogrammetry;323
5.3.1.1.4;8.1.1.4 Photoclinometry and Shape from Shading;324
5.3.1.2;8.1.2 Re ectance Behaviour of Planetary Regolith Surfaces;325
5.3.2;8.2 Three-Dimensional Reconstruction of Lunar Impact Craters;328
5.3.2.1;8.2.1 Shadow-Based Measurement of Crater Depth;328
5.3.2.2;8.2.2 Three-Dimensional Reconstruction of Lunar Impact Craters at High Resolution;329
5.3.2.3;8.2.3 Discussion;339
5.3.3;8.3 Three-Dimensional Reconstruction of Lunar Wrinkle Ridges and Faults;340
5.3.4;8.4 Three-Dimensional Reconstruction of Lunar Domes;343
5.3.4.1;8.4.1 General Overview of Lunar Domes;343
5.3.4.2;8.4.2 Observations of Lunar Domes;344
5.3.4.2.1;8.4.2.1 Spacecraft Observations of Lunar Mare Domes;344
5.3.4.2.2;8.4.2.2 Telescopic CCD Imagery;348
5.3.4.3;8.4.3 Image-Based Determination of Morphometric Data;349
5.3.4.3.1;8.4.3.1 Construction of DEMs;349
5.3.4.3.2;8.4.3.2 Error Estimation;358
5.3.4.3.3;8.4.3.3 Comparison to Other Height Measurements;360
5.3.4.4;8.4.4 Discussion;363
5.4;Chapter 9: Conclusion;366
6;References;372


