Escolano Ruiz / Suau Pérez / Bonev | Information Theory in Computer Vision and Pattern Recognition | E-Book | www.sack.de
E-Book, English, 364 pages

Escolano Ruiz / Suau Pérez / Bonev Information Theory in Computer Vision and Pattern Recognition


1st edition, 2009
ISBN: 978-1-84882-297-9
Publisher: Springer
Format: PDF
Copy protection: 1 - PDF watermark

Information theory has proved effective for solving many computer vision and pattern recognition (CVPR) problems, such as image matching, clustering and segmentation, saliency detection, feature selection and optimal classifier design, among others. Researchers now routinely bring elements of information theory to the CVPR arena: measures (entropy, mutual information...), principles (maximum entropy, minimax entropy...) and theories (rate distortion theory, the method of types...). This book introduces these elements with an approach of incremental complexity, formulating the corresponding CVPR problems and presenting the most representative algorithms along the way. Connections between information-theoretic principles applied to different problems are highlighted, building a comprehensive research roadmap. The result is a novel tool for both CVPR and machine learning researchers, one that contributes to the cross-fertilization of the two areas.
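
To make the flavor of these measures concrete, the short Python sketch below (an illustration, not code from the book) estimates the Shannon entropy of an image histogram and the mutual information between two images, the quantity behind the alignment methods covered in Chapter 4. The synthetic images, the bin count and the function names are hypothetical placeholders.

    # A rough sketch, not code from the book: Shannon entropy of an image
    # histogram and mutual information between two images, two of the
    # measures named in the description above.
    import numpy as np

    def entropy_bits(counts):
        """Shannon entropy (in bits) of a histogram given as raw counts."""
        p = counts / counts.sum()
        p = p[p > 0]                     # empty bins contribute 0 * log 0 = 0
        return -np.sum(p * np.log2(p))

    def mutual_information(img_a, img_b, bins=32):
        """I(A;B) = H(A) + H(B) - H(A,B), estimated from a joint histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        return (entropy_bits(joint.sum(axis=1))      # H(A)
                + entropy_bits(joint.sum(axis=0))    # H(B)
                - entropy_bits(joint.ravel()))       # H(A,B)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        a = rng.integers(0, 256, size=(64, 64)).astype(float)  # synthetic "image"
        b = a + rng.normal(scale=10.0, size=a.shape)            # noisy copy of a
        print("H(A)   =", entropy_bits(np.histogram(a, bins=32)[0]))
        print("I(A;B) =", mutual_information(a, b))

When plain histograms become too coarse, the same quantities are instead estimated with Parzen windows or entropic spanning graphs, as covered in Chapters 4 and 5 of the book.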


Further information & material


1;Foreword;6
2;Preface;8
3;Contents;11
4;Chapter 1 Introduction;16
4.1;1.1 Measures, Principles, Theories, and More;16
4.2;1.2 Detailed Organization of the Book;18
4.3;1.3 The IT in CVPR Roadmap;25
5;Chapter 2 Interest Points, Edges, and Contour Grouping;26
5.1;2.1 Introduction;26
5.2;2.2 Entropy and Interest Points;26
5.2.1;2.2.1 Kadir and Brady Scale Saliency Detector;27
5.2.2;2.2.2 Point Filtering by Entropy Analysis Through Scale Space;29
5.2.3;2.2.3 Chernoff Information and Optimal Filtering;31
5.2.4;2.2.4 Bayesian Filtering of the Scale Saliency Feature Extractor: The Algorithm;33
5.3;2.3 Information Theory as Evaluation Tool: The Statistical Edge Detection Case;35
5.3.1;2.3.1 Statistical Edge Detection;37
5.3.2;2.3.2 Edge Localization;38
5.4;2.4 Finding Contours Among Clutter;41
5.4.1;2.4.1 Problem Statement;42
5.4.2;2.4.2 A* Road Tracking;44
5.4.3;2.4.3 A* Convergence Proof;46
5.5;2.5 Junction Detection and Grouping;48
5.5.1;2.5.1 Junction Detection;48
5.5.2;2.5.2 Connecting and Filtering Junctions;50
5.6;Problems;53
5.6.1;2.1 Understanding entropy;53
5.6.2;2.2 Symmetry property;54
5.6.3;2.3 Entropy limits;54
5.6.4;2.4 Color saliency;54
5.6.5;2.5 Saliency numeric example;54
5.6.6;2.6 Conditional entropy as filter evaluator;54
5.6.7;2.7 Kullback-Leibler divergence as filter evaluator;55
5.6.8;2.8 Conditional entropy and non-informative classifiers;55
5.6.9;2.9 Sanov's theorem;55
5.6.10;2.10 MAP and road tracking;55
5.7;2.6 Key References;56
6;Chapter 3 Contour and Region-Based Image Segmentation;57
6.1;3.1 Introduction;57
6.2;3.2 Discriminative Segmentation with Jensen-Shannon Divergence;58
6.2.1;3.2.1 The Active Polygons Functional;58
6.2.2;3.2.2 Jensen-Shannon Divergence and the Speed Function;60
6.3;3.3 MDL in Contour-Based Segmentation;67
6.3.1;3.3.1 B-Spline Parameterization of Contours;67
6.3.2;3.3.2 MDL for B-Spline Parameterization;72
6.3.3;3.3.3 MDL Contour-Based Segmentation;74
6.3.3.1;Contour-fitting Algorithm;76
6.3.3.2;Intensity-inference Algorithm;76
6.3.3.3;MDL Contour fitting and intensity inference;77
6.4;3.4 Model Order Selection in Region-Based Segmentation;77
6.4.1;3.4.1 Jump-Diffusion for Optimal Segmentation;77
6.4.1.1;Bayesian formulation of the problem;79
6.4.1.2;Reversible jumps;81
6.4.1.3;Stochastic diffusions;83
6.4.2;3.4.2 Speeding-up the Jump-Diffusion Process;85
6.4.2.1;The speed bottlenecks;86
6.4.2.2;Data-driven techniques;87
6.4.3;3.4.3 K-adventurers Algorithm;88
6.5;3.5 Model-Based Segmentation Exploiting the Maximum Entropy Principle;93
6.5.1;3.5.1 Maximum Entropy and Markov Random Fields;93
6.5.2;3.5.2 Efficient Learning with Belief Propagation;97
6.5.3;3.6 Integrating Segmentation, Detection and Recognition;100
6.5.3.1;3.6.1 Image Parsing;100
6.5.3.2;3.6.2 The Data-Driven Generative Model;105
6.5.3.3;3.6.3 The Power of Discriminative Processes;110
6.5.3.4;3.6.4 The Usefulness of Combining Generative and Discriminative;113
6.6;Problems;114
6.6.1;3.1 Implicit MDL and region competition;114
6.6.2;3.2 Green's theorem and flow;115
6.6.3;3.3 The active square ;115
6.6.4;3.4 Active polygons and maximum entropy;115
6.6.5;3.5 Jensen-Shannon divergence;115
6.6.6;3.6 Jensen-Shannon divergence and active polygons;116
6.6.7;3.7 Jensen-Shannon divergence and active contours;116
6.6.8;3.8 Alternative contour representations and MDL;116
6.6.9;3.9 Jump-diffusion: energy function;116
6.6.10;3.10 Jump-diffusion: models and stochastic diffusions;116
6.6.11;3.11 Jump-diffusion: distance between solutions in K-adventurers;116
6.6.12;3.12 Using Chernoff information in discriminative learning;116
6.6.13;3.13 Proposal probabilities for splitting and merging regions;117
6.6.14;3.14 Markov model and skin detection;117
6.6.15;3.15 Maximum Entropy for detection;117
6.7;3.7 Key References;117
7;Chapter 4 Registration, Matching, and Recognition;119
7.1;4.1 Introduction;119
7.2;4.2 Image Alignment and Mutual Information;120
7.2.1;4.2.1 Alignment and Image Statistics;120
7.2.2;4.2.2 Entropy Estimation with Parzen's Windows;122
7.2.3;4.2.3 The EMMA Algorithm;124
7.2.4;4.2.4 Solving the Histogram-Binning Problem;125
7.3;4.3 Alternative Metrics for Image Alignment;133
7.3.1;4.3.1 Normalizing Mutual Information;133
7.3.2;4.3.2 Conditional Entropies;134
7.3.3;4.3.3 Extension to the Multimodal Case;135
7.3.4;4.3.4 Affine Alignment of Multiple Images;136
7.3.5;4.3.5 The Rényi Entropy;138
7.3.6;4.3.6 Rényi's Entropy and Entropic Spanning Graphs;140
7.3.7;4.3.7 The Jensen–Rényi Divergence and Its Applications;142
7.3.8;4.3.8 Other Measures Related to Rényi Entropy;143
7.3.9;4.3.9 Experimental Results;146
7.4;4.4 Deformable Matching with Jensen Divergence and Fisher Information;146
7.4.1;4.4.1 The Distributional Shape Model;146
7.4.2;4.4.2 Multiple Registration and Jensen-Shannon Divergence;150
7.4.3;4.4.3 Information Geometry and Fisher-Rao Information;154
7.4.4;4.4.4 Dynamics of the Fisher Information Metric;157
7.5;4.5 Structural Learning with MDL;160
7.5.1;4.5.1 The Usefulness of Shock Trees;160
7.5.2;4.5.2 A Generative Tree Model Based on Mixtures;161
7.5.3;4.5.3 Learning the Mixture;164
7.5.4;4.5.4 Tree Edit-Distance and MDL;165
7.6;Problems;167
7.6.1;4.1 Distributions in multidimensional spaces;167
7.6.2;4.2 Parzen window;167
7.6.3;4.3 Image alignment;167
7.6.4;4.4 Joint histograms;167
7.6.5;4.5 The histogram-binning problem;168
7.6.6;4.6 Alternative metrics and pseudometrics;168
7.6.7;4.7 Entropic graphs;168
7.6.8;4.8 Jensen–Rényi divergence;168
7.6.9;4.9 Unbiased JS divergence for multiple registration;168
7.6.10;4.10 Fisher matrix of multinomial distributions;168
7.6.11;4.11 The α-order metric tensor of Gaussians and others;169
7.6.12;4.12 Sampling from a tree model;169
7.6.13;4.13 Computing the MDL of a tree merging process;169
7.6.14;4.14 Computing the MDL edit distance;169
7.6.15;4.15 Extending MDL for trees with weights;169
7.7;4.6 Key References;170
8;Chapter 5 Image and Pattern Clustering;171
8.1.1;5.1 Introduction;171
8.1.2;5.2 Gaussian Mixtures and Model Selection;171
8.1.2.1;5.2.1 Gaussian Mixtures Methods;171
8.1.2.2;5.2.2 Defining Gaussian Mixtures;172
8.1.2.3;5.2.3 EM Algorithm and Its Drawbacks;173
8.1.2.3.1;E Step;174
8.1.2.3.2;M Step;174
8.1.2.4;5.2.4 Model Order Selection;175
8.1.3;5.3 EBEM Algorithm: Exploiting Entropic Graphs;176
8.1.3.1;5.3.1 The Gaussianity Criterion and Entropy Estimation;176
8.1.3.2;5.3.2 Shannon Entropy from Rényi Entropy Estimation;177
8.1.3.3;5.3.3 Minimum Description Length for EBEM;180
8.1.3.4;5.3.4 Kernel-Splitting Equations;181
8.1.3.5;5.3.5 Experiments;182
8.1.4;5.4 Information Bottleneck and Rate Distortion Theory;184
8.1.4.1;5.4.1 Rate Distortion Theory Based Clustering;184
8.1.4.2;5.4.2 The Information Bottleneck Principle;187
8.1.5;5.5 Agglomerative IB Clustering;191
8.1.5.1;5.5.1 Jensen-Shannon Divergence and Bayesian Classification Error;191
8.1.5.2;5.5.2 The AIB Algorithm;192
8.1.5.3;5.5.3 Unsupervised Clustering of Images;195
8.1.6;5.6 Robust Information Clustering;198
8.1.7;5.7 IT-Based Mean Shift;203
8.1.7.1;5.7.1 The Mean Shift Algorithm;203
8.1.7.2;5.7.2 Mean Shift Stop Criterion and Examples;205
8.1.7.3;5.7.3 Rényi Quadratic and Cross Entropy from Parzen Windows;207
8.1.7.4;5.7.4 Mean Shift from an IT Perspective;210
8.1.8;5.8 Unsupervised Classification and Clustering Ensembles;211
8.1.8.1;5.8.1 Representation of Multiple Partitions;212
8.1.8.2;5.8.2 Consensus Functions;213
8.1.8.2.1;A Mixture Model of Consensus;214
8.1.8.2.2;Mutual Information Based Consensus;217
8.1.9;Problems;220
8.1.9.1;5.1 The entropy based EM algorithm and Gaussianity;220
8.1.9.2;5.2 The entropy based EM algorithm for color image segmentation;220
8.1.9.3;5.3 The entropy based EM algorithm and MDL;220
8.1.9.4;5.4 Information Theory and clustering;220
8.1.9.5;5.5 Rate Distortion Theory;221
8.1.9.6;5.6 Information Bottleneck;221
8.1.9.7;5.7 Blahut–Arimoto Information Bottleneck;221
8.1.9.8;5.8 Deterministic annealing;221
8.1.9.9;5.9 Agglomerative Information Bottleneck;221
8.1.9.10;5.10 Model order selection in AIB;222
8.1.9.11;5.11 Channel capacity;222
8.1.9.12;5.12 RIC and alternative model order selection;222
8.1.9.13;5.13 Mean Shift and Information Theory;222
8.1.9.14;5.14 X-means clustering and Bayesian Information Criterion;222
8.1.9.15;5.15 Clustering ensembles;223
8.1.10;5.9 Key References;223
9;Chapter 6 Feature Selection and Transformation;224
9.1;6.1 Introduction;224
9.2;6.2 Wrapper and the Cross Validation Criterion;225
9.2.1;6.2.1 Wrapper for Classifier Evaluation;225
9.2.2;6.2.2 Cross Validation;227
9.2.3;6.2.3 Image Classification Example;228
9.2.4;6.2.4 Experiments;232
9.3;6.3 Filters Based on Mutual Information;233
9.3.1;6.3.1 Criteria for Filter Feature Selection;233
9.3.2;6.3.2 Mutual Information for Feature Selection;235
9.3.3;6.3.3 Individual Features Evaluation, Dependence and Redundancy;236
9.3.4;6.3.4 The min-Redundancy Max-Relevance Criterion;238
9.3.5;6.3.5 The Max-Dependency Criterion;240
9.3.6;6.3.6 Limitations of the Greedy Search;241
9.3.7;6.3.7 Greedy Backward Search;244
9.3.8;6.3.8 Markov Blankets for Feature Selection;247
9.3.9;6.3.9 Applications and Experiments;249
9.4;6.4 Minimax Feature Selection for Generative Models;251
9.4.1;6.4.1 Filters and the Maximum Entropy Principle;251
9.4.2;6.4.2 Filter Pursuit through Minimax Entropy;255
9.5;6.5 From PCA to gPCA;257
9.5.1;6.5.1 PCA, FastICA, and Infomax;257
9.5.2;6.5.2 Minimax Mutual Information ICA;263
9.5.3;6.5.3 Generalized PCA (gPCA) and Effective Dimension;267
9.6;Problems;278
9.6.1;6.1 Filters and wrappers;278
9.6.2;6.2 Filter based on mutual information – Estimation of mutual information;278
9.6.3;6.3 Mutual information calculation;278
9.6.4;6.4 Markov blankets for feature selection;278
9.6.5;6.5 Conditional dependence in feature selection;278
9.6.6;6.6 Mutual information and conditional dependence;279
9.6.7;6.7 Filter based on mutual information – complexity;279
9.6.8;6.8 Satisfying additional constraints in maximum entropy;279
9.6.9;6.9 MML for feature selection and Gaussian clusters;279
9.6.10;6.10 Complexity of the FRAME algorithm;280
9.6.11;6.11 FRAME and one-dimensional patterns;280
9.6.12;6.12 Minimax entropy and filter pursuit;280
9.6.13;6.13 Kullback–Leibler gradient;280
9.6.14;6.14 Arrangements, Veronese maps, and gPCA;281
9.6.15;6.15 gPCA and Minimum Effective Dimension;281
9.7;6.6 Key References;282
10;Chapter 7 Classifier Design;283
10.1;7.1 Introduction;283
10.2;7.2 Model-Based Decision Trees;284
10.2.1;7.2.1 Reviewing Information Gain;284
10.2.2;7.2.2 The Global Criterion;285
10.2.3;7.2.3 Rare Classes with the Greedy Approach;287
10.2.4;7.2.4 Rare Classes with Global Optimization;292
10.3;7.3 Shape Quantization and Multiple Randomized Trees;296
10.3.1;7.3.1 Simple Tags and Their Arrangements;296
10.3.2;7.3.2 Algorithm for the Simple Tree;297
10.3.3;7.3.3 More Complex Tags and Arrangements;299
10.3.4;7.3.4 Randomizing and Multiple Trees;301
10.4;7.4 Random Forests;303
10.4.1;7.4.1 The Basic Concept;303
10.4.2;7.4.2 The Generalization Error of the RF Ensemble;303
10.4.3;7.4.3 Out-of-the-Bag Estimates of the Error Bound;306
10.4.4;7.4.4 Variable Selection: Forest-RI vs. Forest-RC;307
10.5;7.5 Infomax and Jensen–Shannon Boosting;310
10.5.1;7.5.1 The Infomax Boosting Algorithm;311
10.5.1.1;Infomax feature selection;313
10.5.1.2;Infomax Boosting vs. Adaboost;316
10.5.2;7.5.2 Jensen–Shannon Boosting;317
10.5.2.1;Jensen–Shannon Feature Pursuit;317
10.5.2.2;JSBoost vs. other boosting algorithms;319
10.6;7.6 Maximum Entropy Principle for Classification;320
10.6.1;7.6.1 Improved Iterative Scaling;320
10.6.2;7.6.2 Maximum Entropy and Information Projection;325
10.7;7.7 Bregman Divergences and Classification;336
10.7.1;7.7.1 Concept and Properties;336
10.7.2;7.7.2 Bregman Balls and Core Vector Machines;338
10.7.3;7.7.3 Unifying Classification: Bregman Divergences and Surrogates;343
10.8;Problems;351
10.8.1;7.1 Rare classes and Twenty Questions;351
10.8.2;7.2 Working with simple tags;351
10.8.3;7.3 Randomization and multiple trees;351
10.8.4;7.4 Gini index vs. entropy;351
10.8.5;7.5 Breiman's conjecture;351
10.8.6;7.6 Weak dependence of multiple trees;351
10.8.7;7.7 Derivation of mutual information;352
10.8.8;7.8 Discrete infomax classifier;352
10.8.9;7.9 How does Information Theory improve Adaboost?;352
10.8.10;7.10 Maximum entropy classifiers;352
10.8.11;7.11 Parameterizing the Gaussian;353
10.8.12;7.12 Improved iterative scaling equations;353
10.8.13;7.13 Exponential distributions and bijections with Bregman divergences;353
10.8.14;7.14 Bregman balls for Gaussian distributions;353
10.8.15;7.15 Bregman surrogates for trees and random forests;353
10.9;7.8 Key References;353
11;References;355
12;Index;365
13;Color Plates;368


