Andina / Pham: Computational Intelligence for Engineering and Manufacturing
1st edition, 2007
E-book, English, 220 pages
ISBN: 978-0-387-37452-9
Publisher: Springer US
Format: PDF
Copy protection: PDF watermark
Computational Intelligence (CI) is tolerant of imprecise information, partial truth and uncertainty. This book presents a selected collection of contributions offering a focused treatment of important elements of CI, centred on its key element: learning. It describes novel, real-world applications in manufacturing and engineering, and lays a basis for understanding the domotic and production methods of the twenty-first century.
Further Information & Material
1;CONTENTS;7
2;CONTRIBUTING AUTHORS;9
3;PREFACE;11
4;ACKNOWLEDGEMENTS;13
5;1 SOFT COMPUTING AND ITS APPLICATIONS IN ENGINEERING AND MANUFACTURE;14
5.1;INTRODUCTION;14
5.2;1. KNOWLEDGE-BASED SYSTEMS;14
5.3;2. FUZZY LOGIC;18
5.4;3. INDUCTIVE LEARNING;22
5.5;4. NEURAL NETWORKS;27
5.6;5. GENETIC ALGORITHMS;32
5.6.1;5.1 Representation;33
5.6.2;5.2 Creation of Initial Population;33
5.6.3;5.3 Genetic Operators;34
5.6.4;5.4 Control Parameters;36
5.6.5;5.5 Fitness Evaluation Function;37
5.7;6. SOME APPLICATIONS IN ENGINEERING AND MANUFACTURE;38
5.7.1;6.1 Expert Statistical Process Control;38
5.7.2;6.2 Fuzzy Modelling of a Vibratory Sensor for Part Location;38
5.7.3;6.3 Induction of Feature Recognition Rules in a Geometric Reasoning System for Analysing 3D Assembly Models;40
5.7.4;6.4 Neural-network-based Automotive Product Inspection;42
5.7.5;6.5 GA-based Conceptual Design;43
5.8;7. CONCLUSION;44
5.9;8. ACKNOWLEDGEMENTS;45
5.10;REFERENCES;45
6;2 NEURAL NETWORKS HISTORICAL REVIEW;52
6.1;INTRODUCTION;52
6.2;1. HISTORICAL PERSPECTIVE;53
6.2.1;1.1 First Computational Model of Nervous Activity: The Model of McCulloch and Pitts;53
6.2.2;1.2 Training of Neural Networks: Hebbian Learning;56
6.2.3;1.3 Supervised Learning: Rosenblatt and Widrow;56
6.2.4;1.4 Partial eclipse of Neural Networks: Minsky and Papert;58
6.2.5;1.5 Backpropagation algorithm: Werbos, Rumelhart et al. and Parker;60
6.3;2. NEURAL NETWORKS VS CLASSICAL COMPUTERS;61
6.4;3. BIOLOGICAL AND ARTIFICIAL NEURONS;62
6.4.1;3.1 The Biological Neuron;62
6.4.2;3.2 The Artificial Neuron;63
6.5;4. NEURAL NETWORKS: CHARACTERISTICS AND TAXONOMY;64
6.6;5. FEED FORWARD NEURAL NETWORKS: THE PERCEPTRON;65
6.6.1;5.1 One Layer Perceptron;67
6.7;6. LMS LEARNING RULE;68
6.7.1;6.1 The Multilayer Perceptron;69
6.7.2;6.2 Acceleration of the training procedure;72
6.7.3;6.3 On-Line and Off-Line training;73
6.7.4;6.4 Selection of the Network size;74
6.8;7. KOHONEN NETWORKS;74
6.8.1;7.1 Training;76
6.9;8. FUTURE PERSPECTIVES;77
6.10;REFERENCES;77
7;3 ARTIFICIAL NEURAL NETWORKS;80
7.1;INTRODUCTION;80
7.2;1. TYPES OF NEURAL NETWORKS;80
7.2.1;1.1 Structural Categorisation;80
7.2.2;1.2 Learning Algorithm Categorisation;81
7.3;2. NEURAL NETWORKS EXAMPLE;81
7.3.1;2.1 Multi-layer Perceptron (MLP);82
7.3.2;2.2 Radial Basis Function (RBF) Network;84
7.3.3;2.3 Learning Vector Quantization (LVQ) Network;87
7.3.4;2.4 CMAC Network;88
7.3.5;2.5 Group Method of Data Handling (GMDH) Network;90
7.3.6;2.6 Hopfield Network;92
7.3.7;2.7 Elman and Jordan Nets;93
7.3.8;2.8 Kohonen Network;95
7.3.9;2.9 ART Networks;96
7.3.10;2.10 Spiking Neural Network;101
7.4;3. SUMMARY;104
7.5;4. ACKNOWLEDGEMENTS;104
7.6;REFERENCES;104
8;4 APPLICATION OF NEURAL NETWORKS;106
8.1;INTRODUCTION;106
8.2;1. FEASIBILITY STUDY;107
8.3;2. APPLICATION OF NNs TO BINARY DETECTION;107
8.4;3. THE NEURAL DETECTOR;109
8.4.1;3.1 The Multi-Layer Perceptron (MLP);109
8.4.2;3.2 The MLP Structure;110
8.5;4. THE TRAINING ALGORITHM;111
8.5.1;4.1 The BackPropagation (BP) Algorithm;111
8.5.2;4.2 The Training Sets;113
8.6;5. COMPUTER RESULTS;114
8.6.1;5.1 The Criterion Function;115
8.6.2;5.2 Robustness Under Different Target Models;117
8.7;APPENDIX: ON BACKPROPAGATION AND THE CRITERION FUNCTIONS;118
8.7.1;1. THE BACKPROPAGATION ALGORITHM;118
8.7.2;2. LEAST MEAN SQUARES (LMS);119
8.7.3;3. MINIMUM MISCLASSIFICATION ERROR (MME);119
8.7.4;4. THE JM CRITERION;120
8.8;REFERENCES;121
9;5 RADIAL BASIS FUNCTION NETWORKS AND THEIR APPLICATION IN COMMUNICATION SYSTEMS;122
9.1;1. RADIAL BASIS FUNCTION NETWORKS;122
9.2;2. ARCHITECTURE;124
9.3;3. TRAINING ALGORITHMS;125
9.3.1;3.1 Fixed Centers Selected at Random;126
9.3.2;3.2 Self-Organized Selection of Centers;126
9.3.3;3.3 Supervised Selection of Centers;128
9.3.4;3.4 Orthogonal Least Squares (OLS);130
9.4;4. RELATION WITH SUPPORT VECTOR MACHINES (SVM);132
9.5;5. APPLICATIONS OF RADIAL BASIS FUNCTION NETWORKS TO COMMUNICATION SYSTEMS;132
9.5.1;5.1 Antenna Array Signal Processing;133
9.5.2;5.2 Channel Equalization;135
9.5.3;5.3 Other RBF Networks Applications;140
9.6;REFERENCES;141
10;6 BIOLOGICAL CLUES FOR UP-TO-DATE ARTIFICIAL NEURONS;144
10.1;INTRODUCTION;144
10.2;1. BIOLOGICAL PROPERTIES;145
10.2.1;1.1 Synaptic Plasticity Properties;145
10.2.2;1.2 Neuron’s Properties;149
10.2.3;1.3 Network Properties;150
10.3;2. UPDATING THE MCCULLOCH-PITTS MODEL;152
10.3.1;2.1 Up-to-date Synaptic Model;152
10.3.2;2.2 Up-to-date Neuron Model;154
10.3.3;2.3 Up-to-date Network Model;155
10.4;3. JOINING THE BLOCKS: A NEURAL NETWORK MODEL OF THE THALAMUS;156
10.5;4. CONCLUSIONS;156
10.6;REFERENCES;158
11;7 SUPPORT VECTOR MACHINES;160
11.1;INTRODUCTION;160
11.2;1. SVM DEFINITION;161
11.2.1;1.1 Structural Risk;161
11.2.2;1.2 Linear SVM for Separable Data;164
11.2.3;1.3 Karush-Kuhn-Tucker Conditions;168
11.2.4;1.4 Optimisation Example;169
11.2.5;1.5 Test Phase;172
11.2.6;1.6 Non-Separable Linear Case;172
11.2.7;1.7 Non-Linear Case;175
11.2.8;1.8 Mapping Function Example;177
11.2.9;1.9 Mercer Conditions;179
11.2.10;1.10 Kernel Examples;180
11.2.11;1.11 Global Solutions and Uniqueness;181
11.2.12;1.12 Generalization Performance Analysis;181
11.3;2. SVM MATHEMATICAL APPLICATIONS;184
11.3.1;2.1 Pattern Recognition;184
11.3.2;2.2 Regression;184
11.3.3;2.3 Principal Component Analysis;190
11.4;3. SVM VERSUS NEURAL NETWORKS;193
11.5;4. SVM OPTIMISATION METHODS;195
11.5.1;4.1 Optimisation Methods Overview;195
11.5.2;4.2 SMO Algorithm;197
11.6;5. CONCLUSIONS;203
11.7;6. ACKNOWLEDGEMENTS;204
11.8;REFERENCES;204
12;8 FRACTALS AS PRE-PROCESSING TOOL FOR COMPUTATIONAL INTELLIGENCE APPLICATION;206
12.1;INTRODUCTION;206
12.2;1. STATE OF THE ART;207
12.3;2. FRACTAL CALCULATIONS;209
12.3.1;2.1 Box-counting Method;209
12.3.2;2.2 Dilation Method;209
12.3.3;2.3 Random Walk;210
12.4;3. CALCULATION OF GENERALIZED FRACTAL DIMENSIONS;212
12.4.1;3.1 Box-counting Method;212
12.4.2;3.2 Gliding Box Method;214
12.5;4. IMAGES FOR THE CASE STUDY;215
12.6;5. RESULTS OF THE CASE STUDY AND DISCUSSION;215
12.6.1;5.1 Generating Function with the Box-counting Method;215
12.6.2;5.2 Generalized Dimensions Using the Box-counting Method;218
12.6.3;5.3 Generalized Dimensions Using the Gliding Box Method;218
12.7;6. CONCLUSIONS;220
12.8;7. ACKNOWLEDGEMENTS;223
12.9;REFERENCES;223
CHAPTER 2 NEURAL NETWORKS HISTORICAL REVIEW (p. 39)
D. ANDINA, A. VEGA-CORONA, J. I. SEIJAS, J. TORRES-GARCÍA
Abstract:
This chapter begins with a historical summary of the evolution of Neural Networks, from the first models, whose application capabilities were very limited, to the present ones, which make it possible to apply automatic processing to tasks formerly reserved to human intelligence. After the historical review, Neural Networks are treated from a computational point of view. This perspective helps to compare neural systems with classical computing systems and leads to a formal, common presentation that will be used throughout the book.
INTRODUCTION
Present-day computers can perform a great variety of tasks (provided they are well defined) at a higher speed and with greater reliability than human beings can achieve. None of us, for example, can solve complex mathematical equations at the speed of a personal computer. Nevertheless, the mental capacity of human beings still exceeds that of machines in a wide variety of tasks.
No artificial image-recognition system can compete with the capacity of a human being to discern between objects of diverse forms and orientations; in fact, it could not even compete with the capacity of an insect. In the same way, whereas a computer requires an enormous amount of computation and restrictive conditions to recognize, for example, phonemes, an adult human effortlessly recognizes words pronounced by different people, at different speeds, with different accents and intonations, even in the presence of environmental noise.
By means of rules learned from experience, the human being is much more effective than computers at solving imprecise (ambiguous) problems, or problems that require handling a great amount of information. Our brain achieves this by means of thousands of millions of simple cells, called neurons, interconnected with one another.
However, it is estimated that operational amplifiers and logic gates can perform operations several orders of magnitude faster than neurons. If the processing technique of these biological elements were implemented with operational amplifiers and logic gates, one could construct relatively cheap machines able to process at least as much information as a biological brain does. Of course, we are far from knowing whether such machines will ever be built.
Therefore, there are strong reasons to consider the viability of tackling certain problems by means of parallel systems that process information and learn according to principles taken from the brain systems of living beings. Such systems are called Artificial Neural Networks, connectionist models, or parallel distributed processing models. Artificial Neural Networks (ANNs or, simply, NNs) thus arise from the intention to simulate the biological brain system in an artificial way.
1. HISTORICAL PERSPECTIVE
The science of Artificial Neural Networks made its first significant appearance during the 1940s. Researchers trying to emulate the functions of the human brain developed physical models (and later, software simulations) of biological neurons and their interconnections.
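One of those early models, the McCulloch-Pitts threshold unit covered in Section 1.1 of this chapter, can be sketched in a few lines. The following Python sketch is illustrative only; the function name and the AND-gate parameters are our own choices, not taken from the book:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Minimal McCulloch-Pitts unit: the neuron fires (outputs 1)
    when the weighted sum of its binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input AND gate (weights 1 and 1, threshold 2).
print(mcculloch_pitts([1, 1], [1, 1], 2))  # fires: 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # does not fire: 0
```

Despite its simplicity, this unit already exhibits the two ingredients that recur throughout the book: weighted connections and a nonlinear activation decision.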




