
E-book, English, 493 pages, Web PDF

Masters: Practical Neural Network Recipes in C++


1st edition 2014
ISBN: 978-0-08-051433-8
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: 1 - PDF Watermark




This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed, and all models are presented from the ground up.

The principal focus of the book is the three-layer feedforward network, which has served for more than a decade as the workhorse of professional arsenals. Other network models with strong performance records are also included.

Bound into the book is an IBM diskette that includes the source code for all programs in the book. Much of this code can be easily adapted to C compilers. In addition, the operation of all programs is thoroughly discussed both in the text and in the comments within the code to facilitate translation to other languages.
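The three-layer feedforward network the description mentions is small enough to sketch in a few lines of C++. The following forward pass with logistic activations is a minimal illustrative sketch only, not code from the book or its diskette; the 2-2-1 layer sizes and all weight values are hypothetical placeholders.

    // Illustrative sketch: forward pass through a three-layer
    // (input / hidden / output) feedforward network with logistic
    // activations. Layer sizes and weights are hypothetical
    // placeholders, not code from the book.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Logistic (sigmoid) squashing function used by this network type.
    static double logistic(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // One layer: neuron j computes logistic(w_j . input + bias_j).
    // weights[j] holds neuron j's input weights with the bias appended last.
    static std::vector<double> layer(const std::vector<std::vector<double>> &weights,
                                     const std::vector<double> &input) {
        std::vector<double> out;
        out.reserve(weights.size());
        for (const auto &w : weights) {
            double sum = w.back();                 // bias term
            for (size_t i = 0; i < input.size(); ++i)
                sum += w[i] * input[i];
            out.push_back(logistic(sum));
        }
        return out;
    }

    int main() {
        // Hypothetical 2-2-1 network: 2 inputs, 2 hidden neurons, 1 output.
        std::vector<std::vector<double>> hidden = {{ 4.0,  4.0, -2.0},
                                                   {-4.0, -4.0,  6.0}};
        std::vector<std::vector<double>> output = {{ 4.0,  4.0, -6.0}};

        std::vector<double> in = {1.0, 0.0};
        std::vector<double> result = layer(output, layer(hidden, in));
        std::printf("network output: %f\n", result[0]);
        return 0;
    }

Compiled with any C++11 compiler, this prints a single activation value. Choosing the weights by training, rather than writing them in by hand as above, is the subject of the book's Chapter 6.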

Order Masters: Practical Neural Network Recipes in C++ now!

Authors/Editors

Timothy Masters

Further Information & Material


Front Cover
Practical Neural Network Recipes in C++
Copyright Page
Table of Contents
Dedication
Preface
Chapter 1. Foundations
  Motivation
  New Life for Old Techniques
  Perceptrons and Linear Separability
  Neural Network Capabilities
  Basic Structure of a Neural Network
  Training
  Validation
Chapter 2. Classification
  Binary Decisions
  Multiple Classes
  Supervised versus Unsupervised Training
Chapter 3. Autoassociation
  Autoassociative Filtering
  Noise Reduction
  Learning a Prototype from Exemplars
  Exposing Isolated Events
  Pattern Completion
  Error Correction
  Data Compression
Chapter 4. Time-Series Prediction
  The Basic Model
  Input Data
  Multiple Prediction
  Multiple Predictors
  Measuring Prediction Error
Chapter 5. Function Approximation
  Univariate Function Approximation
  Inverse Modeling
  Multiple Regression
Chapter 6. Multilayer Feedforward Networks
  Basic Architecture
  Theoretical Discussion
  Algorithms for Executing the Network
  Training the Network
  Training by Backpropagation of Errors
  Training by Conjugate Gradients
  Eluding Local Minima in Learning
  When to Use a Multiple-Layer Feedforward Network
Chapter 7. Eluding Local Minima I: Simulated Annealing
  Overview
  Choosing the Annealing Parameters
  Implementation in Feedforward Network Learning
  A Sample Program
  A Sample Function
  Random Number Generation
  Going on from Here
Chapter 8. Eluding Local Minima II: Genetic Optimization
  Overview
  Designing the Genetic Structure
  Evaluation
  Parent Selection
  Reproduction
  Mutation
  A Genetic Minimization Subroutine
  Some Functions for Genetic Optimization
  Advanced Topics in Genetic Optimization
Chapter 9. Regression and Neural Networks
  Overview
  Singular-Value Decomposition
  Regression in Neural Networks
Chapter 10. Designing Feedforward Network Architectures
  How Many Hidden Layers?
  How Many Hidden Neurons?
  How Long Do I Train This Thing???
Chapter 11. Interpreting Weights: How Does This Thing Work?
  Features Used by Networks in General
  Features Used by a Particular Network
Chapter 12. Probabilistic Neural Networks
  Overview
  Computational Aspects
  Optimizing Sigma
  A Sample Program
  Bayesian Confidence Measures
  Autoassociative Versions
  When to Use a Probabilistic Neural Network
Chapter 13. Functional Link Networks
  Application to Nonlinear Approximation
  Mathematics of the Functional Link Network
  When to Use a Functional Link Network
Chapter 14. Hybrid Networks
  Functional Link Net as a Hidden Layer
  Fast Bayesian Confidences
  Attention-based Processing
  Factorable Problems
Chapter 15. Designing the Training Set
  Number of Samples
  Borderline Cases
  Hidden Bias
  Balancing the Classes
  Fudging Cases
Chapter 16. Preparing Input Data
  General Considerations
  Types of Measurements
  Is Scaling Always Necessary?
  Transformations
  Circular Discontinuity
  Outliers
  Missing Data
Chapter 17. Fuzzy Data and Processing
  Treating Fuzzy Values as Nominal and Ordinal
  Advantages of Fuzzy Set Processing
  The Neural Network - Fuzzy Set Interface
  Membership Functions
  Negation, Conjunction, and Disjunction
  Modus Ponens
  Combining Operations
  Defuzzification
  Code for Fuzzy Set Operations
  Examples of Neural Network Fuzzy Preprocessing
  Examples of Neural Network Fuzzy Postprocessing
Chapter 18. Unsupervised Training
  Input Normalization
  Training the Kohonen Network
  Self-Organization
Chapter 19. Evaluating Performance of Neural Networks
  Overview
  Mean Square Error
  Cost Functions
  Confusion Matrix
  ROC (Receiver Operating Characteristic) Curves
  Signal-to-Noise Ratio
Chapter 20. Confidence Measures
  Testing Individual Hypotheses
  Computing Confidence
  Confidence in the Null Hypothesis
  Multiple Classes
  Confidence in the Confidence
  Example Programs
  Bayesian Methods
  Example Program
  Multiple Classes
  Hypothesis Testing versus Bayes' Method
Chapter 21. Optimizing the Decision Threshold
Chapter 22. Using the NEURAL Program
  Output Models
  Building the Training Set
  The LAYER Network Model
  The KOHONEN Network Model
  Confusion Matrices
  Saving Weights and Execution Results
  Alphabetical Glossary of Commands
  Verification of Program Operation
Appendix
Bibliography
Index


