
E-book, English, Volume 7, 412 pages

Series: Adaptation, Learning, and Optimization

Tenne / Goh Computational Intelligence in Optimization

Applications and Implementations
1st edition, 2010
ISBN: 978-3-642-12775-5
Publisher: Springer
Format: PDF
Copy protection: PDF watermark


This collection of recent studies spans a range of computational intelligence techniques, emphasizing their application to challenging real-world problems. Topics covered include intelligent agent-based algorithms, hybrid intelligent systems, machine learning, and more.



Further Information & Material


2;Preface;6
3;Acknowledgement;10
4;Contents;11
5;New Hybrid Intelligent Systems to Solve Linear and Quadratic Optimization Problems and Increase Guaranteed Optimal Convergence Speed of Recurrent ANN;19
5.1;Introduction;19
5.2;Neural Network of Maa and Shanblatt: Two-Phase Optimization;22
5.3;Hybrid Intelligent System Description;25
5.3.1;Method of Tendency Based on the Dynamics in Space-Time (TDST);27
5.3.2;Method of Tendency Based on the Dynamics in State-Space (TDSS);31
5.4;Case Studies;34
5.4.1;Case 1: Mathematical Linear Programming Problem – Four Variables;34
5.4.2;Case 2: Mathematical Linear Programming Problem – Eleven Variables;34
5.4.3;Case 3: Mathematical Quadratic Programming Problem – Three Variables;36
5.5;Simulations;37
5.6;Conclusion;41
5.7;References;43
6;A Novel Optimization Algorithm Based on Reinforcement Learning;45
6.1;Introduction;45
6.2;Optimization Algorithm;48
6.2.1;Basic Search Procedure;48
6.2.2;Extracting Historical Information by Weighted Optimized Approximation;48
6.2.3;Predicting New Step Sizes;54
6.2.4;Stopping Criterion;55
6.2.5;Optimization Algorithm;56
6.3;Simulation and Discussion;57
6.3.1;Finding Global Minimum of a Multi-variable Function;57
6.3.2;Optimization of Weights in Multi-layer Perceptron Training;60
6.3.3;Micro-saccade Optimization in Active Vision for Machine Intelligence;61
6.4;Conclusions;63
6.5;References;64
7;The Use of Opposition for Decreasing Function Evaluations in Population-Based Search;66
7.1;Introduction;66
7.2;Theoretical Motivations;67
7.2.1;Definitions and Notations;68
7.2.2;Consequences of Opposition;69
7.2.3;Lowering Function Evaluations;70
7.2.4;Comparison to Existing Methods;71
7.3;Algorithms;72
7.3.1;Differential Evolution;73
7.3.2;Opposition-Based Differential Evolution;74
7.3.3;Population-Based Incremental Learning;74
7.3.4;Oppositional Population-Based Incremental Learning;75
7.4;Experimental Setup;76
7.4.1;Evolutionary Image Thresholding;76
7.4.2;Parameter Settings and Solution Representation;80
7.5;Experimental Results;81
7.5.1;ODE;81
7.5.2;OPBIL;82
7.6;Conclusion;85
7.7;References;86
8;Search Procedure Exploiting Locally Regularized Objective Approximation: A Convergence Theorem for Direct Search Algorithms;89
8.1;Introduction;89
8.2;The Search Procedure;90
8.3;Zangwill’s Method to Prove Convergence;91
8.4;The Main Result;93
8.4.1;Closedness of the Algorithmic Transformation;94
8.4.2;A Perturbation in the Line Search;96
8.5;The Radial Basis Approximation;103
8.5.1;Detecting Dense Regions;103
8.5.2;Regularization Training;104
8.5.3;Choice of the Regularization Parameter λ Value;106
8.5.4;Error Bounds for Radial Basis Approximation;107
8.6;Numerical Results;110
8.6.1;Test Problems;110
8.6.2;Results;111
8.7;Summary;115
8.8;References;118
9;Optimization Problems with Cardinality Constraints;120
9.1;Introduction;120
9.2;Approximate Methods for the Solution of Optimization Problems with Cardinality Constraints;123
9.2.1;Simulated Annealing;123
9.2.2;Genetic Algorithms;125
9.2.3;Estimation of Distribution Algorithms;126
9.3;Benchmark Optimization Problems with Cardinality Constraints;128
9.3.1;The Knapsack Problem;129
9.3.2;Ensemble Pruning;131
9.3.3;Portfolio Optimization with Cardinality Constraints;134
9.3.4;Index Tracking by Partial Replication;137
9.3.5;Sparse Principal Component Analysis;139
9.4;Conclusions;142
9.5;References;143
10;Learning Global Optimization through a Support Vector Machine Based Adaptive Multistart Strategy;146
10.1;Introduction and Background Research;147
10.2;Global Optimization with Support Vector Regression Based Adaptive Multistart (GOSAM);149
10.3;Experimental Results;151
10.3.1;One Dimensional Wave Function;152
10.3.2;Two Dimensional Case: Ackley’s Function;155
10.3.3;Comparison with PSO and GA on Higher Dimensional Problems;156
10.4;Extension to Constrained Optimization Problems;158
10.4.1;Sequential Unconstrained Minimization Techniques;158
10.5;Design Optimization Problems;162
10.5.1;Sample and Hold Circuit;163
10.5.2;Folded Cascode Amplifier;164
10.6;Discussion;164
10.7;Conclusion and Future Work;167
10.8;References;168
11;Multi-objective Optimization Using Surrogates;170
11.1;Introduction;170
11.2;Surrogate Models for Optimization;172
11.3;Multi-objective Optimization Using Surrogates;173
11.4;Pareto Fronts - Challenges;174
11.5;Response Surface Methods, Optimization Procedure and Test Functions;176
11.6;Update Strategies and Related Parameters;178
11.7;Test Functions;179
11.8;Pareto Front Metrics;180
11.8.1;Generational Distance ([3], p. 326);180
11.8.2;Spacing;180
11.8.3;Spread;180
11.8.4;Maximum Spread;181
11.9;Results;181
11.9.1;Understanding the Results;181
11.9.2;Preliminary Calculations;182
11.9.3;The Effect of the Update Strategy Selection;182
11.9.4;The Effect of the Initial Design of Experiments;186
11.10;Summary;189
11.11;References;189
12;A Review of Agent-Based Co-Evolutionary Algorithms for Multi-Objective Optimization;191
12.1;Introduction;191
12.2;Model of Co-Evolutionary Multi-Agent System;193
12.2.1;Co-Evolutionary Multi-Agent System;194
12.2.2;Environment;194
12.2.3;Species;195
12.2.4;Sex;196
12.2.5;Agent;197
12.3;Co-Evolutionary Multi-Agent Systems for Multi-Objective Optimization;201
12.3.1;Co-Evolutionary Multi-Agent System with Co-Operation Mechanism (CCoEMAS);201
12.3.2;Co-Evolutionary Multi-Agent System with Predator-Prey Interactions (PPCoEMAS);204
12.4;Experimental Results;210
12.4.1;Test Suite, Performance Metric and State-of-the-Art Algorithms;210
12.4.2;A Glance at Assessing Co-operation Based Approach (CCoEMAS);211
12.4.3;A Glance at Assessing Predator-Prey Based Approach (PPCoEMAS);214
12.5;Summary and Conclusions;221
12.6;References;221
13;A Game Theory-Based Multi-Agent System for Expensive Optimisation Problems;224
13.1;Introduction;224
13.2;Background;226
13.2.1;Optimisation;226
13.2.2;Game Theory: The Iterated Prisoners’ Dilemma;226
13.2.3;Multi-Agent Systems;227
13.3;Constructing GTMAS;228
13.3.1;GTMAS at Work: Illustration;229
13.4;The GTMAS Algorithm;231
13.4.1;Solver-Agents Decision Making Procedure;232
13.5;Application of GTMAS to TSP;234
13.6;Tests and Results;241
13.7;Conclusion and Further Work;242
13.8;References;243
14;Optimization with Clifford Support Vector Machines and Applications;246
14.1;Introduction;246
14.2;Geometric Algebra;247
14.2.1;The Geometric Algebra of n-D Space;248
14.2.2;The Geometric Algebra of 3-D Space;250
14.3;Linear Clifford Support Vector Machines for Classification;250
14.4;Non Linear Clifford Support Vector Machines for Classification;255
14.5;Clifford SVM for Regression;256
14.6;Recurrent Clifford SVM;258
14.7;Applications;260
14.7.1;3D Spiral: Nonlinear Classification Problem;260
14.7.2;Object Recognition;263
14.7.3;Multi-case Interpolation;269
14.7.4;Experiments Using Recurrent CSVM;270
14.8;Conclusions;273
14.9;References;273
15;A Classification Method Based on Principal Component Analysis and Differential Evolution Algorithm Applied for Prediction Diagnosis from Clinical EMR Heart Data Sets;276
15.1;Introduction;277
15.2;Heart Data Sets;280
15.3;Classification Method;281
15.3.1;Dimension Reduction Using Principal Component Analysis;281
15.3.2;Classification Based on Differential Evolution;282
15.3.3;Differential Evolution;284
15.4;Classification Results;285
15.5;Discussion and Conclusions;293
15.6;References;294
16;An Integrated Approach to Speed Up GA-SVM Feature Selection Model;297
16.1;Introduction;297
16.2;Methodology;300
16.2.1;Parallel/Distributed GA;300
16.2.2;Parallel SVM;302
16.2.3;Neighbor Search;303
16.2.4;Evaluation Caching;304
16.3;Experiments and Results;304
16.4;Conclusion;309
16.5;References;310
17;Computation in Complex Environments: Optimizing Railway Timetable Problems with Symbiotic Networks;311
17.1;Introduction;311
17.1.1;Convergence Inducing Process;312
17.1.2;A Classification of Problem Domains;313
17.2;Railway Timetable Problems;314
17.3;Symbiotic Networks;316
17.3.1;A Theory of Symbiosis;318
17.3.2;Premature Convergence;323
17.4;Symbiotic Networks as Optimizers;325
17.5;Trains as Symbiots;326
17.5.1;Trains in Symbiosis;327
17.5.2;The Environment;328
17.5.3;The Trains;328
17.5.4;The Optimizing Layer;330
17.5.5;Computational Complexity;331
17.5.6;Results;331
17.5.7;A Symbiotic Network as a CCGA;333
17.5.8;Discussion;334
17.6;References;334
18;Project Scheduling: Time-Cost Tradeoff Problems;337
18.1;Introduction;337
18.1.1;A Mathematical Description of TCT Problems;340
18.2;Resource-Constrained Nonlinear TCT;341
18.2.1;Artificial Neural Networks;342
18.2.2;Working of ANN and Heuristic Embedded Genetic Algorithm;343
18.2.3;ANNHEGA for a Case Study;346
18.3;Sensitivity Analysis of TCT Profiles;348
18.3.1;Working of IFAG;351
18.3.2;IFAG for a Case Study;351
18.4;Hybrid Meta Heuristic;355
18.4.1;Working of Hybrid Meta Heuristic;357
18.4.2;HMH Approach for Case Studies;360
18.4.3;Standard Test Problems;364
18.5;Conclusions;366
18.6;References;367
19;Systolic VLSI and FPGA Realization of Artificial Neural Networks;370
19.1;Introduction;371
19.2;Direct-Design of VLSI for Artificial Neural Network;373
19.3;Design Considerations and Systolic Building Blocks for ANN;375
19.4;Systolic Architectures for ANN;382
19.4.1;Systolic Architecture for Hopfield Net;382
19.4.2;Systolic Architecture for Multilayer Neural Network;384
19.4.3;Systolic Implementation of Back-Propagation Algorithm;384
19.4.4;Implementation of Advance Algorithms and Applications;387
19.5;Conclusion;387
19.6;References;388
20;Application of Coarse-Coding Techniques for Evolvable Multirobot Controllers;392
20.1;Introduction;392
20.2;Background;395
20.2.1;The Body and the Brain;396
20.2.2;Task Decomposition;396
20.2.3;Machine-Learning Techniques and Modularization;397
20.2.4;Fixed versus Variable Topologies;398
20.2.5;Regularity in the Environment;399
20.3;Artificial Neural Tissue Model;400
20.3.1;Computation;400
20.3.2;The Decision Neuron;401
20.3.3;Evolution and Development;402
20.3.4;Sensory Coarse Coding Model;404
20.4;An Example Task: Resource Gathering;406
20.4.1;Coupled Motor Primitives;408
20.4.2;Evolutionary Parameters;410
20.5;Results;410
20.5.1;Evolution and Robot Density;414
20.5.2;Behavioral Adaptations;414
20.5.3;Evolved Controller Scalability;417
20.6;Discussion;418
20.7;Conclusion;420
20.8;References;421


