Kumar / Kitano / Suttner | Parallel Processing for Artificial Intelligence 2 | E-Book | www.sack.de

E-book, English, Volume 15, 245 pages, Web PDF

Series: Machine Intelligence and Pattern Recognition

Kumar / Kitano / Suttner Parallel Processing for Artificial Intelligence 2


1st edition, 2014
ISBN: 978-1-4832-9575-6
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark




With the increasing availability of parallel machines and rising interest in large-scale, real-world applications, research on parallel processing for Artificial Intelligence (AI) is gaining importance in computer science. Many applications have been implemented and delivered, but the field is still considered to be in its infancy.

This book assembles diverse aspects of research in the area, providing an overview of the current state of the technology and aiming to promote its further growth. The contributions are grouped by subject: architectures (3 papers), languages (4 papers), general algorithms (6 papers), and applications (5 papers). The internationally sourced papers range from purely theoretical work and simulation studies through algorithm and architecture proposals to implemented systems and their experimental evaluation.

As the second volume in the Parallel Processing for Artificial Intelligence series, the book continues the documentation of research and advances in the field. The editors hope that it will inspire readers to investigate the possibilities for enhancing AI systems by parallel processing and to make new discoveries of their own!


Further information & material


Front Cover
Parallel Processing for Artificial Intelligence 2
Copyright Page
Table of Contents
Preface

Section 1: Architectures
  Chapter 1. Hybrid Systems on a Multi-Grain Parallel Architecture
    Abstract
    1. Introduction
    2. Hybrid systems
    3. ArMenX
    4. Implementation of multiple granularity algorithms on ArMenX
    5. Conclusion
    References
  Chapter 2. An Abstract Machine for Implementing Connectionist and Hybrid Systems on Multi-processor Architectures
    Abstract
    1. Introduction
    2. Hybrid systems for new AI applications
    3. The Cellular Abstract Machine (CAM)
    4. Implementation of a hybrid model on the CAM: an example
    5. Multi-processor implementation of the CAM
    6. Advancement of the implementation and conclusion
    Acknowledgements
    References
  Chapter 3. A Dense, Massively Parallel Architecture
    1. Introduction
    2. Description of the graph class
    3. Diameter and mean distance
    4. Routing
    5. Conclusion and future work
    References

Section 2: Languages
  Chapter 4. Using Confluence to Control Parallel Production Systems
    1. Introduction
    2. A Brief Introduction to Term Rewriting Systems
    3. Relating Production Systems to Term Rewriting Systems
    4. Determining Confluence Among Production Rule Sets
    5. Examples
    6. Summary and Future Work
    References
  Chapter 5. Toward an Architecture Independent High Level Parallel Programming Model for Artificial Intelligence
    Abstract
    1. Introduction
    2. Design Considerations
    3. The Programming Model
    4. Examples
    5. Exploiting Parallelism
    6. Development Status
    7. Conclusions
    References
  Chapter 6. An Object-Oriented Approach for Programming the Connection Machine
    Abstract
    1. Programming Model
    2. A more detailed view
    3. Conclusion
    References
  Chapter 7. Automatic Parallelisation of LISP programs
    Abstract
    1. Introduction
    2. Parallel Analysis
    3. The PARALLEL Subsystem
    4. Discussion
    References

Section 3: General Algorithms
  Chapter 8. Simulation Analysis of Static Partitioning with Slackness
    Abstract
    1. Introduction
    2. Simulation Analysis
    3. Related Work
    4. Summary
    References
  Chapter 9. A distributed realization for constraint satisfaction
    1. Introduction
    2. The CSP
    3. Related work
    4. Our distributed approach
    5. A multi-master environment
    6. Final remarks
    References
  Chapter 10. A First Step Towards the Massively Parallel Game-Tree Search: a SIMD Approach
    Abstract
    1. Introduction
    2. Minimax theory
    3. α–β pruning
    4. Search parallelization techniques
    5. Motivations
    6. SIMD α–β algorithm
    7. Implementation on CM-2
    8. Empirical results
    9. Concluding remarks and future works
    References
  Chapter 11. Initialization of Parallel Branch-and-bound Algorithms
    Abstract
    1. Introduction
    2. Parallel Branch-and-bound
    3. Initialization Methods
    4. Analysis
    5. Experimental Results
    6. Conclusion
    Acknowledgements
    References
  Chapter 12. A Model for Parallel Deduction
    Abstract
    1. Introduction
    2. A Parallel Model for Horn Clauses
    3. A Complete Parallel Model
    4. Concluding Remarks
    References

Section 4: Applications
  Chapter 13. Toward Real-Time Motion Planning
    Abstract
    1. Introduction
    2. A Parallel Motion Planning Algorithm for MIMD multicomputers
    3. Future work
    4. Conclusion
    Acknowledgements
    References
  Chapter 14. Toward Massively Parallel Spoken Language Translation
    Abstract
    1. Introduction
    2. TDMT and Massively Parallel EBMT
    3. Massively Parallel TDMT
    4. Performance Analysis of Sequential TDMT vs. Massively Parallel TDMT
    5. Conclusion
    References
  Chapter 15. Weather Forecasting Using Memory-Based Reasoning
    1. Introduction
    2. Weather Forecasting Using MBR
    3. Implementation
    4. Experimental Results
    5. Discussion
    6. Conclusions and Future work
    References
  Chapter 16. Scalability of an OR-parallel Theorem Prover: A Modelling Approach
    Abstract
    1. Introduction
    2. OR-Parallelism in PARTHEO
    3. Modelling PARTHEO
    4. Results and Conclusions
    References
  Chapter 17. A Coarse Grained Parallel Induction Heuristic
    Abstract
    1. Introduction
    2. Outline of Induction Heuristic
    3. Data Parallel
    4. Brief Discussion of Options
    5. Implementation
    6. Details of Comparisons
    7. Results
    8. Conclusion
    Acknowledgements
    References
  Chapter 18. Fuzzy Logic controlled dynamic allocation system
    Abstract
    1. Introduction
    2. Structure of the allocation system
    3. The fuzzy-control allocator
    4. Example of an allocation strategy
    5. A Test Case
    6. Conclusion
    References


