Kanal / Kitano / Kumar: Parallel Processing for Artificial Intelligence 1

E-book, English, Volume 14, 443 pages, Web PDF

Series: Machine Intelligence and Pattern Recognition



1st edition, 2014
ISBN: 978-1-4832-9574-9
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark




Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, and data filtering and data mining.

The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discusses parallel processing for semantic networks, a widely used means of representing knowledge; methods that enable efficient and flexible processing of semantic networks are expected to be of high utility for building large-scale knowledge-based systems. The third section explores the automatic parallel execution of production systems, which are used extensively in building rule-based expert systems; systems containing large numbers of rules are slow to execute and can benefit significantly from automatic parallel execution.

The exploitation of parallelism for the mechanization of logic is dealt with in the fourth section. While sequential control aspects pose problems for the parallelization of production systems, logic has a purely declarative interpretation that does not demand a particular evaluation strategy. In this area, therefore, very large search spaces provide significant potential for parallelism; this is particularly true of automated theorem proving.

The fifth section considers constraint satisfaction, a useful abstraction of a number of important problems in AI and other fields of computer science. It also discusses the technique of consistent labeling as a preprocessing step for the constraint satisfaction problem (a minimal sketch of this idea appears after this description).

Section VI consists of two articles, each on a different, important topic. The first discusses a parallel formulation of Tree Adjoining Grammar (TAG), a powerful formalism for describing natural languages. The second examines the suitability of the parallel programming paradigm Linda for solving problems in artificial intelligence.

Each of the areas discussed in the book holds many open problems, but it is believed that parallel processing will form a key ingredient in achieving at least partial solutions. It is hoped that the contributions, sourced from experts around the world, will inspire readers to take on these challenging areas of inquiry.
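To make the consistent-labeling idea from Section V concrete, here is a minimal sketch (our own illustration, not code from the book) that filters variable domains by repeated arc revision before any search begins. The function names `revise` and `consistent_labeling` and the toy instance are hypothetical; the sweep shown is a sequential AC-1-style pass, with a comment marking where a parallel formulation would differ.

```python
# Minimal illustrative sketch (not from the book): consistent labeling
# as arc-consistency preprocessing for a constraint satisfaction problem.
from itertools import product

def revise(domains, constraint, x, y):
    """Drop values of variable x that have no supporting value in y."""
    kept = {vx for vx in domains[x]
            if any(constraint(vx, vy) for vy in domains[y])}
    return kept, kept != domains[x]

def consistent_labeling(domains, arcs):
    """Sweep all arc revisions to a fixed point (AC-1-style)."""
    domains = {v: set(d) for v, d in domains.items()}
    changed = True
    while changed:
        changed = False
        # This sweep is sequential; a parallel variant would revise every
        # arc against a snapshot of the domains and merge (intersect) the
        # results, since the per-arc revisions do not depend on each other.
        for x, y, c in arcs:
            domains[x], ch = revise(domains, c, x, y)
            changed = changed or ch
    return domains

if __name__ == "__main__":
    # Hypothetical toy instance: three variables, all pairs must differ.
    doms = {"A": {1}, "B": {1, 2}, "C": {1, 2, 3}}
    arcs = [(x, y, lambda a, b: a != b)
            for x, y in product("ABC", repeat=2) if x != y]
    print(consistent_labeling(doms, arcs))
    # -> {'A': {1}, 'B': {2}, 'C': {3}}
```

On this toy instance the fixed point prunes B and C down to single values before any search starts, which is exactly the kind of search-space reduction that makes consistent labeling attractive as a (parallelizable) preprocessing step.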

Order Kanal / Kitano / Kumar: Parallel Processing for Artificial Intelligence 1 now!

Further information & material


1;Front Cover;1
2;Parallel Processing for Artificial Intelligence 1;4
3;Copyright Page;5
4;Table of Contents;10
5;PREFACE;6
6;EDITORS;12
7;AUTHORS;14
8;PART I: IMAGE PROCESSING;18
8.1;Chapter 1. A Perspective on Parallel Processing in Computer Vision and Image Understanding;20
8.1.1;1. Introduction;20
8.1.2;2. Parallelism in Vision Systems;22
8.1.3;3. Representation Based Classification of Vision Computations;23
8.1.4;4. Issues in Data and Computation Partitioning;25
8.1.5;5. Architectural Requirements;29
8.1.6;6. Future Directions;34
8.1.7;Acknowledgments;36
8.1.8;References;36
8.2;Chapter 2. On Supporting Rule-Based Image Interpretation Using a Distributed Memory Multicomputer;38
8.2.1;1. Introduction;38
8.2.2;2. Software and Hardware Strategies for Supporting RBS;40
8.2.3;3. AIMS: A Multi-Sensor Image Interpretation System;43
8.2.4;4. Parallel Implementation;47
8.2.5;5. Discussion;52
8.2.6;6. Conclusion;54
8.2.7;References;55
8.3;Chapter 3. Parallel Affine Image Warping;62
8.3.1;1. Introduction;62
8.3.2;2. Forward versus inverse algorithms in affine image warping;64
8.3.3;3. Other important characteristics of affine image warping;66
8.3.4;4. Machines;66
8.3.5;5. Classification of implementations;67
8.3.6;6. Systolic methods;69
8.3.7;7. Data partitioned methods;74
8.3.8;8. A scanline method;77
8.3.9;9. A Sweep-Based Method;78
8.3.10;10. Conclusions;82
8.3.11;References;83
8.4;Chapter 4. Image Processing On Reconfigurable Meshes With Buses;84
8.4.1;Abstract;84
8.4.2;1. Introduction;84
8.4.3;2. Data Manipulation Operations;88
8.4.4;3. Area And Perimeter Of Connected Components;91
8.4.5;4. Shrinking And Expanding;94
8.4.6;5. Clustering;97
8.4.7;6. Template Matching;99
8.4.8;7. Conclusions;104
8.4.9;8. References;104
9;PART II: SEMANTIC NETWORKS;110
9.1;Chapter 5. Inheritance Operations in Massively Parallel Knowledge Representation;112
9.1.1;1. Massively Parallel Knowledge Representation;112
9.1.2;2. Schubert's Tree Encoding of IS-A Hierarchies;113
9.1.3;3. How to Achieve the Same Effect Without Trees;115
9.1.4;4. Parallelizing the Update Algorithm;119
9.1.5;5. Inheritance Terminology;120
9.1.6;6. Upward-Inductive Inheritance;121
9.1.7;7. Downward Inheritance Algorithm;122
9.1.8;8. Upward-Inductive Inheritance Algorithm;124
9.1.9;9. Experimental Results;125
9.1.10;10. Conclusions;128
9.1.11;Acknowledgement;128
9.1.12;References;129
9.2;Chapter 6. Providing Computationally Effective Knowledge Representation via Massive Parallelism;132
9.2.1;1. Introduction;132
9.2.2;2. Description of PARKA;134
9.2.3;3. Performance;139
9.2.4;4. Future & Related Work;148
9.2.5;5. Conclusion;149
9.2.6;6. Acknowledgments;150
9.2.7;References;150
10;PART III: PRODUCTION SYSTEMS;154
10.1;Chapter 7. Speeding Up Production Systems: From Concurrent Matching to Parallel Rule Firing;156
10.1.1;1. Introduction;156
10.1.2;2. A Generic Production System Architecture;158
10.1.3;3. State-Saving Algorithms;160
10.1.4;4. Parallel Execution of Rete;164
10.1.5;5. Compile Time Optimization of Rete;168
10.1.6;6. Parallel Rule Firing;168
10.1.7;7. Discussion;173
10.1.8;References;174
10.2;Chapter 8. Guaranteeing Serializability in Parallel Production Systems;178
10.2.1;1. Execution Models for Production Systems;179
10.2.2;2. The Serialization Problem;184
10.2.3;3. Ishida and Stolfo's Work;186
10.2.4;4. Definitions and Tests;188
10.2.5;5. Solution to the Serialization Problem;194
10.2.6;6. Algorithms to Guarantee Serializability;199
10.2.7;7. Performance Analysis;205
10.2.8;8. Related Work;215
10.2.9;9. Conclusions;218
10.2.10;10. Acknowledgments;219
10.2.11;References;219
11;PART IV: MECHANIZATION OF LOGIC;224
11.1;Chapter 9. Parallel Automated Theorem Proving;226
11.1.1;Abstract;226
11.1.2;1. Introduction;226
11.1.3;2. Classification of Parallelization Approaches;228
11.1.4;3. Partitioning-based Parallel Theorem Provers;233
11.1.5;4. Competition-based Parallel Theorem Provers;255
11.1.6;5. Summary;264
11.1.7;Appendix;267
11.1.8;References;268
11.2;Chapter 10. Massive Parallelism in Inference Systems;276
11.2.1;1. Parallelism in Logic;276
11.2.2;2. Massive Parallelism;279
11.2.3;3. The Potential of Massive Parallelism for Logic;282
11.2.4;4. CHCL: A Connectionist Inference System;285
11.2.5;References;288
11.3;Chapter 11. Representing Propositional Logic and Searching for Satisfiability in Connectionist Networks;296
11.3.1;1. Introduction;296
11.3.2;2. The energy paradigm;298
11.3.3;3. Propositional Logic and Energy Functions;302
11.3.4;4. Experimental Results;307
11.3.5;5. Discussion;311
11.3.6;Acknowledgment;315
11.3.7;References;316
12;PART V: CONSTRAINT SATISFACTION;320
12.1;Chapter 12. Parallel and Distributed Finite Constraint Satisfaction: Complexity, Algorithms and Experiments;322
12.1.1;1. Introduction;322
12.1.2;2. Properties of Constraint Networks;325
12.1.3;3. A Parallel Algorithm and Complexity;332
12.1.4;4. A Distributed Algorithm and Complexity;338
12.1.5;5. A Coarse-Grain Distributed Algorithm;341
12.1.6;6. Experimental Results;346
12.1.7;7. Conclusions;349
12.1.8;Acknowledgements;349
12.1.9;References;349
12.2;Chapter 13. Parallel Algorithms and Architectures for Consistent Labeling;352
12.2.1;1. Introduction;352
12.2.2;2. Consistent Labeling;353
12.2.3;3. Previous Designs;355
12.2.4;4. Implementations on Special Purpose Architectures;357
12.2.5;5. Implementations on General Purpose Parallel Architectures;372
12.2.6;6. Conclusion;377
12.2.7;Acknowledgement;377
12.2.8;References;377
13;PART VI: OTHER TOPICS;380
13.1;Chapter 14. Massively Parallel Parsing Algorithms for Natural Language;382
13.1.1;1. Introduction;382
13.1.2;2. Tree Adjoining Grammar;387
13.1.3;3. The Connection Machine Model CM-2;396
13.1.4;4. Parsing Sparse TAGs: Parallel Algorithm I;399
13.1.5;5. Parsing Sparse TAGs: Parallel Algorithm II;404
13.1.6;6. Parallel Algorithms for Parsing Dense TAGs;409
13.1.7;7. Conclusions and Future Work;417
13.1.8;8. Appendix;419
13.1.9;References;422
13.2;Chapter 15. Process Trellis and FGP: Software Architectures for Data Filtering and Mining;426
13.2.1;1. Introduction;426
13.2.2;2. Linda and the Master/Worker model;427
13.2.3;3. The FGP Machine;429
13.2.4;4. The Process Trellis;434
13.2.5;5. Combining the Trellis and FGP programs for Real-Time Data Management;439
13.2.6;6. An Integrated Program for Network Monitoring;441
13.2.7;7. Conclusions;443
13.2.8;References;443


