Laird | Machine Learning Proceedings 1988 | E-Book | www.sack.de

E-book, English, 467 pages, Web PDF

Laird Machine Learning Proceedings 1988


1st edition, 2014
ISBN: 978-1-4832-9769-9
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark




Machine Learning Proceedings 1988




Further Info & Material


1;Front Cover;1
2;Proceedings of the Fifth International Conference on Machine Learning;2
3;Copyright Page;3
4;Table of Contents;4
5;PREFACE;8
6;Part 1: Empirical Learning;10
6.1;Chapter 1. Using a Generalization Hierarchy to Learn from Examples;10
6.1.1;Abstract;10
6.1.2;1. Introduction;10
6.1.3;2. Approach;11
6.1.4;3. The Generalization Hierarchy;12
6.1.5;4. Using the Generalization Hierarchy;13
6.1.6;5. Results;15
6.1.7;6. Future Work;15
6.1.8;7. Summary;16
6.1.9;Acknowledgements;16
6.1.10;References;16
6.2;Chapter 2. Tuning Rule-Based Systems to Their Environments;17
6.2.1;Abstract;17
6.2.2;1. The Meaning of Symbols;17
6.2.3;2. Related Work;18
6.2.4;3. The Application Domain;18
6.2.5;4. The Critic;19
6.2.6;5. Learning Mechanisms;20
6.2.7;6. Results;21
6.2.8;7. Summary;22
6.2.9;Acknowledgements;23
6.2.10;References;23
6.3;Chapter 3. ON ASKING THE RIGHT QUESTIONS;24
6.3.1;Abstract;24
6.3.2;0. Introduction;24
6.3.3;1. Picking questions to ask;24
6.3.4;2. The method of conservative selection;25
6.3.5;3. Misleading questions: the problem with tangled hierarchies;27
6.3.6;4. Asking better questions;28
6.3.7;5. Learning two-leggedness: an example of ALVIN;28
6.3.8;6. Conclusion;30
6.3.9;Acknowledgements;30
6.3.10;References;30
6.4;Chapter 4. Concept Simplification and Prediction Accuracy;31
6.4.1;Abstract;31
6.4.2;1. Concept Learning, Simplification, and Independence;31
6.4.3;2. Simplification and Accuracy Using ID3;31
6.4.4;3. Simplification and Conceptual Clustering;34
6.4.5;4. Concluding Remarks;35
6.4.6;Acknowledgements;37
6.4.7;References;37
6.5;Chapter 5. Learning Graph Models of Shape;38
6.5.1;Abstract;38
6.5.2;1. Introduction;38
6.5.3;2. Input representation: set of local features;39
6.5.4;3. Constructive learning of shape descriptors;39
6.5.5;4. Structural representation of shape;39
6.5.6;5. H-graph matching;40
6.5.7;6. Learning h-graph models;42
6.5.8;7. Results;42
6.5.9;8. Conclusion;43
6.5.10;References;44
6.6;Chapter 6. Learning Categorical Decision Criteria in Biomedical Domains;45
6.6.1;Abstract;45
6.6.2;1. Introduction;45
6.6.3;2. Criteria-based Knowledge Representation;45
6.6.4;3. The Biases of Criteria Tables;46
6.6.5;4. The CRiteria Learning System (CRLS);48
6.6.6;5. Evaluation of CRLS;51
6.6.7;6. Conclusion;54
6.6.8;Acknowledgements;54
6.6.9;References;54
6.7;Chapter 7. Conceptual Clumping of Binary Vectors with Occam's Razor;56
6.7.1;Abstract;56
6.7.2;1. Introduction;56
6.7.3;2. Cluster configuration cost;57
6.7.4;3. Finding clusters by minimizing the configuration cost;58
6.7.5;4. Concluding remarks;61
6.7.6;References;61
6.8;Chapter 8. AutoClass: A Bayesian Classification System;63
6.8.1;Abstract;63
6.8.2;1 Introduction;63
6.8.3;2 Overview of Bayesian Classification;64
6.8.4;3 The AutoClass II Program;67
6.8.5;4 Extensions to the Model;70
6.8.6;5 Results;71
6.8.7;6 Conclusion;72
6.8.8;References;72
6.9;Chapter 9. Incremental Multiple Concept Learning Using Experiments;74
6.9.1;Abstract;74
6.9.2;1. Introduction;74
6.9.3;2. Terminology;75
6.9.4;3. The primitive operations;75
6.9.5;4. Algorithm outline;78
6.9.6;5. Efficacy of the experimentation;79
6.9.7;6. Summary and Criticisms;80
6.9.8;Acknowledgments;81
6.9.9;References;81
6.10;Chapter 10. Trading Off Simplicity and Coverage in Incremental Concept Learning;82
6.10.1;Abstract;82
6.10.2;1. Introduction;82
6.10.3;2. HILLARY: An Incremental Hill-Climbing System;83
6.10.4;3. Experimental Results;86
6.10.5;4. Conclusion;87
6.10.6;Acknowledgements;88
6.10.7;References;88
6.11;Chapter 11. Deferred Commitment in UNIMEM: Waiting to Learn;89
6.11.1;Abstract;89
6.11.2;1 Introduction;89
6.11.3;2 The basic UNIMEM concept formation algorithm;90
6.11.4;3 An order-related problem in more detail;90
6.11.5;4 Deferred commitment learning;92
6.11.6;5 Results from Deferred Commitment UNIMEM;93
6.11.7;6 Conclusion;95
6.11.8;Acknowledgments;95
6.11.9;References;95
6.12;Chapter 12. Experiments on the Costs and Benefits of Windowing in ID3;96
6.12.1;Abstract;96
6.12.2;1. Introduction;96
6.12.3;2. ID3 and Windowing;96
6.12.4;3. Experiments;97
6.12.5;4. Analysis;103
6.12.6;5. Conclusion;106
6.12.7;Acknowledgments;107
6.12.8;References;107
6.13;Chapter 13. Improved Decision Trees: A Generalized Version of ID3;109
6.13.1;Abstract;109
6.13.2;1 Introduction;109
6.13.3;2 The ID3 Approach;109
6.13.4;3 Problems with the ID3 Approach;110
6.13.5;4 An Alternate Approach;111
6.13.6;5 Evaluation Criteria and Test Results;112
6.13.7;6 Conclusions and Future Work;115
6.13.8;7 Acknowledgements;115
6.13.9;References;115
6.14;Chapter 14. ID5: An Incremental ID3;116
6.14.1;Abstract;116
6.14.2;1. Introduction;116
6.14.3;2. ID5;117
6.14.4;3. Analysis;123
6.14.5;4. Experiments;125
6.14.6;5. Conclusion;127
6.14.7;Acknowledgements;129
6.14.8;References;129
6.15;Chapter 15. Using Weighted Networks to Represent Classification Knowledge in Noisy Domains;130
6.15.1;Abstract;130
6.15.2;1. Introduction;130
6.15.3;2. IWN's Knowledge Representation;131
6.15.4;3. IWN's Algorithm for Building Networks;133
6.15.5;4. Experimental Results;137
6.15.6;5. Conclusion;142
6.15.7;Acknowledgements;142
6.15.8;References;143
7;Part 2: Genetic Learning;144
7.1;Chapter 16. An Empirical Comparison of Genetic and Decision-Tree Classifiers;144
7.1.1;Abstract;144
7.1.2;1. Introduction;144
7.1.3;2. The Learning Task;145
7.1.4;3. Results on F6;147
7.1.5;4. How Do The Systems Scale Up?;149
7.1.6;5. Conclusion;150
7.1.7;References;150
7.2;Chapter 17. Population Size In Classifier Systems;151
7.2.1;Abstract;151
7.2.2;1 Introduction;151
7.2.3;2 Learning Classifier Systems;152
7.2.4;3 Classifier System Empirical Studies;155
7.2.5;4 Population Size;156
7.2.6;5 Acknowledgements;159
7.2.7;References;159
7.3;Chapter 18. Representation and Hidden Bias: Gray vs. Binary Coding for Genetic Algorithms;162
7.3.1;Abstract;162
7.3.2;1. Introduction;162
7.3.3;2. Background;163
7.3.4;3. Empirical Results;165
7.3.5;4. Discussion;167
7.3.6;5. Summary;169
7.3.7;References;169
7.4;Chapter 19. Classifier Systems with Hamming Weights;171
7.4.1;Abstract;171
7.4.2;1 How Classifier Systems Match;172
7.4.3;2 The Binary Response Problem;174
7.4.4;3 Matching With Hamming Weights;176
7.4.5;4 The Binary Response Problem With Level Noise;178
7.4.6;5 Binary Responses With Varying Noise Levels;180
7.4.7;6 Conclusions;182
7.5;Chapter 20. Midgard: A Genetic Approach to Adaptive Load Balancing for Distributed Systems;183
7.5.1;Abstract;183
7.5.2;Introduction;183
7.5.3;Model Specification;184
7.5.4;An Augmented Classifier Language;184
7.5.5;Midgard;186
7.5.6;Evaluation;187
7.5.7;Discussion;189
7.5.8;References;189
8;Part 3: Connectionist Learning;190
8.1;Chapter 21. Some Interesting Properties of a Connectionist Inductive Learning System;190
8.1.1;Abstract;190
8.1.2;1. Introduction;190
8.1.3;2.0 The Architecture of the Learning System;191
8.1.4;3.0 Simulations;193
8.1.5;4.0 Summary;196
8.1.6;References;196
8.2;Chapter 22. Competitive Reinforcement Learning;197
8.2.1;Abstract;197
8.2.2;1 Introduction;197
8.2.3;2 The Competitive Reinforcement Algorithm;199
8.2.4;3 Demonstration;202
8.2.5;4 Informal Analysis;205
8.2.6;5 Discussion;206
8.2.7;6 Acknowledgements;207
8.2.8;References;207
8.3;Chapter 23. Connectionist Learning of Expert Backgammon Evaluations;209
8.3.1;ABSTRACT;209
8.3.2;1. Introduction;209
8.3.3;2. Network and Database Set-up;210
8.3.4;3. Training and Testing Procedures;212
8.3.5;4. Results of Training;213
8.3.6;5. Discussion;214
8.3.7;Acknowledgements;215
8.3.8;References;215
8.4;Chapter 24. Building and Using Mental Models in a Sensory-Motor Domain: A Connectionist Approach;216
8.4.1;1. Introduction;216
8.4.2;2. A Robot Called MURPHY;217
8.4.3;3. How MURPHY Learns;218
8.4.4;4. What MURPHY Does;220
8.4.5;5. Discussion and Future Work;221
8.4.6;6. Acknowledgements;222
8.4.7;References;222
9;Part 4: Explanation-Based Learning;223
9.1;Chapter 25. Reasoning about Operationality for Explanation-Based Learning;223
9.1.1;Abstract;223
9.1.2;1. Introduction;223
9.1.3;2. Reasoning about Operationality;224
9.1.4;3. Generalizing Operationality;224
9.1.5;4. ROE;225
9.1.6;5. Related Work;226
9.1.7;6. Conclusion;228
9.1.8;Acknowledgements;228
9.1.9;Appendix I. PROLOG Implementation of ROE;228
9.1.10;References;229
9.2;Chapter 26. Boundaries of Operationality;230
9.2.1;Abstract;230
9.2.2;1. Introduction;230
9.2.3;2. Utility;231
9.2.4;3. The Boundary of Operationality;232
9.2.5;4. EBL and Searching Through the Generalized Concept Partial Order;235
9.2.6;5. Locality;236
9.2.7;6. Non-Locality with a Boundary of Operationality;237
9.2.8;7. Degrees of Operationality;239
9.2.9;8. Discussion and Further Work;241
9.2.10;Acknowledgements;242
9.2.11;References;242
9.3;Chapter 27. On the Tractability of Learning from Incomplete Theories;244
9.3.1;Abstract;244
9.3.2;1. Introduction;244
9.3.3;2. Determinations: A Form of Incomplete Theory;245
9.3.4;3. Overview of the Formal Learning Framework;246
9.3.5;4. Learnability Results;247
9.3.6;5. Conclusions;250
9.3.7;6. Acknowledgements;250
9.3.8;7. References;250
9.4;Chapter 28. ACTIVE EXPLANATION REDUCTION: An Approach to the Multiple Explanations Problem;251
9.4.1;Abstract;251
9.4.2;1. Introduction;251
9.4.3;2. Active Explanation Reduction;253
9.4.4;3. Representation of the Domain Theories;256
9.4.5;4. Multiple Explanations from Intractable Theories;257
9.4.6;5. Evaluation of the Active Explanation Reduction Technique;261
9.4.7;6. Related Work;262
9.4.8;References;264
9.5;Chapter 29. Generalizing Number and Learning from Multiple Examples in Explanation Based Learning;265
9.5.1;Abstract;265
9.5.2;1 Introduction;265
9.5.3;2 Problem Description;266
9.5.4;3 ADEPT;267
9.5.5;4 Combining Examples;274
9.5.6;5 Results;275
9.5.7;6 Conclusions;276
9.5.8;Acknowledgements;277
9.5.9;References;277
9.6;Chapter 30. Generalizing the Order of Operators in Macro-Operators;279
9.6.1;Abstract;279
9.6.2;1. Introduction;279
9.6.3;2. An Example;280
9.6.4;3. Overview of EGGS;280
9.6.5;4. Generating Partially-Ordered Macro-Operators;281
9.6.6;5. Relation to Nonlinear Planning;287
9.6.7;6. Another Example;288
9.6.8;7. An Example Requiring Structural Generalization;289
9.6.9;8. Conclusions and Problems for Future Research;290
9.6.10;Acknowledgements;291
9.6.11;References;291
9.7;Chapter 31. Using Experience-Based Learning in Game Playing;293
9.7.1;ABSTRACT;293
9.7.2;1. INTRODUCTION;293
9.7.3;2. SOME GENERAL DESIGN ISSUES;293
9.7.4;3. THE STRUCTURE AND CONTENT OF AN EXPERIENCE BASE;294
9.7.5;4. GINA: A CASE STUDY USING OTHELLO;295
9.7.6;5. Future Research;299
9.7.7;REFERENCES;299
10;Part 5: Integrated Explanation-Based and Empirical Learning;300
10.1;Chapter 32. Integrated Learning with Incorrect and Incomplete Theories;300
10.1.1;Abstract;300
10.1.2;1. Introduction;300
10.1.3;2. Explanation-based learning with an incorrect theory;301
10.1.4;3. Learning with an incomplete theory;304
10.1.5;4. Conclusion;306
10.1.6;Acknowledgments;306
10.1.7;References;306
10.2;Chapter 33. An Approach Based on Integrated Learning to Generating Stories from Stories;307
10.2.1;Abstract;307
10.2.2;1. Introduction;307
10.2.3;2. Integrating EBL and SBL;307
10.2.4;3. The learning problem of IVAN;308
10.2.5;4. EBL step;308
10.2.6;5. Generalization rules;310
10.2.7;6. SBL step;311
10.2.8;7. Evaluation;312
10.2.9;8. Conclusions and directions;313
10.2.10;Acknowledgements;313
10.2.11;References;313
10.3;Chapter 34. A KNOWLEDGE INTENSIVE APPROACH TO CONCEPT INDUCTION;314
10.3.1;Abstract;314
10.3.2;1 Introduction;314
10.3.3;2 A Framework for Inducing Concept Descriptions;315
10.3.4;3 Using Deduction to Drive Induction;318
10.3.5;4 An Example;321
10.3.6;5 Conclusions;324
10.3.7;References;325
11;Part 6: Case-Based Learning;327
11.1;Chapter 35. Learning to Program by Examining and Modifying Cases;327
11.1.1;Abstract;327
11.1.2;1 Introduction;327
11.1.3;2 The General Approach;328
11.1.4;3 The System Architecture;329
11.1.5;4 An Example;331
11.1.6;5 Further Work;332
11.1.7;6 Conclusions;332
11.1.8;Acknowledgements;333
11.1.9;References;333
12;Part 7: Machine Discovery;334
12.1;Chapter 36. Theory Discovery and the Hypothesis Language;334
12.1.1;Abstract;334
12.1.2;1. Introduction;334
12.1.3;2. Two Senses of Success;335
12.1.4;3. A Mathematical Framework;335
12.1.5;4. Theorems;340
12.1.6;5. Conclusion;346
12.1.7;References;346
12.2;Chapter 37. Machine Invention of First-order Predicates by Inverting Resolution;348
12.2.1;Abstract;348
12.2.2;1. Introduction;348
12.2.3;2. CIGOL sessions;349
12.2.4;3. Preliminaries;351
12.2.5;4. Inverting resolution;353
12.2.6;5. CIGOL;359
12.2.7;6. Discussion;360
12.2.8;Acknowledgements;361
12.2.9;References;361
12.3;Chapter 38. The Interdependences of Theory Formation, Revision, and Experimentation;362
12.3.1;Abstract;362
12.3.2;1. Introduction;362
12.3.3;2. An Integrated Model of Theory Development;364
12.3.4;3. An Additional Example: Understanding Osmosis;372
12.3.5;4. Evidence from the History of Science and Psychology;373
12.3.6;5. Related Work;373
12.3.7;6. Discussion;374
12.3.8;7. Acknowledgements;374
12.3.9;8. References;374
12.4;Chapter 39. A Hill-Climbing Approach to Machine Discovery;376
12.4.1;Abstract;376
12.4.2;1. Introduction;376
12.4.3;2. The REVOLVER System;376
12.4.4;3. Evaluating the System;378
12.4.5;4. Discussion;381
12.4.6;Acknowledgements;382
12.4.7;References;382
12.5;Chapter 40. REDUCTION: A PRACTICAL MECHANISM OF SEARCHING FOR REGULARITY IN DATA;383
12.5.1;Abstract;383
12.5.2;1. Introduction;383
12.5.3;2. Outlining Reduction;383
12.5.4;3. A Generate and Test Search for a Primitive Function;385
12.5.5;4. Implementing Reduction;387
12.5.6;5. Concluding Remarks;388
12.5.7;Acknowledgements;389
12.5.8;References;389
13;Part 8: Formal Models of Concept Learning;390
13.1;Chapter 41. Extending the Valiant Learning Model;390
13.1.1;Abstract;390
13.1.2;1 Introduction;390
13.1.3;2 The Valiant Model;391
13.1.4;3 Experimentation;392
13.1.5;4 Heuristic Learnability and Density;396
13.1.6;5 Conclusion;402
13.1.7;Acknowledgements;402
13.1.8;References;402
13.2;Chapter 42. LEARNING SYSTEMS OF FIRST-ORDER RULES;404
13.2.1;Abstract;404
13.2.2;1 Introduction;404
13.2.3;2 The Setting;406
13.2.4;3 The Algorithm;407
13.2.5;4 Pragmatic Issues;409
13.2.6;5 Conclusion;409
13.2.7;References;410
13.3;Chapter 43. Two New Frameworks for Learning;411
13.3.1;Abstract;411
13.3.2;1. Introduction;411
13.3.3;2. Preliminaries;412
13.3.4;3. Learning from Examples and Background Information;415
13.3.5;4. Learning as Improvement in Computational Efficiency;418
13.3.6;5. Implementation;422
13.3.7;6. Conclusion;423
13.3.8;7. Acknowledgements;423
13.3.9;8. References;423
13.4;Chapter 44. Hypothesis Filtering: A Practical Approach to Reliable Learning;425
13.4.1;Abstract;425
13.4.2;1 Introduction;425
13.4.3;2 Two Kinds of Justification;426
13.4.4;3 Reliable Learning;427
13.4.5;4 Statistical Foundations;429
13.4.6;5 The Hypothesis Filtering Method;431
13.4.7;6 Concept Learning and Hypothesis Filtering;433
13.4.8;7 Predictive Clustering and Hypothesis Filtering;434
13.4.9;8 Guaranteed Learning;435
13.4.10;9 Conclusion;437
13.4.11;Acknowledgments;437
13.4.12;References;438
14;Part 9: Experimental Results in Machine Learning;439
14.1;Chapter 45. Diffy-S: Learning Robot Operator Schemata from Examples;439
14.1.1;Abstract;439
14.1.2;1. Introduction;439
14.1.3;2. Learning Task;439
14.1.4;3. Related Work;440
14.1.5;4. Representation;441
14.1.6;5. Performance Tasks;442
14.1.7;6. Learning Algorithm;442
14.1.8;7. Evaluation;443
14.1.9;8. Future Research;444
14.1.10;9. Conclusion;444
14.1.11;Acknowledgments;445
14.1.12;References;445
14.2;Chapter 46. Experimental Results from an Evaluation of Algorithms that Learn to Control Dynamic Systems;446
14.2.1;Abstract;446
14.2.2;1. Introduction;446
14.2.3;2. The Problem Domain;447
14.2.4;3. BOXES;447
14.2.5;4. The AHC Algorithm;448
14.2.6;5. CART;449
14.2.7;6. Combining Reinforcement Learning with Induction;449
14.2.8;7. Comparison of Methods;451
14.2.9;Acknowledgements;452
14.2.10;References;452
14.3;Chapter 47. Utilizing Experience for Improving the Tactical Manager;453
14.3.1;Abstract;453
14.3.2;The Learning Task;453
14.3.3;The Performance Element: Tactical Manager and Simulation;454
14.3.4;Learning Part 1: Accumulating Experience;455
14.3.5;Learning Part 2: Utilizing Experience;457
14.3.6;Complexity Problems;458
14.3.7;Relation to Earlier Research;459
14.3.8;Acknowledgements;459
14.3.9;References;459
15;Part 10: Computational Impact of Learning and Forgetting;460
15.1;Chapter 48. Some Chunks Are Expensive;460
15.1.1;Abstract;460
15.1.2;1. Introduction;460
15.1.3;2. Expensive Chunks Exist;461
15.1.4;3. Soar;462
15.1.5;4. The Matcher;462
15.1.6;5. Expensive Chunks: The three contributing factors;463
15.1.7;6. Discussion;465
15.1.8;Acknowledgements;467
15.1.9;References;467
15.2;Chapter 49. The Role of Forgetting in Learning;468
15.2.1;Abstract;468
15.2.2;1. Introduction;468
15.2.3;2. The Economics of Learning;468
15.2.4;3. Learning to Search Graphs;470
15.2.5;4. Conclusions;473
15.2.6;References;474
16;INDEX;476


