E-Book, English, 400 pages, Web PDF

Prieditis / Russell Machine Learning Proceedings 1995

Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, July 9-12, 1995
1st edition, 2014
ISBN: 978-1-4832-9866-5
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: 1 - PDF Watermark


Further Information & Material


1;Front Cover;1
2;Machine Learning;2
3;Copyright Page;3
4;Table of Contents;4
5;Preface;10
6;Advisory Committee;11
7;Program Committee;11
8;Auxiliary Reviewers;12
9;Workshops;12
10;Tutorials;12
11;PART 1: CONTRIBUTED PAPERS;16
11.1;Chapter 1. On-line Learning of Binary Lexical Relations Using Two-dimensional Weighted Majority Algorithms;18
11.1.1;ABSTRACT;18
11.1.2;1 Introduction;18
11.1.3;2 On-line Learning Model for Binary Relations;20
11.1.4;3 Two-dimensional Weighted Majority Prediction Algorithms;20
11.1.5;4 Experimental Results;21
11.1.6;5 Theoretical Performance Analysis;23
11.1.7;6 Concluding Remarks;26
11.1.8;Acknowledgement;26
11.1.9;References;26
11.2;Chapter 2. On Handling Tree-Structured Attributes in Decision Tree Learning;27
11.2.1;Abstract;27
11.2.2;1 Introduction;27
11.2.3;2 Decision Trees With Tree-Structured Attributes;28
11.2.4;3 Pre-processing Approaches;29
11.2.5;4 A Direct Approach;30
11.2.6;5 Analytical Comparison;31
11.2.7;6 Experimental Comparison;33
11.2.8;7 Summary and Conclusion;34
11.2.9;Acknowledgement;35
11.2.10;References;35
11.3;Chapter 3. Theory and Applications of Agnostic PAC-Learning with Small Decision Trees;36
11.3.1;Abstract;36
11.3.2;1 INTRODUCTION;36
11.3.3;2 THE AGNOSTIC PAC-LEARNING ALGORITHM T2;38
11.3.4;3 EVALUATION OF T2 ON "REAL-WORLD" CLASSIFICATION PROBLEMS;40
11.3.5;4 LEARNING CURVES FOR DECISION TREES OF SMALL DEPTH;42
11.3.6;5 CONCLUSION;43
11.3.7;Acknowledgement;43
11.3.8;References;44
11.4;Chapter 4. Residual Algorithms: Reinforcement Learning with Function Approximation;45
11.4.1;ABSTRACT;45
11.4.2;1 INTRODUCTION;45
11.4.3;2 ALGORITHMS FOR LOOKUP TABLES;46
11.4.4;3 DIRECT ALGORITHMS;46
11.4.5;4 RESIDUAL GRADIENT ALGORITHMS;47
11.4.6;5 RESIDUAL ALGORITHMS;48
11.4.7;6 STOCHASTIC MDPS AND MODELS;50
11.4.8;7 MDPS WITH MULTIPLE ACTIONS;50
11.4.9;8 RESIDUAL ALGORITHM SUMMARY;50
11.4.10;9 SIMULATION RESULTS;51
11.4.11;10 CONCLUSIONS;52
11.4.12;Acknowledgments;52
11.4.13;References;52
11.5;Chapter 5. Removing the Genetics from the Standard Genetic Algorithm;53
11.5.1;Abstract;53
11.5.2;1. THE GENETIC ALGORITHM (GA);53
11.5.3;2. FOUR PEAKS: A PROBLEM DESIGNED TO BE GA-FRIENDLY;54
11.5.4;3. SELECTING THE GA'S PARAMETERS;55
11.5.5;4. POPULATION-BASED INCREMENTAL LEARNING;56
11.5.6;5. EMPIRICAL ANALYSIS ON THE FOUR PEAKS PROBLEM;57
11.5.7;6. DISCUSSION;59
11.5.8;7. CONCLUSIONS;60
11.5.9;ACKNOWLEDGEMENTS;60
11.5.10;REFERENCES;60
11.6;Chapter 6. Inductive Learning of Reactive Action Models;62
11.6.1;Abstract;62
11.6.2;1 INTRODUCTION;62
11.6.3;2 CONTEXT OF THE LEARNER;62
11.6.4;3 ACTIONS AND TELEO-OPERATORS;63
11.6.5;4 COLLECTING INSTANCES FOR LEARNING;64
11.6.6;5 THE INDUCTIVE LOGIC PROGRAMMING ALGORITHM;65
11.6.7;6 EVALUATION;66
11.6.8;7 RELATED WORK;67
11.6.9;8 FUTURE WORK;68
11.6.10;Acknowledgements;68
11.6.11;References;68
11.7;Chapter 7. Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network;70
11.7.1;Abstract;70
11.7.2;1 INTRODUCTION;70
11.7.3;2 INCREMENTAL GRID GROWING;71
11.7.4;3 COMPARISON USING MINIMUM SPANNING TREE DATA;73
11.7.5;4 DEMONSTRATION USING REAL-WORLD SEMANTIC DATA;73
11.7.6;5 DISCUSSION AND FUTURE WORK;75
11.7.7;6 CONCLUSION;77
11.7.8;References;77
11.8;Chapter 8. Empirical support for Winnow and Weighted-Majority based algorithms: results on a calendar scheduling domain;79
11.8.1;Abstract;79
11.8.2;1 Introduction;79
11.8.3;2 The learning problem;80
11.8.4;3 Description of the algorithms;80
11.8.5;4 Experimental results;82
11.8.6;5 Theoretical results;85
11.8.7;Acknowledgements;87
11.8.8;References;87
11.8.9;Appendix;87
11.9;Chapter 9. Automatic Selection of Split Criterion during Tree Growing Based on Node Location;88
11.9.1;Abstract;88
11.9.2;1 DECISION TREE CONSTRUCTION;88
11.9.3;2 SITUATIONS IN WHICH ACCURACY IS THE BEST SPLIT CRITERION;89
11.9.4;3 IMPLICATIONS FOR TREE-GROWING ALGORITHMS;90
11.9.5;4 EMPIRICAL SUPPORT OF THE HYPOTHESIS;90
11.9.6;5 FUTURE DIRECTIONS;94
11.9.7;References;94
11.10;Chapter 10. A Lexically Based Semantic Bias for Theory Revision;96
11.10.1;Abstract;96
11.10.2;1 INTRODUCTION;96
11.10.3;2 BACKGROUND;97
11.10.4;3 CLARUS;97
11.10.5;4 RESULTS;100
11.10.6;5 Discussion;103
11.10.7;6 CONCLUSION;104
11.10.8;Acknowledgments;104
11.10.9;References;104
11.11;Chapter 11. A Comparative Evaluation of Voting and Meta-learning on Partitioned Data;105
11.11.1;Abstract;105
11.11.2;1 Introduction;105
11.11.3;2 Common Voting and Statistical Techniques;105
11.11.4;3 Meta-learning Techniques;106
11.11.5;4 Experiments and Results;107
11.11.6;5 Arbiter Tree;110
11.11.7;6 Discussion;112
11.11.8;7 Concluding Remarks;112
11.11.9;References;113
11.12;Chapter 12. Fast and Efficient Reinforcement Learning with Truncated Temporal Differences;114
11.12.1;Abstract;114
11.12.2;1 INTRODUCTION;114
11.12.3;2 TD-BASED ALGORITHMS;115
11.12.4;3 TRUNCATED TEMPORAL DIFFERENCES;116
11.12.5;4 EXPERIMENTAL STUDIES;120
11.12.6;5 CONCLUSION;120
11.12.7;Acknowledgements;122
11.12.8;References;122
11.13;Chapter 13. K*: An Instance-based Learner Using an Entropic Distance Measure;123
11.13.1;Abstract;123
11.13.2;1 INTRODUCTION;123
11.13.3;2 ENTROPY AS A DISTANCE MEASURE;124
11.13.4;3 K* ALGORITHM;127
11.13.5;4 RESULTS;128
11.13.6;5 CONCLUSIONS;129
11.13.7;Acknowledgments;129
11.13.8;References;129
11.14;Chapter 14. Fast Effective Rule Induction;130
11.14.1;Abstract;130
11.14.2;1 INTRODUCTION;130
11.14.3;2 PREVIOUS WORK;130
11.14.4;3 EXPERIMENTS WITH IREP;132
11.14.5;4 IMPROVEMENTS TO IREP;134
11.14.6;5 CONCLUSIONS;137
11.14.7;References;138
11.15;Chapter 15. Text Categorization and Relational Learning;139
11.15.1;Abstract;139
11.15.2;1 INTRODUCTION;139
11.15.3;2 TEXT CATEGORIZATION;139
11.15.4;3 AN EXPERIMENTAL TESTBED;140
11.15.5;4 THE LEARNING METHOD;140
11.15.6;5 EVALUATING THE RELATIONAL ENCODING;141
11.15.7;6 RELATION SELECTION;143
11.15.8;7 MONOTONICITY CONSTRAINTS;144
11.15.9;8 COMPARISON TO OTHER METHODS;145
11.15.10;9 CONCLUSIONS;146
11.15.11;Acknowledgements;146
11.15.12;References;147
11.16;Chapter 16. Protein Folding: Symbolic Refinement Competes with Neural Networks;148
11.16.1;Abstract;148
11.16.2;1 INTRODUCTION;148
11.16.3;2 THE PROTEIN FOLDING DOMAIN;148
11.16.4;3 RELATED WORK;150
11.16.5;4 KRUST'S SYMBOLIC REFINEMENT;151
11.16.6;5 EXPERIMENTAL RESULTS;153
11.16.7;6 SUMMARY;155
11.16.8;References;156
11.17;Chapter 17. A Bayesian Analysis of Algorithms for Learning Finite Functions;157
11.17.1;Abstract;157
11.17.2;1 Introduction;157
11.17.3;2 Preliminaries;158
11.17.4;3 Algorithms and priors;159
11.17.5;4 Approaches to prior and algorithm selection;161
11.17.6;5 Discussion and future work;162
11.17.7;Acknowledgements;164
11.17.8;References;164
11.18;Chapter 18. Committee-Based Sampling For Training Probabilistic Classifiers;165
11.18.1;Abstract;165
11.18.2;1 INTRODUCTION;165
11.18.3;2 BACKGROUND;166
11.18.4;3 COMMITTEE-BASED SAMPLING;167
11.18.5;4 HMMS AND PART-OF-SPEECH TAGGING;168
11.18.6;5 COMMITTEE-BASED SAMPLING FOR HMMS;168
11.18.7;6 EXPERIMENTAL RESULTS;170
11.18.8;7 CONCLUSIONS;171
11.18.9;References;171
11.19;Chapter 19. Learning Prototypical Concept Descriptions;173
11.19.1;Abstract;173
11.19.2;1 INTRODUCTION;173
11.19.3;2 LEARNING PROTOTYPICAL DESCRIPTIONS;174
11.19.4;3 EVALUATION;176
11.19.5;4 DISCUSSION AND FUTURE DIRECTIONS;180
11.19.6;Acknowledgments;181
11.19.7;References;181
11.20;Chapter 20. A Case Study of Explanation-Based Control;182
11.20.1;Abstract;182
11.20.2;1 INTRODUCTION;182
11.20.3;2 THE ACROBOT;182
11.20.4;3 THE EBC APPROACH;183
11.20.5;4 A CONTROL THEORY SOLUTION;186
11.20.6;5 THE EBC SOLUTION;186
11.20.7;6 EMPIRICAL EVALUATION;188
11.20.8;7 CONCLUSIONS;189
11.20.9;Acknowledgements;190
11.20.10;References;190
11.21;Chapter 21. Explanation-Based Learning and Reinforcement Learning: A Unified View;191
11.21.1;Abstract;191
11.21.2;1 Introduction;191
11.21.3;2 Methods;193
11.21.4;3 Experiments and Results;196
11.21.5;4 Discussion;198
11.21.6;5 Conclusion;199
11.21.7;Acknowledgements;199
11.21.8;References;199
11.22;Chapter 22. Lessons from Theory Revision Applied to Constructive Induction;200
11.22.1;Abstract;200
11.22.2;1 Introduction;200
11.22.3;2 Context and Related Work;201
11.22.4;3 Demonstrations of Related Work;202
11.22.5;4 Theory-Guided Constructive Induction;205
11.22.6;5 Experiments;206
11.22.7;6 Discussion;207
11.22.8;References;208
11.23;Chapter 23. Supervised and Unsupervised Discretization of Continuous Features;209
11.23.1;Abstract;209
11.23.2;1 Introduction;209
11.23.3;2 Related Work;210
11.23.4;3 Methods;212
11.23.5;4 Results;213
11.23.6;5 Discussion;213
11.23.7;6 Summary;216
11.23.8;References;216
11.24;Chapter 24. Bounds on the Classification Error of the Nearest Neighbor Rule;218
11.24.1;Abstract;218
11.24.2;1 INTRODUCTION;218
11.24.3;2 DEFINITIONS AND THEOREMS;219
11.24.4;3 DISCUSSION AND CONCLUSION;222
11.24.5;Acknowledgements;222
11.24.6;References;222
11.25;Chapter 25. Q-Learning for Bandit Problems;224
11.25.1;Abstract;224
11.25.2;1 INTRODUCTION;224
11.25.3;2 BANDIT PROBLEMS;225
11.25.4;3 THE GITTINS INDEX;226
11.25.5;4 RESTART-IN-STATE-i PROBLEMS AND THE GITTINS INDEX;227
11.25.6;5 ON-LINE ESTIMATION OF GITTINS INDICES VIA Q-LEARNING;228
11.25.7;6 EXAMPLES;229
11.25.8;7 CONCLUSION;231
11.25.9;Acknowledgements;232
11.25.10;References;232
11.26;Chapter 26. Distilling Reliable Information From Unreliable Theories;233
11.26.1;Abstract;233
11.26.2;1 INTRODUCTION;233
11.26.3;2 IDENTIFYING STABLE EXAMPLES;233
11.26.4;3 USING STABILITY TO ELIMINATE NOISE;236
11.26.5;4 RESULTS;237
11.26.6;5 DISCUSSION;238
11.26.7;Acknowledgements;239
11.26.8;References;239
11.27;Chapter 27. A Quantitative Study of Hypothesis Selection;241
11.27.1;Abstract;241
11.27.2;1 Introduction;241
11.27.3;2 The Hypothesis Selection Problem;242
11.27.4;3 PAO Algorithms for Hypothesis Selection;242
11.27.5;4 Trading Off Exploitation and Exploration;245
11.27.6;5 Implication to Probabilistic Hill-Climbing;247
11.27.7;6 Related Work;248
11.27.8;7 Conclusion;248
11.27.9;Acknowledgements;249
11.27.10;References;249
11.28;Chapter 28. Learning proof heuristics by adapting parameters;250
11.28.1;Abstract;250
11.28.2;1 INTRODUCTION;250
11.28.3;2 FUNDAMENTALS;251
11.28.4;3 LEARNING PARAMETERS WITH A GA;252
11.28.5;4 THE UKB-PROCEDURE;253
11.28.6;5 DESIGNING A FITNESS FUNCTION;254
11.28.7;6 EXPERIMENTAL RESULTS;256
11.28.8;7 DISCUSSION;257
11.28.9;Acknowledgements;258
11.28.10;References;258
11.29;Chapter 29. Efficient Algorithms for Finding Multi-way Splits for Decision Trees;259
11.29.1;Abstract;259
11.29.2;1 Introduction;259
11.29.3;2 Computing Multi-Split Partitions;260
11.29.4;3 Experiments;262
11.29.5;4 Conclusion;265
11.29.6;Acknowledgements;266
11.29.7;References;266
11.30;Chapter 30. Ant-Q: A Reinforcement Learning approach to the traveling salesman problem;267
11.30.1;Abstract;267
11.30.2;1 INTRODUCTION;267
11.30.3;2 THE ANT-Q FAMILY OF ALGORITHMS;267
11.30.4;3 AN EXPERIMENTAL COMPARISON OF ANT-Q ALGORITHMS;268
11.30.5;4. TWO INTERESTING PROPERTIES OF ANT-Q;271
11.30.6;5 COMPARISONS WITH OTHER HEURISTICS AND SOME RESULTS ON DIFFICULT PROBLEMS;273
11.30.7;6 CONCLUSIONS;273
11.30.8;Acknowledgements;275
11.30.9;References;275
11.31;Chapter 31. Stable Function Approximation in Dynamic Programming;276
11.31.1;Abstract;276
11.31.2;1 INTRODUCTION AND BACKGROUND;276
11.31.3;2 DEFINITIONS AND BASIC THEOREMS;277
11.31.4;3 MAIN RESULTS: DISCOUNTED PROCESSES;278
11.31.5;4 NONDISCOUNTED PROCESSES;279
11.31.6;5 CONVERGING TO WHAT;281
11.31.7;6 EXPERIMENTS: HILL-CAR THE HARD WAY;281
11.31.8;7 CONCLUSIONS AND FURTHER RESEARCH;282
11.31.9;References;282
11.32;Chapter 32. The Challenge of Revising an Impure Theory;284
11.32.1;Abstract;284
11.32.2;1 Introduction;284
11.32.3;2 Framework;285
11.32.4;3 Computational Complexity;287
11.32.5;4 Prioritizing Default Theories;289
11.32.6;5 Conclusion;290
11.32.7;References;291
11.33;Chapter 33. Symbiosis in Multimodal Concept Learning;293
11.33.1;Abstract;293
11.33.2;1 INTRODUCTION;293
11.33.3;2 NICHE TECHNIQUES;294
11.33.4;3 SYSTEM OVERVIEW;294
11.33.5;4 INDIVIDUAL AND GROUP OPERATORS;296
11.33.6;5 FITNESS FUNCTION;297
11.33.7;6 COMPARISONS TO OTHER SYSTEMS;297
11.33.8;7 RESULTS;298
11.33.9;8 CONCLUSIONS;299
11.33.10;Acknowledgements;299
11.33.11;References;300
11.34;Chapter 34. Tracking the Best Expert;301
11.34.1;Abstract;301
11.34.2;1 INTRODUCTION;301
11.34.3;2 PRELIMINARIES;303
11.34.4;3 THE ALGORITHMS;303
11.34.5;4 FIXED SHARE ANALYSIS;304
11.34.6;5 VARIABLE SHARE ANALYSIS;305
11.34.7;6 EXPERIMENTAL RESULTS;308
11.34.8;References;309
11.35;Chapter 35. Reinforcement Learning by Stochastic Hill Climbing on Discounted Reward;310
11.35.1;Abstract;310
11.35.2;1 Introduction;310
11.35.3;2 Domain;310
11.35.4;3 Difficulties of Q-learning;312
11.35.5;4 Hill Climbing for Reinforcement Learning;312
11.35.6;5 Experiments;314
11.35.7;6 Discussion;316
11.35.8;7 Conclusion;316
11.35.9;Appendix;317
11.35.10;References;318
11.36;Chapter 36. Automatic Parameter Selection by Minimizing Estimated Error;319
11.36.1;Abstract;319
11.36.2;1 Introduction;319
11.36.3;2 The Parameter Selection Problem;320
11.36.4;3 The Wrapper Method;321
11.36.5;4 Automatic Parameter Selection for C4.5;322
11.36.6;5 Experiments with C4.5-AP;322
11.36.7;6 Related Work;325
11.36.8;7 Conclusion;326
11.36.9;Acknowledgments;326
11.36.10;References;326
11.37;Chapter 37. Error-Correcting Output Coding Corrects Bias and Variance;328
11.37.1;Abstract;328
11.37.2;1 Introduction;328
11.37.3;2 Definitions and Previous Work;329
11.37.4;3 Decomposing the Error Rate into Bias and Variance Components;331
11.37.5;4 ECOC and Voting;332
11.37.6;5 ECOC Reduces Variance and Bias;334
11.37.7;6 Bias Differences are Caused by Non-Local Behavior;334
11.37.8;7 Discussion and Conclusions;335
11.37.9;Acknowledgements;336
11.37.10;References;336
11.38;Chapter 38. Learning to Make Rent-to-Buy Decisions with Systems Applications;337
11.38.1;Abstract;337
11.38.2;1 Introduction;337
11.38.3;2 Definitions and Main Analytical Results;339
11.38.4;3 Algorithm Ae;339
11.38.5;4 Analysis;340
11.38.6;5 Adaptive Disk Spindown and Rent-to-Buy;343
11.38.7;6 Experimental Results;343
11.38.8;Acknowledgements;344
11.38.9;References;344
11.39;Chapter 39. NewsWeeder: Learning to Filter Netnews;346
11.39.1;Abstract;346
11.39.2;1 INTRODUCTION;346
11.39.3;2 APPROACH;347
11.39.4;3 RESULTS;350
11.39.5;4 CONCLUSION;353
11.39.6;5 FUTURE WORK;353
11.39.7;Acknowledgments;353
11.39.8;References;353
11.40;Chapter 40. Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem of Koza's;355
11.40.1;Abstract;355
11.40.2;1 Introduction;355
11.40.3;2 Genetic Programming;355
11.40.4;3 GP vs RGAT;356
11.40.5;4 Hill Climbing;356
11.40.6;5 Interpretation and Speculation;356
11.40.7;6 References;357
11.41;Chapter 41. Case-Based Acquisition of Place Knowledge;359
11.41.1;Abstract;359
11.41.2;1. Introduction and Basic Concepts;359
11.41.3;2. The Evidence Grid Representation;360
11.41.4;3. Case-Based Recognition of Places;361
11.41.5;4. Case-Based Learning of Places;362
11.41.6;5. Experiments with Place Learning;363
11.41.7;6. Related Work on Spatial Learning;365
11.41.8;7. Directions for Future Work;366
11.41.9;Acknowledgements;367
11.41.10;References;367
11.42;Chapter 42. Comparing Several Linear-threshold Learning Algorithms on Tasks Involving Superfluous Attributes;368
11.42.1;Abstract;368
11.42.2;1 INTRODUCTION;368
11.42.3;2 THE LEARNING TASKS;369
11.42.4;3 THE ALGORITHMS;369
11.42.5;4 DESCRIPTION OF THE PLOTS;371
11.42.6;5 CHECKING PROCEDURES;371
11.42.7;6 OBSERVATIONS;375
11.42.8;7 CONCLUSION;376
11.43;Chapter 43. Learning policies for partially observable environments: Scaling up;377
11.43.1;Abstract;377
11.43.2;1 INTRODUCTION;377
11.43.3;2 PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES;378
11.43.4;3 SOME SOLUTION METHODS FOR POMDP's;379
11.43.5;4 HANDLING LARGER POMDP's: A HYBRID APPROACH;381
11.43.6;5 MORE ADVANCED REPRESENTATIONS;383
11.43.7;References;384
11.44;Chapter 44. Increasing the performance and consistency of classification trees by using the accuracy criterion at the leaves;386
11.44.1;Abstract;386
11.44.2;1 Introduction and Outline;386
11.44.3;2 Comparison of accuracy characteristics of split criteria;387
11.44.4;3 Revised Tree Growing Strategy;388
11.44.5;4 Empirical Results with revised strategy;389
11.44.6;Acknowledgements;390
11.44.7;References;390
11.45;Chapter 45. Efficient Learning with Virtual Threshold Gates;393
11.45.1;Abstract;393
11.45.2;1 Introduction;393
11.45.3;2 Preliminaries;395
11.45.4;3 The Winnow algorithms;395
11.45.5;4 Efficient On-line Learning of Simple Geometrical Objects When Dimension is Variable;396
11.45.6;5 Efficient On-line Learning of Simple Geometrical Objects When Dimension is Fixed;399
11.45.7;6 Conclusions;399
11.45.8;Acknowledgements;400
11.45.9;References;400
11.46;Chapter 46. Instance-Based Utile Distinctions for Reinforcement Learning with Hidden State;402
11.46.1;Abstract;402
11.46.2;1 INTRODUCTION;402
11.46.3;2 UTILE SUFFIX MEMORY;404
11.46.4;3 DETAILS OF THE ALGORITHM;404
11.46.5;4 EXPERIMENTAL RESULTS;406
11.46.6;5 RELATED WORK;409
11.46.7;6 DISCUSSION;409
11.46.8;Acknowledgments;410
11.46.9;References;410
11.47;Chapter 47. Efficient Learning from Delayed Rewards through Symbiotic Evolution;411
11.47.1;Abstract;411
11.47.2;1 Introduction;411
11.47.3;2 Neuro-Evolution;412
11.47.4;3 Symbiotic Evolution;412
11.47.5;4 The SANE Method;412
11.47.6;5 The Inverted Pendulum Problem;413
11.47.7;6 Population Dynamics in SANE;417
11.47.8;7 Related Work;417
11.47.9;8 Extending SANE;418
11.47.10;9 Conclusion;418
11.47.11;Acknowledgments;418
11.47.12;References;418
11.48;Chapter 48. Free to Choose: Investigating the Sample Complexity of Active Learning of Real Valued Functions;420
11.48.1;Abstract;420
11.48.2;1 INTRODUCTION;420
11.48.3;2 MODEL AND PRELIMINARIES;421
11.48.4;3 COLLECTING EXAMPLES: SAMPLING STRATEGIES;421
11.48.5;4 EXAMPLE 1: MONOTONIC FUNCTIONS;422
11.48.6;5 EXAMPLE 2: A CLASS WITH BOUNDED FIRST DERIVATIVE;424
11.48.7;6 CONCLUSIONS AND EXTENSIONS;426
11.48.8;Acknowledgements;426
11.48.9;References;426
11.49;Chapter 49. On learning Decision Committees;428
11.49.1;Abstract;428
11.49.2;1 Introduction;428
11.49.3;2 Definitions and theoretical results;429
11.49.4;3 Learning by DC{-i,0,i}: the IDC algorithm;430
11.49.5;4 Experiments;432
11.49.6;5 Discussion;433
11.49.7;References;434
11.50;Chapter 50. Inferring Reduced Ordered Decision Graphs of Minimum Description Length;436
11.50.1;Abstract;436
11.50.2;1 INTRODUCTION;436
11.50.3;2 DECISION TREES AND DECISION GRAPHS;436
11.50.4;3 MANIPULATING DISCRETE FUNCTIONS USING RODGS;437
11.50.5;4 MINIMUM MESSAGE LENGTH AND ENCODING OF RODGS;438
11.50.6;5 DERIVING AN RODG OF MINIMAL COMPLEXITY;439
11.50.7;6 EXPERIMENTS;442
11.50.8;7 CONCLUSIONS AND FUTURE WORK;444
11.50.9;References;444
11.51;Chapter 51. On Pruning and Averaging Decision Trees;445
11.51.1;Abstract;445
11.51.2;1 INTRODUCTION;445
11.51.3;2. OPTIMAL PRUNING;445
11.51.4;3 TREE AVERAGING;445
11.51.5;4 WEIGHTS FOR DECISION TREES;447
11.51.6;5 COMPLEXITY OF FANNING;448
11.51.7;6 COMPARISON OF AVERAGING AND PRUNING;449
11.51.8;7 DISCUSSION;450
11.51.9;8 FANNING OVER GRAPHS AND PRODUCTION RULES;451
11.51.10;9 CONCLUSION;451
11.51.11;References;452
11.52;Chapter 52. Efficient Memory-Based Dynamic Programming;453
11.52.1;Abstract;453
11.52.2;1 INTRODUCTION;453
11.52.3;2 MEMORY-BASED APPROACH;454
11.52.4;3 EXPERIMENTAL DEMONSTRATION;457
11.52.5;4 DISCUSSION;459
11.52.6;5 CONCLUSION;460
11.52.7;Acknowledgements;460
11.52.8;References;460
11.53;Chapter 53. Using Multidimensional Projection to Find Relations;462
11.53.1;Abstract;462
11.53.2;1 MOTIVATION;462
11.53.3;2 BASIC NOTIONS: RELATION AND PROJECTION;463
11.53.4;3 MULTIDIMENSIONAL RELATIONAL PROJECTION;463
11.53.5;4 A PROTOTYPE IMPLEMENTATION: MRP;464
11.53.6;5 EXPERIMENTAL RESULTS;466
11.53.7;6 RELATED RESEARCH;469
11.53.8;7 CONCLUSIONS;469
11.53.9;Acknowledgements;470
11.53.10;References;470
11.54;Chapter 54. Compression-Based Discretization of Continuous Attributes;471
11.54.1;Abstract;471
11.54.2;1 INTRODUCTION;471
11.54.3;2 AN MDL MEASURE FOR DISCRETIZED ATTRIBUTES;472
11.54.4;3 ALGORITHMIC USAGE;473
11.54.5;4 EXPERIMENTS AND EMPIRICAL RESULTS;474
11.54.6;5 CONCLUSIONS AND FURTHER RESEARCH;477
11.54.7;Acknowledgements;478
11.54.8;References;478
11.55;Chapter 55. MDL and Categorical Theories (Continued);479
11.55.1;Abstract;479
11.55.2;1 INTRODUCTION;479
11.55.3;2 CLASS DESCRIPTION THEORIES AND MDL;480
11.55.4;3 AN ANOMALY AND A PREVIOUS SOLUTION;481
11.55.5;4 A NEW SOLUTION;481
11.55.6;5 APPLYING THE SCHEME TO C4.5RULES;482
11.55.7;6 RELATED RESEARCH;483
11.55.8;7 CONCLUSION;484
11.55.9;References;484
11.56;Chapter 56. For Every Generalization Action, Is There Really an Equal and Opposite Reaction? Analysis of the Conservation Law for Generalization Performance;486
11.56.1;Abstract;486
11.56.2;1 INTRODUCTION;486
11.56.3;2 CONSERVATION LAWREVISITED;486
11.56.4;3 AN ALTERNATE MEASURE OF GENERALIZATION;489
11.56.5;4 DISCUSSION;492
11.56.6;Acknowledgments;493
11.56.7;References;493
11.57;Chapter 57. Active Exploration and Learning in Real-Valued Spaces using Multi-Armed Bandit Allocation Indices;495
11.57.1;Abstract;495
11.57.2;1 Introduction and Motivation;495
11.57.3;2 Combining Classification Tree Algorithms with Gittins Indices;498
11.57.4;3 The Grasping Task;499
11.57.5;4 Discussion;500
11.57.6;5 Conclusion;501
11.57.7;Acknowledgments;501
11.57.8;References;502
11.58;Chapter 58. Discovering Solutions with Low Kolmogorov Complexity and High Generalization Capability;503
11.58.1;Abstract;503
11.58.2;1 INTRODUCTION;503
11.58.3;2 BASIC CONCEPTS;504
11.58.4;3 PROBABILISTIC SEARCH;505
11.58.5;4 "SIMPLE" NEURAL NETS;507
11.58.6;5 INCREMENTAL LEARNING;509
11.58.7;6 ACKNOWLEDGEMENTS;511
11.58.8;References;511
11.59;Chapter 59. A Comparison of Induction Algorithms for Selective and non-Selective Bayesian Classifiers;512
11.59.1;Abstract;512
11.59.2;1 INTRODUCTION;512
11.59.3;2 NAIVE BAYESIAN CLASSIFIERS;513
11.59.4;3 BAYESIAN NETWORK CLASSIFIERS;513
11.59.5;5 DISCUSSION;516
11.59.6;6 RELATED WORK;518
11.59.7;7 CONCLUSION;519
11.59.8;Acknowledgement;520
11.59.9;References;520
11.60;Chapter 60. Retrofitting Decision Tree Classifiers Using Kernel Density Estimation;521
11.60.1;Abstract;521
11.60.2;1. INTRODUCTION;521
11.60.3;2 A REVIEW OF KERNEL DENSITY ESTIMATION;522
11.60.4;3 CLASSIFICATION WITH KERNEL DENSITY ESTIMATES;523
11.60.5;4 DECISION TREE DENSITY ESTIMATORS;523
11.60.6;5 DETAILS ON DECISION TREE DENSITY ESTIMATORS;524
11.60.7;6 EXPERIMENTAL RESULTS;524
11.60.8;7 RELATED WORK, EXTENSIONS, AND DISCUSSION;527
11.60.9;8 CONCLUSION;528
11.61;Chapter 61. Automatic Speaker Recognition: An Application of Machine Learning;530
11.61.1;Abstract;530
11.61.2;1 INTRODUCTION;530
11.61.3;2 PREPROCESSING;531
11.61.4;3 SPEAKER CLASSIFICATION;532
11.61.5;4 EXPERIMENTAL RESULTS;533
11.61.6;5 CONCLUSION;536
11.61.7;Acknowledgments;536
11.61.8;References;536
11.62;Chapter 62. An Inductive Learning Approach to Prognostic Prediction;537
11.62.1;Abstract;537
11.62.2;1 INTRODUCTION;537
11.62.3;2 RECURRENCE SURFACE APPROXIMATION;538
11.62.4;3 CLINICAL APPLICATION;542
11.62.5;4 CONCLUSIONS AND FUTURE WORK;544
11.63;Chapter 63. TD Models: Modeling the World at a Mixture of Time Scales;546
11.63.1;Abstract;546
11.63.2;1 Multi-Scale Planning and Modeling;546
11.63.3;2 Reinforcement Learning;547
11.63.4;3 The Prediction Problem;547
11.63.5;4 A Generalized Bellman Equation;548
11.63.6;5 n-Step Models;548
11.63.7;6 Intermixing Time Scales;548
11.63.8;7 β-Models;549
11.63.9;8 Theoretical Results;550
11.63.10;9 TD(λ) Learning of β-models;551
11.63.11;10 A Wall-Following Example;551
11.63.12;11 A Hidden-State Example;552
11.63.13;12 Adding Actions (Future Work);553
11.63.14;13 Conclusions;553
11.63.15;Acknowledgments;554
11.63.16;References;554
11.64;Chapter 64. Learning Collection Fusion Strategies for Information Retrieval;555
11.64.1;Abstract;555
11.64.2;1 INTRODUCTION;555
11.64.3;2 UNDERPINNINGS;556
11.64.4;3 LEARNING COLLECTION FUSION STRATEGIES;558
11.64.5;4 EXPERIMENTS;561
11.64.6;5 DISCUSSION AND CONCLUSIONS;562
11.64.7;References;563
11.65;Chapter 65. Learning by Observation and Practice: An Incremental Approach for Planning Operator Acquisition;564
11.65.1;Abstract;564
11.65.2;1 Introduction;564
11.65.3;2 Learning architecture overview;565
11.65.4;3 Issues of learning planning operators;565
11.65.5;4 Learning algorithm descriptions;567
11.65.6;5 Empirical results and analysis;570
11.65.7;Acknowledgements;571
11.65.8;References;572
11.66;Chapter 66. Learning with Rare Cases and Small Disjuncts;573
11.66.1;Abstract;573
11.66.2;1. INTRODUCTION;573
11.66.3;2. BACKGROUND;573
11.66.4;3. WHY ARE SMALL DISJUNCTS SO ERROR PRONE?;574
11.66.5;4. THE PROBLEM DOMAINS;574
11.66.6;5. THE EXPERIMENTS;575
11.66.7;6. RESULTS AND DISCUSSION;576
11.66.8;7. FUTURE RESEARCH;579
11.66.9;8. CONCLUSION;579
11.66.10;Acknowledgements;580
11.66.11;References;580
11.67;Chapter 67. Horizontal Generalization;581
11.67.1;Abstract;581
11.67.2;1 INTRODUCTION;581
11.67.3;2 FAN GENERALIZERS;582
11.67.4;3 COMPUTER EXPERIMENTS;582
11.67.5;4 GENERAL COMMENTS ON FG's;589
11.67.6;Acknowledgements;589
11.67.7;References;589
11.68;Chapter 68. Learning Hierarchies from Ambiguous Natural Language Data;590
11.68.1;Abstract;590
11.68.2;1 Introduction;590
11.68.3;2 Background;591
11.68.4;3 Learning Translation Rules with FOCL;591
11.68.5;4 Learning a Semantic Hierarchy from scratch;593
11.68.6;5 Updating an existing hierarchy;594
11.68.7;7 Limitation;597
11.68.8;8 Related Work;597
11.68.9;9 Conclusion;597
11.68.10;Acknowledgement;598
11.68.11;References;598
12;PART 2: INVITED TALKS;600
12.1;Chapter 69. Machine Learning and Information Retrieval;602
12.2;Chapter 70. Learning With Bayesian Networks;603
12.2.1;References;603
12.3;Chapter 71. Learning for Automotive Collision Avoidance and Autonomous Control;604
13;Author Index;606


