
E-book, English, Volume 10, 456 pages, Web PDF

Series: Machine Intelligence and Pattern Recognition

Shachter / Kanal / Henrion Uncertainty in Artificial Intelligence 5


1st edition, 2017
ISBN: 978-1-4832-9655-5
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark




This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident: although some papers address fundamental issues, the majority address practical ones. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision-theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language understanding.


Further Information & Material


1;Front Cover;1
2;Uncertainty in Artificial Intelligence 5;4
3;Copyright Page;5
4;Table of Contents;8
5;Preface;6
6;Reviewers;12
7;Program Committee;12
8;Contributors;14
9;PART I: FUNDAMENTAL ISSUES;16
9.1;Chapter 1. Lp—A Logic for Statistical Information;18
9.1.1;1 Introduction;18
9.1.2;2 Other Probability Logics;19
9.1.3;3 Types of Statistical Knowledge;19
9.1.4;4 Syntax and Semantics;20
9.1.5;5 Syntax;21
9.1.6;6 Examples of Representation;23
9.1.7;7 Deductive Proof Theory;24
9.1.8;8 Degrees of Belief;27
9.1.9;Acknowledgments;28
9.1.10;References;28
9.2;CHAPTER 2. REPRESENTING TIME IN CAUSAL PROBABILISTIC NETWORKS;30
9.2.1;1 INTRODUCTION;30
9.2.2;2 THE DISTRIBUTION OF TIME;31
9.2.3;3 MARKED POINT PROCESS REPRESENTATION;32
9.2.4;4 NETWORKS OF "DATES";34
9.2.5;Acknowledgements;43
9.2.6;References;43
9.3;CHAPTER 3. CONSTRUCTING THE PIGNISTIC PROBABILITY FUNCTION IN A CONTEXT OF UNCERTAINTY;44
9.3.1;1. Introduction;44
9.3.2;2. The credibility function;45
9.3.3;3. a-combined credibility spaces;46
9.3.4;4. The pignistic probability function;49
9.3.5;5. Co-credibility function;52
9.3.6;6. The Moebius transformations of Cr;52
9.3.7;7. Conclusions;53
9.3.8;Bibliography;53
9.3.9;Acknowledgements;54
9.4;Chapter 4. Can Uncertainty Management Be Realized In A Finite Totally Ordered Probability Algebra?;56
9.4.1;1 Introduction;56
9.4.2;2 Finite totally ordered probability algebras;57
9.4.3;3 Bayes theorem and reasoning by case;62
9.4.4;4 Problems with legal finite totally ordered probability;63
9.4.5;5 An experiment;66
9.4.6;6 Conclusion;68
9.4.7;Acknowledgements;68
9.4.8;References;68
9.4.9;Appendix A: Derivation of;69
9.4.10;Appendix B: Examples of legal FTOPAs;70
9.4.11;Appendix C;71
10;PART II: DEFEASIBLE REASONING AND UNCERTAINTY;74
10.1;Chapter 5. Defeasible Reasoning and Uncertainty: Comments;76
10.1.1;1 Overview;76
10.1.2;2 Goldszmidt & Pearl;77
10.1.3;3 Bonissone et al;77
10.1.4;4 Loui;79
10.1.5;5 Reference Classes: What They Didn't Talk About, But Somebody Should!;80
10.1.6;Acknowledgements;80
10.1.7;References;80
10.2;Chapter 6. Uncertainty and Incompleteness: Breaking the Symmetry of Defeasible Reasoning;82
10.2.1;1 Introduction;82
10.2.2;2 Plausible Reasoning Module;87
10.2.3;3 Finding Admissible Labelings;89
10.2.4;4 Algorithms and Heuristics;94
10.2.5;5 Conclusions;96
10.2.6;References;97
10.3;Chapter 7. Deciding Consistency of Databases Containing Defeasible and Strict Information;102
10.3.1;1 Introduction;102
10.3.2;2 Notation and Preliminary Definitions;104
10.3.3;3 Probabilistic Consistency and Entailment;105
10.3.4;4 An Effective Procedure for Testing Consistency;108
10.3.5;5 Examples;109
10.3.6;6 Conclusions;111
10.3.7;Acknowledgments;111
10.3.8;References;111
10.4;CHAPTER 8. DEFEASIBLE DECISIONS: WHAT THE PROPOSAL IS AND ISN'T;114
10.4.1;1 WHAT THE PROPOSAL IS;114
10.4.2;2 WHAT THE PROPOSAL ISN'T;126
10.4.3;3 AN OPEN CONVERSATION WITH RAIFFA;130
10.5;CHAPTER 9. CONDITIONING ON DISJUNCTIVE KNOWLEDGE: SIMPSON'S PARADOX IN DEFAULT LOGIC;132
10.5.1;1. INTRODUCTION;132
10.5.2;2. DOES AN EMU OR OSTRICH RUN?;134
10.5.3;3. ARTS STUDENTS AND SCIENCE STUDENTS;135
10.5.4;4. DISCUSSION OF THE PARADOX;136
10.5.5;6. CONCLUSIONS;138
10.5.6;ACKNOWLEDGEMENTS;139
10.5.7;REFERENCES;139
11;PART III: ALGORITHMS FOR INFERENCE IN BELIEF NETS;142
11.1;Chapter 10. An Introduction to Algorithms for Inference in Belief Nets;144
11.1.1;1. Introduction;144
11.1.2;2. Qualitative, real, and interval-valued belief representations;144
11.1.3;3. Early approaches;145
11.1.4;4. Exact methods;146
11.1.5;5. Two level belief networks;147
11.1.6;6. Stochastic simulation and Monte Carlo schemes;148
11.1.7;7. Final remarks;150
11.1.8;References;151
11.2;CHAPTER 11. d-SEPARATION: FROM THEOREMS TO ALGORITHMS;154
11.2.1;1. INTRODUCTION;154
11.2.2;2. SOUNDNESS AND COMPLETENESS OF d-SEPARATION;155
11.2.3;3. THE MAIN RESULTS;157
11.2.4;ACKNOWLEDGEMENT;162
11.2.5;REFERENCES;163
11.3;Chapter 12. Interval Influence Diagrams;164
11.3.1;1 Introduction;164
11.3.2;2 Probabilistic Inference with Bounds on Probabilities;165
11.3.3;3 Interval Influence Diagrams;166
11.3.4;4 Transformations;167
11.3.5;5 Example;171
11.3.6;6 Computational Characteristics;174
11.3.7;7 Conclusions;174
11.3.8;References;175
11.4;Chapter 13. A Tractable Inference Algorithm for Diagnosing Multiple Diseases;178
11.4.1;1 Introduction;178
11.4.2;2 The QMR model;179
11.4.3;3 The Quickscore Algorithm;181
11.4.4;4 Run-Time Performance of Quickscore;184
11.4.5;5 Weaknesses of the Algorithm;184
11.4.6;6 Conclusion;185
11.4.7;7 Acknowledgments;185
11.4.8;References;185
11.5;CHAPTER 14. EVIDENCE ABSORPTION AND PROPAGATION THROUGH EVIDENCE REVERSALS;188
11.5.1;1. Introduction;188
11.5.2;2. Belief Diagrams;189
11.5.3;3. Evidence Nodes and Evidence Propagation;190
11.5.4;4. Probability Propagation;197
11.5.5;5. Control of the Evidence Process;200
11.5.6;6. Comparisons with the Pearl and the Lauritzen and Spiegelhalter Algorithms;202
11.5.7;7. Conclusions;204
11.5.8;References;205
11.6;Chapter 15. An Empirical Evaluation of a Randomized Algorithm for Probabilistic Inference;206
11.6.1;1. Introduction;206
11.6.2;2. Methods and Procedures;208
11.6.3;3. Results;214
11.6.4;4. Discussion and Conclusions;218
11.6.5;5. Acknowledgments;221
11.6.6;References;221
11.7;Chapter 16. Weighing and Integrating Evidence for Stochastic Simulation in Bayesian Networks;224
11.7.1;1 Introduction;224
11.7.2;2 The Evidence Weighting Technique;226
11.7.3;3 Evidence Weighting With Evidential Integration;227
11.7.4;4 Example;229
11.7.5;5 Discussion;230
11.7.6;6 Conclusions;233
11.7.7;References;234
11.8;CHAPTER 17. SIMULATION APPROACHES TO GENERAL PROBABILISTIC INFERENCE ON BELIEF NETWORKS;236
11.8.1;1. Introduction;236
11.8.2;2. The Algorithms;237
11.8.3;3. Test Results;240
11.8.4;4. Conclusions;242
11.8.5;5. Acknowledgments;245
11.8.6;References;245
12;PART IV: SOFTWARE TOOLS FOR UNCERTAIN REASONING;248
12.1;Chapter 18. Software tools for uncertain reasoning: An Introduction;250
12.2;Chapter 19. Now that I Have a Good Theory of Uncertainty, What Else Do I Need?;252
12.2.1;1 Normative vs. Prescriptive Theories of Uncertainty;252
12.2.2;2 Dynamic Classification Problems: Situation Assessment;253
12.2.3;3 RUM's Theory and Constraints;255
12.2.4;4 The Integrated RUM/RUMrunner Technology;259
12.2.5;5 Addressing the DCP's Reasoning Requirements;264
12.2.6;6 RUM/RUMrunner Applications;266
12.2.7;7 Conclusions;267
12.2.8;References;267
12.3;Chapter 20. Knowledge Acquisition Techniques for Intelligent Decision Systems: Integrating Axotl and Aquinas in DDUCKS;270
12.3.1;1. INTRODUCTION;270
12.3.2;2. APPROACH;274
12.3.3;3. DISCUSSION;281
12.3.4;ACKNOWLEDGEMENTS;282
12.3.5;REFERENCES;283
12.4;Chapter 21. BaRT: A Bayesian Reasoning Tool for Knowledge Based Systems;286
12.4.1;1. Introduction;286
12.4.2;2. Classificatory Problem Solving;288
12.4.3;3. A Bayesian Reasoning Tool;291
12.4.4;4. Conclusion;295
12.4.5;Acknowledgements;295
12.4.6;References;295
13;PART V: KNOWLEDGE ACQUISITION, MODELLING, AND EXPLANATION;298
13.1;Chapter 22. Assessment, criticism and improvement of imprecise subjective probabilities for a medical expert system;300
13.1.1;1. Introduction;300
13.1.2;2. Background, Assessments and Data;301
13.1.3;3. Criticising the probability assessments;302
13.1.4;4. Discrimination and Reliability;304
13.1.5;5. Learning from experience;306
13.1.6;6. Discussion;307
13.1.7;Acknowledgments;308
13.1.8;References;308
13.2;Chapter 23. Automated construction of sparse Bayesian networks from unstructured probabilistic models and domain information;310
13.2.1;1 Introduction;311
13.2.2;2 Bayesian networks;312
13.2.3;3 The construction algorithm;314
13.2.4;4 Results;317
13.2.5;5 Discussion and further work;318
13.2.6;6 Acknowledgements;321
13.2.7;References;323
13.3;Chapter 24. A Decision-Analytic Model for Using Scientific Data;324
13.3.1;1 THE PROBLEM;324
13.3.2;2 A MODEL FOR THE USE OF REPORTED SCIENTIFIC DATA;326
13.3.3;3 USING BIASES TO PARAMETERIZE THE SPACE OF CLINICAL STUDIES;329
13.3.4;4 PREVIOUS ATTEMPTS TO MODEL THE USE OF SCIENTIFIC DATA;331
13.3.5;5 USES OF THE MODEL;331
13.3.6;ACKNOWLEDGMENTS;332
13.3.7;REFERENCES;332
13.4;Chapter 25. Verbal expressions for probability updates: How much more probable is "much more probable"?;334
13.4.1;1. Introduction;334
13.4.2;2. Hypotheses about Phrase Selection Functions;335
13.4.3;3. Experimental Design;337
13.4.4;4. Analysis;338
13.4.5;5. Conclusions;341
13.4.6;Acknowledgements;342
13.4.7;Footnotes;342
13.4.8;References;342
14;PART VI: APPLICATIONS TO VISION AND RECOGNITION;344
14.1;Chapter 26. Map Learning with Indistinguishable Locations;346
14.1.1;1 Introduction;346
14.1.2;2 Spatial Modeling;347
14.1.3;3 Map Learning;350
14.1.4;4 Discussion;354
14.1.5;5 Related Work;355
14.1.6;References;356
14.2;Chapter 27. Plan Recognition in Stories and in Life;358
14.2.1;1 Introduction;358
14.2.2;2 Preliminaries;359
14.2.3;3 The "Knob" Theory;361
14.2.4;4 The "Mention" Theory;364
14.2.5;5 Conclusion;365
14.2.6;Appendix: Work in Progress;366
14.2.7;References;366
14.3;Chapter 28. HIERARCHICAL EVIDENCE ACCUMULATION IN THE PSEIKI SYSTEM AND EXPERIMENTS IN MODEL-DRIVEN MOBILE ROBOT NAVIGATION;368
14.3.1;1. APPLICATION;368
14.3.2;2. REPRESENTATION AND FLOW OF CONTROL;370
14.3.3;3. ACCUMULATION OF EVIDENCE;376
14.3.4;4. EDGE-BASED vs. REGION-BASED OPERATION;380
14.3.5;5. ARE INDEPENDENCE CONDITIONS SATISFIED?;381
14.3.6;6. ROBOT SELF-LOCATION USING PSEIKI;381
14.3.7;7. CONCLUDING REMARKS;383
14.3.8;8. REFERENCES;384
14.4;Chapter 29. Model-Based Influence Diagrams For Machine Vision;386
14.4.1;1 Introduction;386
14.4.2;2 Model-Based Reasoning for Machine Vision;388
14.4.3;3 Sequential Control for Machine Vision Inference;389
14.4.4;4 Model Guided Influence Diagram Construction;390
14.4.5;5 Dynamic Instantiation for Sequential Control;393
14.4.6;6 Conclusions;399
14.4.7;Acknowledgments;402
14.4.8;References;402
14.5;CHAPTER 30. THE APPLICATION OF DEMPSTER SHAFER THEORY TO A LOGIC-BASED VISUAL RECOGNITION SYSTEM;404
14.5.1;1 INTRODUCTION;404
14.5.2;2 DEMPSTER SHAFER THEORY REVIEW;405
14.5.3;3 PROPOSITIONAL LOGIC REVIEW;406
14.5.4;4 DEMPSTER SHAFER THEORY FORMULATION IN LOGIC-BASED TERMS;406
14.5.5;5 ATMS-BASED IMPLEMENTATION OF DEMPSTER SHAFER THEORY;409
14.5.6;6 MODEL-BASED VISUAL RECOGNITION USING AN EXTENDED ATMS;412
14.5.7;7 DISCUSSION;419
14.5.8;ACKNOWLEDGEMENTS;419
14.5.9;References;419
14.6;Chapter 31. Efficient Parallel Estimation for Markov Random Fields;422
14.6.1;1 Introduction;422
14.6.2;2 Generating Most Probable Labelings;423
14.6.3;3 Markov Random Fields;424
14.6.4;4 HCF;425
14.6.5;5 Local HCF;426
14.6.6;6 Test Results;426
14.6.7;7 Conclusions and Future Work;430
14.6.8;A Proof of Convergence for Local HCF;432
14.6.9;B Comparing Local HCF and HCF;433
14.6.10;References;433
15;PART VII: COMPARING APPROACHES TO UNCERTAIN REASONING;436
15.1;Chapter 32. Comparing Approaches to Uncertain Reasoning: Discussion System Condemnation Pays Off;438
15.2;CHAPTER 33. A PROBABILITY ANALYSIS OF THE USEFULNESS OF DECISION AIDS;442
15.2.1;1.0 INTRODUCTION;442
15.2.2;2.0 FALLIBLE VS. INFALLIBLE ADVICE;444
15.2.3;3.0 OVERCOMING THE COST OF FALLIBILITY;447
15.2.4;4.0 DISCUSSION;450
15.2.5;REFERENCES;451
15.3;CHAPTER 34. INFERENCE POLICIES;452
15.3.1;1.0 SATISFYING REQUIREMENTS;452
15.3.2;2.0 STANDARD INFERENCE POLICIES;453
15.3.3;3.0 NONSTANDARD INFERENCE POLICIES;455
15.3.4;4.0 SUMMARY AND DISCUSSION;459
15.3.5;REFERENCES;459
15.4;CHAPTER 35. COMPARING EXPERT SYSTEMS BUILT USING DIFFERENT UNCERTAIN INFERENCE SYSTEMS;460
15.4.1;1. INTRODUCTION;460
15.4.2;2. METHOD;462
15.4.3;3. RESULTS AND DISCUSSION;465
15.4.4;4. CONCLUSIONS;468
15.4.5;5. REFERENCES;469
15.5;Chapter 36. Shootout-89, An Evaluation of Knowledge-based Weather Forecasting Systems;472
16;Author index;474


