
Gammerman

Causal Models and Intelligent Data Management

Book, English, 185 pages, hardcover (rounded spine, laminated), format (W × H): 160 mm × 241 mm, weight: 1020 g

ISBN: 978-3-540-66328-7
Publisher: Springer Berlin Heidelberg


The need to electronically store, manipulate, and analyze large-scale, high-dimensional data sets calls for new computational methods. This book presents new methods and tools for intelligent data management, including new results from the field of inference, and leading experts map out future directions for intelligent data analysis. It will be a valuable reference for researchers exploring the interdisciplinary area between statistics and computer science, as well as for professionals applying advanced data analysis methods in industry.

Target audience


Professional/practitioner


Authors/Editors


Further information & material


I. Causal Models
1. Statistics, Causality, and Graphs: 1.1 A Century of Denial; 1.2 Researchers in Search of a Language; 1.3 Graphs as a Mathematical Language; 1.4 The Challenge; References
2. Causal Conjecture: 2.1 Introduction; 2.2 Variables in a Probability Tree; 2.3 Causal Uncorrelatedness; 2.4 Three Positive Causal Relations; 2.5 Linear Sign; 2.6 Causal Uncorrelatedness Again; 2.7 Scored Sign; 2.8 Tracking; References
3. Who Needs Counterfactuals?: 3.1 Introduction (3.1.1 Decision-Theoretic Framework; 3.1.2 Unresponsiveness and Insensitivity); 3.2 Counterfactuals; 3.3 Problems of Causal Inference (3.3.1 Causes of Effects; 3.3.2 Effects of Causes); 3.4 The Counterfactual Approach (3.4.1 The Counterfactual Setting; 3.4.2 Counterfactual Assumptions); 3.5 Homogeneous Population (3.5.1 Experiment and Inference); 3.6 Decision-Analytic Approach; 3.7 Sheep and Goats (3.7.1 ACE; 3.7.2 Neyman and Fisher; 3.7.3 Bioequivalence); 3.8 Causes of Effects (3.8.1 A Different Approach?); 3.9 Conclusion; References
4. Causality: Independence and Determinism: 4.1 Introduction; 4.2 Conclusion; References

II. Intelligent Data Management
5. Intelligent Data Analysis and Deep Understanding: 5.1 Introduction; 5.2 The Question: The Strategy; 5.3 Diminishing Returns; 5.4 Conclusion; References
6. Learning Algorithms in High Dimensional Spaces: 6.1 Introduction; 6.2 SVM for Pattern Recognition (6.2.1 Dual Representation of Pattern Recognition); 6.3 SVM for Regression Estimation (6.3.1 Dual Representation of Regression Estimation; 6.3.2 SVM Applet and Software); 6.4 Ridge Regression and Least Squares Methods in Dual Variables; 6.5 Transduction; 6.6 Conclusion; References
7. Learning Linear Causal Models by MML Sampling: 7.1 Introduction; 7.2 Minimum Message Length Principle; 7.3 The Model Space; 7.4 The Message Format; 7.5 Equivalence Sets (7.5.1 Small Effects; 7.5.2 Partial Order Equivalence; 7.5.3 Structural Equivalence; 7.5.4 Explanation Length); 7.6 Finding Good Models; 7.7 Sampling Control; 7.8 By-products; 7.9 Prior Constraints; 7.10 Test Results; 7.11 Remarks on Equivalence (7.11.1 Small Effect Equivalence; 7.11.2 Equivalence and Causality); 7.12 Conclusion; References
8. Game Theory Approach to Multicommodity Flow Network Vulnerability Analysis: References
9. On the Accuracy of Stochastic Complexity Approximations: 9.1 Introduction; 9.2 Stochastic Complexity and Its Applications; 9.3 Approximating the Stochastic Complexity in the Incomplete Data Case; 9.4 Empirical Results (9.4.1 The Problem; 9.4.2 The Experimental Setting; 9.4.3 The Algorithms; 9.4.4 Results); 9.5 Conclusion; References
10. AI Modelling for Data Quality Control (Xiaohui Liu): 10.1 Introduction; 10.2 Statistical Approaches to Outliers; 10.3 Outlier Detection and Analysis; 10.4 Visual Field Test; 10.5 Outlier Detection (10.5.1 Self-Organising Maps (SOM); 10.5.2 Applications of SOM); 10.6 Outlier Analysis by Modelling 'Real Measurements'; 10.7 Outlier Analysis by Modelling Noisy Data (10.7.1 Noise Model I: Noise Definition; 10.7.2 Noise Model II: Construction; 10.7.3 Noise Elimination); 10.8 Concluding Remarks; References
11. New Directions in Text Categorization: 11.1 Introduction; 11.2 Machine Learning for Text Classification; 11.3 Radial Basis Functions and the Bard; 11.4 An Evolutionary Algorithm for Text Classification; 11.5 Text Classification by Vocabulary Richness; 11.6 Text Classification with Frequent Function Words; 11.7 Do Authors Have Semantic Signatures?; 11.8 Syntax with Style; 11.9 Intermezzo; 11.10 Some Methods of Textual Feature-Finding (11.10.1 Progressive Pairwise Chunking; 11.10.2 Monte Carlo Feature Finding; 11.10.3 How Long Is a Piece of Substring?; 11.10.4 Comparative Testing); 11.11 Which Methods Work Best? - A Benchmarking Study; 11.12 Discussion (11.12.1 In Praise of Semi-Crude Bayesianism; 11.12.2 What's So Special About Linguistic Data?); References

