Lawless / Mittu / Sofge | Autonomy and Artificial Intelligence: A Threat or Savior? | E-Book | www.sack.de

E-book, English, 324 pages

Lawless / Mittu / Sofge Autonomy and Artificial Intelligence: A Threat or Savior?


1st edition, 2017
ISBN: 978-3-319-59719-5
Publisher: Springer Nature Switzerland
Format: PDF
Copy protection: PDF watermark




This book explores how Artificial Intelligence (AI), by increasing the autonomy of machines and robots, offers expanded but uncertain opportunities for humans, machines, and robots to affect society. To help readers understand the relationships among AI, autonomy, humans, and machines, and to help society reduce human error in the use of advanced technologies (e.g., airplanes, trains, cars), this edited volume presents a wide selection of the underlying theories, computational models, experimental methods, and field applications. While other literature treats these topics individually, this book unifies the fields of autonomy and AI, framing them in the broader context of effective integration for human-autonomous machine and robotic systems. The contributions, written by world-class researchers and scientists, elaborate on key research topics at the heart of effective human-machine-robot systems integration. These topics include, for example, computational support for intelligence analyses; the challenge of verifying today's and future autonomous systems; comparisons between today's machines and autism; the implications of human information interaction for artificial intelligence and errors; systems that reason; the autonomy of machines, robots, and buildings; and hybrid teams, where "hybrid" reflects arbitrary combinations of humans, machines, and robots. The contributors span the field of autonomous systems research, from industry and academia to government.
Given the broad diversity of the research in this book, the editors strove to thoroughly examine: the challenges and trends of systems that implement and exhibit AI; the social implications of present and future systems made autonomous with AI; systems with AI that seek to develop trusted relationships among humans, machines, and robots; and the effective human-systems integration required for trust in these new systems and their applications to grow and be sustained.

Contributing authors: Kevin Barry, Patrick Benavidez, Chris Berka, Joseph Coyne, Boris A. Galitsky, Peter Gerken, Sri Nikhil Gupta Gourisetti, Rachel Hingst, Ayanna Howard, Mo Jamshidi, W.F. Lawless, James Llinas, Jonathan Lwowski, Ranjeev Mittu, Ira S. Moskowitz, Michael Mylrea, Anna Parnis, John J. Prevost, Adrienne Raglin, Signe A. Redfield, Paul Robinette, Galina Rogova, Stephen Russell, Alicia Ruvinsky, Mae L. Seto, Sarah Sherwood, Ciara Sibley, Donald Sofge, Douglas Summers Stay, Maja Stikic, Catherine Tessier, Alan R. Wagner.


Further Information & Material


Preface
  Spring 2015: Foundations of Autonomy and Its (Cyber) Threats—From Individuals to Interdependence
    Spring 2015: Organizing Committee
    Spring 2015: Program Committee
    Spring 2015: Invited Keynote Speakers
    Spring 2015: Regular Speakers
  Spring 2016: AI and the Mitigation of Human Error—Anomalies, Team Metrics and Thermodynamics
    Spring 2016: Organizing Committee
    Spring 2016: Program Committee (duplicates the Spring 2015 symposium)
      Spring 2016: Invited Keynote Speakers
      Spring 2016: Regular Speakers
  Questions for Speakers and Attendees at AAAI-2015 and AAAI-2016 and for Readers of This Book
Contents
Chapter 1: Introduction
  1.1 Background of the 2015 Symposium
  1.2 Background of the 2016 Symposium
  1.3 Contributed Chapters
  References
Chapter 2: Reexamining Computational Support for Intelligence Analysis: A Functional Design for a Future Capability
  2.1 Motivation
  2.2 Goals and Requirements
  2.3 Future Directions in Intelligence Analysis
    2.3.1 Reviews of Open Literature and Operational Environments
    2.3.2 Analytical Rigor in Intelligence Analysis/Argument Mapping
  2.4 Approaches to Computational Support
    2.4.1 Paradigms and Methods
    2.4.2 Argumentation Methods
    2.4.3 Computational Support to Argumentation: The State of the Art
      2.4.3.1 Argument Detection
        Moens et al. (2007), Automatic Detection of Arguments in Legal Texts
        Mochales-Palau and Moens (2007)
        Feng and Hirst (2011), Classifying Arguments by Scheme
      2.4.3.2 Argument Mining
        Moens (2013), State of the Art in Argument Mining
      2.4.3.3 Argument Invention
        Walton and Gordon (2012), the Carneades Model of Argument Invention
      2.4.3.4 Argument Visualization (a.k.a. Mapping, Diagramming)
  2.5 Current-Day Computational Support to Argumentation
    2.5.1 AVERS and CISpaces as Leading Relevant Prototypes
  2.6 Computational Support for Narrative Development
    2.6.1 Using Topic Modeling to Assess Story Relevance and Narrative Formation
  2.7 Developing a Functional Design for an Advanced-Capability Prototype
    2.7.1 Looking Ahead: Possible Test and Evaluation Schemes
  References
Chapter 3: Task Allocation Using Parallelized Clustering and Auctioning Algorithms for Heterogeneous Robotic Swarms Operating on a Cloud Network
  3.1 Introduction
  3.2 Robotic Swarm Methodology
    3.2.1 Scenario
    3.2.2 System Overview
    3.2.3 Localization of People
    3.2.4 Building the Map
      3.2.4.1 Clustering the People
      3.2.4.2 Parallelization of Clustering Algorithm
      3.2.4.3 Meta-Clustering the Clusters
    3.2.5 Auctioning the Meta-Clusters
    3.2.6 Traveling to the Assigned Clusters
      3.2.6.1 Traveling Salesman Solver
      3.2.6.2 Human Interaction with Swarm
  3.3 Experimental Results
    3.3.1 Simulation Results
    3.3.2 Hardware Emulation Results
      3.3.2.1 Unmanned Ground Vehicle (UGV)
      3.3.2.2 GPS Emulation
      3.3.2.3 Traveling Salesman
      3.3.2.4 CCR K-Means Clustering
      3.3.2.5 Meta-Clustering
      3.3.2.6 Auction Algorithm
  3.4 Conclusions
  References
Chapter 4: Human Information Interaction, Artificial Intelligence, and Errors
  4.1 Introduction
  4.2 Human Information Interaction
  4.3 HII and Artificial Intelligence
  4.4 HII, AI, and Errors
  4.5 Conclusion
  References
Chapter 5: Verification Challenges for Autonomous Systems
  5.1 Introduction
  5.2 Autonomy
    5.2.1 Benefits of Autonomy
  5.3 Verification
    5.3.1 Verification Implications of Autonomy
    5.3.2 Example System
  5.4 Challenges
    5.4.1 Models
    5.4.2 Abstraction
      5.4.2.1 Fidelity
      5.4.2.2 Requirements Generation
    5.4.3 Test
      5.4.3.1 Scenarios
      5.4.3.2 Metrics and Performance Evaluation
      5.4.3.3 Intersection of Scenarios and Metrics
    5.4.4 Tools
  5.5 Summary
  References
Chapter 6: Conceptualizing Overtrust in Robots: Why Do People Trust a Robot That Previously Failed?
  6.1 Introduction
  6.2 Conceptualizing Overtrust
  6.3 Robot Guidance Versus Existing Guidance Technology
    6.3.1 Experimental Setup
    6.3.2 Results
  6.4 Human-Robot Trust in Virtual Simulations
  6.5 Repairing Broken Trust
    6.5.1 Experimental Setup
    6.5.2 Results
  6.6 Overtrust of Robots in Physical Situations
    6.6.1 Experimental Setup
    6.6.2 Results
  6.7 Discussion
  6.8 Thoughts on Future Work
  References
Chapter 7: Research Considerations and Tools for Evaluating Human-Automation Interaction with Future Unmanned Systems
  7.1 The Current Environment and Future Vision
  7.2 Calibrating Trust in Automation
  7.3 DoD Plans and Guides
  7.4 Supervisory Control Research and Testing Environments
    7.4.1 The Adaptive Levels of Automation Test Bed and Research
    7.4.2 The Research Environment for Supervisory Control of Heterogeneous Unmanned Vehicles Test Bed and Research
  7.5 Supervisory Control Research Limitations and Challenges
  7.6 Assessing Human-Automation Performance
    7.6.1 The Value of Eye Tracking
  7.7 Supervisory Control Operations User Testbed (SCOUT) Overview
  7.8 Summary
  References
Chapter 8: Robots Autonomy: Some Technical Issues
  8.1 Introduction
  8.2 What Is a Robot?
  8.3 Autonomy
    8.3.1 What Is Autonomy?
    8.3.2 Authority Sharing
  8.4 Autonomy and Authority Sharing: Some Questions
    8.4.1 The Robot
      8.4.1.1 Situation Tracking: Interpretation and Assessment of the Situation
      8.4.1.2 Decision
    8.4.2 The Human Operator
    8.4.3 The Operator-Robot Interaction
  8.5 Autonomy and Authority Sharing Ethical Challenges
    8.5.1 Why Imbue a Robot with Ethics?
    8.5.2 A Careful Approach Is Needed
    8.5.3 Thought Experiments Usefulness
  8.6 Conclusion: Some Prospects for Robots Autonomy
  References
Chapter 9: How Children with Autism and Machines Learn to Interact
  9.1 Introduction
  9.2 From Hypersensitivity to Limited Interaction with the World
    9.2.1 Hypersensitivity
    9.2.2 Active Learning in Computer Science
    9.2.3 Learning Repetitive Patterns
    9.2.4 Self-Stimulation
    9.2.5 Not Paying Attention to What Is Important
    9.2.6 From Hyper-Sensitivity to Self-Stimulation of an Engineering System
  9.3 Building and Revising Hypotheses in Active Human Learning
  9.4 Building Teams Having Learned to Interact
    9.4.1 How Trust Develops in a Baby
    9.4.2 Measuring Skills of Reasoning About Mental World
    9.4.3 A Cooperation Between CwA in the Real World
  9.5 Rehabilitating Autistic Interactions
    9.5.1 Teaching Hide-and-Seek Game
    9.5.2 Learning to Navigate Environment
    9.5.3 A Literary Work Search System
  9.6 Discussion and Conclusions
  References
Chapter 10: Semantic Vector Spaces for Broadening Consideration of Consequences
  10.1 Designing for Safety
  10.2 Understanding Intent
  10.3 Expressing Intent
  10.4 Problem 1: An Encoding for Concepts
  10.5 Semantic Vector Spaces
  10.6 Problem 2: Distributional Semantic Vector Spaces
  10.7 What Needs to Be Done
    10.7.1 Learning More Complex Relations
    10.7.2 Distributional Semantics
    10.7.3 Semantics from Images, Video, and Other Data Streams
    10.7.4 Combining Two Vector Spaces to Better Capture the Knowledge Learned from Each
    10.7.5 Encoding the Meaning of Natural Language Phrases and Sentences as Vectors
    10.7.6 Modifying a Semantic Vector Space as New Information Is Learned Without Destroying Already Existing Structure
    10.7.7 Performing Reasoning Within Vector Spaces
    10.7.8 Ways of Discovering and Representing Knowledge About Physical Consequences
  10.8 Conclusion
  References
Chapter 11: On the Road to Autonomy: Evaluating and Optimizing Hybrid Team Dynamics
  11.1 Introduction
  11.2 Teaming Platform
  11.3 Teaming Studies
  11.4 Neurophysiologic Synchronies
  11.5 EEG Predictors of Team Performance
  11.6 Narrative Storytelling
  11.7 Tutoring Dyads
  11.8 Quality of Surgical Operations
  11.9 Discussion
  11.10 Future Research Directions
  References
Chapter 12: Cybersecurity and Optimization in Smart “Autonomous” Buildings
  12.1 Introduction
  12.2 Smart Building Opportunity
  12.3 Smart Building Challenges
  12.4 AI Enabled Building Automation Is Blurring the Lines Between Information Technology and Operations Technology
  12.5 AI Enabled Autonomous Building Automation to Enhance Security
    12.5.1 AI Enabled Threat Identification and Mitigation
      12.5.1.1 Theoretical Concept: AI Based Identification System
      12.5.1.2 AI Based Security Learning System: Theoretical Concept
    12.5.2 AI Enabled Cybersecurity Protection
      12.5.2.1 The Role of AI in Cybersecurity Protection: Theoretical Concept
    12.5.3 AI Enabled Cyber-Physical Intrusion Detection System
      12.5.3.1 An Integrated AI Based IDPS: Theoretical Concept
    12.5.4 AI Enabled Cyber Incident Response
      12.5.4.1 An Autonomous AI Cybersecurity Response System: Theoretical Concept
    12.5.5 AI Based Building Recovery System
    12.5.6 AI Based Building Recovery System: Theoretical Concept
  12.6 Use Cases
    12.6.1 AI to Mitigate Insider Threat: Cognitive Ubiquitous Sensing and Insider Threat
    12.6.2 AI Enabled Smart Buildings Cybersecurity and Business Optimization
    12.6.3 Uber for Cyber and Energy
    12.6.4 Blockchain for Power Grid Resilience: Exchanging Distributed Energy at Speed, Scale, Autonomy and Security
    12.6.5 Social Engineering Autonomy for Cyber Intrusion Monitoring and Real-Time Anomaly Detecting (SCI-RAD)
  12.7 Conclusion and Future Research
  References
Chapter 13: Evaluations: Autonomy and Artificial Intelligence: A Threat or Savior?
  13.1 Introduction
    13.1.1 Mathematical Model of Autonomy: Entropy of Teamwork
    13.1.2 Entropy Production
    13.1.3 Emotion
    13.1.4 Evaluations
  13.2 Introduction: Safety and Human Error
    13.2.1 Human Error
    13.2.2 The Role of AI in Reducing Human Error
    13.2.3 Roles with AI
    13.2.4 Forecasts with AI and Interdependence
    13.2.5 Evaluations
  References


