E-book, English, 424 pages
Wang / Zhang, Trends in Control and Decision-Making for Human-Robot Collaboration Systems
1st edition, 2017
ISBN: 978-3-319-40533-9
Publisher: Springer Nature Switzerland
Format: PDF
Copy protection: 1 - PDF watermark
This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. Since a measure of human collaboration is necessary for the optimal operation of any robotic system, the contributors examine a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how human strengths such as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments. The book focuses on quantitative methods and control design for guaranteed robot performance and a balanced human experience, addressing both physical human-robot interaction and social human-robot interaction. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering:
one-human-one-robot collaboration;
one-human-multiple-robot collaboration; and
human-swarm collaboration.
Individual topic areas include resource optimization (human and robotic), safety in collaboration, human trust in robots and decision-making when collaborating with robots, abstraction of swarm systems to make them suitable for human control, modeling and control of internal force interactions in collaborative manipulation, and the sharing of control between human and automated systems. Control and decision-making algorithms feature prominently in the text, importantly within the context of human factors and the constraints they impose. Applications such as assistive technology, driverless vehicles, cooperative mobile robots, manufacturing robots, and swarm robots are considered. Illustrative figures and tables are provided throughout the book. Researchers and students working in controls and in the interaction of humans and robots will learn new methods for human-robot collaboration from this book and will find the cutting edge of the subject described in depth.
Yue Wang received her B.S. degree in Mechanical Engineering from Shanghai University, China, in 2005, and her M.S. and Ph.D. degrees in Mechanical Engineering from Worcester Polytechnic Institute in 2008 and 2011, respectively. She is an Assistant Professor in the Department of Mechanical Engineering at Clemson University. Prior to joining Clemson in 2012, she was a postdoctoral research associate in the Electrical Engineering Department at the University of Notre Dame. Her research interests include cooperative control and decision-making for human-robot collaboration systems, multi-agent systems, and control of cyber-physical systems. Dr. Wang received the National Science Foundation CAREER Award and the Air Force Summer Faculty Fellowship in 2015. Her research has led to 9 journal publications, 23 peer-reviewed conference papers, a book, and 2 book chapters. Dr. Wang is a member of IEEE, ASME, and AIAA. She is the co-chair of the IEEE Technical Committee on Manufacturing Automation and Robotic Control and an organizer of several invited sessions at the American Control Conference.
Fumin Zhang is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received his Ph.D. degree in Electrical Engineering from the University of Maryland, College Park, in 2004, and held a postdoctoral position at Princeton University from 2004 to 2007. His research interests include mobile sensor networks, maritime robotics, control systems, and theoretical foundations for cyber-physical systems. He received the NSF CAREER Award in September 2009, the Lockheed Inspirational Young Faculty Award in March 2010, the ONR Young Investigator Program Award in April 2010, and the GT Roger P. Webb Outstanding Junior Faculty Award in April 2011. He is currently serving as the co-chair of the IEEE RAS Technical Committee on Marine Robotics and the chair of the IEEE CSS Technical Committee on Robotic Control and Manufacturing Automation.
Further Information & Material
Preface (p. 6)
Acknowledgements (p. 7)
Contents (p. 8)
1 Introduction (p. 17)
  1.1 Overview (p. 17)
  1.2 Collaboration Between One Human-Robot Pair (p. 20)
  1.3 Collaboration Between Human and Multiple Robots/Swarms (p. 22)
  References (p. 24)
2 Robust Shared-Control for Rear-Wheel Drive Cars (p. 30)
  2.1 Introduction (p. 30)
  2.2 Problem Formulation, Definitions, and Assumptions (p. 31)
  2.3 Design of the Shared-Control Law with Measurements of Absolute Positions (p. 34)
    2.3.1 Design of the Feedback Controller (p. 34)
    2.3.2 Shared-Control Algorithm (p. 37)
  2.4 Disturbance Rejections (p. 40)
  2.5 Design of the Shared Control Without Measurements of Absolute Positions (p. 43)
    2.5.1 Design of the Feedback Controller (p. 44)
    2.5.2 Shared-Control Algorithm (p. 46)
  2.6 Case Studies (p. 48)
    2.6.1 Case I: Turning Without Absolute Positioning (p. 48)
    2.6.2 Case II: Driving on a Road with Parked Cars (p. 51)
    2.6.3 Case III: Emergency Braking (p. 51)
  2.7 Conclusions (p. 53)
  References (p. 53)
3 Baxter-On-Wheels (BOW): An Assistive Mobile Manipulator for Mobility Impaired Individuals (p. 56)
  3.1 Introduction (p. 56)
  3.2 System Description (p. 59)
    3.2.1 Experimental Platform: BOW (p. 59)
    3.2.2 System Kinematics (p. 63)
  3.3 Control Algorithm (p. 64)
    3.3.1 Baseline Shared-Control Algorithm (p. 64)
    3.3.2 Free-Space Mode and Contact Mode (p. 66)
  3.4 Application to the BOW (p. 69)
    3.4.1 User Interface (p. 69)
    3.4.2 Object Pick-Up and Placement Task (p. 70)
    3.4.3 Board Cleaning Task (p. 73)
  3.5 Conclusion (p. 75)
  References (p. 76)
4 Switchings Between Trajectory Tracking and Force Minimization in Human-Robot Collaboration (p. 79)
  4.1 Introduction (p. 79)
  4.2 Dynamic Models (p. 81)
    4.2.1 Robot Model (p. 82)
    4.2.2 Human Arm Model (p. 82)
    4.2.3 Unified Model (p. 84)
    4.2.4 Trajectory Tracking (p. 85)
  4.3 Control Design (p. 85)
    4.3.1 Control Objective (p. 85)
    4.3.2 Selection of Cost Functions (p. 86)
    4.3.3 Optimal Control (p. 87)
  4.4 Simulations (p. 89)
    4.4.1 Simulation Settings (p. 89)
    4.4.2 Change of Weights (p. 90)
    4.4.3 Adaptation of Desired Trajectory (p. 92)
  4.5 Conclusions (p. 93)
  References (p. 94)
5 Estimating Human Intention During a Human-Robot Cooperative Task Based on the Internal Force Model (p. 96)
  5.1 Introduction (p. 96)
  5.2 Internal Force Model (p. 99)
    5.2.1 Problem Formulation (p. 99)
    5.2.2 Existing Models (p. 100)
    5.2.3 Proposed Model (p. 101)
    5.2.4 Discussion (p. 104)
  5.3 Method (p. 104)
    5.3.1 Apparatus (p. 105)
    5.3.2 Procedure (p. 105)
  5.4 Results (p. 107)
  5.5 Validation of the Model (p. 110)
  5.6 Statistical Analysis of the Internal Force Features (p. 112)
    5.6.1 Initial Grasp Force Magnitude (p. 113)
    5.6.2 Final Grasp Force Magnitude (p. 114)
    5.6.3 Internal Force Energy (p. 114)
    5.6.4 Difference Between Initial and Final Grasp Forces (p. 115)
    5.6.5 Internal Force Variation (p. 115)
    5.6.6 Negotiation Force (p. 116)
    5.6.7 Negotiation Force Versus Object Velocity (p. 117)
  5.7 Proposed Cooperation Policy (p. 118)
  5.8 Conclusion (p. 120)
  References (p. 121)
6 A Learning Algorithm to Select Consistent Reactions to Human Movements (p. 123)
  6.1 Introduction (p. 123)
  6.2 Background (p. 125)
    6.2.1 Expert-Based Learning (p. 125)
    6.2.2 Binary Learning Algorithms (p. 126)
  6.3 Analysis (p. 127)
    6.3.1 Performance (p. 128)
    6.3.2 Consistency (p. 128)
    6.3.3 Adaptiveness (p. 130)
    6.3.4 Tie Breaking (p. 131)
  6.4 Expanded Dual Expert Algorithm (p. 132)
    6.4.1 Performance Analysis (p. 133)
    6.4.2 Consistency and Adaptiveness (p. 134)
  6.5 Simulation (p. 134)
    6.5.1 Dual Expert Algorithm (p. 134)
    6.5.2 Expanded Dual Expert Algorithm (p. 135)
  6.6 Experiment (p. 137)
    6.6.1 Setup (p. 138)
    6.6.2 Results (p. 139)
  6.7 Conclusions (p. 141)
  References (p. 141)
7 Assistive Optimal Control-on-Request with Application in Standing Balance Therapy and Reinforcement (p. 143)
  7.1 Introduction (p. 143)
  7.2 Assistive Control Synthesis (p. 145)
    7.2.1 Calculating a Schedule of Optimal Infinitesimal Actions (p. 145)
    7.2.2 Computing the Control Duration (p. 150)
  7.3 Human-Robot Interaction in Assisted Balance Therapy (p. 151)
    7.3.1 Related Work: Assist-as-Needed Techniques (p. 152)
    7.3.2 Interactive Simulation Study (p. 153)
  7.4 Human-Robot Communication in Posture Reinforcement: A Short Study (p. 157)
  7.5 Conclusion (p. 160)
  References (p. 161)
8 Intelligent Human-Robot Interaction Systems Using Reinforcement Learning and Neural Networks (p. 164)
  8.1 Introduction (p. 164)
  8.2 HRI Control: Motivation and Structure Overview of the Proposed Approach (p. 166)
  8.3 Inner Robot-Specific Loop (p. 167)
  8.4 Outer Task-Specific Loop Control (p. 173)
    8.4.1 Task-Specific Outer Loop Control Method: An LQR Approach (p. 173)
    8.4.2 Learning Optimal Parameters of the Prescribed Impedance Model Using Integral Reinforcement Learning (p. 177)
  8.5 Simulation Results (p. 178)
  8.6 Conclusion (p. 185)
  References (p. 185)
9 Regret-Based Allocation of Autonomy in Shared Visual Detection for Human-Robot Collaborative Assembly in Manufacturing (p. 188)
  9.1 Introduction (p. 188)
  9.2 The Hybrid Cell for Human-Robot Collaborative Assembly (p. 190)
  9.3 Detection Problem Formulation with Focus on the Selected Assembly Task (p. 193)
    9.3.1 Description of the Problem (p. 193)
    9.3.2 Problem Formulation (p. 196)
  9.4 Bayesian Sequential Decision-Making Algorithm for Allocation of Autonomy (p. 197)
  9.5 Inclusion of Regret in Bayesian Decision-Making Algorithm for Allocation of Autonomy (p. 198)
  9.6 Illustration of the Decision-Making Approach (p. 201)
    9.6.1 Illustration of the Optimal Bayesian Decision-Making Approach (p. 201)
    9.6.2 Illustration of the Regret-Based Modified Decision-Making Approach (p. 204)
  9.7 Implementation Scheme of the Regret-Based Bayesian Decision-Making Approach for the Assembly Task (p. 204)
    9.7.1 The Overall Scheme in a Flowchart (p. 204)
    9.7.2 Measurement of Sensing Probability and Observation Cost (p. 207)
    9.7.3 Measurement Method for Regret Intensity (p. 208)
  9.8 Experimental Evaluation of the Regret-Based Bayesian Decision-Making Approach (p. 210)
    9.8.1 Objective (p. 210)
    9.8.2 Hypothesis (p. 210)
    9.8.3 The Evaluation Criteria (p. 211)
    9.8.4 The Experiment Design (p. 211)
    9.8.5 Subjects (p. 211)
    9.8.6 The Experimental Procedures (p. 212)
  9.9 Evaluation Results and Analyses (p. 212)
  9.10 Conclusions and Future Innovations (p. 214)
  References (p. 215)
10 Considering Human Behavior Uncertainty and Disagreements in Human-Robot Cooperative Manipulation (p. 217)
  10.1 Introduction (p. 217)
  10.2 Human-Robot Cooperative Manipulation (p. 219)
    10.2.1 Cooperative Manipulation (p. 219)
    10.2.2 Control Challenges in Physical Human-Robot Interaction (p. 222)
    10.2.3 Reactive Assistants (p. 222)
    10.2.4 Proactive Assistants (p. 223)
  10.3 Interaction Wrench Decomposition (p. 225)
    10.3.1 Nonuniform Wrench Decomposition Matrices (p. 226)
    10.3.2 Effective and Internal Wrenches (p. 227)
    10.3.3 Load Share and Disagreement (p. 231)
  10.4 Optimal Robot Assistance Considering Human Behavior Uncertainty and Disagreements (p. 231)
    10.4.1 Anticipatory Assistance Based on Learned Models (p. 232)
    10.4.2 The Two-Dimensional Translational Case (p. 237)
    10.4.3 Experiments (p. 239)
  10.5 Conclusions (p. 245)
  References (p. 248)
11 Designing the Robot Behavior for Safe Human-Robot Interactions (p. 251)
  11.1 Introduction (p. 251)
    11.1.1 The Safety Issues and Existing Solutions (p. 252)
    11.1.2 Safety Problems in HRI: Conflicts in Multiagent Systems (p. 252)
    11.1.3 Safe Control and Exploration (p. 253)
  11.2 Modeling the Human-Robot Interactions (p. 254)
    11.2.1 The Agent Model (p. 254)
    11.2.2 The Closed-Loop System (p. 255)
    11.2.3 Information Structure (p. 256)
  11.3 The Safety-Oriented Behavior Design (p. 257)
    11.3.1 The Safety Principle (p. 257)
    11.3.2 The Safety Index (p. 258)
  11.4 The Safe Set Algorithm (SSA) (p. 260)
    11.4.1 The Control Algorithm (p. 261)
    11.4.2 Online Learning and Prediction of Humans' Dynamics (p. 262)
    11.4.3 Applications (p. 263)
  11.5 The Safe Exploration Algorithm (SEA) (p. 265)
    11.5.1 The Safe Set in the Belief Space (p. 266)
    11.5.2 Learning in the Belief Space (p. 267)
    11.5.3 A Comparative Study Between SSA and SEA (p. 270)
  11.6 Combining SSA and SEA in Time Varying MAS Topology (p. 273)
    11.6.1 The Control Algorithm (p. 274)
    11.6.2 The Learning Algorithm (p. 275)
    11.6.3 Performance (p. 275)
  11.7 Discussions (p. 276)
    11.7.1 The Energy Based Methods (p. 277)
    11.7.2 Limitations and Future Work (p. 277)
  11.8 Conclusion (p. 278)
  References (p. 278)
12 When Human Visual Performance Is Imperfect: How to Optimize the Collaboration Between One Human Operator and Multiple Field Robots (p. 281)
  12.1 Introduction (p. 281)
  12.2 Human and Robot Performance in Target Classification [4] (p. 283)
  12.3 Optimizing Human-Robot Collaboration for Target Classification (p. 285)
    12.3.1 Predetermined Site Allocation (p. 285)
    12.3.2 Optimized Site Allocation (p. 290)
  12.4 Numerical Results (p. 293)
    12.4.1 Collaboration Between the Human Operator and One Robot [14] (p. 294)
    12.4.2 Predetermined Site Allocation (p. 296)
    12.4.3 Optimized Site Allocation (p. 303)
  12.5 Conclusions (p. 308)
  References (p. 308)
13 Human-Collaborative Schemes in the Motion Control of Single and Multiple Mobile Robots (p. 310)
  13.1 Introduction (p. 310)
  13.2 Modeling of the Robot and the Interactions (p. 311)
    13.2.1 Mobile Robot (p. 311)
    13.2.2 Communication Infrastructure (p. 313)
    13.2.3 Human-Robot Interface (p. 313)
  13.3 A Taxonomy of Collaborative Human-Robot Control (p. 315)
    13.3.1 Physical Domain of the Robots (p. 315)
    13.3.2 Degree of Autonomy from the Human Operator (p. 316)
    13.3.3 Force Interaction with the Operator (p. 319)
    13.3.4 Near-Operation Versus Teleoperation (p. 321)
    13.3.5 Physical Interaction with the Environment (p. 322)
    13.3.6 Use of Onboard Sensors Only (p. 324)
  13.4 A Taxonomy of Collaborative Human-Multi-robot Control (p. 325)
    13.4.1 Level of Centralization (p. 325)
    13.4.2 Master-Leader-Followers Schemes (p. 326)
    13.4.3 Formation-Orthogonal Control Schemes (p. 327)
    13.4.4 Group-Property Preservation Schemes (p. 328)
    13.4.5 Physical Interaction with Contact (p. 329)
  13.5 Conclusions (p. 330)
  References (p. 331)
14 A Passivity-Based Approach to Human-Swarm Collaboration and Passivity Analysis of Human Operators (p. 334)
  14.1 Introduction (p. 334)
  14.2 Intended Scenario and Control Goals (p. 337)
  14.3 Control Architecture and Passivity (p. 340)
  14.4 Convergence Analysis (p. 342)
    14.4.1 Synchronization in Position Control Mode (p. 342)
    14.4.2 Synchronization in Velocity Control Mode (p. 346)
  14.5 Passivity of the Human Operator Decision Process (p. 349)
    14.5.1 Experimental Setup and Approach (p. 350)
    14.5.2 Analysis on Human Passivity in Position Control Mode (p. 353)
    14.5.3 Analysis on Human Passivity in Velocity Control Mode (p. 356)
    14.5.4 Analysis on Individual Variability (p. 358)
  14.6 Summary (p. 362)
  References (p. 363)
15 Human-Swarm Interactions via Coverage of Time-Varying Densities (p. 365)
  15.1 Introduction (p. 365)
  15.2 Human-Swarm Interactions via Coverage (p. 367)
    15.2.1 The Coverage Problem (p. 370)
    15.2.2 Centralized Coverage of Time-Varying Densities (p. 373)
    15.2.3 Distributed Coverage of Time-Varying Densities (p. 377)
  15.3 Designing Density Functions (p. 382)
    15.3.1 Diffusion of Drawn Geometric Configurations (p. 382)
    15.3.2 Control of Gaussian Functions (p. 384)
  15.4 Robotic Experiments (p. 385)
  15.5 Conclusions (p. 388)
  References (p. 389)
16 Co-design of Control and Scheduling for Human-Swarm Collaboration Systems Based on Mutual Trust (p. 394)
  16.1 Introduction (p. 394)
  16.2 Swarm Setup (p. 397)
    16.2.1 Dynamic Timing Model and Collaboration Delay (p. 397)
    16.2.2 Cooperative Control for Swarm Agents (p. 399)
  16.3 Collaboration Framework (p. 405)
    16.3.1 Trust Model (p. 405)
    16.3.2 Human Performance Model (p. 406)
    16.3.3 Swarm Performance Model (p. 407)
    16.3.4 Human Attention Preference (p. 407)
    16.3.5 Fitness (p. 409)
  16.4 Real-Time Scheduling (p. 410)
  16.5 Simulation Results (p. 411)
    16.5.1 Parameter Setup (p. 411)
    16.5.2 Results and Discussions (p. 413)
  16.6 Conclusions (p. 417)
  References (p. 418)
Index (p. 421)




