Cutsuridis / Hussain / Taylor | Perception-Action Cycle | E-Book | www.sack.de

E-book, English, 784 pages

Series: Springer Series in Cognitive and Neural Systems

Cutsuridis / Hussain / Taylor Perception-Action Cycle

Models, Architectures, and Hardware
1st edition, 2011
ISBN: 978-1-4419-1452-1
Publisher: Springer
Format: PDF
Copy protection: PDF watermark




The perception-action cycle is the circular flow of information between an organism and its environment in the course of a sensory-guided sequence of behaviour towards a goal. Each action causes changes in the environment that are analyzed bottom-up through the perceptual hierarchy and lead to further action, processed top-down through the executive hierarchy toward the motor effectors. These actions cause new changes, which are analyzed in turn and lead to new action, and so the cycle continues.

Perception-Action Cycle: Models, Architectures, and Hardware provides focused and easily accessible reviews of various aspects of the perception-action cycle. It is an unparalleled resource that will be an invaluable companion to anyone constructing and developing models, algorithms, and hardware implementations of autonomous machines empowered with cognitive capabilities.

The book is divided into three main parts. In the first part, leading computational neuroscientists present brain-inspired models of perception, attention, cognitive control, decision making, conflict resolution and monitoring, knowledge representation and reasoning, learning and memory, planning and action, and consciousness, grounded in experimental data. The second part discusses architectures, algorithms, and systems with cognitive capabilities and minimal guidance from the brain, drawing on cognitive science, computer vision, robotics, information theory, machine learning, computer agents, and artificial intelligence. The third part covers the analysis, design, and implementation of hardware systems with robust cognitive abilities, spanning mechatronics, sensing technology, sensor fusion, smart sensor networks, control rules, controllability, stability, model/knowledge representation, and reasoning.
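The circular flow described above can be sketched as a minimal control loop. The `perceive`, `decide`, and `act` functions and the one-dimensional toy environment below are hypothetical placeholders for illustration only, not material from the book:

```python
# Minimal sketch of a perception-action cycle (hypothetical toy example):
# an agent senses a 1-D environment bottom-up, selects a motor command
# top-down, and the command changes the environment, closing the cycle.

def perceive(env_state: int, goal: int) -> int:
    """Bottom-up: reduce the raw state to a signed error signal."""
    return goal - env_state

def decide(error: int) -> int:
    """Top-down: map the percept to a motor command (-1, 0, or +1)."""
    return (error > 0) - (error < 0)

def act(env_state: int, command: int) -> int:
    """Motor effectors change the environment."""
    return env_state + command

def run_cycle(env_state: int, goal: int, max_steps: int = 100) -> int:
    """Iterate perceive -> decide -> act until the goal is reached."""
    for _ in range(max_steps):
        error = perceive(env_state, goal)
        if error == 0:  # goal reached, the cycle terminates
            break
        env_state = act(env_state, decide(error))
    return env_state

print(run_cycle(env_state=0, goal=5))  # the agent steps toward the goal
```

Each pass through the loop is one turn of the cycle: the action's effect on the environment becomes the input to the next perception.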




1;Perception-Action Cycle: Models, Architectures, and Hardware;1
1.1;Preface;5
1.2;Contents;7
1.3;Contributors;11
1.4;Part I Computational Neuroscience Models;15
1.4.1;1 The Role of Attention in Shaping Visual Perceptual Processes;18
1.4.1.1;1.1 Introduction;18
1.4.1.2;1.2 Connecting Attention, Recognition, and Binding;21
1.4.1.3;1.3 Finding the Right Subset of Neural Pathways on a Recurrent Pass;28
1.4.1.4;1.4 Vision as Dynamic Tuning of a General Purpose Processor;32
1.4.1.5;References;33
1.4.2;2 Sensory Fusion;35
1.4.2.1;2.1 Introduction;35
1.4.2.2;2.2 Audio-Visual Integration;38
1.4.2.2.1;2.2.1 Audio-Visual Integration in the Superior Colliculus: Neurophysiological and Behavioral Evidence (Overview and Model Justification);38
1.4.2.2.2;2.2.2 Model Components;41
1.4.2.2.3;2.2.3 Results;43
1.4.2.2.3.1;2.2.3.1 Enhancement and Inverse Effectiveness;43
1.4.2.2.3.2;2.2.3.2 Cross-Modal Suppression;45
1.4.2.2.3.3;2.2.3.3 Within-Modal Suppression Without Cross-Modal Suppression;48
1.4.2.2.3.4;2.2.3.4 Cross-Modal Facilitation and Ventriloquism Phenomenon;48
1.4.2.2.4;2.2.4 Successes, Limitations and Future Challenges;52
1.4.2.3;2.3 Visual-Tactile Integration;54
1.4.2.3.1;2.3.1 Visual-Tactile Representation of Peripersonal Space: Neurophysiological and Behavioral Evidence (Overview and Model Justification);54
1.4.2.3.2;2.3.2 A Neural Network Model for Peri-Hand Space Representation: Simulation of a Healthy Subject and a RBD Patient (Model Components and Results 1);56
1.4.2.3.2.1;2.3.2.1 Simulation of the Healthy Subject;59
1.4.2.3.2.2;2.3.2.2 Simulation of the RBD Patient with Left Tactile Extinction;61
1.4.2.3.3;2.3.3 Modeling Peri-Hand Space Resizing: Simulation of Tool-Use Training (Model Components and Results 2);63
1.4.2.3.4;2.3.4 Successes, Limitations and Future Challenges;67
1.4.2.4;2.4 Conclusions;70
1.4.2.5;References;72
1.4.3;3 Modelling Memory and Learning Consistently from Psychology to Physiology;75
1.4.3.1;3.1 Introduction;76
1.4.3.2;3.2 The Recommendation Architecture Model;77
1.4.3.3;3.3 Review of Experimental Data Literature;83
1.4.3.3.1;3.3.1 Semantic Memory;83
1.4.3.3.2;3.3.2 Episodic Memory;84
1.4.3.3.3;3.3.3 Procedural Memory;84
1.4.3.3.4;3.3.4 Working Memory;85
1.4.3.3.5;3.3.5 Priming Memory;86
1.4.3.3.6;3.3.6 Dissociations Indicating Separate Memory Systems;87
1.4.3.4;3.4 Other Modelling Approaches;87
1.4.3.5;3.5 Brain Anatomy and the Recommendation Architecture Model;90
1.4.3.5.1;3.5.1 Cortical Structure;90
1.4.3.5.2;3.5.2 Cortical Information Models;90
1.4.3.5.2.1;3.5.2.1 Information Model for a Cortical Area;91
1.4.3.5.2.2;3.5.2.2 Information Model for a Cortical Column;95
1.4.3.5.2.3;3.5.2.3 Information Model for a Pyramidal Neuron;98
1.4.3.5.2.4;3.5.2.4 Pyramidal Neuron Dynamics;101
1.4.3.5.3;3.5.3 Structure of the Basal Ganglia and Thalamus;104
1.4.3.5.4;3.5.4 Information Models for the Thalamus and Basal Ganglia;105
1.4.3.5.4.1;3.5.4.1 Information Model for the Thalamus;106
1.4.3.5.4.2;3.5.4.2 Information Model for the Striatum;107
1.4.3.5.4.3;3.5.4.3 Information Model for the GPi and SNr;109
1.4.3.5.4.4;3.5.4.4 Information Model for the Nucleus Accumbens;109
1.4.3.5.5;3.5.5 Structure of the Hippocampal System;109
1.4.3.5.6;3.5.6 Information Model for the Hippocampal System;110
1.4.3.5.7;3.5.7 Structure of the Cerebellum;114
1.4.3.5.8;3.5.8 Information Model for the Cerebellum;114
1.4.3.6;3.6 Modelling of Memory and Learning Phenomena;116
1.4.3.6.1;3.6.1 Receptive Fields Stability and Memory;116
1.4.3.6.2;3.6.2 Development and Evolution of Indirect Activation Recommendation Strengths;118
1.4.3.6.3;3.6.3 Semantic Memory;119
1.4.3.6.4;3.6.4 Working Memory;122
1.4.3.6.5;3.6.5 Episodic Memory;124
1.4.3.6.6;3.6.6 Priming Memory;126
1.4.3.6.7;3.6.7 Procedural Memory;127
1.4.3.7;3.7 Mapping Between Different Levels of Description;128
1.4.3.8;3.8 More Complex Cognitive Processes;130
1.4.3.8.1;3.8.1 Attention;130
1.4.3.8.2;3.8.2 Emotion and Reward;131
1.4.3.8.3;3.8.3 Sleep;132
1.4.3.8.4;3.8.4 Mental Image Manipulation;133
1.4.3.8.5;3.8.5 Self-Awareness;134
1.4.3.8.6;3.8.6 Imagination;135
1.4.3.8.7;3.8.7 Planning;137
1.4.3.8.8;3.8.8 Stream of Consciousness;138
1.4.3.9;3.9 Electronic Implementations;139
1.4.3.10;3.10 Conclusions;140
1.4.3.11;References;141
1.4.4;4 Value Maps, Drives, and Emotions;146
1.4.4.1;4.1 Overview of the DECIDER Model;146
1.4.4.2;4.2 Review of Experimental Data;147
1.4.4.2.1;4.2.1 Behavioral Data on Risky Decision Making;147
1.4.4.2.2;4.2.2 Data on Neural Bases of Cognitive-Emotional Decision Making;151
1.4.4.3;4.3 Review of Previous Decision Models;155
1.4.4.3.1;4.3.1 Psychological Models Without Explicit Brain Components;155
1.4.4.3.2;4.3.2 Models of Brain Area Involvement in Cognitive-Emotional Decision Making;157
1.4.4.4;4.4 Organization of the Model;158
1.4.4.4.1;4.4.1 Fuzzy Emotional Traces and Adaptive Resonance;158
1.4.4.4.2;4.4.2 Effects of Learning;162
1.4.4.4.3;4.4.3 Adaptive Resonance and Its Discontents: Relative Versus Absolute Emotional Values;163
1.4.4.4.4;4.4.4 Higher Level and Deliberative Rules;167
1.4.4.5;4.5 A Simplified Simulation;169
1.4.4.6;4.6 Concluding Remarks;172
1.4.4.6.1;4.6.1 Predictions and Syntheses;172
1.4.4.6.2;4.6.2 Extension to a Multi-Drive Multiattribute Decision Model;174
1.4.4.6.3;4.6.3 The Larger Human Picture;175
1.4.4.7;References;176
1.4.5;5 Computational Neuroscience Models: Error Monitoring, Conflict Resolution, and Decision Making;180
1.4.5.1;5.1 Models of Cognitive Control;182
1.4.5.1.1;5.1.1 Biased Competition Model;182
1.4.5.1.2;5.1.2 Neural Models of Decision-Making;183
1.4.5.2;5.2 Medial Prefrontal Cortex and Performance Monitoring;183
1.4.5.2.1;5.2.1 Models of Performance Monitoring;184
1.4.5.2.1.1;5.2.1.1 Comparator Model;184
1.4.5.2.1.2;5.2.1.2 Conflict Monitoring Model;184
1.4.5.2.1.3;5.2.1.3 Action Selection Model;186
1.4.5.3;5.3 Error Likelihood Model;186
1.4.5.3.1;5.3.1 Testing the Error Likelihood Model;187
1.4.5.3.2;5.3.2 Risk;189
1.4.5.3.3;5.3.3 Multiple Response Effects;190
1.4.5.3.3.1;5.3.3.1 Cognitive Control Effects Driven by Error Likelihood, Conflict, and Errors;191
1.4.5.4;5.4 Future Challenges;192
1.4.5.4.1;5.4.1 Reward as well as Error Likelihood?;192
1.4.5.5;5.5 Toward a More Comprehensive Model of Performance Monitoring;192
1.4.5.6;5.6 Concluding Remarks;193
1.4.5.7;References;194
1.4.6;6 Neural Network Models for Reaching and Dexterous Manipulation in Humans and Anthropomorphic Robotic Systems;197
1.4.6.1;6.1 Introduction;198
1.4.6.2;6.2 Overview;199
1.4.6.2.1;6.2.1 Overview of the Neural Network Model for Arm Reaching and Grasping;199
1.4.6.2.2;6.2.2 Modular Multinetwork Architecture for Learning Reaching, and Grasping Tasks;199
1.4.6.3;6.3 Experimental and Computational Neurosciences Background;200
1.4.6.3.1;6.3.1 Review of Experimental Data Literature;200
1.4.6.3.2;6.3.2 Review of Previous Modeling Attempts;201
1.4.6.4;6.4 The Neural Network Model Architecture;203
1.4.6.4.1;6.4.1 Model Components;203
1.4.6.4.1.1;6.4.1.1 The Basic Module;203
1.4.6.4.1.2;6.4.1.2 Learning the Inverse Kinematics of the Arm and Fingers: LM1;204
1.4.6.4.1.3;6.4.1.3 Learning to Associate Object's Intrinsic Properties and Grasping Postures: LM2;208
1.4.6.5;6.5 Simulations Results, Limitations, and Future Challenges;211
1.4.6.5.1;6.5.1 Simulation Results;211
1.4.6.5.1.1;6.5.1.1 Generation of Reaching and Grasping Trajectories;211
1.4.6.5.1.2;6.5.1.2 Learning Capabilities, Training, and Generalization Errors of the GRASP Module;213
1.4.6.5.1.3;6.5.1.3 Analysis of the Neural Activity of the GRASP Module During Performance;216
1.4.6.5.2;6.5.2 Discussion;218
1.4.6.5.2.1;6.5.2.1 Reaching and Grasping Performance;218
1.4.6.5.2.2;6.5.2.2 Neural Activity of the GRASP Module;219
1.4.6.5.2.3;6.5.2.3 Model Assumptions, Limitations, and Possible Solutions to Challenge Them;221
1.4.6.6;6.6 Conclusion;222
1.4.6.7;References;223
1.4.7;7 Schemata Learning;228
1.4.7.1;7.1 Introduction;228
1.4.7.2;7.2 Review of Prior Models and Neuroscience Evidences;229
1.4.7.3;7.3 Proposed Model;233
1.4.7.3.1;7.3.1 General;233
1.4.7.3.2;7.3.2 Training;236
1.4.7.3.3;7.3.3 Action Generation in Physical Environment and Motor Imagery;237
1.4.7.4;7.4 Setup of Humanoid Robot Experiments;237
1.4.7.5;7.5 Experimental Results;239
1.4.7.5.1;7.5.1 Overall Task Performances in the End of Development;239
1.4.7.5.2;7.5.2 Development Processes;240
1.4.7.5.3;7.5.3 Analyses;242
1.4.7.6;7.6 Discussion;244
1.4.7.6.1;7.6.1 Summary of the Robot Experiments;244
1.4.7.6.2;7.6.2 Schemata Learning from Developmental Psychological Views;245
1.4.7.7;7.7 Summary;247
1.4.7.8;References;248
1.4.8;8 The Perception-Conceptualisation-Knowledge Representation-Reasoning Representation-Action Cycle: The View from the Brain;251
1.4.8.1;8.1 Introduction;251
1.4.8.2;8.2 The GNOSYS Model;255
1.4.8.2.1;8.2.1 The Basic GNOSYS Robot Platform and Environment;255
1.4.8.2.2;8.2.2 Information Flow and GNOSYS Sub-systems;255
1.4.8.2.2.1;8.2.2.1 Perception;256
1.4.8.2.2.2;8.2.2.2 Memory;257
1.4.8.2.2.3;8.2.2.3 Action Execution;258
1.4.8.3;8.3 The GNOSYS Model Processing Details;259
1.4.8.3.1;8.3.1 The GNOSYS Perception System;259
1.4.8.3.2;8.3.2 Learning Attended Object Representations;261
1.4.8.3.3;8.3.3 Learning Expectation of Reward;264
1.4.8.3.4;8.3.4 The GNOSYS Concept System;266
1.4.8.4;8.4 The Development of Internal Models in the Brain;267
1.4.8.5;8.5 Thinking as Mental Simulation;271
1.4.8.6;8.6 Creativity as Unattended Mental Simulation;273
1.4.8.6.1;8.6.1 Simulation Results for Unusual Uses of a Cardboard Box;276
1.4.8.6.1.1;8.6.1.1 Meta Goal;277
1.4.8.6.1.2;8.6.1.2 Object Codes;277
1.4.8.6.1.3;8.6.1.3 Affordance Codes;277
1.4.8.6.1.4;8.6.1.4 Mental Simulation Loop;277
1.4.8.6.1.5;8.6.1.5 Error Monitor;278
1.4.8.6.1.6;8.6.1.6 Attention;278
1.4.8.7;8.7 Reasoning as Rewarded Mental Simulation;280
1.4.8.7.1;8.7.1 Non-linguistic Reasoning;280
1.4.8.7.2;8.7.2 Setting Up the Linguistic Machinery;283
1.4.8.7.3;8.7.3 Linguistic Reasoning;283
1.4.8.8;8.8 Overall Results of the System;285
1.4.8.8.1;8.8.1 Experiments;285
1.4.8.8.2;8.8.2 Results;287
1.4.8.8.3;8.8.3 Extensions Needed;287
1.4.8.9;8.9 Relation to Other Cognitive System Architectures;288
1.4.8.10;8.10 Conclusions;289
1.4.8.11;References;291
1.4.9;9 Consciousness, Decision-Making and Neural Computation;294
1.4.9.1;9.1 Introduction;295
1.4.9.2;9.2 A Higher Order Syntactic Thought Theory of Consciousness;296
1.4.9.2.1;9.2.1 Multiple Routes to Action;296
1.4.9.2.2;9.2.2 A Computational Hypothesis of Consciousness;299
1.4.9.2.3;9.2.3 Adaptive Value of Processing in the System That Is Related to Consciousness;300
1.4.9.2.4;9.2.4 Symbol Grounding;302
1.4.9.2.5;9.2.5 Qualia;303
1.4.9.2.6;9.2.6 Pathways;304
1.4.9.2.7;9.2.7 Consciousness and Causality;305
1.4.9.2.8;9.2.8 Consciousness, a Computational System for Higher Order Syntactic Manipulation of Symbols, and a Commentary or Reporting Functionality;306
1.4.9.3;9.3 Selection Between Conscious vs. Unconscious Decision-Making and Free Will;308
1.4.9.3.1;9.3.1 Dual Routes to Action;308
1.4.9.3.2;9.3.2 The Selfish Gene vs. the Selfish Phene;311
1.4.9.3.3;9.3.3 Decision-Making Between the Implicit and Explicit Systems;313
1.4.9.3.4;9.3.4 Free Will;314
1.4.9.4;9.4 Decision-Making and "Subjective Confidence";315
1.4.9.4.1;9.4.1 Neural Networks for Decision-Making That Reflect "Subjective Confidence" in Their Firing Rates;316
1.4.9.4.2;9.4.2 A Model for Decisions About Confidence Estimates;320
1.4.9.5;9.5 Oscillations and Stimulus-Dependent Neuronal Synchrony: Their Role in Information Processing in the Ventral Visual System and in Consciousness;324
1.4.9.6;9.6 A Neural Threshold for Consciousness: The Neurophysiology of Backward Masking;327
1.4.9.6.1;9.6.1 The Neurophysiology and Psychophysics of Backward Masking;327
1.4.9.6.2;9.6.2 The Relation to Blindsight;329
1.4.9.7;9.7 The Speed of Visual Processing Within a Cortical Visual Area Shows That Top-Down Interactions with Bottom-Up Processes Are Not Essential for Conscious Visual Perception;330
1.4.9.8;9.8 Comparisons with Other Approaches to Consciousness;331
1.4.9.9;References;335
1.4.10;10 A Review of Models of Consciousness;341
1.4.10.1;10.1 Introduction;341
1.4.10.2;10.2 The Models of Consciousness;343
1.4.10.2.1;10.2.1 The Higher Order Thought Model;343
1.4.10.2.2;10.2.2 The Working Memory Model;344
1.4.10.2.3;10.2.3 The Global Workspace Model;345
1.4.10.2.4;10.2.4 The Complexity Model;346
1.4.10.2.5;10.2.5 The Recurrent Model;347
1.4.10.2.6;10.2.6 The Neural Field Model;347
1.4.10.2.7;10.2.7 The Relational Mind;348
1.4.10.2.8;10.2.8 The Attention-Based CODAM Model;348
1.4.10.2.9;10.2.9 Further Models of Consciousness;350
1.4.10.3;10.3 Criteria for the Review;350
1.4.10.3.1;10.3.1 Fits to Experimental Data;351
1.4.10.3.2;10.3.2 The Presence of Attention;352
1.4.10.3.3;10.3.3 As Providing an Explanation of Mental Diseases;354
1.4.10.3.4;10.3.4 Existence of an Inner Self;355
1.4.10.4;10.4 The Test Results;356
1.4.10.4.1;10.4.1 Higher Order Thought;357
1.4.10.4.2;10.4.2 Working Memory;358
1.4.10.4.3;10.4.3 Global Workspace;358
1.4.10.4.4;10.4.4 Complexity;358
1.4.10.4.5;10.4.5 Recurrence;358
1.4.10.4.6;10.4.6 Neural Field Theory;359
1.4.10.4.7;10.4.7 Relational Mind;359
1.4.10.4.8;10.4.8 CODAM;359
1.4.10.4.9;10.4.9 Possible Model Fusion;359
1.4.10.5;10.5 Conclusions;360
1.4.10.6;References;361
1.5;Part II Cognitive Architectures;364
1.5.1;11 Vision, Attention Control, and Goals Creation System;367
1.5.1.1;11.1 Overview;367
1.5.1.2;11.2 Computational Models of Visual Attention;368
1.5.1.2.1;11.2.1 Bottom-Up Visual Attention;368
1.5.1.2.2;11.2.2 Top-Down Visual Attention;370
1.5.1.2.3;11.2.3 Attentional Selection: Attention as a Controller;370
1.5.1.2.4;11.2.4 CODAM: COrollary Discharge of Attention Movement;371
1.5.1.3;11.3 Applications;372
1.5.1.3.1;11.3.1 Scene/Object Recognition;372
1.5.1.3.2;11.3.2 Novelty Detection and Video Summarization;373
1.5.1.3.3;11.3.3 Robotic Vision;374
1.5.1.4;11.4 Volumetric Saliency by Feature Competition;375
1.5.1.5;11.5 Problem Formulation;376
1.5.1.6;11.6 Saliency-Based Video Classification;378
1.5.1.7;11.7 Evaluation of Classification Performance;380
1.5.1.8;11.8 Action Recognition;384
1.5.1.9;11.9 Spatiotemporal Point Detection;385
1.5.1.10;11.10 Discussion;386
1.5.1.11;References;387
1.5.2;12 Semantics Extraction From Multimedia Data: An Ontology-Based Machine Learning Approach;391
1.5.2.1;12.1 Introduction;391
1.5.2.2;12.2 Fusing at the Semantic Level;393
1.5.2.2.1;12.2.1 Low-, Mid- and High-Level Fusion;393
1.5.2.2.1.1;12.2.1.1 Low-Level Fusion;394
1.5.2.2.1.2;12.2.1.2 Mid-Level Fusion;394
1.5.2.2.1.3;12.2.1.3 High-Level Fusion;394
1.5.2.2.2;12.2.2 Redundancy and Complementarity of Multimedia Information;395
1.5.2.2.2.1;12.2.2.1 Complementarity;395
1.5.2.2.2.2;12.2.2.2 Redundancy;396
1.5.2.2.3;12.2.3 Physical and Logical Document Structure;397
1.5.2.2.4;12.2.4 Practical Considerations;398
1.5.2.3;12.3 Methodology;400
1.5.2.3.1;12.3.1 Motivation;400
1.5.2.3.2;12.3.2 Problem Formulation;401
1.5.2.3.2.1;12.3.2.1 Reference Functions;402
1.5.2.3.2.2;12.3.2.2 Approximation Functions;402
1.5.2.3.2.3;12.3.2.3 Distance Between A-Boxes;402
1.5.2.3.3;12.3.3 Using Directed Graphs;403
1.5.2.3.3.1;12.3.3.1 Set of DL Assertions as Directed Graphs;403
1.5.2.3.4;12.3.4 Optimal Graph Expansion Operators;405
1.5.2.3.4.1;12.3.4.1 Elementary Operators;405
1.5.2.3.4.2;12.3.4.2 Greedy Search for Optimal Operators;405
1.5.2.3.4.3;12.3.4.3 Optimal Elementary Operators;406
1.5.2.3.4.4;12.3.4.4 Optimal Edge Addition;406
1.5.2.3.4.5;12.3.4.5 Optimal Vertex Addition;406
1.5.2.3.4.6;12.3.4.6 Complexity Issues;407
1.5.2.3.5;12.3.5 Scoring Functions for Graph Expansion Operators;407
1.5.2.3.5.1;12.3.5.1 Graph Local Representations;408
1.5.2.3.5.2;12.3.5.2 Representing Graph Paths as Features;409
1.5.2.3.5.3;12.3.5.3 Example;409
1.5.2.3.5.4;12.3.5.4 Representing Uncertainty;410
1.5.2.3.5.5;12.3.5.5 Complexity Issues;410
1.5.2.3.5.6;12.3.5.6 Soft Classifiers as scoring functions;411
1.5.2.4;12.4 Evaluation;412
1.5.2.4.1;12.4.1 Experimental Setting;412
1.5.2.4.1.1;12.4.1.1 Data;412
1.5.2.4.1.2;12.4.1.2 Methodology;413
1.5.2.4.2;12.4.2 Evaluation Results;415
1.5.2.5;12.5 Related Work;417
1.5.2.6;12.6 Conclusions;418
1.5.2.7;References;418
1.5.3;13 Cognitive Algorithms and Systems of Episodic Memory, Semantic Memory, and Their Learnings;420
1.5.3.1;13.1 Introduction;420
1.5.3.2;13.2 Computational Systems of Episodic Memory, Semantic Memory, and Their Learnings;422
1.5.3.2.1;13.2.1 Cognitive Systems of Learning and Memory;422
1.5.3.2.1.1;13.2.1.1 Collins and Quillian's Hierarchical Network Model;423
1.5.3.2.1.2;13.2.1.2 ACT-R;424
1.5.3.2.1.3;13.2.1.3 CLARION;426
1.5.3.2.2;13.2.2 Connectionist Systems of Episodic Memory, Semantic Memory and Their Learnings;427
1.5.3.3;13.3 A Multileveled Network System of Episodic Memory, Semantic Memory, and Their Learnings;430
1.5.3.3.1;13.3.1 Single Memory: To Locally Store Information;433
1.5.3.3.2;13.3.2 Memory Triangle: To Learn Meanings or Common Features;434
1.5.3.3.3;13.3.3 Organizing Memory Triangles: To Learn a Knowledge Structure;435
1.5.3.3.4;13.3.4 Conceptual Learning: To Ground Symbols to Their Meanings;435
1.5.3.3.5;13.3.5 Episodic Storage: To Store Episodic Memory;436
1.5.3.4;13.4 Simulating Episodic Memory, Semantic Memory, and Their Learnings;438
1.5.3.4.1;13.4.1 Episodic Learning, Serial Recall, and Recognition;438
1.5.3.4.2;13.4.2 Dreaming, Learning, and Memory Consolidation;440
1.5.3.4.3;13.4.3 Retrograde Amnesia and Anterograde Amnesia;442
1.5.3.4.4;13.4.4 Developmental Amnesia;443
1.5.3.4.5;13.4.5 Dense Amnesia and Direct Semantic Learning;445
1.5.3.4.6;13.4.6 Robustness and Flexibility;447
1.5.3.5;13.5 Future Challenges;447
1.5.3.6;References;448
1.5.4;14 Motivational Processes Within the Perception-Action Cycle;452
1.5.4.1;14.1 Overview;452
1.5.4.2;14.2 Background: Data and Models Relevant to Motivational Representations, Processes, and Structures;454
1.5.4.2.1;14.2.1 Previous Work on Motivation;454
1.5.4.2.2;14.2.2 Previous Work on Personality;456
1.5.4.2.3;14.2.3 Previous Work on Cognitive Architectures;458
1.5.4.2.4;14.2.4 Essential Desiderata;459
1.5.4.3;14.3 The CLARION Cognitive Architecture: The Role of Motivational Variables;459
1.5.4.3.1;14.3.1 Overview of CLARION;459
1.5.4.3.2;14.3.2 The Action-Centered Subsystem;461
1.5.4.3.3;14.3.3 The Non-Action-Centered Subsystem;463
1.5.4.3.4;14.3.4 The Motivational Subsystem;464
1.5.4.3.5;14.3.5 The Meta-Cognitive Subsystem;467
1.5.4.3.6;14.3.6 Model of Personality Within CLARION;468
1.5.4.4;14.4 Results, Successes, Limitations, and Future Challenges;469
1.5.4.4.1;14.4.1 Some Simulation Results;469
1.5.4.4.2;14.4.2 Implications, Limitations, and Future Work;473
1.5.4.5;References;474
1.5.5;15 Cognitive Algorithms and Systems of Error Monitoring, Conflict Resolution and Decision Making;476
1.5.5.1;15.1 Overview;476
1.5.5.2;15.2 Algorithm/System Justification;478
1.5.5.3;15.3 The Algorithm/System and How It Deviates from Its Predecessors;479
1.5.5.3.1;15.3.1 Robot Task Model Components;480
1.5.5.3.1.1;15.3.1.1 Plan Coordination Components;481
1.5.5.3.1.2;15.3.1.2 Plan Components;482
1.5.5.3.1.3;15.3.1.3 Resources;483
1.5.5.3.2;15.3.2 Functional Architecture;483
1.5.5.3.3;15.3.3 Information Flow;484
1.5.5.3.4;15.3.4 Petri Net Model of Task Plans;486
1.5.5.4;15.4 Successes, Limitations and Future Challenges;496
1.5.5.5;References;498
1.5.6;16 Developmental Learning of Cooperative Robot Skills: A Hierarchical Multi-Agent Architecture;500
1.5.6.1;16.1 Introduction;501
1.5.6.2;16.2 Hierarchical Multi-Agent Control Framework;503
1.5.6.2.1;16.2.1 Mapping Agents to Degrees of Freedom;504
1.5.6.2.2;16.2.2 Hierarchical Architecture;504
1.5.6.2.3;16.2.3 Continuous Problem Setting;504
1.5.6.3;16.3 Agent Architecture: The Case of Robot Kinematic Chains;507
1.5.6.3.1;16.3.1 Basic Internal Functions of an Agent;509
1.5.6.3.2;16.3.2 Continuous Reinforcement Learning: Kinematic Chain;510
1.5.6.3.2.1;16.3.2.1 Q-Learning Method;510
1.5.6.3.2.2;16.3.2.2 State-Space Fuzzification for Continuous Problem Sets;511
1.5.6.3.2.3;16.3.2.3 Action Selection and Reward Function;512
1.5.6.4;16.4 Agent Architecture: The Case of Collaborative Mobile Robots;514
1.5.6.4.1;16.4.1 Continuous Reinforcement Learning: Mobile Robots;516
1.5.6.4.1.1;16.4.1.1 TD(λ) Learning Method;516
1.5.6.4.1.2;16.4.1.2 TD(λ) Learning with Linear Function Approximation;518
1.5.6.4.1.3;16.4.1.3 TD(λ) Learning Method with Gradient Correction;519
1.5.6.4.1.4;16.4.1.4 Linear Function Approximation Using a Fuzzy Rule Base;520
1.5.6.5;16.5 The RL-Based Robot Control Architecture;523
1.5.6.6;16.6 Numerical Experiments: Results and Discussion;524
1.5.6.6.1;16.6.1 Single Kinematic Chain;524
1.5.6.6.2;16.6.2 Multi-Finger Grasp;529
1.5.6.6.3;16.6.3 Collaborative Mobile Robots: Box-Pushing Task;533
1.5.6.7;16.7 Conclusion and Future Work;539
1.5.6.8;References;540
1.5.7;17 Actions and Imagined Actions in Cognitive Robots;542
1.5.7.1;17.1 Introduction;543
1.5.7.2;17.2 The GNOSYS Playground;547
1.5.7.3;17.3 Forward/Inverse Model for Reaching: The Passive Motion Paradigm;550
1.5.7.4;17.4 Spatial Map and Pushing Sensorimotor Space;555
1.5.7.4.1;17.4.1 Acquisition of the Sensorimotor Space;555
1.5.7.4.2;17.4.2 Dynamics of the Sensorimotor Space;558
1.5.7.4.3;17.4.3 Value Field Dynamics: How Goal Influences Activity in SMS;561
1.5.7.4.4;17.4.4 Reaching Spatial Goals Using the Spatial Sensorimotor Space;562
1.5.7.4.5;17.4.5 Learning the Reward Structure in "Pushing" Sensorimotor Space;564
1.5.7.5;17.5 A Goal-Directed, Mental Sequence of "Push-Move-Reach";569
1.5.7.6;17.6 Discussion;571
1.5.7.7;References;573
1.5.8;18 Cognitive Algorithms and Systems: Reasoning and Knowledge Representation;576
1.5.8.1;18.1 Introduction;576
1.5.8.2;18.2 Neurons and Symbols;578
1.5.8.2.1;18.2.1 Abstraction;579
1.5.8.2.2;18.2.2 Modularity;579
1.5.8.2.3;18.2.3 Applications;580
1.5.8.2.4;18.2.4 Expressiveness;580
1.5.8.2.5;18.2.5 Representation;581
1.5.8.2.6;18.2.6 Nonclassical Reasoning;581
1.5.8.3;18.3 Neural-Symbolic Learning Systems;582
1.5.8.4;18.4 Technical Background;584
1.5.8.4.1;18.4.1 Neural Networks and Neural-Symbolic Systems;584
1.5.8.4.2;18.4.2 The Language of Connectionist Modal Logic;586
1.5.8.4.3;18.4.3 Reasoning About Time and Knowledge;588
1.5.8.5;18.5 Connectionist Nonclassical Reasoning;589
1.5.8.5.1;18.5.1 Connectionist Modal Reasoning;590
1.5.8.5.2;18.5.2 Connectionist Temporal Reasoning;591
1.5.8.5.3;18.5.3 Case Study;593
1.5.8.6;18.6 Fibring Neural Networks;596
1.5.8.7;18.7 Concluding Remarks;597
1.5.8.8;References;600
1.5.9;19 Information Theory of Decisions and Actions;604
1.5.9.1;19.1 Introduction;605
1.5.9.2;19.2 Rationale;606
1.5.9.3;19.3 Notation;608
1.5.9.3.1;19.3.1 Probabilistic Quantities;608
1.5.9.3.2;19.3.2 Entropy and Information;608
1.5.9.4;19.4 Markov Decision Processes;610
1.5.9.4.1;19.4.1 MDP: Definition;610
1.5.9.4.2;19.4.2 The Value Function of an MDP and Its Optimization;611
1.5.9.5;19.5 Coupling Information with Decisions and Actions;613
1.5.9.5.1;19.5.1 Information and the Perception-Action Cycle;613
1.5.9.5.1.1;19.5.1.1 Causal Bayesian Networks;614
1.5.9.5.1.2;19.5.1.2 Bayesian Network for a Reactive Agent;614
1.5.9.5.1.3;19.5.1.3 Bayesian Network for a General Agent;615
1.5.9.5.2;19.5.2 Actions as Coding;616
1.5.9.5.3;19.5.3 Information-To-Go;619
1.5.9.5.3.1;19.5.3.1 A Bellman Picture;619
1.5.9.5.3.2;19.5.3.2 Perfectly Adapted Environments;619
1.5.9.5.3.3;19.5.3.3 Predictive Information;620
1.5.9.5.3.4;19.5.3.4 Symmetry;621
1.5.9.5.4;19.5.4 The Balance of Information;621
1.5.9.5.4.1;19.5.4.1 The Data Processing Inequality and Chain Rules for Information;622
1.5.9.5.4.2;19.5.4.2 Multi-Information and Information in Directed Acyclic Graphs;623
1.5.9.6;19.6 Bellman Recursion for Sequential Information Processing;624
1.5.9.6.1;19.6.1 Introductory Remarks;625
1.5.9.6.2;19.6.2 Decision Complexity;626
1.5.9.6.3;19.6.3 Recursion Equation for the MDP Information-To-Go;628
1.5.9.6.3.1;19.6.3.1 The Environmental Response Term;628
1.5.9.6.3.2;19.6.3.2 The Decision Complexity Term;629
1.5.9.7;19.7 Trading Information and Value;629
1.5.9.7.1;19.7.1 The "Free-Energy" Functional;629
1.5.9.7.2;19.7.2 Perfectly Adapted Environments;632
1.5.9.8;19.8 Experiments and Discussion;633
1.5.9.8.1;19.8.1 Information-Value Trade-Off in a Maze;633
1.5.9.8.2;19.8.2 Soft vs. Sharp Policies;634
1.5.9.9;19.9 Conclusions;636
1.5.9.10;References;637
1.5.10;20 Artificial Consciousness;640
1.5.10.1;20.1 Introduction;640
1.5.10.2;20.2 Goals of Artificial Consciousness;643
1.5.10.2.1;20.2.1 Environment Coupling;645
1.5.10.2.2;20.2.2 Autonomy and Resilience;647
1.5.10.2.3;20.2.3 Phenomenal Experience;648
1.5.10.2.4;20.2.4 Semantics or Intentionality of the First Type;650
1.5.10.2.5;20.2.5 Self-Motivations or Intentionality of the Second Type;651
1.5.10.2.6;20.2.6 Information Integration;652
1.5.10.2.7;20.2.7 Attention;653
1.5.10.3;20.3 A Consciousness-Oriented Architecture;655
1.5.10.3.1;20.3.1 The Elementary Intentional Unit;657
1.5.10.3.2;20.3.2 The Intentional Module;659
1.5.10.3.3;20.3.3 The Intentional Architecture;662
1.5.10.3.4;20.3.4 Check List for Consciousness-Oriented Architectures;664
1.5.10.3.5;20.3.5 A Comparison with Other Approaches;666
1.5.10.4;20.4 Conclusion;668
1.5.10.5;References;670
1.6;Part III Hardware Implementations;675
1.6.1;21 Smart Sensor Networks;677
1.6.1.1;21.1 Overview;677
1.6.1.1.1;21.1.1 Wireless Sensor Networks Technology;678
1.6.1.1.2;21.1.2 Design Requirements and Issues;680
1.6.1.1.3;21.1.3 Implementation Issues;681
1.6.1.2;21.2 Engineering Technology Justifications;682
1.6.1.2.1;21.2.1 Application of Perception-Reason-Action Sensor Networks;682
1.6.1.2.1.1;21.2.1.1 Distributed Multi-robot Perception, Navigation, and Manipulation;683
1.6.1.2.1.2;21.2.1.2 Distributed Sense-and-Response Systems;684
1.6.1.2.1.3;21.2.1.3 Dynamic Situation Awareness and Decision Support Systems;686
1.6.1.2.2;21.2.2 Review of Application Challenges;686
1.6.1.2.3;21.2.3 Review of Previous Engineering Technology Systems;687
1.6.1.3;21.3 The System;688
1.6.1.3.1;21.3.1 Components;690
1.6.1.3.1.1;21.3.1.1 Data-Centric Sensor Network Protocols;690
1.6.1.3.1.2;21.3.1.2 Distributed Services;691
1.6.1.3.1.3;21.3.1.3 Distributed Perception-Reason-Action Modules;695
1.6.1.3.2;21.3.2 Proof of Concept;700
1.6.1.3.2.1;21.3.2.1 Multi-robot Control Applications;700
1.6.1.3.2.2;21.3.2.2 Real-Time Target Tracking Applications;702
1.6.1.3.3;21.3.3 Preliminary Results;703
1.6.1.3.3.1;21.3.3.1 Performance of Multi-robot Control Applications;703
1.6.1.3.3.2;21.3.3.2 Performance of Target Tracking Applications;704
1.6.1.4;21.4 Future Work;706
1.6.1.4.1;21.4.1 Future Extensions;706
1.6.1.5;References;708
1.6.2;22 Multisensor Fusion for Low-Power Wireless Microsystems;712
1.6.2.1;22.1 Introduction;712
1.6.2.2;22.2 ANNs in Electrochemical Sensor Fusion;715
1.6.2.3;22.3 Neural Hardware in VLSI Technology;718
1.6.2.3.1;22.3.1 Supervised ANN-Based Hardware;719
1.6.2.3.2;22.3.2 Unsupervised ANN-Based Hardware;719
1.6.2.4;22.4 Analytical Techniques for Counteracting Drift;721
1.6.2.4.1;22.4.1 Recalibration;721
1.6.2.4.2;22.4.2 Data Filtering;722
1.6.2.4.3;22.4.3 Drift Insensitivity;722
1.6.2.4.4;22.4.4 Fault Isolation;723
1.6.2.5;22.5 Lab-in-a-Pill;723
1.6.2.6;22.6 The "Neural" Solution: Adaptive Stochastic Classifier;726
1.6.2.6.1;22.6.1 Continuous Restricted Boltzmann Machine;726
1.6.2.6.1.1;22.6.1.1 Continuous Stochastic Neuron;726
1.6.2.6.1.2;22.6.1.2 CRBM Learning Rule;727
1.6.2.6.2;22.6.2 Training Methodology;728
1.6.2.6.3;22.6.3 Simulation Results;729
1.6.2.6.3.1;22.6.3.1 With Simple, Multidimensional Overlapping Clusters;730
1.6.2.6.3.2;22.6.3.2 With 2D Non-Gaussian Meshed Clusters;732
1.6.2.6.3.3;22.6.3.3 With Real Drifting Data;734
1.6.2.7;22.7 CRBM Hardware and Experimental Results;737
1.6.2.7.1;22.7.1 Chip Implementation;737
1.6.2.7.2;22.7.2 Learning in Hardware;738
1.6.2.7.3;22.7.3 Regenerating Data With a Symmetric Distribution;740
1.6.2.7.4;22.7.4 Regenerating Data with a Nonsymmetric Distribution;741
1.6.2.7.5;22.7.5 Regenerating Data with a Doughnut-Shaped Distribution;742
1.6.2.8;22.8 Discussion and Future Works;743
1.6.2.9;22.9 Summary;745
1.6.2.10;References;745
1.6.3;23 Bio-Inspired Mechatronics and Control Interfaces;750
1.6.3.1;23.1 Overview;750
1.6.3.2;23.2 Previous Work;752
1.6.3.3;23.3 System Architecture;754
1.6.3.3.1;23.3.1 Background and Problem Definition;754
1.6.3.3.2;23.3.2 System Training Phase;754
1.6.3.3.2.1;23.3.2.1 Recording Arm Motion;755
1.6.3.3.2.2;23.3.2.2 Recording Muscle Activity;756
1.6.3.3.3;23.3.3 Data Representation;757
1.6.3.3.4;23.3.4 Decoding Arm Motion from EMG Signals;759
1.6.3.3.5;23.3.5 Modeling Human Arm Movement;761
1.6.3.3.5.1;23.3.5.1 Graphical Models;761
1.6.3.3.5.2;23.3.5.2 Building the Model;762
1.6.3.3.5.3;23.3.5.3 Inference Using the Graphical Model;765
1.6.3.3.6;23.3.6 Filtering Motion Estimates Using the Graphical Model;766
1.6.3.3.7;23.3.7 Robot Control;766
1.6.3.4;23.4 Experimental Results;768
1.6.3.4.1;23.4.1 Hardware and Experiment Design;768
1.6.3.4.2;23.4.2 Efficiency Assessment;769
1.6.3.5;23.5 Conclusion and Future Extensions;772
1.6.3.6;References;774
1.7;Index;777


