
E-book, English, 356 pages

Dolan / Sharot Neuroscience of Preference and Choice

Cognitive and Neural Mechanisms
1st edition, 2011
ISBN: 978-0-12-381432-6
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark




One of the most pressing questions in neuroscience, psychology, and economics today is how the brain generates preferences and makes choices. With a unique interdisciplinary approach, this volume is among the first to explore the cognitive and neural mechanisms mediating the generation of the preferences that guide choice. From preferences determining mundane purchases, to social preferences influencing mate choice, through to moral decisions, the authors adopt diverse approaches to answer this question. Chapters explore the instability of preferences and the common neural processes that occur across preferences. Edited by one of the world's most renowned cognitive neuroscientists, the volume brings together a host of international contributors.

- Emphasis on the common processes underlying preference generation makes the material applicable to a variety of disciplines: neuroscience, psychology, economics, law, philosophy, and more
- Offers a specific focus on how preferences are generated to guide decision making, carefully examining one aspect of the broad field of neuroeconomics and complementing existing volumes
- Features outstanding international scholarship, with each chapter written by an expert in its topic area


Further Information & Material


Front Cover
Neuroscience of Preference and Choice
Copyright Page
Contents
Preface
Contributors
I: MECHANISMS
  1. The Neurobiology of Preferences
    Introduction
    Concepts of Choice and Preference
    What Are the Potential Neural Mechanisms Underlying Preference
    Conclusion
    References
  2. Models of Value and Choice
    Introduction
    Reinforcement Learning
    Controllers
    Combination
    Discussion
    Acknowledgments
    References
  3. Predicting Emotional Reactions: Mechanisms, Bias and Choice
    Introduction
    How We Predict Emotional Reactions
    Biases in Predicting Emotional Reactions
    Conclusion
    Acknowledgments
    References
II: CONTEXTUAL FACTORS
  4. The Evolution of Our Preferences: Insights from Non-human Primates
    Introduction
    How Choice Changes Preferences in Adult Humans
    The Origins of Choice-Based Preference Reversals
    How Framing Affects Choice in Adult Humans
    Framing Effects in Capuchin Monkeys
    How Studies of the Origin of Choice Biases Inform Adult Human Decision-Making
    Acknowledgments
    References
  5. The Effect of Context on Choice and Value
    Introduction
    Brief Historical Overview
    Effect of Context on Choices
    Effect of Context on Values
    The Neurobiology of Loss Aversion
    Conclusions
    Acknowledgments
    References
  6. Preference Change through Choice
    Introduction
    Choice Blindness
    Choice Blindness and Preference Change for Faces
    Choice Blindness and Preference Change for Risky Choices
    Choice Blindness and Preference Change for Political Opinion
    Discussion
    Acknowledgment
    References
  7. Set-Size Effects and the Neural Representation of Value
    Introduction
    The Paradox of Choice
    Context-Dependent Choice Behavior
    The Neural Basis of Decision-Making
    Neural Activity and the Effect of Alternatives
    Set-Size Effects and the Neural Representation of Subjective Value
    Conclusion
    References
III: SOCIAL FACTORS
  8. Social Factors and Preference Change
    Introduction
    The Presence of Others
    The Behavior and Experience of Others
    The Minds of Others
    Conclusions
    Acknowledgments
    References
  9. Social and Emotional Factors in Decision-Making: Appraisal and Value
    Introduction
    Emotional Factors
    Social Factors and Emotional Influences
    Conclusions
    References
IV: PERCEPTUAL FACTORS
  10. Auditory Preferences and Aesthetics: Music, Voices, and Everyday Sounds
    Introduction
    Annoying Sounds
    Pleasant Environmental Sounds
    Voices
    Music
    Conclusions
    Acknowledgments
    References
  11. The Flexibility of Chemosensory Preferences
    Introduction
    Needs, Goals and Values in the Flexibility of Chemosensory Preferences
    The Role of Learning and Exposure Factors in the Flexibility of Chemosensory Preferences
    The Role of Other Sensory Inputs and Cognitive Factors in the Flexibility of Chemosensory Preferences
    Conclusion
    References
  12. Dynamic Preference Formation via Gaze and Memory
    Gaze and Preference
    Novelty Versus Familiarity
    How Does Gaze Interact with Memory in Preference Decision?
    Dynamic and Time-Evolving Process Towards Preference Decision
    References
V: IMPLICATIONS, APPLICATION AND FUTURE DIRECTION
  13. Choice Sets as Percepts
    Introduction
    Samuelson Meets Weber
    Search and the Choice Process
    The Drift-Diffusion Model and Small Choice Sets
    Next Steps
    Acknowledgments
    References
  14. Preferences and Their Implication for Policy, Health and Wellbeing
    Introduction
    Routes to Behavior Change
    Systems for Self-Control
    Behavior Change Interventions
    Conclusion
    Acknowledgments
    References
Index


Chapter 2

Models of Value and Choice


Peter Dayan, Gatsby Computational Neuroscience Unit, UCL, London, UK

Publisher Summary


There are two central sources of anomalies in the relationship between value and choice. First, the different control systems can disagree about their values. Actions involve such things as picking a stimulus or pressing a button, and the environment specifies a set of rules governing the transitions between states depending on the action chosen. Evaluating actions by searching the resulting tree of future states is extremely difficult, because the tree typically grows exponentially with the number of layers considered. Model-based and model-free control are ways of doing this which differ in the information about the environment they use and the computations they perform. A model-free system would learn the utility of pressing a lever, but would not have the informational wherewithal to realize that this utility had changed when the cheese it delivered had been poisoned. Pavlovian control is also based on predictions of affectively important outcomes such as rewards and punishments. However, rather than determining the choices that would lead to the acquisition or avoidance of these outcomes, it expresses a set of hard-wired preparatory and consummatory choices.

Keywords: Reinforcement learning; motivation; model-based control; model-free control; Pavlovian control; utility

Outline

One lesson from modern neuroeconomics is that value precedes preference. A second lesson is that this happens in multiple competing and cooperating systems. In this chapter we consider some aspects and consequences of this multiplicity. Even if all systems share a common, Platonic, notion of utility, they acquire and use information about the environment and about the utility in different ways, spanning a spectrum of computationally rationalizable possibilities. We discuss how and why values may not be consistent across systems, and how choices emerging from some systems may not be consistent with their own underlying values. Such complexities may motivate some of the significant, albeit contained, anomalies of choice.

Introduction


The relationship between value and choice seems very simple. We should choose the things we value and value the things we choose. Indeed, one of the most beautiful results in economics goes exactly along these lines – if only our choices between possible actions were to satisfy some simple, intuitive prerequisites, such as being transitive, then these actions could be arranged along a single axis of preference, and could be endowed with values that could be treated as governing choice. Although psychologists or behaviorally-inclined economists might go no further than asking people to report their subjective values in one of a variety of ways, neuroscientists could search for postulated forms of such a value function in the activity of neurons or their blood flow surrogates.
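To make the classical result concrete for a finite option set: if preferences are complete and transitive, each option can be assigned a number (for instance, how many alternatives it beats), and choosing by maximizing that number reproduces the original preferences. A minimal Python sketch, with an invented three-option preference relation purely for illustration:

```python
# A complete, transitive preference relation over a finite set of options
# can be represented by a utility function: rank each option by how many
# alternatives it beats, then choose by maximizing utility.

options = ["apple", "banana", "cherry"]          # hypothetical choice set

# prefers[(a, b)] == True means "a is preferred to b".
# This example relation is complete and transitive: apple > banana > cherry.
prefers = {
    ("apple", "banana"): True,  ("banana", "apple"): False,
    ("banana", "cherry"): True, ("cherry", "banana"): False,
    ("apple", "cherry"): True,  ("cherry", "apple"): False,
}

def utility(option):
    """Utility = number of alternatives this option is preferred to."""
    return sum(prefers[(option, other)] for other in options if other != option)

def choose(choice_set):
    """Choice by utility maximization reproduces the original preferences."""
    return max(choice_set, key=utility)

print({o: utility(o) for o in options})   # {'apple': 2, 'banana': 1, 'cherry': 0}
print(choose(["banana", "cherry"]))       # 'banana', as the relation dictates
```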

Unfortunately, life is not so simple. A subject’s choices fail to satisfy any set of intuitive axioms, instead exhibiting the wide range of anomalies explored in this book and elsewhere. When subjects can be persuaded to express values, not only do these also show anomalies and inconsistencies, but also their anomalies are not quite the same as those of the choices with which they would be associated. Thus, subjects will subscribe to values that are not consistent with their expressible preferences.

There have been many interesting approaches to save part of the bacon of value and/or the relationship between value and choice by appealing to external or internal factors. The former (such things as the radical asymmetry in the information state between the experimenter and the subjects) are covered in other chapters in this book. In this chapter, we will show that even if there were a Platonic utility function governing idealized values of outcomes, rationalizable complexities of the internal architecture of control imply a range of apparent inconsistencies.

We start by noting three separate systems that have been mooted as being involved. Two (called model-based and model-free – Daw, Niv & Dayan, 2005; Dickinson & Balleine, 2002) are instrumental; a third is Pavlovian. A fourth, episodic, system has also been suggested (Lengyel & Dayan, 2007), but enjoys less empirical support and we will not discuss it here.

Crudely speaking (and we will see later why this is too crude in the current context), for instrumental systems, the relationship in the environment between choices and affectively important outcomes such as rewards and punishments plays a key role in determining choice. Subjects repeat actions that lead to high value outcomes (rewards), and avoid ones that lead to low value outcomes (punishments).
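A caricature of this instrumental logic, as a hedged Python sketch (the actions, rewards, and learning parameters are invented, and the update rule is a generic prediction-error scheme rather than anything specific from this chapter): actions whose sampled outcomes carry higher utility come to be chosen more often.

```python
import random

# Model-free instrumental learning in caricature: keep a cached value Q(a)
# for each action, nudge it toward the utility of the outcome just received,
# and (mostly) repeat whichever action currently looks best.

actions = ["press_lever", "pull_chain"]            # hypothetical actions
Q = {a: 0.0 for a in actions}                      # cached action values
reward = {"press_lever": 1.0, "pull_chain": 0.2}   # assumed outcome utilities
alpha, epsilon = 0.1, 0.1                          # learning rate, exploration

for trial in range(500):
    if random.random() < epsilon:                  # occasional exploration
        a = random.choice(actions)
    else:                                          # otherwise exploit cached values
        a = max(actions, key=Q.get)
    r = reward[a]                                  # utility of the outcome obtained
    Q[a] += alpha * (r - Q[a])                     # prediction-error update

print(Q)   # Q("press_lever") ends up near 1.0, so it is repeated far more often
```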

Conversely, for the Pavlovian system, choices are determined by predictions of these outcomes, irrespective of the actual environmental relationship between choices and outcomes (Dickinson, 1980; Mackintosh, 1983). Thus, for instance, animals cannot help but approach (rather than run away from) a source of food, even if the experimenter has cruelly arranged things in a looking-glass world so that the approach appears to make the food recede, whereas retreating would make the food more accessible (Hershberger, 1986).
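The following toy simulation, loosely modeled on that looking-glass contingency (the reward values and learning parameters are invented), illustrates the contrast: a hard-wired approach policy keeps losing the food, whereas a simple instrumental learner discovers that withdrawal is what actually pays.

```python
import random

# Toy "looking-glass" contingency: approaching makes the food recede (no
# reward), withdrawing makes it accessible (reward). A Pavlovian controller
# approaches the food cue regardless; an instrumental learner can discover
# that withdrawal is what actually pays off.

def outcome(action):
    return 1.0 if action == "withdraw" else 0.0    # reversed contingency

def pavlovian_policy(_Q):
    return "approach"                              # hard-wired approach to the food cue

def instrumental_policy(Q, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(Q))
    return max(Q, key=Q.get)

def run(policy, trials=300, alpha=0.1):
    Q = {"approach": 0.0, "withdraw": 0.0}
    total = 0.0
    for _ in range(trials):
        a = policy(Q)
        r = outcome(a)
        Q[a] += alpha * (r - Q[a])
        total += r
    return total

print("Pavlovian reward:   ", run(pavlovian_policy))     # ~0: keeps approaching
print("Instrumental reward:", run(instrumental_policy))  # close to the maximum
```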

Rather than starting from choices, models of all three systems start from values, from which choices are derived. Consistent with this, value signals have duly been found in a swathe of neural systems including, amongst others, the orbitofrontal cortex, the amygdala, the striatum, and the dopaminergic neuromodulatory system that innervates various of these loci (see Morrison & Salzman, 2010; Niv, 2009; O’Doherty, 2004, 2007; Samejima, Ueda, Doya & Kimura, 2005; Schultz, 2002; Wallis & Kennerley, 2010, and references therein). However, the mechanisms turning these values into choices are less clear.
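One standard candidate for such a mechanism, offered here only as an illustrative sketch rather than anything endorsed in this chapter, is a softmax rule that converts values into stochastic choice probabilities, with an inverse-temperature parameter controlling how deterministically the highest value wins:

```python
import math

def softmax_choice_probs(values, beta=1.0):
    """Map a dict of option values to choice probabilities.

    beta is the inverse temperature: high beta approaches strict
    maximization, low beta approaches random choice.
    """
    m = max(values.values())                       # subtract the max for numerical stability
    exps = {o: math.exp(beta * (v - m)) for o, v in values.items()}
    z = sum(exps.values())
    return {o: e / z for o, e in exps.items()}

values = {"A": 2.0, "B": 1.0, "C": 0.5}            # arbitrary illustrative values
print(softmax_choice_probs(values, beta=1.0))      # graded preference for A
print(softmax_choice_probs(values, beta=10.0))     # nearly deterministic choice of A
```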

There are two central sources of anomalies in the relationship between value and choice. First, the different systems can disagree about their values (and thus the choices they would make) (Dickinson & Balleine, 2002). However, choice is, almost by definition, unitary. Thus, if the values produced by the different systems differ, then the ultimate behavior will clearly have to fail to follow all of them. Nevertheless, we will argue that it is adaptive to have multiple systems (even at the expense of value-choice inconsistencies), since they offer different “sweet-spots” in the trade-off between adaptivity and adaptability.
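To picture how a unitary choice might emerge despite such disagreement, here is a deliberately simplified sketch, loosely in the spirit of the uncertainty-based arbitration of Daw, Niv and Dayan (2005) cited above; the numbers and the inverse-uncertainty weighting are purely illustrative:

```python
# Two systems value the same actions differently; weight each system's value
# by the inverse of its (assumed) uncertainty and let the combined value
# drive a single choice.

model_free  = {"values": {"go": 0.8, "stop": 0.3}, "uncertainty": 0.5}
model_based = {"values": {"go": 0.2, "stop": 0.9}, "uncertainty": 0.1}

def combined_value(action, systems):
    weights = [1.0 / s["uncertainty"] for s in systems]
    total_w = sum(weights)
    return sum(w * s["values"][action] for w, s in zip(weights, systems)) / total_w

systems = [model_free, model_based]
choice = max(["go", "stop"], key=lambda a: combined_value(a, systems))
print(choice)   # "stop": the more certain model-based system dominates
```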

The second source of anomalies is the Pavlovian system itself. As described above, it has the property that choices are determined directly by predictions, without any regard for their appropriateness. Thus, for example, given the chance, the subjects in the looking-glass world would clearly exhibit a preference for food, but nevertheless emit actions that are equally clearly inconsistent with this preference.

Having discussed the individual systems and their properties, we then consider issues that arise from their interaction. For instance, we have interpreted various findings as suggesting that Pavlovian mechanisms may interfere with model-based instrumental evaluation (Dayan & Huys, 2008). Equally, if one system controls behavior, then it can prevent other systems from being able to gain sufficient experience to acquire a full set of values and associated preferences.
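As a toy picture of what such interference could look like (an illustrative caricature of Pavlovian "pruning" of aversive branches, not the analysis of Dayan & Huys, 2008): a model-based evaluation that refuses to look past a strongly aversive immediate outcome can miss a sequence whose overall value is higher.

```python
# A decision tree where the best path starts with an immediate loss.
# Keys are actions; each maps to (immediate utility, subtree or None).
tree = {
    "left":  (+1.0, None),                          # small gain, then nothing
    "right": (-2.0, {"onward": (+5.0, None)}),      # loss first, big gain later
}

def best_value(tree, prune_below=None):
    """Model-based look-ahead; optionally prune branches with a large immediate loss."""
    if tree is None:
        return 0.0
    values = []
    for action, (utility, subtree) in tree.items():
        if prune_below is not None and utility < prune_below:
            continue                                # Pavlovian inhibition: do not even look
        values.append(utility + best_value(subtree, prune_below))
    return max(values) if values else 0.0

print(best_value(tree))                   # 3.0: full evaluation finds -2 + 5
print(best_value(tree, prune_below=-1.0)) # 1.0: pruned evaluation settles for the small gain
```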

We conclude the chapter with some general remarks about the naivety of the original expectation for a simple mapping between value and choice for subjects suffering from limited computational power in an unknown and changing world.

Reinforcement Learning


Reinforcement learning (Sutton & Barto, 1998) formalizes the interaction between a subject and its environment. In the simplest cases, the environment comprises a set of possible states $\mathcal{X}=\{x\}$, actions $\mathcal{A}=\{a\}$ and outcomes $\mathcal{O}=\{o\}$. We will also consider that the subject is in an internal motivational state $m$, such as hunger or thirst, although sometimes, as we discuss later, we will concatenate external and internal states to make a single state of the world (also called $x$).

In human experiments, states are often like the stages of a task, and are typically signaled by cues such as lights and tones. A state could also be a location in a maze. Actions involve such things as picking a stimulus or pressing a button.1 The environment specifies a set of rules governing the transitions between states depending on the action chosen. This is often treated as a Markov chain, characterized by a transition matrix $\mathcal{T}_{xx'}(a)$ that specifies the probability of moving from state $x$ to state $x'$ given action $a$ (Puterman, 2005; Sutton & Barto, 1998). We will treat the outcome $o(x,a)$ as being a deterministic function of the state $x$ and action $a$; it is a simple generalization to make the outcomes also be probabilistic.
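A minimal Python sketch of this formalization (the particular states, actions, outcome function and policy are invented for illustration): the environment is given by a transition function over states and a deterministic outcome of state and action, and a policy assigns a distribution over actions to each state.

```python
import random

# States x, actions a and outcomes o(x, a), with transitions T(x' | x, a),
# in the spirit of the formalization above (the contents are invented).

states  = ["cue_light", "cue_tone", "end"]
actions = ["press_button", "pick_stimulus"]

# T[(x, a)] is a distribution over successor states x'.
T = {
    ("cue_light", "press_button"):  {"cue_tone": 0.8, "end": 0.2},
    ("cue_light", "pick_stimulus"): {"end": 1.0},
    ("cue_tone",  "press_button"):  {"end": 1.0},
    ("cue_tone",  "pick_stimulus"): {"end": 1.0},
}

# Deterministic outcome o(x, a); generalizing to probabilistic outcomes is easy.
def outcome(x, a):
    return 1.0 if (x, a) == ("cue_tone", "press_button") else 0.0

# A policy maps each state to a distribution over actions.
policy = {
    "cue_light": {"press_button": 0.5, "pick_stimulus": 0.5},
    "cue_tone":  {"press_button": 1.0, "pick_stimulus": 0.0},
}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

def run_episode(x="cue_light"):
    total = 0.0
    while x != "end":
        a = sample(policy[x])          # choose an action under the policy
        total += outcome(x, a)         # collect the outcome utility o(x, a)
        x = sample(T[(x, a)])          # transition to the next state
    return total

print(run_episode())   # 1.0 if the tone stage is reached and the button pressed, else 0.0
```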

In this context, the subject’s choices comprise a policy $\pi$. We consider so-called...


