Understanding Current Approaches - with Examples in Java and Greenfoot
Book, English, 184 pages, format (W × H): 160 mm × 241 mm, weight: 471 g
ISBN: 978-3-031-09029-5
Publisher: Springer
In ancient games such as chess or Go, the most brilliant players can improve by studying the strategies produced by a machine. Robotic systems practice their own movements. In arcade games, agents capable of learning reach superhuman levels within a few hours. How do these spectacular reinforcement learning algorithms work?
With easy-to-understand explanations and clear examples in Java and Greenfoot, you can acquire the principles of reinforcement learning and apply them in your own intelligent agents. Greenfoot (M. Kölling, King's College London) and the hamster model (D. Boles, University of Oldenburg) are simple yet powerful didactic tools that were developed to convey basic programming concepts.
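For readers new to the platform: a Greenfoot agent is an Actor subclass whose act() method is called once per simulation step. A minimal sketch (the class name and behavior are invented here for illustration, not taken from the book) could look like this:

```java
import greenfoot.*;

// Minimal Greenfoot actor: act() runs once per simulation step.
// The class name and behavior are illustrative only.
public class WanderingHamster extends Actor {
    @Override
    public void act() {
        move(1);                                   // advance one step
        if (Greenfoot.getRandomNumber(10) == 0) {  // occasionally turn
            turn(90);
        }
    }
}
```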
The result is an accessible introduction to machine learning that concentrates on reinforcement learning. The book takes the reader through the steps of developing intelligent agents, from the very basics to advanced aspects, touching on a variety of machine learning algorithms along the way, and invites the reader to play along, experiment, and add their own ideas.

Target audience
Upper undergraduate
Further information & material
Foreword by Michael Kölling (King's College London)
Preface
Introduction
Chapter 1: Reinforcement learning as a subfield of machine learning
1.1 Machine Learning as automated processing of feedback from the environment
1.2 Machine Learning methods
1.3 Reinforcement Learning with Java
Bibliography
Chapter 2: Basic concepts of reinforcement learning
2.1 Agents
2.2 Control of the agent
2.3 Evaluation of states and actions (Q-function, Bellman equation)
Bibliography
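To hint at what Chapter 2 covers, here is a minimal tabular sketch of a Q-function with a greedy action choice and a deterministic Bellman backup, Q(s,a) = r(s,a) + gamma * max_a' Q(s',a'). The state/action layout, class name, and discount factor are illustrative assumptions, not code from the book:

```java
// Tabular action-value function with greedy control and a deterministic
// Bellman backup. Sizes and gamma are illustrative assumptions.
public class QFunction {
    private final double[][] q;        // q[state][action]
    private final double gamma = 0.9;  // discount factor (assumed)

    public QFunction(int states, int actions) {
        q = new double[states][actions];
    }

    // Control: the agent picks the action with the highest Q-value.
    public int greedyAction(int s) {
        int best = 0;
        for (int a = 1; a < q[s].length; a++)
            if (q[s][a] > q[s][best]) best = a;
        return best;
    }

    // Deterministic Bellman backup for a known transition (s, a) -> sNext.
    public void backup(int s, int a, double reward, int sNext) {
        q[s][a] = reward + gamma * q[sNext][greedyAction(sNext)];
    }
}
```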
Chapter 3: Optimal decision-making in a known environment
3.1 Value Iteration
3.1.1 Target-oriented state evaluation ("backward induction")
3.1.2 Policy-based state valuation (reward prediction)
3.2 Iterative policy search
3.2.1 Direct policy improvement
3.2.2 Mutual improvement of policy and value-function
3.3 Optimal policy in a board game scenario
3.4 Summary
Bibliography
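Chapter 3's central algorithm can be conveyed in a few lines. Below is a minimal value-iteration sketch for a known, deterministic environment; the array encoding of transitions and rewards, the discount factor, and the stopping threshold are all assumptions made for illustration:

```java
// Value iteration on a known, deterministic MDP.
// transition[s][a] = successor state, reward[s][a] = immediate reward.
public class ValueIteration {
    public static double[] run(int[][] transition, double[][] reward,
                               double gamma, double epsilon) {
        double[] v = new double[transition.length];
        double delta;
        do {
            delta = 0.0;
            for (int s = 0; s < transition.length; s++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int a = 0; a < transition[s].length; a++)
                    best = Math.max(best, reward[s][a] + gamma * v[transition[s][a]]);
                delta = Math.max(delta, Math.abs(best - v[s]));
                v[s] = best;  // Bellman optimality backup
            }
        } while (delta > epsilon);  // stop when values have converged
        return v;
    }
}
```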
Chapter 4: Decision-making and learning in an unknown environment
4.1 Exploration vs. exploitation
4.2 Retroactive processing of experience ("model-free reinforcement learning")
4.2.1 Goal-oriented learning ("value-based")
4.2.2 Policy search
4.2.3 Combined methods (Actor-Critic)
4.3 Exploration with predictive simulations ("Model-Based Reinforcement Learning")
4.3.1 Dyna-Q
4.3.2 Monte-Carlo rollout
4.3.3 Artificial curiosity
4.3.4 Monte Carlo Tree Search (MCTS)
4.3.5 Remarks on the Concept of Intelligence
4.4 Systematics of learning methods
Bibliography
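Chapter 4's themes of exploration vs. exploitation and model-free, value-based learning come together in tabular Q-learning with an epsilon-greedy policy. The sketch below is a generic textbook formulation under assumed parameter values, not the book's own code:

```java
import java.util.Random;

// Model-free Q-learning with epsilon-greedy action selection.
// All parameter values are illustrative assumptions.
public class QLearner {
    private final double[][] q;          // q[state][action]
    private final double alpha = 0.1;    // learning rate
    private final double gamma = 0.9;    // discount factor
    private final double epsilon = 0.1;  // exploration rate
    private final Random rng = new Random();

    public QLearner(int states, int actions) {
        q = new double[states][actions];
    }

    // Exploration vs. exploitation: explore with probability epsilon.
    public int chooseAction(int s) {
        if (rng.nextDouble() < epsilon)
            return rng.nextInt(q[s].length);
        int best = 0;
        for (int a = 1; a < q[s].length; a++)
            if (q[s][a] > q[s][best]) best = a;
        return best;
    }

    // Retroactive processing of one experience tuple (s, a, r, sNext).
    public void learn(int s, int a, double r, int sNext) {
        double maxNext = q[sNext][0];
        for (double v : q[sNext]) maxNext = Math.max(maxNext, v);
        q[s][a] += alpha * (r + gamma * maxNext - q[s][a]);
    }
}
```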
Chapter 5: Artificial neural networks as estimators for state values and action selection
5.1 Artificial neural networks
5.1.1 Pattern recognition with the perceptron
5.1.2 The adaptability of artificial neural networks
5.1.3 Backpropagation Learning
5.1.4 Regression with multilayer perceptrons
5.2 State evaluation with generalizing approximations
5.3 Neural estimators for action selection
5.3.1 Policy gradient with neural networks
5.3.2 Proximal Policy Optimization
5.3.3 Evolutionary strategy with a neural policy
Bibliography
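The perceptron of Section 5.1.1 fits in a few lines of Java. This is the classic error-correction learning rule in a generic form; input dimension, learning rate, and class name are assumptions for illustration:

```java
// A single perceptron with threshold activation and the classic
// error-correction learning rule. Parameters are illustrative.
public class Perceptron {
    private final double[] w;        // one weight per input
    private double bias = 0.0;
    private final double rate = 0.1;

    public Perceptron(int inputs) {
        w = new double[inputs];
    }

    public int predict(double[] x) {
        double sum = bias;
        for (int i = 0; i < w.length; i++) sum += w[i] * x[i];
        return sum >= 0.0 ? 1 : 0;   // step activation
    }

    // Adjust weights in proportion to the prediction error.
    public void train(double[] x, int target) {
        int error = target - predict(x);   // -1, 0, or +1
        for (int i = 0; i < w.length; i++) w[i] += rate * error * x[i];
        bias += rate * error;
    }
}
```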
Chapter 6: Guiding ideas in Artificial Intelligence over time
6.1 Changing basic ideas
6.2 On the relationship between humans and Artificial Intelligence
Bibliography