
E-Book, English, 332 pages

Chattopadhyay / Chang / Yu Emerging Technology and Architecture for Big-data Analytics


1st edition, 2017
ISBN: 978-3-319-54840-1
Publisher: Springer International Publishing
Format: PDF
Copy protection: PDF watermark




This book describes the current state of the art in big-data analytics from a technology and hardware architecture perspective. The presentation is designed to be accessible to a broad audience with general knowledge of hardware design and some interest in big-data analytics. Coverage includes emerging technology and devices for data analytics, circuit design for data analytics, and architectures and algorithms to support data analytics. Readers will benefit from the realistic context used by the authors, which demonstrates what works, what doesn't, and what the fundamental problems, solutions, upcoming challenges, and opportunities are.

- Provides a single-source reference to hardware architectures for big-data analytics;
- Covers various levels of big-data analytics hardware design abstraction and flow, from devices to circuits and systems;
- Demonstrates how non-volatile memory (NVM) based hardware platforms can be a viable solution to existing challenges in hardware architecture for big-data analytics.

Anupam Chattopadhyay received his B.E. degree from Jadavpur University, India, in 2000, his M.Sc. from ALaRI, Switzerland, in 2002, and his Ph.D. from RWTH Aachen in 2008. From 2008 to 2009, he worked as a Member of Consulting Staff at CoWare R&D, Noida, India. From 2010 to 2014, he led the MPSoC Architectures Research Group at RWTH Aachen, Germany, as a Junior Professor. Since September 2014, he has been an Assistant Professor at SCE, NTU. During his Ph.D., he worked on automatic RTL generation from the architecture description language LISA, which was later commercialized by a leading EDA vendor. He developed several high-level optimization and verification flows for embedded processors. In his doctoral thesis, he proposed a language-based modeling, exploration, and implementation framework for partially reconfigurable processors. Together with his doctoral students, he proposed domain-specific high-level synthesis for cryptography, high-level reliability estimation flows, generalizations of classic linear algebra kernels, and a novel multi-layered coarse-grained reconfigurable architecture. In these areas, he has (co-)authored more than 80 conference and journal papers, several book chapters, and a book. He has served on the technical program committees of several top conferences, regularly reviews journal and conference articles, and has presented invited seminars and tutorials at prestigious venues. He is a member of the ACM and a Senior Member of the IEEE.

Chip Hong Chang received his B.Eng. (Hons.) from the National University of Singapore in 1989, and his M.Eng. and Ph.D. from the School of Electrical and Electronic Engineering of Nanyang Technological University (NTU), Singapore, in 1993 and 1998, respectively. Since 1999, he has been with the School of Electrical and Electronic Engineering, NTU, where he is currently an Associate Professor. He holds concurrent appointments at the university as Assistant Chair (Alumni) of the School of EEE since June 2008, Deputy Director of the Centre for High Performance Embedded Systems (CHiPES) since 2000, and Program Director of the VLSI Design and Embedded Systems research group of the Centre for Integrated Circuits and Systems (CICS) since 2003. He has published three book chapters and more than 140 refereed international journal and conference papers. He served as an Associate Editor of the IEEE Transactions on Circuits and Systems I: Regular Papers from 2010 to 2011, has been an Editorial Advisory Board Member of the Open Electrical and Electronic Engineering Journal since 2007 and an Editorial Board Member of the Journal of Electrical and Computer Engineering since 2008, and is a technical reviewer for several prestigious international journals. He was appointed a Charter Fellow of the Advisory Directorate International by the American Biographical Institute, Inc. (ABI) and has been listed in Marquis Who's Who in the World since 2008. He is a Senior Member of the IEEE and a Fellow of the IET.

Hao Yu obtained his B.S. degree from Fudan University, Shanghai, China, in 1999, with a four-year first-prize Guanghua scholarship (top 2) and a one-year Samsung scholarship for the outstanding student in science and engineering (top 1). After being selected for the mini-CUSPEA program, he spent time at New York University, then obtained his M.S. and Ph.D. degrees from the Electrical Engineering Department at UCLA in 2007, with a major in integrated circuits and embedded computing. From 2006, he was a senior research staff member at Berkeley Design Automation (BDA), one of the top-100 start-ups selected by Red Herring in Silicon Valley. Since October 2009, he has been an Assistant Professor at the School of Electrical and Electronic Engineering, and Area Director of the VIRTUS/VALENS Centre of Excellence, Nanyang Technological University (NTU), Singapore.


Further Information & Material


Preface (p. 5)
Contents (p. 7)
About the Editors (p. 9)
Part I: State-of-the-Art Architectures and Automation for Data-Analytics (p. 12)
  1 Scaling the Java Virtual Machine on a Many-Core System (p. 13)
    1.1 Introduction (p. 13)
    1.2 Background (p. 17)
      1.2.1 Workload Selection (p. 18)
      1.2.2 Performance Analysis Tools (p. 19)
      1.2.3 Experimental Setup (p. 22)
    1.3 Thread-Local Data Objects (p. 25)
    1.4 Memory Allocators (p. 26)
    1.5 Java Concurrency API (p. 28)
    1.6 Garbage Collection (p. 29)
    1.7 Non-uniform Memory Access (NUMA) (p. 30)
    1.8 Conclusion and Future Directions (p. 32)
    Appendix (p. 32)
    References (p. 33)
  2 Accelerating Data Analytics Kernels with Heterogeneous Computing (p. 35)
    2.1 Introduction (p. 35)
    2.2 Motivation (p. 38)
    2.3 Automated Design Space Exploration Flow (p. 40)
      2.3.1 The Lin-Analyzer Framework (p. 40)
      2.3.2 Framework Overview (p. 41)
      2.3.3 Instrumentation (p. 42)
      2.3.4 Optimized DDDG Generation (p. 42)
        2.3.4.1 Sub-trace Extraction (p. 43)
        2.3.4.2 DDDG Generation & Pre-optimizations (p. 43)
      2.3.5 DDDG Scheduling (p. 44)
      2.3.6 Enabling Design Space Exploration (p. 45)
    2.4 Acceleration of Data Analytics Kernels (p. 50)
      2.4.1 Estimation Accuracy (p. 51)
        2.4.1.1 Loop Unrolling and Loop Pipelining (p. 51)
        2.4.1.2 Array Partitioning (p. 52)
      2.4.2 Rapid Design Space Exploration (p. 53)
    2.5 Conclusion (p. 56)
    References (p. 57)
  3 Least-Squares-Solver Based Machine Learning Accelerator for Real-Time Data Analytics in Smart Buildings (p. 60)
    3.1 Introduction (p. 60)
    3.2 IoT System Based Smart Building (p. 62)
      3.2.1 Smart-Grid Architecture (p. 62)
      3.2.2 Smart Gateway for Real-Time Data Analytics (p. 62)
      3.2.3 Problem Formulation for Data Analytics (p. 63)
    3.3 Background on Neural Network Based Machine Learning (p. 63)
      3.3.1 Backward Propagation for Training (p. 64)
      3.3.2 Least-Squares Solver for Training (p. 66)
      3.3.3 Feature Extraction with Behavior Cognition (p. 66)
    3.4 Least-Squares Solver Based Training Algorithm (p. 68)
      3.4.1 Regularized 2-Norm (p. 68)
      3.4.2 Square-Root-Free Cholesky Decomposition (p. 69)
      3.4.3 Incremental Least-Squares Solution (p. 70)
    3.5 Least-Squares Based Machine Learning Accelerator Architecture (p. 71)
      3.5.1 Overview of Computing Flow and Communication (p. 71)
      3.5.2 FPGA Accelerator Architecture (p. 73)
      3.5.3 2-Norm Solver (p. 73)
      3.5.4 Matrix–Vector Multiplication (p. 75)
    3.6 Experiment Results (p. 75)
      3.6.1 Experiment Setup and Benchmark (p. 75)
      3.6.2 FPGA Design Platform and CAD Flow (p. 77)
      3.6.3 Scalable and Parameterized Accelerator Architecture (p. 78)
      3.6.4 Performance for Data Classification (p. 81)
      3.6.5 Performance for Load Forecasting (p. 81)
      3.6.6 Performance Comparisons with Other Platforms (p. 82)
    3.7 Conclusion (p. 83)
    References (p. 84)
  4 Compute-in-Memory Architecture for Data-Intensive Kernels (p. 86)
    4.1 Introduction (p. 86)
    4.2 Malleable Hardware Acceleration (p. 88)
      4.2.1 Hardware Architecture (p. 88)
      4.2.2 Application Mapping (p. 90)
        4.2.2.1 Application Description Using an Instruction Set Architecture (p. 90)
        4.2.2.2 Application Mapping to the General Framework (p. 92)
      4.2.3 Domain Customization for Efficient Acceleration (p. 92)
    4.3 Case Studies for Memory-Centric Computing (p. 93)
      4.3.1 MAHA for Security Applications (p. 93)
        4.3.1.1 Domain Exploration (p. 94)
        4.3.1.2 Architecture Description (p. 94)
        4.3.1.3 Results and Comparison to Other Platforms (p. 96)
      4.3.2 MAHA for Text Mining Applications (p. 97)
        4.3.2.1 Domain Exploration (p. 98)
        4.3.2.2 Architecture Description (p. 99)
        4.3.2.3 Results and Comparison to Other Platforms (p. 101)
    4.4 Case Studies for In-Memory Computing (p. 101)
      4.4.1 Flash-Based MAHA (p. 102)
        4.4.1.1 Domain Exploration (p. 102)
        4.4.1.2 Architecture Description (p. 104)
        4.4.1.3 Results and Comparison to Other Platforms (p. 106)
      4.4.2 Multifunctional Memory (p. 107)
        4.4.2.1 Architecture Description (p. 107)
        4.4.2.2 Results and Comparison to Other Platforms (p. 109)
    4.5 Conclusion (p. 109)
    References (p. 110)
  5 New Solutions for Cross-Layer System-Level and High-Level Synthesis (p. 111)
    5.1 Introduction (p. 111)
    5.2 ESL Design Flow Challenges (p. 113)
    5.3 System-/High-Level Synthesis Techniques (p. 116)
      5.3.1 Polyhedral Transformation to Improve HLS Optimization Opportunity (p. 117)
        5.3.1.1 Step 1 (p. 118)
        5.3.1.2 Step 2 (p. 119)
        5.3.1.3 Step 3 (p. 121)
        5.3.1.4 Evaluation (p. 121)
      5.3.2 Polyhedral Code Generation for High-Level Synthesis (p. 122)
        5.3.2.1 Turning Off Polyhedra Separation (p. 123)
        5.3.2.2 Division Optimization (p. 124)
        5.3.2.3 Hierarchical Min/Max Operations (p. 125)
        5.3.2.4 Loop-Tiling Bound Simplification (p. 125)
        5.3.2.5 Experimental Results (p. 127)
      5.3.3 Multi-Cycle Path Analysis for High-Level Synthesis (p. 128)
        5.3.3.1 Circuit States and Control-States (p. 129)
        5.3.3.2 Capturing Conditional Behavior in the STG (p. 130)
        5.3.3.3 Data Dependency Analysis (p. 131)
        5.3.3.4 Available Cycles Calculation (p. 132)
        5.3.3.5 Multi-Cycle Constraints Generation (p. 132)
        5.3.3.6 Evaluation (p. 132)
      5.3.4 Layout-Driven High-Level Synthesis for FPGAs (p. 133)
        5.3.4.1 Component Pre-characterization (p. 135)
        5.3.4.2 Initialization Stage (p. 135)
        5.3.4.3 Iteration Stage (p. 136)
        5.3.4.4 Evaluation (p. 138)
    5.4 Conclusion (p. 140)
    References (p. 140)
Part II: Approaches and Applications for Data Analytics (p. 143)
  6 Side Channel Attacks and Their Low Overhead Countermeasures on Residue Number System Multipliers (p. 144)
    6.1 Introduction (p. 144)
    6.2 Preliminaries (p. 145)
      6.2.1 Power Analysis and Related Countermeasures (p. 146)
        6.2.1.1 Power Analysis (p. 146)
        6.2.1.2 Power Analysis Countermeasures (p. 147)
      6.2.2 RNS Modular Multiplier (p. 147)
        6.2.2.1 Residue Number System (p. 147)
        6.2.2.2 RNS Modular Multiplication (p. 149)
        6.2.2.3 Leakage Resistant Arithmetic (p. 151)
    6.3 Attacks on the RNS Modular Multiplier (p. 151)
      6.3.1 Attack Assumptions (p. 151)
      6.3.2 Limited Randomness (p. 152)
      6.3.3 Zero Collision Attack (p. 153)
      6.3.4 Attacks on Mask Initialization (p. 155)
      6.3.5 Channel Reduction Leakage (p. 157)
    6.4 Countermeasures (p. 158)
      6.4.1 Enlarged Coprime Pool (p. 158)
      6.4.2 Plus-N Randomness (p. 159)
      6.4.3 Initialization Shuffling (p. 160)
      6.4.4 Random Padding (p. 161)
      6.4.5 Channel Task Shuffling (p. 161)
    6.5 Implementation (p. 162)
    6.6 Discussion (p. 164)
    6.7 Conclusion (p. 164)
    References (p. 164)
  7 Ultra-Low-Power Biomedical Circuit Design and Optimization: Catching the Don't Cares (p. 166)
    7.1 Introduction (p. 166)
    7.2 How Can We Beat the State of the Art? (p. 168)
    7.3 Information Processing Capacity (p. 169)
      7.3.1 Information-Theoretic Modeling (p. 169)
      7.3.2 Soft Channel Selection (p. 173)
      7.3.3 Robust Data Processing (p. 174)
    7.4 Case Study: Brain–Computer Interface (p. 176)
      7.4.1 System Design (p. 176)
      7.4.2 Experimental Results (p. 177)
    7.5 Summary (p. 179)
    References (p. 179)
  8 Acceleration of MapReduce Framework on a Multicore Processor (p. 181)
    8.1 Introduction (p. 181)
    8.2 MapReduce Framework on Multicore Processors (p. 182)
      8.2.1 Introduction to MapReduce (p. 182)
      8.2.2 Related Work (p. 182)
      8.2.3 Experimental Platform (p. 184)
    8.3 Accelerating Algorithms Based on MapReduce in Multicore Processors (p. 185)
      8.3.1 Acceleration of PageRank Algorithm (p. 185)
        8.3.1.1 Math Model of PageRank (p. 185)
        8.3.1.2 Hardware Accelerator for PageRank (p. 185)
      8.3.2 Acceleration of Naive-Bayes Algorithm (p. 186)
        8.3.2.1 Math Model of Naive-Bayes Algorithm (p. 187)
        8.3.2.2 Hardware Accelerator for Naive-Bayes (p. 187)
        8.3.2.3 Task Mapping Scheme: Topo-MapReduce (p. 188)
    8.4 Configurable MapReduce Acceleration Framework (p. 190)
      8.4.1 High Throughput Data Transferring (p. 191)
    8.5 Experiment Result Analysis (p. 193)
      8.5.1 PageRank with Hardware Accelerations (p. 193)
      8.5.2 Topo-MapReduce (p. 193)
      8.5.3 Configurable MapReduce Acceleration Framework (p. 194)
    8.6 Conclusion (p. 195)
    References (p. 195)
  9 Adaptive Dynamic Range Compression for Improving Envelope-Based Speech Perception: Implications for Cochlear Implants (p. 197)
    9.1 Introduction (p. 197)
    9.2 Speech Processor in CI Devices (p. 198)
    9.3 Vocoder-Based Speech Synthesis (p. 200)
    9.4 Compression Scheme (p. 201)
      9.4.1 The Static Envelope Compression Strategy (p. 201)
      9.4.2 The Adaptive Envelope Compression Strategy (p. 202)
    9.5 Experiments and Results (p. 205)
      9.5.1 Experiment 1: The Speech Perception Performance of AEC in Noise (p. 205)
        9.5.1.1 Subjects and Materials (p. 205)
        9.5.1.2 Procedure (p. 206)
        9.5.1.3 Results and Discussion (p. 206)
      9.5.2 Experiment 2: The Speech Perception Performance of AEC in Reverberation (p. 208)
        9.5.2.1 Subjects and Materials (p. 208)
        9.5.2.2 Procedure (p. 208)
        9.5.2.3 Results and Discussion (p. 208)
      9.5.3 Experiment 3: The Effect of Adaptation Rate on the Intelligibility of AEC-Processed Speech (p. 210)
        9.5.3.1 Subjects and Materials (p. 210)
        9.5.3.2 Procedure (p. 210)
        9.5.3.3 Results and Discussion (p. 211)
      9.5.4 Experiment 4: The Effect of Joint Envelope Compression and Noise Reduction (p. 213)
        9.5.4.1 Subjects and Materials (p. 213)
        9.5.4.2 Signal Processing with NR and Envelope Compression (p. 214)
        9.5.4.3 Procedure (p. 215)
        9.5.4.4 Results and Discussion (p. 216)
    9.6 Summary (p. 217)
    References (p. 218)
Part III: Emerging Technology, Circuits and Systems for Data-Analytics (p. 221)
  10 Neuromorphic Hardware Acceleration Enabled by Emerging Technologies (p. 222)
    10.1 Introduction (p. 222)
    10.2 Background (p. 224)
      10.2.1 Neural Network (p. 224)
      10.2.2 Memristor Preliminaries (p. 225)
      10.2.3 Memristor Array (p. 226)
    10.3 Design Methodology (p. 227)
      10.3.1 Weight Mapping (p. 227)
        10.3.1.1 Mapping Method for BSB System (p. 227)
        10.3.1.2 Mapping Method for Feedforward System (p. 229)
      10.3.2 Training Algorithm Optimization (p. 230)
      10.3.3 Recall Component Optimization (p. 232)
        10.3.3.1 BSB Recall Implementation (p. 232)
        10.3.3.2 FFW Active Function Implementation (p. 233)
    10.4 Simulation and Evaluation (p. 236)
      10.4.1 BSB System Evaluation (p. 236)
        10.4.1.1 BSB Training (p. 236)
        10.4.1.2 BSB Recall (p. 241)
      10.4.2 FFW System Evaluation (p. 244)
        10.4.2.1 FFW Recall (p. 245)
    10.5 Conclusion (p. 247)
    References (p. 247)
  11 Energy Efficient Spiking Neural Network Design with RRAM Devices (p. 250)
    11.1 Introduction (p. 250)
    11.2 Preliminaries (p. 252)
      11.2.1 Spike Neurons (p. 252)
      11.2.2 RRAM Device Characteristics (p. 253)
    11.3 Training Scheme of SNN (p. 255)
      11.3.1 Spike Timing Dependent Plasticity (STDP) (p. 255)
      11.3.2 Remote Supervision Method (ReSuMe) (p. 256)
      11.3.3 Neural Sampling Learning Scheme (p. 256)
    11.4 RRAM-Based Spiking Learning System (p. 257)
      11.4.1 Unsupervised Feature Extraction + Supervised Classifier (p. 257)
      11.4.2 Transferring ANN to SNN: Neural Sampling Method (p. 259)
      11.4.3 Discussion on How to Boost the Accuracy of SNN (p. 261)
    11.5 Conclusion (p. 262)
    References (p. 263)
  12 Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives (p. 265)
    12.1 Introduction (p. 265)
    12.2 Neural Network Basics (p. 266)
    12.3 General Purpose Computing Architecture (p. 268)
    12.4 Underlying Device Physics (p. 270)
    12.5 Proposals for Spintronic Neuromimetic Devices (p. 272)
    12.6 Crossbar Based "In-Memory" Computing Architecture (p. 274)
    12.7 Conclusions (p. 277)
    References (p. 277)
  13 In-Memory Data Compression Using ReRAMs (p. 279)
    13.1 LZ77 Compression Algorithm (p. 280)
    13.2 ReVAMP Architecture for In-Memory Computing (p. 281)
      13.2.1 Comparator Design (p. 284)
        13.2.1.1 Analysis (p. 286)
      13.2.2 Priority Multiplexer Design (p. 286)
    13.3 LZ77 Compression Using ReVAMP (p. 287)
    13.4 Performance Estimation (p. 292)
    13.5 Related Works (p. 292)
    13.6 Summary (p. 293)
    References (p. 293)
  14 Big Data Management in Neural Implants: The Neuromorphic Approach (p. 296)
    14.1 Introduction: Brain as a Source of Big Data (p. 296)
    14.2 The Nature of Neural Data (p. 297)
    14.3 System Architectures for Neural Spike Recording Systems: Neuromorphic Compression Schemes (p. 298)
      14.3.1 Compression Mode 1: Spike Detection (p. 300)
      14.3.2 Compression Mode 2: Spike Sorting (p. 302)
      14.3.3 Compression Mode 3: Intention Decoding (p. 303)
        14.3.3.1 Algorithm: Extreme Learning Machine (p. 303)
        14.3.3.2 Chip Architecture (p. 305)
        14.3.3.3 Measurement Results (p. 307)
    14.4 Conclusion and Discussions (p. 310)
    References (p. 311)
  15 Data Analytics in Quantum Paradigm: An Introduction (p. 315)
    15.1 Introduction (p. 315)
      15.1.1 Basics of a Qubit and the Algebra (p. 316)
      15.1.2 Quantum Gates (p. 317)
      15.1.3 No Cloning (p. 318)
    15.2 A Brief Overview of Advantages in Quantum Paradigm (p. 320)
      15.2.1 Teleportation (p. 320)
      15.2.2 Deutsch-Jozsa Algorithm (p. 321)
    15.3 Preliminaries of Quantum Cryptography (p. 322)
      15.3.1 Quantum Key Distribution and the BB84 Protocol (p. 324)
      15.3.2 Secure Multi-Party Computation (p. 325)
    15.4 Data Analytics: A Critical View of Quantum Paradigm (p. 326)
      15.4.1 Related Quantum Algorithms (p. 326)
      15.4.2 Database (p. 327)
      15.4.3 Text Mining (p. 328)
    15.5 Conclusion: Google, PageRank, and Quantum Advantage (p. 329)
    References (p. 330)


