
Padua

Encyclopedia of Parallel Computing

Book with online access, English, 2,175 pages, format (W × H): 193 mm x 260 mm

ISBN: 978-0-387-09844-9
Publisher: Springer


Containing over 300 entries in an A-Z format, the Encyclopedia of Parallel Computing provides easy, intuitive access to relevant information for professionals and researchers seeking any aspect of the broad field of parallel computing. Topics for this comprehensive reference were selected, written, and peer-reviewed by an international pool of distinguished researchers in the field. The Encyclopedia is broad in scope, covering machine organization, programming languages, algorithms, and applications. Within each area, concepts, designs, and specific implementations are presented. The highly structured essays in this work comprise synonyms, a definition and discussion of the topic, bibliographies, and links to related literature. Extensive cross-references to other entries within the Encyclopedia support efficient, user-friendly searches for immediate access to useful information.

Key concepts presented in the Encyclopedia of Parallel Computing include: laws and metrics; specific numerical and non-numerical algorithms; asynchronous algorithms; libraries of subroutines; benchmark suites; applications; sequential consistency and cache coherency; machine classes such as clusters, shared-memory multiprocessors, special-purpose machines, and dataflow machines; specific machines such as Cray supercomputers, IBM's Cell processor, and Intel's multicore machines; race detection and automatic parallelization; and parallel programming languages, synchronization primitives, collective operations, message-passing libraries, checkpointing, and operating systems.
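To make two of the concepts named above concrete (an illustrative sketch, not material taken from any Encyclopedia entry), the short C/OpenMP program below shows a doall-style loop, whose iterations are independent and parallelize safely, and a summation loop whose updates to a shared variable would race without the reduction clause. It assumes an OpenMP-capable C compiler (e.g., gcc -fopenmp).

/* Illustrative sketch only: a doall loop and a race-free reduction,
 * assuming an OpenMP-capable C compiler (compile with -fopenmp). */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];   /* static: zero-initialized, off the stack */
    double sum = 0.0;

    /* Doall: each iteration writes a distinct element of a[],
     * so the iterations are independent and run in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i] + 1.0;

    /* All iterations update the shared variable sum; without the
     * reduction clause this would be a data race. The clause gives
     * each thread a private partial sum combined at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);  /* expected: N * 1.0 = 1000000.0 */
    return 0;
}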

Topics covered: speedup, efficiency, isoefficiency, redundancy, Amdahl's law, computer architecture concepts, parallel machine designs, benchmarks, parallel programming concepts and design, algorithms, and parallel applications.
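For orientation, the first few metrics in this list have standard textbook definitions; the minimal LaTeX fragment below summarizes them (a compact reminder, not text quoted from the Encyclopedia's entries), writing T_1 for sequential time, T_p for time on p processors, and f for the parallelizable fraction of the work.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard textbook forms of the metrics named above.
\begin{align*}
  S_p &= \frac{T_1}{T_p}           && \text{speedup on $p$ processors,}\\
  E_p &= \frac{S_p}{p}             && \text{efficiency,}\\
  S_p &\le \frac{1}{(1 - f) + f/p} && \text{Amdahl's law ($f$ = parallelizable fraction).}
\end{align*}
\end{document}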

 This authoritative reference will be published in two formats: print and online.  The online edition features hyperlinks to cross-references and to additional significant research.

 Related Subjects:  supercomputing, high-performance computing, distributed computing

Target audience


Research


Authors/Editors


Further information & material


General concepts: Speedup. Efficiency. Redundancy. Isoefficiency. Amdahl's law.
Computer architecture, concepts: Sequential consistency. Relaxed consistency. Memory models. Cache coherency. Synchronization instructions. Synchronization devices. Interconnection networks. Branch prediction. Instruction-level parallelism. Transactional memories. Thread-level speculation. Latency hiding. Atomicity. Fences.
Parallel machine designs: Shared-memory multiprocessors. Cache-only memory architectures. Multicores. Clusters. Distributed-memory machines. Distributed-shared-memory machines. Array machines. Pipelined vector machines. Mainframes. Dataflow machines. VLIW machines. EPIC machines. SMT machines. GPUs. Multimedia extensions (SSE, AltiVec). Superscalar machines. FPGAs.
Machines: Illiac IV. Cray, Cray 2, Cray X-MP,… Denelcor HEP. Tera. Multiflow. Connection Machine, CM-2. MasPar. Ultracomputer. Intel hypercube. Fujitsu series. Hitachi series. NEC series. IBM's Blue Gene. IBM's Cell processor. C.mmp. Cm*. Cedar. Flash. Alliant multiprocessors. Convex machines. KSR machines.
Benchmarks: LINPACK. Perfect Benchmarks. NAS Benchmarks. SPEC HPG benchmark suites. Flash benchmarks. TOP500.
Parallel programming, concepts: Implicit parallelism. Explicit parallelism. Process. Task. Thread. Thread-safe routines. Locality. Communication. Races. Nondeterminacy. Monitors. Semaphores. Deadlock. Livelock. Scheduling theory. Loop scheduling. Affinity scheduling. Task stealing. Futures. Critical region. Producer-consumer. Communicating Sequential Processes. Doall. Doacross. MapReduce. Data and control dependence. Dependence analysis. Autoparallelization. Run-time speculation. Inspector/executor. Software pipelining.
Parallel programming, designs: Languages. Libraries. Tools.
Algorithms, concepts: Synchronous. Asynchronous. Systolic algorithms. Cache-oblivious algorithms.
Algorithms: Numerical. Graph. Sorting. Garbage collection. Data management.
Libraries: SuperLU. PARDISO. SPIKE. FFTW. Spiral.
Parallel applications: Computational fluid dynamics. Bio-molecular simulation (NAMD). Cosmology. Quantum chemistry.


Padua, David
David Padua is the Donald Biggar Willett Professor of Computer Science at the University of Illinois. His research interests include compilers and languages for parallel computing, race detection, compilation of dynamic languages, and autotuning techniques. He is author or co-author of more than 150 papers in these areas. At Illinois, he has taught courses on compilers and parallel programming and supervised 25 PhD dissertations. He has served his university and the computer science community as vice-chair of the College of Engineering Executive Committee, chair of the steering committee for the Symposium on Principles and Practice of Parallel Programming (PPoPP), and editorial board member of the Journal of Parallel Programming, the Journal of Parallel and Distributed Computing, the IEEE Transactions on Parallel and Distributed Systems, and the ACM Transactions on Programming Languages and Systems. He is a fellow of the IEEE and the ACM.

