Dongarra / Podhorszki / Kacsuk | Recent Advances in Parallel Virtual Machine and Message Passing Interface | Book | 978-3-540-41010-2 | sack.de

Book, English, Volume 1908, 368 pages, paperback, format (W × H): 155 mm × 235 mm, weight: 1190 g

Series: Lecture Notes in Computer Science

Dongarra / Podhorszki / Kacsuk

Recent Advances in Parallel Virtual Machine and Message Passing Interface

7th European PVM/MPI Users' Group Meeting Balatonfüred, Hungary, September 10-13, 2000 Proceedings


ISBN: 978-3-540-41010-2
Publisher: Springer Berlin Heidelberg


Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) are the most frequently used tools for programming according to the message passing paradigm, which is considered one of the best ways to develop parallel applications. This volume comprises 42 revised contributions presented at the Seventh European PVM/MPI Users' Group Meeting, which was held in Balatonfüred, Hungary, 10-13 September 2000. The conference was organized by the Laboratory of Parallel and Distributed Systems of the Computer and Automation Research Institute of the Hungarian Academy of Sciences.

This conference was previously held in Barcelona, Spain (1999), Liverpool, UK (1998) and Cracow, Poland (1997). The first three conferences were devoted to PVM and were held at the Technische Universität München, Germany (1996), Ecole Normale Supérieure Lyon, France (1995), and University of Rome, Italy (1994). This conference has become a forum for users and developers of PVM, MPI, and other message passing environments. Interaction between those groups has proved to be very useful for developing new ideas in parallel computing and for applying existing ideas to new practical fields.

The main topics of the meeting were evaluation and performance of PVM and MPI, extensions and improvements to PVM and MPI, algorithms using the message passing paradigm, and applications in science and engineering based on message passing. The conference included four tutorials and five invited talks on advances in MPI, cluster computing, network computing, grid computing, and SGI parallel computers and programming systems.
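The message passing paradigm the blurb refers to can be illustrated with a short sketch. This example uses Python's standard multiprocessing pipes rather than PVM or MPI themselves, purely to show the explicit send/receive style of communication between processes that both libraries are built around:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a task from the parent process, compute, send the result back.
    data = conn.recv()
    conn.send(sum(data))
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    proc = Process(target=worker, args=(child_conn,))
    proc.start()
    parent_conn.send([1, 2, 3, 4])   # explicit send to the worker ...
    result = parent_conn.recv()      # ... and the matching receive
    proc.join()
    print(result)  # prints 10
```

In PVM or MPI the same pattern appears as matched send/receive calls (e.g. MPI's `MPI_Send`/`MPI_Recv`) between ranks, with no shared memory between the communicating processes.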
Order Dongarra / Podhorszki / Kacsuk, Recent Advances in Parallel Virtual Machine and Message Passing Interface, now!

Target audience


Research

Further information & material


Invited Speakers
- PVM and MPI: What Else Is Needed for Cluster Computing?
- Managing Your Workforce on a Computational Grid
- Isolating and Interfacing the Components of a Parallel Computing Environment
- Symbolic Computing with Beowulf-Class PC Clusters
- High Speed Networks for Clusters, the BIP-Myrinet Experience

Evaluation and Performance
- A Benchmark for MPI Derived Datatypes
- Working with MPI Benchmarking Suites on ccNUMA Architectures
- Performance Measurements on Dynamite/DPVM
- Validation of Dimemas Communication Model for MPI Collective Operations
- Automatic Performance Analysis of Master/Worker PVM Applications with Kpi
- MPI Optimization for SMP Based Clusters Interconnected with SCI

Algorithms
- Parallel, Recursive Computation of Global Stability Charts for Liquid Bridges
- Handling Graphs According to a Coarse Grained Approach: Experiments with PVM and MPI
- Adaptive Multigrid Methods in MPI
- Multiple Parallel Local Searches in Global Optimization
- Towards Standard Nested Parallelism
- Pipeline Algorithms on MPI: Optimal Mapping of the Path Planing Problem
- Use of PVM for MAP Image Restoration: A Parallel Implementation of the ARTUR Algorithm
- Parallel Algorithms for the Least-Squares Finite Element Solution of the Neutron Transport Equation

Extensions and Improvements
- GAMMA and MPI/GAMMA on Gigabit Ethernet
- Distributed Checkpointing Mechanism for a Parallel File System
- Thread Communication over MPI
- A Simple, Fault Tolerant Naming Space for the HARNESS Metacomputing System
- Runtime Checking of Datatype Signatures in MPI

Implementation Issues
- A Scalable Process-Management Environment for Parallel Programs
- Single Sided Communications in Multi-protocol MPI
- MPI-2 Process Creation & Management Implementation for NT Clusters
- Composition of Message Passing Applications On-Demand

Heterogeneous Distributed Systems
- An Architecture of Stampi: MPI Library on a Cluster of Parallel Computers
- Integrating MPI Components into Metacomputing Applications

Tools
- PVMaple: A Distributed Approach to Cooperative Work of Maple Processes
- CIS - A Monitoring System for PC Clusters
- Monito: A Communication Monitoring Tool for a PVM-Linux Environment
- Interoperability of OCM-Based On-Line Tools
- Parallel Program Model for Distributed Systems
- Translation of a High-Level Graphical Code to Message-Passing Primitives in the GRADE Programming Environment
- The Transition from a PVM Program Simulator to a Heterogeneous System Simulator: The HeSSE Project
- Comparison of Different Approaches to Trace PVM Program Execution

Applications in Science and Engineering
- Scalable CFD Computations Using Message-Passing and Distributed Shared Memory Algorithms
- Parallelization of Neural Networks Using PVM
- Parallel DSIR Text Indexing System: Using Multiple Master/Slave Concept
- Improving Optimistic PDES in PVM Environments
- Use of Parallel Computers in Neurocomputing
- A Distributed Computing Environment for Genetic Programming Using MPI
- Experiments with Parallel Monte Carlo Simulation for Pricing Options Using PVM
- Time Independent 3D Quantum Reactive Scattering on MIMD Parallel Computers
- FT-MPI: Fault Tolerant MPI, Supporting Dynamic Applications in a Dynamic World
- ACCT: Automatic Collective Communications Tuning

