
Book, English, Volume 966, 730 pages, format (W × H): 155 mm x 235 mm, weight: 1124 g

Series: Lecture Notes in Computer Science

Haridi / Magnusson / Ali

EURO-PAR '95: Parallel Processing

First International EURO-PAR Conference, Stockholm, Sweden, August 29 - 31, 1995. Proceedings
1995
ISBN: 978-3-540-60247-7
Publisher: Springer Berlin Heidelberg



This book presents the proceedings of the First International EURO-PAR Conference on Parallel Processing, held in Stockholm, Sweden, in August 1995. EURO-PAR is the merger of the former PARLE and CONPAR-VAPP conference series; the aim of this merger is to create the premier annual scientific conference on parallel processing in Europe.
The book presents 50 revised full research papers and 11 posters selected from a total of 196 submissions on the basis of 582 reviews. The contributions span the full spectrum of parallel processing, ranging from theory through design to application; the volume is thus a "must" for anybody interested in the scientific aspects of parallel processing or its advanced applications.

Target audience


Research

Further information & material


- Mainstream parallelism: Taking sides on the SMP/MPP/cluster debate
- The Oz Programming model
- Parallelism in computational algorithms and the physical world
- Execution of distributed reactive systems
- Relating data-parallelism and (and-) parallelism in logic programs
- On the duality between Or-parallelism and And-parallelism in logic programming
- Functional skeletons for parallel coordination
- On the scalability of demand-driven parallel systems
- Bounds on memory bandwidth in streamed computations
- StarT-NG: Delivering seamless parallel computing
- Costs and benefits of multithreading with off-the-shelf RISC processors
- Transformation techniques in Pei
- On the completeness of a proof system for a simple data-parallel programming language (extended abstract)
- An implementation of race detection and deterministic replay with MPI
- Formal and experimental validation of a low overhead execution replay mechanism
- On efficient embeddings of grids into grids in PARIX
- Optimal emulation of meshes on meshes of trees
- Optimal embeddings in the Hamming cube networks
- Hierarchical adaptive routing under hybrid traffic load
- Tight bounds on parallel list marking
- Optimization of PRAM-programs with input-dependent memory access
- Optimal circular arc representations
- Exploiting parallelism in cache coherency protocol engines
- Verifying distributed directory-based cache coherence protocols: S3.mp, a case study
- Efficient software data prefetching for a loop with large arrays
- Generation of synchronous code for automatic parallelization of while loops
- Implementing flexible computation rules with subexpression-level loop transformations
- Synchronization migration for performance enhancement in a DOACROSS loop
- An array partitioning analysis for parallel loop distribution
- A model for efficient programming of dynamic applications on distributed memory multiprocessors
- Efficient solutions for mapping parallel programs
- Optimal data distributions for LU decomposition
- Detecting quantified global predicates in parallel programs
- Using knowledge-based techniques for parallelization on parallelizing compilers
- Automatic vectorization of communications for data-parallel programs
- The program compaction revisited: The functional framework
- Featherweight threads and ANDF compilation of concurrency
- Parallel N-body simulation on a large-scale homogeneous distributed system
- Analysis of parallel scan processing in Shared Disk database systems
- Polynomial time scheduling of low level computer vision algorithms on networks of heterogeneous machines
- Mapping neural network back-propagation onto parallel computers with computation/communication overlapping
- Super Monaco: Its portable and efficient parallel runtime system
- Quiescence detection in a distributed KLIC implementation
- Compiler optimizations in Reform Prolog: Experiments on the KSR-1 multiprocessor
- Bidirectional ring: An alternative to the hierarchy of unidirectional rings
- A formal study of the Mcube interconnection network
- Multiwave interconnection networks for MCM-based parallel processing
- Scheduling master-slave multiprocessor systems
- Time space sharing scheduling: A simulation analysis
- "Agency scheduling": A model for dynamic task scheduling
- FFTs on a linear SIMD array
- Tolerating faults in faulty hypercubes using maximal fault-free subcube-ring
- Communication in multicomputer with nonconvex faults
- Parallelising programs with algebraic programming tools
- Parallel Prolog with uncertainty handling
- A special-purpose coprocessor for qualitative simulation
- Portable Software Tools for Parallel Architectures
- Boosting the performance of workstations through WARPmemory
- A monitoring system for software-heterogeneous distributed environments
- A metacircular data-parallel functional language
- Efficient run-time program allocation on a parallel coprocessor
- A program manipulation system for fine-grained architectures
- Real-time image compression using data-parallelism
- Congestion control in wormhole networks: First results


