
Rajasekaran / Pardalos

Advances in Randomized Parallel Computing

Book, English, Volume 5, 287 pages, paperback, format (W × H): 160 mm × 240 mm, weight: 508 g

Series: Combinatorial Optimization

ISBN: 978-1-4613-3284-8
Publisher: Springer US


The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing.

A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as the validity of the assumption made on the input space. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis done of randomized algorithms will be valid for all possible inputs.
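As a minimal illustrative sketch (not taken from the book), the following Python code shows the idea behind the quicksort example above: the pivot is chosen by an internal coin flip rather than taken from a fixed position, so the expected O(n log n) bound holds for every input, with no assumption that input permutations are equally likely.

    import random

    def randomized_quicksort(a):
        # Sort a list by recursively partitioning around a uniformly random pivot.
        # The randomness lives inside the algorithm, so the expected-time analysis
        # applies to all possible inputs, not just to an "average" one.
        if len(a) <= 1:
            return a
        pivot = random.choice(a)                  # coin flip inside the algorithm
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)

    if __name__ == "__main__":
        print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]

Even an adversarially ordered input (e.g. an already sorted list, which drives textbook quicksort with a fixed first-element pivot to its O(n²) worst case) is handled in expected O(n log n) time, because no fixed input can force the random pivot choices to be consistently bad.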

Target audience


Research

Further information & material


1 Optimal Bounds on Tail Probabilities: A Study of an Approach.- 1.1 Introduction.- 1.2 Bounding Tail Probabilities with the Laplace Transform.- 1.3 When only the Mean is given: The Hoeffding Bound.- 1.4 When the Mean and the Variance are given: A Simple Proof of the Bennett Bound.- 1.5 When the First n Moments are Given: A Glimpse of the General Theory.- 1.6 An Application: Improved Bounds for the List Update Problem.- References.
2 Parallelism in Comparison Problems.- 2.1 Introduction.- 2.2 Selection.- 2.3 Merging.- 2.4 Sorting.- 2.5 Conclusions.- References.
3 Random Sampling.- 3.1 Introduction.- 3.2 Preliminaries.- 3.3 Partitioning I: Sorting.- 3.4 Partitioning II: List Ranking.- 3.5 Pruning I: Selection.- 3.6 Pruning II: Row Maxima of Monotone Matrices.- 3.7 Pruning III: Graph Connected Components.- 3.8 Other Examples.- 3.9 Bibliographic Notes.- References.
4 Randomized Algorithms on the Mesh.- 4.1 Introduction.- 4.2 Preliminaries.- 4.3 Routing on the Mesh.- 4.4 Sorting on the Mesh.- 4.5 Selection on the Mesh.- References.
5 Efficient Randomized Algorithms.- 5.1 Introduction.- 5.2 Preliminaries.- 5.3 Randomized Routing.- 5.4 Randomized Selection.- 5.5 Randomized Sorting.- 5.6 Randomized PRAM Emulation.- 5.7 Selection and Sorting Schemes for Processing Large Distributed Files.- 5.8 Conclusions.- References.
6 Ultrafast Randomized Parallel Algorithms for Spanning Forests.- 6.1 Introduction.- 6.2 Ultrafast Parallel Algorithms.- 6.3 Dense Instances.- 6.4 Ultrafast Algorithms for Spanning Forests.- 6.5 Open Problems and Further Research.- References.
7 Parallel Randomized Techniques for Some Fundamental Geometric Problems.- 7.1 Introduction.- 7.2 Preliminaries and Definitions.- 7.3 The Use of Randomization in Computational Geometry.- 7.4 Applications to Fundamental Geometric Problems.- 7.5 Summary.- References.
8 Capturing the Connectivity of High-Dimensional Geometric Spaces.- 8.1 Introduction.- 8.2 Basic Probabilistic Roadmap Planner.- 8.3 Other Sampling Strategies.- 8.4 Roadmap Coverage.- 8.5 Roadmap Connectedness.- 8.6 Current and Future Work.- Appendix A: Proof of Theorem 1.- Appendix B: Proof of Theorem 2.- Appendix C: Proof of Theorem 3.- Appendix D: Proof of Theorem 4.- References.
9 Randomized Parallel Prefetching and Buffer Management.- 9.1 Introduction.- 9.2 Definitions.- 9.3 Read-Once Reference Strings.- 9.4 Read-Many Reference Strings.- 9.5 Concluding Remarks.- References.
10 DFA Problems.- 10.1 Introduction.- 10.2 Membership Problem.- 10.3 Containment and Equivalence Problems.- 10.4 Ranking and Related Problems.- 10.5 Coarsest Partition Problems.- 10.6 Automata Testing Problems.- 10.7 Conversion from Regular Expression to NFA.- 10.8 Applications.- 10.9 Open Problems.- References.
11 LAPACK90.- 11.1 Introduction.- 11.2 Interface Blocks for LAPACK 77.- 11.3 Interface Blocks for LAPACK 90.- 11.4 Code of LAPACK90 Routines.- 11.5 LAPACK90 Documentation.- 11.6 LAPACK90 Test Programs.- 11.7 LAPACK90 User Callable Routines.- References.- Appendix A: Generic Interfaces.- A.1 LAPACK77 Generic Interface Blocks.- A.2 LAPACK90 Generic Interface Blocks.- Appendix B: Interface Subroutines.- B.1 LA_GESV and LA_GETRI Subroutines.- B.2 Auxiliary Routines.- Appendix C.- C.1 Documentation of LA_GESV.- Appendix D.- D.1 The LA_GESV Test Results.- Appendix E.- E.1 LAPACK90 User Callable Routines.

