E-book, English, 672 pages
Hwang / Dongarra / Fox
Distributed and Cloud Computing: From Parallel Processing to the Internet of Things
1st edition, 2013
ISBN: 978-0-12-800204-9
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark
Distributed and Cloud Computing: From Parallel Processing to the Internet of Things offers complete coverage of modern distributed computing technology, including clusters, the grid, service-oriented architecture, massively parallel processors, peer-to-peer networking, and cloud computing. It is the first modern, up-to-date distributed systems textbook: it explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems.

Topics covered by this book include: facilitating management, debugging, migration, and disaster recovery through virtualization; clustered systems for research or e-commerce applications; designing systems as web services; and social networking systems using peer-to-peer computing. The principles of cloud computing are discussed using examples from open-source and commercial applications, along with case studies from leading distributed computing vendors such as Amazon, Microsoft, and Google. Each chapter includes exercises and further reading, with lecture slides and more available online.

This book is ideal for students taking a distributed systems or distributed computing class, as well as for professional system designers and engineers looking for a reference to the latest distributed technologies, including cloud, P2P, and grid computing.

- Complete coverage of modern distributed computing technology, including clusters, the grid, service-oriented architecture, massively parallel processors, peer-to-peer networking, and cloud computing
- Case studies from the leading distributed computing vendors: Amazon, Microsoft, Google, and more
- Explains how to use virtualization to facilitate management, debugging, migration, and disaster recovery
- Designed for undergraduate or graduate students taking a distributed systems course; each chapter includes exercises and further reading, with lecture slides and more available online
Kai Hwang is a Professor of Computer Engineering at the University of Southern California and an IV-endowed visiting Chair Professor at Tsinghua University, China. He earned his Ph.D. in EECS from the University of California, Berkeley. An IEEE Life Fellow, he has published extensively in computer architecture, digital arithmetic, parallel processing, distributed systems, Internet security, and cloud computing. He founded the Journal of Parallel and Distributed Computing and has delivered three dozen keynote addresses at major IEEE/ACM conferences. He received the 2004 Outstanding Achievement Award from the China Computer Federation and the IEEE 2011 IPDPS Founders' Award for his pioneering contributions to the field of parallel processing.
Authors/Editors
Further Information & Material
Chapter 1 Distributed System Models and Enabling Technologies
Chapter Outline
Summary
1.1 Scalable Computing over the Internet
  1.1.1 The Age of Internet Computing
  1.1.2 Scalable Computing Trends and New Paradigms
  1.1.3 The Internet of Things and Cyber-Physical Systems
1.2 Technologies for Network-based Systems
  1.2.1 Multicore CPUs and Multithreading Technologies
  1.2.2 GPU Computing to Exascale and Beyond
  1.2.3 Memory, Storage, and Wide-Area Networking
  1.2.4 Virtual Machines and Virtualization Middleware
  1.2.5 Data Center Virtualization for Cloud Computing
1.3 System Models for Distributed and Cloud Computing
  1.3.1 Clusters of Cooperative Computers
  1.3.2 Grid Computing Infrastructures
  1.3.3 Peer-to-Peer Network Families
  1.3.4 Cloud Computing over the Internet
1.4 Software Environments for Distributed Systems and Clouds
  1.4.1 Service-Oriented Architecture (SOA)
  1.4.2 Trends toward Distributed Operating Systems
  1.4.3 Parallel and Distributed Programming Models
1.5 Performance, Security, and Energy Efficiency
  1.5.1 Performance Metrics and Scalability Analysis
  1.5.2 Fault Tolerance and System Availability
  1.5.3 Network Threats and Data Integrity
  1.5.4 Energy Efficiency in Distributed Computing
1.6 Bibliographic Notes and Homework Problems
Acknowledgments
References
Homework Problems

Keywords: scalable computing, distributed systems, virtualization, parallel programming, security, peer-to-peer networks

Summary
This chapter presents the evolutionary changes that have occurred in parallel, distributed, and cloud computing over the past 30 years, driven by applications with variable workloads and large data sets. We study both high-performance and high-throughput computing systems in parallel computers appearing as computer clusters, service-oriented architecture, computational grids, peer-to-peer networks, Internet clouds, and the Internet of Things. These systems are distinguished by their hardware architectures, OS platforms, processing algorithms, communication protocols, and service models. We also introduce essential issues of scalability, performance, availability, security, and energy efficiency in distributed systems.

1.1 Scalable Computing over the Internet
Over the past 60 years, computing technology has undergone a series of platform and environment changes. In this section, we assess evolutionary changes in machine architecture, operating system platform, network connectivity, and application workload. Instead of using a centralized computer to solve computational problems, a parallel and distributed computing system uses multiple computers to solve large-scale problems over the Internet. Distributed computing has thus become data-intensive and network-centric. This section identifies the modern computer systems and applications that practice parallel and distributed computing. These large-scale Internet applications have significantly enhanced the quality of life and information services in society today.

1.1.1 The Age of Internet Computing
Billions of people use the Internet every day. As a result, supercomputer sites and large data centers must provide high-performance computing services to huge numbers of Internet users concurrently. Because of this high demand, the Linpack benchmark for high-performance computing (HPC) applications is no longer optimal for measuring system performance. The emergence of computing clouds instead demands high-throughput computing (HTC) systems built with parallel and distributed computing technologies [5,6,19,25]. We have to upgrade data centers using fast servers, storage systems, and high-bandwidth networks. The purpose is to advance network-based computing and web services with the emerging new technologies.

1.1.1.1 The Platform Evolution

Computer technology has gone through five generations of development, with each generation lasting from 10 to 20 years and successive generations overlapping by about 10 years. For instance, from 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400, were built to satisfy the demands of large businesses and government organizations. From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX series became popular among small businesses and on college campuses. From 1970 to 1990, we saw widespread use of personal computers built with VLSI microprocessors. From 1980 to 2000, massive numbers of portable computers and pervasive devices appeared in both wired and wireless applications. Since 1990, the use of both HPC and HTC systems hidden in clusters, grids, or Internet clouds has proliferated. These systems are employed by both consumers and high-end web-scale computing and information services.

The general computing trend is to leverage shared web resources and massive amounts of data over the Internet. Figure 1.1 illustrates the evolution of HPC and HTC systems. On the HPC side, supercomputers (massively parallel processors, or MPPs) are gradually being replaced by clusters of cooperative computers out of a desire to share computing resources. A cluster is often a collection of homogeneous compute nodes that are physically connected in close range to one another. We will discuss clusters, MPPs, and grid systems in more detail in Chapters 2 and 7.

Figure 1.1 Evolutionary trend toward parallel, distributed, and cloud computing with clusters, MPPs, P2P networks, grids, clouds, web services, and the Internet of Things.

On the HTC side, peer-to-peer (P2P) networks are formed for distributed file sharing and content delivery applications. A P2P system is built over many client machines (a concept we will discuss further in Chapter 5). Peer machines are globally distributed in nature. P2P, cloud computing, and web service platforms are more focused on HTC applications than on HPC applications. Clustering and P2P technologies lead to the development of computational grids or data grids.

1.1.1.2 High-Performance Computing

For many years, HPC systems emphasized raw speed. The speed of HPC systems increased from Gflops in the early 1990s to Pflops by 2010, driven mainly by demands from the scientific, engineering, and manufacturing communities. For example, the Top 500 most powerful computer systems in the world are ranked by their floating-point speed in Linpack benchmark results. However, supercomputer users represent less than 10% of all computer users.
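To make the notion of floating-point speed concrete, the following minimal Python sketch estimates sustained Gflops by timing a dense matrix multiply. This is not the Linpack benchmark itself (which solves a dense linear system); the matrix size, the use of NumPy, and the 2n^3 operation count for an n x n matrix multiply are illustrative assumptions.

    # Rough Gflops estimate from a dense matrix multiply (illustrative,
    # not the actual Linpack benchmark).
    import time
    import numpy as np

    n = 2048
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b                        # dense n x n matrix multiply
    elapsed = time.perf_counter() - start

    flops = 2 * n ** 3               # ~n^3 multiplies plus ~n^3 adds
    gflops = flops / (elapsed * 1e9)
    print(f"~{gflops:.1f} Gflops sustained on a {n} x {n} matrix multiply")

Under this rough measure, a typical desktop sustains on the order of tens of Gflops, while the Top 500 machines the text describes are measured in Pflops, a gap of several orders of magnitude.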
Today, the majority of computer users use desktop computers or large servers when they conduct Internet searches and market-driven computing tasks.

1.1.1.3 High-Throughput Computing

The development of market-oriented high-end computing systems is undergoing a strategic change from an HPC paradigm to an HTC paradigm. The HTC paradigm pays more attention to high-flux computing, whose main applications are Internet searches and web services accessed by millions or more users simultaneously. The performance goal thus shifts from raw speed to high throughput, measured as the number of tasks completed per unit of time. HTC technology must not only improve batch processing speed, but also address the acute problems of cost, energy savings, security, and reliability at many data and enterprise computing centers. This book addresses both HPC and HTC systems to meet the demands of all computer users.

1.1.1.4 Three New Computing Paradigms

As Figure 1.1 illustrates, with the introduction of SOA, Web 2.0 services become available. Advances in virtualization make it possible to see the growth of Internet clouds as a new computing paradigm. The maturity of radio-frequency identification (RFID), Global Positioning System (GPS), and sensor technologies has triggered the development of the Internet of Things (IoT). These new paradigms are only briefly introduced here. We will study the details of SOA...