Mishra | PySpark Recipes | Book | 978-1-4842-3140-1 | www.sack.de

Book, English, 265 pages, format (W × H): 155 mm × 235 mm, weight: 4453 g

Mishra

PySpark Recipes

A Problem-Solution Approach with PySpark2
1st edition, 2017
ISBN: 978-1-4842-3140-1
Publisher: Apress





Quickly find solutions to common programming problems encountered while processing big data. Content is presented in the popular problem-solution format: look up the programming problem you want to solve, read the solution, and apply it directly in your own code. Problem solved!
PySpark Recipes covers Hadoop and its shortcomings. The architectures of Spark, PySpark, and the RDD are presented, and you will learn to apply RDDs to solve day-to-day big data problems. Python and NumPy are included, making it easy for new learners of PySpark to understand and adopt the model.

What You Will Learn  
  • Understand the advanced features of PySpark2 and SparkSQL
  • Optimize your code
  • Program SparkSQL with Python
  • Use Spark Streaming and Spark MLlib with Python
  • Perform graph analysis with GraphFrames
Who This Book Is For
Data analysts, Python programmers, and big data enthusiasts


Target audience


Professional/practitioner


Authors/Editors


Further information & material


Chapter 1: The Era of Big Data and Hadoop
Chapter goal: The reader learns about big data and its usefulness, how Hadoop and its ecosystem are able to process big data into useful information, and which shortcomings of Hadoop call for another big data processing platform.
Number of pages: 15-20
Subtopics:
1. Introduction to big data
2. Big data challenges and processing technology
3. Hadoop, its structure and ecosystem
4. Shortcomings of Hadoop
Chapter 2: Python, NumPy, and SciPy
Chapter goal: Get the reader acquainted with Python, NumPy, and SciPy.
Number of pages: 25-30
Subtopics:
1. Introduction to Python
2. Python collections, string functions, and classes
3. NumPy and ndarray
4. SciPy
Chapter 3: Spark: Introduction, Installation, Structure, and PySpark
Chapter goal: This chapter introduces Spark and its installation on a single machine, then continues with the structure of Spark. Finally, PySpark is introduced.
Number of pages: 15-20
Subtopics:
1. Introduction to Spark
2. Spark installation on Ubuntu
3. Spark architecture
4. PySpark and its architecture
Chapter 4: Resilient Distributed Datasets (RDDs)
Chapter goal: This chapter deals with the core of Spark, the RDD, and operations on RDDs.
Number of pages: 25-30
Subtopics:
1. Introduction to RDDs and their characteristics
2. Transformations and actions
3. Operations on RDDs (map, filter, set operations, and many more)
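As a rough taste of the Chapter 4 material: RDD transformations such as map and filter behave like lazy, distributed versions of Python's built-ins of the same name. A minimal stdlib-only sketch of what such a chain computes (no Spark installation assumed; in PySpark this would be roughly `sc.parallelize(data).map(...).filter(...).collect()`):

```python
# Conceptual sketch: what an RDD map/filter chain computes, in plain Python.
data = [1, 2, 3, 4, 5, 6]

# map: apply a function to every element (here, square each number)
squared = list(map(lambda x: x * x, data))

# filter: keep only the elements matching a predicate (even squares)
even_squares = list(filter(lambda x: x % 2 == 0, squared))

print(even_squares)  # [4, 16, 36]
```

Unlike this eager sketch, Spark evaluates transformations lazily and only computes results when an action such as collect() is called.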
Chapter 5: The Power of Pairs: Paired RDDs
Chapter goal: Paired RDDs make many complex computations easy to program. Learners study paired RDDs and the operations on them.
Number of pages: 15-20
Subtopics:
1. Introduction to paired RDDs
2. Operations on paired RDDs (mapByKey, reduceByKey, ...)
Chapter 6: Advanced PySpark and PySpark Application Optimization
Chapter goal: The reader learns about the advanced PySpark topics broadcast and accumulator, as well as PySpark application optimization.
Number of pages: 30-35
Subtopics:
1. Spark accumulators
2. Spark broadcast
3. Spark code optimization
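To illustrate the central Chapter 5 operation: reduceByKey groups the values of a paired RDD by key and combines each group with a binary function. A stdlib-only sketch of that semantics (the helper name `reduce_by_key` is our own; real PySpark code would call `rdd.reduceByKey(operator.add)` on a distributed RDD):

```python
from collections import defaultdict
from operator import add

def reduce_by_key(pairs, func):
    """Mimic PySpark's reduceByKey on a list of (key, value) pairs."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Combine each key's values pairwise with the supplied binary function.
    out = {}
    for key, values in groups.items():
        acc = values[0]
        for v in values[1:]:
            acc = func(acc, v)
        out[key] = acc
    return out

# The classic word-count pattern: sum the 1s attached to each word.
word_counts = reduce_by_key([("a", 1), ("b", 1), ("a", 1)], add)
print(word_counts)  # {'a': 2, 'b': 1}
```

In Spark the same combining happens per partition first and the partial results are then merged, which is why the function must be associative.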
Chapter 7: I/O in PySpark
Chapter goal: We cover PySpark I/O in this chapter: reading and writing .csv and .json files, and connecting to different databases from PySpark.
Number of pages: 20-30
Subtopics:
1. Reading and writing JSON and .csv files
2. Reading data from HDFS
3. Reading data from and writing data to different databases
Chapter 8: PySpark Streaming
Chapter goal: The reader comes to understand real-time data analysis with PySpark Streaming. This chapter focuses on the PySpark Streaming architecture, discretized stream operations, and windowing operations.
Number of pages: 30-40
Subtopics:
1. PySpark Streaming architecture
2. Discretized streams and operations
3. The concept of windowing operations
Chapter 9: SparkSQL
Chapter goal: In this chapter the reader learns about SparkSQL. The SparkSQL DataFrame is introduced, and the learner sees how to run SQL commands using SparkSQL.
Number of pages: 40-50
Subtopics:
1. SparkSQL
2. SQL with SparkSQL
3. Hive commands with SparkSQL
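The Chapter 9 workflow is: register tabular data, then query it with SQL strings. In SparkSQL that is `df.createOrReplaceTempView("people")` followed by `spark.sql("SELECT ...")`; the query shape can be sketched without any Spark dependency using stdlib sqlite3 (table and column names here are illustrative):

```python
import sqlite3

# Stand-in for a registered temp view: an in-memory SQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ana", 34), ("Bo", 19), ("Cy", 45)])

# Same SELECT/WHERE/ORDER BY shape you would pass to spark.sql(...).
rows = conn.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY name"
).fetchall()
print(rows)  # [('Ana',), ('Cy',)]
conn.close()
```

The difference, of course, is that SparkSQL plans and executes such queries over distributed DataFrames rather than a single local database file.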


Raju Mishra has strong interests in data science and in systems capable of handling large amounts of data and running complex mathematical models through computational programming. This inspired him to pursue an M.Tech in computational sciences at the Indian Institute of Science in Bangalore, India. Raju works primarily in data science and its various applications. As a corporate trainer, he has developed unique insights that help him teach and explain complex ideas with ease. Raju is also a data science consultant who solves complex industrial problems. He works with programming tools such as R, Python, scikit-learn, Statsmodels, Hadoop, Hive, Pig, Spark, and many others.


