Shulph delivers to the United Kingdom.
Book cover for Big Data Processing with Apache Spark, a book by Manuel Ignacio Franco Galeano

Big Data Processing with Apache Spark

Efficiently tackle large datasets and big data analysis with Spark and Python
2018


Powered by RoundRead®
This book leverages Shulph's RoundRead system - buy the book once and read it both in print and on up to 5 of your personal devices. With RoundRead, you're 4 times more likely to read this book cover to cover and up to 3 times faster.
Book £ 25.99
Book + eBook £ 31.19
eBook Only £ 19.03
Add to Read List


Instant access to ebook. Print book delivers in 5 - 20 working days.

Summary


No need to spend hours ploughing through endless data – let Spark, one of the fastest big data processing engines available, do the hard work for you.


Key Features


  • Get up and running with Apache Spark and Python

  • Integrate Spark with AWS for real-time analytics

  • Apply processed data streams to Spark's machine learning APIs


Book Description


Processing big data in real time is challenging because of scalability, information consistency, and fault-tolerance requirements. This book teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore all the core concepts and tools within the Spark ecosystem, such as Spark Streaming and its API, the machine learning extension, and structured streaming.


You'll begin by learning data processing fundamentals using the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming API to consume data in real time from TCP sockets, and you'll integrate Amazon Web Services (AWS) for stream consumption.


By the end of this book, you'll not only understand how to use machine learning extensions and structured streaming, but you'll also be able to apply Spark in your own upcoming big data projects.


What you will learn


  • Write your own Python programs that can interact with Spark

  • Implement data stream consumption using Apache Spark

  • Recognize common operations in Spark to process known data streams

  • Integrate Spark Streaming with Amazon Web Services (AWS)

  • Create a collaborative filtering model with the MovieLens dataset

  • Apply processed data streams to Spark machine learning APIs

Who this book is for


Big Data Processing with Apache Spark is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. Although you don't need any knowledge of Spark, prior experience with Python is recommended.