Optimizing Databricks Workloads
  • Author: Anirudh Kala
  • Publisher:
  • ISBN-10: 1801819076
  • ISBN-13: 9781801819077
  • Format: 19.1 x 23.5 x 1.2 cm, paperback
  • Language: English


Reviews

(4.00 Goodreads rating)

Description

Accelerate computations and make the most of your data effectively and efficiently on Databricks


Key Features:

  • Understand Spark optimizations for big data workloads to maximize performance
  • Build efficient big data engineering pipelines with Databricks and Delta Lake
  • Efficiently manage Spark clusters for big data processing

Book Description:

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud.

In Optimizing Databricks Workloads, you will start with a brief introduction to Azure Databricks and quickly move on to the important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. It also walks through real-world scenarios where optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains.

By the end of this book, you will be prepared with the necessary toolkit to speed up your Spark jobs and process your data more efficiently.
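
To give a flavour of the DataFrame-level tuning the book discusses, below is a minimal PySpark sketch of two widely used techniques: caching a DataFrame that is reused across several actions, and broadcasting a small dimension table to avoid shuffling a large one during a join. The table paths and column names are illustrative placeholders, not examples taken from the book.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Hypothetical Delta tables: a large fact table and a small dimension table.
sales = spark.read.format("delta").load("/mnt/data/sales")      # placeholder path
regions = spark.read.format("delta").load("/mnt/data/regions")  # placeholder path

# Cache a DataFrame that several downstream actions reuse, so it is not recomputed.
sales.cache()

# Broadcast the small table so the join avoids shuffling the large fact table.
enriched = sales.join(broadcast(regions), on="region_id", how="left")

enriched.groupBy("region_name").sum("amount").show()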


What You Will Learn:

  • Get to grips with Spark fundamentals and the Databricks platform
  • Process big data using the Spark DataFrame API with Delta Lake
  • Analyze data using graph processing in Databricks
  • Use MLflow to manage machine learning life cycles in Databricks
  • Find out how to choose the right cluster configuration for your workloads
  • Explore file compaction and clustering methods to tune Delta tables (sketched briefly after this list)
  • Discover advanced optimization techniques to speed up Spark jobs
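
The file compaction and clustering item above refers to Delta Lake's OPTIMIZE and ZORDER BY commands, which are available on Databricks. A minimal sketch, with a hypothetical table name and column:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files in the (hypothetical) "events" Delta table and cluster the
# data by event_date so that queries filtering on that column skip more files.
spark.sql("OPTIMIZE events ZORDER BY (event_date)")

# Optionally remove files no longer referenced by the table (subject to the
# default 7-day retention period).
spark.sql("VACUUM events")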


Who this book is for:

This book is for data engineers, data scientists, and cloud architects who have a working knowledge of Spark/Databricks and a basic understanding of data engineering principles. A working knowledge of Python is required, and some experience with SQL, PySpark, and Spark SQL is beneficial.
