
The Azure Data Lakehouse Toolkit: Building and Scaling Data Lakehouses on Azure with Delta Lake, Apache Spark, Databricks, Synapse Analytics, and Snowflake

By (author): Ron C. L'Esteve
Language: English | Paperback – 14 Jul 2022
Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the Data Lakehouse Paradigm and how to efficiently design a cloud-based data lakehouse using highly performant, cutting-edge Apache Spark capabilities on Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure, and you will follow practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance and to secure, share, and manage high-volume, high-velocity, high-variety data in your lakehouse with ease.
The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing, scalable, ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute. Extensive coverage of Delta Lake ensures that you are aware of, and can benefit from, all that this new open-source storage layer can offer. In addition to the book's deep examples on Databricks, there is coverage of alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs.
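To make the batch-and-streaming ELT claim concrete, here is a minimal PySpark sketch of a streaming ingestion job using Auto Loader, one of the patterns the book covers in Part VI. It assumes a Databricks runtime (the cloudFiles source is Databricks-specific), and every path and column name in it is hypothetical.

```python
# A minimal sketch, assuming a Databricks runtime, of a streaming ELT job
# using Auto Loader (the cloudFiles source). All paths, formats, and
# column names here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp

spark = SparkSession.builder.getOrCreate()

# Extract: incrementally discover new files landing in cloud storage.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
    .load("/mnt/lake/landing/orders")
)

# Load (with a light transform): continuously append into a Delta table.
(
    raw.withColumn("ingested_at", current_timestamp())
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
    .outputMode("append")
    .start("/mnt/lake/bronze/orders")
)
```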

After reading this book, you will be able to implement Delta Lake capabilities, including Schema Evolution, Change Feed, Live Tables, Sharing, and Clones to enable better business intelligence and advanced analytics on your data within the Azure Data Platform.
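As a hedged illustration of two of those Delta Lake capabilities, the sketch below appends data with schema evolution enabled and then reads the table's change data feed. The table path is hypothetical, and reading the feed assumes the table property delta.enableChangeDataFeed was set to true beforehand.

```python
# A minimal sketch of Delta Lake schema evolution and the change data feed.
# Assumes a Spark session with the delta-spark package available; the table
# path below is hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Schema evolution: mergeSchema lets this append add columns that the
# existing table does not yet have, instead of failing the write.
new_rows = spark.createDataFrame([(1, "widget", 9.99)], ["id", "name", "price"])
(
    new_rows.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save("/mnt/lake/silver/products")
)

# Change data feed: read row-level inserts, updates, and deletes, assuming
# delta.enableChangeDataFeed = true was set on the table.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)
    .load("/mnt/lake/silver/products")
)
changes.show()
```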

What You Will Learn
  • Implement the Data Lakehouse Paradigm on Microsoft’s Azure cloud platform
  • Benefit from the new Delta Lake open-source storage layer for data lakehouses 
  • Take advantage of schema evolution, change feeds, live tables, and more
  • Write functional PySpark code for data lakehouse ELT jobs
  • Optimize Apache Spark performance through partitioning, indexing, and other tuning options (see the sketch after this list)
  • Choose between alternatives such as Databricks, Synapse Analytics, and Snowflake
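
For the tuning bullet above, here is a short sketch of two of those options: partitioning a Delta table on write, which enables partition pruning, and Z-ordering it for better data skipping. Table and column names are hypothetical, and the OPTIMIZE ... ZORDER BY command assumes a Databricks or otherwise Delta-enabled runtime that supports it.

```python
# A hedged sketch of two Spark/Delta tuning options: partitioning and
# Z-order clustering. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Partition on a low-cardinality column so selective queries can skip
# whole partitions (and star-schema joins can prune them dynamically).
(
    spark.table("bronze_events")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver_events")
)

# Z-order clusters the data files on a high-cardinality column, improving
# data skipping for selective filters on that column.
spark.sql("OPTIMIZE silver_events ZORDER BY (customer_id)")
```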

Who This Book Is For

Data, analytics, and AI professionals at all levels, including data architects and data engineers. It is also for data professionals seeking patterns of success that keep them relevant as they learn to build scalable data lakehouses for organizations and customers migrating to the modern Azure Data Platform.

Price: 267.53 lei

Old price: 334.41 lei
-20%

Express points: 401

Estimated price in foreign currency:
51.49€ • 49.19$ • 46.02£

Printed on demand

Economy delivery: 17 November – 1 December
Express delivery: 11–19 October for 76.17 lei


Specifications

ISBN-13: 9781484282328
ISBN-10: 1484282329
Illustrations: XXII, 465 p., 365 illus.
Dimensions: 178 x 254 mm
Weight: 0.84 kg
Edition: 1st ed.
Publisher: Apress
Series: Apress
Place of publication: Berkeley, CA, United States

Table of Contents

Introduction

Part I. Getting Started
1. The Lakehouse Paradigm
2. Mount Lakes to Databricks

Part II. Lakehouse Platforms
3. Snowflake Data Warehouse
4. Synapse Analytics Serverless Pools
5. Databricks SQL Analytics

Part III. Apache Spark
6. PySpark
7. Extract, Load, Transform Jobs

Part IV. Delta Lake
8. Delta Schema Evolution
9. Delta Change Feed
10. Delta Clones
11. Delta Live Tables
12. Delta Sharing

Part V. Optimizing Performance
13. Dynamic Partition Pruning for Querying Star Schemas
14. Z-Ordering and Data Skipping
15. Adaptive Query Execution
16. Bloom Filter Index
17. Hyperspace

Part VI. Lakehouse Capabilities
18. Auto Loader Resource Management
19. Advanced Schema Evolution with Auto Loader 
20. Python Wheels
21. Security and Controls
22. Unity Catalog


About the Author

Ron C. L'Esteve is a professional author, trusted technology leader, and digital innovation strategist based in Chicago, IL, USA. He is well known for his impactful books and award-winning articles on Azure Data & AI architecture and engineering, and he has deep technical skills and experience in designing, implementing, and delivering modern Azure Data & AI projects for clients around the world.
With several Azure Data, AI, and Lakehouse certifications to his name, Ron has been a go-to technical advisor for some of the largest and most impactful Azure implementation projects in the world. He has been responsible for scaling key data architectures and for defining the road map and strategy for future data and business intelligence needs. He challenges customers to grow by thoroughly understanding fluid business opportunities and translating them into high-quality, sustainable technical solutions that solve complex challenges and promote digital innovation and transformation.

Ron is a gifted presenter and trainer, known for his innate ability to clearly articulate and explain complex topics to audiences of all skill levels. He applies a practical and business-oriented approach by taking transformational ideas from concept to scale. He is a true enabler of positive and impactful change by championing a growth mindset.

Features

  • Shows data lakehouse design using Apache Spark on Azure
  • Teaches performance optimization techniques for Spark queries
  • Provides hands-on PySpark and Delta Lake examples for lakehouse ELT jobs