In this session, learn how to quickly supplement your on-premises Hadoop environment with a simple, open, and collaborative cloud architecture that enables you to generate greater value with scaled application of analytics and AI on all your data. You will also learn five critical steps for a successful migration to the Databricks Lakehouse Platform along with the resources available to help you begin to re-skill your data teams.
5. History of Hadoop
● Created in 2005
● Open-source distributed processing and storage platform running on commodity hardware
● Originally consisted of HDFS and MapReduce, but now incorporates numerous open-source projects (Hive, HBase, Spark)
● Runs on-premises and in the cloud
6. Today Hadoop Is Very Hard
COMPLEX → Slow Innovation
● Many tools: need to understand multiple technologies.
● Real-time and batch ingestion to build AI models requires integrating many components.
FIXED → Cost Prohibitive
● 24/7 clusters.
● Fixed capacity: CPU + RAM + Disk.
● Costly to upgrade.
MAINTENANCE INTENSIVE → Low Productivity
● The Hadoop ecosystem is complex, hard to manage, and prone to failures.
7. Enterprises Need a Modern
Data Analytics Architecture
CRITICAL REQUIREMENTS
Cost-effective scale and performance in the cloud
Easy to manage and highly reliable for diverse data
Predictive and real-time insights to drive innovation
8. Lakehouse Platform
[Diagram: Structured, semi-structured, unstructured, and streaming data land in an Open Data Lake; a Data Management & Governance layer sits on top, supporting Data Engineering, BI & SQL Analytics, Real-time Data Applications, and Data Science & Machine Learning]
SIMPLE | OPEN | COLLABORATIVE
13. Migration Planning
Technical Planning
● Target state architecture
● Data migration
● Workload migration
○ Lift and shift, transformative, hybrid
● Data governance approach
● Automated deployment
● Monitoring and Operations
14. Migration Planning
Enablement and Evaluation
● Workshops, technical deep dives
● Training
● Proof of technology / MVP
○ Validate assumptions and designs
15. Migration Planning
Migration Execution
● Environment Deployment
● Iterate over use cases
○ Data Migration
○ Workload Migration
○ Dual Production Deployment - Old and New
○ Validation
○ Cut-over and Decommission of Hadoop
19. Hadoop Ecosystem to Databricks Concepts
[Diagram: A Hadoop cluster of N nodes. Each node runs HDFS over local disks (disk1…disk N) plus YARN, which carves a fixed pool of cores (2x12c = 24c compute per node) among Impala, HBase, MR mappers, and Spark workers/driver. Shared services: Hive Metastore, Hive Server, Impala (load balancer), HBase API, and either Sentry (table metadata + HDFS ACLs) or Ranger (policy-based access control); clients connect over JDBC/ODBC.]
Node makeup
▪ Local disks
▪ Cores/memory carved up among services
▪ Submitted jobs compete for resources
▪ Services constrained to accommodate resources
Metadata and Security
▪ Sentry table metadata permissions combined with syncing of HDFS ACLs, OR
▪ Apache Ranger, policy-based access control
Endpoints
▪ Direct access to HDFS / copied datasets
▪ Hive (on MR or Spark) accepts incoming connections
▪ Impala for interactive queries
▪ HBase APIs as required
20. Hadoop Ecosystem to Databricks Concepts
[Diagram: Side-by-side mapping. Left, the Hadoop cluster from the previous slide (HDFS over local disks, YARN sharing 2x12c = 24c compute per node among Impala, HBase, MR mappers, and Spark workers/driver; Hive Metastore, Hive Server, Impala load balancer, HBase API, Sentry/Ranger table metadata + HDFS ACLs; JDBC/ODBC). Right, Databricks: separate ephemeral clusters for all-purpose work or jobs — Spark ETL (batch/streaming), SQL Analytics (high-concurrency cluster behind a Databricks SQL endpoint over JDBC/ODBC), and ML Runtime — each with a Spark driver and Spark workers running Delta Engine over object storage; a managed Hive Metastore, Table ACLs, and object storage ACLs replace Sentry/Ranger; HBase maps to CosmosDB/DynamoDB/Keyspaces.]
21. Hadoop Ecosystem to Databricks Concepts
[Diagram: Databricks side only — ephemeral or long-running clusters for all-purpose work or jobs: Spark ETL (batch/streaming), SQL Analytics (high-concurrency cluster behind a Databricks SQL endpoint over JDBC/ODBC), and ML Runtime; each cluster has a Spark driver and Spark workers running Delta Engine; managed Hive Metastore and Table ACLs; object storage with object storage ACLs; CosmosDB/DynamoDB/Keyspaces for HBase-style workloads.]
Node makeup
▪ Each node (VM) maps to a single Spark driver/worker
▪ A cluster of nodes is completely isolated from other jobs/compute
▪ De-coupled compute and storage
Metadata and Security
▪ Managed Hive metastore (other options available)
▪ Table ACLs (Databricks) and object storage permissions
Endpoints
▪ SQL endpoint for both advanced analytics and simple SQL analytics
▪ Code access to data - Notebooks
▪ HBase → maps to Azure CosmosDB, AWS DynamoDB/Keyspaces (non-Databricks solution)
24. Data Migration
- On-premises block storage.
- Fixed disk capacity.
- Health checks to validate data integrity.
- As data volumes grow, must add more nodes to the cluster and rebalance data.
MIGRATE →
- Fully managed cloud object storage.
- Unlimited capacity.
- No maintenance, no health checks, no rebalancing.
- 99.99% availability, 99.999999999% durability.
- Use native cloud services to migrate data.
- Leverage partner solutions.
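The health checks mentioned above carry over to the migration itself: before decommissioning the source, verify that each migrated file matches its origin. A minimal, hypothetical sketch in plain Python (the function names are illustrative; real migrations would typically compare HDFS checksums against cloud-side ETags, or rely on DistCP's built-in CRC checking):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_migration(source_dir, target_dir):
    """Compare every file under source_dir against its copy under target_dir.

    Returns the list of relative paths that are missing or whose bytes differ."""
    mismatches = []
    src = Path(source_dir)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        copy = Path(target_dir) / rel
        if not copy.is_file() or sha256_of(f) != sha256_of(copy):
            mismatches.append(str(rel))
    return mismatches
```

An empty result means every file landed intact; anything returned is a candidate for re-copy before cut-over.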
25. Data Migration
Build a Data Lake in cloud storage with Delta Lake
● Open source and uses Parquet file format.
● Performance: Data indexing → Faster queries.
● Reliability: ACID Transactions → Guaranteed data integrity.
● Scalability: Handle petabyte-scale tables with billions of partitions and files with ease.
● Enhanced Spark SQL: UPDATE, MERGE, and DELETE commands.
● Unify Batch and Stream processing → No more Lambda architecture.
● Schema Enforcement: Specify schema on write.
● Schema Evolution: Automatically change schemas on the fly.
● Audit History: Full audit trail of the changes.
● Time Travel: Restore data from past versions.
● 100% Compatible with Apache Spark API.
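Of the commands above, MERGE is the one with no HDFS-era equivalent: it gives upsert semantics on the data lake. The behavior it guarantees can be sketched in plain Python (illustrative only — on Databricks this is a single atomic `MERGE INTO target USING updates ON ...` statement, not row-by-row code):

```python
def merge_upsert(target, updates, key):
    """Illustrate Delta MERGE semantics on lists of dicts:
    rows in `updates` overwrite matching rows in `target`
    (WHEN MATCHED -> UPDATE) and new keys are appended
    (WHEN NOT MATCHED -> INSERT). `key` names the join column."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key[row[key]] = dict(row)  # update if matched, insert otherwise
    return list(by_key.values())
```

The point of the ACID guarantee is that readers see either the table before the merge or after it, never a half-applied mix.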
26. Start with Dual ingestion
● Add a feed to cloud storage
● Enable new use cases with new data
● Introduces options for backup
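A dual-ingestion feed can start as nothing more than writing each incoming record to both destinations. A hypothetical sketch (the function, the paths, and the use of a local directory as a stand-in for cloud storage are all illustrative assumptions; a production feed would use the existing delivery tool, Kafka, or similar):

```python
import json
from pathlib import Path

def dual_ingest(record, hdfs_landing, cloud_landing):
    """Write one record to the existing (HDFS) landing zone and, additionally,
    to the new cloud landing zone. A cloud-side failure is reported but must
    not break the established on-prem feed during the transition."""
    payload = json.dumps(record)
    name = f"{record['id']}.json"
    Path(hdfs_landing, name).write_text(payload)       # existing feed
    try:
        Path(cloud_landing, name).write_text(payload)  # new cloud feed
    except OSError as e:
        print(f"cloud ingest failed, will retry: {e}")
    return name
```

Because the cloud copy is additive, it doubles as a backup of newly arriving data from day one.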
27. How to migrate data
● Leverage existing Data Delivery tools to point to cloud storage
● Introduce simplified flows to land data into cloud storage
28. How to migrate data
● Push the data
○ DistCP
○ 3rd Party Tooling
○ In-house frameworks
○ Cloud Native - Snowmobile, Azure Data Box, Google Transfer Appliance
○ Typically easier to approve (security)
● Pull the data
○ Spark Streaming
○ Spark Batch
■ File Ingest
■ JDBC
○ 3rd Party Tooling
29. How to migrate data - Pull approach
● Set up connectivity to On Premises
○ AWS Direct Connect
○ Azure ExpressRoute / VPN Gateway
○ This may be needed for some use cases
● Kerberized Hadoop Environments
○ Databricks clusters initialization scripts
■ Kerberos client setup
■ krb5.conf, keytab
■ kinit()
● Shared External Metastore
○ Databricks and Hadoop can share a metastore
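Sharing a metastore means pointing Databricks at the same backing database Hive already uses. A hypothetical Spark-config fragment for a Databricks cluster (the host, database, and credential values are placeholders; the property names are the standard Hive/JDO ones, and the metastore version must match your Hadoop installation's):

```properties
spark.sql.hive.metastore.version 2.3.7
spark.sql.hive.metastore.jars builtin
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://metastore-host:3306/hive_metastore
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <username>
spark.hadoop.javax.jdo.option.ConnectionPassword <password>
```

With this in place, tables created on either side are immediately visible to the other, which is what makes dual production deployment workable.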
39. Security and Governance
Authentication
- Single Sign-On (SSO) with a SAML 2.0-supported corporate directory.
- Leverage cloud-native security: IAM Federation and AAD passthrough.
Authorization
- Access Control Lists (ACLs) for Databricks RBAC.
- Table ACLs - Dynamic Views for column/row permissions.
- Integration with Ranger and Immuta for more advanced RBAC and ABAC.
Metadata Management
- Integration with 3rd party services, e.g. AWS Glue.
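Dynamic views enforce column/row permissions with functions such as `is_member()` evaluated at query time. A sketch that only builds the SQL text (the view, table, column, and group names are hypothetical; on Databricks the resulting string would be run with `spark.sql(...)`):

```python
def column_mask_view(view, table, sensitive_col, other_cols, allowed_group):
    """Build a dynamic-view statement that reveals a sensitive column only to
    members of allowed_group and masks it for everyone else."""
    cols = ", ".join(other_cols)
    return (
        f"CREATE OR REPLACE VIEW {view} AS\n"
        f"SELECT {cols},\n"
        f"  CASE WHEN is_member('{allowed_group}') THEN {sensitive_col}\n"
        f"       ELSE '***REDACTED***' END AS {sensitive_col}\n"
        f"FROM {table}"
    )
```

Readers query the view rather than the base table, so the masking travels with every downstream tool that connects over JDBC/ODBC.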
41. Migrating Security Policies from
Hadoop to Databricks
Enabling enterprises to responsibly use their data in the cloud
Powered by Apache Ranger
42. HADOOP ECOSYSTEM
● 100s and 1000s of tables in Apache Hive
● 100s of policies in Apache Ranger
● Variety of policies: resource-based, tag-based, masking, row-level filters, etc.
● Policies for users and groups from AD/LDAP
45. Privacera Value Add - Enhancing Databricks Authorization
● Richer, deeper, and more robust access control
● Row/column-level access control in SQL
● Dynamic and static data de-identification
● File-level access control for DataFrames, object-level access
● Read/write operations supported

Object Store (S3/ADLS)   | Privacera + Databricks
S3 - Bucket Level        | Y
S3 - Object Level        | Y
ADLS                     | Y

Spark SQL and R          | Privacera + Databricks
Table                    | Y
Column                   | Y
Column Masking           | Y
Row Level Filtering      | Y
Tag Based Policies       | Y
Attribute Based Policies | Y
Centralized Auditing     | Y
46. [Diagram: Privacera + Databricks integration — a Databricks SQL/Python cluster whose Spark driver runs a Ranger plugin governing the Spark executors for Spark SQL and/or Spark read/write against the object store; Privacera Cloud hosts the Ranger Policy Manager, Privacera Portal, Audit Server (DB + Solr), Privacera Discovery, anomaly detection and alerting, and an approval workflow for business and admin users; audits stream via Apache Kafka to Splunk, CloudWatch, or a SIEM; policies sync from AD/LDAP and 3rd-party catalogs.]
49. What about the SQL Community?
Hadoop
● HUE
○ Data browsing
○ SQL Editor
○ Visualizations
● Interactive SQL
○ Impala
○ Hive LLAP
Databricks
● SQL Analytics Workspace
○ Data Browser
○ SQL Editor
○ Visualizations
● Interactive SQL
○ Spark optimizations - Adaptive Query Execution
○ Advanced Caching
○ Project Photon
○ Scaling cluster of clusters
50. SQL & BI Layer
Optimized SQL and BI Performance
- Fast queries with Delta Engine on Delta Lake.
- Support for high concurrency with auto-scaling clusters.
Tuned BI Integrations
- Optimized JDBC/ODBC drivers.
- Optimized and tuned for BI and SQL out of the box.
- Compatible with any BI client and tool that supports Spark.
51. Vision
Give SQL users a home in Databricks
Provide a SQL workbench, light dashboarding, and alerting capabilities.
Great BI experience on the data lake
Enable companies to effectively leverage the data lake from any BI tool without having to move the data around.
Easy to use & price-performant
Minimal setup & configuration. Data lake price performance.
52. SQL-native user interface for analysts
▪ Familiar SQL Editor
  ▪ Auto-complete
  ▪ Built-in visualizations
  ▪ Data Browser
▪ Automatic Alerts
  ▪ Trigger based upon values
  ▪ Email or Slack integration
▪ Dashboards
  ▪ Simply convert queries to dashboards
  ▪ Share with access control
53. Built-in connectors for existing BI tools
▪ Supports your favorite tool
  ▪ Connectors for top BI & SQL clients, plus other BI & SQL clients that support Spark
  ▪ Simple connection setup
  ▪ Optimized performance
▪ OAuth & Single Sign-On
  ▪ Quick and easy authentication experience. No need to deal with access tokens.
▪ Power BI available now; others coming soon
54. Performance
Delta Metadata Performance
Improved read performance for cold queries on Delta tables. Provides interactive metadata performance regardless of the number of Delta tables in a query or table sizes.
New ODBC / JDBC Drivers
Wire protocol re-engineered to provide lower latencies & higher data transfer speeds:
▪ Lower latency / less overhead (~¼ sec) with reduced round trips per request
▪ Higher transfer rate (up to 50%) using Apache Arrow
▪ Optimized metadata performance for ODBC/JDBC APIs (up to 10x for metadata retrieval operations)
Photon - Delta Engine [Preview]
New MPP engine built from scratch in C++. Vectorized to exploit data-level parallelism and instruction-level parallelism. Optimized for modern structured and semi-structured workloads.