Stanford CS347 Guest Lecture
Spark
Reynold Xin @rxin
Guest lecture, May 18, 2015, Stanford
Who am I?
Reynold Xin
Spark PMC member
Databricks cofounder & architect
UC Berkeley AMPLab PhD (on leave)
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
Google Datacenter
How do we program this thing?
Traditional Network Programming
Message-passing between nodes (MPI, RPC, etc)
Really hard to do at scale:
• How to split problem across nodes?
– Important to consider network and data locality
• How to deal with failures?
– If a typical server fails every 3 years, a 10,000-node cluster sees 10 faults/day!
• Even without failures: stragglers (a node is slow)
Almost nobody does this!
Data-Parallel Models
Restrict the programming interface so that the system
can do more automatically
“Here’s an operation, run it on all of the data”
• I don’t care where it runs (you schedule that)
• In fact, feel free to run it twice on different nodes
MapReduce Programming Model
Data type: key-value records
Map function:
(K_in, V_in) -> list(K_inter, V_inter)
Reduce function:
(K_inter, list(V_inter)) -> list(K_out, V_out)
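For concreteness, here is the canonical word count expressed in this model; a minimal Python sketch (the function names and generator style are illustrative, not part of the slides):

# Word count in the MapReduce model.
def map_fn(key, line):            # (K_in, V_in): e.g. (file offset, line of text)
    for word in line.split():
        yield (word, 1)           # emits (K_inter, V_inter) pairs

def reduce_fn(word, counts):      # (K_inter, list(V_inter))
    yield (word, sum(counts))     # emits (K_out, V_out)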
MapReduce Programmability
Most real applications require multiple MR steps
• Google indexing pipeline: 21 steps
• Analytics queries (e.g. count clicks & top K): 2 – 5 steps
• Iterative algorithms (e.g. PageRank): 10’s of steps
Multi-step jobs create spaghetti code
• 21 MR steps -> 21 mapper and reducer classes
• Lots of boilerplate code per step
Problems with MapReduce
MapReduce use cases showed two major limitations:
1.  Difficulty of programming directly in MR
2.  Performance bottlenecks
In short, MR doesn’t compose well for large applications
Therefore, people built high-level frameworks and specialized systems.
Higher Level Frameworks
SQL (Hive): SELECT count(*) FROM users

Pig: A = load 'foo';
     B = group A all;
     C = foreach B generate COUNT(A);

In reality, 90+% of MR jobs are generated by Hive SQL
Specialized Systems
MapReduce: general batch processing
Specialized systems: iterative, interactive, streaming, graph, etc.
Pregel, Giraph, Dremel, Drill, Tez, Impala, GraphLab, Storm, S4, F1, MillWheel
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
Spark: A Brief History
2002: MapReduce @ Google
2004: MapReduce paper
2006: Hadoop @ Yahoo!
2008: Hadoop Summit
2010: Spark paper
2014: Apache Spark becomes an Apache top-level project
Spark Summary
Unlike the various specialized systems, Spark’s goal was
to generalize MapReduce to support new apps
Two small additions are enough:
• fast data sharing
• general DAGs
The result: a more efficient engine that is also simpler for end users.
Spark Ecosystem
[Image: logos of projects in the Spark ecosystem. Note: not a scientific comparison.]
Programmability
WordCount in 50+ lines of Java MR
WordCount in 3 lines of Spark
Performance
Time to sort 100 TB (Daytona GraySort benchmark, sortbenchmark.org):
• 2013 record (Hadoop): 2,100 machines, 72 minutes
• 2014 record (Spark): 207 machines, 23 minutes
Spark also sorted 1 PB in 4 hours.
RDD: Core Abstraction
Write programs in terms of distributed datasets and operations on them.
Resilient Distributed Datasets
• Collections of objects spread across a cluster, stored in RAM or on disk
• Built through parallel transformations
• Automatically rebuilt on failure
Operations
• Transformations (e.g. map, filter, groupBy)
• Actions (e.g. count, collect, save)
RDD
Resilient Distributed Datasets are the primary abstraction in Spark – a fault-tolerant collection of elements that can be operated on in parallel
Two types:
• Parallelized collections – take an existing single-node collection and parallelize it
• Hadoop datasets – files on HDFS or other compatible storage
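A minimal PySpark sketch of both ways to create an RDD (assuming a SparkContext `sc`, as in the shell; the path is a placeholder):

# Parallelized collection: distribute a local list across the cluster.
nums = sc.parallelize([1, 2, 3, 4, 5])

# Hadoop dataset: one partition per HDFS block of the file.
lines = sc.textFile("hdfs://namenode/path/to/file.txt")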
Operations on RDDs
Transformations: f(RDD) => RDD
• Lazy (not computed immediately)
• E.g. map
Actions:
• Trigger computation
• E.g. count, saveAsTextFile
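To see the distinction, a small sketch: the filter below only builds a plan, and nothing runs until the count (variable names are illustrative):

nums = sc.parallelize(range(10))
evens = nums.filter(lambda x: x % 2 == 0)  # transformation: lazy, nothing runs yet
print(evens.count())                       # action: triggers the job, prints 5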
Working With RDDs
textFile = sc.textFile("SomeFile.txt")                          # base RDD
linesWithSpark = textFile.filter(lambda line: "Spark" in line)  # transformation -> new RDD
linesWithSpark.count()    # action -> value: 74
linesWithSpark.first()    # action -> value: '# Apache Spark'
Example: Log Mining
Load error messages from a log into memory, then interactively search for various patterns:

lines = spark.textFile("hdfs://...")                     # base RDD
errors = lines.filter(lambda s: s.startswith("ERROR"))   # transformed RDD
messages = errors.map(lambda s: s.split("\t")[2])
messages.cache()

messages.filter(lambda s: "mysql" in s).count()          # action
messages.filter(lambda s: "php" in s).count()

[Animation: the driver ships tasks to three workers; each worker reads its HDFS block, processes and caches its partition of messages, and returns results to the driver. The second count is served entirely from the cached partitions, so no HDFS read is needed.]

Cache your data → faster results
Full-text search of Wikipedia:
•  60 GB on 20 EC2 machines
•  0.5 sec from memory vs. 20 s on-disk
Language Support
Standalone programs: Python, Scala, & Java
Interactive shells: Python & Scala
Performance: Java & Scala are faster due to static typing, but Python is often fine.

Python
lines = sc.textFile(...)
lines.filter(lambda s: "ERROR" in s).count()

Scala
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()

Java
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) {
    return s.contains("ERROR");
  }
}).count();
Expressive API
Far beyond map and reduce:
map, filter, groupBy, sort, union, join, leftOuterJoin, rightOuterJoin,
reduce, count, fold, reduceByKey, groupByKey, cogroup, cross, zip,
sample, take, first, partitionBy, mapWith, pipe, save, ...
Fault Recovery
RDDs track lineage information that can be used to efficiently reconstruct lost partitions.
Ex: messages = textFile(...).filter(_.startsWith("ERROR"))
                            .map(_.split("\t")(2))
Lineage: HDFS File → [filter(func = _.contains(...))] → Filtered RDD → [map(func = _.split(...))] → Mapped RDD
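You can inspect this lineage from PySpark; a small sketch (the path is a placeholder):

messages = (sc.textFile("hdfs://.../logs.txt")
              .filter(lambda s: s.startswith("ERROR"))
              .map(lambda s: s.split("\t")[2]))
# Prints the chain of parent RDDs; a lost partition is rebuilt by
# replaying exactly these steps on the corresponding input block.
print(messages.toDebugString())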
Fault Recovery Results
[Chart: iteration time (s) over 10 iterations of a job: 119, 57, 56, 58, 58, 81, 57, 59, 57, 59. A failure happens during iteration 6; lineage-based recovery raises that iteration from ~58 s to 81 s, after which times return to normal.]
Example: Logistic Regression
Goal: find the best line separating two sets of points.
[Figure: a plane of + and – points, a random initial line, and the target separating line the algorithm converges to.]
Example: Logistic Regression
val data = spark.textFile(...).map(readPoint).cache()

var w = Vector.random(D)
for (i <- 1 to ITERATIONS) {
  val gradient = data.map(p =>
    (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient
}
println("Final w: " + w)

// w is automatically shipped to the cluster
LR / K-Means Performance
Time per iteration on 10B points:
• Logistic Regression: Hadoop MR 110 s vs. Spark 0.96 s
• K-Means Clustering: Hadoop MR 155 s vs. Spark 4.1 s
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
Generality of RDDs
Spark Core provides RDDs, transformations, and actions. Each library builds its own abstraction on top of them:
• Spark Streaming (real-time): DStreams, i.e. streams of RDDs
• Spark SQL: SchemaRDDs
• MLlib (machine learning): RDD-based matrices
• GraphX (graph): RDD-based graphs
• …
Spark Streaming: Motivation
Many important apps must process large data streams at second-scale latencies
• Site statistics, intrusion detection, online ML
To build and scale these apps users want:
• Integration: with offline analytical stack
• Fault-tolerance: both for crashes and stragglers
• Efficiency: low cost beyond base processing
Discretized Stream Processing
[Diagram: at each time step (t = 1, t = 2, …) the system pulls input from the incoming streams; the data arriving in that interval forms an immutable dataset (stored reliably), a batch operation transforms it, and the result is another immutable dataset (output or state) stored in memory as an RDD.]
Programming Interface
Simple functional API:

views = readStream("http:...", "1s")
ones = views.map(ev => (ev.url, 1))
counts = ones.runningReduce(_ + _)

Interoperates with RDDs:

// Join stream with static RDD
counts.join(historicCounts).map(...)

// Ad-hoc queries on stream state
counts.slice("21:00", "21:05").topK(10)

[Diagram: at t = 1 and t = 2, the views, ones, and counts DStreams are drawn as chains of RDDs (one box per partition) linked by map and reduce.]
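The slide's API is schematic; a runnable word-count sketch against the actual PySpark Streaming API (host, port, and batch interval are placeholders) looks like this:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StreamingWordCount")
ssc = StreamingContext(sc, batchDuration=1)   # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()                               # print each batch's counts

ssc.start()
ssc.awaitTermination()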
Inherited “for free” from Spark
• RDD data model and API
• Data partitioning and shuffles
• Task scheduling
• Monitoring/instrumentation
• Scheduling and resource allocation
Powerful Stack – Agile Development
[Bar chart: non-test, non-example source lines, 0–140,000 scale. Standalone systems (Hadoop MapReduce, Storm for streaming, Impala for SQL, Giraph for graphs) each require large codebases, while Spark's Streaming, SparkSQL, and GraphX components are comparatively tiny layers on the shared core. Your app?]
Benefits for Users
High performance data sharing
• Data sharing is the bottleneck in many environments
• RDDs provide in-place sharing through memory
Applications can compose models
• Run a SQL query and then PageRank the results
• ETL your data and then run graph/ML on it
Benefit from investment in shared functionality
• E.g. re-usable components (shell) and performance
optimizations
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
From MapReduce to Spark
Beyond Hadoop Users
Spark's early adopters were data engineers and data scientists who understood MapReduce and functional APIs. The broader user base now includes statisticians, R users, the PyData community, …
DataFrames in Spark
Distributed collection of data grouped into named
columns (i.e. RDD with schema)
DSL designed for common tasks
• Metadata
• Sampling
• Project, filter, aggregation, join, …
• UDFs
Available in Python, Scala, Java, and R (via SparkR)
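A short PySpark sketch of the DSL (the file, column names, and the `sqlCtx` handle are illustrative):

df = sqlCtx.jsonFile("people.json")     # DataFrame with inferred schema
df.printSchema()                        # metadata

(df.filter(df.age > 21)                 # filter
   .groupBy("dept")                     # aggregation
   .agg({"salary": "avg"})              # average salary per department
   .show())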
Not Just Less Code: Faster Implementations
[Bar chart: time to aggregate 10 million int pairs (secs), 0–10 s scale, comparing RDD Scala, RDD Python, DataFrame Scala, DataFrame Python, and DataFrame SQL. The DataFrame variants perform comparably regardless of language and outperform the RDD API, especially RDD Python.]
DataFrame Internals
Represented internally as a “logical plan”
Execution is lazy, allowing it to be optimized by a query
optimizer
Plan Optimization & Execution
DataFrames and SQL share the same optimization/execution pipeline, maximizing code reuse and shared optimization effort.
joined = users.join(events, users.id == events.uid)
filtered = joined.filter(events.date >= "2015-01-01")

Logical plan: scan(users) and scan(events) feed the join, with the filter on top. This join is expensive, because it runs over all of events.
Physical plan: the optimizer pushes the filter below the join, so events is filtered before being joined with users.
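You can watch the optimizer make this move yourself; a hedged sketch using DataFrame.explain (same hypothetical tables as above):

joined = users.join(events, users.id == events.uid)
filtered = joined.filter(events.date >= "2015-01-01")
# extended=True prints both the logical and physical plans; the date
# predicate appears below the join in the physical plan.
filtered.explain(True)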
Data Sources supported by DataFrames
Built-in: { JSON }, JDBC, and more …
External: additional connectors via the Data Sources API
More Than Naïve Scans
Data Sources API can automatically prune columns and
push filters to the source
• Parquet: skip irrelevant columns and blocks of data; turn
string comparison into integer comparisons for dictionary
encoded data
• JDBC: Rewrite queries to push predicates down
joined = users.join(events, users.id == events.uid)
filtered = joined.filter(events.date > "2015-01-01")

Logical plan: scan(users) and scan(events) feed the join, with the filter on top.
Optimized plan: the filter is pushed below the join, directly above scan(events).
Optimized plan with intelligent data sources: the filter is fused into the scan of events itself, so the source returns only matching rows.
Our Experience So Far
SQL is wildly popular and important
•  100% of Databricks customers use some SQL
Schema is very useful
•  Most data pipelines, even the ones that start with unstructured
data, end up having some implicit structure
•  Key-value too limited
•  That said, semi-/un-structured support is paramount
Separation of logical vs physical plan
•  Important for performance optimizations (e.g. join selection)
Machine Learning Pipelines
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.01)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

df = sqlCtx.load("/path/to/data")
model = pipeline.fit(df)

[Diagram: fit() runs the data through the stages: df0 → df1 (tokenizer) → df2 (hashingTF) → df3 (lr); the fitted lr.model becomes part of the resulting Pipeline Model.]
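The fitted PipelineModel is itself a transformer, so applying the whole pipeline to new data is one call; a short usage sketch (the test path is a placeholder):

test_df = sqlCtx.load("/path/to/test/data")
predictions = model.transform(test_df)        # tokenize -> hash -> predict
predictions.select("text", "prediction").show()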
R Interface (SparkR)
Spark 1.4 (June) exposes DataFrames and the ML library in R:

df = jsonFile("tweets.json")
summarize(
  group_by(
    df[df$user == "matei",],
    "date"),
  sum("retweets"))
Data Science at Scale
Higher level interfaces in Scala, Java, Python, R
Drastically easier to program Big Data
• With APIs similar to single-node tools
[Diagram: the Spark stack. Spark Core at the base; DataFrames and ML Pipelines above it; Spark Streaming, Spark SQL, MLlib, and GraphX on top; data sources such as {JSON} feeding in underneath.]

Goal: unified engine across data sources, workloads and environments
Agenda
1.  MapReduce Review
2.  Introduction to Spark and RDDs
3.  Generality of RDDs (e.g. streaming, ML)
4.  DataFrames
5.  Internals (time permitting)
Spark Application

sc = new SparkContext
f = sc.textFile("…")
f.filter(…)
 .count()
...

[Diagram: your program (JVM / Python) drives the Spark driver (app master), which holds the RDD graph, scheduler, block tracker, and shuffle tracker. Through a cluster manager it launches Spark executors (multiple of them), each running task threads and a block manager backed by HDFS, HBase, …]

A single application often contains multiple actions.
RDD is an interface
1.  Set of partitions (“splits” in Hadoop)
2.  List of dependencies on parent RDDs
3.  Function to compute a partition (as an Iterator) given its parent(s)
4.  (Optional) partitioner (hash, range)
5.  (Optional) preferred location(s) for each partition
Items 1–3 define the “lineage”; 4–5 enable optimized execution.
Example: HadoopRDD
partitions = one per HDFS block
dependencies = none
compute(part) = read corresponding block
preferredLocations(part) = HDFS block location
partitioner = none
Example: Filtered RDD
partitions = same as parent RDD
dependencies = “one-to-one” on parent
compute(part) = compute parent and filter it
preferredLocations(part) = none (ask parent)
partitioner = none
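To make the interface concrete, a self-contained toy model in plain Python. This is not Spark's actual implementation (which is in Scala); the class and method names are illustrative:

# Toy model of the RDD interface: partitions, dependencies, compute.
class ToyRDD:
    def partitions(self): raise NotImplementedError
    def dependencies(self): return []                    # parent RDDs
    def compute(self, part): raise NotImplementedError   # returns an iterator

class ParallelCollection(ToyRDD):
    """Plays the role of a base RDD (like HadoopRDD): no dependencies."""
    def __init__(self, data, n):
        self.chunks = [data[i::n] for i in range(n)]
    def partitions(self): return range(len(self.chunks))
    def compute(self, part): return iter(self.chunks[part])

class Filtered(ToyRDD):
    """Same partitions as parent, one-to-one dependency; compute = parent + filter."""
    def __init__(self, parent, func):
        self.parent, self.func = parent, func
    def partitions(self): return self.parent.partitions()
    def dependencies(self): return [self.parent]
    def compute(self, part):
        return (x for x in self.parent.compute(part) if self.func(x))

rdd = Filtered(ParallelCollection(list(range(10)), 2), lambda x: x % 2 == 0)
print([list(rdd.compute(p)) for p in rdd.partitions()])  # [[0, 2, 4, 6, 8], []]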
RDD Graph (DAG of tasks)
Dataset-level view:
  file:   HadoopRDD (path = hdfs://...)
  errors: FilteredRDD (func = _.contains(…), shouldCache = true)
Partition-level view: one task per partition (Task 1, Task 2, ...), each pipelining the filter over its block of the file.
Example: JoinedRDD
partitions = one per reduce task
dependencies = “shuffle” on each parent
compute(partition) = read and join shuffled data
preferredLocations(part) = none
partitioner = HashPartitioner(numTasks)
Because the partitioner is set, Spark will now know this data is hashed!
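A brief PySpark sketch of what the partitioner buys you (the data and partition count are illustrative):

pairs = sc.parallelize([(1, "a"), (2, "b"), (3, "c")])
hashed = pairs.partitionBy(8)   # hash-partition into 8 partitions
print(hashed.partitioner)       # now set: a join against a co-partitioned
                                # RDD can avoid shuffling this side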
Dependency Types
“Narrow” (pipeline-able): map, filter; union; join with inputs co-partitioned
“Wide” (shuffle): groupByKey on non-partitioned data; join with inputs not co-partitioned
Execution Process
rdd1.join(rdd2)
    .groupBy(…)
    .filter(…)

1. RDD Objects: build the operator DAG
2. DAG Scheduler: split the graph into stages of tasks; submit each stage as ready
3. Task Scheduler: launch tasks (as TaskSets) via the cluster manager; retry failed or straggling tasks
4. Worker: execute tasks in threads; store and serve blocks via the block manager
DAG Scheduler
Input: RDD and partitions to compute
Output: output from actions on those partitions
Roles:
• Build stages of tasks
• Submit them to lower level scheduler (e.g. YARN, Mesos,
Standalone) as ready
• Lower level scheduler will schedule data based on locality
• Resubmit failed stages if outputs are lost
Job Scheduler
Captures the RDD dependency graph
Pipelines functions into “stages”
Cache-aware for data reuse & locality
Partitioning-aware to avoid shuffles
[Diagram: a DAG of RDDs A–G combined by join, union, groupBy, and map, cut into Stage 1, Stage 2, and Stage 3 at shuffle boundaries; stages over cached partitions are skipped.]
[Diagram (recap): the Spark stack. Spark Core; DataFrames and ML Pipelines; Spark Streaming, Spark SQL, MLlib, GraphX; data sources such as {JSON}.]

Goal: unified engine across data sources, workloads and environments
Thank you. Questions?
@rxin

Weitere ähnliche Inhalte

Was ist angesagt?

Simplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache SparkSimplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache SparkDatabricks
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introductioncolorant
 
Introducing DataFrames in Spark for Large Scale Data Science
Introducing DataFrames in Spark for Large Scale Data ScienceIntroducing DataFrames in Spark for Large Scale Data Science
Introducing DataFrames in Spark for Large Scale Data ScienceDatabricks
 
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...HostedbyConfluent
 
Apache Spark overview
Apache Spark overviewApache Spark overview
Apache Spark overviewDataArt
 
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudAmazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudNoritaka Sekiyama
 
The Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesDatabricks
 
Apache Spark Fundamentals
Apache Spark FundamentalsApache Spark Fundamentals
Apache Spark FundamentalsZahra Eskandari
 
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark Summit
 
Spark introduction and architecture
Spark introduction and architectureSpark introduction and architecture
Spark introduction and architectureSohil Jain
 
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep diveApache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep diveSachin Aggarwal
 
Introduction to Spark (Intern Event Presentation)
Introduction to Spark (Intern Event Presentation)Introduction to Spark (Intern Event Presentation)
Introduction to Spark (Intern Event Presentation)Databricks
 
Hudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesHudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesNishith Agarwal
 
Spark DataFrames and ML Pipelines
Spark DataFrames and ML PipelinesSpark DataFrames and ML Pipelines
Spark DataFrames and ML PipelinesDatabricks
 
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...Simplilearn
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guideRyan Blue
 

Was ist angesagt? (20)

Simplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache SparkSimplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache Spark
 
Apache Spark Architecture
Apache Spark ArchitectureApache Spark Architecture
Apache Spark Architecture
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
 
Introducing DataFrames in Spark for Large Scale Data Science
Introducing DataFrames in Spark for Large Scale Data ScienceIntroducing DataFrames in Spark for Large Scale Data Science
Introducing DataFrames in Spark for Large Scale Data Science
 
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac...
 
Apache Spark overview
Apache Spark overviewApache Spark overview
Apache Spark overview
 
Apache Spark Core
Apache Spark CoreApache Spark Core
Apache Spark Core
 
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudAmazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
 
The Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization Opportunities
 
Apache Spark Fundamentals
Apache Spark FundamentalsApache Spark Fundamentals
Apache Spark Fundamentals
 
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
 
Presto
PrestoPresto
Presto
 
Spark introduction and architecture
Spark introduction and architectureSpark introduction and architecture
Spark introduction and architecture
 
Spark architecture
Spark architectureSpark architecture
Spark architecture
 
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep diveApache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
 
Introduction to Spark (Intern Event Presentation)
Introduction to Spark (Intern Event Presentation)Introduction to Spark (Intern Event Presentation)
Introduction to Spark (Intern Event Presentation)
 
Hudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilitiesHudi architecture, fundamentals and capabilities
Hudi architecture, fundamentals and capabilities
 
Spark DataFrames and ML Pipelines
Spark DataFrames and ML PipelinesSpark DataFrames and ML Pipelines
Spark DataFrames and ML Pipelines
 
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...
Apache Spark Architecture | Apache Spark Architecture Explained | Apache Spar...
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
 

Andere mochten auch

The Future of Real-Time in Spark
The Future of Real-Time in SparkThe Future of Real-Time in Spark
The Future of Real-Time in SparkReynold Xin
 
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...Reynold Xin
 
Hadoop MapReduce Fundamentals
Hadoop MapReduce FundamentalsHadoop MapReduce Fundamentals
Hadoop MapReduce FundamentalsLynn Langit
 
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009Hadoop Lecture for Harvard's CS 264 -- October 19, 2009
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009Cloudera, Inc.
 
Distributed Computing Seminar - Lecture 2: MapReduce Theory and Implementation
Distributed Computing Seminar - Lecture 2: MapReduce Theory and ImplementationDistributed Computing Seminar - Lecture 2: MapReduce Theory and Implementation
Distributed Computing Seminar - Lecture 2: MapReduce Theory and Implementationtugrulh
 
ConnectingUp Keynote: Leading Change
ConnectingUp Keynote: Leading ChangeConnectingUp Keynote: Leading Change
ConnectingUp Keynote: Leading ChangeHolly Ross
 
The New Data Imperative
The New Data ImperativeThe New Data Imperative
The New Data ImperativeHolly Ross
 
The fourth paradigm in safety
The fourth paradigm in safetyThe fourth paradigm in safety
The fourth paradigm in safetyJohan Roels
 
Real-Time-Analytics mit Spark und Cassandra
Real-Time-Analytics mit Spark und CassandraReal-Time-Analytics mit Spark und Cassandra
Real-Time-Analytics mit Spark und CassandraThomas Mann
 
BibBase Linked Data Triplification Challenge 2010 Presentation
BibBase Linked Data Triplification Challenge 2010 PresentationBibBase Linked Data Triplification Challenge 2010 Presentation
BibBase Linked Data Triplification Challenge 2010 PresentationReynold Xin
 
ERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsChristian Charreyre
 
ERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsChristian Charreyre
 
Comment travailler avec les logiciels Open Source
Comment travailler avec les logiciels Open SourceComment travailler avec les logiciels Open Source
Comment travailler avec les logiciels Open SourceChristian Charreyre
 
Créer une distribution Linux embarqué professionnelle avec Yocto Project
Créer une distribution Linux embarqué professionnelle avec Yocto ProjectCréer une distribution Linux embarqué professionnelle avec Yocto Project
Créer une distribution Linux embarqué professionnelle avec Yocto ProjectChristian Charreyre
 
Yocto une solution robuste pour construire des applications à fort contenu ap...
Yocto une solution robuste pour construire des applications à fort contenu ap...Yocto une solution robuste pour construire des applications à fort contenu ap...
Yocto une solution robuste pour construire des applications à fort contenu ap...Christian Charreyre
 
Open Embedded un framework libre pour assurer la cohérence de son projet
Open Embedded un framework libre pour assurer la cohérence de son projetOpen Embedded un framework libre pour assurer la cohérence de son projet
Open Embedded un framework libre pour assurer la cohérence de son projetChristian Charreyre
 

Andere mochten auch (20)

The Future of Real-Time in Spark
The Future of Real-Time in SparkThe Future of Real-Time in Spark
The Future of Real-Time in Spark
 
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around C...
 
Hadoop MapReduce Fundamentals
Hadoop MapReduce FundamentalsHadoop MapReduce Fundamentals
Hadoop MapReduce Fundamentals
 
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009Hadoop Lecture for Harvard's CS 264 -- October 19, 2009
Hadoop Lecture for Harvard's CS 264 -- October 19, 2009
 
Distributed Computing Seminar - Lecture 2: MapReduce Theory and Implementation
Distributed Computing Seminar - Lecture 2: MapReduce Theory and ImplementationDistributed Computing Seminar - Lecture 2: MapReduce Theory and Implementation
Distributed Computing Seminar - Lecture 2: MapReduce Theory and Implementation
 
ConnectingUp Keynote: Leading Change
ConnectingUp Keynote: Leading ChangeConnectingUp Keynote: Leading Change
ConnectingUp Keynote: Leading Change
 
The New Data Imperative
The New Data ImperativeThe New Data Imperative
The New Data Imperative
 
Data Science and the Fourth Paradigm by Torben Bach Pedersen
Data Science and the Fourth Paradigm by Torben Bach PedersenData Science and the Fourth Paradigm by Torben Bach Pedersen
Data Science and the Fourth Paradigm by Torben Bach Pedersen
 
The Fourth Paradigm Book
The Fourth Paradigm BookThe Fourth Paradigm Book
The Fourth Paradigm Book
 
The fourth paradigm in safety
The fourth paradigm in safetyThe fourth paradigm in safety
The fourth paradigm in safety
 
Real-Time-Analytics mit Spark und Cassandra
Real-Time-Analytics mit Spark und CassandraReal-Time-Analytics mit Spark und Cassandra
Real-Time-Analytics mit Spark und Cassandra
 
Big Data mit Apache Hadoop
Big Data mit Apache HadoopBig Data mit Apache Hadoop
Big Data mit Apache Hadoop
 
BibBase Linked Data Triplification Challenge 2010 Presentation
BibBase Linked Data Triplification Challenge 2010 PresentationBibBase Linked Data Triplification Challenge 2010 Presentation
BibBase Linked Data Triplification Challenge 2010 Presentation
 
Meetup Systemd vs sysvinit
Meetup Systemd vs sysvinitMeetup Systemd vs sysvinit
Meetup Systemd vs sysvinit
 
ERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projects
 
ERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projectsERTS 2008 - Using Linux for industrial projects
ERTS 2008 - Using Linux for industrial projects
 
Comment travailler avec les logiciels Open Source
Comment travailler avec les logiciels Open SourceComment travailler avec les logiciels Open Source
Comment travailler avec les logiciels Open Source
 
Créer une distribution Linux embarqué professionnelle avec Yocto Project
Créer une distribution Linux embarqué professionnelle avec Yocto ProjectCréer une distribution Linux embarqué professionnelle avec Yocto Project
Créer une distribution Linux embarqué professionnelle avec Yocto Project
 
Yocto une solution robuste pour construire des applications à fort contenu ap...
Yocto une solution robuste pour construire des applications à fort contenu ap...Yocto une solution robuste pour construire des applications à fort contenu ap...
Yocto une solution robuste pour construire des applications à fort contenu ap...
 
Open Embedded un framework libre pour assurer la cohérence de son projet
Open Embedded un framework libre pour assurer la cohérence de son projetOpen Embedded un framework libre pour assurer la cohérence de son projet
Open Embedded un framework libre pour assurer la cohérence de son projet
 

Ähnlich wie Stanford CS347 Guest Lecture: Apache Spark

Unified Big Data Processing with Apache Spark
Unified Big Data Processing with Apache SparkUnified Big Data Processing with Apache Spark
Unified Big Data Processing with Apache SparkC4Media
 
Unified Big Data Processing with Apache Spark (QCON 2014)
Unified Big Data Processing with Apache Spark (QCON 2014)Unified Big Data Processing with Apache Spark (QCON 2014)
Unified Big Data Processing with Apache Spark (QCON 2014)Databricks
 
Brief Intro to Apache Spark @ Stanford ICME
Brief Intro to Apache Spark @ Stanford ICMEBrief Intro to Apache Spark @ Stanford ICME
Brief Intro to Apache Spark @ Stanford ICMEPaco Nathan
 
Apache Spark: What? Why? When?
Apache Spark: What? Why? When?Apache Spark: What? Why? When?
Apache Spark: What? Why? When?Massimo Schenone
 
Spark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student SlidesSpark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student SlidesDatabricks
 
How Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapeHow Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapePaco Nathan
 
What's new with Apache Spark?
What's new with Apache Spark?What's new with Apache Spark?
What's new with Apache Spark?Paco Nathan
 
How Apache Spark fits in the Big Data landscape
How Apache Spark fits in the Big Data landscapeHow Apache Spark fits in the Big Data landscape
How Apache Spark fits in the Big Data landscapePaco Nathan
 
11. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:211. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:2Fabio Fumarola
 
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"IT Event
 
#MesosCon 2014: Spark on Mesos
#MesosCon 2014: Spark on Mesos#MesosCon 2014: Spark on Mesos
#MesosCon 2014: Spark on MesosPaco Nathan
 
Intro to apache spark stand ford
Intro to apache spark stand fordIntro to apache spark stand ford
Intro to apache spark stand fordThu Hiền
 
A look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsA look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsDatabricks
 
Spark & Cassandra at DataStax Meetup on Jan 29, 2015
Spark & Cassandra at DataStax Meetup on Jan 29, 2015 Spark & Cassandra at DataStax Meetup on Jan 29, 2015
Spark & Cassandra at DataStax Meetup on Jan 29, 2015 Sameer Farooqui
 
How Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapeHow Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapePaco Nathan
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache SparkTuning and Debugging in Apache Spark
Tuning and Debugging in Apache SparkDatabricks
 
Paris Data Geek - Spark Streaming
Paris Data Geek - Spark Streaming Paris Data Geek - Spark Streaming
Paris Data Geek - Spark Streaming Djamel Zouaoui
 
Apache spark-melbourne-april-2015-meetup
Apache spark-melbourne-april-2015-meetupApache spark-melbourne-april-2015-meetup
Apache spark-melbourne-april-2015-meetupNed Shawa
 
Big data distributed processing: Spark introduction
Big data distributed processing: Spark introductionBig data distributed processing: Spark introduction
Big data distributed processing: Spark introductionHektor Jacynycz García
 

Ähnlich wie Stanford CS347 Guest Lecture: Apache Spark (20)

Unified Big Data Processing with Apache Spark
Unified Big Data Processing with Apache SparkUnified Big Data Processing with Apache Spark
Unified Big Data Processing with Apache Spark
 
Unified Big Data Processing with Apache Spark (QCON 2014)
Unified Big Data Processing with Apache Spark (QCON 2014)Unified Big Data Processing with Apache Spark (QCON 2014)
Unified Big Data Processing with Apache Spark (QCON 2014)
 
Brief Intro to Apache Spark @ Stanford ICME
Brief Intro to Apache Spark @ Stanford ICMEBrief Intro to Apache Spark @ Stanford ICME
Brief Intro to Apache Spark @ Stanford ICME
 
Apache Spark: What? Why? When?
Apache Spark: What? Why? When?Apache Spark: What? Why? When?
Apache Spark: What? Why? When?
 
Spark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student SlidesSpark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student Slides
 
How Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapeHow Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscape
 
What's new with Apache Spark?
What's new with Apache Spark?What's new with Apache Spark?
What's new with Apache Spark?
 
How Apache Spark fits in the Big Data landscape
How Apache Spark fits in the Big Data landscapeHow Apache Spark fits in the Big Data landscape
How Apache Spark fits in the Big Data landscape
 
11. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:211. From Hadoop to Spark 1:2
11. From Hadoop to Spark 1:2
 
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
 
#MesosCon 2014: Spark on Mesos
#MesosCon 2014: Spark on Mesos#MesosCon 2014: Spark on Mesos
#MesosCon 2014: Spark on Mesos
 
Spark 101
Spark 101Spark 101
Spark 101
 
Intro to apache spark stand ford
Intro to apache spark stand fordIntro to apache spark stand ford
Intro to apache spark stand ford
 
A look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutionsA look under the hood at Apache Spark's API and engine evolutions
A look under the hood at Apache Spark's API and engine evolutions
 
Spark & Cassandra at DataStax Meetup on Jan 29, 2015
Spark & Cassandra at DataStax Meetup on Jan 29, 2015 Spark & Cassandra at DataStax Meetup on Jan 29, 2015
Spark & Cassandra at DataStax Meetup on Jan 29, 2015
 
How Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscapeHow Apache Spark fits into the Big Data landscape
How Apache Spark fits into the Big Data landscape
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache SparkTuning and Debugging in Apache Spark
Tuning and Debugging in Apache Spark
 
Paris Data Geek - Spark Streaming
Paris Data Geek - Spark Streaming Paris Data Geek - Spark Streaming
Paris Data Geek - Spark Streaming
 
Apache spark-melbourne-april-2015-meetup
Apache spark-melbourne-april-2015-meetupApache spark-melbourne-april-2015-meetup
Apache spark-melbourne-april-2015-meetup
 
Big data distributed processing: Spark introduction
Big data distributed processing: Spark introductionBig data distributed processing: Spark introduction
Big data distributed processing: Spark introduction
 

Kürzlich hochgeladen

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 

Kürzlich hochgeladen (20)

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 

Stanford CS347 Guest Lecture: Apache Spark

  • 21. RDD: Core Abstraction Resilient Distributed Datasets • Collections of objects spread across a cluster, stored in RAM or on Disk • Built through parallel transformations • Automatically rebuilt on failure Operations •  Transformations (e.g. map, filter, groupBy) •  Actions (e.g. count, collect, save) Write programs in terms of distributed datasets and operations on them
  • 22. RDD Resilient Distributed Datasets are the primary abstraction in Spark – a fault-tolerant collection of elements that can be operated on in parallel. Two types: • parallelized collections – take an existing single-node collection and parallelize it • Hadoop datasets: files on HDFS or other compatible storage 22
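Both kinds are created through the SparkContext; a minimal PySpark sketch (the app name and file path are hypothetical, and in the interactive shell sc already exists):

from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-types-demo")   # hypothetical standalone app

nums = sc.parallelize([1, 2, 3, 4, 5])            # parallelized collection: distributes a single-node list
lines = sc.textFile("hdfs://namenode/data.txt")   # Hadoop dataset: one partition per HDFS block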
  • 23. Operations on RDDs Transformations f(RDD) => RDD § Lazy (not computed immediately) § E.g. “map” Actions: § Triggers computation § E.g. “count”, “saveAsTextFile” 23
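To make the lazy/eager split concrete (continuing the sketch above, which only assumes the nums RDD):

squares = nums.map(lambda x: x * x)            # transformation: returns a new RDD, nothing computed yet
evens = squares.filter(lambda x: x % 2 == 0)   # still lazy
evens.count()                                  # action: triggers the whole pipeline; returns 2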
  • 24.–26. Working With RDDs – build a chain of RDDs with transformations, then materialize it with an action:
textFile = sc.textFile("SomeFile.txt")
linesWithSpark = textFile.filter(lambda line: "Spark" in line)
linesWithSpark.count()   # 74
linesWithSpark.first()   # "# Apache Spark"
(diagram: textFile → RDD → RDD → RDD via transformations; an action then produces a value)
  • 27.–44. Example: Log Mining – load error messages from a log into memory, then interactively search for various patterns:

lines = spark.textFile("hdfs://...")                      # base RDD
errors = lines.filter(lambda s: s.startswith("ERROR"))    # transformed RDD
messages = errors.map(lambda s: s.split("\t")[2])
messages.cache()
messages.filter(lambda s: "mysql" in s).count()           # action: triggers computation
messages.filter(lambda s: "php" in s).count()             # second action: served from cache

Execution (built up across these slides): the driver ships tasks to the workers; each worker reads its HDFS block, processes and caches its partition of messages, and returns results to the driver. The second count never touches HDFS – it is processed entirely from the cached partitions. Cache your data → faster results. Full-text search of Wikipedia: 60 GB on 20 EC2 machines, 0.5 sec from memory vs. 20 s on disk.
  • 45. Language Support
Standalone programs: Python, Scala, & Java
Interactive shells: Python & Scala
Performance: Java & Scala are faster due to static typing …but Python is often fine

Python:
lines = sc.textFile(...)
lines.filter(lambda s: "ERROR" in s).count()

Scala:
val lines = sc.textFile(...)
lines.filter(x => x.contains("ERROR")).count()

Java:
JavaRDD<String> lines = sc.textFile(...);
lines.filter(new Function<String, Boolean>() {
  Boolean call(String s) { return s.contains("ERROR"); }
}).count();
  • 48. Fault Recovery RDDs track lineage information that can be used to efficiently reconstruct lost partitions. Ex:
messages = textFile(...).filter(_.startsWith("ERROR"))
                        .map(_.split('\t')(2))
Lineage: HDFS File → Filtered RDD (filter, func = _.contains(...)) → Mapped RDD (map, func = _.split(...))
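You can inspect this lineage from the API side; a minimal PySpark sketch (assumes an existing SparkContext sc, and the path is hypothetical):

errors = sc.textFile("hdfs://logs/app.log").filter(lambda s: s.startswith("ERROR"))
messages = errors.map(lambda s: s.split("\t")[2])
print(messages.toDebugString())   # prints the chain of parent RDDs Spark would replay to rebuild a lost partition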
  • 49. Fault Recovery Results (chart: iteration time in seconds over 10 iterations): 119, 57, 56, 58, 58, 81, 57, 59, 57, 59. The failure happens during iteration 6; lineage-based recovery adds only ~23 s to that iteration before times return to normal.
  • 50. Example: Logistic Regression Goal: find best line separating two sets of points (diagram: two classes of points marked + and –, a random initial line, and the target separator)
  • 51. Example: Logistic Regression
val data = spark.textFile(...).map(readPoint).cache()
var w = Vector.random(D)
for (i <- 1 to ITERATIONS) {
  val gradient = data.map(p =>
    (1 / (1 + exp(-p.y*(w dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient
}
println("Final w: " + w)
(w is automatically shipped to the cluster)
  • 52. LR/K-Means Performance – time per iteration on 10B points: Logistic Regression 110 s (Hadoop MR) vs. 0.96 s (Spark); K-Means Clustering 155 s (Hadoop MR) vs. 4.1 s (Spark).
  • 53. Agenda 1.  MapReduce Review 2.  Introduction to Spark and RDDs 3.  Generality of RDDs (e.g. streaming, ML) 4.  DataFrames 5.  Internals (time permitting) 53
  • 54. Generality of RDDs Spark core, with Spark Streaming (real-time), Spark SQL (SQL), GraphX (graph), MLlib (machine learning), … on top
  • 55. Generality of RDDs Spark core provides RDDs, transformations, and actions. On top of it: Spark Streaming (real-time) via DStreams – streams of RDDs; Spark SQL via SchemaRDDs; MLlib (machine learning) via RDD-based matrices; GraphX (graph) via RDD-based graphs.
  • 56. Many important apps must process large data streams at second-scale latencies • Site statistics, intrusion detection, online ML To build and scale these apps users want: • Integration: with offline analytical stack • Fault-tolerance: both for crashes and stragglers • Efficiency: low cost beyond base processing Spark Streaming: Motivation
  • 57. Discretized Stream Processing (diagram): at t = 1 and t = 2, each input stream is pulled in as an immutable dataset (stored reliably); a batch operation over it produces another immutable dataset (output or state), stored in memory as an RDD.
  • 58. Programming Interface
Simple functional API:
views = readStream("http:...", "1s")
ones = views.map(ev => (ev.url, 1))
counts = ones.runningReduce(_ + _)
Interoperates with RDDs:
// Join stream with static RDD
counts.join(historicCounts).map(...)
// Ad-hoc queries on stream state
counts.slice("21:00", "21:05").topK(10)
(diagram: at t = 1 and t = 2, views → map → ones → reduce → counts; each box is an RDD, subdivided into partitions)
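The snippet above uses the simplified notation from the original D-Streams paper; as a rough equivalent in the shipping API, a minimal PySpark sketch (host, port, and batch interval are hypothetical choices; assumes an existing sc):

from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, 1)                    # 1-second batches
views = ssc.socketTextStream("localhost", 9999)  # hypothetical stream of URLs, one per line
ones = views.map(lambda url: (url, 1))
counts = ones.reduceByKey(lambda a, b: a + b)    # per-batch counts; a running total would use updateStateByKey
counts.pprint()
ssc.start()
ssc.awaitTermination()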
  • 59. Inherited “for free” from Spark RDD data model and API Data partitioning and shuffles Task scheduling Monitoring/instrumentation Scheduling and resource allocation
  • 62.–64. Powerful Stack – Agile Development (bar chart of non-test, non-example source lines, axis up to 140,000): Hadoop MapReduce, Storm (Streaming), Impala (SQL), and Giraph (Graph) each weigh in at tens of thousands of lines, while the equivalent components built on Spark – SparkSQL, Streaming, and GraphX – are each a small fraction of that. Your App?
  • 65. Benefits for Users High performance data sharing • Data sharing is the bottleneck in many environments • RDDs provide in-place sharing through memory Applications can compose models • Run a SQL query and then PageRank the results • ETL your data and then run graph/ML on it Benefit from investment in shared functionality • E.g. re-usable components (shell) and performance optimizations
  • 66. Agenda 1.  MapReduce Review 2.  Introduction to Spark and RDDs 3.  Generality of RDDs (e.g. streaming, ML) 4.  DataFrames 5.  Internals (time permitting) 66
  • 67. From MapReduce to Spark 67
  • 68. Beyond Hadoop Users 68 (diagram): Spark early adopters – data engineers and data scientists who understand MapReduce & functional APIs – sit inside a much larger circle of users: statisticians, R users, the PyData community, …
  • 70. DataFrames in Spark Distributed collection of data grouped into named columns (i.e. RDD with schema) DSL designed for common tasks • Metadata • Sampling • Project, filter, aggregation, join, … • UDFs Available in Python, Scala, Java, and R (via SparkR) 70
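A minimal PySpark sketch of the DSL (Spark 1.4-era reader API; the path and column names are hypothetical):

df = sqlContext.read.json("hdfs://.../people.json")   # infer schema from JSON
young = df.filter(df.age < 21)                        # filter
young.groupBy("dept").count().show()                  # aggregation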
  • 71. Not Just Less Code: Faster Implementations (bar chart: time to aggregate 10 million int pairs, secs, 0–10 scale) comparing RDD Scala, RDD Python, DataFrame Scala, DataFrame Python, and DataFrame SQL – the DataFrame variants perform comparably regardless of language, since all compile to the same optimized plan.
  • 72. DataFrame Internals Represented internally as a “logical plan” Execution is lazy, allowing it to be optimized by a query optimizer 72
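For example (a small sketch, assuming the hypothetical df DataFrame above), the plan can be inspected before any job runs:

agg = df.filter(df.age < 21).groupBy("dept").count()
agg.explain(True)   # prints the logical and physical plans; nothing has executed yet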
  • 73. Plan Optimization & Execution 73 DataFrames and SQL share the same optimization/execution pipeline Maximize code reuse & share optimization efforts
  • 74. joined = users.join(events, users.id == events.uid) filtered = joined.filter(events.date >= "2015-01-01") This join is expensive → in the logical plan, the filter sits above the join of scan(users) and scan(events); in the physical plan, the filter is pushed below the join, onto scan(events).
  • 75. Data Sources supported by DataFrames 75 built-in external { JSON } JDBC and more …
  • 76. More Than Naïve Scans Data Sources API can automatically prune columns and push filters to the source • Parquet: skip irrelevant columns and blocks of data; turn string comparison into integer comparisons for dictionary encoded data • JDBC: Rewrite queries to push predicates down 76
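As a sketch of the effect (hypothetical Parquet path; the column pruning and filter pushdown happen automatically):

events = sqlContext.read.parquet("hdfs://.../events")            # columnar source
recent = events.select("uid", "date").filter(events.date >= "2015-01-01")
recent.count()   # only the uid/date columns are read, and the date filter is evaluated at the Parquet scan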
  • 77. joined = users.join(events, users.id == events.uid) filtered = joined.filter(events.date > "2015-01-01") Logical plan: filter above the join of scan(users) and scan(events). Optimized plan: filter pushed below the join, onto scan(events). Optimized plan with intelligent data sources: the filter is evaluated inside the events scan itself.
  • 78. Our Experience So Far SQL is wildly popular and important •  100% of Databricks customers use some SQL Schema is very useful •  Most data pipelines, even the ones that start with unstructured data, end up having some implicit structure •  Key-value too limited •  That said, semi-/un-structured support is paramount Separation of logical vs physical plan •  Important for performance optimizations (e.g. join selection)
  • 79. Machine Learning Pipelines
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.01)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

df = sqlCtx.load("/path/to/data")
model = pipeline.fit(df)
(diagram: df0 → tokenizer → df1 → hashingTF → df2 → lr → df3, yielding a fitted Pipeline Model)
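Once fitted, the model applies the whole pipeline to new data; a minimal sketch (test_df is a hypothetical DataFrame with a text column):

predictions = model.transform(test_df)             # runs tokenizer → hashingTF → lr.model over test_df
predictions.select("text", "prediction").show()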
  • 80. R Interface (SparkR) Spark 1.4 (June) Exposes DataFrames and the ML library in R
df = jsonFile("tweets.json")
summarize(
  group_by(
    df[df$user == "matei",],
    "date"),
  sum("retweets"))
  • 81. Data Science at Scale Higher level interfaces in Scala, Java, Python, R Drastically easier to program Big Data • With APIs similar to single-node tools 81
  • 82. (stack diagram: data sources including {JSON}; Spark Core; DataFrames; ML Pipelines; Spark Streaming; Spark SQL; MLlib; GraphX)
  • 83. 83 Goal: unified engine across data sources, workloads and environments
  • 84. Agenda 1.  MapReduce Review 2.  Introduction to Spark and RDDs 3.  Generality of RDDs (e.g. streaming, ML) 4.  DataFrames 5.  Internals (time permitting) 84
  • 85. Spark Application
sc = new SparkContext
f = sc.textFile("…")
f.filter(…).count()
...
(diagram): Your program (JVM / Python) talks to the Spark driver (app master), which holds the RDD graph, scheduler, block tracker, and shuffle tracker; via the cluster manager it launches Spark executors (multiple of them), each with a block manager and task threads, reading from HDFS, HBase, … A single application often contains multiple actions.
  • 86. RDD is an interface 1.  Set of partitions ("splits" in Hadoop) 2.  List of dependencies on parent RDDs 3.  Function to compute a partition (as an Iterator) given its parent(s) 4.  (Optional) partitioner (hash, range) 5.  (Optional) preferred location(s) for each partition – items 1–3 capture the "lineage"; 4–5 enable optimized execution
  • 87. Example: HadoopRDD partitions = one per HDFS block dependencies = none compute(part) = read corresponding block preferredLocations(part) = HDFS block location partitioner = none
  • 88. Example: Filtered RDD partitions = same as parent RDD dependencies = “one-to-one” on parent compute(part) = compute parent and filter it preferredLocations(part) = none (ask parent) partitioner = none
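To make the five-part interface concrete, here is a conceptual sketch in plain Python (an illustration of the idea only – Spark's actual internals are written in Scala):

class FilteredRDD:
    # Conceptual model of the FilteredRDD slide: same partitions as the
    # parent, a one-to-one ("narrow") dependency, and compute = lazily
    # filter the parent's records.
    def __init__(self, parent, predicate):
        self.parent = parent
        self.predicate = predicate

    def partitions(self):
        return self.parent.partitions()                # same as parent RDD

    def dependencies(self):
        return [("one-to-one", self.parent)]           # narrow dependency

    def compute(self, partition):
        # Compute the parent partition, then filter it (as an iterator)
        return (rec for rec in self.parent.compute(partition)
                if self.predicate(rec))

    def preferred_locations(self, partition):
        return self.parent.preferred_locations(partition)  # none of its own: ask parent

    def partitioner(self):
        return None                                    # no partitioner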
  • 89. RDD Graph (DAG of tasks) Dataset-level view: file: HadoopRDD (path = hdfs://...) → errors: FilteredRDD (func = _.contains(…), shouldCache = true). Partition-level view: Task1, Task2, ... (one task per partition).
  • 90. Example: JoinedRDD partitions = one per reduce task dependencies = “shuffle” on each parent compute(partition) = read and join shuffled data preferredLocations(part) = none partitioner = HashPartitioner(numTasks) Spark will now know this data is hashed!
  • 91. Dependency Types "Narrow" (pipeline-able): map, filter, union, join with inputs co-partitioned. "Wide" (shuffle): groupByKey on non-partitioned data, join with inputs not co-partitioned.
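A quick way to see the difference (a PySpark sketch assuming an existing sc; the shuffle shows up as a stage boundary in the debug output):

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
doubled = pairs.map(lambda kv: (kv[0], kv[1] * 2))   # narrow: pipelined with its parent
grouped = doubled.groupByKey()                       # wide: forces a shuffle
print(grouped.toDebugString())                       # lineage shows the shuffle boundary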
  • 92. Execution Process rdd1.join(rdd2).groupBy(…).filter(…) → RDD Objects (build operator DAG) → DAG Scheduler (split graph into stages of tasks; submit each stage as ready) → Task Scheduler (launch TaskSets via the cluster manager; retry failed or straggling tasks) → Worker (threads execute tasks; the block manager stores and serves blocks).
  • 93. DAG Scheduler Input: RDD and partitions to compute Output: output from actions on those partitions Roles: • Build stages of tasks • Submit them to lower level scheduler (e.g. YARN, Mesos, Standalone) as ready • Lower level scheduler will schedule tasks based on data locality • Resubmit failed stages if outputs are lost
  • 94. Job Scheduler Captures RDD dependency graph Pipelines functions into "stages" Cache-aware for data reuse & locality Partitioning-aware to avoid shuffles (diagram: RDDs A–G combined by map, union, groupBy, and join, cut into Stages 1–3 at shuffle boundaries; cached partitions are marked and skipped)
  • 95. (stack diagram, repeated: data sources including {JSON}; Spark Core; DataFrames; ML Pipelines; Spark Streaming; Spark SQL; MLlib; GraphX)
  • 96. 96 Goal: unified engine across data sources, workloads and environments