This document provides an overview of streaming data platforms for cloud-native event-driven applications, focusing on Apache Pulsar, Apache NiFi, and Apache Flink. It includes an agenda for a presentation on these topics, background on the presenter Tim Spann, and details about each technology and how they can work together.
3. Agenda
1. Welcome
2. Introduction to Apache Pulsar
• Basics of Pulsar
• Use Cases
3. Apache NiFi <-> Apache Pulsar
4. Apache Flink <-> Apache Pulsar
5. Let’s Build an App!
• Demo
6. Resources
7. Q&A
4. Tim Spann
Developer Advocate at StreamNative
● FLiP(N) Stack = Flink, Pulsar and NiFi Stack
● Streaming Systems & Data Architecture Expert
● Experience:
○ 15+ years of experience with streaming technologies including Pulsar, Flink, Spark, NiFi, Big Data, Cloud, MXNet, IoT, Python and more.
○ Today, he helps to grow the Pulsar community, sharing rich technical knowledge and experience at both global conferences and through individual conversations.
5. FLiP Stack Weekly
This week in Apache Flink, Apache Pulsar, Apache
NiFi, Apache Spark and open source friends.
https://bit.ly/32dAJft
7. Apache Pulsar is a Cloud-Native
Messaging and Event-Streaming Platform.
Apache Pulsar Timeline
● 2012, CREATED: Originally developed inside Yahoo! as Cloud Messaging Service.
● 2016, OPEN SOURCE: Pulsar committed to open source.
● 2018, APACHE TLP: Pulsar becomes an Apache top-level project.
● TODAY, GROWTH: 10x contributors, 10MM+ downloads, and an expanding ecosystem (Kafka on Pulsar, AMQ on Pulsar, Functions, . . .).
10. Pulsar Has a Built-in Super Set of OSS Features
Open-Source Features:
● Durability
● Scalability
● Geo-Replication
● Multi-Tenancy
● Unified Messaging Model
● Reduced Vendor Dependency
● Functions
11. Schema Registry
Pulsar has an integrated schema registry. Producers and consumers each keep a local cache of schemas (e.g. schema-1, schema-2, schema-3, with values in Avro/Protobuf/JSON).
● Producers send (register) a schema if it is not in their local cache, then send data serialized per schema ID.
● Consumers get a schema by ID if it is not in their local cache, then read data deserialized per schema ID.
12. Pulsar across the organization
● Ideal for app and data tiers
● Less sprawl and better utilization
● Cloud-native scalability
● Build globally without the complexity
● Cost-effective long-term storage
13. Data Streaming
● Joining streams in SQL
● Perform in real-time
● Ordering and arrival
● Concurrent consumers
● Change data capture
17. Messages - the basic unit of Pulsar
● Value / data payload: The data carried by the message. All Pulsar messages contain raw bytes, although message data can also conform to data schemas.
● Key: Messages are optionally tagged with keys, which are used in partitioning and are also useful for things like topic compaction.
● Properties: An optional key/value map of user-defined properties.
● Producer name: The name of the producer who produces the message. If you do not specify a producer name, the default name is used. Used in message de-duplication.
● Sequence ID: Each Pulsar message belongs to an ordered sequence on its topic. The sequence ID of the message is its order in that sequence. Used in message de-duplication.
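To make the producer-name and sequence-ID rows concrete, here is a minimal Python sketch (not Pulsar's actual implementation) of how a broker can de-duplicate retried sends by tracking the last sequence ID seen per producer; the Message and DedupTopic types are simplified stand-ins invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    producer_name: str
    sequence_id: int
    value: bytes
    key: Optional[str] = None
    properties: dict = field(default_factory=dict)

class DedupTopic:
    """Keeps the highest sequence ID seen per producer and drops
    any message at or below it (e.g. a retried send)."""
    def __init__(self):
        self.messages = []
        self._last_seq = {}

    def publish(self, msg: Message) -> bool:
        last = self._last_seq.get(msg.producer_name, -1)
        if msg.sequence_id <= last:
            return False  # duplicate: already persisted
        self._last_seq[msg.producer_name] = msg.sequence_id
        self.messages.append(msg)
        return True

topic = DedupTopic()
assert topic.publish(Message("producer-1", 0, b"a"))
assert topic.publish(Message("producer-1", 1, b"b"))
assert not topic.publish(Message("producer-1", 1, b"b"))  # retry dropped
```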
20. Apache Pulsar has a vibrant community
● 560+ contributors
● 10,000+ commits
● 7,000+ Slack members
● 1,000+ organizations using Pulsar
24. Apache Pulsar features
● Cloud native with decoupled storage and compute layers.
● Built-in compatibility with your existing code and messaging infrastructure.
● Geographic redundancy and high availability included.
● Centralized cluster management and oversight.
● Elastic horizontal and vertical scalability.
● Seamless and instant partitioning rebalancing with no downtime.
● Flexible subscription model supports a wide array of use cases.
● Compatible with the tools you use to store, analyze, and process data.
25. Pulsar Cluster
● “Bookies” (Apache BookKeeper): store messages and cursors. Messages are grouped in segments/ledgers, and a group of bookies forms an “ensemble” to store a ledger.
● “Brokers”: handle message routing and connections. Stateless, but with caches; automatic load balancing. Topics are composed of multiple segments.
● Metadata storage: stores metadata for both Pulsar and BookKeeper, and provides service discovery.
30. Flexible Pub/Sub API for Pulsar - Shared

Consumer<byte[]> consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("work-q-1")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
31. Flexible Pub/Sub API for Pulsar - Failover

Consumer<byte[]> consumer = client.newConsumer()
    .topic("my-topic")
    .subscriptionName("stream-1")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe();
32. Reader Interface

byte[] msgIdBytes = // Some byte array
MessageId id = MessageId.fromByteArray(msgIdBytes);
Reader<byte[]> reader = pulsarClient.newReader()
    .topic(topic)
    .startMessageId(id)
    .create();

Create a reader that will read from some message between earliest and latest.
33. Built-in Back Pressure

Producer<String> producer = client.newProducer(Schema.STRING)
    .topic("hellotopic")
    .blockIfQueueFull(true)   // enable blocking when queue is full
    .maxPendingMessages(10)   // max queue size
    .create();

// During back pressure, with no room in the queue,
// the sendAsync call blocks.
producer.newMessage()
    .key("mykey")
    .value("myvalue")
    .sendAsync();   // can be a blocking call
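The two queue-full policies above can be sketched with a bounded in-memory queue standing in for the producer's pending-message buffer (a Python simulation of the semantics, not the Pulsar client itself):

```python
import queue

# Bounded queue standing in for maxPendingMessages(10).
pending = queue.Queue(maxsize=10)
for i in range(10):
    pending.put_nowait(f"msg-{i}")  # fill to capacity

# Without blockIfQueueFull(true): the send fails fast when full.
try:
    pending.put_nowait("overflow")
    overflowed = False
except queue.Full:
    overflowed = True

# With blockIfQueueFull(true): the send waits for room. Here a
# get() stands in for the broker acking and draining one message.
pending.get_nowait()
pending.put("overflow", timeout=1)  # now succeeds without blocking
```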
38. Pulsar Functions
A serverless event streaming framework.
● Lightweight computation similar to AWS Lambda.
● Specifically designed to use Apache Pulsar as a message bus.
● Function runtime can be located within the Pulsar broker.
39. Pulsar Functions
● Consume messages from one or more Pulsar topics.
● Apply user-supplied processing logic to each message.
● Publish the results of the computation to another topic.
● Support multiple programming languages (Java, Python, Go).
● Can leverage 3rd-party libraries to support the execution of ML models on the edge.
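The consume/apply/publish cycle above can be illustrated with a dependency-free Python sketch that mimics the shape of the Pulsar Functions contract (the class and topic lists here are stand-ins, not the SDK):

```python
class ExclamationFunction:
    """Stand-in for the Pulsar Functions contract: the runtime calls
    process() once per input message, and the return value is
    published to the function's output topic."""
    def process(self, input, context=None):
        return input + "!"

# What the runtime does conceptually: consume, apply, publish.
fn = ExclamationFunction()
input_topic = ["hello", "world"]
output_topic = [fn.process(msg) for msg in input_topic]
```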
40. Moving Data In and Out of Pulsar
IO connectors (sources and sinks) are a simple way to integrate with external systems and move data in and out of Pulsar. https://pulsar.apache.org/docs/en/io-jdbc-sink/
● Built on top of Pulsar Functions
● Built-in connectors - hub.streamnative.io
80. ● Java, Scala, Python Support
● Strong ETL/ELT
● Diverse ML support
● Scalable Distributed compute
● Apache Zeppelin and Jupyter Notebooks
● Fast connector for Apache Pulsar
Apache Spark
81. Apache Spark + Apache Pulsar

val dfPulsar = spark.readStream.format("pulsar")
  .option("service.url", "pulsar://pulsar1:6650")
  .option("admin.url", "http://pulsar1:8080")
  .option("topic", "persistent://public/default/airquality")
  .load()

val pQuery = dfPulsar.selectExpr("*")
  .writeStream.format("console")
  .option("truncate", false)
  .start()

(Spark version 3.2.0, using Scala version 2.12.15, OpenJDK 64-Bit Server VM, Java 11.0.11)
83. Fanout, Queueing, or Streaming
In Pulsar, you have the flexibility to use different subscription modes.
● To achieve fanout messaging among consumers: specify a unique subscription name for each consumer, using the exclusive (or failover) subscription mode.
● To achieve message queuing among consumers: share the same subscription name among multiple consumers, using the shared subscription mode.
● To allow for ordered consumption (streaming): distribute work among any number of consumers, using the exclusive, failover, or key_shared subscription mode.
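The fanout and queueing behaviors can be simulated in a few lines of Python (subscription and consumer names here are made up, and the round-robin dispatch is a simplification of the broker's actual logic):

```python
import itertools

messages = [f"m{i}" for i in range(6)]

# Fanout: each consumer uses a unique (exclusive) subscription,
# so every subscription receives every message.
fanout = {"sub-a": list(messages), "sub-b": list(messages)}

# Queueing: consumers share one subscription; the broker spreads
# messages across them (round-robin here), so each message is
# processed exactly once.
shared = {"c1": [], "c2": [], "c3": []}
for msg, name in zip(messages, itertools.cycle(shared)):
    shared[name].append(msg)
```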
84. Unified Messaging Model
Simplify your data infrastructure and enable new use cases with queuing and streaming capabilities in one platform.
86. Starting a Function - Distributed Cluster
Once compiled into a JAR, start a Pulsar Function in a distributed cluster:
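The command itself did not survive the slide export; a typical invocation (with placeholder jar, class, and topic names) uses the `pulsar-admin functions create` CLI and requires a running cluster:

```shell
bin/pulsar-admin functions create \
  --jar my-function.jar \
  --classname com.example.MyFunction \
  --inputs persistent://public/default/in-topic \
  --output persistent://public/default/out-topic
```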
89. Why a Table Abstraction?
● In many use cases, applications want to consume data from a Pulsar topic as if it were a database table, where each new message is an “update” to the table.
● Up until now, applications used Pulsar consumers or readers to fetch all the updates from a topic and construct a map with the latest value of each key for received messages.
● The Table Abstraction provides a standard implementation of this message consumption pattern for any given keyed topic.
90. TableView
● New consumer type added in Pulsar 2.10 that provides a continuously updated key-value map view of compacted topic data.
● An abstraction of a changelog stream from a primary-keyed table, where each record in the changelog stream is an update on the primary-keyed table with the record key as the primary key.
● READ-ONLY DATA STRUCTURE!
92. How does it work?
● When you create a TableView, an additional compacted topic is created.
● In a compacted topic, only the most recent value associated with each key is retained.
● A background reader consumes from the compacted topic and updates the map when new messages arrive.
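What a TableView materializes can be sketched in Python as the latest value per key from a changelog, which is exactly what compaction retains (the keys and values are invented for illustration):

```python
changelog = [
    ("sensor-1", 41),
    ("sensor-2", 17),
    ("sensor-1", 42),  # later update supersedes the first record
]

# The background reader's loop: the latest value per key wins.
table_view = {}
for key, value in changelog:
    table_view[key] = value
```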
93. Event-Based Microservices
• When your microservices use a message bus to communicate, it is referred to as an “event-driven” architecture.
• In an event-driven architecture, when a service performs some piece of work that other services might be interested in, that service produces an event. Other services consume those events so that they can perform their own tasks.
• Messages are stored in intermediate topics to prevent message loss. If a service fails, the message remains in the topic.
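The pattern above can be sketched as a toy Python simulation: a list stands in for a Pulsar topic, and the service and event names are made up for illustration. The point is that the event persists in the topic even if the consumer is down when it is produced:

```python
# Stand-in for a Pulsar topic: events persist until consumed.
order_events = []

def place_order(item):
    # The order service produces an event other services care about.
    order_events.append({"type": "OrderPlaced", "item": item})

def inventory_consume(topic):
    # The inventory service consumes events, including ones produced
    # while it was down, because the topic retained them.
    processed = []
    while topic:
        processed.append(topic.pop(0)["item"])
    return processed

place_order("pizza")                       # inventory service is down here...
handled = inventory_consume(order_events)  # ...but nothing is lost
```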
94. Event-Based Microservices Application
• A basic order entry use case
for a food delivery service
will involve several different
microservices.
• They communicate via
messages sent to Pulsar
topics.
• A service can also subscribe
to the events published by
another service.
96. Founded by the original creators of Apache Pulsar.
StreamNative has more experience designing,
deploying, and running large-scale Apache Pulsar
instances than any team in the world.
StreamNative employs more than 50% of the
active core committers to Apache Pulsar.
97. Pulsar and other Ecosystems
● Pulsar will feed data into many different systems (real-time processors, long-term storage, many subscription groups), so data format is crucial.
● Pulsar feeds real-time engines, and the data format makes this integration easier.
● Data is offloaded from Pulsar to S3/HDFS and benefits from a correct data format.
● Any improvement in size reduction is multiplied by the number of consumers.
98. Apache Avro
Avro is a data serialization system used across the Big Data ecosystem.
Avro provides:
● Rich data structures
● A compact, fast, binary data format
● A container file, to store persistent data
● Remote procedure call (RPC)
● Simple integration with dynamic languages
For more information: https://avro.apache.org/docs/current/
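As a concrete (hypothetical) example, an Avro record schema for the airquality topic queried later in this deck might look like this; the field list is an assumption based on the SQL slide, not the actual schema:

```json
{
  "type": "record",
  "name": "AirQuality",
  "fields": [
    {"name": "aqi", "type": "int"},
    {"name": "parameterName", "type": "string"},
    {"name": "reportingArea", "type": "string"},
    {"name": "dateObserved", "type": "string"}
  ]
}
```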
99. Messaging versus Streaming

Messaging Use Cases:
● Service x commands service y to make some change. Example: order service removing an item from inventory service.
● Distributing messages that represent work among n workers. Example: order processing not in the main “thread”.
● Sending “scheduled” messages. Example: notification service for marketing emails or push notifications.

Streaming Use Cases:
● Moving large amounts of data to another service (real-time ETL). Example: logs to Elasticsearch.
● Periodic jobs moving large amounts of data and aggregating to more traditional stores. Example: logs to S3.
● Computing a near real-time aggregate of a message stream, split among n workers, with order being important. Example: real-time analytics over page views.
100. Differences in consumption
● Retention: In messaging use cases, the amount of data retained is relatively small, typically only a day or two of data at most. In streaming use cases, large amounts of data are retained, with higher ingest volumes and longer retention periods.
● Throughput: Messaging systems are not designed to manage big “catch-up” reads. Streaming systems are designed to scale and can handle use cases such as catch-up reads.
105. Trains, Planes and Automobiles +++
● Local weather conditions: XML, JSON, RSS
● Mass transit status & alerts: XML, JSON, RSS
● Regional highways & tunnels: GeoRSS, XML, ProtoBuf, JSON
● Local social media: JSON
● ADS-B Plane Data: JSON
● Local air quality: JSON
131. Why Pulsar Functions for Microservices?
● Highly maintainable and testable: Small pieces of code in Java, Python, or Go; easily maintained in source control repositories and tested with existing frameworks automatically.
● Loosely coupled with other services: Not directly linked to one another; communicate via messages.
● Independently deployable: Designed to be deployed independently.
● Can be developed by a small team: Often developed by a single developer.
● Inter-service communication: Support all message patterns using Pulsar as the underlying message bus.
● Deployment & composition: Can run as individual threads, processes, or K8s pods; Function Mesh allows you to deploy multiple Pulsar Functions as a single unit.
139. SQL

select aqi, parameterName, dateObserved, hourObserved, latitude, longitude, localTimeZone, stateCode, reportingArea from airquality;

select max(aqi) as MaxAQI, parameterName, reportingArea from airquality group by parameterName, reportingArea;

select max(aqi) as MaxAQI, min(aqi) as MinAQI, avg(aqi) as AvgAQI, count(aqi) as RowCount, parameterName, reportingArea from airquality group by parameterName, reportingArea;