Sonos is a smart system of hi-fi wireless speakers and audio components. It unites your digital music collection in one app that you control from any device. Sonos uses the Amazon Kinesis stream-processing platform to run near-real-time streaming analytics on device data logs from connected Sonos hi-fi audio equipment, analyzing usage, performance, and quality logs and other data feeds collected from Sonos-connected devices to better understand the customer experience. In this session, Sonos focuses on the design and architecture considerations that drove its selection of AWS services for the platform, diving deep on Amazon Kinesis and Amazon DynamoDB. The speakers discuss architecture tradeoffs, such as Amazon Kinesis vs. Kafka, and how Sonos uses its device data to gain insights that differentiate it in the music industry.
2. What to Expect from the Session
• What is Sonos?
• Sonos Data Pipeline V1
• Sonos Data Pipeline V2
• Transition planning and execution
• Takeaways and future ideas
3. What is Sonos?
Sonos is the smart speaker system that streams all your favorite music to any room, or every room.
4. What is Sonos?
Control your music with one simple app, and fill your home with pure, immersive sound.
6. Where does all this wonderful data come from?
Firmware device logs
Application telemetry
Music service usage metrics
Cloud application logs
Performance indicators
7. Where does all this wonderful data come from?
Manufacturing tests and yields
Diagnostics
Customer support
Sales and marketing data
8. A note on privacy
We strive to provide the best experience possible for our customers through the analysis of usage data; however, we also respect our customers' right to privacy.
We only collect usage data from the households that OPT-IN to provide the data.
10. Sonos Data Pipeline V1
Design goals
• Provide visibility into music service usage
• Secure, robust pipeline to minimize data loss
• Downstream processing should not affect data ingestion
11. Sonos Data Pipeline V1
Collect → Store → Process → Consume
Data Collector → Initial SQS queue → SQS queues → Visualization
13. Sonos Data Pipeline V1 Results
• Insight into the health of the music services on Sonos
14. Sonos Data Pipeline V1
Challenges:
• Increased visibility of data throughout the company
• New data types required additional development
• Unable to reprocess the data after initial ingestion
• Costs became an obstacle to gathering more data
16. Sonos Data Pipeline V2
Design goals
• Move from aggregate reporting to event-based reporting
• Accept any type of data (text, binary, JSON, XML)
• Secure storage of raw data
• Simplify the pipeline and reduce costs
17. Sonos Data Pipeline V2
Bottom line:
We needed to be able to handle orders of magnitude more throughput by the end of 2015, with guaranteed delivery and storage and near-linear scalability, under a sustainable cost model.
18. Sonos Data Pipeline V2
Collect → Store → Process → Consume
Data Collector → Initial SQS queue → SQS queues → Visualization
20. Sonos Data Pipeline V2
Collect → Store → Process → Consume
Collection service → Storage service → Processing engines → Visualization
21. Sonos Data Pipeline V2
Collect Store Process Consume
• Decouple collection from storage and processing
• Optimize for raw throughput and scale
• Amazon Kinesis vs. Kafka
• Amazon Kinesis Producer Library vs. AmazonKinesisAsyncClient
• Netty 4
22. Sonos Data Pipeline V2: Amazon Kinesis vs. Kafka
Amazon Kinesis:
• Max 1 MB message size
• Streams/partition keys
• 24-hour retention
• REST API/KPL
• Replication across 3 AZs
• AWS managed service
Kafka:
• Message size configurable (default 1 MB)
• Topics/partition keys
• Retention configurable based on storage
• REST/low-level API
• Replication configurable: sync/ACK within an AZ, async across regions
• Self-hosted and managed
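Both systems route records by partition key. In Kinesis, the service takes the MD5 of the partition key as an unsigned 128-bit integer and delivers the record to the shard whose hash-key range contains it. A minimal sketch of that routing, assuming shards evenly divide the hash space (the class name and shard counts are illustrative, not Sonos's configuration):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ShardRouter {
    // The Kinesis hash-key space is [0, 2^128).
    private static final BigInteger HASH_SPACE = BigInteger.ONE.shiftLeft(128);

    // Map a partition key to a shard index, assuming numShards shards
    // that evenly split the 128-bit MD5 hash space.
    public static int shardFor(String partitionKey, int numShards) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            BigInteger hash = new BigInteger(1, digest); // unsigned 128-bit MD5
            // Shard i owns [i * 2^128/numShards, (i+1) * 2^128/numShards).
            return hash.multiply(BigInteger.valueOf(numShards))
                       .divide(HASH_SPACE)
                       .intValue();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }
}
```

The same key always lands on the same shard, which is why a pipeline like this picks partition keys (for example, a device or household identifier) that preserve per-source ordering while spreading load across shards.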
24. Sonos Data Pipeline V2
Collect Store Process Consume
• Decouple storage from collection and processing
• Increase security of raw data
• Amazon S3 vs. Cassandra, HDFS, etc.
• Amazon Kinesis Client Library vs. Amazon Kinesis SDK
25. Implementing a ‘data lake’
• Disparate operational systems forward data in their own format
• Formats/schemas can change at any time
• Stores any data type in raw format
• Typically very large stores with a schemaless structure
• It is up to the “consumer” to know what they’re looking for
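Since it is up to the consumer to know what it is looking for, the store's only real contract is a predictable key layout. A sketch of one common S3 data-lake key scheme, partitioned by data type and arrival date so consumers can list a single type/day prefix instead of scanning the bucket (the prefix names are hypothetical, not Sonos's actual layout):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class RawObjectKey {
    // UTC day partitions keep object listings bounded and time-ordered.
    private static final DateTimeFormatter DAY_PARTITION =
            DateTimeFormatter.ofPattern("yyyy/MM/dd").withZone(ZoneOffset.UTC);

    // Build an S3 key like "raw/firmware-logs/2015/10/08/<objectId>".
    public static String forRecord(String dataType, Instant arrival, String objectId) {
        return String.format("raw/%s/%s/%s",
                dataType, DAY_PARTITION.format(arrival), objectId);
    }
}
```

Because the raw payload is stored as-is, a schema change upstream never breaks ingestion; only the consumers that parse that data type need to adapt.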
26. Sonos Data Pipeline V2: Amazon Kinesis Client Library vs. AmazonKinesisAsyncClient (SDK)
Amazon Kinesis Client Library:
• Java API
• Lease/shard management
• Payload aggregation
AmazonKinesisAsyncClient (SDK):
• Java API
• Developer's choice
• Self-implemented
27. Sonos Data Pipeline V2
// ReponoRecordProcessor.java
public class ReponoRecordProcessor implements IRecordProcessor {
...
@Override
public void processRecords(List<Record> records, IRecordProcessorCheckpointer chkptr) {
...
for (Record record : records) {
// Buffer each record's payload until the flush threshold is reached
bufferRecord(record.getData(), record);
}
if (buffer.shouldFlush()) {
// Persist the buffered batch, then checkpoint via chkptr
emit(chkptr, buffer.getRecords());
}
}
...
}
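The `buffer.shouldFlush()` call above hides the batching policy. A minimal stand-in that flushes on record count or total payload size (the class name and thresholds are illustrative; the talk does not show Sonos's actual buffer):

```java
import java.util.ArrayList;
import java.util.List;

public class RecordBuffer {
    private final int maxRecords;
    private final long maxBytes;
    private final List<byte[]> payloads = new ArrayList<>();
    private long bufferedBytes = 0;

    public RecordBuffer(int maxRecords, long maxBytes) {
        this.maxRecords = maxRecords;
        this.maxBytes = maxBytes;
    }

    public void add(byte[] payload) {
        payloads.add(payload);
        bufferedBytes += payload.length;
    }

    // Flush when either threshold is hit, so batches land in storage
    // as fewer, larger objects instead of one object per record.
    public boolean shouldFlush() {
        return payloads.size() >= maxRecords || bufferedBytes >= maxBytes;
    }

    // Return the buffered payloads and reset for the next batch.
    public List<byte[]> drain() {
        List<byte[]> out = new ArrayList<>(payloads);
        payloads.clear();
        bufferedBytes = 0;
        return out;
    }
}
```

Checkpointing only after a successful flush (as the slide's `emit(chkptr, ...)` implies) is what gives the pipeline at-least-once delivery: if the processor dies mid-batch, the un-checkpointed records are redelivered.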
29. Sonos Data Pipeline V2
Collect Store Process Consume
• Decouple processing from collection and storage
• Allow for flexibility in processing tool chain
• Apache Spark
• Support any ‘consumer’
31. Sonos Data Pipeline V2
Collect Store Process Consume
• Decouple from collection and processing
• Allow for self-service
33. Sonos Data Pipeline V2
Results:
• Increased traceability and better consistency across the pipeline, driven by a single source of truth
• Self-service pipeline
• Linear scalability backed by Amazon EC2, Amazon Kinesis, and Amazon S3
• 20x cost reduction across the overall pipeline
40. Transition planning
Collect Store Process Consume
• Decouple collection from storage and processing
• Optimize for raw throughput and scale
• Amazon Kinesis vs. Kafka
• Amazon Kinesis Producer Library vs. AmazonKinesisAsyncClient
• Netty 4
41. Transition planning: Amazon Kinesis Producer Library vs. AmazonKinesisAsyncClient (SDK)
Amazon Kinesis Producer Library:
• Java API
• Async/PutRecords by default
• Payload aggregation
• C++ IPC microservice
AmazonKinesisAsyncClient (SDK):
• Java API
• Developer's choice
• Self-implemented
• Talks to the Amazon Kinesis HTTPS API
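"Self-implemented" is the real cost of using the raw SDK: the batching, retries, and backoff that the KPL provides must be written by hand. A sketch of the capped exponential-backoff schedule such a producer typically needs (the class name and values are illustrative, not from the talk):

```java
public class Backoff {
    // Delay before retry attempt n (0-based): baseMillis * 2^n, capped at maxMillis.
    public static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        if (attempt >= 30) {
            return maxMillis; // 2^30 * base exceeds any sane cap; avoid overflow
        }
        return Math.min(baseMillis << attempt, maxMillis);
    }
}
```

In practice the delay is also randomized (jitter) so that many producers throttled at the same moment do not retry in lockstep against the same shard.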
45. Future directions
• EOL Sonos Data Pipeline V1
• Amazon Kinesis failure modes
• Data collection: Scala or C++?
• Spark on Amazon EMR
46. Final takeaways
• Separation of concerns allows each service to specialize
in its task, reducing complexity and downtime
• Self-service analytics unlocks the research potential of
the whole company
• Amazon Kinesis gives us the streaming data pipeline
we’re looking for without operational overhead