DataOps: 9 steps to transform
your data science impact
21-24 May 2018
// Harvinder Atwal
MoneySuperMarket
{"about" : "me"}
{"current" : "Head of Data Strategy and Advanced Analytics, MoneySuperMarket"}
{"previous" : "Insight Director, Tesco Clubcard (dunnhumby)"}
{"previous" : "Senior Manager, Customer Strategy and Insight (Lloyds Banking Group)"}
{"previous" : "Senior Operational Research Analyst (British Airways)"}
// Web: @harvindersatwal | harvindersatwal@gmail.com
£2B SAVINGS: 2017 estimated total of UK savings
1993: We started life as Mortgage 2000
24.9M: adults who choose to share their data with us
24 million: average monthly users, 2017
£323M: revenue, 2017
989: product providers
Sometimes it’s simple things that work really well
From one version to 1400+
customised variants of the
newsletter
+19% Increase in Revenue Per
Send
Sometimes it’s more complicated solutions
Worried about whether you can afford a
personal loan? With UK interest rates at record
lows, it’s worth checking to see how reasonable
the cost could be.
Whether you need to borrow to buy something,
or you want to bring your existing debts under
one roof, have a look at these competitive deals
we’ve assembled.
Thanks to our Smart Search tool, you can get
an idea of the loans you’re likely to be accepted
for before you proceed with your application.
Same message but
Language tailored
to the customer’s
Financial Attitude
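A minimal sketch of the idea behind tailoring copy to a customer attribute. The segment names and message text here are invented for illustration, not MoneySuperMarket's actual segmentation:

```python
# Hypothetical sketch: choose newsletter copy by the customer's financial
# attitude segment. Segment labels and wording are illustrative only.
VARIANTS = {
    "cautious": "Worried about affordability? Check what a loan could cost before you apply.",
    "confident": "Rates are at record lows. Compare competitive loan deals in minutes.",
}

def pick_variant(financial_attitude: str) -> str:
    """Return the copy tailored to the segment, falling back to the cautious tone."""
    return VARIANTS.get(financial_attitude, VARIANTS["cautious"])

print(pick_variant("confident"))
```

With enough attributes and message components, combinations like this multiply quickly, which is how one template becomes 1400+ variants.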
Only 22% of companies are currently seeing
a significant return from data science
expenditures*
*Obligatory conference presentation quote from Gartner/Forrester/McKinsey consulting. Sorry.
The mantra is wrong
How businesses think they become data-driven:
1. HOARD DATA
2. HIRE DATA SCIENTISTS
3. ?
4. MONEY FLOWS
Warning: A data-driven customer
focussed strategy will not paper
over cracks in operational
performance or product deficiency
Putting Data ahead of the Customer
or Financial Performance
Source: tamr
Multiple challenges in the process of turning data into value on existing infrastructure:
Business Problem → Evaluate available data → Request data access from IT → Request compute resources from IT →
Negotiate with IT for requested resources → Wait for resources to be provisioned → Install languages and tools →
Configure connectivity, access and security → RAM/CPU availability, scaling, monitoring → Request network config change →
Request to install another package → Model building → Compose PowerPoint to share results → Edit Confluence to document work →
Negotiate with business stakeholder on deployment timeline → Wait for Data Engineering to implement the model →
Test newly implemented model to ensure valid results → Request modifications to model due to unexpected results →
Release model to production and schedule → Document release notes and deployment steps → Prepare for change management
Data Science trapped in laptops
Thinking real-life Data Science is a Kaggle competition
Treating Data Science as a Death Star
Technology Project
Insight does not scale!
Using data to generate ad hoc
Decision Support Insight INSTEAD OF
ACTION
Money is wasted
Time is wasted
Talent is wasted
Eliminate waste
LEAN THINKING
The Optimist The Pessimist The Lean Thinker
THE GLASS
IS HALF FULL
THE GLASS IS
HALF EMPTY
WHY IS THE GLASS
TWICE AS BIG AS IT
SHOULD BE?
Alignment of data science with the rest of the
organisation and its goals
It’s a sprint
not a
marathon
Problems with Agile
Data Science
- How do you define
business value?
DATA SOURCES
DATA STORAGE: Cloud File Storage, Distributed File System, NoSQL DB, RDBMS
COMPUTE INFRASTRUCTURE: Distributed SQL Query Engine, Distributed Compute Framework, Stream Processing Framework, Compute Instances, Container Service
ANALYTICS LAYER: Coding Workspace & Language Libraries, Machine Learning libraries, Data Visualisation libraries, Data Prep/Exploration tools (Summary Analysis, Analysis of Experiments, Segmentation, Machine Learning, Data Matching), Revision/Deployment Tools, Interactive Dashboards/Web App development, Output Files
DATA PRODUCTS (Presentation/Service Layer): BI Tools, Interactive dashboards/Web Apps, APIs
Applications (Business Layer): Insight, Marketing Optimisation, External Data Products, Internal Reporting, Website Optimisation, Commercial Optimisation, Production Code
Vertical requirements: Resource Management/Monitoring/Auditing, Scheduling, Project and Data Governance, Security/Identity Access control, Deployment, Orchestration and Scaling, Configuration Management, Revision Control, Knowledge Management
Supporting roles: Data Scientists, Data Engineering, DevOps/Infrastructure, DBAs, ETL, DQM, Metadata Management
Agile Data Science does not solve the tech complexity problem
Changing the
way we work
Data Science can’t happen in a vacuum
Situational Awareness is needed
Your business already has a hypothesis for
what creates value
Actively avoid work on anything else
It’s the Corporate Strategy and Objectives
(everyone is aligned behind)
Measurement of everything gives feedback not just on individual deliverables (fast
loop) but also on the organisation's hypothesis of what adds value (slow loop)
Situational Awareness
Objectives (Themes) → Strategies (Initiatives) → Tactics (Epics) → Actions (Stories)
Each Objective fans out into several Strategies, each Strategy into several Tactics, and each Tactic into several Actions.
Corporate strategy is broken down into many options (Epics) for Agile delivery
We reduce batch sizes of work and have options to keep flow going
Collaboration is key
Shared Buy-in from Senior management
Organisational behaviour structured around the
ideal data-journey model
Shared Priorities
Shared Trust in data
Shared Rewards based on measured outcomes,
not outputs
Creation of a fast feedback loop: data cycles are measured to eliminate bottlenecks
Plan → Test & Collect (pilot test, collect data) → Model (build model, identify segments) → Embed (adjust model to fit the organisation) → Roll Out (re-engineer business processes to support segmented execution, train the organisation) → Feedback
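One illustrative way to "measure data cycles": time each stage of a cycle and surface the slowest one. The stage names and workloads below are invented for the example:

```python
# Illustrative sketch: time each stage of a data cycle to expose bottlenecks.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record the wall-clock duration of a named pipeline stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Hypothetical stages standing in for real collect/model work.
with stage("collect"):
    data = list(range(1000))
with stage("model"):
    total = sum(data)

bottleneck = max(timings, key=timings.get)
print(f"slowest stage: {bottleneck}")
```

In practice the same idea is applied at a coarser grain (days from request to deployment, not milliseconds), but the feedback loop is identical: measure, find the constraint, remove it.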
Shortened Data Cycles to be Agile
Data Engineering, DevOps/Infrastructure, DB Management
Data Sources and Stream Processing feed:
Storage: Cloud File Storage, Distributed File System, NoSQL DB, RDBMS
Compute: Distributed SQL Query Engine, Distributed Compute Framework, Compute Instance, Container Service
Tools: Data Prep/Exploration tools, Coding Workspace & Language Libraries, Machine Learning, Data Visualisation, Interactive Dashboards/Web App development, Version/Deployment Tool, Output Files, BI Tools, Interactive dashboards/Web Apps, APIs
Platform services: Knowledge Management, Security/Identity Access control, Revision Control, Configuration Management, Orchestration and scaling, Project and Data Governance, Scheduling, Resource Management/Monitoring/Auditing, ETL, DQM
Data Scientists work from Epic to Story, guided by Data Product Strategy and Customer Feedback & Iteration
Agile Practice + DevOps Culture + Lean Thinking, applied to Data Analytics:
we had accidentally stumbled on DataOps
DataOps was popularised by Andy
Palmer in a 2015 blog post
DataOps is an independent approach to data analytics
Data Analytics team
moves at lightning speed
using highly optimized
tools and processes
across the whole data
lifecycle
Agile Collaboration to
break down silos and
work on “The Right
Things” that add value
Lean Manufacturing like
focus on eliminating waste
& bottlenecks, improving
quality, monitoring and
control
Iterative project management
Continuous delivery
Automated test and deployment
Monitoring
Self-serve
Quality
Governance
Organisational alignment
Ease of use, Predictability, Reproducibility
Strategic Objectives
Further steps to implementation:
Trust
DevOps
Reproducibility
Self-serve
Organisation
Why do we have brakes on a car?
Accept the delivery pipeline is governed by
rules and constraints
Trust part 1: Make the “What you do to data”
people in the organisation happy
Identity and Access Management
Custom role permissions
Audit trail logs
Data Loss Prevention
Encryption of Data at Rest
Encryption of Data in Motion
Resource Monitoring
Firewall rules
Resource and Object Isolation
Penetration Testing
Code Encryption and Backup
Segregation of Duties
Authorisation protocols
Data Access and Privacy Policy
Metadata Management
Data Lineage Tracking
Data Stewards and Owners
Trust part 2: Make the “What you do with
data” people in the organisation happy
Data Quality Testing
Transformation Testing
End-User Testing
ETL Integration Testing
Metadata Testing
Data Completeness Testing
ETL Regression Testing
Incremental ETL Testing
Reference Data Testing
ETL Performance Testing
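A hedged sketch of two of the checks above (completeness and reference-data testing) as automatable functions. The column names and rules are assumptions for illustration, not a real schema:

```python
# Illustrative data-quality checks: completeness and reference-data testing.
# Field names and the sample rows are hypothetical.
def check_completeness(rows, required_fields):
    """Return rows missing any required field (empty list means the test passes)."""
    return [r for r in rows if any(r.get(f) is None for f in required_fields)]

def check_reference(rows, field, allowed):
    """Return rows whose field value falls outside the reference set."""
    return [r for r in rows if r.get(field) not in allowed]

rows = [
    {"id": 1, "product": "loan"},
    {"id": 2, "product": "mortgage"},
    {"id": 3, "product": None},
]
failures = check_completeness(rows, ["id", "product"])
print(f"{len(failures)} incomplete row(s)")
```

Wired into a scheduler, checks like these run on every load, so bad data is caught before consumers see it rather than after.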
Automated reproducibility is a must
Configuration Management
For consistently reproducible computational
environments
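A minimal standard-library sketch of the reproducibility idea: record the exact Python and package versions an analysis ran under, so the environment can be re-created later. A real setup would use a lock file, configuration-management tooling, or a container image rather than this snapshot:

```python
# Minimal sketch: snapshot the current environment for reproducibility.
# Standard library only; a lock file or container image is the real-world answer.
import sys
from importlib import metadata

def environment_snapshot():
    """Return the Python version plus installed distributions pinned to versions."""
    packages = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
    return {"python": sys.version.split()[0], "packages": packages}

snap = environment_snapshot()
print(snap["python"])
```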
Continuous Integration: Commit Code Regularly
Machine Learning Pipeline: Data Cleaning, Feature Extraction and Model Train stages, each with a Master branch and a Dev branch
Product Development (e.g. App, Website, Marketing system, Operational System, Dashboard, etc.)
Run tests and review code (please integrate safely)
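The kind of test a CI server would run on every commit to, say, the feature-extraction branch. The feature logic here is a hypothetical stand-in, not the deck's actual pipeline:

```python
# Sketch: a unit test CI runs on every commit to a feature-extraction branch.
# The record fields and derived features are invented for the example.
def extract_features(record):
    """Derive simple model features from a raw record (illustrative only)."""
    return {
        "income_to_loan": record["income"] / record["loan_amount"],
        "is_homeowner": int(record["tenure"] == "owner"),
    }

def test_extract_features():
    feats = extract_features({"income": 30000, "loan_amount": 10000, "tenure": "owner"})
    assert feats["income_to_loan"] == 3.0
    assert feats["is_homeowner"] == 1

test_extract_features()
print("all tests passed")
```

Because each pipeline stage has tests like this, a dev branch can only merge to master once the whole suite is green, which is what makes frequent integration safe.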
Continuous Delivery and Beyond: Accelerating Deployment
Continuous Integration: Dev → Integration test → Application test automated; Acceptance test → Production manual
Continuous Delivery: Dev → Integration test → Application test → Acceptance test automated; Production manual
Continuous Deployment: Dev → Integration test → Application test → Acceptance test → Production, all automated
Continuous Operations: Resources that scale
Chemistry is not about tubes
DataOps is not about tools
(but the right ones help)
Align your spine
Needs
Principles
Practices
Tools
Values
How do you know it is the best
possible tool?
How do you know that
the Practices actively help the system?
How do you know
which Principles you want to
apply?
“We use _____ to get our work done”
“We DO Self-Service and DataOps
to continuously create VALUE for
the customer and business”
We LEVERAGE Agile and Lean
PRINCIPLES to change the system and
make sure resources work on the right
thing
We OPTIMISE for Speed, Accuracy,
Experimentation/Feedback and
Security.
We are here to SATISFY THE NEED to
help customers save money and the
business to execute its strategy
It all starts at Needs. Why does this
system exist in the first place?
Source: Kevin Trethewey, Danie Roux, Joanne Perold
Avoid building your own anything or
being on the bleeding edge.
Cost of Delay is high.
Data Scientists need a way to manage their projects
end-to-end with self-service data AND ARCHITECTURE
Business Problem → Evaluate available data → Request data access from IT → Request compute resources from IT →
Negotiate with IT for requested resources → Wait for resources to be provisioned → Install languages and tools →
Configure connectivity, access and security → RAM/CPU availability, scaling, monitoring → Request network config change →
Request to install another package → Model building → Compose PowerPoint to share results → Edit Confluence to document work →
Negotiate with business stakeholder on deployment timeline → Wait for Data Engineering to implement the model →
Test newly implemented model to ensure valid results → Request modifications to model due to unexpected results →
Release model to production and schedule → Document release notes and deployment steps → Prepare for change management
Modern serverless and managed
infrastructure makes it easy to create
data products: just bring code and data
A single unified platform reduces data
fragmentation, overcomes business silos
and helps enforce consistent governance
You can make the data supply chain more efficient
by unifying data and tools in one platform
Core Data and Other Data are extracted/loaded via ETL into the Data Warehouse(s), then offloaded to the Analytics Platform, the main source(s) of truth.
Flattened/merged columns feed the Presentation/Service Layer(s): analytical tools (predictive and prescriptive analytics), BI tools (descriptive and diagnostic analytics, sourcing cubes on dimensions), microservices (reloading data) and data sharing.
Data Science Platforms add further self-serve
capabilities
Data Access, Prep
and Exploration
Jupyter, Rstudio,
Zeppelin, etc.
Automation and
Machine Learning
Run experiments, track
and compare results
Delivery and Model
Management
Publish APIs,
Interactive web apps
Schedule reports
Collaboration and Version Control
Discover, discuss and build on existing work
Compute Environment Library
Customised software stack
Compute Grid
Orchestrate hardware for development and deployment
Source: Domino Data Labs
The market for platforms is exploding
Self-serve enables reduced DataOps roles
Remaining roles: Data Scientist, Data Analyst, Data Engineer, Developers, Product Managers, Business Stakeholders, Operations
Work covered: ETL, Quality Testing, Descriptive Analytics, Advanced Analytics, BI, ML
Crossed out (X): DevOps, Infrastructure Engineers, DBAs, Sys admin
Implement AI: Actionable Intelligence
#1 Eliminate wasted effort
Find the FASTEST, CHEAPEST path between data and consumers
#2 Align with the Organisation
through Agile Collaboration
#3 Deliver
Products
not
Projects
Prioritize solutions that fit into a DataOps workflow over others
#4 Build a measurement and feedback
culture
#5 Embrace
Development
best practice in
Data Science
Version Control, Configuration
management, Continuous Integration,
Continuous Operations
#6 KEEP CALM
AND
BUILD TRUST IN DATA
Put Effective Data Governance, Security and Testing in place
#7 Invest in tools and process to
reduce bottlenecks and increase quality
Managed Infrastructure and Serverless Cloud,
Automation and Data Science Platforms
#8 Decentralise Self-service analytics
AND cloud infrastructure
#9 Organise around the ideal data
journey instead of teams
Fewer roles, more end-to-end ownership, less friction
Acquire → Process → Store → Manage → Share → Use
Data Engineering
Data Scientists
Data Analysts
Business Stakeholders
#9.5
Optimise
data cycles
for…
SPEED!
Data Science Today
Customer → Data → ? → Analytics
Failure modes: the Hamster wheel, the Roadblock, the Aimless crash and burn, the "So what happened?", the "We did it once, why doesn't it work again?"
The DataOps Data Science Factory
Epic
Customer
Data
Product
Strategy
Story
Data
Rest of
Business Analytics
Agile Collaboration
Data Governance
Automated testing
Value Measurement
Version Control
Configuration Management
Self-Serve Infrastructure
Automation
Continuous Integration
Sequences
shortened
Questions?
// Harvinder Atwal // Web
var current = {
  companyName : "MoneySuperMarket",
  position : "Head of Data Strategy"
    + " and Advanced Analytics"
};
var previous1 = {
  companyName : "Dunnhumby",
  position : "Insight Director,"
    + " Tesco Clubcard"
};
var previous2 = {
  companyName : "Lloyds Banking Group",
  position : "Senior Manager"
};
var previous3 = {
  companyName : "British Airways",
  position : "Senior Operational Research Analyst"
};
{"about" : "me"}
var username = "harvindersatwal";
var linkedIn = "/in/" + username;
var twitter = "@" + username;
var email = username + "@gmail.com";

Weitere ähnliche Inhalte

Was ist angesagt?

Introdution to Dataops and AIOps (or MLOps)
Introdution to Dataops and AIOps (or MLOps)Introdution to Dataops and AIOps (or MLOps)
Introdution to Dataops and AIOps (or MLOps)Adrien Blind
 
Modern Data architecture Design
Modern Data architecture DesignModern Data architecture Design
Modern Data architecture DesignKujambu Murugesan
 
ODSC May 2019 - The DataOps Manifesto
ODSC May 2019 - The DataOps ManifestoODSC May 2019 - The DataOps Manifesto
ODSC May 2019 - The DataOps ManifestoDataKitchen
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureArchitect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureDatabricks
 
DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDatabricks
 
Data Architecture Strategies: Data Architecture for Digital Transformation
Data Architecture Strategies: Data Architecture for Digital TransformationData Architecture Strategies: Data Architecture for Digital Transformation
Data Architecture Strategies: Data Architecture for Digital TransformationDATAVERSITY
 
Intro to Delta Lake
Intro to Delta LakeIntro to Delta Lake
Intro to Delta LakeDatabricks
 
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...DataScienceConferenc1
 
Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)James Serra
 
Modern Data Flow
Modern Data FlowModern Data Flow
Modern Data Flowconfluent
 
Modernize & Automate Analytics Data Pipelines
Modernize & Automate Analytics Data PipelinesModernize & Automate Analytics Data Pipelines
Modernize & Automate Analytics Data PipelinesCarole Gunst
 
Databricks Fundamentals
Databricks FundamentalsDatabricks Fundamentals
Databricks FundamentalsDalibor Wijas
 
Databricks for Dummies
Databricks for DummiesDatabricks for Dummies
Databricks for DummiesRodney Joyce
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Databricks
 
Making Data Timelier and More Reliable with Lakehouse Technology
Making Data Timelier and More Reliable with Lakehouse TechnologyMaking Data Timelier and More Reliable with Lakehouse Technology
Making Data Timelier and More Reliable with Lakehouse TechnologyMatei Zaharia
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceDatabricks
 

Was ist angesagt? (20)

Introdution to Dataops and AIOps (or MLOps)
Introdution to Dataops and AIOps (or MLOps)Introdution to Dataops and AIOps (or MLOps)
Introdution to Dataops and AIOps (or MLOps)
 
Lakehouse in Azure
Lakehouse in AzureLakehouse in Azure
Lakehouse in Azure
 
adb.pdf
adb.pdfadb.pdf
adb.pdf
 
Modern Data architecture Design
Modern Data architecture DesignModern Data architecture Design
Modern Data architecture Design
 
ODSC May 2019 - The DataOps Manifesto
ODSC May 2019 - The DataOps ManifestoODSC May 2019 - The DataOps Manifesto
ODSC May 2019 - The DataOps Manifesto
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureArchitect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh Architecture
 
DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
 
Data Architecture Strategies: Data Architecture for Digital Transformation
Data Architecture Strategies: Data Architecture for Digital TransformationData Architecture Strategies: Data Architecture for Digital Transformation
Data Architecture Strategies: Data Architecture for Digital Transformation
 
Intro to Delta Lake
Intro to Delta LakeIntro to Delta Lake
Intro to Delta Lake
 
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga...
 
Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)
 
Modern Data Flow
Modern Data FlowModern Data Flow
Modern Data Flow
 
Implementing a Data Lake
Implementing a Data LakeImplementing a Data Lake
Implementing a Data Lake
 
Modernize & Automate Analytics Data Pipelines
Modernize & Automate Analytics Data PipelinesModernize & Automate Analytics Data Pipelines
Modernize & Automate Analytics Data Pipelines
 
Databricks Fundamentals
Databricks FundamentalsDatabricks Fundamentals
Databricks Fundamentals
 
Databricks for Dummies
Databricks for DummiesDatabricks for Dummies
Databricks for Dummies
 
Data Engineering Basics
Data Engineering BasicsData Engineering Basics
Data Engineering Basics
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
Making Data Timelier and More Reliable with Lakehouse Technology
Making Data Timelier and More Reliable with Lakehouse TechnologyMaking Data Timelier and More Reliable with Lakehouse Technology
Making Data Timelier and More Reliable with Lakehouse Technology
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 

Ähnlich wie DataOps: Nine steps to transform your data science impact Strata London May 18

DataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
DataOps - Big Data and AI World London - March 2020 - Harvinder AtwalDataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
DataOps - Big Data and AI World London - March 2020 - Harvinder AtwalHarvinder Atwal
 
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...Big Data Week
 
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...Caserta
 
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016Caserta
 
Demonstrating Big Value in Big Data with New Analytics Approaches
Demonstrating Big Value in Big Data with New Analytics ApproachesDemonstrating Big Value in Big Data with New Analytics Approaches
Demonstrating Big Value in Big Data with New Analytics ApproachesJulie Severance
 
Data Mesh in Azure using Cloud Scale Analytics (WAF)
Data Mesh in Azure using Cloud Scale Analytics (WAF)Data Mesh in Azure using Cloud Scale Analytics (WAF)
Data Mesh in Azure using Cloud Scale Analytics (WAF)Nathan Bijnens
 
Data summit connect fall 2020 - rise of data ops
Data summit connect fall 2020 - rise of data opsData summit connect fall 2020 - rise of data ops
Data summit connect fall 2020 - rise of data opsRyan Gross
 
Why Everything You Know About bigdata Is A Lie
Why Everything You Know About bigdata Is A LieWhy Everything You Know About bigdata Is A Lie
Why Everything You Know About bigdata Is A LieSunil Ranka
 
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELD
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELDBig Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELD
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELDMatt Stubbs
 
The 5 Keys to a Killer Data Lake
The 5 Keys to a Killer Data LakeThe 5 Keys to a Killer Data Lake
The 5 Keys to a Killer Data LakeDataWorks Summit
 
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Denodo
 
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...Denodo
 
Big Data's Impact on the Enterprise
Big Data's Impact on the EnterpriseBig Data's Impact on the Enterprise
Big Data's Impact on the EnterpriseCaserta
 
Journey to Cloud Analytics
Journey to Cloud Analytics Journey to Cloud Analytics
Journey to Cloud Analytics Datavail
 
ADV Slides: How to Improve Your Analytic Data Architecture Maturity
ADV Slides: How to Improve Your Analytic Data Architecture MaturityADV Slides: How to Improve Your Analytic Data Architecture Maturity
ADV Slides: How to Improve Your Analytic Data Architecture MaturityDATAVERSITY
 
Migrating Analytics to the Cloud at Fannie Mae
Migrating Analytics to the Cloud at Fannie MaeMigrating Analytics to the Cloud at Fannie Mae
Migrating Analytics to the Cloud at Fannie MaeDataWorks Summit
 
Big Data Analytics_Unit1.pptx
Big Data Analytics_Unit1.pptxBig Data Analytics_Unit1.pptx
Big Data Analytics_Unit1.pptxPrabhaJoshi4
 

Ähnlich wie DataOps: Nine steps to transform your data science impact Strata London May 18 (20)

DataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
DataOps - Big Data and AI World London - March 2020 - Harvinder AtwalDataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
DataOps - Big Data and AI World London - March 2020 - Harvinder Atwal
 
KNIME Meetup 2016-04-16
KNIME Meetup 2016-04-16KNIME Meetup 2016-04-16
KNIME Meetup 2016-04-16
 
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...
BDW Chicago 2016 - Ramu Kalvakuntla, Sr. Principal - Technical - Big Data Pra...
 
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...
Data Intelligence: How the Amalgamation of Data, Science, and Technology is C...
 
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016
Building New Data Ecosystem for Customer Analytics, Strata + Hadoop World, 2016
 
Demonstrating Big Value in Big Data with New Analytics Approaches
Demonstrating Big Value in Big Data with New Analytics ApproachesDemonstrating Big Value in Big Data with New Analytics Approaches
Demonstrating Big Value in Big Data with New Analytics Approaches
 
Data Mesh in Azure using Cloud Scale Analytics (WAF)
Data Mesh in Azure using Cloud Scale Analytics (WAF)Data Mesh in Azure using Cloud Scale Analytics (WAF)
Data Mesh in Azure using Cloud Scale Analytics (WAF)
 
Data summit connect fall 2020 - rise of data ops
Data summit connect fall 2020 - rise of data opsData summit connect fall 2020 - rise of data ops
Data summit connect fall 2020 - rise of data ops
 
Why Everything You Know About bigdata Is A Lie
Why Everything You Know About bigdata Is A LieWhy Everything You Know About bigdata Is A Lie
Why Everything You Know About bigdata Is A Lie
 
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELD
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELDBig Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELD
Big Data LDN 2018: THE PATH TO ENTERPRISE AI: TALES FROM THE FIELD
 
The 5 Keys to a Killer Data Lake
The 5 Keys to a Killer Data LakeThe 5 Keys to a Killer Data Lake
The 5 Keys to a Killer Data Lake
 
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
 
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...
Implementar una estrategia eficiente de gobierno y seguridad del dato con la ...
 
Big Data's Impact on the Enterprise
Big Data's Impact on the EnterpriseBig Data's Impact on the Enterprise
Big Data's Impact on the Enterprise
 
Critical Success Factors
Critical Success FactorsCritical Success Factors
Critical Success Factors
 
Journey to Cloud Analytics
Journey to Cloud Analytics Journey to Cloud Analytics
Journey to Cloud Analytics
 
Focus
FocusFocus
Focus
 
ADV Slides: How to Improve Your Analytic Data Architecture Maturity
ADV Slides: How to Improve Your Analytic Data Architecture MaturityADV Slides: How to Improve Your Analytic Data Architecture Maturity
ADV Slides: How to Improve Your Analytic Data Architecture Maturity
 
Migrating Analytics to the Cloud at Fannie Mae
Migrating Analytics to the Cloud at Fannie MaeMigrating Analytics to the Cloud at Fannie Mae
Migrating Analytics to the Cloud at Fannie Mae
 
Big Data Analytics_Unit1.pptx
Big Data Analytics_Unit1.pptxBig Data Analytics_Unit1.pptx
Big Data Analytics_Unit1.pptx
 

Mehr von Harvinder Atwal

Data leaders summit 2019
Data leaders summit 2019Data leaders summit 2019
Data leaders summit 2019Harvinder Atwal
 
Data Leaders Summit Barcelona 2018
Data Leaders Summit Barcelona 2018Data Leaders Summit Barcelona 2018
Data Leaders Summit Barcelona 2018Harvinder Atwal
 
Machine learning - What they don't teach you on Coursera ODSC London 2016
Machine learning - What they don't teach you on Coursera ODSC London 2016Machine learning - What they don't teach you on Coursera ODSC London 2016
Machine learning - What they don't teach you on Coursera ODSC London 2016Harvinder Atwal
 
Data Insight Leaders Summit Barcelona 2017
Data Insight Leaders Summit Barcelona 2017Data Insight Leaders Summit Barcelona 2017
Data Insight Leaders Summit Barcelona 2017Harvinder Atwal
 
Effective report writing
Effective report writingEffective report writing
Effective report writingHarvinder Atwal
 

Mehr von Harvinder Atwal (7)

AI is a Team Sport
AI is a Team SportAI is a Team Sport
AI is a Team Sport
 
Data leaders summit 2019
Data leaders summit 2019Data leaders summit 2019
Data leaders summit 2019
 
Data Leaders Summit Barcelona 2018
Data Leaders Summit Barcelona 2018Data Leaders Summit Barcelona 2018
Data Leaders Summit Barcelona 2018
 
Machine learning - What they don't teach you on Coursera ODSC London 2016
Machine learning - What they don't teach you on Coursera ODSC London 2016Machine learning - What they don't teach you on Coursera ODSC London 2016
Machine learning - What they don't teach you on Coursera ODSC London 2016
 
Data Insight Leaders Summit Barcelona 2017
Data Insight Leaders Summit Barcelona 2017Data Insight Leaders Summit Barcelona 2017
Data Insight Leaders Summit Barcelona 2017
 
Data visualisation
Data visualisationData visualisation
Data visualisation
 
Effective report writing
Effective report writingEffective report writing
Effective report writing
 

DataOps: Nine steps to transform your data science impact Strata London May 18

  • 1. DataOps: 9 steps to transform your data science impact 21-24 May 2018
  • 2. // Harvinder Atwal MoneySuperMarket // Web dunnhumby {"previous" : "Insight Director, Tesco Clubcard"} Lloyds Banking Group {"previous" : "Senior Manager, Customer Strategy and Insight"} {"Current" : "Head of Data Strategy and Advanced Analytics"} @harvindersatwal British Airways {"previous" : "Senior Operational Research Analyst"} {"about" : "me"} @gmail.com
  • 3. £2B SAVINGS 2017 estimate total of UK savings 1993 24.9M 24 million £323M 989 We started life as mortgages 2000 Adults choose to share their data with us Average monthly users 2017 Revenue 2017 Product Providers
  • 4. Sometimes it’s simple things that work really well From one version to 1400+ customised variants of the newsletter +19% Increase in Revenue Per Send
  • 5. Sometimes it’s more complicated solutions Worried about whether you can afford a personal loan? With UK interest rates at record lows, it’s worth checking to see how reasonable the cost could be. Whether you need to borrow to buy something, or you want to bring your existing debts under one roof, have a look at these competitive deals we’ve assembled. Thanks to our Smart Search tool, you can get an idea of the loans you’re likely to be accepted for before you proceed with your application. Same message but Language tailored to the customer’s Financial Attitude
  • 6. Only 22% of companies are currently seeing a significant return from data science expenditures* *Obligatory conference presentation quote from Gartner/Forrester/McKinsey Consulting. Sorry.
  • 8. HIRE DATA SCIENTISTS How businesses think they become data-driven 1 2 3 MONEY FLOWS HOARD DATA 4
  • 9. Warning: A data-driven customer focussed strategy will not paper over cracks in operational performance or product deficiency
  • 10. Putting Data ahead of the Customer or Financial Performance
  • 12. Multiple challenges in the process of turning data into value on existing infrastructure Business Problem Evaluate available data Request Data Access from IT Request Compute Resources from IT Negotiate with IT for requested resources Wait for resources to be provisioned Install Languages and tools Configure connectivity, Access and security RAM/CPU Availability, scaling, monitoring Request network Config Change Request to install another package Model building Compose PowerPoint to share results Edit Confluence to document work Negotiate with business stakeholder on deployment timeline Wait for Data Engineering to implement the model Test Newly implemented model to ensure valid results Request Modifications to model due to unexpected results Release model to production and schedule Document release notes and deployment steps Prepare for change management
  • 13. Data Science trapped in laptops
  • 14. Thinking real-life Data Science is a Kaggle competition
  • 15. Treating Data Science as a Death Star Technology Project
  • 16. Insight does not scale! Using data to generate ad hoc Decision Support Insight INSTEAD OF ACTION
  • 17. Money is wasted Time is wasted Talent is wasted
  • 18. Eliminate waste LEAN THINKING The Optimist The Pessimist The Lean Thinker THE GLASS IS HALF FULL THE GLASS IS HALF EMPTY WHY IS THE GLASS TWICE AS BIG AS IT SHOULD BE?
  • 19. Alignment of data science with the rest of the organisation and its goals
  • 20. It’s a sprint not a marathon
  • 21. Problems with Agile Data Science - How do you define business value?
  • 22. DATA STORAGE Cloud File Storage Distributed File System NoSQL DB RDBMS COMPUTE INFRASTRUCTURE ResourceManagement/Monitoring/Auditing Scheduling ProjectandDataGovernance DataEngineering Distributed SQLQuery Engine Distributed Compute Framework Compute Instances Coding Workspace & Language Libraries Output Files ANALYTICS LAYER Machine Learning libraries Data Visualisation libraries BI Tools Interactive dashboards/ Web Apps Security/IdentityAccesscontrol APIs Data Prep/Exploration tools Summary Analysis, Analysis of Experiments, Segmentation, Machine Learning, Data Matching Revision/Deployment Tools Interactive Dashboards/ Web App development Applications (Business Layer) Insight Marketing Optimisation External Data Products Internal Reporting Website Optimisation Commercial Optimisation Production Code DevOps/Infrastructure DBAs ETL DQM MetadataManagement Agile Data Science does not solve tech complexity problem Container Service Resource Vertical requirementsDATA PRODUCTS (Presentation/ Service Layer) Deployment,OrchestrationandScaling ConfigurationManagement RevisionControl KnowledgeManagement DataScientists DATA SOURCES Stream Processing Framework
  • 24. Data Science can’t happen in a vacuum Situational Awareness is needed
  • 25. Your business already has hypotheses for what creates value. Actively avoid work on anything else. It’s the Corporate Strategy and Objectives (everyone is aligned behind)
  • 26. Measurement of everything gives feedback of not just individual deliverables (fast loop) but also the organisation’s hypothesis of what adds value (slow loop) Situational Awareness Objectives (Themes) Strategies (Initiatives) Tactics (Epics) Actions (Stories) Strategies (Initiatives) Strategies (Initiatives) Objectives (Themes) Tactics (Epics) Tactics (Epics) Tactics (Epics) Tactics (Epics) Tactics (Epics) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Actions (Stories) Corporate strategy is broken down into many options (Epics) for Agile delivery
  • 27. We reduce Batch sizes of work and have options to keep flow going
  • 28. Collaboration is key Shared Buy-in from Senior management Organizational behavior structured around the ideal data-journey model Shared Priorities Shared Trust in data Shared Rewards based on measured outcomes, not outputs
  • 29. Test & Collect Model Embed Roll Out Feedback Plan Pilot test Collect Data Build Model, Identify segments Adjust model to fit organisation Re-engineer business processes to support segmented execution Train organisation Creation of fast feedback loop
  • 30. Data cycles are measured to eliminate bottlenecks
  • 31. Shortened Data Cycles to be Agile Data Engineering Dev Ops/Infrastructure DB Management Cloud File Storage Distributed File System NoSQL DB RDBMS Distributed SQLQuery Engine Distributed Compute Framework Compute Instance Container Service Data Prep/Exploration tools Coding Workspace & Language Libraries Machine Learning Data Visualisation Interactive Dashboards/ Web App development Version/ Deployment Tool Output Files BI Tools Interactive dashboards /Web Apps APIs Knowledge Management Security/Identity Access control Revision Control Configuration Management Orchestration and scaling Project and Data Governance Scheduling Resource Management/Monitoring/Auditing ETL DQM Data Scientists Epic Customer Feedback & Iteration Data Product Strategy Story Stream Processing Data Sources
  • 33. DataOps was popularised by Andy Palmer in a 2015 blog post
  • 34. DataOps is an independent approach to data analytics Data Analytics team moves at lightning speed using highly optimized tools and processes across the whole data lifecycle Agile Collaboration to break down silos and work on “The Right Things” that add value Lean Manufacturing-like focus on eliminating waste & bottlenecks, improving quality, monitoring and control Iterative project management Continuous delivery Automated test and deployment Monitoring Self-serve Quality Governance Organisational alignment Ease of use Predictability Reproducibility Strategic Objectives
  • 36. Why do we have brakes on a car?
  • 37. Accept the delivery pipeline is governed by rules and constraints
  • 38. Trust part 1: Make the “What you do to data” people in the organisation happy Identity and Access Management Custom role permissions Audit trail logs Data Loss Prevention Encryption of Data at Rest Encryption of Data in Motion Resource Monitoring Firewall rules Resource and Object Isolation Penetration Testing Code Encryption and Backup Segregation of Duties Authorisation protocols Data Access and Privacy Policy Metadata Management Data Lineage Tracking Data Stewards and Owners
  • 39. Trust part 2: Make the “What you do with data” people in the organisation happy Data Quality Testing Transformation Testing End-User Testing ETL Integration Testing Metadata Testing Data Completeness Testing ETL Regression Testing Incremental ETL Testing Reference Data Testing ETL Performance Testing
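The testing categories in the slide above lend themselves to automation. A minimal sketch of a data-completeness check and a reference-data check in plain Python; the rows, field names and reference set are invented for illustration:

```python
# Minimal data-quality checks in pure Python. The records, field
# names and reference set are hypothetical examples.

VALID_PRODUCTS = {"loan", "mortgage", "insurance"}  # reference data

def completeness_errors(rows, required=("customer_id", "product")):
    """Data-completeness test: rows missing any required field."""
    return [r for r in rows if any(r.get(f) in (None, "") for f in required)]

def reference_errors(rows):
    """Reference-data test: rows whose product is not in the reference set."""
    return [r for r in rows if r.get("product") not in VALID_PRODUCTS]

rows = [
    {"customer_id": 1, "product": "loan"},
    {"customer_id": 2, "product": ""},         # fails completeness
    {"customer_id": 3, "product": "pet food"}, # fails reference test
]

print(len(completeness_errors(rows)))  # 1
print(len(reference_errors(rows)))     # 2
```

Checks like these run on every pipeline execution, so bad data raises an alert before it reaches the "what you do with data" consumers.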
  • 41. Configuration Management For consistently reproducible computational environments
  • 42. Continuous Integration: Commit Code Regularly Data Cleaning Master Data Cleaning Dev Branch Feature Extraction Dev Feature Extraction Master Model Train Master Model Train Dev Branch Machine Learning Pipeline Product Development (e.g. App, Website, Marketing system, Operational System, Dashboard, etc.)
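The branch-per-stage pipeline in the slide above integrates most safely when each stage is a small, independently testable function. A hedged Python sketch, with toy transformations standing in for real cleaning, feature-extraction and training logic:

```python
# Sketch of an ML pipeline split into small, independently testable
# stages, mirroring the data-cleaning / feature-extraction / model-train
# branches above. The transformations themselves are toy examples.

def clean(records):
    """Data cleaning stage: drop records with missing values."""
    return [r for r in records if all(v is not None for v in r.values())]

def extract_features(records):
    """Feature extraction stage: derive a simple numeric feature."""
    return [{"spend_per_visit": r["spend"] / r["visits"]} for r in records]

def train(features):
    """Model training stage: a trivial 'model' (mean of the feature)."""
    values = [f["spend_per_visit"] for f in features]
    return sum(values) / len(values)

records = [
    {"spend": 100.0, "visits": 4},
    {"spend": None, "visits": 2},   # removed by clean()
    {"spend": 60.0, "visits": 2},
]

model = train(extract_features(clean(records)))
print(model)  # (25.0 + 30.0) / 2 = 27.5
```

Because each stage has a narrow input/output contract, a dev-branch change to one stage can be unit-tested and merged to master without touching the others.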
  • 43. Run tests and review code (please integrate safely)
  • 44. Continuous Delivery and Beyond: Accelerating Deployment Dev Integration testApplication test Acceptance test Production Continuous Integration Dev Integration testApplication test Continuous Delivery Dev Integration testApplication test Acceptance test Production Continuous Deployment Automated Manual
  • 46. Chemistry is not about tubes DataOps is not about tools (but the right ones help)
  • 47. Align your spine Needs Principles Practices Tools Values How do you know it is the best possible tool? How do you know that the Practices actively help the system? How do you know which Principles you want to apply? “We use _____ to get our work done” “We DO Self-Service and DataOps to continuously create VALUE for the customer and business” We LEVERAGE Agile and Lean PRINCIPLES to change the system and make sure resources work on the right thing We OPTIMISE for Speed, Accuracy, Experimentation/Feedback and Security. We are here to SATISFY THE NEED to help customers save money and the business to execute its strategy It all starts at Needs. Why does this system exist in the first place? Source: Kevin Trethewey, Danie Roux, Joanne Perold
  • 48. Avoid building your own anything or being on the bleeding edge. Cost of Delay is high.
  • 49. Data Scientists need a way to manage their projects end-to-end with self-service data AND ARCHITECTURE Business Problem Evaluate available data Request Data Access from IT Request Compute Resources from IT Negotiate with IT for requested resources Wait for resources to be provisioned Install Languages and tools Configure connectivity, Access and security RAM/CPU Availability, scaling, monitoring Request network Config Change Request to install another package Model building Compose PowerPoint to share results Edit Confluence to document work Negotiate with business stakeholder on deployment timeline Wait for Data Engineering to implement the model Test Newly implemented model to ensure valid results Request Modifications to model due to unexpected results Release model to production and schedule Document release notes and deployment steps Prepare for change management
  • 50. Modern serverless and managed infrastructure makes it easy to create data products: just bring code and data. A single unified platform reduces data fragmentation, overcomes business silos and helps enforce consistent governance
  • 51. You can make the data supply chain more efficient by unifying data and tools in one platform Data Warehouse(s)ETL Analytic s Platform Core Data Other Data Extract/Load OffLoad Main Source(s) of Truth Presentation/ Service Layer(s) Analytical tools Predictive and Prescriptive analytics Flatten/Merge columns Data Sharing BI Tools Descriptive and Diagnostic analytics Source Cubes on Dimensions Reload Data Microservices Flatten/Merge columns Data Sharing
  • 52. Data Science Platforms add further self-serve capabilities Data Access, Prep and Exploration Jupyter, Rstudio, Zeppelin, etc. Automation and Machine Learning Run experiments, track and compare results Delivery and Model Management Publish APIs, Interactive web apps Schedule reports Collaboration and Version Control Discover, discuss and build on existing work Compute Environment Library Customised software stack Compute Grid Orchestrate hardware for development and deployment Source: Domino Data Labs
  • 53. The market for platforms is exploding
  • 54. Data Scientist Data Analyst Data Engineer Self-serve enables reduced DataOps roles ETL Quality Testing Descriptive Analytics Advanced Analytics BI Dev Ops Infrastructure Engineers DBAs X X X Business Stakeholders Operations Sys adminX Developers ML Product Managers
  • 55. Implement AI: Actionable Intelligence
  • 56. #1 Eliminate wasted effort Find the FASTEST, CHEAPEST path between data and consumers
  • 57. #2 Align with the Organisation through Agile Collaboration
  • 58. #3 Deliver Products not Projects Prioritize solutions that fit into a DataOps workflow over others
  • 59. #4 Build a measurement and feedback culture
  • 60. #5 Embrace Development best practice in Data Science Version Control, Configuration management, Continuous Integration, Continuous Operations
  • 61. #6 KEEP CALM AND BUILD TRUST IN DATA Put Effective Data Governance, Security and Testing in place
  • 62. #7 Invest in tools and process to reduce bottlenecks and increase quality Managed Infrastructure and Serverless Cloud, Automation and Data Science Platforms
  • 63. #8 Decentralise Self-service analytics AND cloud infrastructure
  • 64. #9 Organise around the ideal data journey instead of teams Fewer roles, more end-to-end ownership, less friction Store Share UseManageAcquire Process Data Engineering Data Scientists Data Analysts Business Stakeholders
  • 66. Data Science Today Customer Data ? Hamster wheel Analytics The Roadblock The Aimless crash and burn The “So What happened?” The “We did it once, why doesn’t it work again?”
  • 67. The DataOps Data Science Factory Epic Customer Data Product Strategy Story Data Rest of Business Analytics Agile Collaboration Data Governance Automated testing Value Measurement Version Control Configuration Management Self-Serve Infrastructure Automation Continuous Integration
  • 70. // Harvinder Atwal // Web var current: { companyName : "MoneySuperMarket", position : "Head of Data Strategy" + " and Advanced Analytics" }; var previous1: { companyName : "Dunnhumby", position : "Insight Director," + " Tesco Clubcard" }; var previous2: { companyName : "Lloyds Banking Group", position : "Senior Manager" }; var previous3: { companyName : "British Airways", position : "Senior Operational Research Analyst" }; {"about" : "me"} var username = "harvindersatwal"; var linkedIn = "/in/" + username; var twitter = "@" + username; var email = username + "@gmail.com";

Editor's notes

  1. The average colleague doesn't want to be a data person any more than I want to be an accountant. You have to hire like Google: data people who happen to make good product owners.
  2. Star wars is not a metaphor for good vs Evil but Waterfall vs Agile
  3. Too much wastage in the process and hard to impact customers directly
  4. The key to adding value is to adapt and borrow principles from Agile Software development starting with alignment of data science with the rest of the organisation and it's goals.
  5. Work only on the organisation’s biggest strategic objectives – those stakeholders have aligned behind. Objectives the business hypothesises will add the most value.
  6. We don’t know upfront what is going to work.
  7. DataOps is an automated, process-oriented methodology, used by big data teams, to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has now matured to become a new and independent approach to data analytics.[1] DataOps applies to the entire data lifecycle[2] from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations.[3] From a process and methodology perspective, DataOps applies Agile software development, DevOps[3] and the statistical process control used in lean manufacturing, to data analytics.[4] In DataOps, development of new analytics is streamlined using Agile software development, an iterative project management methodology that replaces the traditional Waterfall sequential methodology. Studies show that software development projects complete significantly faster and with far fewer defects when Agile Development is used. The Agile methodology is particularly effective in environments where requirements are quickly evolving — a situation well known to data analytics professionals.[5] DevOps focuses on continuous delivery by leveraging on-demand IT resources and by automating test and deployment of analytics. This merging of software development and IT operations has improved velocity, quality, predictability and scale of software engineering and deployment. Borrowing methods from DevOps, DataOps seeks to bring these same improvements to data analytics.[3] Like lean manufacturing, DataOps utilizes statistical process control (SPC) to monitor and control the data analytics pipeline. With SPC in place, the data flowing through an operational system is constantly monitored and verified to be working. 
If an anomaly occurs, the data analytics team can be notified through an automated alert.[6] DataOps seeks to provide the tools, processes, and organizational structures to cope with this significant increase in data.[7] Automation streamlines the daily demands of managing large integrated databases, freeing the data team to develop new analytics in a more efficient and effective way.[9] DataOps embraces the need to manage many sources of data, numerous data pipelines and a wide variety of transformations.[3] DataOps seeks to increase velocity, reliability, and quality of data analytics.[10] It emphasizes communication, collaboration, integration, automation, measurement and cooperation between data scientists, analysts, data/ETL(extract, transform, load) engineers, information technology (IT), and quality assurance/governance.[11] It aims to help organizations rapidly produce insight, turn that insight into operational tools, and continuously improve analytic operations and performance.[11]
  8. This is sometimes really hard for Data Scientists who experiment with data on laptops to accept.
  9. Add Data and Logic Tests
  10. Version control is the foundation upon which a lot of delivery is built. At a minimum, reviewers of a publication and future researchers should be able to: 1) Download all data and software used to generate the results. 2) Run tests and review source code to verify correctness. 3) Run a build process to execute the computation. Version control makes it possible to maintain an archived version of the code used to produce a particular result. Examples include Git and Subversion. Automated build systems document the high-level structure of a computation: which programs process which data, what outputs they produce, etc. Examples include Make and Ant.
  11. Configuration management tools document the details of the computational environment where the result was produced, including the programming languages, libraries, and system-level software the results depend on.  Examples include package managers like Conda that document a set of packages, containers like Docker that also document system software, and virtual machines that actually contain the entire environment needed to run a computation.
  12. In an enterprise setting where multiple data scientists could be working on a single project, the first step to doing data science work that scales is implementing version control, whether that’s GitHub, GitLab, Bitbucket, or another solution. Once your team has the ability to track code changes, the next step is to create a process in which they regularly commit their code to the master branch of your repository.
  13. During development, automated tests make programs more likely to be correct; they also tend to improve code quality. During review, they provide evidence of correctness, and for future researchers they provide what is often the most useful form of documentation. Examples include unittest and nose for Python and JUnit for Java.
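The note above names unittest; as one possible shape, a small test case for a hypothetical transformation (run with `python -m unittest <module>`):

```python
# A minimal unittest example. normalise_email is a hypothetical
# transformation invented for illustration.
import unittest

def normalise_email(raw):
    """Hypothetical transformation: trim whitespace and lowercase."""
    return raw.strip().lower()

class TestNormaliseEmail(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalise_email("  User@Example.COM "),
                         "user@example.com")

    def test_idempotent(self):
        # Applying the transformation twice changes nothing.
        once = normalise_email("a@b.com")
        self.assertEqual(normalise_email(once), once)

# Run from the command line with: python -m unittest <this module>
```

Tests of this sort double as documentation: a future maintainer can read the assertions to learn the transformation's contract.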
  14. You can move beyond Continuous Integration to make deployment even faster. Traditionally, data science deployment has been a multi-step process that puts the onus on engineering: Engineers would refactor, test, and automate or schedule a data scientist’s model before slowly rolling it out, sometimes months after it was originally built. Developers that embrace continuous delivery are pushing new application features or changes into production quickly, sometimes with the click of a button. Increasingly, cloud and data science platforms are filling this void with features such as the ability to deploy models as APIs or schedule code runs which means that as soon as new development passes your tests it can be deployed into production with no dependencies on other teams.
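Deploying a model as an API, as the note above describes, can be sketched with nothing but the Python standard library; the model, its weights, the feature names and the port are all invented for illustration:

```python
# Sketch of exposing a model as an HTTP API using only the standard
# library. The "model" and its weights are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Hypothetical model: weighted sum of two input features."""
    return 0.5 * features["income"] + 0.25 * features["tenure"]

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body of features and return a JSON score.
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks the process):
# HTTPServer(("", 8080), PredictHandler).serve_forever()
```

In practice a data science platform or cloud service handles the serving, scaling and authentication; the point is that the deployable artefact is just the `predict` function behind an HTTP contract.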
  15. For IT teams managing the systems that support models in production and data science environments, the ability to monitor and add resources as data science work expands  — while maintaining system availability — is essential. But that’s just one application. For IT teams managing the resources needed for every deployed model and data science environment across an entire company, a data science platform that offers cluster management features and the ability for IT to dictate the size of the resources made available to data science teams can go a long way toward achieving continuous operations.
  16. Which brings me on to tools Just as chemistry is not about the tubes but the process of experimentation. DataOps is not tied to a particular technology, architecture, tool, language or framework. However, some tools are better at supporting DataOps collaboration, orchestration, agility, quality, security, access and ease of use.
  17. Whenever choosing tools, it is best to never start with the tools themselves. I like to use the spine model by Trethewey, Roux and Perold. To decide on the tool you need to understand the practices you employ; to understand what practices to employ you need to define your principles; to define your principles you need to know your values; and to know your values you need to start with the needs you’re trying to fulfil. We have a set of clear DataOps Practices we want to employ, so we have a clear idea of what tools will be fit for purpose. http://spinemodel.info/explanation/introduction
  18. But first a bit of advice. You should avoid building your own anything or being on the bleeding edge. Any technology or tool that is really useful will end up being refined or commoditised and turned into a service. Let someone else find the bugs, be the beta tester or end up in a cul-de-sac. The other factor to take into account is Cost of Delay. It’s nearly always ignored in business cases. On paper it may be cheaper to build your own solution. However, the months, or years, you’re taking to do that are months, or years, you’re not benefiting from the solution and are handing to your competitors. And it always takes twice as long to build your own solutions, even after you’ve factored in that it’s going to take you twice as long as you think.
  19. Because one of our principles is that we want to make data cycles shorter and shorter, it's important that Data Scientists can self-serve not just the data but also the infrastructure, tools and packages.
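Self-service of packages, for example, can be as simple as keeping environment specifications under version control so any data scientist can recreate an identical environment on demand. A minimal sketch, assuming a conda-based workflow (the project name and package list are illustrative, not from the talk):

```yaml
# environment.yml - checked into the project repo alongside the code,
# so the whole team recreates the same environment with one command:
#   conda env create -f environment.yml
name: ds-project
channels:
  - conda-forge
dependencies:
  - python=3.9      # pin the interpreter version for reproducibility
  - pandas
  - scikit-learn
```

Pinning versions in a spec file like this removes a common hand-off to sys admins and keeps development and production environments aligned.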
  20. Modern cloud architecture makes it very easy to create data products rapidly, specifically the move from Infrastructure and Platform as a Service to Software as a Service and serverless architecture. That means you have no hardware or software to configure; you just bring your data and code, and all the scaling and optimisation is done for you. The other advantage is that you can use the same tools for development and production. You can also use the same data in development and production, as in the SaaS or serverless world there's no need for separation of environments. We're so convinced of the benefits that we're actually moving our data analytics stack out of AWS onto Google Cloud Platform. Here's an example of GCP reference architecture for big data, which isn't a million miles from our own; there's absolutely no infrastructure to manage within the environment. The other thing you can do is use the cloud as a centralised platform, which helps break down organisational barriers and makes it easier to enforce governance rules.
  21. Modern cloud takes care of the underlying tools, but you can add further levels of abstraction and self-service to the compute infrastructure and data pipeline. Data science platforms provide tools that enable teams to work faster and adopt the DataOps methodology very easily, from choosing the compute infrastructure and environments to run their code on, to automated version control, collaboration tools and one-click deployment to APIs and dashboards.
  22. The requirements for this type of platform haven't gone unnoticed; these are just some of the vendors we looked at before settling on Domino Data Labs. Each has its strengths and weaknesses, so which one is best depends on your use cases.
  23. There’s another positive side-effect of going down the DataOps route: you require fewer roles, due to self-service. There’s no need for specialist DevOps, Infrastructure Engineers, Sys Admins or DBAs, which reduces friction, hand-offs and bottlenecks. You’re left with just four key roles: Data Scientists, Data Engineers, Data Analysts (a much under-invested-in group, as everyone wants to be a Data Scientist) and the Line of Business (the stakeholders, and also those who will help integrate your Data Product into other applications).
  24. Worrying about Artificial Intelligence when you can’t even produce a sales report is not going to get you very far. Instead, you need to worry about being able to action data in alignment with the organisation’s strategy and goals.
  25. 80% of the battle is knowing what not to work on. You should work not on projects but on products. Products are in constant use by consumers and have direct customer and business benefit; the benefit scales with the number of customers who use them. A data product may be a machine learning model, a segmentation, a recommendation engine or a dashboard, and may be integrated into other products. Data products have an owner, and you get feedback that helps you improve them through iteration. They are not one-off, ad hoc pieces of insight that get filed away.
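The project-versus-product distinction can be sketched in code: unlike a one-off analysis, a product has an owner, a version, and a feedback loop that informs the next iteration. A minimal illustrative Python sketch (all names and values are hypothetical, not from the talk):

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A data product: owned, versioned, and improved through feedback."""
    name: str
    owner: str
    version: int = 1
    feedback: list = field(default_factory=list)

    def record_feedback(self, metric_value: float) -> None:
        # Close the loop: capture the measured impact of the current version.
        self.feedback.append((self.version, metric_value))

    def iterate(self) -> None:
        # Each release is a new version, informed by the feedback collected.
        self.version += 1

model = DataProduct(name="loan-propensity", owner="data-science")
model.record_feedback(0.12)   # e.g. measured uplift from this version
model.iterate()
print(model.version)  # prints 2
```

The point of the sketch is the shape, not the code: if a piece of work has no owner, no consumer and no feedback loop, it's a project destined for a filing cabinet, not a product.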
  26. Velocity is th
  27. We need to solve all the problems with Data Science today:
- Hamster Wheel Analytics – doing busywork for the organisation that makes us feel good because we're putting in a lot of effort, and that clients appreciate, but that is never going to move the needle.
- The work we do that's not repeatable because it was never documented.
- The Aimless Crash and Burn – exploring data to find magical insights without a clear objective or, worse, insights the rest of the business has no interest in.
- The Roadblock – work we do that has no route to the customers because it is blocked by corporate silos, IT, Security, lack of infrastructure, tools or willingness to integrate into an end product, and so remains on a laptop.
- The work we do that does make a customer impact, which we can't measure because the feedback loop was never closed.
  28. Instead, we can move to the DataOps world, what I like to call the Data Science Factory. It starts with alignment with the rest of the business's strategy to create options, and Agile delivery and collaboration to deliver them; rapid delivery of data products, because there is governance, trust in data, self-service and automation; and a path to the end consumer, with feedback to measure value for the next iteration.