Consumer Science & Product Development

Rochelle King
VP, User Experience & Product Services

Matt Marenghi
VP, User Interface Engineering
Netflix Overview

TV & Movie Enjoyment Made Easy

~26 Million Members

Over 800 Partner Products
Netflix & Consumer Science

“If you want to increase your success rate, double your failure rate.”
– Thomas Watson, Sr., founder of IBM
Goal: Customer Satisfaction
Measuring Success
Consumer Science

A/B testing
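At its simplest, A/B testing starts with deterministic member-to-cell assignment, so a given member always sees the same experience for the life of a test. A minimal sketch of one common approach (the hashing scheme and names are illustrative, not Netflix's actual allocator):

```python
import hashlib

def assign_cell(member_id: str, test_name: str, num_cells: int) -> int:
    """Deterministically map a member to one of num_cells test cells.

    Hashing member_id together with the test name keeps assignments
    stable within a test but independent across different tests.
    """
    digest = hashlib.sha256(f"{test_name}:{member_id}".encode()).hexdigest()
    return int(digest, 16) % num_cells

# A member always lands in the same cell for a given test.
cell = assign_cell("member-12345", "row-layout-test", 4)
assert cell == assign_cell("member-12345", "row-layout-test", 4)
```

Stable assignment matters because retention is measured over weeks; a member who bounced between experiences would contaminate every cell they touched.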
What “performs best”?
Choosing the Right Metrics

Core Metric: Retention
Proxy Metric: Hours Watched
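Retention moves slowly, so a faster-moving proxy such as hours watched is compared per cell; the proxy is only trusted insofar as it correlates with the core metric. A hedged sketch of the per-cell aggregation (record layout and field names are illustrative):

```python
from collections import defaultdict
from statistics import mean

# Each record: (member_id, cell, hours watched this period)
viewing = [
    ("m1", "control", 4.0), ("m2", "control", 2.5),
    ("m3", "cell1", 5.5), ("m4", "cell1", 6.0),
]

def hours_per_member(records):
    """Average the proxy metric (hours watched) within each test cell."""
    by_cell = defaultdict(list)
    for _, cell, hours in records:
        by_cell[cell].append(hours)
    return {cell: mean(h) for cell, h in by_cell.items()}

print(hours_per_member(viewing))  # {'control': 3.25, 'cell1': 5.75}
```

The per-cell averages are then judged against the core metric: a cell that lifts hours watched but not retention has optimized the proxy, not the goal.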
Start with a Hypothesis...

If we make a huge “play” button, people will watch more.

If we give people $1 every time they press “play”, retention will improve.

Showing more movies & TV shows will lead to more streaming and improved retention.
Determine the Variables

Showing more movies & TV shows will lead to more streaming and improved retention.

Depth vs. Breadth: more titles per row vs. more rows
Design the Test

Control: 25 rows x 75 titles
Cell 1: 25 rows x 150 titles
Cell 2: 31 rows x 75 titles
Cell 3: 31 rows x 150 titles
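The four cells form a 2x2 factorial design over the two variables (row count, titles per row), so each variable's effect can be read off independently. A minimal sketch of enumerating that grid (variable and cell names are illustrative):

```python
from itertools import product

ROWS = (25, 31)             # breadth: number of rows
TITLES_PER_ROW = (75, 150)  # depth: titles within each row

# Enumerate the full 2x2 grid; the (25, 75) corner is the control.
cells = [
    {"name": name, "rows": rows, "titles_per_row": titles}
    for name, (rows, titles) in zip(
        ("control", "cell1", "cell2", "cell3"),
        product(ROWS, TITLES_PER_ROW),
    )
]
```

Testing the full grid, rather than one variable at a time, also surfaces interaction effects, e.g. more rows may only help when each row is also deeper.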
Level the playing field
Data from real customers
Align to core metrics
Large scale concept testing
can provide general direction
Original PlayStation 3 UI
How can we get our customers to watch more?
Cell 1: Browsing more titles using a flexible menu system and hierarchy will lead to more viewing

Cell 2: A simple, flat interface which focuses on content will lead to more viewing

Cell 3: Separating navigation from content will guide our members to the content and lead to more viewing

Cell 4: A video-rich browsing experience will lead to more viewing
VOTE!

Cell 1: Hierarchy
Cell 2: Grid
Cell 3: Separation
Cell 4: Video
And the winner is...

Cell 1: Control
Cell 2: Grid
Cell 3: Separation
Cell 4: Video
Iterate...
Data can give you confidence in your decisions
Hypothesis

A cleaner UI which showcases the content will lead to more viewing.

Cell 0: Control
Cell 1: Clean (larger boxes, no titles on hover)
Results

Cell 0: Control
Cell 1: Clean
Roll Out! but...

“NO GOOD...it SucKs BIG TIME...plz change back”

“I am hoping that at least one person at Netflix with authority will put down the crack pipe...and go back to the old interface”

“I don’t like it, where is the sortable list? and I can’t stand the scroll it’s just wierd and stupid...”
Respond...
Today
Making Decisions
Dealing With Results

Roll it out
With consideration to the change’s effect on users

Move On
Polish won’t make it turn positive
When the World is Flat

Unsure of Value
Retest?
If there’s a specific concern, address it and consider retesting

Value Add Feature
Roll out? but...
- Ongoing tax
- Likely to constrain future innovation
Pitfalls

• A/B testing becomes a crutch for decision making
• Not getting a clear signal
• Too many variations
• Local maximum problem
• Declaring victory too soon
• Not knowing when to end a test
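“Declaring victory too soon” usually means acting on a difference that random noise could explain. One conventional guard, not specific to Netflix's tooling, is a two-proportion z-test on the retention rates of two cells:

```python
from math import sqrt
from statistics import NormalDist

def retention_z_test(retained_a, n_a, retained_b, n_b):
    """Two-proportion z-test: is the retention difference beyond noise?

    Returns (z statistic, two-sided p-value) under the pooled-variance
    normal approximation; only trustworthy for reasonably large n.
    """
    p_a, p_b = retained_a / n_a, retained_b / n_b
    pooled = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# A 1-point retention lift on 10k members per cell: check p before celebrating.
z, p = retention_z_test(9_100, 10_000, 9_000, 10_000)
```

Peeking at such a test repeatedly while it runs inflates false positives, which is exactly the “not knowing when to end a test” pitfall: the stopping rule should be fixed before the test starts.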
Culture of Consumer Science

Approach + People
Fostering the Culture

• Universally embraced
• Common vocabulary
• Be disciplined
• Share results broadly

People Matter

• Humble
• Focused
• Data-driven
• Curious about business
“We are what we repeatedly do. Excellence, then, is not an act but a habit.”
– Aristotle
Questions?
   Rochelle King - roking@netflix.com
 Matt Marenghi - mmarenghi@netflix.com




PS - Interested in learning more firsthand?
We’re hiring designers and engineers!
END

Story boards and shot lists for my a level piece
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 

Netflix Consumer Science and A/B Testing

Editor's Notes

  2. ROCHELLE - introduction. MATT - introduction.
  4. At Netflix, we strive to delight our customers by making it as easy as possible to find and watch movies and TV shows.
  5. 26M global streaming members.
  6. In the past two years we've developed relationships to get built into over 800 partner products. We are on major game consoles (Wii, PS3, Xbox) and mobile devices (iPad, Android, iPhone), as well as DVD players, smart TVs, and home theaters.
  7. The streaming service has allowed us to change the way people use our service: much more mobile, more flexible, and instant. Making it as easy as possible to get to is important, which is why a key part of the strategy has been to get on as many devices as possible: internet-connected TVs, gaming consoles.
  8. Mobile - tablets, phones.
  9. PCs & Macs.
  10. Started in the United States.
  11. Expanded to Canada in 2010.
  12. Latin America in Sept 2011.
  13. UK in Jan 2012; next territory in Q4 of 2012.
  14. At Netflix, we make most of our product decisions using “consumer science”.
  15. When building a product, you need to be clear on what your goal is. Netflix is a consumer product, so customer satisfaction is the primary goal that drives most product innovation. We're also a subscription business, and we believe that satisfied customers will be more likely to renew their subscriptions and retain better.
  16. We're a data-driven organization, so it's important for us to understand and measure whether or not the new product features we're rolling out are making a positive impact on our customers. Understanding how we measure success needs to be shared across the entire organization.
  17. We use the term “Consumer Science” to capture how we measure success. We want to gather as much information as possible, directly from our customers, to understand what is or isn't working for them. Consumer Science can be made up of many components:
  - customer surveys
  - hard data (demographics, % hours watched, etc.)
  - qualitative feedback directly from consumers via focus groups and usability testing
  - A/B testing, or split testing, where you give your customers a few experiences that are slightly different from each other and see which one performs best
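To make the split-testing idea concrete, here is a minimal sketch of how members might be deterministically bucketed into test cells. This is an illustration only, not Netflix's actual allocation system; the cell names, test name, and hashing scheme are all assumptions.

```python
# Minimal sketch of split-test bucketing (illustrative, not Netflix's system).
import hashlib

CELLS = ["control", "cell_1", "cell_2", "cell_3"]  # hypothetical cell names

def assign_cell(member_id: str, test_name: str) -> str:
    """Deterministically bucket a member into a test cell.

    Hashing (test_name, member_id) keeps a member's assignment stable
    across sessions, and independent across concurrently running tests.
    """
    digest = hashlib.sha256(f"{test_name}:{member_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(CELLS)
    return CELLS[bucket]

# The same member always lands in the same cell for a given test:
print(assign_cell("member_42", "big_play_button"))
```

Hash-based assignment is one common design choice because it needs no stored lookup table, yet every service that sees the member renders the same experience.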
  22. The entire team needs to be on the same page about what “performs best” means.
  23. It's important to choose the right metrics. For Netflix, as a subscription business, RETENTION is the core metric that we want to measure in our tests. Anything that we test in our product should be with the intent of improving retention.
  24. However, retention can be hard to measure, or take a long time to measure. Therefore, it's important to develop leading indicators, or proxy metrics. Hours watched is one of our proxy metrics. A customer who watches 4 hours of Netflix a week will be more likely to stick around as a customer than someone who is watching only 1 hour a month. Generally speaking, if they're watching more Netflix, then they're getting more value from our service and are more likely to retain.
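As a rough illustration of a proxy metric, the sketch below aggregates mean hours watched per member from a made-up event log; this per-cell aggregate can be compared across test cells long before retention numbers arrive. The event format and function name are assumptions for the example.

```python
# Illustrative proxy-metric aggregation: mean hours watched per member.
from collections import defaultdict

def mean_hours_per_member(events):
    """events: iterable of (member_id, hours_watched) tuples."""
    totals = defaultdict(float)
    for member_id, hours in events:
        totals[member_id] += hours
    return sum(totals.values()) / len(totals) if totals else 0.0

# Hypothetical weekly viewing log for three members:
events = [("a", 2.0), ("a", 1.5), ("b", 0.5), ("c", 4.0)]
print(mean_hours_per_member(events))  # (3.5 + 0.5 + 4.0) / 3
```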
  25. Every test starts with a hypothesis. Why do we think what we're going to do is actually going to make a difference for the business? Some ideas might sound like they'll make a difference to the core metric, but you need to ensure that they will actually help the overall business (e.g. $1 per play - not good for business).
  28. We'll use this last hypothesis as an example to walk through how A/B testing works.
  29. What variables will you test to determine if your hypothesis is sound or not? At Netflix, our movies and TV shows are displayed in rows. Each row represents a different genre or category.
  30. If the hypothesis is about “showing more movies & TV shows” up front, then you can either: 1) add more titles per row (provide more depth within each genre/category), or 2) add more rows and categories (provide more breadth in the catalog for customers to browse). Think about dependent & independent variables, control, significance...
  33. Every test starts with a control - usually the experience that is already out there. Then we make several different experiences (test cells) which hopefully give us a better understanding of how much impact the variables that we're testing have on our customers. It's a test/experiment - keep in mind that the ideal execution can be 2X as effective as a prototype, but not 10X. Once the test is run, analyze your data. Users will answer the question for you of which variables make an impact. We can learn a lot by understanding why our test succeeded or failed. In this test, adding breadth lifted viewing.
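One common way to read out a test like this, sketched here with illustrative numbers (not Netflix data), is a two-proportion z-test comparing retention in a test cell against the control:

```python
# Two-proportion z-test for a retention readout (illustrative numbers).
import math

def two_proportion_z(retained_a, n_a, retained_b, n_b):
    """z statistic for H0: the two retention rates are equal."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p_pool = (retained_a + retained_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: control retains 8,000/10,000 members; cell retains 8,200/10,000.
z = two_proportion_z(8000, 10000, 8200, 10000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

A 2-point lift on a sample this size comes out clearly significant; the same lift on a few hundred members per cell would not, which is one reason cell sizing matters.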
  40. We like A/B testing because: 1) It levels the playing field - many ideas (from anyone) can be tested, and it helps democratize product development. It eliminates the problem of only building the idea of the person who yells the loudest, or the “highest paid opinion”. 2) It gives us data from real customers - the best and most direct way to understand what will work with our customers. 3) It aligns to core metrics - it keeps the entire team (design, development, product management) on the same page about what we're measuring and thinking about how to move the business forward.
  43. A/B testing can be used to test radically different ideas as well as smaller iterative ones.
  44. This was the original TV UI for PlayStation 3, before we had the ability to do A/B testing and dynamically update just the UI with server-delivered UI code. It was the launch of our PS3 downloadable application in late 2009, which replaced the original disc-based version, that introduced our use of the open source browser engine WebKit. Using WebKit in our application as the UI engine meant we could start doing true A/B testing of the UI in the same way that we had been doing for years on our netflix.com website on PC/Mac.
  45. This experience served as our control. It was already available on a small number of smart TVs. Its main elements: a) a menu structure for selecting different categories - the menu allowed for introducing navigation hierarchy with sub-lists, allowing deeper drill-down into niches of the catalog, for example Romantic Comedies; b) this hierarchical browse exposed more of the catalog via browsing; c) more boxshots were available on screen at any one time.
  46. This experience strived for simplicity. No hierarchy, no menus. Simple navigation within a grid of titles. Horizontal rows for individual categories/lists, with a title always receiving focus and a panel along the right side providing a rich amount of metadata to inform one's decision.
  47. This experience strived to separate navigation from the content. A guided experience through a set of menus. Once a category or sub-category was selected, the navigation got out of the way and the focus was on titles, and on surfacing titles similar to whatever title has focus.
  48. This experience focused on the power of playing video to help inform one's decision. The hypothesis being that perhaps customers can more easily choose what to watch through the act of watching. A title is always playing fullscreen, with a browse experience as an overlay over the video. Selecting a different title would result in that title playing, while allowing the customer to continue browsing for other potential titles - similar to a channel-surfing type of experience.
  49. Which one do you think performed the best?
  50. The Simple Grid UI won compared to our control. It's also worth noting that cells 3 & 4 were negative compared to the control. Internally, a lot of people were excited about the video cell and were confident that it would perform well. If we had simply rolled out that experience without first testing it, we would have done a disservice to our customers.
  51. Once you have a general direction, you can take it and test iterations on it so that you can improve its performance even more.
  62. An example from our website experience to illustrate another benefit of A/B testing.
  63. The hypothesis is that removing a lot of the clutter and affordances in the UI will make it easier for customers to discover great movies and TV shows to watch.
  64. The ‘control’ was the default UI at the time, and cell 1 is a cleaned-up version of the UI:
  - box shots made larger to showcase the content - so large that we could remove the titles from above all the box shots
  - a number of items moved into the hover state (play buttons, stars, etc.)
  74. The “clean” design won and increased streaming hours.
  76. Naturally, we were excited to roll it out. However, the reaction we got from customers posting on our blog was VERY negative. When you get such emotional feedback (even if it's from a vocal minority), it's hard not to question the decision you made and second-guess yourself. But the data can give you confidence that there was something in your design that was working better for customers. Some folks seemed to be asking for a full rollback to the original site. However, some (still negative) gave us useful insight about the specifics of what they didn't like (scrolling and the missing sortable list).
  82. Because we controlled for different variables, our data combined with the customer feedback allowed us to discern what changes we should make to the features we rolled out, while maintaining many of the positive benefits we saw as well.
  83. The site today retains much of what we originally rolled out - but without A/B testing, there's a chance that we would have rolled back all our changes and not been able to move the product forward in a meaningful way.
  85. Positive result: roll it out, but keep in mind that for existing customers there can be a “change effect” which might have a negative impact. Negative result: kill the test; resist the urge to revisit it and polish it, thinking it will turn a negative result into a positive one - too costly and unlikely to work.
  91. We often see tests with a “flat” result. It's important to have the discipline to insist that any product change that doesn't change metrics in a positive direction should be reverted. Even if the change is "only neutral" and you really, really, really like it better, force yourself (and your team) to go back to the drawing board and try again.\n
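The roll-out/kill/revert rule above can be sketched as a simple significance check on a retention-style proportion. This is an illustrative sketch with made-up cell sizes and a standard two-proportion z-test, not Netflix's actual analysis pipeline:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic comparing retained members in
    control cell A vs. test cell B (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def decide(z, threshold=1.96):
    """Only a significant positive lift ships; flat or negative
    results get reverted, per the discipline described above."""
    return "roll out" if z > threshold else "revert"

# Hypothetical cell sizes and retained-member counts
z = two_proportion_z(conv_a=8_000, n_a=10_000, conv_b=8_200, n_b=10_000)
print(decide(z))  # a clear lift in this made-up data ships
```

The key point is that "neutral" falls on the revert side of the rule: the burden of proof is on the change, not the status quo.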
99. We caution ourselves not to let A/B testing become a crutch for making decisions. Not every idea is worth the cost and effort to test. Focus testing on the ideas that are likely to move the business forward and be measurable through core metrics. This helps eliminate mediocre ideas, weak hypotheses, and tests that won't move the needle.
Clear Signal - We tend to focus testing on new users because they don't come with preconceived notions of how to use the product, and the largest audience for us is the one that we have yet to get.
Variations - It's costly and time-intensive to test every single variation in isolation. Test cell design should be thoughtful, and each cell or variation should have a hypothesis behind it.
Local Maximum - Free yourself to focus on the big-bet/wild/unpopular ideas AND the smaller, incremental, sound-hypothesis ideas. Know when to “pivot” because you've maxed out a specific angle.
Early Victory - Have the discipline to let a test run its expected course before getting too excited by very early results. Likewise, if a test has run its planned course (e.g. 2 months) and the signal is flat or negative, don't be stubborn and let it run indefinitely hoping it will magically turn positive. It's unlikely, and having the maturity to know when to call it a day and move on to testing other great ideas is important.
105. Consumer Science is successful at Netflix because it's part of our DNA, and something that we've evolved over many, many years. Everyone who works at Netflix (design, engineering, product management BUT ALSO legal, HR, recruiting, finance) understands what A/B testing is and how it's leveraged.
106. Two parts make this successful:
1) the day-to-day practice of A/B testing
2) the people that we hire (again, in ALL parts of the company)
108. FOSTERING THE CULTURE:
Universally Embraced - From the executive team to all the individuals working on execution.

Vocabulary - Words like “hypothesis” and “core metric” are commonly used to explain what we do. This keeps everyone on the same page and makes product discussions and brainstorms more effective and less opinion-driven (e.g. “I'd hypothesize...”, “I believe the data might show...”, “How are we going to measure X?” rather than “I think users want X”).

Discipline - With lots of exciting ideas, it's easy to want to make exceptions and just roll things out (but if we had done that with the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and it is not viewed as optional. It's the default approach to making decisions when a hypothesis is testable. Decisions in the absence of A/B test results are the exception, because without them you just don't know whether a decision positively or negatively affected your business. If it CAN be tested, it SHOULD be tested.

Share Results - Habitual sharing, context setting, and broad communication of test results, company-wide. It reinforces that many decisions are influenced by test results, helps everyone learn from what worked and what didn't, and allows all of us to hone our consumer instincts.

PEOPLE:
Humble - An empirical focus keeps us humble. Most of the time you don't know exactly what your customer wants (even if you're an expert in your field). Quick feedback from testing sets us straight and forces us to optimize for the customer. You WILL be wrong at some point in your career, and you need to be able to accept that (no egos).

Focused - With the experimental nature of testing, you need to know what to focus on and when: how much effort to put into something to make it work well enough to get a good signal, and what polish you can put off until it goes into production (if it goes into production).

Data-driven - EVERYONE needs an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.).

Curiosity about the business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy gives you a more holistic view of how your day-to-day work affects the company at large, and lets you participate better in testing by crafting better ideas for what to test.
  109. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  110. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  111. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  112. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  113. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  114. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
Know how much effort to put into something to make it work well enough to get a good signal, and know what polish you can put off until it goes into production (if it goes into production)\nData-driven - EVERYONE needs to have an appreciation for the data (design, engineering, PM) and a basic understanding of how our data analysis works (statistical significance, etc.)\nCuriosity about business - Business acumen and business savvy are important because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy allows you to have a more holistic understanding of how your day to day work is impacting the company at large. (And will allow you to participate better in testing by helping you craft better ideas for what to test).\n\n\n\n\n\n
  115. FOSTERING THE CULTURE:\nUniversally Embraced - from the executive team to all the individuals that are working on execution\n\nVocabulary - words like: “hypothesis” and “core metric” are commonly used to explain what we do. keeps everyone on the same page. makes product discussions, brainstorms more effective and less opinion driven with no reference to needed data to back it up (e.g. “I’d hypothesize...”, “I believe the data might show..” vs. “I think users want X”, ‘how are we going to measure X?’)\n\nDiscipline - with lots of exciting ideas, it’s easy to want to make exceptions and just roll things out (but if we had done that the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and not viewed as optional. It’s the default approach to making decisions when a hypothesis is testable. Decisions in absence of A/B test results are the exception, because you just don’t know if that decision positively/negatively affected your business. If it CAN be tested, it SHOULD be tested.\nShare Results - Habitual sharing/context setting/broad communication, company-wide, of test results. It helps reinforce that many decisions are influenced by test results. Also, it helps everyone learn from what worked and what didn’t. Allows all of us to hone our consumer instincts.\n\nPEOPLE:\nHumble - Empirical focus keeps us humble - most of the time you don’t know exactly what your customer wants (even if you’re an expert in your field). Quick feedback from testing set us straight, forces us to optimize for the customer. You WILL be wrong at some point in your career and you need to be able to accept that (no egos).\nFocused - With the experimental nature of testing, you need to be able to know what to focus on and when. 
  116. FOSTERING THE CULTURE:
Universally Embraced - From the executive team to every individual working on execution.

Vocabulary - Words like "hypothesis" and "core metric" are commonly used to explain what we do. A shared vocabulary keeps everyone on the same page and makes product discussions and brainstorms more effective and less opinion-driven, because claims come with the data needed to back them up (e.g. "I'd hypothesize...", "I believe the data might show...", "How are we going to measure X?" instead of "I think users want X").

Discipline - With lots of exciting ideas, it's tempting to make exceptions and just roll things out (but if we had done that with the video-based TV UI, we would have done the wrong thing). A/B testing is not a selectively used tool, and it is not viewed as optional. It's the default approach to making decisions when a hypothesis is testable. Decisions made in the absence of A/B test results are the exception, because without a test you just don't know whether a decision helped or hurt your business. If it CAN be tested, it SHOULD be tested.

Share Results - Habitual, company-wide sharing and context-setting around test results. It reinforces that many decisions are influenced by test results, helps everyone learn from what worked and what didn't, and lets all of us hone our consumer instincts.

PEOPLE:
Humble - An empirical focus keeps us humble: most of the time you don't know exactly what your customer wants (even if you're an expert in your field). Quick feedback from testing sets us straight and forces us to optimize for the customer. You WILL be wrong at some point in your career, and you need to be able to accept that (no egos).

Focused - Given the experimental nature of testing, you need to know what to focus on and when: how much effort to put into something to make it work well enough to get a good signal, and what polish can wait until it goes into production (if it goes into production).

Data-driven - EVERYONE (design, engineering, PM) needs an appreciation for the data and a basic understanding of how our data analysis works (statistical significance, etc.).

Curiosity about business - Business acumen and savvy matter because all of our tests are designed to make an impact on the business. Understanding the fundamental business strategy gives you a more holistic view of how your day-to-day work affects the company at large, and helps you craft better ideas for what to test.
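The notes above treat statistical significance as table stakes for anyone reading test results. As a hedged illustration only (the deck does not describe Netflix's actual analysis pipeline, and the cell sizes and retention counts below are invented), a minimal two-proportion z-test for comparing retention between a control cell and a test cell might look like:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two proportions.

    conv_* are counts of retained members, n_* are cell sizes.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical cells: control retains 8,000 of 10,000; test retains 8,200 of 10,000.
z, p = two_proportion_z_test(8000, 10000, 8200, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value here only says the difference is unlikely under chance; whether a 2-point retention lift justifies shipping is still a product judgment against the core metric.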
  117.
  118.
  119.
  120.
  121.
  122.
  123.