Tech companies and technologists need to own building responsible AI. However, the majority of documents and guidelines are still at the policy and B2B level rather than at the practitioner level. This talk aims to start bridging that gap and to provide ML practitioners and leaders with tools to inject ethical considerations into their day-to-day process.
Building Responsible AI - London Oct 2019
1. Building Responsible AI
Ari Font Llitjós, PhD. @quicola
Cortex ML Platform + NYC Site Lead @Twitter
Previously at IBM (2012 – July 2019)
October 16, 2019 O’Reilly AI Conference: AI, accelerated, London
2. Outline
• Why do we need to worry about this?
• What is required?
• Who do we need to enable?
• How do we enable them? Frameworks and tools
Building Responsible AI @quicola #OReillyAI
3. AI algorithms are starting to be ubiquitous
https://www.prolificlondon.co.uk/news/tech/2019/01/met-police-spent-£200k-controversial-facial-recognition-tech, January 2019, London
4. DNNs are still hard to interpret and fully understand
“Using deep neural network with small dataset to predict material defects”
https://www.sciencedirect.com/science/article/pii/S0264127518308682
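One practical way practitioners probe an opaque model is model-agnostic permutation importance: shuffle one input feature and measure how much accuracy drops. A minimal, dependency-free sketch, where the toy "model" and data are invented for illustration (in practice the model would be a trained DNN):

```python
import random

# Toy stand-in for a black-box model: predicts 1 when feature 0
# exceeds a threshold and ignores feature 1 entirely.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    col = [r[feature] for r in rows]
    rng.shuffle(col)
    shuffled = [r[:feature] + [v] + r[feature + 1:]
                for r, v in zip(rows, col)]
    return base - accuracy(shuffled, labels)

# Invented data: feature 0 drives the label, feature 1 is noise.
rows = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9],
        [0.1, 0.3], [0.7, 0.2], [0.3, 0.8]]
labels = [1, 1, 0, 0, 1, 0]

# Shuffling feature 0 typically hurts accuracy; feature 1 never does.
imp0 = permutation_importance(rows, labels, 0)
imp1 = permutation_importance(rows, labels, 1)
```

Checks like this don't explain a DNN's internals, but they give practitioners a quick, model-agnostic signal about which inputs a prediction actually depends on.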
5. 2019 Edelman AI Survey
10. Responsible AI systems need to
• Preserve human autonomy
• Provide transparency
• Ensure safety
• Ensure fairness
12. Responsible AI Framework
1 Assemble cross-functional teams
2 Develop empathy for people using your product / service / platform
14. User Research
Understand the current needs (and frustrations) of those who actually are or will be using <X> to do their job on a daily basis, i.e. the target users.
15. User Research Questions
• What are users using / doing now?
• How are they using it?
• What are their goals and needs as well as their pain points?
• What’s preventing them from being effective?
• What would be the single most impactful improvement to their workflow?
• What could lead anyone to use the product in a way that was not intended?
20. Responsible AI Framework
1 Assemble cross-functional teams
2 Develop empathy for people using your product / service / platform
3 Develop empathy for people building your product / service / platform
21. Who is building AI?
22. People building AI
Machine Learning slice
ML platform engineers
ML engineers
ML researchers
Data scientists
ML modelers
23. Can there ever be too much icing?
24. Can there ever be too much icing?
UR can be done at all levels
25. While everybody can do some User Research…
User Research designers are trained in quantitative and qualitative User Research methods.
28. Responsible AI Framework
1 Assemble cross-functional teams
2 Develop empathy for people using your product / service / platform
3 Develop empathy for people building your product / service / platform
4 Enable people building your product / service / platform to include ethical considerations
29. IBM's Everyday Ethics for AI guide
https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
48. Key take-aways
• Ethics of AI is a new muscle we all need to develop.
• Everyone in tech is accountable and needs to own building responsible and ethical AI systems.
• Cross-functional teams are key. Including non-traditional disciplines ensures that tech builders have a broader perspective from inception.
• Introduce User Research to your teams so that they can build empathy for their users and better understand their needs and pain points.
• Define accountability for your team and make it actionable. Develop metrics for accountability.
50. What can you do tomorrow?
• Share this talk with your (leadership) team.
• Start by developing your own values (a Hippocratic Oath).
• Define accountability for your team and your role, and make it actionable: e.g. five things to check for, reviewed by other team members.
• Explore and adopt lightweight User Research tools such as stakeholder maps and empathy maps to better understand your users.
• Explore and apply Responsible AI frameworks and toolkits with your teams.
51. Selected Resources
• IBM Design
• https://www.ibm.com/design/thinking
• https://www.ibm.com/design/research/
• https://www.ibm.com/design/ai/team-essentials
• IBM's Everyday Ethics for AI guide — https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
• Design ethically toolkit — https://www.designethically.com/toolkit
• IBM Research: AI fairness 360 — https://aif360.mybluemix.net/
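Toolkits like AI Fairness 360 (listed above) operationalize fairness checks for you. As a minimal, dependency-free sketch of two group-fairness metrics such toolkits compute (this is not the AIF360 API; the decision data below is invented for illustration):

```python
# 1 = favorable outcome (e.g. loan approved), split by a protected attribute.
def selection_rate(outcomes):
    """Fraction of individuals receiving the favorable outcome."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(privileged, unprivileged):
    """Rate(unprivileged) - Rate(privileged); 0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Rate(unprivileged) / Rate(privileged); the '80% rule' flags < 0.8."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical model decisions for two groups.
priv = [1, 1, 1, 0, 1, 1, 0, 1]    # 75% favorable
unpriv = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% favorable

spd = statistical_parity_difference(priv, unpriv)  # -0.375
di = disparate_impact(priv, unpriv)                # 0.5, fails the 80% rule
```

Metrics like these make "ensure fairness" actionable for the practitioner: they can be computed on every model version and tracked as accountability metrics, which is what the toolkits above automate at scale.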
52. Building responsible AI is everybody's responsibility and a long-term investment.
2019 Edelman AI Survey
Thanks!