This document discusses the future of moral persuasion in technologies such as games, augmented reality, AI bots, and self-trackers. It predicts that future technologies will include humanlike AI bots that hold moral conversations with users, prompting them to reflect on their values and on social problems. AI may also engage users in moral reasoning before they make decisions. Companies may attempt to steer users' behavior, and technologies may be legally required to disclose their ethical codes, moral values, and data practices.
The Future of Moral Persuasion in Games, AR, AI Bots, and Self Trackers by Sherry Jones (April 18, 2019)
1. The Future of Moral
Persuasion in Games,
AR, AI Bots, and
Self Trackers
2. HELLO!
I am Sherry Jones
Philosophy and Games Studies SME and Instructor,
Rocky Mountain College of Art and Design.
Steering Committee Board Member, International Game
Developers Association - Learning, Games, and
Education Special Interest Group (IGDA LEG).
Judge, Software Information Industry Association
(SIIA) CODiE Awards - Games, Virtual Reality,
Augmented Reality, and Gamification in Education.
Twitter @autnes
Bio http://bit.ly/sherryjonesbio
Slides http://bit.ly/futureethics
4. Morality and Ethics
⊗ Morality - An individual’s personal sense of right vs. wrong values, or
understanding of right actions vs. wrong actions (this moral
sense is shaped by familial, cultural, religious, political, and
social factors in one’s upbringing).
⊗ Ethics - A system of moral codes that are applied universally
to everyone in a society for the purpose of ensuring the
survival of that society.
⊗ All technologies that influence user actions/behaviors are
moral and political in nature.
5. 2.
Ethical Problems in the
Technology Sector
Web, Facial Recognition, Autonomous Weapons,
Killer Robots, AI, Trackers.
6. We [have] demonstrated that the Web failed instead of
served humanity, as it was supposed to have done, and
failed in many places…. The increasing centralization of
the Web ended up producing—with no deliberate action
of the people who designed the platform—a
large-scale emergent phenomenon which is
anti-human.
-- Tim Berners-Lee, creator of the World Wide Web.
7. Surveillance Capitalism
⊗ Technologies invade privacy by surveilling, tracking, and
collecting user data (using EULA legalese as justification).
⊗ Technologies sell user data to third parties, who then create
user-targeted advertising campaigns to maximize profit.
⊗ Technologies reflect the biases of their developers, judging
users’ social, moral, and monetary worth and discriminating accordingly.
⊗ Third parties, such as health insurance companies, can use
data to pry into personal lives and determine whether to
punish or reward users for performing (un)expected behaviors.
9. As companies and governments deploy these A.I.
technologies, researchers are also realizing that some
systems are woefully biased. Facial recognition services, for
instance, can be significantly less accurate when trying to
identify women or someone with darker skin. Other systems
may include security holes unlike any seen in the past.
Researchers have shown that driverless cars can be fooled
into seeing things that are not really there.
--- Is Ethical AI Even Possible? by Cade Metz, NY Times
(March 1, 2019).
10. Employees at Clarifai worry that the same technological
tools that drive facial recognition will ultimately lead to
autonomous weapons — and that flaws in these tools will
open a Pandora’s box of problems. “We in the industry know
that technology can be compromised. Hackers hack. Bias is
unavoidable,” read the open letter to Mr. Zeiler.
--- Is Ethical AI Even Possible? by Cade Metz, NY Times
(March 1, 2019).
25. AI Ethics from Companies and Education
⊗ Google’s AI Principles
⊗ Microsoft’s AI Ethics
⊗ Embedded EthiCS: Bringing Ethical Reasoning Into the
Computer Science Curriculum
28. Players in Eastern-cluster countries were more likely than
those in the Western and Southern countries to kill a young
person and spare an old person (represented, in the game, by
a stooped figure holding a cane).
Players in Southern countries were more likely to kill a fat
person (a figure with a large stomach) and spare an athletic
person (a figure that appeared mid-jog, wearing shorts and a
sweatband).
--- Findings from MIT's Moral Machine (January 24, 2019).
29. Players in countries with high economic inequality (for
example, in Venezuela and Colombia) were more likely to
spare a business executive (a figure walking briskly, holding
a briefcase) than a homeless person (a hunched figure with a
hat, a beard, and patches on his clothes).
In countries where the rule of law is particularly strong—like
Japan or Germany—people were more likely to kill
jaywalkers than lawful pedestrians.
--- Findings from MIT's Moral Machine (January 24, 2019).
31. Google faced intense backlash soon after announcing that
one of the eight council members was Kay Coles James, the
president of the Heritage Foundation, a conservative
thinktank with close ties to Donald Trump’s administration.
James has a history of fighting against trans rights and LGBT
protections, has advocated for Trump’s proposed border wall,
and has taken a vocal stance against abortion rights.
--- Google Scraps AI Ethics Council After Backlash: 'Back to
the Drawing Board' (April 4, 2019).
33. The composition of the HLEG AI group is part of the problem:
it consisted of only four ethicists alongside 48 non-ethicists
– representatives from politics, universities, civil society,
and above all industry. That's like trying to build a
state-of-the-art, future-proof AI mainframe for political
consulting with 48 philosophers, one hacker and three
computer scientists (two of whom are always on vacation).
--- Ethics Washing Made in Europe by Thomas Metzinger
(April 8, 2019).
34. Because industry acts more quickly and efficiently than
politics or the academic sector, there is a risk that, as with
“Fake News”, we will now also have a problem with fake
ethics, including lots of conceptual smoke screens and
mirrors, highly paid industrial philosophers, self-invented
quality seals, and non-validated certificates for “Ethical AI
made in Europe”
--- Ethics Washing Made in Europe by Thomas Metzinger
(April 8, 2019).
36. Prediction #1
Future of moral persuasive design in
Games, AR, AI Bots, and Self Trackers
will include humanlike AIs, conducting
“moral conversations” with users to
reflect on moral values and question
whether those values can or cannot
help solve social problems.
37. AI Chatbot: Laozi
Converse with Laozi, the chatbot, to understand his moral
philosophy.
38. Prediction #2
AI will engage users in moral
reasoning, listing all possible choices
for a situation, prior to the users
making a decision.
40. Prediction #3
Companies may attempt to control
users’ moral behavior to prevent
future problems for users.
Ex. A user types an angry/abusive
letter in Google Docs. An AI bot
appears and asks the user whether
sending it is a good idea. The AI
also warns the user of the potential
legal consequences of their actions.
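To make this prediction concrete, here is a minimal, hypothetical sketch of such a bot's core check. The word list and wording are invented for illustration only; a real system would use a trained toxicity classifier rather than keyword matching.

```python
from typing import Optional

# Hypothetical word list for illustration only; a production system
# would rely on a trained toxicity classifier, not keyword matching.
ABUSIVE_WORDS = {"idiot", "stupid", "hate", "worthless"}

def moral_check(text: str) -> Optional[str]:
    """Return a warning message if the draft looks hostile, else None."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = words & ABUSIVE_WORDS
    if hits:
        return ("Your draft contains hostile language ("
                + ", ".join(sorted(hits))
                + "). Are you sure you want to send it? Abusive messages "
                "can carry legal consequences.")
    return None

print(moral_check("You are a stupid idiot and I hate this."))
print(moral_check("Thank you for your thoughtful reply."))
```

Even this toy version shows the ethical tension the prediction raises: the check runs on private drafts before the user has chosen to send anything.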
41. Prediction #4
Technologies will be legally required
to reveal their own ethical codes,
moral values, and past practices,
when users question the design of
the technologies.
Ex. Tinder directly tells the user that
it is the company’s policy to evaluate
the attractiveness of the user based
on their photo and wealth.
43. Prediction #5
Quantified Self Technologies will be
legally required to tell users how
their data will be used, who will get
access to their data, and the
consequences of sharing the data.
Ex. FamilyTreeDNA tells its users
that it may need to hand over users’
DNA data to government entities,
and that collected data is not private.
44. FamilyTreeDNA, an early pioneer of the rapidly growing market for consumer
genetic testing, confirmed late Thursday that it has granted the Federal Bureau
of Investigation access to its vast trove of nearly 2 million genetic profiles. The
arrangement was first reported by BuzzFeed News.
Concerns about unfettered access to genetic information gathered by testing
companies have swelled since April, when police used a genealogy website to
ensnare a suspect in the decades-old case of the Golden State Killer. But that
site, GEDmatch, was open-source, meaning police were able to upload
crime-scene DNA data to the site without permission. The latest arrangement
marks the first time a commercial testing company has voluntarily given law
enforcement access to user data.
--- Major DNA Testing Company Sharing Genetic Data With
the FBI By Kristen V Brown (February 1, 2019).
45. THANKS!
Twitter @autnes
Bio http://bit.ly/sherryjonesbio
Slides http://bit.ly/futureethics
46. CREDITS
Special thanks to all the people who made and
released these awesome resources for free:
⊗ Presentation template by SlidesCarnival
⊗ Photographs by Unsplash