On manipulation by emotional AI: UK adults’ views and governance implications

Research output: Contribution to journal › Article › peer-reviewed

Standard

On manipulation by emotional AI: UK adults’ views and governance implications. / Bakir, Vian; Laffer, Alex; McStay, Andrew et al.
In: Frontiers in Sociology, Vol. 9, 1339834, 07.06.2024.

Research output: Contribution to journal › Article › peer-reviewed

APA

Bakir, V., Laffer, A., McStay, A., Miranda, D., & Urquhart, L. (2024). On manipulation by emotional AI: UK adults’ views and governance implications. Frontiers in Sociology, 9, Article 1339834. https://doi.org/10.3389/fsoc.2024.1339834

Vancouver

Bakir V, Laffer A, McStay A, Miranda D, Urquhart L. On manipulation by emotional AI: UK adults’ views and governance implications. Frontiers in Sociology. 2024 Jun 7;9:1339834. doi: 10.3389/fsoc.2024.1339834

RIS

TY - JOUR

T1 - On manipulation by emotional AI

T2 - UK adults’ views and governance implications

AU - Bakir, Vian

AU - Laffer, Alex

AU - McStay, Andrew

AU - Miranda, Diana

AU - Urquhart, Lachlan

PY - 2024/6/7

Y1 - 2024/6/7

N2 - With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion (“emotional AI”), attention is turning to its capacity for manipulating people, relating to factors impacting on a person’s decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people’s views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public’s expectations. Addressing this, we ascertain UK adults’ perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented “emotoys” (where the toy responds to the child’s facial and verbal expressions). In both these settings, participants express concerns that emotion profiling covertly exploits users’ cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people’s capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.

AB - With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion (“emotional AI”), attention is turning to its capacity for manipulating people, relating to factors impacting on a person’s decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people’s views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public’s expectations. Addressing this, we ascertain UK adults’ perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented “emotoys” (where the toy responds to the child’s facial and verbal expressions). In both these settings, participants express concerns that emotion profiling covertly exploits users’ cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people’s capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.

U2 - 10.3389/fsoc.2024.1339834

DO - 10.3389/fsoc.2024.1339834

M3 - Article

VL - 9

JO - Frontiers in Sociology

JF - Frontiers in Sociology

SN - 2297-7775

M1 - 1339834

ER -