Electronic versions
Documents
- Evidence submitted to DCMS
Final published version, 228 KB, PDF document
Invited Submission by Vian Bakir & Andrew McStay to DCMS Online Harms and Disinformation Sub-Committee Inquiry into Misinformation and Trusted Voices.
False information proliferates online, despite years of multi-stakeholder efforts to quell it. In September 2022, Vian Bakir (Prof. of Journalism & PolComms, SHiLSS) and Andrew McStay (Prof. of Digital Life, SHiLSS) were invited by the UK Parliament’s Online Harms and Disinformation Sub-Committee to provide evidence to its Inquiry into Misinformation and Trusted Voices. They addressed one of the Inquiry’s questions, namely: is the provision of authoritative information responsive enough to meet the challenge of misinformation spread on social media? Now published, one of their conclusions is that rather than making difficult content moderation decisions about what is true and false on the fly and at scale, it may be better to ensure that digital platforms’ algorithms optimise emotions for social good rather than solely for the profit of the platform and its advertisers. What this social-good optimisation would look like is worthy of further study, but they posit that it would likely involve dialling down platforms’ emotional contagion and engagement of users.
Keywords
- emotional AI, false information
Original language | English |
---|---|
Type | Written evidence invited by Commons Select Committee |
Medium of output | Text |
Publisher | UK Parliament |
Number of pages | 7 |
Publication status | Published - 12 Oct 2022 |
Research outputs (1)
- Published
Empathic Media, Emotional AI, and the Optimization of Disinformation
Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review