Mystery AI Hype Theater 3000, Episode 28 - LLMs Are Not Human Subjects
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. Plus: why these writings are essentially calls to fabricate data.
References:
PNAS: ChatGPT outperforms crowd workers for text-annotation tasks
* Beware the Hype: ChatGPT Didn't Replace Human Data Annotators
* ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
Political Analysis: Out of One, Many: Using Language Models to Simulate Human Samples
Behavioral Research Methods: Can large language models help augment English psycholinguistic datasets?
Information Systems Journal: Editorial: The ethics of using generative AI for qualitative data analysis
Fresh AI Hell:
Advertising vs. reality, synthetic Willy Wonka edition
* https://x.com/AlsikkanTV/status/1762235022851948668?s=20
* https://twitter.com/CultureCrave/status/1762739767471714379
* https://twitter.com/xriskology/status/1762891492476006491?t=bNQ1AQlju36tQYxnm8BPVQ&s=19
A news outlet used an LLM to generate a story...and it falsely quoted Emily
* AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?
Trump supporters target Black voters with faked AI images
* Seeking Reliable Election Information? Don't Trust AI (Julia Angwin, Proof)
You can check out future streams at http://twitch.tv/dair_institute.
Follow Emily at https://twitter.com/EmilyMBender and https://dair-community.social/@EmilyMBender
Follow Alex at https://twitter.com/alexhanna and https://dair-community.social/@alex
Music: Toby Menon.
Production: Christie Taylor.
Graphic Design: Naomi Pleasure-Park.