Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
#AIhype
>>
Pause Giant AI Experiments: An Open Letter - Future of Life Institute
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Prof. Emily M. Bender(she/her)
So that already tells you something about where this is coming from. This is gonna be a hot mess.
>>
Why longtermism is the world’s most dangerous secular credo | Aeon Essays
Émile P Torres (Aeon Magazine)
So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".
>>
https://faculty.washington.edu/ebender/stochasticparrots/
And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.
>>
And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.
>>
https://twitter.com/emilymbender/status/1638891855718002691?s=20
On the GPT-4 ad copy:
https://twitter.com/emilymbender/status/1635697381244272640?s=20
On "general" tasks:
https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/084b6fbb10729ed4da8c3d3f5a3ae7c9-Abstract-round2.html
>>
AI and the Everything in the Whole Wide World Benchmark
Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).
>>
Uh, "accurate", "transparent", and "interpretable" make sense. "Safe" depends on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype
>>
Yes, there should be liability, but that liability should clearly rest with people & corporations. "AI-caused harm" already makes it sound like there aren't *people* deciding to deploy these things.
>>
Also "the dramatic economic and political disruptions that AI will cause". Uh, we don't have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment).
>>
Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Costanza-Chock and journalists like Karen Hao and Billy Perrigo.