- Careful, I thought…is this going to be like those old anti-internet books of the 1990s? Thankfully, it's absolutely not! It calls out the bullshit that's everywhere in the AI universe.
- Emily M. Bender is a Professor of Linguistics at the University of Washington. She's an expert on how large language models work and why the illusion they produce is so compelling. Alex Hanna is Director of Research at the Distributed AI Research Institute and a former senior research scientist on Google's Ethical AI team.
- Their basic premise is that 'synthetic text extruding machines can’t fill holes in the social fabric. We need people, political will, and resources...Artificial intelligence, if we're being frank, is a con: a bill of goods you're being sold to line someone else's pocket'. In their words, it's 'mathy maths', a racist pile of linear algebra, or 'Systematic Approaches to Learning Algorithms and Machine Inferences' (aka SALAMI).
- '...for corporations and venture capitalists, the appeal of AI is not that it is sentient or technologically revolutionary, but that it promises to make the jobs of huge swaths of labor redundant and unnecessary.'
- The chapter 'AI Hype in Art, Journalism, and Science' is excellent. 'Today's synthetic media extruding machines are all based on data theft and labor exploitation, and enable some of the worst, most perverse incentives of each of these attendant fields. The use of these systems does further damage socially: displacing working artists and journalists, warping the practice of science, and polluting the information ecosystem. And their existence undermines the position and value of craft across these endeavors'.
- What we're seeing is the 'normalisation of data theft and exploitation...the derivative works from these models are largely copying their works and also significantly impinging on existing markets...In the case of the New York Times, users of ChatGPT and its different variants are able to produce, nearly verbatim, text from the newspaper, when they provide specific prompts....The argument that these tools are sufficiently "transformative" [permitted under the US Copyright Act] seems to ring hollow if they extrude words and images that are nearly identical to the data they are trained on, and do so on demand when prompted to produce something that matches the work of a specific artist or news outlet'.
- 'For AI boosters, the threat of these lawsuits is existential. And frankly we welcome that. Venture capital firm Andreessen Horowitz warned that all of their investments in AI would be worth a lot less if they had to abide by copyright law. "Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development". That is, if they actually had to pay artists, illustrators, and writers what their content is worth, rather than simply stealing that content from the web, their business model would fall apart'.
- This well-informed, clearly written book will bring you a whole new perspective on what's actually happening in the world of AI. Highly recommend.