Artificial Intelligence

#chaosstream #incubation #tosort

“It’s a synthetic party, so many of the policies can be contradictory to one another,” Staunæs said. “Modern machine learning systems are not based on biological and symbolic rules of old-fashioned artificial intelligence, where you could uphold a principle of noncontradiction as you can in traditional logic. When you synthesize, it’s about amplifying certain tendencies and expressions within a large, large pool of opinions. And if it contradicts itself, maybe they could do so in an interesting way and expand our imagination about what is possible.”



Anatomy of an AI System



artsy fartsy


Pluralistic: Google’s chatbot panic (16 Feb 2023) – Pluralistic: Daily links from Cory Doctorow

ChatGPT and its imitators have all the hallmarks of a tech fad, and are truly the successor to last season’s web3 and cryptocurrency pump-and-dumps. One of the clearest and most inspiring critiques of chatbots comes from science fiction writer Ted Chiang, whose instant-classic critique was called “ChatGPT Is a Blurry JPEG of the Web”:

Chiang points out a key difference between the output of ChatGPT and human authors: a human author’s first draft is often an original idea, badly expressed, while the best ChatGPT can hope for is a competently expressed, unoriginal idea. ChatGPT is perfectly poised to improve on the SEO copypasta that legions of low-paid workers pump out in a bid to climb the Google search results.

Speaking of Chiang’s essay in this week’s episode of the This Machine Kills podcast, Jathan Sadowski expertly punctures the ChatGPT4 hype bubble, which holds that the next version of the chatbot will be so amazing that any critiques of the current technology will be rendered obsolete:

232. 400 Hundred Years of Capitalism Led Directly to Microsoft Viva Sales

Sadowski notes that OpenAI’s engineers are going to enormous lengths to ensure that the next version won’t be trained on any of the output from ChatGPT3. This is a tell: if a large language model can produce materials that are as good as human-produced text, then why can’t the output of ChatGPT3 be used to create ChatGPT4?

Sadowski has a great term to describe this problem: “Hapsburg AI.” Just as royal inbreeding produced a generation of supposed supermen who were incapable of reproducing themselves, so too will feeding a new model on the exhaust stream of the last one produce an ever-worsening gyre of tightly spiraling nonsense that eventually disappears up its own asshole.
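The “Hapsburg AI” failure mode can be sketched numerically. In the toy simulation below, a “model” is just a Gaussian fit (mean, standard deviation), and each generation is trained only on samples produced by the previous generation. The distribution collapses generation by generation, losing the variety of the original data. All names and parameters here are illustrative assumptions, not anything from the article:

```python
import random
import statistics

def train_on(samples):
    """Fit a toy 'model' (mean, std) to the training data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(model, n, rng):
    """Sample n outputs from the fitted model."""
    mu, sigma = model
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)

# Generation 0: "human-produced" data with real variety (std ~ 1.0).
data = [rng.gauss(0.0, 1.0) for _ in range(10)]
model = train_on(data)

# Each later generation trains ONLY on the previous model's output.
for generation in range(1000):
    data = generate(model, 10, rng)
    model = train_on(data)

# The fitted std shrinks toward zero: the "ever-worsening gyre".
print(f"original std ~1.0, after 1000 generations: {model[1]:.6f}")
```

The collapse happens because each refit underestimates the tails of the previous distribution slightly, and those small losses compound multiplicatively across generations, which is the statistical intuition behind why model output makes poor training data for the next model.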

Linked from: