Galloway’s three observations on AI saturation

Media theorist Alexander R. Galloway recently wrote a blog post called ‘Normal Science’. I’ll summarise Galloway’s three observations, derived from what he calls the ‘categorical saturation’ of AI; that is:

Let’s assume that every undergraduate essay is written by ChatGPT, that every programmer uses Copilot to auto-generate code, that every designer uses Stable Diffusion for storyboarding and art direction. What would happen if we analyzed such a scenario, the full saturation of AI across all categories?

– Alexander R. Galloway

Here are my brief summaries of his three observations. I won’t comment on them (not least because I’m poorly qualified to do so), and I encourage you to read Galloway’s original post:

  1. As more content is generated through AI trained on huge data sets, those data sets increasingly become infected by AI-generated content. Galloway describes AI as fundamentally centripetal and entropic: “it’s entropic because it’s extractive; value is taken out of the system, while less and less is replenished.”
  2. AI’s successes and failures bear no relationship to truth or falsity; indeed, they aren’t really successes or failures at all, merely probabilistic outcomes. Galloway calls this fractal failure.
  3. AI doesn’t know when it is wrong; even when it fails, it has no awareness that it has failed. Galloway describes this as the absolute failure of AI.
