Rob Horning once again spitting bars on why everyone needs to both calm the fuck down and actually freak the fuck out about AI.

people I know, acquaintances, friends’ boyfriends (always a dude) ask me, a writer, what I think about ChatGPT. “what do you mean, what do I think?” I ask. they can’t exactly articulate it, but the implication is that AI-generated text will make writing an obsolete practice, or devalue the effort required to do research and formulate arguments, or something. I’m not much in the business of making arguments any more; having opinions on everything is merely a way to keep you engaged with whatever They want your attention on, which is the final (current) frontier of colonization. but that’s tangential to the question. someone I was talking to about this made the point that advances in technology necessarily beget further advances in technology, and that we’re “just at the beginning” of this AI revolution (a wildly ahistorical claim, since none of the recent faddish products do anything different from what AI has always done). I tried to point out that technology only continues to advance on itself in this way so long as we as a society continue to believe the advancement of technology is a good in and of itself. he claimed, without basis, that these tools will achieve an unimaginable degree of complexity, such that some AI generator might be able to produce idiosyncratic and expressive text the way that skillful, thoughtful human writers do. obviously I disagree, because even with a rudimentary understanding of machine learning you have to see that all these tools do is approximate some median representation, a blurry outline of whatever they’ve been trained to “recognize” via statistical analysis, and that the machines obviously don’t do anything like “thinking.” he asked: what if we could train the machine to emulate sarcasm, an affect that depends on recognition between perceiving beings, each carrying a mutual appreciation for the semiotic system in which the dialogue is possible? leaving aside the obvious question of why the fuck anyone would want a computer to be sarcastic, I anxiously await a machine that isn’t merely a blank slate for starry-eyed naifs and technonihilists to project all their psychic weirdness onto. plus, people tend, in their enthusiasm, to overlook how much human labor is required to make these tools, instead choosing to believe that God or Atman dictates the course of their development free of human intervention. if any nonhuman force makes them, it’s Moloch.
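(to make the “statistical analysis” bit concrete: here is a toy sketch in Python, a next-word counter. this is emphatically not the architecture of ChatGPT or any real product, just the simplest possible illustration of prediction-by-frequency, the general family these tools belong to. the little corpus and the function names are invented for the example.)

```python
# a toy next-word predictor: count which word follows which in a corpus,
# then "generate" by sampling continuations in proportion to those counts.
# no understanding, just frequency.
import random
from collections import defaultdict, Counter

corpus = (
    "the machine does not think the machine counts "
    "the machine predicts the next word and the word is a blur of the training text"
).split()

# table of observed continuations: word -> how often each next word appeared
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=12):
    word, out = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        # pick a continuation weighted by how often it followed this word in "training"
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # emits a plausible-looking smear of the corpus, nothing more
```

a real model swaps these word counts for billions of learned parameters and a much longer context, but the job description is the same: predict a likely continuation of the text it was fed.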

I don’t want to retread what Horning says in this newsletter: read it for yourself. I agree with his point that it’s ridiculous to think AI will somehow dissolve reality until people are unable to separate what’s human from what’s machine.

what I do want to say here is that a lot of the anxiety over living in a post-truth world, and the paranoia about “psyops” and about the intractable division being sown among the people by the creation of echo chambers, is almost entirely mitigated by my having stayed off social media. it’s only when I find myself reading the replies to some tweet my friends have drawn my attention to that the Bad Vibes start thrumming.

well, not entirely. Bad Vibes abound, and paranoia is the only defense we have against the evildoers who rule this secular world, but I digress.

anyway, I’m so glad that the federal government is swiftly coming to the rescue of the failed Silicon Valley Bank. how would we ever achieve the full potential of AI if we allowed the startups researching this technology to lose all their money as a result of their hubris?

