Good post by David Golumbia on ChatGPT and how miserable it all is :rat-salute-2:
Mostly with you, but I think it’s fair to say there’s a qualitative aspect to cognition and consciousness that our tech overlords don’t seem to get - the difference between existentialism and nihilism is that the former embraces the possibility that humans can create and enact meaning. Yeah, you can clearly get pretty far with statistical models, and maybe the universe is deterministic and our experience is just the product of particles following concrete physical laws, but I think concluding that you’re a stochastic parrot on the basis of the existence of ChatGPT is an overreach.
Insofar as I understand anything at all about quantum mechanics, it strongly suggests that the universe is not deterministic.
Same for the stochastic parrot thing. I’m a stochastic parrot, so what?
The only point where I disagree here is that calling us stochastic parrots in the same way that ChatGPT is a stochastic parrot is vastly overselling existing technology. It’s literally a claim made by the CEO of the AI company, so probably worth being more than a little skeptical. In fact I’d go as far as claiming that artificial intelligences deriving actual meaning is the last frontier of AI, a problem that can’t even be conceptualized, to my knowledge at least.
You’re not a stochastic parrot. And claiming or believing you are reveals a deep, fundamental ignorance of how language and cognition work. It also reveals a deep ideology: somehow human language, cognition, the ability to work with abstract symbols and semantic meaning, are all reducible to some statistically weighted math problems. Despite AI researchers who aren’t techbros trying to sell you on Madlibs II: Electric Boogaloo telling everyone for years that modern ML models are not intelligent, do not think, and are not doing what human minds do. This is STEM poisoning: engineers, or really coders, who don’t understand how anything works but believe they know everything because of pro-STEM propaganda, confidently spouting off about unknown unknowns.
Very suddenly we’ve gone from “human-like AI is decades off if it’s even possible” to “this math problem that locates statistical correlations in a big .txt file is basically sentient, bro. Trust me, bro!”
Okay, so you’re in the grip of unknown unknowns. You don’t know you’re wrong because you’re not sufficiently familiar with the material. Private meditation is not sufficient for understanding or discussing language, perception, cognition, or really anything. You’re not “making things up”. There are a variety of models, but one that I favor suggests that your brain is made up of many non-conscious modules or agents that interact to produce higher-level speech, utterances, behaviors, whatever. Your conscious self doesn’t know what’s going on down there, but those modules are thinking and engaging in complex decision making. The same way that a person may have never heard of calculus but can perfectly map the trajectory of a thrown object in 3d space without being consciously aware of how they’re doing it.
They’re handling the grammar, the vocabulary, cross referencing information in your memories, evaluating what is and isn’t significant, and applying other processes that you don’t need to be consciously aware of. You’re probably aware from your meditative practice that things go a lot smoother when you’re not acting consciously. You’re confusing a lack of consciousness for a lack of complexity. The non-conscious parts of your brain, the parts that handle the majority of our cognitive functions, are very smart. They just don’t report things to your conscious self unless high-level executive function is needed.
Also, definitions: the unitary self is illusory. Sentience, the ability to feel and perceive, is not. It’s a very important distinction.
AI cannot “write better poetry than you” unless you reduce poetry to random arrangements of words that you think sound nice. Unless you think that the semantic content of poetry is totally irrelevant. Unless you think that language is still language when it doesn’t convey meaning or have any semantic content at all.
In the sense that an AI can produce a novel arrangement of words, and we reduce poetry to novel arrangements of words? But language isn’t reproducing noises. A lyrebird is not talking or communicating or capable of speech. It’s just repeating things it’s heard with no understanding of what those things are. We are not lyrebirds.
Dadaism and its consequences have been a disaster for human civilization.
Also, I disagree with your definition of poetry as, apparently, “any novel combination of words, including those without semantic meaning”. At some point you need to draw a distinction between “poetry” and “any utterance” or the term becomes pointless.
If meaningless arrangements of words based on their statistical prevalence in a dataset are poetry, then what isn’t?
Your first paragraph is a semantic argument that has no bearing on the author’s thesis. It doesn’t matter if meaning is inherent to human life or decided upon by the humans themselves, the argument is that AI art models implicitly put forward the notion that creativity is just putting pixels on a screen or words on a page - but actual artistic expression requires more than that. Even if an AI generates a series of words that are indistinguishable from poetry written by a person, that AI has at no point engaged in the activity of “writing poetry”.