Good post by David Golumbia on ChatGPT and how miserable it all is :rat-salute-2:
Mostly with you, but I think it’s fair that there’s a qualitative aspect to cognition and consciousness that our tech overlords don’t seem to get - the difference between existentialism and nihilism is that the former embraces the possibility that humans can create and enact meaning. Yeah, you can clearly get pretty far with statistical models, and maybe the universe is deterministic and our experience is just the product of particles following concrete physical laws, but I think concluding that you’re a stochastic parrot on the basis of the existence of ChatGPT is an overreach.
Insofar as I understand anything at all about quantum mechanics, it strongly suggests that the universe is not deterministic.
Same for the stochastic parrot thing. I’m a stochastic parrot, so what?
The only point where I disagree here is that calling us stochastic parrots in the same way that ChatGPT is a stochastic parrot is vastly overselling existing technology. It’s literally a claim made by the CEO of the AI company, so it’s probably worth being more than a little bit skeptical. In fact I’d go as far as claiming that artificial intelligences deriving actual meaning is the last frontier of AI, a problem that can’t even be conceptualized, to my knowledge at least.
You’re not a stochastic parrot. And claiming or believing you are reveals a deep, fundamental ignorance of how language and cognition work. It also reveals a deep ideology: somehow human language, cognition, the ability to work with abstract symbols and semantic meaning, are all reducible to some statistically weighted math problems. This despite AI researchers who aren’t techbros trying to sell you on Madlibs II: Electric Boogaloo telling everyone for years that modern ML models are not intelligent, do not think, and are not doing what human minds do. This is STEM poisoning: engineers, or really coders, who don’t understand how anything works but believe they know everything because of pro-STEM propaganda, confidently spouting off about unknown unknowns.
Very suddenly we’ve gone from “human-like AI is decades off, if it’s even possible” to “this math problem that locates statistical correlations in a big .txt file is basically sentient, bro. Trust me, bro!”
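For what it’s worth, the “locates statistical correlations in a big .txt file” description can be made concrete with a toy sketch (the corpus and all names below are made up for illustration, not from anything in this thread): a bigram Markov chain just records which word followed which in its training text, then samples from those counts - no grammar, no semantics, only correlation.

```python
import random

# Illustrative training text -- the "big .txt file" in miniature.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog on the mat"
).split()

# Build the statistical table: word -> list of words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def babble(start, length, seed=0):
    """Emit `length` words by repeatedly sampling a recorded successor.

    The model never represents what any word means; it only replays
    the correlations it counted.
    """
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        # Fall back to a random corpus word if a word has no recorded successor.
        word = rng.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(babble("the", 8))
```

Every adjacent word pair the sketch emits was seen in the training text, which is the whole trick; scaling the table up (and smoothing it with neural networks) is what the actual products being argued about do.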
Okay, so you’re in the grip of unknown unknowns. You don’t know you’re wrong because you’re not sufficiently familiar with the material. Private meditation is not sufficient for understanding or discussing language, perception, cognition, or really anything. You’re not “making things up”. There are a variety of models, but one that I favor suggests that your brain is made up of many non-conscious modules or agents that interact to produce higher-level speech, utterances, behaviors, whatever. Your conscious self doesn’t know what’s going on down there, but those modules are thinking and engaging in complex decision making - the same way a person who has never heard of calculus can perfectly map the trajectory of a thrown object in 3D space without being consciously aware of how they’re doing it.
They’re handling the grammar, the vocabulary, cross-referencing information in your memories, evaluating what is and isn’t significant, and applying other processes that you don’t need to be consciously aware of. You’re probably aware from your meditative practice that things go a lot more smoothly when you’re not acting consciously. You’re confusing a lack of consciousness for a lack of complexity. The non-conscious parts of your brain, the parts that handle the majority of your cognitive functions, are very smart. They just don’t report things to your conscious self unless high-level executive function is needed.
Also, definitions: the unitary self is illusory. Sentience, the ability to feel and perceive, is not. It’s a very important distinction.
AI cannot “write better poetry than you” unless you reduce poetry to random arrangements of words that you think sound nice. Unless you think the semantic content of poetry is totally irrelevant. Unless you think that language is still language when it doesn’t convey meaning or have any semantic content at all.
In the sense that an AI can produce a novel arrangement of words, and we reduce poetry to novel arrangements of words? But language isn’t just reproducing noises. A lyrebird is not talking or communicating or capable of speech. It’s just repeating things it’s heard with no understanding of what those things are. We are not lyrebirds.
Dadaism and its consequences have been a disaster for human civilization.
Also, I disagree with your definition of poetry as, apparently, “any novel combination of words, including those without semantic meaning”. At some point you need to draw a distinction between “poetry” and “any utterance” or the term becomes pointless.
If meaningless arrangements of words based on their statistical prevalence in a dataset are poetry, then what isn’t?
Your first paragraph is a semantic argument that has no bearing on the author’s thesis. It doesn’t matter if meaning is inherent to human life or decided upon by the humans themselves, the argument is that AI art models implicitly put forward the notion that creativity is just putting pixels on a screen or words on a page - but actual artistic expression requires more than that. Even if an AI generates a series of words that are indistinguishable from poetry written by a person, that AI has at no point engaged in the activity of “writing poetry”.
Marxists: Capitalist technological creation and fixed capital accumulation and automation will lead to mounting contradictions and be the eventual base of a fully automated socialist society
Also Marxists: No, don’t create automation, don’t accumulate fixed capital or advance technology. Let’s remain stagnant in 20th-century technology forever.
And yet if you call someone anti-materialist over this, it breaks their mind for weeks.
Not wanting your quality of life to significantly degrade because tech bros are stealing the commons again is anti-materialist, got it.
It is if you don’t have any realistic plan whatsoever to actually eliminate the problem, and instead choose to endlessly complain about it.
Basing things on what you “want” and not what is actually possible is idealism, yeah. AI is coming and there’s nothing we can do about it except seize the infrastructure when the time is right and use its power for ourselves.
When I think about generative AI, I don’t feel like it’s an attack on human creativity. As with every technology before, especially the rise of computers, artists adapted and harnessed the new technology to create new art. It’s going to be the same with generative AI. It will be a tool just like synths or 3D renders or any other digital processing.
edit: Also that the point of art (in my opinion) has never been the artwork itself, or at least not only about the piece of art in isolation, but how it relates to the artist or the lived experience.
Indeed, I consider AI-generated art to be kinda like readymades, really. Art is not about the piece but about the piece being observed - that’s where meaning occurs. In the same way that the babble AI generates acquires meaning when it’s read by us: it didn’t intend to have any meaning at generation, it was given meaning by the spectator. Same with art.
So if the AI-generated content is sufficiently advanced, and you are not aware it’s AI-generated when you interpret it and give it meaning, does it then become art?
You are saying something different than the commenter above. The original comment stated art gains meaning from the artist, and you are saying that art gains meaning from the audience. If it’s the latter, and the audience isn’t aware the creator is AI, then it becomes art just like any human made content.
The difference between AI and human artists is only meaningful if you believe art gains meaning from the artist.
I made a mistake and I intended to specify “readymade art” in the second sentence; it’s not meant to cover all art, sorry 😅. I realize now that I did state something different from the post I was replying to, but I do stand by the statement that it gains meaning from the audience, in the same sense that readymade art pieces are not originally intended to convey artistic intent when produced - only when they’re displayed. The mere act of being displayed and spectated as a display is where the piece (the infamous urinal, for example) acquires meaning. I know it’s a controversial opinion though.
This isn’t even a novel problem in art. I’d argue it’s pretty much the same as Xenakis’s stochastic compositions and John Cage’s experiments with composing using the I Ching: they were trying to divorce the artist from the art. The difference is that the AI pieces just happen to be rather pleasing on an immediate level. Urinals, stochastic music, and 4′33″ were, uh, not pleasing. If the implication is that AI art is essentially the same as human-made art, that context and curation are the entire difference, then yeah, it’s something art has been contending with for decades.
It should be mentioned that this would be a better discussion if the obvious threat of being automated out of a job weren’t looming over our heads. I think it injects too much consequence into what is otherwise a conversation a bit divorced from normal life.
in their minds perpetual reliance on creativity is bad engineering
That would be because it is. Using new code to solve a known problem is little more than an investment in creating defects. Sometimes you have to do it for reasons outside of the context of it being new, and every software engineer should know how to implement the core functionality related to their area of expertise in order to understand why it might not work, but writing new production code for the sake of new code is irresponsible. If you want to write fun, creative code, just don’t put it in something that is actually intended to work.
That said, many developers do celebrate the creativity of putting the pieces together to do something substantively new. It’s a rush. It leads to a bit of a god complex, and too many software engineers refuse to be responsible for managing their own brainworms, but it is a common motivator of SWEs.