Good post by David Golumbia on ChatGPT and how miserable it all is :rat-salute-2:
This is one of the worst things I have ever had the misfortune of reading.
I don’t think there is ever going to be a good reason to use a nuclear bomb
What does “being against it” do, though? What specific actions would you take in opposition to deep learning tech?
I’m strict on calling it un-Marxist because carrying out an anti-AI programme would rely on either an unsustainable unending struggle against everyone trying to recreate it, or going full :a-guy: and bringing us so far back into the Stone Age that we can never reindustrialize again.
Most of the problems that people describe with deep learning tech, including what you’re describing, are problems with the system that it exists within, not problems with the tech itself. The abolition of capitalism is the only sustainable and permanent solution to the problem, and would be one that allows humanity to fully realize its benefits with few adverse consequences.
As of right now, I do not think any opposition to AI will actually benefit workers in any way – the most likely outcome is that huge media companies end up being the only ones able to use the technology effectively, which would mean most of the job eliminations we hoped to prevent happen anyway. It’s a fight between media companies wanting stronger copyright (look up the Mickey Mouse curve – we’re due for another expansion of copyright) and tech companies wanting to sell ridiculously overpriced cloud services, and regular artists don’t get a seat at this table under our current system.
ChatGPT can write code. It can debug code. It can design websites. It can translate between languages better than any existing automated translation service. I fail to see how this doesn’t automate socially necessary work and solve problems.
AI has improved exponentially in just the last year. You are completely blind if you do not see the potential this has to eliminate nearly all white-collar work as it becomes even more sophisticated.
Let’s be honest. ChatGPT is copying code snippets from StackOverflow with varying levels of correctness. I guess that is what people were doing anyways though.
No, it isn’t connected to the internet any longer, and it creates novel code for requests in plain English that are extremely specific and niche.
From what I have seen, there is no guarantee for correctness on technical matters.
But it comes close, which makes it a useful tool. A programmer can get it to generate some code, then go through and make sure it’s good. I’ve used it for that. There was one problem it couldn’t help me solve, but on another it probably saved me a good hour or two (probably more if I’d lost focus because of untreated ADHD) of trying to find whether someone else had my specific problem, or else breaking it down into more generic problems.
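For what it’s worth, that review step can be as lightweight as writing the checks yourself before trusting anything. A minimal sketch in Python – `chunk` here stands in for a hypothetical model-generated function; the name, behavior, and edge cases are my own illustration, not anything specific from the thread:

```python
# Hypothetical example of a function as a model might generate it,
# followed by the quick checks a programmer would run before trusting it.

def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The review step: exercise the edge cases the model may have glossed over.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # uneven tail
assert chunk([], 3) == []                                   # empty input
try:
    chunk([1], 0)                                           # invalid size
except ValueError:
    pass
```

The point is just that the human still supplies the correctness criteria; the model only supplies a candidate.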
The same goes for art. Artists can use an AI-generated image as a base and work from there.
Your first paragraph is a semantic argument that has no bearing on the author’s thesis. It doesn’t matter if meaning is inherent to human life or decided upon by the humans themselves, the argument is that AI art models implicitly put forward the notion that creativity is just putting pixels on a screen or words on a page - but actual artistic expression requires more than that. Even if an AI generates a series of words that are indistinguishable from poetry written by a person, that AI has at no point engaged in the activity of “writing poetry”.
AI cannot “write better poetry than you” unless you reduce poetry to random arrangements of words that you think sound nice. Unless you think the semantic content of poetry is totally irrelevant. Unless you think language is still language when it doesn’t convey meaning or have any semantic content at all.
In the sense that an AI can produce a novel arrangement of words, and we reduce poetry to novel arrangements of words? But language isn’t reproducing noises. A lyrebird is not talking or communicating or capable of speech; it’s just repeating things it’s heard with no understanding of what those things are. We are not lyrebirds.
Dadaism and its consequences have been a disaster for human civilization.
Also, I disagree with your definition of poetry as, apparently, “any novel combination of words, including those without semantic meaning”. At some point you need to draw a distinction between “poetry” and “any utterance”, or the term becomes pointless.
If meaningless arrangements of words based on their statistical prevalence in a dataset are poetry, then what isn’t?
Same for the stochastic parrot thing. I’m a stochastic parrot, so what
The only point where I disagree here is that calling us stochastic parrots in the same way that ChatGPT is a stochastic parrot vastly oversells existing technology. It’s literally a claim made by the CEO of the AI company, probably worth being more than a little bit skeptical of. In fact, I’d go as far as claiming that artificial intelligences deriving actual meaning is the last frontier of AI, a problem that can’t even be conceptualized, to my knowledge at least.
Mostly with you, but I think it’s fair to say there’s a qualitative aspect to cognition and consciousness that our tech overlords don’t seem to get – the difference between existentialism and nihilism is that the former embraces the possibility that humans can create and enact meaning. Yeah, you can clearly get pretty far with statistical models, and maybe the universe is deterministic and our experience is just the product of particles following concrete physical laws, but I think concluding that you’re a stochastic parrot on the basis of the existence of ChatGPT is an overreach.
Insofar as I understand anything at all about quantum mechanics, it strongly suggests that the universe is not deterministic.
You’re not a stochastic parrot. And claiming or believing you are reveals a deep, fundamental ignorance of how language and cognition work. It also reveals a deep ideology: somehow human language, cognition, and the ability to work with abstract symbols and semantic meaning are all reducible to some statistically weighted math problems. This despite AI researchers – the ones who aren’t techbros trying to sell you on Mad Libs II: Electric Boogaloo – telling everyone for years that modern ML models are not intelligent, do not think, and are not doing what human minds do. This is STEM poisoning: engineers, or really coders, who don’t understand how anything works but believe they know everything because of pro-STEM propaganda, confidently spouting off about unknown unknowns.
Very suddenly we’ve gone from “human like ai is decades off if it’s even possible” to “this math problem that locates statistical correlations in a big .txt file is basically sentient, bro. Trust me, bro!”
Okay, so you’re in the grip of unknown unknowns. You don’t know you’re wrong because you’re not sufficiently familiar with the material. Private meditation is not sufficient for understanding or discussing language, perception, cognition, or really anything. You’re not “making things up”. There are a variety of models, but one that I favor suggests that your brain is made up of many non-conscious modules or agents that interact to produce higher-level speech, utterances, behaviors, whatever. Your conscious self doesn’t know what’s going on down there, but those modules are thinking and engaging in complex decision-making – the same way a person may have never heard of calculus but can perfectly map the trajectory of a thrown object in 3D space without being consciously aware of how they’re doing it.
They’re handling the grammar, the vocabulary, cross referencing information in your memories, evaluating what is and isn’t significant, and applying other processes that you don’t need to be consciously aware of. You’re probably aware from your meditative practice that things go a lot smoother when you’re not acting consciously. You’re confusing a lack of consciousness for a lack of complexity. The non-conscious parts of your brain, the parts that handle the majority of our cognitive functions, are very smart. They just don’t report things to your conscious self unless high-level executive function is needed.
Also, definitions; the unitary self is illusory. Sentience, the ability to feel and perceive, is not. It’s a very important distinction.
There are good arguments against the current direction of AI development, but only one of them makes a brief cameo in this piece (AI reifies social inequalities and bigotries and further refines them). Missing is what ought to be obvious: these models are hot garbage. The creative product of their “work” is bad. Look at that shitty racism rap – ignore the racist and sexist content and just look at it from the perspective of writing lyrics. It fucking sucks. It has an at-best-loose understanding of meter, rhyme seems to exist purely to make its phrasing maximally awkward, and it uses no real poetic technique. The only lyrics it could actually replace are the random interjections of European techno producers. Then go look at the “art” these things produce. It’s complete shit. It’s just a reproduction of an idiot’s understanding of what an image is supposed to be. At its absolute best, it isn’t good enough for a coffee-table book of mediocre art.
As a programmer, I find these tools vaguely useful for some boilerplate code when monitored, but most of the code they spit out either doesn’t work or reflects the reality that the model does nothing but put together words it thinks are related, with no understanding of the underlying use of the code in question. They only perform well when you give them a purely abstract exercise. Start using them for anything real, and you’ll be rewriting 70% of the code you get.
You are correct, but I think in these discussions there is an assumption that AI tooling will eventually become good enough to overcome those problems.
I’m not particularly worried about that, tbh. These models don’t understand why we put the things together that we put together, just that we do. They can duplicate the things we do, but that doesn’t mean they can duplicate the subtextual conversation between the reader/viewer/listener and the art that makes art art in the first place.