My summary: we need to democratize all powerful institutions, like, yesterday. Seriously y’all, we’re running out of time.
Really interesting article! I was expecting something more Luddite from the title. I’ll give my two cents on what I think are the main points.
So, in regard to the propaganda thing, I think the author is on the money about the prediction but off about the severity of the consequences. The prediction is correct because, well, look at the DALL-E 2 page. They might say "our content policy does not allow users to generate violent, adult or political content […]", but come on, it’s OpenAI :melon-musk:. On the other hand, how much would actually change? The media landscape is already saturated with loathsome propagandists and the internet is an astroturfed hellhole, not to mention the decades of Cold War terror that created the political landscape of today. And yet leftist ideas persist and even flourish depending on circumstances; it’s almost like we’re right and that resonates with people’s experiences or something :thinkin-lenin:. No one is immune to propaganda, but no one is a complacent meat drone either, as the CIA learned in the 60s.
The mass unemployment thing, though, is a lot more interesting and potentially terrible. I can definitely see the ruling class, once they see themselves untethered from labor, trying to marginalize or even exterminate large sections of the populace. The question there would be: how big a section? If it’s too big, I don’t think they can produce drones quickly enough, or hire enough pigs, to prevent what’s coming to them :mao-aggro-shining:. IMO they would have to buy off a lot more people than the author suggests, which is already a reality regardless of AI.
I liked the article a lot, though, and I think this kind of discussion is necessary if we don’t want to be blindsided by a world that’s in constant change. I especially like the part about reaching out to AI researchers. Most really mean well, but from my (very) limited academic experience, computer science people in general would really benefit from some political education (pls, I don’t want to hear more hare-brained :LIB: shit like “automated fact-checking” at conferences).
Some additional points I think would be interesting to discuss: the author kind of glossed over China, which is also investing heavily in AI, so what would an explosion of AI use mean over there? Another one: what if our assumption that these models can only be trained with massive amounts of data and computing power is wrong, for example if research into language models for low-resource languages bears fruit?
It’s hard to say how close we are to the theoretical limit for these low-prior models, which make virtually no assumptions about the data; the transformer was a big leap forward in efficiency, so further improvement isn’t out of the question. And if all you want is a machine that learns human languages and literally nothing else, there’s obviously room for improvement. GPT-3 was designed for language, but it can just as well learn to generate images or audio or whatever you throw at it, as long as you encode that data as a heap of tokens. We also know these models transfer what they’ve learned about one language to another: if you only have a few hundred pages of Mandarin, the models will do very poorly, but add a few terabytes of English to the training data and they will learn the Chinese much better.

As far as general-purpose learning is concerned, there are impressive examples of few-shot learning in a lot of the big language model research, and of course AlphaZero used no training data at all to become superhuman at Go, or put another way, it generated its own training data by playing millions of games against itself. So the idea that AI is merely parroting by detecting patterns in mountains of human-generated data is kind of dead, and I’d expect that to become much more obvious rather soon.

As for compute, I wouldn’t expect you to be able to train anything with the capability of these large transformers on a laptop any time soon, if ever, but they can already run on your laptop (slowly), and the few-shot learning capabilities they’ve picked up will of course be carried along, so you might eventually be able to run software that can learn a new skill or even an entire language.
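Just to make the “heap of tokens” bit concrete, here’s a rough Python sketch of what I mean (the vocab layout and pixel quantization here are completely made up by me, not anyone’s actual pipeline): a string of text and a grayscale image both end up as flat lists of integers, and a next-token transformer only ever sees lists like these.

```python
import numpy as np

# --- text: map characters to integer token IDs (toy byte-level "tokenizer") ---
text = "the cat sat"
text_tokens = [ord(c) for c in text]          # one ID per character, 0..255

# --- image: quantize pixels into a small palette, then flatten row by row ---
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))     # stand-in for an 8x8 grayscale image
n_buckets = 16                                # coarse brightness palette (arbitrary)
image_tokens = (image // (256 // n_buckets)).flatten()

# Offset the image IDs so they don't collide with the text vocabulary:
# IDs 0..255 mean "characters", IDs 256..271 mean "pixel brightness buckets".
image_tokens = (image_tokens + 256).tolist()

# Both modalities are now just flat sequences of integers. A decoder-only
# transformer trained to predict the next ID doesn't care what the IDs "mean".
print(text_tokens[:10])
print(image_tokens[:10])
```

Same basic idea behind stuff like Image GPT: nothing about the architecture changes, you just swap out what the token IDs stand for.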
I’m really impressed with this writer, thanks for sharing them.
Capitalism is a rogue AI that runs really slowly, using people and corporations as its medium.
Thus we can already observe a paperclip-maximizing AI destroying all life on earth.
However, the fact that we already have one, know about it, and refuse to stop it has set our future course. Communism is the only thing that can stop a rogue AI. Any Reddit bro that doesn’t acknowledge this fact is not worth talking to.
Alright, I read the whole thing and mostly I’m just psyched at the prospect of condescending white-collar douches losing their jobs after years of these c*nts pulling the “fuck you, I got mine” card on all the working poor.
Anyway, I’m choosing to take these ideas optimistically. I think white-collar assholes having their jobs automated, and the subsequent move to blue-collar work, will potentially force them into greater solidarity with the working poor, and likewise force them to be more sympathetic to socialist ideas.
The article itself acknowledges that a lot of manual labor and other blue-collar work will still need doing by people for the foreseeable future, so there’s still leverage that working people can exert. I figure if we still have that leverage, and we get a large influx of recently proletarianized people – maybe with a dash of convert’s zeal here and there – coming specifically from a decline in the “middle class,” the people who have historically churned out the ideological justifications for capitalism and class hierarchy, then we might have a recipe for some actual movement against capitalism itself, rather than just a band-aid here or there.
That learning AI is unlikely to be able to explain the appeal of Cumtown. Sure it can do basic jokes, but like, idk if it really understands irony.