Permanently Deleted

In another ten years, after AI behaviour has been studied academically (which I feel AI developers are in no hurry to facilitate, so that the product preserves its mystique), we’re all going to be super-jaded about this.

Like, someone’s going to notice something like this and someone else is going to say "oh, yeah, that’s just the Chang-Plimpton effect. It happens when multiple [hoozitz]-type parameters are very high in the source image, essentially creating a feedback loop in the [whatchamacallit]."

3 points

I mean, I’m jaded about it now. The article, as written, feels like some creepypasta I’d have seen on 4chan twenty years ago.

Oooo! A mysterious uncanny-valley ghost-woman image is popping up in the back of all my negative-of-a-negative-of-a-negative search results. Let’s try to mystify this into a supernatural phenomenon rather than recognize it as a simple AI heuristic scraping the bottom of the logical barrel.

Someone at ABC News needed a fresh spin on the topic of AI Art, which was already saturating media markets. So they wrote a ghost story about AI Art (or, more likely, found a ghost story and slapped a journalistic veneer over the top). I wouldn’t even be surprised if someone used a chatbot to reskin the old Pokemon urban legend about an IRL kid who was killed by a haunted copy of the game.


Wait, so he asked it to generate the opposite of a man who’s been pictured smiling and is considered traditionally handsome (Marlon Brando), and it’s somehow shocking that the opposite is the face of a frowning woman with ugly features?

I mean, all the training data that gets fed to this thing just reproduces the pre-existing biases of the society the data was collected from, filtered through the people collecting and classifying it.
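For what it’s worth, the negative-weight mechanics aren’t mysterious either. Loab reportedly turned up via a negatively weighted prompt (something like “Brando::-1”), which steers generation *away* from a concept. Real models do this on learned high-dimensional text embeddings; the direction-flipping idea can be sketched with a toy vector (all names and numbers here are made up for illustration, not actual model internals):

```python
import numpy as np

# Toy stand-in for a text embedding; real systems use learned
# CLIP-style vectors, not random numbers like these.
rng = np.random.default_rng(0)
brando = rng.normal(size=8)  # hypothetical "Brando" embedding

# A negative prompt weight roughly flips the embedding's
# contribution, pushing generation away from the concept.
anti_brando = -1.0 * brando

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The flipped vector points in the maximally dissimilar direction.
print(round(cosine(brando, anti_brando), 6))
```

So “the opposite of Brando” lands wherever the model’s training data puts the far end of those axes: old instead of young, frowning instead of smiling, “ugly” instead of “handsome”. No ghost required.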

43 points

The article is pretty silly at certain points. It presents AI-generated text saying something as if it’s a conscious belief being expressed, when you could easily get AI-generated text to say whatever you want with the right prompts and enough tries. Then it suggests that AI will cause a breakdown of agreed-upon reality because everyone will question whether a photo is real or fake, but as it mentions, it’s long been possible to create fake images with Photoshop. There’s even that famous photo with Stalin where another official (Nikolai Yezhov) was airbrushed out, so it’s nothing new.

Which honestly is probably where this whole preoccupation with fake images comes from: the whole idea of a “literally 1984” dystopia. The reality is that it’s much easier to mislead someone by how you emphasize information than by telling outright lies. It’s not a lie that Al Gore owns (owned?) a private jet, and if you frame it in just such a way, you can get the audience to draw the conclusion that climate change isn’t actually a big deal.

At any moment, there are countless things happening all over the world, and some of them provide evidence for things that are false, if only by random chance. It’s sort of like if you looked at all the text AIs are putting out, singled out some text that fits the conclusion you want, then put that text directly after a bunch of creepy images. If you just cherry-pick and frame things a certain way, you can create the perception that we’ve created a monstrous, sapient entity that’s begging us to stop. Does this phenomenon ever happen with non-creepy images? The author never asks that :thonk:

Ultimately there’s simply no need to lie by making things up because it’s much easier to lie by telling the truth.
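The cherry-picking point is easy to demonstrate: generate a pile of pure noise, keep only the outputs that match an ominous pattern, and the curated sample reads like a message. A minimal sketch (the word list and counts are arbitrary, chosen just for illustration):

```python
import random

random.seed(42)
WORDS = ["stop", "please", "hello", "cat", "blue", "run", "help", "tree"]

# Generate 1,000 random three-word "outputs": pure noise.
outputs = [" ".join(random.choices(WORDS, k=3)) for _ in range(1000)]

# Cherry-pick: keep only the outputs containing ominous words.
spooky = [o for o in outputs if "stop" in o or "help" in o]

# A few noise strings, selected after the fact, now look like a
# machine "begging us to stop". The selection created the pattern.
print(len(spooky), "of", len(outputs), "outputs are 'spooky'")
print(spooky[:3])
```

Run enough samples and the curated subset is guaranteed to exist; the only editorial choice is which pattern you go looking for.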


Personally I’m pro “they added the guy in next to Stalin” theory.

11 points

I’ll argue a bit that there’s a difference between photo manipulation and creating evidence of entirely fabricated scenarios out of thin air. But I agree with you nonetheless.

8 points

The scariest thing about deep fakes is that the powerful will be able to escape the truth by claiming the truth is a deep fake.


yeah, getting another AI to comment on Loab is silly, as it’s at best an artifact of weird training data in a different AI

they aren’t thinking entities, and they don’t have ideas

6 points
Deleted by creator
8 points
Deleted by creator