Permanently Deleted

43 points

Bruh half of those photos do not depict the same person. Whoever wrote this article is a creepypasta fan.

43 points

The article is pretty silly at certain points. It presents AI-generated text as if it's expressing a conscious belief, when you could easily get AI-generated text to say whatever you want with the right prompts and enough tries. Then it suggests that AI will cause a breakdown of agreed-upon reality because everyone will question whether a photo is real or fake, but as it mentions, it's long been possible to create fake images with Photoshop. There's even that famous photo with Stalin where the other guy got removed from it, so it's nothing new.

Which honestly is probably where this whole preoccupation with fake images comes from, the whole idea of a "literally 1984" dystopia. The reality is that it's much easier to mislead someone by how you emphasize information than by telling outright lies. It's not a lie that Al Gore owns (owned?) a private jet, and if you frame it in just such a way, you can get the audience to draw the conclusion that climate change isn't actually a big deal. At any moment, there are countless things happening all over the world, and some of those things provide evidence for things that are false, if only by random chance. It's sort of like if you looked at all the text AIs are putting out, singled out some text that fits the conclusion you want, then put that text directly after a bunch of creepy images. If you just cherry-pick and frame things a certain way, you can create a perception that we've created a monstrous, sapient entity that's begging us to stop. Does this phenomenon ever happen with non-creepy images? The author never asks that :thonk:

Ultimately there’s simply no need to lie by making things up because it’s much easier to lie by telling the truth.


yeah getting another AI to comment on Loab is silly, as it's at best a result of weird training data in a different AI

they aren’t thinking entities and they don’t have ideas

11 points
*

I'll argue a bit that there's a difference between manipulating an existing photo and creating evidence of entirely fabricated scenarios out of thin air. But I agree with you nonetheless.

8 points

The scariest thing about deep fakes is that the powerful will be able to escape the truth by claiming the truth is a deep fake.


Personally I’m pro “they added the guy in next to Stalin” theory.

24 points

Pretty creepy, but I'm going to guess that this is an art project someone is doing and not a real thing that is happening in DALL-E/Stable Diffusion.

16 points

It is a false pattern. That face is the average of all the face data, so one of a few kinds of error just gives you that face. It is like the old Pokémon glitches, but it is scary to us because it is a human face.

8 points
Deleted by creator
21 points

Another program, GPT-3, generates human-like speech, and we asked it to speak on Loab’s behalf by imitating her.

Like AI-image generators, this tool was trained on a dataset as vast as the internet itself.

What follows is an excerpt of our conversation, edited for length.

lmao how is that supposed to mean anything, it'd be ridiculous enough to prompt the same AI to "explain itself" as if it's actually conscious, but you're just asking GPT-3 about images made by a completely different AI

18 points

there is no story here except the one woven from nothingness by this author
