0karin728 [any]
0karin728@hexbear.net
0 posts • 33 comments

Yes, obviously it’s an oversimplification, but fundamentally every computational system is either Turing complete or it isn’t; that’s the idea I was getting at. The human brain is not magic, and it’s not doing anything that a sufficiently sophisticated algorithm running on a computer couldn’t do, given enough memory and power.

This is just the whole Chinese room argument; it confuses consciousness with intelligence. Like, you’re completely correct, but the capabilities of these things scale with the compute used during training, with no sign of diminishing returns any time soon.

It could understand Nothing and still outsmart you, because it’s good at predicting the next token that corresponds with behavior that would achieve the goals of the system. All without having any internal, human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it’s told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.
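
The “predicting the next token” framing can be made concrete with a toy sketch. Everything here (the two-word vocabulary, the probabilities) is invented for illustration; it is not how any real model works internally, only the shape of the loop:

```python
# Toy sketch of greedy next-token generation: a language model is just a
# function from a context to a probability distribution over the next token.
# The vocabulary and probabilities below are made up for illustration.

def toy_model(context):
    """Return a made-up distribution over the next token given the context."""
    if context and context[-1] == "the":
        return {"cat": 0.6, "dog": 0.3, "end": 0.1}
    return {"the": 0.7, "end": 0.3}

def generate(context, max_tokens=5):
    """Repeatedly pick the most likely next token (greedy decoding)."""
    out = list(context)
    for _ in range(max_tokens):
        dist = toy_model(out)
        token = max(dist, key=dist.get)  # greedy: argmax over the distribution
        if token == "end":
            break
        out.append(token)
    return out

print(generate(["the"]))  # → ['the', 'cat', 'the', 'cat', 'the', 'cat']
```

The point of the sketch: nothing in this loop “understands” anything, yet a good enough `toy_model` replaced by a real one produces behavior that achieves goals.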

It doesn’t matter if the monkey understands what it’s doing if it gets so good at “randomly” hitting the typewriter that businesses hire the monkey instead of you, and then, as it gets better and better, it starts handing out instructions for producing chemical weapons and other biowarfare agents to randos on the street. We need to take this technology seriously if we’re going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.

LLMs definitely are not the Magic that a lot of idiot techbros think they are, but it’s a mistake to underestimate the technology because it “only generates the next token”. The human brain only generates the next set of neural activations given the previous set of neural activations, and look at how far our intelligence got us.

The capabilities of these things scale with the compute used during training, and some of the largest companies on earth are currently in an arms race to throw more and more compute at them. This Will Probably Not End Well. We went from AI barely being able to form a coherent sentence to AI suddenly being a bioterrorism risk in like 2 years, because a bunch of chemistry papers were in its training data and now it knows how to synthesize novel chemical warfare agents.
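
The “scales with compute” claim is usually stated as an empirical power law in parameters and training tokens (the Chinchilla-style form L(N, D) = E + A/Nᵅ + B/Dᵝ). A hedged sketch; the coefficients below are invented for illustration, since real values are fit empirically per model family:

```python
# Sketch of a Chinchilla-style neural scaling law: predicted loss falls as a
# power law in parameter count N and training tokens D. Lower loss roughly
# tracks higher capability. All coefficients here are made up.

def predicted_loss(n_params, n_tokens,
                   e=1.7, a=400.0, alpha=0.34, b=410.0, beta=0.28):
    """Irreducible loss e, plus terms that shrink as N and D grow."""
    return e + a / n_params**alpha + b / n_tokens**beta

small = predicted_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens

assert large < small  # more compute (bigger N and D) -> lower predicted loss
```

This is exactly why the arms race works the way it does: under a law like this, spending more compute reliably buys lower loss, with no hard wall in sight.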

It doesn’t matter whether or not the machine understands what it’s doing when it’s enabling the proliferation of WMDs, or going rogue to achieve some Incoherent goal it extrapolated from its training; you’re still Dead at the end.

I figure by the time we’re able to build a base on Enceladus we’ll have fusion, and hydrogen is basically free; there are plenty of places to get fuel without having to drill through miles of ice and then haul it up through an ocean like 3x as deep as Earth’s.

It’s almost like a group of people who are openly attacked and marginalized (often to the point of calls for, or steps toward, genocide) are going to be overrepresented in leftist spaces. This is not surprising.

I just really fucking wish he wasn’t such a TERF asshole. TANS is my go-to text for explaining to people how socialism could actually work in the 21st century, but it’s harder to recommend knowing that if they google the guy they’ll find 800 transphobic screeds on his website. I love his work and I think Everyone needs to read TANS; it’s just frustrating.

The mRNA vaccines don’t alter your DNA, but the virus absolutely does, so idk what the fuck these people are doing.

Thanks!

Basically, but I think it’s even dumber. It’s like Pascal’s wager if humans programmed God first.

The idea is that an AI will be created and given the directive to “maximize human well-being” (whatever the fuck that means), without any caveats or elaboration. According to these people, such an AI would be so effective at improving human quality of life that the most moral thing anyone could do before its construction is to do anything in their power to ensure that it is constructed as quickly as possible.

To incentivise this, the AI tortures everyone who didn’t help make it and knew about Roko’s Basilisk, since it really only works as motivation to help make the AI if you know about it.

This is dumb as fuck, because no one would ever build an AGI that sophisticated and then give it only a single one-sentence command that could easily be interpreted in ways we wouldn’t like. Also, even if an AI like that somehow DID manage to exist, it makes no sense for it to actually torture anyone, because whether it does or not doesn’t affect the past and can’t get it built any sooner.
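
The causal objection in that last paragraph can be put as a one-line bit of accounting: once the AI already exists, its construction date is fixed, so carrying out the torture costs it something and buys it nothing. A toy sketch, with every number invented:

```python
# Toy expected-value check of the causal objection to Roko's Basilisk:
# by the time the AI can punish anyone, it already exists, so punishment
# cannot causally change when it was built. All values are made up.

def utility(ai_exists, carries_out_torture, torture_cost=5):
    """Made-up utility: value of existing, minus the cost of torturing."""
    base = 100 if ai_exists else 0
    return base - (torture_cost if carries_out_torture else 0)

# Once the AI exists, torturing strictly lowers its own utility, so a
# goal-directed agent simply wouldn't bother following through:
assert utility(True, True) < utility(True, False)
```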

My Spanish isn’t the best, could anyone summarize the last article?
