
swlabr

swlabr@awful.systems
3 posts • 37 comments

Fuck yeah, SneerClub 2: the Sneerquel.

Swlabr the Blue. That was my name. I am Swlabr the Uncoloured. And I come back to you now - at the turn of the tide.


Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?

Did you not read HPMOR, the greatest story ever reluctantly told to reach the wider infosphere about rationalism and, by extension, AI alignment???

Why were there no battle plans in the basement of the pentagon that were written for this exact moment?

It’s almost like AGI isn’t a credible threat!

Heck, 20+ years is enough time to educate, train, hire, and surgically insert an entire generation of people into key positions in the policy arena specifically to accomplish this one goal, like sleeper cell agents. Likely much, much easier than training highly qualified alignment researchers.

At MIRI, we don’t do things because they are easy. We don’t do things because we are grifters.

Didn’t we pretty much always know it was going to come from one or a few giant companies or research labs? Didn’t we understand how those systems function in the real world? Capitalist incentives, Moats, Regulatory Capture, Mundane utility, and International Coordination problems are not new.

This is how they look at all other problems in the world, and it’s fucking exasperating. Climate change? I would simply implement ‘Capitalist Incentives’. Wealth inequality? Have you tried a ‘Moat’? Racism? It sounds like a job for ‘Regulatory Capture’. Yes, all problems are easily solvable with 200 IQ and buzzwords. All problems except the hardest problem in the world, preventing Skynet from being invented. Ignore all those other problems; someone will ‘Mundane Utility’ them away. For now, we need your tithe; we’re definitely going to use it for ‘International Coordination’, by which I totally don’t mean buying piles of meth and cocaine for our orgies.

Why was it not obvious back then? Why did we not do this? Was this done and I missed it?

We tried nothing and we’re all out of ideas!


If you or a loved one has been hit by a car, suffered amnesia, and only remembers the AI thought experiment equivalent of “The Game”, you may be entitled to financial compensation.


MediaCorp: “In the interests of increasing our company valuation, we will make our business decisions as far right as legally acceptable, as any profit-seeking company is wont to do. Unrelatedly, inside our economic framework, we don’t care about the actors and writers since we can replace them with robots. They can starve to death after they lose their homes.”

LW: crickets chirping

MediaCorp: “We will continue to have token representation in our media to remain palatable to statistically significant market segments.”

LW: “Where’s the economic incentive? DAE silent majority???”


bro just one more AI company bro, bro I swear just one more AI company and we’ll reach singularity bro

Unrelatedly, I just unlocked a core memory: I learned about the “signularity” (sic, misspelled as a joke) from the webcomic Questionable Content. Why do I remember that?


Now I’m thinking about how we can defeat Roko’s basilisk by watching Disney’s Meet the Robinsons (2007).


Broke: republican book banning

Woke: don’t ban books

Roke(o): On one hand I am a fan of regressive content bans, but allowing ChatGPT to decide which books get banned is how we get the basilisk!!!


Are there crazy people adjacent to the community? Of course, and there are certainly loud billionaires co-opting it to their own purposes, and even some of the people from the beginning have failed to live up to their aspirations.

99% of EA funding went into building a lampshade to hang over this minor quibble.


I constantly experience [the Gell-Mann amnesia] effect on this subreddit; everyone sounds so smart and so knowledgeable until they start talking about the handful of things I know a little bit about (leftism, the arts, philosophy) and they’re so far off the mark — then there’s another post and I’ve forgotten all about it

Bias noted, impact not reduced. Basic rationality failed. These people are so willing to discard their own sense of right and wrong, moral or rational, just to belong in their weird cult. Why is it so hard for these dorks to admit that they don’t actually care about being smart or rational and that they just want a bunch of other dorks to be friends with?
