Sailor Sega Saturn
I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.
I had opioids (Norco) in the hospital three or so times when I had pancreatitis, and they gave me some to take home just in case. I didn’t actually need any at home, so I left the bottle closed, but it took a surprising amount of self-control just because of how good it felt in the hospital.
The stuff is potent and best not messed around with.
Ah but the machine gods could be tinkering with your neural wiring to make you think you’re rolling a die when in reality the universe is nothing but the color pink. That’s right, reality is nothing but a shade of fuchsia and dice don’t actually exist. You should take this possibility into account when adjusting your priors for some reason.
Epistemic Status: Barbie.
Epistemic Status: Single/Cali girl ;)
Maybe the mainstream is correct about everything. “SneerClub” seems to be mostly mainstream opinions.
Lurk moar.
For example, the mainstream opinion on covid was usually lagging several weeks behind Zvi’s posts on lesswrong.
Heaven forbid the mainstream take a few weeks to figure shit out when presented with new information instead of violently changing gears every time a new story or rumor gets published.
For anyone curious: https://www.lesswrong.com/s/rencyawwfr4rfwt5C
My favorite quotes from within:
Going on walks considered fine for some reason, very strange.
My current best thought for how to do experiments quickly is medical cruise ships in international waters. […] Medical cruise ships are already an established way to do things without running into regulatory problems.
We are willing to do things that people find instinctively repugnant, provided they save lives while at least not hurting the economy. How could we accomplish this?
None of the central people concerned with AI risk, associated with LW or otherwise, has ever said that we should expect to see AI having negative effects autonomously before it all goes to hell.
Well isn’t that convenient? None of today’s AI problems actually matter (don’t give other people money!). Or at least they don’t matter nearly as much as AI escalating to biblical levels of apocalypse without warning, unless we deep thinkers think deep thoughts and save us all preemptively (give us money!).
A lot of rationalism is just an intense fear of death. Simulation hypothesis? Means that maybe you can live forever if you’re lucky. Superintelligence? Means that your robot god might grant you immortality someday. Cryonics? Means that there’s some microscopic chance that even if you pass away you could be revived in the future at some point. Longtermism? Nothing besides maybe someday making me immortal could possibly matter.
I mean don’t get me wrong I’d give a lot for immortality, but I try to uhh… stay grounded in reality.
Carves reality at the seams.
Oops, anyone know how to stitch reality back together again?
Also, it was a close race, but the award for “most WTF comment” is a tie between:
A policeman stops a black man, who complains about racial profiling, and then the policeman finds evidence of a crime, and says something like “police go where the crime is”?
and
please understand that the reason my mental netcode detected your behavior as agentic still seems to have been justified at the time.
Let us list some of the specific, concrete examples of wokeism from this blog post and its comments. For fun.
- a gay kiss in the background of a scene in Star Wars
- The movie Knives Out
- What James Damore got fired from Google for pushing back against (if you don’t remember this, he was pushing back against the idea that women make just as good programmers as men do)
- Suing Twitter for firing a significantly higher percentage of women than men during layoffs
- The episode The Star-Spangled Man from The Falcon and the Winter Soldier, where Sam Wilson gets profiled by police
- Talking about one’s membership in a protected group all the time
- Leaving / Disinvesting from Twitter after Elon Musk purchased it.
- Disney releasing a new Black Princess
- Corporate training that trains people to discriminate against majorities instead of minorities
OK that’s probably enough…
You can test people with bigger problems, like remembering the units in Wargame Red Dragon
The fact that this “bigger problem” is rote memorization aside, it looks like there are a whopping 1,700 units in that game, with names like M113A3 Super Dragon, CH46-C Phrog, or LVTP-7A1. Imagine some horrible dystopian future where you get to lock yourself in your room for a couple of months with an Anki deck, trying to memorize as many Wargame: Red Dragon units as possible.
Your brain would probably be fried by the time you edge out the competition for your completely-non-Red-Dragon-related job.