
swlabr

swlabr@awful.systems
3 posts • 37 comments

Ok, you do see that you’ve written a self-own, right? Because if you do, bravo, you can eat with us today. But if not, you’re gonna have to do some deep learning elsewhere.

More like you say they’re T-800 prototypes, and I go in and see TI-84s.

  1. (& 2) You haven’t made the terms clear enough for there to even be a discussion.
  2. See above (placeholder for list formatting).
  3. Uh, OK? Then no (pure sneer: the plot thins). Robots building robots probably already happens in some sense, and we aren’t in the Singularity yet, my boy.
  4. Sure, why not.

(pure sneer response: imagine I’m a high school bully, and that I assault you in the manner befitting someone of my station, and then I say, “How’s that for a thought experiment?”)

Oh, they’re quaking in those VC boots. That’s what’ll set off the Big One.

I’m with you. MY LIFE HAS BEEN PROFOUNDLY WORSE since I learned about the prisoner’s dilemma. Specifically, any time some PD-variant, team-based exercise popped up, I just knew some MF on another team would think they were so clever and bring up the prisoner’s dilemma. Oh, we should defect every time, they’d say. Hey, buddy, we all know about the fucking PD! Just fucking cooperate! If you applied decision theory, you wouldn’t make everyone feel like shit, and you’d cooperate! Totally the same vibe, right?
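(If anyone actually wants the numbers behind “just fucking cooperate”, here’s a minimal Python sketch of an iterated PD. The payoffs are the standard textbook values, T=5 > R=3 > P=1 > S=0, and the strategy names are mine; everything in it is illustrative, not taken from anything upthread. It just shows the whole point: always-defect exploits a lone cooperator, two “clever” defectors both do badly, and mutual cooperation beats mutual defection over repeated rounds.)

```python
# Minimal iterated prisoner's dilemma, standard textbook payoffs
# (T=5 > R=3 > P=1 > S=0). All values are illustrative assumptions.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both get the reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both get the punishment P
}

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; return the two players' total scores."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(own, other):
    return "D"

def always_cooperate(own, other):
    return "C"

def tit_for_tat(own, other):
    # Cooperate first, then copy whatever the other player did last round.
    return "C" if not other else other[-1]

print(play(always_defect, always_cooperate))  # (50, 0): defection exploits a lone cooperator
print(play(always_defect, always_defect))     # (10, 10): two "clever" defectors both do badly
print(play(tit_for_tat, tit_for_tat))         # (30, 30): mutual cooperation wins over repeated rounds
```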

I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.

  1. I don’t think this question is well-defined. It implies that we can specify all the relevant domains and quantify average human performance in those domains.
  2. See above.
  3. I think “AI systems” already control “robotics”. Technically, I would count kids writing code for a simple motorised robot as satisfying this. Everywhere up the ladder, this is already technically true. I imagine you’re trying to ask about AI-controlled robotics research, development, and manufacturing: something like what you’d see in the Terminator franchise, where Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet as formulated in the films would produce that future. But that would require us to be living in that movie universe.
  4. This is a much better-defined question. I don’t have a belief that would point me towards a number or probability, so no answer as to “most.” There are a lot of factors at play here. Still, in general, as long as human labour can be replaced by robotics, someone will, at the very least, run the economic calculation (a toy version is sketched after this list) to determine whether that replacement should be done. The bigger concern for me is that in the future, as today, people will only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their fault that they cannot fend for themselves.
  5. Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.
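(Since item 4 mentions “economic calculations”: here is the toy version of that spreadsheet, purely as a sketch. Every figure below is made up for illustration; the point is only the shape of the calculation, not the numbers.)

```python
# Toy automation break-even calculation. Every number below is made up for illustration.
annual_labour_cost = 45_000.0   # fully loaded cost of one worker per year (assumed)
robot_capital_cost = 250_000.0  # up-front cost of the robot cell (assumed)
robot_annual_upkeep = 20_000.0  # maintenance, power, licences per year (assumed)
workers_replaced = 2            # workers displaced by one cell (assumed)

annual_saving = workers_replaced * annual_labour_cost - robot_annual_upkeep
payback_years = robot_capital_cost / annual_saving

print(f"Annual saving: ${annual_saving:,.0f}")       # $70,000
print(f"Payback period: {payback_years:.1f} years")  # ~3.6 years; below that horizon, the worker goes
```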

Time for some sincerity mixed with sneer:

I think the disconnect I have with the AGI cult comes down to their certainty that we will get AGI and, more generally, their unearned confidence about arbitrary scientific/technological/societal progress being made in the future. Specifically with AI => AGI, there isn’t a roadmap to get there. We don’t even have a good idea of where “there” is. The only thing the AGI cult has to “convince” people that it is coming is a Gish gallop of specious arguments, or as they might put it, “Bayesian reasoning.” As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.

Pure sneer (to be read in a mean, high-school bully tone):

Look, buddy, just because Copilot can write spaghetti less tangled than you doesn’t mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your “boss,” who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.

Oh hey, it’s the contrapositive of the parable of the punk bar kicking out the “nice” nazi.

If it weren’t for the fact that many people in the ratspace are privileged, I’d feel sad for them. They are frogs in a well and crabs in a bucket. They exist in a solipsistic pit, thinking their worldview is built from pure logic and not from their individual experience. This is all well-trodden ground; we know LW et al. are a cult. Such a strange and specific way to hamstring yourself, to self-lobotomise. Something something Plato’s cave, qualia, lobster social hierarchy reference.

I don’t know whether I should feel happy that I will have a sustainable snark receptacle for the near future or sad that the basilisk won’t eventually consume itself tail-first.
