Permanently Deleted

21 points

I work with AI at school, alongside people a lot smarter than me, and yeah, that shit is not intelligent in any form. You’re still telling it how to make decisions; that’s not intelligence.

5 points
Deleted by creator
7 points

I wouldn’t consider that learning, the decisions are still made based on the dataset that was fed to it and the weights of each node. It’s incapable of seeing something new and knowing what it is. That’s why I don’t consider it intelligent.


I was gonna be glib and say people rely on their dataset of education and past experiences and generally can’t recognize something completely new. But we can approximate, I guess: if you showed me a picture of a flower from the Amazon that no one’s ever seen, I couldn’t name it, obviously, but I could tell you it’s probably a plant, and a flowering one; or if you showed me a picture of a newly discovered exoplanet, I could tell you it’s a rock or something. Current ML is definitely not there, because our categories are infinitely extensible (or seem like it), and meat intelligence is better at generalizing so far.


It’s incapable of seeing something new and knowing what it is

There are ways of teaching AI to recognise a new class of thing.
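One common trick, for instance, is few-shot classification on top of a model’s feature vectors: average the embeddings of a handful of examples of the new class and classify by nearest centroid, with no retraining. A toy sketch in plain Python; the 2-D “embeddings” and class names are made up for illustration:

```python
import math

# Toy nearest-class-mean ("few-shot") classification: given a few labelled
# examples of a brand-new class, the system can start recognising it without
# retraining, by comparing feature vectors to per-class centroids.
# The 2-D points here stand in for a real model's embedding vectors.

known_classes = {
    "cat": [(1.0, 0.1), (0.9, 0.2), (1.1, 0.0)],
    "dog": [(0.0, 1.0), (0.1, 0.9), (0.2, 1.1)],
}

def centroid(points):
    """Mean of a list of same-length vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(vec):
    """Return the class whose centroid is closest to vec."""
    return min(
        known_classes,
        key=lambda c: math.dist(centroid(known_classes[c]), vec),
    )

# A class the system has never seen before, added from just three examples:
known_classes["axolotl"] = [(-1.0, -1.0), (-0.9, -1.1), (-1.1, -0.9)]

print(classify((-1.0, -0.95)))  # -> axolotl
```

Whether that counts as “knowing what it is” is of course the whole argument.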


Intelligence is a broad term with a lot of meanings, including the handling of complex tasks.

It’s not sentient or conscious.

13 points

Part of the issue is just with the term, “intelligence.” It’s a nebulous term that means one thing when you’re measuring it and a completely different thing when you’re thinking about future implications. There are dozens of different definitions going around, and generally they’re either too broad to be measured or too narrow to draw conclusions from.

And the perspective of equating this “intelligence” with capabilities or competence is idealist, individualistic, and doesn’t really reflect reality. A person’s success or failure often has more to do with whether there is a role that they are needed to fulfill. A very competent architect may still not be successful if the field is highly competitive; they may never even be given a chance. But a moderately competent architect could be very successful if nobody else knows what they’re doing, or if there are just tons of buildings that need to be designed. Furthermore, a person might need certain accommodations to be successful; for instance, they might not work as well on a strict deadline, or be able to stay focused all the time, but as long as they can work flexibly, they’ll do well.

The same applies to machines. Machines have different needs and different capabilities than humans. We could imagine a person with an extreme form of neurodivergence to where they think like a computer, recording facts, performing calculations, identifying patterns of behavior, but with the limitations of computers as well. How well would such a person fare in society? Most likely, they’d have a hard time, because society isn’t structured with them in mind.

If you think in terms of Great Man Theory, then machine intelligences are an existential threat, because we can imagine a machine that is more “great” on any number of metrics than is humanly possible. But success and failure are functions of the roles and responsibility that get assigned to an entity. A supercomputer assigned to brew coffee will not work itself up to the top levels of society, it will just brew coffee really well. If it starts spewing out answers to geopolitical conflicts instead of spewing out coffee, then it will be unplugged and reprogrammed until it starts spewing out coffee again, because that’s the role that it’s been placed in. Just like what happens with humans. Being smart or being right doesn’t count for shit if no one listens to you.

Understanding what is and isn’t possible regarding the future of AI requires an actual examination of the specific capabilities and limitations that computers possess. It requires rejecting the view that intelligence is a single attribute that is interchangeable for all sorts of different tasks. And it requires consideration as to what roles society might place it in, and how things can or can’t be restructured to accommodate its abilities and limitations, and what social forces will be involved in that.

13 points

Honestly we should drop the term AI entirely and just call these programs what they actually are: computers doing tons of matrix multiplications really fucking fast.

10 points

Black box probability guesser
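“Probability guesser” is fairly literal too: a classifier’s raw scores get squashed into a probability distribution, typically with softmax, and the “answer” is just whichever label got the highest guessed probability. A toy sketch with made-up labels and scores:

```python
import math

# A classifier's raw scores ("logits") become a probability distribution
# via softmax; the prediction is the label with the highest probability.
# The labels and logits below are invented for illustration.

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

labels = ["cat", "dog", "toaster"]
logits = [2.0, 1.0, 0.1]

probs = softmax(logits)
guess = labels[probs.index(max(probs))]
print(dict(zip(labels, (round(p, 3) for p in probs))), "->", guess)
```

The “black box” part is everything that produced the logits in the first place.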

8 points

That’s hard to say though.

7 points
Deleted by creator

Being smart or being right doesn’t count for shit if no one listens to you.

Excellent reply overall, and this line hits particularly hard. As Marxists, we tend, Cassandra-like, to point out systemic issues and their consequences, only to be completely ignored by an economic order that demands we be wrong. So it’s immensely frustrating to see people with power fret over problems invented by fiction while the real existential crises are staring everyone in the face.

6 points
Deleted by creator

It is so funny to me to see capitalists watch mildly anti-capitalist media about the slippery slope of technology and go, “let’s actually build the soulless murder-machine, but with ads this time.”

8 points
Kind of a tangent

From what I’ve heard, the original story of Frankenstein illustrates this point well, and seems pretty relevant to modern day tech bro psychology. After Victor creates the creature, he abandons him because he looks ugly, and not the way he’d pictured him. With no home, no friends or family, and no language, the creature wanders around until he finds a family, and secretly starts helping them out with chores, collecting firewood and stuff like that, while also learning their language. Eventually, he introduces himself to the blind father when he’s alone, and things seem to be going alright, until the rest of the family shows up and are horrified by his appearance and drive him out. Every time he tries to help, people assume the worst and act aggressively towards him, so he decides to seek out Victor and threatens him and demands that he create a female version of him, so that he can have companionship and acceptance. Victor concludes that the creature is inherently evil, and that if he created another one, they’d be evil too, and they might even be able to reproduce, and someday the whole world could be overrun with them!

The part where the creature helps out the family shows not only that he’s not evil, but also that he has the potential to contribute as a productive member of society. Had the blind man lived alone, the creature could’ve lived with him peacefully. The way the creature behaves suggests that he possesses a similar nature to a normal human, but, because he is an outcast and is refused any place in society, he becomes a problem to be solved. Victor is blind to all this; he’s not interested in seeing from the creature’s perspective or thinking through the position he’s placed him in. The creature’s demand for a companion is the only way that he can imagine being accepted by someone. If Victor could empathize with or try to understand him, and put in effort to find a way for him to have a role, the companion would be a moot point. But his mind only goes to “scary monster yelling at me” and abstract, faraway possibilities. The material and social needs of others are a total blind spot. As far as Victor can see, anyone who reacted to the creature’s benevolent actions with animosity and suspicion is proven right by the creature’s change in attitude, which revealed his true, evil nature, while we the readers can see that the change in attitude came about because of that suspicion and animosity.

Many tech bros and futurists are unwilling to actually try to understand how machines “think,” or how they would fit into social structures, because they’re already used to not doing that with other humans. If your barista messed up your order, it proves that they are “unintelligent” and that they rightly belong in a lower place in society. It is inconceivable that they might be bad at making coffee but good at something else, and it is equally inconceivable that, if the tech bro were in a position where their perceived worth was determined by their ability to consistently make good coffee, they would not be experts at it. After all, coffee-making requires a lower level of “intelligence” than their jobs, so of course they’d be great at it if they needed to be. This is the part where “it’s difficult to make someone understand something if their paycheck depends on them not understanding it” comes in, and the rest is downstream of that.

6 points
Deleted by creator
11 points
Deleted by creator

It’s been said that TikTok’s algorithm reads your mind. But it’s not reading your mind—it’s reading your data.

This line could be ripped straight out of a cyberpunk novel.
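And “reading your data” can be surprisingly low-tech: a bare-bones recommender needs nothing but interaction history. A toy sketch of user-based collaborative filtering; the users and video IDs are invented for illustration:

```python
import math

# "Reading your data" in miniature: recommend to a user the videos watched
# by other users with similar histories, weighted by how similar they are.

watch_history = {
    "alice": {"v1", "v2", "v3"},
    "bob":   {"v2", "v3", "v4"},
    "carol": {"v1", "v4", "v5"},
}

def similarity(a, b):
    """Cosine similarity between two sets of watched videos."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

def recommend(user):
    """Rank unseen videos by the taste-overlap of the users who watched them."""
    mine = watch_history[user]
    scores = {}
    for other, theirs in watch_history.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for video in theirs - mine:
            scores[video] = scores.get(video, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # alice's unseen videos, best guess first
```

Production systems layer learned embeddings and engagement signals on top, but the underlying move is the same: your history, correlated against everyone else’s.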

8 points
Deleted by creator
6 points
Deleted by creator

technology

!technology@hexbear.net

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

  • 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
  • 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
  • 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
  • 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
  • 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
  • 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
  • 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.
