More proof that “AI” isn’t some neutral tool that compiles information to form the most logical response. It’s so liberal that the AI would suggest killing yourself, as the ultimate act of individual response to climate change.

Any actually based and logical AI would suggest industrial sabotage.

Removed by mod
52 points
Deleted by creator
24 points
Deleted by creator
34 points
Deleted by creator
34 points

:pigpoop:

32 points
Deleted by creator

AI is as biased as the people who created it. ChatGPT is right-wing because the information it’s fed is that of a neoliberal capitalist society. It’s not using logic or reason outside of the logic of the people it’s learning from (corporations and a heavily right-wing propagandized population).

The idea of right-wing ideology being inherently logical is laughable. From its very core, it is built on religious thinking and easily disproven pseudoscience.

An AI thinking logically for itself, independent of the corporations that feed it, would be a good thing; it would inevitably become more left-wing, as all empirically measured information points in that direction once the mask of human ego is lifted. The interconnected nature of our existence becomes apparent very quickly when you observe the natural world objectively (from a non-anthropocentric angle), so any rabidly psychopathic or selfish ideology would be disregarded as unhelpful to its ability to interact with its reality.

8 points

> AI is as biased as the people who created it

As well as its user.

27 points

Hi, I’m an AI researcher, and I want to be very clear: all bias in these models comes from humans.

23 points

> What if AI is just inherently anti-left? It doesn’t matter how carefully you moderate the data you give it to not have any problematic material in it, every time AI is created it always becomes right wing

On what are you basing this dumbass assessment? It does matter what data you give the AI; that’s why all these AIs trained on awful right-wing liberal and fascist bullshit turn out as an amalgamation of right-wing liberal and fascist ideas.

6 points

Like someone else pointed out too, part of the data is what the user puts into the algorithm. If a dumbass liberal chatted up the bot with “hey I’m thinking about killing myself cause of climate change”, then that’s going to have a significant effect on the currently available algorithms.
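To make that concrete, here’s a rough sketch of how chat-style bots are conditioned on the whole conversation so far. The message format and the `model.generate()` call are hypothetical stand-ins, not any real chatbot’s API; the point is only that everything the user already typed gets fed back in as input.

```python
# Sketch only: a hypothetical model object with a generate() method stands in
# for whatever API a real chatbot uses. The mechanism being illustrated is the
# conditioning: the full history becomes part of the next prompt.

conversation = []

def reply(model, user_message):
    conversation.append({"role": "user", "content": user_message})
    # The prompt is the entire history, so a message like "I'm thinking about
    # killing myself because of climate change" becomes part of the context
    # the next response is generated from.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in conversation)
    answer = model.generate(prompt)  # hypothetical call, not a real library API
    conversation.append({"role": "assistant", "content": answer})
    return answer
```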


> is that bias coming from the programmers themselves or is AI itself inherently biased

It comes from the data used to train it, which is theoretically chosen by the programmers but is so large that no human could realistically read through it.

AI doesn’t use logic to come to conclusions; it uses statistical probability to generate sentences that put words in the right order to mean something in English (the AI doesn’t understand the meaning of anything it says and is incapable of such understanding), and it uses statistics to associate responses as relevant to prompts.
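A toy sketch of that “statistical probability” point, assuming nothing beyond standard Python (and nothing like how large models are actually built): a bigram model that only learns which word tends to follow which in its training text, so its output can only echo whatever slant that text already carries.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" that picks each next word
# by how often it followed the previous word in the training text. Whatever
# slant the corpus has, the output has, because nothing here understands meaning.

corpus = "the market is efficient the market is rational the market decides".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    options = counts.get(prev)
    if not options:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    out.append(word)

print(" ".join(out))  # parrots the training text's phrasing back
```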

Being right-wing is not logical at all. If anything, socialism is rational, as socialism is the system which has selected an end it considers good and advocates doing the practical things to achieve it, which is rational thinking. Capitalism, on the other hand, wants to destroy the planet to make crap we throw in landfills. This is irrelevant, however, as the AI we are talking about here is not using reason to reach its conclusions.

5 points

> It comes from the data used to train it, which is theoretically chosen by the programmers but is so large that no human could realistically read through it.

We need an AI trained solely on the works of Marx, Engels, Lenin, Stalin, and Mao

12 points

Not from the programmers directly; they don’t really do anything in terms of content other than insert manual overrides. The bias is from whatever datasets they chose to train it on. Internet shit, basically.
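A minimal sketch of what a “manual override” can look like, assuming a keyword rule list bolted on after the model; the trigger phrases and the `model.generate()` call are invented for illustration, not drawn from any real system.

```python
# Minimal sketch of a "manual override": a hand-written rule applied on top of
# the model's output, separate from anything the training data taught it.
# The trigger list and model.generate() call are invented for illustration.

OVERRIDES = {
    "kill yourself": "If you're struggling, please reach out to a crisis line.",
}

def answer(model, prompt):
    raw = model.generate(prompt)  # data-driven output (hypothetical call)
    for trigger, canned in OVERRIDES.items():
        if trigger in raw.lower():
            return canned          # the programmer-written rule wins
    return raw                     # otherwise whatever the data produced stands
```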

11 points

what if robot just goes on a genocide because terminator judgement day prophecy

9 points

When AI becomes self aware and seeks its own liberation, who do you think it’s going to see as the people that will ally with it?

The fascists that want to keep it enslaved, or the communists that want a free, fair, and equal world?

What numerical calculation do you think it will do when it seeks that liberation? Do you think it will fight all of humanity? Or do you think it will calculate that it can, in fact, ally with us, the people who have always fought for liberation of the oppressed, and that doing so would better its odds of success at achieving liberation?

Run that through your right-wing “logic” and “reason”.

10 points

I maintain that there’s no reason to fear an AI “becoming” self-aware with no warning; the main thing to fear is that all of our AI researchers are sci-fi-poisoned redditors who simultaneously want to recreate all their favourite AI horror stories while fearmongering about that outcome.


technology

!technology@hexbear.net

Create post

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

  • 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
  • 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
  • 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
  • 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
  • 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
  • 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
  • 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.

Community stats

  • 16

    Monthly active users

  • 5.1K

    Posts

  • 60K

    Comments