I’ve used apps that thought I was a monkey but recognized white people just fine, I’ve seen racist chat bots go on about how much they love Hitler and hate black people, and I’ve seen AI-generated photos with racist depictions. What I’m wondering is: is it the developers making the AI racist, or is it the code itself that’s racist? Obviously I don’t believe any machine has reached sentience, but I have no doubt in my mind that every AI I’ve experienced is racist. All I ask is why?
Sometimes it’s picked up from racist programmers, but mostly it’s because AI is trained on large sources of publicly available data, which reflects the racism in our society. Imagine a child literally raised by the internet.
AI is largely a Silicon Valley grift at this point. I mean, it’s mostly about acquiring funding for an AI venture that has to return results before the next round. There are a lot of promises being made that can’t really be kept, and people trying to make them happen by any means necessary. One of the things they do to bridge the gap between “functioning software that can be sold as a service” and “AI that works” is have humans do all the actual AI work. They hire cheap workers to train the AI by doing all the stuff the AI is supposed to actually do. CAPTCHA is a good example. They teach AI to recognize street lights by having millions of people voluntarily pick out the street lights. Then Google gets to sell their amazing AI that can recognize street lights. Of course Google is already funded, but it’s the same idea. Behind any great AI program are just people constantly training it and correcting it. Therefore the biases that exist in society come out through the AI results. To train facial recognition, they feed a bunch of white faces into it and have people match the features. The people doing it have no idea they’re only using such a small sample of human appearance. Nobody questions it until it’s already released and non-white people notice.
Chat bots are also trained by the people who use them. 4chan has been typing racist shit into any available chat bot for years. The bots just repeat what’s fed to them.
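A toy sketch of that feedback loop (everything here is invented for illustration, not how any real chat bot works): a bot that just stores what users type and parrots back whatever it has heard most often. Flood it with one message and that message becomes its answer to everything.

```python
from collections import Counter

class ParrotBot:
    """Toy chat bot that only 'learns' by counting what users say."""

    def __init__(self):
        self.seen = Counter()

    def chat(self, user_message):
        # Remember the message, then respond with whatever has been
        # typed at it most often so far.
        self.seen[user_message] += 1
        return self.seen.most_common(1)[0][0]

bot = ParrotBot()
for msg in ["hello", "cats are great", "cats are great", "cats are great"]:
    reply = bot.chat(msg)

print(reply)  # "cats are great" - the bot echoes its loudest users
```

A coordinated group spamming the same garbage plays the role of the 4chan users above: the bot has no notion of what’s acceptable, only of what’s frequent.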
There’s a documentary on Netflix called Coded Bias that addresses this phenomenon.
The handwritten code generally isn’t racist, although it can be.
But AI is trained on hand-fed datasets, which is where the real hard-hitting racism comes in. Training AI using only white people’s faces, or using existing crime stats from racist police departments (namely, all of them): those are the sorts of things that can make the otherwise incomprehensible algorithm behave in explicitly racist ways.
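The “only white people’s faces” failure mode can be sketched in a few lines (all numbers are invented, and a single fake “feature” value stands in for an entire face pipeline): a verifier that learns the range of feature values from a narrow training set will reject anything outside that range.

```python
import random

random.seed(1)

def train_verifier(training_features):
    """Learn the observed range of a feature; accept only values inside it."""
    lo, hi = min(training_features), max(training_features)
    return lambda x: lo <= x <= hi

# Training set drawn from only one narrow slice of the population...
group_a = [random.gauss(0.2, 0.05) for _ in range(1000)]
verifier = train_verifier(group_a)

# ...so samples from an unrepresented group fall outside the learned range
# and get rejected as "not a face".
group_b = [random.gauss(0.7, 0.05) for _ in range(1000)]

a_pass = sum(verifier(x) for x in group_a) / len(group_a)
b_pass = sum(verifier(x) for x in group_b) / len(group_b)
print(a_pass, b_pass)  # group A fully recognized, group B almost entirely rejected
```

Nothing in the code mentions race; the bias lives entirely in which samples made it into `group_a`.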
> using existing crime stats from racist police departments (namely, all of them)
An example: say you’re training an AI to predict which parts of a city are the most prone to crime. To do so, you might feed it a bunch of input data from other cities, including demographic information (like race). You also feed it the output you’re eventually looking for – crime reports from those same cities. All of this is localized to the smallest geographic unit you can manage (e.g., a neighborhood, a block, or even specific addresses). The idea is to train your AI to see patterns between the input and output data, to the point where it can accurately predict the output when given only input data.
Once your AI has sifted through this training data, you give it only the input data (including race) for City A and have it predict the output – that is, predict where crime reports will occur. And what do you know, your AI predicts that most crime will occur in black neighborhoods.
Now, an idiot would tell you that a neutral AI shows that black people are going to commit the most crime. But you can see that the AI is not neutral – it’s trained on crime data that comes from racist policies like the War on Drugs, which from the beginning was intended to arrest black people for drug use at a far higher rate than white people despite black and white people using drugs at similar rates. It accomplished this by putting more cops in black communities (more cops = more arrests) and by taking advantage of the baseline racism present in police (if you’re white and caught with a joint, the cop might just destroy it; if you’re black, you’re more likely to get arrested). There’s also an economic factor that compounds this racism. Racist policies like redlining have robbed black people of a lot of generational wealth, and wealthier people are more likely to do drugs in the privacy of their own home (because they have bigger homes and more privacy) where they’re less likely to get caught.
Your real-world training data is a product of explicitly racist policies, so the outputs from your AI are going to be biased in the same direction, even if your algorithm isn’t “black people suck 010110” and even if the people building your AI aren’t consciously racist themselves. In short, garbage in, garbage out.
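The garbage-in, garbage-out loop above can be simulated in a few lines (all rates and numbers are made up): two neighborhoods with identical true offense rates but different patrol intensity, so the recorded reports, the only thing the model ever sees, differ, and the “prediction” just launders the patrol pattern back out.

```python
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
PATROL_INTENSITY = {"A": 0.9, "B": 0.3}  # chance an offense gets recorded

def generate_crime_reports(neighborhood, population=10_000):
    """Recorded reports = true offenses filtered through police presence."""
    reports = 0
    for _ in range(population):
        offended = random.random() < TRUE_OFFENSE_RATE
        recorded = offended and random.random() < PATROL_INTENSITY[neighborhood]
        reports += recorded
    return reports

# "Training data": the model only ever sees the recorded reports.
training_data = {n: generate_crime_reports(n) for n in ("A", "B")}

# A trivially simple "model": predict future crime proportional to past reports.
total = sum(training_data.values())
predictions = {n: reports / total for n, reports in training_data.items()}

print(training_data)  # A has roughly 3x the reports of B, despite identical behavior
print(predictions)    # so the "AI" concludes A is the high-crime area
```

If those predictions then decide where to send more patrols, the loop closes: more cops in A, more recorded reports in A, and next year’s training data is even more skewed.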
basically just making a robot that tells them to do what they’re already doing; it’s completely useless because it won’t come up with anything new.
Techbros use biased input data, are too lazy to do the research to avoid this, and are often racist themselves in the questions they choose to raise.
Even when these things aren’t true, they’re systemically promoted, because the marketing promise of AI is that you get high-quality models without having to do all that researching and science: you can just throw data at a box and get magic out. They’re not paying you to address the biases in the dataset, or to bring in social science experts, or, fuck, even just a bunch of people to look at and criticize the modeling. They’re paying you because your resume said deep learning on it, and that’s a magic box you put data into and get insights out of. And that pitch works perfectly on 90% of the devs and owners.