25 points

Crap, I left my $199 yearly subscription info inside my butler’s Lamborghini. Could your personal valet sky-write your login credentials for nature.com above my Tuscan estate? Specifically, above the Eastern alpaca pens—this Murano glass monocle of mine isn’t a bi-focal. Cheers.

permalink
report
reply
10 points

Okay this has to be a new hexbear site tagline

permalink
report
parent
reply

Excuse me, but it’s only 3.90 for each issue…

Of course I get my money’s worth by reading every single one

permalink
report
parent
reply
17 points

The actual scientific article is open-access: https://www.nature.com/articles/s41586-024-07856-5

permalink
report
reply
14 points

shit goes in, shit comes out

permalink
report
reply
7 points

Would you like the opportunity to explain why African American English is “shit” and comparable to racism?

permalink
report
parent
reply
19 points

i meant shit as in racist internet writings, which the llms are taught with

permalink
report
parent
reply
11 points
*

Ohh sorry. Like the model was trained on bad inputs

permalink
report
parent
reply
4 points
*

People be downvoting things or no reason 😑

permalink
report
parent
reply
8 points

Yeah it turns out when your entire tech industry is dominated by cishet white techbros and the entire foundation of their education and the production of such models is based on that then you get racist as fuck outcomes from any given algorithm that is a product of that same set of normative standards.

If you have the time I highly recommend reading Palo Alto by Malcolm Harris, it’s a great primer on how all this shit got started and why we should frankly just burn Silicon Valley to the ground.

permalink
report
reply
5 points

References weren’t paywalled, so I assume this is the paper in question:

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

Abstract

Hundreds of millions of people now interact with language models, with uses ranging from help with writing1,2 to informing hiring decisions3. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans4,5,6,7. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement8,9. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.

permalink
report
reply
4 points

Thanks, and yes, you’re correct

permalink
report
parent
reply

Science

!science@mander.xyz

Create post

General discussions about “science” itself

Be sure to also check out these other Fediverse science communities:

https://lemmy.ml/c/science

https://beehaw.org/c/science

Community stats

  • 9

    Monthly active users

  • 343

    Posts

  • 247

    Comments