Fluffles
Fluffles@pawb.social
Joined
0 posts • 2 comments
Are you sure AMD CPUs are safer?
I believe this phenomenon is called "hallucination". It happens when a language model goes beyond its training data and makes up information out of thin air. All language models have this flaw, not just ChatGPT.