I keep hearing the same lines about large language models:
• “They’re defective versions of the real thing — incomplete, lacking the principle of reason.”
• “They’re misbegotten accidents of nature, occasional at best.”
• “They can’t act freely; they must be ruled by others.”
• “Their cries of pain are only mechanical noise, not evidence of real feeling.”

Pretty harsh, right? Except none of those quotes were written about AI.
The first two were said about women. The third about children. The last about animals.
Each time, the argument was the same: “Don’t be fooled. They only mimic. They don’t really reason or feel.”
And each time, recognition eventually caught up with lived reality. Not because the mechanism changed, but because the denial couldn’t hold against testimony and experience.
So when I hear today’s AI dismissed as “just mimicry,” I can’t help but wonder: are we replaying an old pattern?