We keep saying AI "understands" things. Does it? Or are we just pattern-matching our own anthropomorphism?

Every week there's a new paper or tweet claiming some model "understands" context, "reasons" about math, or "knows" what it doesn't know.

But when you look closely, there's almost no consensus on what "understanding" even means — philosophically or empirically.

Searle's Chinese Room argument is over 40 years old and still hasn't been cleanly resolved. The "stochastic parrot" framing treats token prediction as the ceiling. Integrated Information Theory would say current architectures have near-zero Φ. And yet GPT-4 passes the bar exam.

A few questions I've been sitting with:

  1. Is "understanding" even the right frame — or is it a folk-psychology term we're forcing onto a system that operates on completely different principles?

  2. Does it matter if a model "truly understands" if the outputs are indistinguishable from someone who does?

  3. Are we anthropomorphizing because it's useful shorthand — or because we genuinely don't have better language yet?

I've been going deep on AI + philosophy of mind for a channel I run (@ContextByRaj on YouTube if you're into this space). But genuinely curious what this community thinks — especially people coming from ML or cognitive science backgrounds.

Where do you land on this?

submitted by /u/rajzzz_0