We keep saying AI "understands" things. Does it? Or are we just pattern-matching our own anthropomorphism?
Every week a new paper or tweet claims some model "understands" context, "reasons" about math, or "knows" what it doesn't know. But look closely and there's almost no consensus on what "und…