LLMs don’t have self-knowledge that is beneficial for predicting their correctness?

Previous work has suggested or relied on LLMs having self-knowledge, e.g., identifying or preferring their own generations [https://arxiv.org/abs/2404.13076] or being able to verbalize their own uncertainty [https://arxiv.org/abs/2306.13063]. But some papers [https://arxiv.org/html/2509.24988v1] claim specifically that LLMs don't have knowledge about their own correctness. Curious about everyone's intuition on what LLMs do and don't have self-knowledge about, and whether this result fits your predictions.
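For concreteness, here's a minimal sketch of what "predicting their own correctness" gets measured as in this line of work: elicit a confidence score from the model for each of its answers, then check whether that score separates correct from incorrect answers (e.g., via AUROC). The records below and the plain-Python AUROC helper are hypothetical illustrations, not the actual setup of any of the linked papers.

```python
# Sketch: does a model's self-reported confidence predict its own correctness?
# AUROC ~0.5 means the confidence carries no signal; well above 0.5 suggests
# at least some self-knowledge about correctness.

def auroc(scores, labels):
    """AUROC: probability that a randomly chosen correct answer received a
    higher confidence score than a randomly chosen incorrect one (ties
    count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-question records: (model's stated confidence in its own
# answer on a 0-1 scale, whether that answer was actually correct).
records = [
    (0.90, 1), (0.80, 1), (0.90, 0), (0.60, 0),
    (0.70, 1), (0.50, 0), (0.95, 1), (0.40, 0),
]
scores = [c for c, _ in records]
labels = [y for _, y in records]

print(f"confidence-vs-correctness AUROC: {auroc(scores, labels):.2f}")
```

The interesting variant, as I read the last paper's claim, is comparing this AUROC for the model judging *its own* answers versus a different model judging them: if an external model scores just as well, the "self" part of self-knowledge isn't doing any work.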

submitted by /u/Envoy-Insc