Do LLMs have self-knowledge, and is it beneficial for predicting their correctness?
Previous works have suggested or assumed that LLMs have self-knowledge, e.g., the ability to identify or prefer their own generations [https://arxiv.org/abs/2404.13076], or to express their own uncertainty [https://arxiv.org/abs/2306.13063]. But some papers [https…