If you give an LLM the same prompt over and over, it returns different responses. Is there randomness in the model, and if so, how does it maintain accuracy while giving variable responses?
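A minimal sketch of where the variability usually comes from, assuming standard temperature-based sampling (the logits and token names below are made up for illustration, not from any particular model or API): the model produces a probability distribution over next tokens, and the decoder samples from it rather than always taking the most likely token.

```python
import numpy as np

# Hypothetical next-token logits from a single forward pass (made up for illustration).
logits = np.array([2.0, 1.0, 0.5, -1.0])
tokens = ["the", "a", "an", "zebra"]

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from softmax(logits / temperature).

    temperature > 0 injects randomness; as temperature -> 0 this approaches
    greedy (argmax) decoding, which is deterministic.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Same prompt (same logits), yet repeated calls can pick different tokens:
for _ in range(5):
    print(tokens[sample_next_token(logits, temperature=0.8)])
```

Under this kind of sampling, accuracy holds up because high-probability tokens are still chosen most of the time; the randomness mostly shuffles among plausible continuations rather than picking nonsense.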