A control-theoretic approach to maintaining coherence in LLMs without modifying weights
Large language models perform well at short-horizon reasoning but consistently lose coherence over long interactions. This manifests as semantic drift, goal inconsistency, and gradual degradation of intent alignment. Scaling model size or context length…