LLMs forget instructions the same way ADHD brains do. The research on why is fascinating.

I've been building long-running agentic workflows and kept hitting the same problem: the AI forgets instructions from earlier in the conversation, rushes to produce output, and skips boring middle steps.

The research explains why:

"Lost in the Middle" (Liu et al., Stanford 2023) showed a 30%+ performance drop when critical information sits in the middle of the context window: accuracy is high at the start and end and drops in the middle. Exactly like working-memory overflow.

"LLMs Get Lost in Multi-Turn Conversation" (Laban et al. 2025) showed that instructions from early turns get diluted by later content; the more turns, the worse the recall.

65% of enterprise AI failures in 2025 were attributed to context drift during multi-step reasoning.

The parallel to ADHD executive dysfunction isn't metaphorical. Dense local connectivity in transformer attention mirrors the "intense world" theory of neurodivergent processing. Both produce strong pattern recognition plus weak executive control over long sequences.

The fixes map too. "Echo of Prompt" (re-injecting instructions before execution) is the AI equivalent of re-reading the question before answering. Task decomposition into small steps reduces overwhelm. External verification prevents self-reported false completion.
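For anyone wanting to try the re-injection trick, here's a minimal sketch. It assumes a generic chat-style API; `call_llm` is a placeholder stand-in, not a real library call, so swap in your provider's client. The idea is just to echo the original instructions immediately before each step so they land at the end of the context, where recall is strongest:

```python
# Sketch of "echo of prompt" scaffolding: re-inject the original
# instructions before every step of a long-running workflow.

def call_llm(messages):
    # Placeholder: replace with your provider's chat-completion call.
    return "step output"

def run_with_echo(instructions, steps):
    """Run a multi-step task, echoing the instructions before each step."""
    history = [{"role": "system", "content": instructions}]
    outputs = []
    for step in steps:
        # Repeat the instructions right before execution, so they are
        # never "lost in the middle" of a long conversation history.
        history.append({
            "role": "user",
            "content": (
                f"Reminder of the instructions:\n{instructions}\n\n"
                f"Now do this step: {step}"
            ),
        })
        result = call_llm(history)
        history.append({"role": "assistant", "content": result})
        outputs.append(result)
    return outputs
```

External verification slots in the same loop: after `call_llm` returns, check the result with a deterministic test (or a second model call) before appending it, instead of trusting the model's own claim that the step is done.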

Has anyone else noticed this pattern in their agentic builds? Curious what scaffolding techniques others are using for long-running workflows.

submitted by /u/ColdPlankton9273