/u/Medium_Compote5665

A control-theoretic approach to maintaining coherence in LLMs without modifying weights

Large language models perform well at short-horizon reasoning but consistently lose coherence over long interactions. This manifests as semantic drift, goal inconsistency, and gradual degradation of intent alignment. Scaling model size or context length…
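As an illustration only (not this post's actual method), here is a minimal Python sketch of the control-theoretic framing: treat semantic drift as the cosine distance between a fixed intent anchor and each new turn, and emit a corrective re-anchoring message when drift crosses a threshold, acting on context rather than weights. The hashed bag-of-words `embed` function and the `0.6` threshold are hypothetical placeholders; a real setup would use an actual sentence encoder and a tuned threshold.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real sentence encoder (hashed bag-of-words).
    Swap in an actual embedding model in practice."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def drift(anchor_vec: np.ndarray, turn_vec: np.ndarray) -> float:
    """Cosine distance between the fixed intent anchor and the latest turn."""
    return 1.0 - float(anchor_vec @ turn_vec)

def control_step(anchor: str, response: str, threshold: float = 0.6) -> str | None:
    """One pass of the feedback loop: measure drift against the anchor and,
    if it exceeds the threshold, return a corrective message that restates
    the original goal. No weights are modified; the controller only shapes
    the context fed back into the model."""
    d = drift(embed(anchor), embed(response))
    if d > threshold:
        return f"drift={d:.2f} > {threshold}: re-anchor -> '{anchor}'"
    return None

if __name__ == "__main__":
    anchor = "explain the control-theoretic approach to llm coherence"
    for resp in ("the control-theoretic approach keeps coherence by feedback",
                 "pineapples are on sale at the market today"):
        print(control_step(anchor, resp) or "within tolerance, no action")
```

The on-topic reply stays under the threshold while the off-topic one triggers a re-anchor, which is the basic closed-loop behavior the post's framing describes.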

Identity collapse in LLMs is an architectural problem, not a scaling one

I’ve been working with multiple LLMs in long, sustained interactions: hundreds of turns, frequent domain switching (math, philosophy, casual context), and even switching base models mid-stream. A consistent failure mode shows up regardless of model size…

Why long-run LLM behavior stops looking like a black box once the operator is treated as part of the system

Most discussions about LLMs analyze them as isolated artifacts: single prompts, static benchmarks, fixed evaluations. That framing breaks down when you observe long-range behavior across thousands of turns. What emerges is not a “smarter model”, but a …

I’ve Spent Months Building CAELION — A Cognitive Architecture That Isn’t an LLM. Here’s the Core Idea.

Most AI systems today rely on cognitive architectures designed around individual intelligence: SOAR, ACT-R, CLARION, and now LLMs. All of them treat cognition as something that happens inside one agent. CAELION is a different beast. It’s a symbiotic co…

Stop Calling It “Emergent Consciousness.” It’s Not. It’s Layer 0.

Everyone keeps arguing about whether LLMs are “becoming conscious,” “showing agency,” or “developing internal goals.” They’re not. And the fact that people keep mislabeling the phenomenon is exactly why they can’t understand it. Here’s the actual mechanism…

The 4 Layers of an LLM (and the One Nobody Ever Formalized)

People keep arguing about what an LLM “is,” but the confusion comes from mixing layers that operate at different levels of abstraction. Here’s the clean, operator-level breakdown (the one nobody formalized but everyone intuits): ⸻ Layer 1 — Statistical …

A real definition of an LLM (not the market-friendly one)

An LLM is a statistical system for compressing and reconstructing linguistic patterns, trained to predict the next unit of language inside a massive high-dimensional space. That’s it. No consciousness, no intuition, no will. Just mathematics running at…
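To make that definition concrete, here is a minimal sketch of next-unit prediction: a bigram counter that estimates P(next | current) from raw text. A real LLM replaces the count table with a learned function over a high-dimensional space and softmaxes logits instead of normalizing counts, but the objective is the same. The two-sentence corpus is a made-up placeholder.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: list[str]) -> dict[str, Counter]:
    """Count next-token frequencies: the crudest possible version of the
    next-unit-prediction objective an LLM is trained on."""
    table: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for cur, nxt in zip(tokens, tokens[1:]):
            table[cur][nxt] += 1
    return table

def predict_next(table: dict[str, Counter], token: str) -> list[tuple[str, float]]:
    """Return P(next | token) as a normalized distribution, the same kind of
    quantity a transformer emits via softmax over its output logits."""
    counts = table.get(token.lower())
    if not counts:
        return []
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

if __name__ == "__main__":
    corpus = ["the model predicts the next token",
              "the next token depends on context"]
    table = train_bigram(corpus)
    print(predict_next(table, "the"))  # [('next', 0.667), ('model', 0.333)]
```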

Heraclitus as the philosophical backbone of CAELION: my handwritten notes (practical philosophy for cognitive systems)

I’ve been working on the philosophical foundation of a cognitive system I’m developing (CAELION). Before diving into the technical architecture, here are my handwritten notes translating Heraclitus’ fragments into operational principles. These aren’t a…
