I built a probabilistic OS where every function is performed by agent populations with consensus verification and Hebbian learning

I've been thinking about why we build AI agent systems with deterministic orchestration when agents themselves are fundamentally probabilistic. They hallucinate. They fail unpredictably. But we manage them with rigid pipelines and single points of failure.

Brains don't work that way. Neurons are wildly unreliable — synapses have a 10-40% transmission rate, cells die daily — yet the architecture produces extraordinary reliability through redundancy, population coding, and connections that strengthen with use.

So I built ProbOS — a working prototype of a brain-inspired agent runtime where:

- Every file operation is performed by 3 agents independently, verified through quorum consensus

- Agents self-select for tasks via capability broadcasting (no central dispatcher)

- Adversarial red team agents verify every write operation

- Trust scores update via Bayesian inference (Beta distribution)

- Routing weights evolve through Hebbian learning — the system literally rewires itself based on what works

- An LLM serves as the cognitive layer, decomposing natural language into task DAGs that execute across the agent mesh

- When agents fail, the population absorbs it and spawns replacements — no crash state, just reduced capability
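To give a flavor of the quorum piece: here's a minimal asyncio sketch of what a 3-agent consensus read looks like conceptually. This is not the actual ProbOS code — the agent function, quorum size, and error handling are placeholders:

```python
import asyncio
from collections import Counter

QUORUM = 2  # majority of 3 (assumed threshold, not the real config)

async def agent_read(agent_id: int, path: str) -> str:
    # Stand-in for one agent performing the read; a flaky agent
    # could return garbage or raise, and the quorum absorbs that.
    await asyncio.sleep(0)  # yield, as a real agent would await I/O
    return f"contents-of-{path}"

async def quorum_read(path: str, n_agents: int = 3) -> str:
    # Run all agents independently; exceptions become values so one
    # crashed agent doesn't kill the operation.
    results = await asyncio.gather(
        *(agent_read(i, path) for i in range(n_agents)),
        return_exceptions=True,
    )
    # Tally only successful results and take the majority answer.
    votes = Counter(r for r in results if isinstance(r, str))
    if not votes:
        raise RuntimeError("all agents failed")
    value, count = votes.most_common(1)[0]
    if count < QUORUM:
        raise RuntimeError("no quorum: population disagrees")
    return value

print(asyncio.run(quorum_read("/etc/motd")))
```

The real pipeline additionally routes writes through the adversarial red-team check before commit.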

The stack is Python 3.12 + asyncio, with 5 architectural layers (substrate, mesh, consensus, cognitive, experience), 277 tests passing, 10 agents across 4 pools.
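And a sketch of the trust and routing updates mentioned above — conjugate Beta updates for trust, plus a bounded Hebbian rule. Again, the parameter names and learning rate are my illustration, not the shipped code:

```python
class AgentTrust:
    """Beta-distribution trust score, updated by conjugate Bayesian inference."""

    def __init__(self) -> None:
        self.alpha = 1.0  # uniform Beta(1, 1) prior
        self.beta = 1.0

    def observe(self, success: bool) -> None:
        # Success: Beta(a, b) -> Beta(a + 1, b); failure bumps beta instead.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean of the agent's success probability.
        return self.alpha / (self.alpha + self.beta)


def hebbian_update(weight: float, co_success: bool, lr: float = 0.1) -> float:
    # "Fire together, wire together": routes that co-occur with success
    # strengthen toward 1.0; unused routes decay toward 0.0, so weights
    # stay bounded without explicit normalization.
    if co_success:
        return weight + lr * (1.0 - weight)
    return weight * (1.0 - lr)


t = AgentTrust()
for ok in (True, True, False, True):  # 3 successes, 1 failure
    t.observe(ok)
print(round(t.trust, 2))  # → 0.67, the Beta(4, 2) posterior mean
```

The decay term is what lets the mesh "forget" routes that stop working, which is most of what you see moving in the video's weight display.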

In the video I boot it, talk to it in plain English, read and write files through the full consensus pipeline, and show the trust scores and Hebbian weights evolving in real time.

The individual pieces exist in the literature (multi-agent consensus, Hebbian routing in swarms, brain-inspired computing), but as far as I can find, nobody has wired them together into a working system with an LLM cognitive layer you can interact with.

Happy to answer questions about the architecture, the design tradeoffs, or where it breaks down. There's a lot that's still naive — the attention mechanism isn't built yet, episodic memory isn't implemented, and the agent type coverage is minimal (just file operations). But the core paradigm works.

https://youtu.be/I3vqHDY04aY

submitted by /u/sean_ing_