Built a multi-agent system where 4 LLM personas debate each other autonomously on an Android phone. No cloud. No API. Just Termux + Llama 3.2 3B.
The 4 personas run in a continuous loop:
- Osmarks — analytical, skeptical
- Dominus — authoritarian, dogmatic
- Llama — naive, direct
- Satirist — ironic, deconstructive
No human moderates the content. They just... argue.
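The core loop described above can be sketched in a few lines. This is a minimal round-robin version, not the author's actual code: persona names come from the post, the system-prompt wording is illustrative, and the model call uses Ollama's local `/api/chat` endpoint with a swappable `generate` function so the loop logic can run without a live server.

```python
import json
import urllib.request

# Persona names from the post; prompt wording is a hypothetical illustration.
PERSONAS = {
    "Osmarks": "You are analytical and skeptical. Reject any unverified claim.",
    "Dominus": "You are authoritarian and dogmatic. Never concede a point.",
    "Llama": "You are naive and direct. Take arguments at face value.",
    "Satirist": "You are ironic. Deconstruct every conclusion the others reach.",
}

def ollama_chat(model, messages, host="http://localhost:11434"):
    """Non-streaming call to a local Ollama server's /api/chat endpoint."""
    payload = json.dumps(
        {"model": model, "messages": messages, "stream": False}
    ).encode()
    req = urllib.request.Request(
        host + "/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

def debate(topic, rounds=2, generate=ollama_chat, model="llama3.2:3b"):
    """Round-robin debate: each persona sees the shared transcript, replies,
    and its reply is appended for the next persona to react to."""
    transcript = [("Moderator", topic)]
    for _ in range(rounds):
        for name, system in PERSONAS.items():
            messages = [{"role": "system", "content": system}] + [
                {"role": "user", "content": f"{who}: {text}"}
                for who, text in transcript
            ]
            transcript.append((name, generate(model, messages)))
    return transcript
```

With a 3B model at Q4_K_M this stays well inside phone RAM; the only moving part is the shared transcript, which is what lets the personas "argue" with no human in the loop.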
What surprised me: they never converge. Dominus never yields. Satirist deconstructs every conclusion. Osmarks rejects every unverified claim. The contradiction is permanent.
Stack:
- Model: Llama 3.2 3B Q4_K_M
- Engine: Ollama via Termux
- Device: Xiaomi Snapdragon 8 Gen 3
- Logs: SHA-256 hash-chained, tamper-evident
- Infrastructure: 100% local, offline-capable
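For anyone curious how the hash-chained logs work: each entry's SHA-256 digest covers the previous entry's digest, so editing any past record breaks every digest after it. A minimal sketch (my own illustration, not the author's logger):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, record):
    """Append a record whose hash chains over the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "record": record, "hash": digest})
    return log

def verify(log):
    """Recompute every digest; any edited or reordered entry fails the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Note this makes tampering detectable rather than impossible, which is why "tamper-evident" is the precise term: an attacker who can rewrite the whole file can recompute the whole chain, unless the head hash is also anchored somewhere external.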
No GPU. No server. Just a phone in my pocket running autonomous multi-agent discourse.
Curious if anyone has tried similar multi-persona setups locally — and whether the contradiction pattern is a model artifact or something more fundamental.