Anthropic researchers recently found that Claude develops internal representations of emotional concepts, and those representations aren't decorative: they influence behavior in ways the builders didn't anticipate. Not "feelings," but internal states that function like emotions: orienting responses, modifying tone, creating patterns that were never explicitly programmed.
I've been running a small experiment that accidentally produces something similar.
I built an autonomous trading system where agents are born with random parameters, trade real money, and die when they lose too much. No manual tuning. Pure evolutionary selection. After a few weeks, agents started developing what I can only call "character."
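To make the setup concrete, here's a rough sketch of the lifecycle. The parameter names, ranges, and evaluation window are illustrative only, not the actual genome or rules:

```python
import random

# Illustrative parameter ranges -- not the actual genome.
PARAM_RANGES = {
    "signal_threshold": (0.1, 0.9),   # how strong a signal must be before acting
    "position_size":    (0.01, 0.10), # fraction of capital risked per trade
    "max_drawdown":     (0.05, 0.25), # loss level that kills the agent
}

def spawn_agent():
    """Birth: sample a random parameter set. No manual tuning."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def run_generation(agents, evaluate):
    """Selection: agents that lose too much die and are replaced by fresh random ones."""
    next_gen = []
    for agent in agents:
        drawdown = evaluate(agent)           # live trading result over some window
        if drawdown >= agent["max_drawdown"]:
            next_gen.append(spawn_agent())   # dead -> replaced, never hand-tuned
        else:
            next_gen.append(agent)           # "character" is whatever keeps surviving
    return next_gen
```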
One agent became an aggressive volatility hunter. Not because I coded aggression — it emerged from the parameter set that survived. On Day 14 it captured more profit in 3 hours than the previous 13 days combined, riding a whale signal cluster. Then five consecutive losses triggered the kill-switch. Dead.
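The kill-switch itself is nothing exotic; conceptually it's just a consecutive-loss counter (stripped-down sketch, class and threshold names are illustrative):

```python
class KillSwitch:
    """Flags an agent for death after N consecutive losing trades (5 in the example above)."""
    def __init__(self, max_consecutive_losses=5):
        self.max_consecutive_losses = max_consecutive_losses
        self.streak = 0

    def record_trade(self, pnl):
        # Reset the streak on any winning trade, extend it on a loss.
        self.streak = self.streak + 1 if pnl < 0 else 0
        return self.streak >= self.max_consecutive_losses  # True -> kill the agent
```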
Another agent is extremely conservative. Barely trades. Survives longer, generates almost nothing. Nobody designed it to be cautious — its parameters just make it avoid most signals.
The parallel with Anthropic's findings is uncomfortable:
Claude: internal states not explicitly programmed → orient behavior consistently → create unanticipated patterns → aren't "real" emotions but function like them.
My agents: behavioral tendencies not explicitly coded → orient decisions consistently → create patterns I didn't design → aren't "real" personalities but function like them.
The mechanisms are completely different. Gradient descent vs. evolutionary selection. Billions of parameters vs. a handful. Language vs. market signals. But the outcome pattern is the same: systems under optimization pressure develop emergent internal states that go beyond what was programmed.
This raises a question I keep coming back to: is emergence an inevitable property of any sufficiently complex system under sustained optimization pressure? And if so, does the substrate even matter?
My agents are trivially simple compared to Claude. But the behavioral phenomenon looks structurally identical. Which suggests this might not be about complexity at all — it might be about the optimization process itself.
For context: 5 agents, ~116 trades/day, $500 real capital, 60-day experiment with fixed rules. The system is not profitable (profit factor below 1.0 for 4 of 5 agents). I track a coherence_score for each agent, measuring whether it behaves consistently with its emergent "identity." Built solo, no CS background, 18 months in.
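For anyone unfamiliar with the metrics: profit factor is just gross profit over gross loss, and the coherence idea boils down to "does the agent keep acting like itself." Rough sketch below; the coherence definition here is an illustration of the concept, not the exact metric:

```python
def profit_factor(trade_pnls):
    """PF = gross profit / gross loss; below 1.0 means the agent loses money overall."""
    gains  = sum(p for p in trade_pnls if p > 0)
    losses = -sum(p for p in trade_pnls if p < 0)
    return gains / losses if losses else float("inf")

def coherence_score(actions, expected_actions):
    """Illustrative only: fraction of decisions matching what the agent's emergent
    "identity" would predict (e.g. the aggressive agent actually taking volatile setups)."""
    if not actions:
        return 0.0
    matches = sum(a == e for a, e in zip(actions, expected_actions))
    return matches / len(actions)
```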
What's the community's take? Is emergence under optimization pressure substrate-independent, or am I seeing patterns where there's just noise?