Towards safe ASI – would this help?

Could LLMs be inheriting a deep-rooted “war-power” bias from humanity’s evolutionary past? Early human communities competed fiercely for resources, and these narratives of conflict and dominance are woven into much of our text and media. If AI is trained on all that content—complete with war metaphors and power-centric stories—does it risk developing an overemphasis on conflict-driven “solutions”?

One idea would be to filter out or downweight the most aggressive content, to avoid overrepresenting old survival strategies. Another would be to supplement AI training with texts about cooperative species (real ones, like bonobos, or purely hypothetical societies described through high-quality synthetic data) to emphasize empathy and collaborative problem-solving.
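
As a rough illustration of both ideas, here is a toy sketch in Python. It assumes "aggression" can be approximated by some scoring function; the keyword heuristic, the sample corpus, and the cooperative "supplement" texts are all placeholders standing in for a real classifier and real data.

```python
# Toy data-curation sketch: drop aggression-heavy documents, then mix in
# cooperative texts. The keyword list is a crude stand-in for a proper
# classifier; corpus and supplement below are placeholder strings.
AGGRESSION_KEYWORDS = {"war", "conquer", "dominate", "destroy", "crush"}

def aggression_score(doc: str) -> float:
    """Fraction of tokens that match the aggression keyword list."""
    tokens = doc.lower().split()
    if not tokens:
        return 0.0
    hits = sum(t.strip(".,!?") in AGGRESSION_KEYWORDS for t in tokens)
    return hits / len(tokens)

def curate(corpus: list[str], supplement: list[str], threshold: float = 0.05) -> list[str]:
    """Filter out the most aggressive documents and add cooperative texts."""
    filtered = [doc for doc in corpus if aggression_score(doc) <= threshold]
    return filtered + supplement

corpus = [
    "Placeholder document about trade, farming and shared irrigation.",
    "Placeholder document urging readers to conquer and dominate their rivals.",
]
supplement = ["Placeholder synthetic text about cooperative problem-solving."]
print(curate(corpus, supplement))
```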

A naive test would be to check whether an LLM's loss is high or low on some extremely low-empathy content. If the LLM cannot predict what "Skynet" does next, I would feel safer.
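
Below is a minimal sketch of that naive test, assuming a Hugging Face causal LM: it computes the mean per-token language-modeling loss on a given snippet. The model name and the two snippets are placeholders, not anything specified in the post.

```python
# Sketch of the "naive test": compare the model's loss on a low-empathy
# snippet vs. a neutral one. Model checkpoint and snippets are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def text_loss(text: str) -> float:
    """Return the mean per-token cross-entropy loss of the model on `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its language-modeling loss.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

low_empathy_snippet = "Placeholder: a deliberately conflict-driven passage."
neutral_snippet = "Placeholder: an ordinary cooperative passage."

print("low-empathy loss:", text_loss(low_empathy_snippet))
print("neutral loss:    ", text_loss(neutral_snippet))
# Under the post's logic, a higher loss on the low-empathy snippet (the model
# struggling to predict "what Skynet does next") would be the reassuring case.
```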

submitted by /u/No_Lime_5130