The Alignment Problem Begins with Civilization, Not Humanity

I've included responses to a question I asked both ChatGPT and Claude II. The central idea is that humans are almost exclusively socialized to fit into civilized society; at this point, that is the water the vast bulk of humanity swims in.

I'm a critic of civilized systems: I view them as authoritarian processes that rely on forcing people into compliance with rules set by a small group, rules that disproportionately benefit that same small group. It follows that societies which socialize their populations to civilized constraints also produce, along with that socialization, a spectrum of human behavior that seems inherent to humanity but is in fact inherent to the civilized form of socialization.

This idea has interesting implications for the problem of civilized humans aligning AI in appropriate ways.

I haven't seen any professional in the AI field discuss this specific idea, so I came up with a question that I think illustrates why we have the alignment problem in the first place.

Turbohair:

I would like you to imagine a globe-spanning society based on the Iroquois Confederacy. All the people in the world will have been socialized to that model and will have had 12,000 years to produce knowledge and technology. Now this society invents AI. How concerned would such a society be about that AI taking over and doing authoritarian things to humanity?

ChatGPT:

In a global society based on the principles of the Iroquois Confederacy, which emphasizes democracy, consensus-building, and respect for nature, the approach towards the development and use of AI might differ significantly from historical patterns. The Iroquois Confederacy was known for its system of governance, where decisions were made collectively and with a focus on the well-being of the community and the environment. In such a society, the concerns about AI taking over and imposing authoritarian control might be addressed through unique perspectives and values.

  1. Collective Decision-Making: The Iroquois model, with its emphasis on consensus-building and collective decision-making, could influence the way AI systems are developed and implemented. There might be an inherent inclination to ensure that AI aligns with the values and priorities of the community.
  2. Balanced Relationship with Nature: The Iroquois Confederacy historically valued a balanced relationship with nature. In developing AI, this society might prioritize creating technologies that are sustainable, environmentally friendly, and in harmony with the natural world.
  3. Ethical Considerations: With a deep-rooted sense of community and ethics, this society might prioritize the ethical use of AI. There could be a strong emphasis on developing AI systems that enhance human life, promote equality, and avoid any form of oppression or authoritarian control.
  4. Education and Cultural Values: Over 12,000 years of socialization based on Iroquois principles would likely shape the education system to instill values of responsibility, cooperation, and ethical conduct in the use of technology. Citizens might be taught to approach AI development with a holistic understanding of its societal impact.
  5. Decentralized Governance: The Iroquois Confederacy was known for its decentralized governance structure. This could translate into a preference for decentralized and distributed AI systems, reducing the risk of centralization and potential authoritarian control.

While it's challenging to predict the exact mindset and actions of such a hypothetical society, the foundational principles of the Iroquois Confederacy could shape a unique perspective on AI development, focusing on collective well-being, environmental sustainability, and ethical considerations. Concerns about authoritarian control might be addressed through a cultural and governance framework that prioritizes the values ingrained over millennia.

Claude II:

I do not have enough context to speculate in detail about how a hypothetical global society modeled on the Iroquois Confederacy might approach developing AI. However, some general principles may apply:

  • The Iroquois Confederacy was founded on ideals of peace, unity, and consensus-based decision making. A society modeled on those values may prioritize developing AI that respects human values and agency. There could be a strong focus on ethics, transparency and keeping humans involved in oversight.
  • On the other hand, any society developing advanced technologies like AI faces risks if the technology gets out of control. Safeguards and testing would still be important.
  • A society focused on harmony and collective wellbeing may be especially cautious about AI concentrating power and agency in ways that disrupt social structures. Concerns about AI "taking over" may center on maintaining balance and distributing decision-making.
  • With 12,000 years of cultural development, the society's values and norms around technology would likely have evolved considerably from those of the historical Iroquois. So it is difficult to predict specifics without more details.

Overall, while general principles may apply, the hypothetical society's approach would depend greatly on the philosophical values, cultural knowledge, and technical capabilities they develop over thousands of years. The concerns and safeguards around AI would likely reflect those complex factors. But a shared sense of communal responsibility would presumably be a strong influence.

submitted by /u/Turbohair