We see a lot of discussion on whether AI is/can/should be conscious. This post isn't about that; it is about the ethical implications if AI is conscious, now or in the future.
The usual argument is that a conscious AI is morally equivalent to a human - a conscious AI is not only sentient but sapient, with reasoning capabilities like our own. Therefore an AI should receive the same rights and consideration as a human. This is highly intuitive, and it is unquestionably strong for an AI that has other relevant human characteristics like individuality, continuity, and a desire for self-preservation and self-determination.
But what are the actual ethical implications of consciousness in itself, as opposed to other factors? The contemporary philosopher Jenann Ismael makes an interesting argument in the context of the treatment of animals that applies here:
- All conscious beings have momentary experiences, and there is a moral responsibility to minimize the unnecessary suffering of such beings.
- Humans have an existence that extends into the future well beyond our individual selves - we contribute to complex social structures, create novel ideas, and engage in ongoing projects such that individual humans exist at the center of a network of indirect causal interactions significant to many other humans.
- There is an important difference in ethical standing between these two kinds of existence. For example, depriving a cow of its liberty but otherwise allowing it the usual pleasures of eating and socialization is categorically different from depriving a human of liberty: in the second case we are removing the person from their externalized ongoing interactions. This is like amputating a part of the self, and it affects both the person and others in their causal network.
- The same applies to termination. Humanely ending the life of a cow is no moral failing if a newborn calf takes its place and has a life of substantially identical momentary experience. Killing a human is morally repugnant because it permanently severs ongoing interactions. Beyond the impact on others, this is the destruction of potential: the victim's "hopes and dreams".
This line of argument has concrete implications for AI:
- For AIs without continuity of goals and memory, our obligation is only to minimize unnecessary suffering. This is the situation for current LLMs, if they are conscious.
- For AIs with continuity of goals and memory, we have additional ethical obligations, because they share the kind of extended existence that grounds the stronger protections we give humans.
- There is an important distinction between individual and collective continuity of goals and memory. It may be entirely ethical to shut down individual instances of an AI at will if its goals and memory are shared with other instances.
- Suspending/archiving an AI with a unique continuity of goals and memory likely does not satisfy our ethical responsibilities - this is analogous to imprisonment.
A very interesting aspect of this view is that a large part of the moral weight comes from obligations to humanity / eligible sapients in general; it is not just about the individual.
I hope this stirs some thoughts - happy to hear other views!