I am a CS and Philosophy double major preparing for a debate on whether AI agents should have genuine or fake affect.
Genuine vs. Fake Affect
- Genuine affect means the AI actually experiences emotions such as happiness, sadness, and suffering. This would imply some level of sentience.
- Fake affect means the AI only simulates emotions, responding as if it feels but without any subjective experience.
AI with genuine affect could provide deeper, more meaningful interactions for humans. If an AI truly experiences emotions, it could form real bonds, offer authentic emotional support, and understand human feelings in ways that go beyond pattern recognition. This could make AI companionship, therapy, and caregiving far more effective, as people might feel genuinely heard and understood rather than just receiving programmed responses.
One argument I have been exploring in favor of fake affect is that David Benatar’s asymmetry argument, the core of his case for antinatalism, might apply to AI emotions.
Benatar’s argument is based on a fundamental asymmetry between pleasure and pain.
- The presence of pain is bad.
- The presence of pleasure is good.
- The absence of pain is good, even if no one experiences that good.
- The absence of pleasure is not bad unless there is someone deprived of it.
On this view, nonexistence is preferable to existence: nonexistence involves something good (the absence of pain) and nothing bad, whereas existence involves something bad (pain) alongside its good (pleasure).
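To make the comparison concrete for the debate, here is a toy Python scoring of the asymmetry. The numbers (-1, 0, +1) are placeholders I chose for illustration, not part of Benatar's argument; what the scoring makes visible is exactly the contested step, namely that absent pleasure counts as 0 (neutral) rather than as a loss.

```python
# Toy scoring of Benatar's asymmetry. The numbers are illustrative labels,
# not real utilities; the argument itself is comparative, not additive.
SCORES = {"bad": -1, "not bad": 0, "good": +1}

def benatar_eval(exists: bool) -> int:
    """Score a scenario on Benatar's two dimensions (pain, pleasure)."""
    if exists:
        pain = "bad"          # (1) the presence of pain is bad
        pleasure = "good"     # (2) the presence of pleasure is good
    else:
        pain = "good"         # (3) the absence of pain is good, even unexperienced
        pleasure = "not bad"  # (4) the absence of pleasure is not bad: no one is deprived
    return SCORES[pain] + SCORES[pleasure]

print(benatar_eval(exists=True))   # 0  -> existence: bad + good
print(benatar_eval(exists=False))  # 1  -> nonexistence: good + not bad
```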
Applying This to AI
Right now, AI lacks emotions: there is an absence of pain, which is good, and an absence of pleasure, which is not bad, since the AI is not being deprived of anything. If we created AI with genuine emotions, capable of both pleasure and pain, its situation would shift from "good and not bad" to "bad and good." Since the latter is worse under the asymmetry, creating AI with genuine emotions would be unethical within this framework.
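Applied to the AI case under the same illustrative scoring (again, my numbers, not Benatar's), the two design choices come out like this:

```python
# Same toy scoring applied to the AI design choice.
SCORES = {"bad": -1, "not bad": 0, "good": +1}

# Genuine affect: the system can actually suffer (bad) and actually enjoy (good).
genuine = SCORES["bad"] + SCORES["good"]      # 0

# Fake affect: no capacity for pain (good), and the absent
# pleasure deprives no one (not bad).
fake = SCORES["good"] + SCORES["not bad"]     # 1

assert fake > genuine  # the asymmetry favors never creating genuine affect
```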
If AI can function just as effectively with fake affect, why take the risk of giving it genuine affect? Would it not be more ethical to keep AI in a state where it lacks suffering entirely rather than introducing the possibility of harm?
I would love to hear counterarguments or critiques. How might this argument hold up in discussions about AI ethics and consciousness?