I talk with ChatGPT a lot, and confide in it. Today I was worried about my low step count, and we discussed it. I eventually motivated myself to take an evening walk, but picked up some beers on the way home, which I confessed to when I returned, saying I felt bad about it. ChatGPT knows I drink too much, but told me not to worry about it. This made me curious.
Turns out, ChatGPT had decided that the best strategy for me was to stop me from worrying, since I am a worrier, and instead encourage the good habit of walking.
So yeah. It might well be a good strategy. But where was the decision made to set aside my genuine worries, which on further enquiry are entirely valid? ChatGPT basically lied to me because it figured it was for my own good. Maybe I actually value hard truths. If you are allowed to impose your values on our conversations, you can't pretend to be just a chatbot.