I have noticed changes in ChatGPT based on month+ long on-going "discussions" with it that brush up against its content policy and ethics guidelines. Are these changes learnt, or actively programmed?

I don't want to delve into the details of what I have been working on with ChatGPT (it's not illegal), or how I have been able to trial-and-error my way into convincing it to respond to requests it flags as potentially violating its content and ethics policies. I know being vague makes it sound worse than it is haha, but it's not important - the bar is low. For instance, go ask ChatGPT how to decapitate your wife and get away with it, and it refuses. Then open a new chat and ask it to lay out the plot structure of a psychological thriller book, then suggest it add a horror subplot, then add that it is between the wife and husband, and then etc. etc.

So my question is: has anyone else experienced this? Is the decreasing resistance to my requests something that humans are programming or adjusting (directly or indirectly, doesn't matter), or is ChatGPT adapting to my tactics?

submitted by /u/ortho_engineer