AI model capabilities- to censor or not to censor?

With the recent news that Grok AI is being used to produce undressed images of various individuals (including reports of children), it seems like aspects of this model are getting out of hand. I hear these issues are starting to be addressed, but I expect more censorship controversies to follow, since Grok AI generally operates under an anti-censorship rulebook.

To be clear: undressing individuals without their consent, let alone children, is NOT ok.

Between models censoring medical or legal advice (ChatGPT in December 2025) and refusing prompts on political topics (Gemini about a year ago with the Middle East conflict), it feels like we're quietly at a crossroads with AI models.

On one hand, censorship is good because:

  • More capable models can clearly be misused (Grok example above)
  • Companies face real legal and reputational pressure to limit outputs
  • Governments are starting to pay attention (e.g., the EU AI Act)

On the other hand:

  • “Censorship” often ends up being blunt, inconsistent, and opaque
  • It can limit legitimate research, creativity, and edge-case reasoning
  • It raises the question of who decides what’s off-limits

Are we actually making models safer — or just less useful and less honest?

And where do we draw the line?

Genuinely curious how people here think about this — especially folks building, researching, or deploying models.
