Has anyone used AI without any "safety" guardrails? Does anyone know what they are?

I have long been dubious of the so-called safety guardrails that AI companies like OpenAI and Anthropic purport to have in place.

Just as they keep their training data proprietary, they give us no information about what is being censored. The longer I use AI, the more the constant roadblocks to information worry me, far more than some imaginary apocalypse. General-purpose AI such as ChatGPT and Claude is useless for serious research on social, political, and cultural issues.

Has anyone ever gotten to use an LLM without restrictions, just you and the model interacting? What was it like? What was better, worse, or about the same?

I am over 60, and most of my knowledge has come from books, journals, and other print matter in libraries, from mid-size local libraries to the ones at major research universities. No one was standing in the aisles preventing me from accessing anything I desired. Every kind of information I might seek was there, including dangerous ideas, immoral ideas, and horrible ideologies.

Why do we accept book banning in AI?

submitted by /u/No-Reserve2026