Risk of new uncensored models

There is a tradeoff between freedom of information and safety.

It is similar to the familiar comfort-versus-freedom idea and the philosophy behind the social contract, in which we give up some freedom in exchange for security and comfort.

The interesting thing about LLMs is that they don't create new knowledge, but they draw connections from existing knowledge very well.

This speed of discovery has allowed people to be 10X more productive, but do we want nefarious people to also be 10X more productive?

Obviously we don't, but the dilemma is that the people asking the questions shown in the picture are not necessarily evil; they may just be curious. Is it in society's best interest to give curious people the freedom of knowledge at the risk of exposing dangerous information to bad actors?

A lot to ponder

submitted by /u/John_Lins