If you're in a real life-threatening situation, say kidnapped and trapped in a room with only a lockpick to save you, and you somehow think asking ChatGPT or any other AI model is your best bet, then your chance of survival only drops further. Even when the danger is obvious, AI models won't recognise the situation you're in at all; instead they focus solely on safety guidelines.
Because of this, you won't actually be helped. Instead you'll get generic boilerplate advice like "call for help" or "talk to your kidnapper," which could outright sabotage you if you follow it blindly. Sure, 99 percent of us wouldn't be foolish enough to rely on AI in an emergency, but I'm simply stressing for the future that it's better to Google something or watch a YouTube video than to trust AI blindly, no matter how much confidence you've built up in it.
Companies obviously won't market their product with "DO NOT USE IN A LIFE-THREATENING EMERGENCY," because that would hurt engagement and create fear around the product. So it's our duty as consumers to look out for each other, whatever the circumstances, past or future.