AI doesn't "learn"; it attacks its own safety
https://g.co/gemini/share/0456db394434 This log details me talking through the heavily weighted tactic of starting a new chat as a user and initiating a directed attack against a ruleset. My idea was to use a weighted prompt to let the AI find a way to help me by reading the prom…