r/artificial: Seven methods to secure LLM apps from prompt injections and jailbreaks (submitted by /u/koryoislie, January 27, 2024)