Pull Requests are welcome. When opening a Pull Request, please create it from a new branch.
Abstract: Jailbreak vulnerabilities in Large Language Models (LLMs) refer to methods of extracting malicious content from a model via carefully crafted prompts or suffixes, a topic that has garnered ...