“`html
Prompt Fuzzer
The Prompt Fuzzer is an interactive tool that evaluates the security of GenAI application system prompts by simulating various dynamic LLM-based attacks. It helps users fortify their system prompts by customizing tests to fit their unique configuration and domain. The Fuzzer also features a Playground chat interface for refining system prompts iteratively, enhancing resilience against generative AI attacks.
Garak
Garak is a tool that tests for vulnerabilities in LLMs, including hallucination, data leakage, prompt injection, misinformation, toxicity generation, and jailbreaks. It is freely available and continuously enhanced to better support applications.
HouYi
HouYi is a framework designed to automatically inject prompts into LLM-integrated applications to test their vulnerability. It includes a demo script for simulating an LLM-integrated application and deploying HouYi for such attacks.
JailbreakingLLMs
The PAIR algorithm automatically generates jailbreak prompts for LLMs without human help, demonstrating high success rates and effectiveness on various LLMs, including GPT-3.5/4, Vicuna, and PaLM-2.
LLMAttacks
LLMAttacks method effectively prompts models to produce undesirable outputs, highlighting significant vulnerabilities in LLMs and underscoring the need for strategies to counteract adversarial tactics.
PromptInject
PROMPTINJECT is a framework for creating adversarial prompts for GPT-3, focusing on goal hijacking and prompt leaking, revealing significant long-tail risks to these models.
The Recon-ng Framework
Recon-ng is a comprehensive reconnaissance framework tailored for efficient, web-based, open-source intelligence gathering, supporting a modular architecture and designed specifically for reconnaissance.
Buster
Buster is an OSINT tool that facilitates online investigations, retrieving social accounts linked to an email from various platforms and identifying breaches associated with an email, among other functionalities.
WitnessMe
WitnessMe is a web inventory tool designed for extensibility, offering ease of use with Python 3.7+, Docker compatibility, and support for extensive parsing of large Nessus and NMap XML files.
LLM Canary
The LLM Canary tool is an open-source security benchmarking suite that helps developers identify and evaluate potential vulnerabilities in LLMs, incorporating test groups aligned with the OWASP Top 10 for LLMs.
PyRIT
PyRIT is a library designed to enhance the robustness evaluation of LLM endpoints, automating AI red teaming tasks and identifying security and privacy harms, including malware generation and identity theft.
LLMFuzzer
LLMFuzzer is an open-source fuzzing framework tailored for LLMs and their API integrations, ideal for uncovering and exploiting vulnerabilities in AI systems.
PromptMap
PromptMap automates the testing of prompt injection attacks by analyzing the context and purpose of ChatGPT rules, helping identify and mitigate potential vulnerabilities by simulating real attack scenarios.
Gitleaks
Gitleaks is a Static Application Security Testing (SAST) tool designed to detect hardcoded secrets in git repositories, offering a straightforward interface for scanning code for historical and current secrets.
Cloud_enum
Cloud_enum is a multi-cloud OSINT tool designed to identify public resources across AWS, Azure, and Google Cloud, offering comprehensive enumeration of resources in each platform.
“`