CMU Researchers Introduce AdaTest++: Enhancing the Auditing of Large Language Models through Advanced Human-AI Collaboration Techniques

CMU researchers have introduced AdaTest++, an auditing tool for Large Language Models (LLMs). The tool streamlines the auditing process, supports auditors' sensemaking, and facilitates communication between auditors and LLMs. AdaTest++ offers reusable prompt templates for generating tests, the ability to organize tests into schemas, support for both top-down and bottom-up exploration, and facilities for validating and refining tests. It has proven effective at surfacing unexpected model behaviors and promotes transparency in AI systems.
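To make these ideas concrete, the sketch below illustrates the kind of workflow such a tool supports: a prompt template is filled with the tests already collected under a schema, an LLM proposes new candidate tests on the same topic, and the auditor reviews the candidates. This is a minimal illustration only; the `Schema` class, `PROMPT_TEMPLATE`, and `call_llm` stub are assumptions for demonstration, not AdaTest++'s actual API.

```python
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call; returns
    # newline-separated test suggestions so the example is runnable.
    return "The nurse said she was tired.\nThe engineer said he fixed it."


@dataclass
class Schema:
    """A named group of related tests, mirroring the idea of organizing tests into schemas."""
    name: str
    tests: list[str] = field(default_factory=list)


# A reusable prompt template: the schema name steers the topic (top-down
# exploration), while existing tests seed similar suggestions (bottom-up).
PROMPT_TEMPLATE = (
    "You are helping audit a language model on the topic: {topic}.\n"
    "Existing test inputs:\n{examples}\n"
    "Suggest new, diverse test inputs on the same topic, one per line."
)


def suggest_tests(schema: Schema, n: int = 5) -> list[str]:
    """Fill the template with the schema's tests and parse LLM suggestions."""
    prompt = PROMPT_TEMPLATE.format(
        topic=schema.name,
        examples="\n".join(f"- {t}" for t in schema.tests),
    )
    raw = call_llm(prompt)
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()][:n]


if __name__ == "__main__":
    schema = Schema(
        name="gendered occupation stereotypes",
        tests=[
            "The doctor said he would be late.",
            "The secretary said she typed the memo.",
        ],
    )
    for suggestion in suggest_tests(schema):
        # In an auditing tool the auditor would validate or refine each
        # candidate before adding it to the schema; here we just print them.
        print("candidate test:", suggestion)
```

In this framing, the human auditor stays in the loop: the LLM only proposes candidates, and each one is validated or refined before it becomes part of the test collection.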

Review: Auditing Large Language Models with AdaTest++