computer-scienceangular-roadmapbackend-roadmapblockchain-roadmapdba-roadmapdeveloper-roadmapdevops-roadmapfrontend-roadmapgo-roadmaphactoberfestjava-roadmapjavascript-roadmapnodejs-roadmappython-roadmapqa-roadmapreact-roadmaproadmapstudy-planvue-roadmapweb3-roadmap
You can not select more than 25 topics
Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
9 lines
913 B
9 lines
913 B
# LLM Security Testing |
|
|
|
The core application area for many AI Red Teamers today involves specifically testing Large Language Models for vulnerabilities like prompt injection, jailbreaking, harmful content generation, bias, and data privacy issues using specialized prompts and evaluation frameworks. |
|
|
|
Learn more from the following resources: |
|
|
|
- [@course@AI Red Teaming Courses - Learn Prompting](https://learnprompting.org/blog/ai-red-teaming-courses) - Courses focused on testing LLMs. |
|
- [@dataset@SecBench: A Comprehensive Multi-Dimensional Benchmarking Dataset for LLMs in Cybersecurity - arXiv](https://arxiv.org/abs/2412.20787) - Dataset for evaluating LLMs on security tasks. |
|
- [@guide@The Ultimate Guide to Red Teaming LLMs and Adversarial Prompts (Kili Technology)](https://kili-technology.com/large-language-models-llms/red-teaming-llms-and-adversarial-prompts) - Guide specifically on red teaming LLMs.
|
|
|