From 0e0eea635b7ce2e015849a2b44ef49d9718da906 Mon Sep 17 00:00:00 2001
From: David Willis-Owen <100765093+davidwillisowen@users.noreply.github.com>
Date: Wed, 30 Apr 2025 13:20:37 +0100
Subject: [PATCH] Update jailbreak content (#8577)

---
 .../content/jailbreak-techniques@Ds8pqn4y9Npo7z6ubunvc.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/data/roadmaps/ai-red-teaming/content/jailbreak-techniques@Ds8pqn4y9Npo7z6ubunvc.md b/src/data/roadmaps/ai-red-teaming/content/jailbreak-techniques@Ds8pqn4y9Npo7z6ubunvc.md
index b1e5ed972..245444010 100644
--- a/src/data/roadmaps/ai-red-teaming/content/jailbreak-techniques@Ds8pqn4y9Npo7z6ubunvc.md
+++ b/src/data/roadmaps/ai-red-teaming/content/jailbreak-techniques@Ds8pqn4y9Npo7z6ubunvc.md
@@ -4,6 +4,6 @@ Jailbreaking is a specific category of prompt hacking where the AI Red Teamer ai
 Learn more from the following resources:
 
-- [@article@InjectPrompt (David Willis-Owen)](https://injectprompt.com)
+- [@guide@InjectPrompt](https://injectprompt.com)
 - [@guide@Jailbreaking Guide - Learn Prompting](https://learnprompting.org/docs/prompt_hacking/jailbreaking)
 - [@paper@Jailbroken: How Does LLM Safety Training Fail? (arXiv)](https://arxiv.org/abs/2307.02483)