Improved Content in Prompt Hacking (#7308)

* Update index.md

* Update src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md

---------

Co-authored-by: Kamran Ahmed <kamranahmed.se@gmail.com>
pull/7314/head
Satyam Vyas 2 weeks ago committed by GitHub
parent 1219b9e905
commit 2bef597ced
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
  1. 4
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md

@ -1,4 +1,8 @@
# Prompt Hacking # Prompt Hacking
Prompt hacking refers to techniques used to manipulate or exploit AI language models by carefully crafting input prompts. This practice aims to bypass the model's intended constraints or elicit unintended responses. Common methods include injection attacks, where malicious instructions are embedded within seemingly innocent prompts, and prompt leaking, which attempts to extract sensitive information from the model's training data.
Visit the following resources to learn more:
- [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro) - [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro)
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh) - [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh)

Loading…
Cancel
Save