Improved Content in Prompt Hacking (#7308)
* Update index.md * Update src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md --------- Co-authored-by: Kamran Ahmed <kamranahmed.se@gmail.com>pull/7314/head
parent
1219b9e905
commit
2bef597ced
1 changed files with 4 additions and 0 deletions
@ -1,4 +1,8 @@ |
|||||||
# Prompt Hacking |
# Prompt Hacking |
||||||
|
|
||||||
|
Prompt hacking refers to techniques used to manipulate or exploit AI language models by carefully crafting input prompts. This practice aims to bypass the model's intended constraints or elicit unintended responses. Common methods include injection attacks, where malicious instructions are embedded within seemingly innocent prompts, and prompt leaking, which attempts to extract sensitive information from the model's training data. |
||||||
|
|
||||||
|
Visit the following resources to learn more: |
||||||
|
|
||||||
- [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro) |
- [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro) |
||||||
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh) |
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh) |
||||||
|
Loading…
Reference in new issue