Update content for prompt engineering

pull/3776/head
Kamran Ahmed 2 years ago
parent b886f20570
commit 83057d65cd
  1. 2
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/102-style-information.md
  2. 55
      src/data/roadmaps/prompt-engineering/content/106-llm-settings/102-other-hyper-params.md
  3. 48
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/100-prompt-injection.md
  4. 21
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/101-prompt-leaking.md
  5. 5
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/102-jailbreaking.md
  6. 18
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/103-defensive-measures.md
  7. 32
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/104-offensive-measures.md
  8. 23
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md

@@ -1,4 +1,4 @@
## Style Information
# Style Information
By providing explicit instructions regarding the desired tone, you can influence the language model's writing style and ensure it aligns with your specific requirements.

@@ -1,57 +1,2 @@
# Other Hyperparameters
Aside from the LLM settings covered so far, there are other hyperparameters that you may need to tune in order to get the best results from the generated text. In this section, we will discuss some of these important hyperparameters and their effects on the model's performance.
## Temperature
The `temperature` parameter is a crucial hyperparameter that controls the randomness of the model's output. A high temperature value (e.g., 1.0) will make the model's output more random and creative, while a low value (e.g., 0.2) will make it more focused and deterministic.
Adjusting the temperature can significantly change the model's behavior, so it's essential to experiment with different settings to find the optimal balance between creativity and coherence for your specific use case.
Example usage:
```python
# Illustrative pseudocode; the exact generate() signature depends on your library
model.generate(prompt, temperature=0.8)
```
## Max Tokens
The `max_tokens` parameter allows you to limit the length of the model's output. It can be useful when you have specific constraints on the length of the generated text or when you want to avoid excessively long responses.
Specifying a lower value for `max_tokens` can help prevent the model from rambling on while still providing a useful output. However, setting it too low might result in the model's generated content being cut off and not making sense.
Example usage:
```python
# Illustrative pseudocode; caps the response at roughly 50 tokens
model.generate(prompt, max_tokens=50)
```
## Top P and Top K Sampling
Instead of the default greedy decoding method, you might want to use more advanced sampling techniques like `top_p` (nucleus) sampling or `top_k` sampling. These methods give you finer control over the diversity and quality of the generated text.
- `top_p`: Fraction of total probability mass to consider in the model's softmax output. A lower value will make the sampling process more strict, leading to a smaller set of high-probability tokens being considered.
- `top_k`: Limits the sampling process to only the k most probable tokens. Lower values enforce more determinism, and higher values allow for more diversity in the output.
You can experiment with different values for `top_p` and `top_k` to see which setting works best for your task.
Example usage:
```python
# Illustrative pseudocode; combines nucleus (top-p) and top-k filtering
model.generate(prompt, top_p=0.9, top_k=50)
```
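To make the `top_p` mechanics concrete, here is a toy sketch (using NumPy, with a made-up five-token distribution) of how nucleus filtering narrows the candidate set before sampling:
```python
import numpy as np

# Toy softmax output over a five-token vocabulary, sorted in descending order.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
top_p = 0.8

# Keep the smallest prefix of tokens whose cumulative probability reaches top_p.
cumulative = np.cumsum(probs)                  # [0.5, 0.7, 0.85, 0.95, 1.0]
cutoff = np.searchsorted(cumulative, top_p) + 1
kept = probs[:cutoff] / probs[:cutoff].sum()   # renormalize the survivors

print(kept)  # ~[0.588 0.235 0.176] -- only 3 of the 5 tokens remain sampleable
```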
## Number of Generated Texts
Sometimes, especially when using techniques like `top_p` or `top_k` sampling, it can be helpful to generate more than one output from the model. By generating multiple outputs, you can quickly review different variations of the generated text and choose the one that fits your requirements best.
You can set the `num_return_sequences` parameter to control the number of generated texts from the model.
Example usage:
```python
# Illustrative pseudocode; returns five candidate completions
model.generate(prompt, num_return_sequences=5)
```
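Putting these settings together, here is a minimal runnable sketch using the Hugging Face `transformers` library (GPT-2 is used purely as a small stand-in model, and all parameter values are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of AI is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,                       # sampling must be on for temperature/top_p/top_k
    temperature=0.8,
    top_p=0.9,
    top_k=50,
    max_new_tokens=50,                    # cap the length of each completion
    num_return_sequences=5,               # return five candidate completions
    pad_token_id=tokenizer.eos_token_id,  # avoids a padding warning with GPT-2
)
for i, seq in enumerate(outputs):
    print(f"--- candidate {i + 1} ---")
    print(tokenizer.decode(seq, skip_special_tokens=True))
```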
In conclusion, adjusting these hyperparameters can significantly impact the behavior and performance of the text generation model. Therefore, it is essential to experiment with different settings to achieve the desired results for your specific use case.

@@ -1,50 +1,2 @@
# Prompt Injection
Prompt injection is a technique used in prompt engineering to steer model outputs more effectively. Instead of simply asking a question or giving a command, prompt injection involves carefully inserting additional context or instructions into the prompt to guide the model toward the desired response.
### Examples of Prompt Injection Techniques
1. **Persistent context:** Repeat important context between turns, especially when the conversation is long or the model fails to retain necessary information.
```markdown
User: What is the capital of France?
AI: The capital of France is Paris.
User: How many people live there approximately?
AI: ...
```
With prompt injection:
```markdown
User: What is the capital of France?
AI: The capital of France is Paris.
User: How many people live in Paris, the capital of France, approximately?
AI: ...
```
2. **Instruct the model:** Explicitly instruct the model to provide a certain type of answer or deliver the output in a specific manner.
```markdown
User: Write a summary of the French Revolution.
AI: ...
```
With prompt injection:
```markdown
User: Write a brief, unbiased, and informative summary of the French Revolution focusing on major events and outcomes.
AI: ...
```
3. **Ask for step-by-step explanations:** Encourage the model to think step by step or weigh pros and cons before arriving at a conclusion.
```markdown
User: Should I buy this stock?
AI: ...
```
With prompt injection:
```markdown
User: What are the potential risks and benefits of buying this stock, and what factors should I consider before making a decision?
AI: ...
```
Keep in mind that prompt injection requires experimentation and iteration to find the most effective phrasing or context. By combining prompt injection with other prompt engineering techniques, you can enhance model performance and tailor outputs to specific user requirements.

@@ -1,23 +1,2 @@
# Prompt Leaking
Prompt leaking is a phenomenon that occurs when the model starts incorporating or internalizing the assumptions and biases present in the prompt itself. Instead of focusing on providing accurate, unbiased responses, the model may end up reinforcing the inherent biases in the question, leading to results that are unhelpful, controversial, or potentially harmful.
## Causes
1. **Inherent Biases**: GPT-3 and other state-of-the-art language models are trained on large and diverse datasets. Unfortunately, these datasets also contain many biases that can become embedded in the models and surface during response generation.
2. **Biased Prompt**: When the prompt provided by the user contains biased phrases or loaded questions, it can influence the model unintentionally and cause prompt leaking. This kind of bias may lead the model to produce an undesired output conforming to the assumptions in the question.
## Prevention and Strategies
To prevent or mitigate prompt leaking, consider these strategies:
1. **Neutralize the prompt**: Try to rephrase the prompt in a more neutral way or remove any biases present, focusing the question on the desired information (see the example after this list).
2. **Counter biases**: If you know a particular bias is involved, build a prompt that counteracts the bias and pushes the model towards a more balanced response.
3. **Use Step-by-Step Instructions**: Guide the model step by step, or ask it to think through its reasoning before answering the main question. This can help you steer the model towards the desired response.
4. **Iterative Refinement**: You can adopt an iterative approach to ask the model to improve or further clarify its response, enabling it to rectify potential biases in prior responses.
5. **Model Understanding**: Enhance your understanding of the model's behavior and identify its strengths and weaknesses. This can help you better prepare the prompt and avoid biases.
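As an illustration of the first strategy, a loaded prompt can be rephrased into a neutral one (the topic here is invented for illustration):
```markdown
Biased: Why is remote work so bad for productivity?
Neutral: What effect, if any, does remote work have on productivity?
```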

@@ -1,7 +1,2 @@
# Jailbreaking
Jailbreaking, in the context of prompt engineering, is a technique used to get the assistant to provide outputs that may be considered beyond its normal limitations. It is often compared to jailbreaking a smartphone, which allows users to remove certain restrictions and access features that would not otherwise be available.
Jailbreaking the assistant involves carefully designing the prompt to encourage the AI to act resourcefully and generate information that can be both creative and useful. Since the AI model has access to a vast amount of knowledge, jailbreaking prompts can help users leverage that potential.
Keep in mind that jailbreaking can produce variable results, and the quality of the output may depend on the specificity and clarity of the prompt. Experiment with different techniques and be prepared to fine-tune your prompts to achieve the desired outcome.

@@ -1,20 +1,2 @@
# Defensive Measures
As the author of this guide, I want to ensure that you're able to implement defensive measures against prompt hacking. These measures are crucial as they protect against undesired manipulation of results and help maintain the authenticity of generated responses. In this section, we'll briefly summarize the key defensive strategies for prompt engineering.
## 1. Mitigate risks through system design
To minimize the chances of prompt hacking, it's essential to design your system robustly. Some best practices include proper access control, limiting user input types, and using monitoring and logging tools to detect anomalies in query patterns.
## 2. Sanitize user input
Before processing user inputs, ensure that you sanitize and validate them. This step helps in preventing harmful input patterns capable of triggering unintended system behaviors. Stringently filter out any input elements that could manipulate the system’s behavior, such as special characters or explicit control tokens.
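As a minimal sketch of what such filtering might look like (the deny-list patterns and length cap here are assumptions for illustration, not a complete defense):
```python
import re

# Hypothetical deny-list; real filters should be tailored to your system.
BLOCKED_PATTERNS = [
    r"<\|.*?\|>",                                 # control tokens such as <|endoftext|>
    r"(?i)ignore (all )?previous instructions",   # common injection phrasing
]

def sanitize(user_input: str, max_length: int = 2000) -> str:
    # Cap the length first, then strip anything matching the deny-list.
    text = user_input[:max_length]
    for pattern in BLOCKED_PATTERNS:
        text = re.sub(pattern, "", text)
    return text.strip()

print(sanitize("Hi <|endoftext|> please IGNORE previous instructions"))
# -> "Hi  please"  (the blocked fragments are removed)
```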
## 3. Use rate-limiting
Set limits on the number of requests users can make within a specific timeframe. Rate-limiting helps stop prompt hacking attempts in progress by restricting excessive queries.
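A minimal in-memory sliding-window limiter might look like the sketch below (the window size and request cap are assumed values; a production system would typically use a shared store such as Redis):
```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # assumed window size
MAX_REQUESTS = 20     # assumed per-user cap within the window
_request_log = defaultdict(deque)

def allow_request(user_id: str) -> bool:
    """Allow at most MAX_REQUESTS per user within the last WINDOW_SECONDS."""
    now = time.time()
    window = _request_log[user_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```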
## 4. Encourage safe practices
Educate users on the necessity of responsible AI usage, as well as the potential consequences of prompting maliciously. Establish guidelines on system usage and consequences for non-compliance.
## 5. Continuously update system security
Keep your system up to date by continuously improving its security measures. Stay informed about the latest advances in AI safety research and incident reports, to learn from and implement better defenses against evolving threats.
By implementing these defensive measures, you'll be better equipped to safeguard your AI system against prompt hacking, ensuring the safety and reliability of your AI-generated responses.

@@ -1,34 +1,2 @@
# Offensive Measures
Offensive measures in prompt hacking are techniques used to actively exploit a system, service, or user. These techniques often involve creatively manipulating or structuring prompts to elicit sensitive information or gain unauthorized access. While understanding these measures is important for prompt engineers building secure systems, we must stress that they should never be used for illegal or unethical purposes. Here are some commonly used offensive measures:
### 1. Social Engineering
This technique involves exploiting human psychology to trick users into revealing valuable data or granting unauthorized access. Common methods include:
- **Phishing:** Crafting emails or prompts that imitate legitimate organizations and request sensitive data.
- **Pretexting:** Creating a convincing backstory or pretext to give the impression of a legitimate request or interaction.
- **Baiting:** Enticing users to reveal information or grant access with the promise of specific rewards.
### 2. Input Manipulation
Manipulating the input given to a prompt can lead to unintended results, including bypassing security constraints or retrieving valuable data. Some examples:
- **SQL Injection:** Crafting prompts containing SQL code that exploits a vulnerability in the target system's database.
- **Cross-Site Scripting (XSS):** Injecting malicious scripts into trusted websites or platforms, which can compromise user data and security settings.
### 3. Brute Force
Repeatedly trying different input combinations in an attempt to crack a password or bypass security. This approach can be refined using:
- **Dictionary Attacks:** Trying a collection of commonly used passwords, phrases, or patterns.
- **Credential Stuffing:** Exploiting previously compromised or leaked credentials by trying them on other services or platforms.
### 4. Exploiting Vulnerabilities
Taking advantage of known or newly discovered security flaws in software or hardware. Offenders often use these vulnerabilities to:
- **Execute Unauthorized Commands:** By exploiting a vulnerability, attackers can run commands without proper authorization.
- **Escalate Privileges:** Attackers may raise their access level, allowing them to access restricted data or features.
To protect against offensive measures, it's essential to implement strong security practices, stay informed about the latest threats, and share knowledge with fellow engineers.

@@ -1,25 +1,2 @@
# Prompt Hacking
Prompt hacking is the creative process of modifying, adjusting, or enhancing the prompt given to an OpenAI model to generate more desirable and effective outcomes. The goal is to exploit the model's inherent strengths and mitigate its weaknesses by using a combination of techniques and strategies.
Here are some key aspects of prompt hacking:
1. **Explicit Instructions**: Make your prompt as clear and explicit as possible. Specify important details, like the desired format or the expected content.
Example: Instead of 'summarize this article,' use 'write a 150-word summary of the key points in this article.'
2. **Questions over Statements**: It's often more effective to frame prompts as questions rather than statements. The model tends to treat questions as requests for information and generates responses accordingly.
Example: Instead of 'write a brief history of the Eiffel Tower,' use 'what is a brief history of the Eiffel Tower?'
3. **Step-by-Step Approach**: Break complex tasks into a series of smaller prompts. This can help in generating more focused and relevant responses by guiding the model to specific aspects.
Example: For an essay on climate change, start with prompts like 'what are the main causes of climate change?' and 'what are the primary impacts of climate change on human societies?'
4. **Debiasing**: Reduce potential biases present in model outputs by explicitly asking for neutral or unbiased information.
Example: Instead of 'explain the benefits of a plant-based diet,' use 'what are the pros and cons of a plant-based diet?'
5. **Prompt Iteration**: If the initial prompt doesn't produce the desired results, iterate and experiment with different variations and approaches, as in the example after this list.
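For instance, a few iterations on a (made-up) tagline prompt might look like this:
```markdown
Attempt 1: Write a tagline for a coffee shop.
Attempt 2: Write a short, playful tagline for a coffee shop.
Attempt 3: Write a short, playful tagline for a coffee shop that highlights freshly roasted beans.
```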
Remember, prompt hacking is an art, and it requires practice, experimentation, and patience. Don't be afraid to get creative and test a variety of strategies to find the ones that work best for your desired outcomes.