Add prompt enginering roadmap

pull/3941/head
Kamran Ahmed 2 years ago
parent a2490efa80
commit dacd2d898b
  1. 10
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/100-what-are-llms.md
  2. 20
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/101-llm-types.md
  3. 12
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/102-how-llms-built.md
  4. 35
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/103-llm-vocabulary.md
  5. 8
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/index.md
  6. 29
      src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/100-basic-prompting.md
  7. 28
      src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/101-need-for-prompting.md
  8. 26
      src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/index.md
  9. 18
      src/data/roadmaps/prompt-engineering/content/102-prompts/101-parts-of-a-prompt.md
  10. 28
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/100-use-delimiters.md
  11. 35
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/101-structured-data.md
  12. 13
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/102-style-information.md
  13. 24
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/103-give-conditions.md
  14. 43
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/104-give-examples.md
  15. 66
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/105-include-steps.md
  16. 4
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/106-workout-solution.md
  17. 13
      src/data/roadmaps/prompt-engineering/content/102-prompts/good-prompts/107-iterate-refine.md
  18. 2
      src/data/roadmaps/prompt-engineering/content/102-prompts/index.md
  19. 18
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/100-role-prompting.md
  20. 33
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/101-few-shot-prompting.md
  21. 25
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/102-chain-of-thought.md
  22. 21
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/103-zeroshot-chain-of-thought.md
  23. 69
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/104-least-to-most.md
  24. 31
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/105-dual-prompt.md
  25. 4
      src/data/roadmaps/prompt-engineering/content/102-prompts/prompting-techniques/106-combining-techniques.md
  26. 3
      src/data/roadmaps/prompt-engineering/content/103-real-world/100-structured-data.md
  27. 1
      src/data/roadmaps/prompt-engineering/content/103-real-world/101-inferring.md
  28. 2
      src/data/roadmaps/prompt-engineering/content/103-real-world/102-writing-emails.md
  29. 3
      src/data/roadmaps/prompt-engineering/content/103-real-world/103-coding-assistance.md
  30. 3
      src/data/roadmaps/prompt-engineering/content/103-real-world/104-study-buddy.md
  31. 3
      src/data/roadmaps/prompt-engineering/content/103-real-world/105-designing-chatbots.md
  32. 3
      src/data/roadmaps/prompt-engineering/content/103-real-world/index.md
  33. 2
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/100-citing-sources.md
  34. 1
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/101-bias.md
  35. 26
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/102-hallucinations.md
  36. 34
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/103-math.md
  37. 12
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/104-prompt-hacking.md
  38. 32
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md
  39. 42
      src/data/roadmaps/prompt-engineering/content/105-reliability/100-debiasing.md
  40. 18
      src/data/roadmaps/prompt-engineering/content/105-reliability/101-ensembling.md
  41. 24
      src/data/roadmaps/prompt-engineering/content/105-reliability/102-self-evaluation.md
  42. 30
      src/data/roadmaps/prompt-engineering/content/105-reliability/103-calibrating-llms.md
  43. 2
      src/data/roadmaps/prompt-engineering/content/105-reliability/index.md
  44. 16
      src/data/roadmaps/prompt-engineering/content/106-llm-settings/100-temperature.md
  45. 21
      src/data/roadmaps/prompt-engineering/content/106-llm-settings/101-top-p.md
  46. 58
      src/data/roadmaps/prompt-engineering/content/106-llm-settings/102-other-hyper-params.md
  47. 27
      src/data/roadmaps/prompt-engineering/content/106-llm-settings/index.md
  48. 51
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/100-prompt-injection.md
  49. 24
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/101-prompt-leaking.md
  50. 6
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/102-jailbreaking.md
  51. 21
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/103-defensive-measures.md
  52. 35
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/104-offensive-measures.md
  53. 26
      src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md
  54. 3
      src/data/roadmaps/prompt-engineering/content/108-image-prompting/100-style-modifiers.md
  55. 2
      src/data/roadmaps/prompt-engineering/content/108-image-prompting/101-quality-boosters.md
  56. 2
      src/data/roadmaps/prompt-engineering/content/108-image-prompting/102-weighted-terms.md
  57. 2
      src/data/roadmaps/prompt-engineering/content/108-image-prompting/103-deformed-generations.md
  58. 30
      src/data/roadmaps/prompt-engineering/content/108-image-prompting/index.md

@ -1 +1,9 @@
# What are llms
# What are LLMs?
LLMs, or Language Learning Models, are advanced Artificial Intelligence models specifically designed for understanding and generating human language. These models are typically based on deep learning architectures, such as Transformers, and are trained on massive amounts of text data from various sources to acquire a deep understanding of the nuances and complexities of language.
LLMs have the ability to achieve state-of-the-art performance in multiple Natural Language Processing (NLP) tasks, such as machine translation, sentiment analysis, summarization, and more. They can also generate coherent and contextually relevant text based on given input, making them highly useful for applications like chatbots, question-answering systems, and content generation.
As an example, OpenAI's GPT-3 is a prominent LLM that has gained significant attention due to its capability to generate high-quality text and perform a variety of language tasks with minimal fine-tuning.
- [Introduction to LLMs](https://roadmap.sh/guides/introduction-to-llms)

@ -1 +1,19 @@
# Llm types
# Types of LLMs
On a high level, LLMs can be categorized into two types i.e. Base LLMs and Instruction tuned LLMs.
## Base LLMs
Base LLMs are the LLMs which are designed to predict the next word based on the training data. They are not designed to answer questions, carry out conversations or help solve problems. For example, if you give a base LLM the sentence "In this book about LLMs, we will discuss", it might complete this sentence and give you "In this book about LLMs, we will discsus **what LLMs are, how they work, and how you can leverage them in your applications.**." Or if you give it "What are some famous social networks?", instead of answering it might give back "Why do people use social networks?" or "What are some of the benefits of social networks?". As you can see, it is giving us relevant text but it is not answering the question. This is where the Instruction tuned LLMs come in to the picture.
## Instruction tuned LLMs
Instruction Tuned LLMs, instead of trying to autocomplete your text, try to follow the given instructions using the data that they have been trained on. For example, if you input the sentence "What are LLMs?" it will use the data that it is trained on and try to answer the question. Similarly, if you input "What are some famous social networks?" it will try to answer the question instead of giving you a random answer.
Instruction Tuned LLMs are built on top of Base LLMs:
```
Instruction Tuned LLMs = Base LLMs + Further Tuning + RLHF
```
To build an Instruction Tuned LLM, a Base LLM is taken and is further trained using a large dataset covering sample "Instructions" and how the model should perform as a result of those instructions. The model is then fine-tuned using a technique called "Reinforcement Learning with Human Feedback" (RLHF) which allows the model to learn from human feedback and improve its performance over time.

@ -1 +1,11 @@
# How llms built
# How are LLMs Built?
On a high level, training an LLM model involves thre steps i.e. data collection, training and evaluation.
- **Data Collection** The first step is to collect the data that will be used to train the model. The data can be collected from various sources such as Wikipedia, news articles, books, websites etc.
- **Training**: The data then goes through a training pipeline where it is cleaned and preprocessed before being fed into the model for training. The training process usually takes a long time and requires a lot of computational power.
- **Evaluation**: The final step is to evaluate the performance of the model to see how well it performs on various tasks such as question answering, summarization, translation etc.
The output from the training Pipeline is an LLM model which is simply the parameters or weights which capture the knowledge learned during the training process. These parameters or weights are typically serialized and stored in a file, which can then be loaded into any application that requires language processing capabilities e.g. text generation, question answering, language processing etc.

@ -1 +1,34 @@
# Llm vocabulary
# Vocabulary
When working with LLMs, you will come across a lot of new terms. This section will help you understand the meaning of these terms and how they are used in the context of LLMs.
- **Machine Learning (ML)** — ML is a field of study that focuses on algorithms that can learn from data. ML is a subfield of AI.
- **"Model" vs. "AI" vs. "LLM"** — These terms are used somewhat interchangeably throughout this course, but they do not always mean the same thing. LLMs are a type of AI, as noted above, but not all AIs are LLMs. When we mentioned models in this course, we are referring to AI models. As such, in this course, you can consider the terms "model" and "AI" to be interchangeable.
- **LLM** Large language model. A large language model is a type of artificial intelligence that can understand and generate human-like text based on the input it receives. These models have been trained on vast amounts of text data and can perform a wide range of language-related tasks, such as answering questions, carrying out conversations, summarizing text, translating languages, and much more.
- **MLM** Masked language model. A masked language model is a type of language model that is trained to predict the next word in a sequence of words. It is typically trained on a large corpus of text data and can be used for a variety of tasks, such as machine translation, sentiment analysis, summarization, and more.
- **NLP** Natural language processing. Natural language processing is a branch of artificial intelligence that deals with the interaction between computers and human languages. It is used to analyze, understand, and generate human language.
- **Label** Labels are just possibilities for the classification of a given text. For example, if you have a text that says "I love you", then the labels could be "positive", "negative", or "neutral". The model will try to predict which label is most likely to be correct based on the input text.
- **Label Space** The label space is the set of all possible labels that can be assigned to a given text. For example, if you have a text that says "I love you", then the label space could be "positive", "negative", or "neutral".
- **Label Distribution** The label distribution is the probability distribution over the label space. For example, if you have a text that says "I love you", then the label distribution could be [0.8, 0.1, 0.1]. This means that the model thinks there is an 80% chance that the text is positive, a 10% chance that it is negative, and a 10% chance that it is neutral.
- **Sentiment Analysis** Sentiment analysis is the process of determining the emotional tone behind a series of words, used to gain an understanding of the the attitudes, opinions and emotions expressed within an online mention. Sentiment analysis is also known as opinion mining, deriving the opinion or attitude of a speaker.
- **Verbalizer** — In the classification setting, verbalizers are mappings from labels to words in a language model's vocabulary. For example, consider performing sentiment classification with the following prompt:
```
Tweet: "I love hotpockets"
What is the sentiment of this tweet? Say 'pos' or 'neg'.
```
Here, the verbalizer is the mapping from the conceptual labels of `positive` and `negative` to the tokens `pos` and `neg`.
- **Reinforcement Learning from Human Feedback (RLHF)** — RLHF is a technique for training a model to perform a task by providing it with human feedback. The model is trained to maximize the amount of positive feedback it receives from humans, while minimizing the amount of negative feedback it receives.
References and further learning:
- [LLM Vocabulary](https://learnprompting.org/docs/vocabulary)

@ -1 +1,7 @@
# Basic llm
# Basic LLM Concepts
LLM stands for "Large Language Model." These are advanced AI systems designed to understand and generate human-like text based on the input they receive. These models have been trained on vast amounts of text data and can perform a wide range of language-related tasks, such as answering questions, carrying out conversations, summarizing text, translating languages, and much more.
Visit the following resources to learn more about LLMs.
- [Introduction to LLMs](https://roadmap.sh/guides/introduction-to-llms)

@ -1 +1,28 @@
# Basic prompting
# Basic Prompting
All you need to instruct model to perform a task is a prompt. A prompt is a piece of text that you give to the model to perform a task.
For example, if you want to summarize an article, you could simply write the prompt with the article text on the top and the prompt:
```
Long article text here .............
....................................
Summarize the above article for me.
```
Or if you want to translate a sentence from English to French, you could simply write the prompt with the English sentence on the top and the prompt:
```
This is a sentence in English.
Translate the above sentence to French.
```
Or if you want to generate a new text, you could simply write the prompt with the instructions and the model will give you the text.
```
Write me an introductory guide about Prompt Engineering.
```
However, using plain text as prompts i.e. without using any best pratices you may not be able to fully utilise the power of LLMs. That's where "Prompt Engineering" or knowing the best practices for writing better prompts and getting the most out of LLMs comes in.

@ -1 +1,27 @@
# Need for prompting
# Need for Prompt Engineering
Prompts play a key role in the process of generating useful and accurate information from AI language models. Given below are some of the reasons why "Prompt Engineering" or learning how to write better prompts is important.
## Guiding Model Behavior
AI language models perform best when answering questions, assisting with tasks, or producing text in response to a specific query or command. Without prompts, the model would generate content aimlessly, without any context or purpose. A well-crafted prompt helps guide the model's behavior to produce useful and relevant results.
## Improving Text Quality and Relevance
Using prompts optimizes the output generated by the AI language model. A clear and concise prompt encourages the model to generate text that meets the required quality and relevance standards. Thus, the need for prompting lies in ensuring the content generated by the AI is of high caliber and closely matches the intent of the user.
## Eliciting a Specific Type of Output
Prompts can be engineered to elicit a specific type of output from the AI language model, whether it's summarizing a piece of text, suggesting alternate phrasings, creating an engaging storyline, analysing some sentiment or extracting data from some text. By crafting prompts that focus on the desired output, users can better harness the power and flexibility of AI language models.
## Aligning AI and Human Intent
One primary reason for implementing prompts is to align the AI-generated content with the human user's intent. Effective prompting can help minimize the AI's understanding gap and cater to individual users' preferences and needs.
## Reducing Inaccuracies and Ambiguity
Prompts can help reduce inaccuracies and ambiguities in the AI's responses. By providing a clear, concise, and complete prompt to the AI, users prevent the model from making unfounded assumptions or providing unclear information.
In conclusion, the need for prompting stems from its role in guiding AI model behavior, improving text quality and relevance, eliciting a specific output, aligning AI and human intent, and reducing inaccuracies and ambiguity in generated content. By understanding and mastering the art of prompting, users can unlock the true potential of AI language models.
- [Prompting Basics](https://learnprompting.org/docs/basics/prompting)

@ -1 +1,25 @@
# Prompting introduction
# Introduction to Prompting
Prompting is the process of giving a model a "prompt" or instruction for the task that you want it to perform. For example, if you have some English text that you may want to translate to French, you could give the following prompt:
```
Translate the text delimited by triple quotes from English to french:
"""Hello, how are you?"""
```
The model will then generate the following output:
```
Bonjour, comment allez-vous?
```
In this example, we gave the model a prompt with instructions to perform a task. If you notice, we followed a special way to write our prompt. We could simply give it the following prompt and it would have still worked:
```
Translate the following to French:
Hello, how are you?
```
But it's one of the best practices to be clear and use delimiters to separate the content in prompt from the instructions. You will learn more about it in the "Best Practices" nodes of the roadmap.

@ -1 +1,17 @@
# Parts of a prompt
# Parts of a Prompt
When constructing a prompt, it's essential to understand the different parts that contribute to its effectiveness. A well-crafted prompt typically consists of **context**, **instruction**, and **example**. Understanding these parts will allow you to engineer prompts that elicit better and more precise responses.
- **Context:** The context sets the stage for the information that follows. This may include defining key terms, describing relevant situations, or simply providing background information. Context helps the AI to understand the general theme or subject matter being addressed in the prompt.
*Example: In a writing request about composing an email, you may provide context by describing the purpose or background of the email, such as a follow-up after a meeting.*
2. **Instruction:** The instruction is the core component of the prompt. This is where you explicitly state the task or question that the AI is expected to perform or answer. It's important to be clear and direct with your instructions, specifying any guidelines or criteria for the response.
*Example: Using the email scenario, you could instruct the AI to "Write a follow-up email thanking the recipient for their time and summarizing the main discussion points of the meeting."*
3. **Example:** In some cases, it's helpful to provide one or more examples to guide or clarify the desired output. Examples can serve as a model for the AI and give an idea of what a successful response should look like. This is especially useful when the task is complex or has specific formatting requirements.
*Example: To further clarify the email-writing task, you might provide a brief example of the tone or structure you want, such as "Dear [Recipient], Thank you for taking the time to meet with me yesterday. We discussed [topic 1], [topic 2], and [topic 3]. I look forward to our future collaboration."*
By considering these three parts of a prompt — context, instruction, and example — you can create effective and well-formed prompts that produce targeted and accurate responses from the AI.

@ -1 +1,27 @@
# Use delimiters
# Use Delimiters
When crafting prompts for language models, it's crucial to ensure clear separation between the actual data and the instructions or context provided to the model. This distinction is particularly important when using data-driven prompts, where we want the model to generate responses based on specific input information.
One effective technique to achieve this separation is by using delimiters to mark the boundaries between the prompt and the data. Delimiters act as clear indicators for the model to understand where the data begins and ends, helping it to generate responses more accurately.
Here's how you can use delimiters effectively:
- **Choose appropriate delimiters:** Select delimiters that are unlikely to appear naturally in the data. Commonly used choices include special characters or token combinations that rarely occur in the given context. For instance, you can use triple curly braces (`{{{ }}}`) or a special token like `<|data|>` as delimiters.
- **Position the delimiters correctly:** Place the delimiters at the beginning and end of the data section, while ensuring a clear separation from the prompt. The prompt portion should precede the delimiter, providing the necessary instructions or context for the model.
- **Use consistent delimiters throughout:** Maintain consistency in using the chosen delimiters across all prompts. This ensures uniformity in the data format, making it easier for the model to understand and process the information consistently.
## Examples
```
Summarize the text delimited by triple curly braces into a single sentence.
{{{put_your_text_here}}}
```
```
Translate the text delimited by tripple quotes into Arabic
"""How are you?"""
```

@ -1 +1,34 @@
# Structured data
# Structured Output
When designing prompts for language models, it's often beneficial to request structured output formats such as JSON, XML, HTML, or similar formats. By asking for structured output, you can elicit specific and well-organized responses from the model, which can be particularly useful for tasks involving data processing, web scraping, or content generation.
Here's how you can request structured output from language models:
- **Specify the output format:** Clearly specify the output format you want the model to generate. For instance, you can ask the model to generate a JSON object, an HTML page, or an XML document.
- **Define the structure and fields**: Outline the structure of the desired output and specify the required fields. This helps guide the model to generate responses that adhere to the desired structure. You can provide examples or templates to illustrate the expected format.
- **Provide input context:** Include relevant information or data in the prompt that the model can utilize to generate the structured output. This context can assist the model in understanding the task or generating more accurate results.
Here is an example demonstrating the use of structured data.
```
Help me generate a JSON object with keys `product` (name of product), `isPositive` (boolean), `summary` (one sentence summary of review) from the text enclosed in <review> tag.
<review>Regrettably, the "XYZ ProTech 2000" product failed to meet even the most basic expectations. From its lackluster build quality and confusing user interface to its abysmal performance and disappointing customer support, this product left me deeply dissatisfied. If you're considering purchasing the "XYZ ProTech 2000," I strongly advise you to explore alternative options that offer superior quality and reliability.
</review>
```
Output from the above prompt:
```json
{
"product": "XYZ ProTech 2000",
"isPositive": false,
"summary": "Failed to meet expectations due to lackluster build quality, confusing user interface, abysmal performance, and disappointing customer support."
}
```

@ -1 +1,12 @@
# Style information
## Style Information
By providing explicit instructions regarding the desired tone, you can influence the language model's writing style and ensure it aligns with your specific requirements.
Clearly communicate the desired tone, style, or mood in the prompt. Whether it's formal, casual, humorous, professional, or any other specific style, mentioning it explicitly helps guide the model's writing. Also, consider ncorporating keywords or phrases that reflect the desired style. For example, if you want a formal tone, include phrases like "in a professional manner" or "using formal language." This provides additional context to the model regarding the tone you expect.
### Example Prompt
```
Write a formal email to decline a job offer.
```
In this prompt example, the instruction explicitly states the desired tone as "formal." The model understands that the response should reflect a professional and formal writing style appropriate for declining a job offer.

@ -1 +1,23 @@
# Give conditions
# Give Conditions
Giving conditions and then asking the model to follow those conditions helps steer the model's responses toward specific behaviors or outcomes.
For example, you might give the model some long recipe text and ask it to extract the steps from the recipe or to return something else if the no receipe found in the text. In this way, you are making the output conditional giving the model some additional context.
```
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions, \
re-write those instructions in the following format:
Step 1 - ...
Step 2 - …
Step N - …
If the text does not contain a sequence of instructions, \
then simply write \"No steps provided
"""INSERT YOUR RECIPE TEXT HERE"""
```

@ -1 +1,42 @@
# Give examples
# Give Successful Examples
In this technique, you give examples of successful behavior to the model and then ask it to continue the behavior. For example, you might give the model a few examples of successful chess moves and then ask it to continue the game.
Here is an example of a prompt that uses this technique:
```
Read the examples carefully and use them as a basis for your responses.
Input: Banana
Output: Fruit
Input: Apple
Output: Fruit
Input: Carrot
Output: Vegetable
Given the provided examples, generate the appropriate response for the following inputs:
- Turnip
- Orange
- Pear
- Potato
- Cucumber
- Celery
- Broccoli
- Cauliflower
```
The output of this prompt is:
```
- Turnip: Vegetable
- Orange: Fruit
- Pear: Fruit
- Potato: Vegetable
- Cucumber: Vegetable
- Celery: Vegetable
- Broccoli: Vegetable
- Cauliflower: Vegetable
```

@ -1 +1,65 @@
# Include steps
# Include Steps
There are times when asking the model for something directly might not result in the best output. In that case, you may want to try breaking down the steps in the same manner that you will perform the action yourself.
For example, let's say that you have a large story written in english and you want to get the names and the number of names appeared in the story. But you want the names to be written in french. Now, there are two ways to write this prompt i.e. either asking directly for this e.g.
```
Give me a JSON object with following keys: `nouns_count` number of nouns appeared in story and `nouns_french` which is an array of nouns in french. The story is delimited by """.
"""In a charming village, siblings Jack and Jill set out on a quest to fetch water from a hilltop well. As they climbed, singing joyfully, misfortune struck—Jack tripped on a stone and tumbled down the hill, with Jill following suit. Though slightly battered, the pair returned home to comforting embraces. Despite the mishap, their adventurous spirits remained undimmed, and they continued exploring with delight."""
```
The output from this prompt is:
```
{
"nouns_count": 10,
"nouns_french": [
"village",
"siblings",
"Jack",
"Jill",
"quest",
"water",
"hilltop",
"well",
"stone",
"hill"
]
}
```
As you can see the nouns are not in french. However, if we rewrite our prompt as follows:
```
Perform the following steps on the story delimited by """".
Step 1. Translate it to French
Step 2. List each noun in the translation.
Step 3. Output the JSON object with `nouns_count` number of nouns in french story and `nouns_french` i.e. array of nouns appeared in translation.
Output the JSON only, I am not interested in the rest of the steps.
"""In a charming village, siblings Jack and Jill set out on a quest to fetch water from a hilltop well. As they climbed, singing joyfully, misfortune struck—Jack tripped on a stone and tumbled down the hill, with Jill following suit. Though slightly battered, the pair returned home to comforting embraces. Despite the mishap, their adventurous spirits remained undimmed, and they continued exploring with delight."""
```
It will correctly output the following:
```
{
"nouns_count": 10,
"nouns_french": [
"village",
"frères",
"Jack",
"Jill",
"quête",
"eau",
"sommet de la colline",
"bien",
"pierre",
"colline"
]
}
```

@ -1 +1,3 @@
# Workout solution
# Workout its Solution
LLM Models try to jump to solutions as soon as possible. They are not interested in the process of solving a problem. Sometimes giving strict instructions help get better results.

@ -1 +1,12 @@
# Iterate refine
# Iterate and Refine your prompts.
Don't think of prompts as a one-and-done process.
Iterate and refine is a crucial part of creating good prompts. It involves continually refining a prompt until it produces consistently accurate, relevant, and engaging responses. The process works as follows:
1. **Draft the initial prompt**: Write a prompt that covers the topic you want the AI to discuss. At this stage, focus on making sure the prompt is clear and concise.
2. **Test the prompt**: Submit the prompt to the AI and assess the generated response. Note any issues or inaccuracies in the response.
3. **Revise the prompt**: Based on the observed issues, make adjustments to the prompt. It may involve rephrasing the question, adding more context or details, or specifying the format you want the answer in.
4. **Repeat the process**: Continue testing and refining the prompt until it consistently results in high-quality responses from the AI.
Remember that sometimes you may need to go through several iterations before arriving at a prompt that works well. By consistently refining prompts and experimenting with different strategies, you'll be more effective at creating prompts that yield accurate and engaging answers from the AI.

@ -1 +1,3 @@
# Prompts
At this point, you probably alread know what the Prompts are and the importance of writing good prompts. This section covers the best practices for writing good prompts as well as covering some of the commonly used prompting techniques.

@ -1 +1,17 @@
# Role prompting
# Role Prompting
Role prompting is a technique used in prompt engineering to encourage the AI to approach a question or problem by assuming a specific role, character, or viewpoint. This strategy can lead to a more focused, creative, or empathetic response depending on the given role.
## How to use Role Prompting
1. **Identify a role or character:** Determine a character or role that will be compelling and relevant to the problem or question you're posing. This could be a real or fictional character or a general professional role.
2. **Provide context:** Set a scene or introduce the role so the AI knows the context in which it should respond. This can help to encourage responses that align closely with the character's attributes or profession.
3. **Pose the question or task:** Now, with the context and role present, ask the question or set the task you want the AI or user to respond to. Make sure it's explicitly related to the chosen role.
## Example of Role Prompting
Imagine you want to explore solutions to an environmental problem. You can use role prompting to elicit diverse perspectives and insights. Here's an example prompt with role prompting:
_As a climate scientist, how would you approach the problem of deforestation to minimize its impact on the environment?_

@ -1 +1,32 @@
# Few shot prompting
# Few Shot Prompting
Few-shot prompting is a technique in which a machine learning model is primed with a small number of examples (or "shots") that demonstrate the desired behavior, output, or task, before being presented with a new, related input. This approach allows the model to build an understanding of what is expected of it, even with limited context. It is particularly valuable for fine-tuning and generalizing large pre-trained models such as OpenAI's GPT-3.
## Key Principles
When using few-shot prompting, consider the following:
- **Number of examples**: A few-shot setting typically involves 2-10 examples (but can vary), depending on the clarity and complexity of the task.
- **Context and relevancy**: The examples should be relevant to the desired task and provide an adequate basis for shaping the model's output.
- **Balance**: Strive for a balance between too few examples (under-specification) and too many examples (repetition and over-specification).
## Examples & Tips
Consider the following example for a sentiment-analysis task using few-shot prompting. You provide some labeled input/output pairs to the model, which helps it understand your expectations:
```
The movie was fantastic! - Positive
I didn't enjoy the food at all. - Negative
Amazing vacation, I had a great time! - Positive
She looks upset and angry. - Negative
```
After providing these examples, introduce the query you want the model to analyze:
```
The book was hard to put down. - {sentiment_label}
```
This prompt structure assists the model in grasping the sentiment analysis task and increases the likelihood of getting the correct output (i.e., "Positive").
Remember to experiment with the number of examples and their content to find the optimal balance for your specific task. Additionally, you can use inline instructions to guide the model further, such as asking it to classify the sentiment of a given sentence.

@ -1 +1,24 @@
# Chain of thought
# Chain of Thought
In the world of prompt engineering, the **Chain of Thought** technique is an essential tool aimed at generating thoughtful and meaningful responses. By engaging the model in a step-by-step thinking process, this technique encourages the exploration of concepts, ideas, or problem-solving strategies in a sequential manner.
## How Does it Work?
This method involves breaking down a complex topic or issue into smaller, manageable segments that stimulate the logical progress of thought in the model, leading to a coherent and well-structured response. It's comparable to leading the model on a cognitive journey where ideas and concepts are connected in a logical and meaningful order.
## Example
To illustrate the application of the Chain of Thought technique, let's say we want the model to analyze the advantages and disadvantages of working from home.
Instead of asking a broad question like:
> "What are the advantages and disadvantages of working from home?"
We can approach the topic through a series of connected prompts:
- "List three reasons why people might prefer working from home."
- "For each reason you mentioned, explain the benefits and positive effects on the individual and/or the organization."
- "Now, consider the challenges of working from home. Identify three potential disadvantages or negative effects."
- "For each of these challenges, discuss how individuals and organizations can mitigate or address them."
By employing the Chain of Thought technique, we have directed the model to provide a thorough and systematic analysis of the subject in question, ultimately resulting in a more meaningful and accurate response.

@ -1 +1,20 @@
# Zeroshot chain of thought
# Zero Shot Chain of Thought
Zeroshot chain of thought is a prompting technique that encourages models to provide multi-step reasoning or follow a series of interconnected thoughts in order to tackle a given problem. This technique is particularly effective in tasks where the answer requires a reasoning process or depends on chaining several intermediate ideas together.
How to implement a zeroshot chain of thought prompt:
- Start by defining a clear initial question or problem that will serve as the starting point for the chain.
- Craft a prompt that not only asks the model to provide an answer to the initial question, but also requests that the model explain its reasoning step by step.
- Encourage the model to consider intermediate steps, possible alternatives, or connections between ideas explicitly in its response.
## Example
Suppose you want the model to explain how a solar panel works. A zeroshot chain of thought prompt could look like this:
```
Please explain the process of how a solar panel works, starting with sunlight hitting the panel's surface and ending with electricity being produced. Structure your response as a step-by-step chain of thought, taking care to clarify how each step leads to the next.
```
By designing prompts that explicitly request step-by-step reasoning, the zeroshot chain of thought technique can lead to more comprehensive and insightful answers that go beyond simple factual statements.

@ -1 +1,68 @@
# Least to most
# Least to Most Prompting
Least to Most prompting takes Chain of Thought (CoT) prompting a step further by first breaking a problem into sub problems then solving each one. It is a technique inspired by real-world educational strategies for children.
As in CoT prompting, the problem to be solved is decomposed in a set of subproblems that build upon each other. In a second step, these subproblems are solved one by one. Contrary to chain of thought, the solution of previous subproblems is fed into the prompt trying to solve the next problem.
## Example
Let's ask a slightly complicated customer service question:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
```
Output from the model would be:
```
Thank you for your inquiry. Unfortunately, since you purchased the shirt on March 1st, it is outside of our 30-day return policy. However, we do have a 50% discount on all shirts currently, so you could purchase two shirts with the same amount of store credit. Prices for our shirts range from $18-$100. Please let us know if you have any other questions.
```
That failed (we are within the return time), so lets try to break it down into subproblems:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
What subproblems must be solved before answering the inquiry?
```
Output from the model would be:
```
1. Determine if the customer is within the 30-day return window.
2. Calculate the amount of store credit the customer would receive if they returned the shirt.
3. Calculate the cost of two shirts at the current 50% discount.
4. Determine if the customer has enough store credit to purchase two shirts.
```
Let's try to solve the first subproblem:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originally $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
Determine if the customer is within the 30-day return window. Let's go step by step.
```
Output from the model would be:
```
Yes, the customer is within the 30-day return window. The customer purchased the shirt on March 1st, and today's date is March 29th, which is within the 30-day return window.
Therefore, the customer is eligible to return the shirt and receive store credit for the purchase. The customer can use the store credit to purchase two shirts at the current 50% discount.
```
By just solving the first subproblem, we were able to solve the entire problem. If GPT-3 did not return an answer immediately, we could have solved the next subproblem and so on until it did return an answer. Note that we use Let's go step by step.. The addition of this phrase is not always necessary, but it helps for this example.

@ -1 +1,30 @@
# Dual prompt
# Dual Prompt Approach
**Dual Prompt** is a technique that combines two or more prompts to generate more specific and meaningful responses. This approach can be used to elicit more detailed information or to narrow down the focus of a response.
For example, let's say you are writing a guide about SEO. You could ask AI to write it for you with the following prompt:
```
Write me a guide about SEO.
```
However, this prompt may result in a generic guide without giving you what you may need.
```
By adopting dual prompt, you will receive a more specific response that is tailored to your needs. For the above example, we could write our prompt in two prompts as follows:
```
I am writing a guide about SEO. Give me 10 key topics that I should cover in this guide.
```
Now you can give it a second prompt:
```
Write me a detailed guide about each of the points you gave above.
```
Or you could also combine these prompts into a single prompt as follows:
```
I am writing a guide about SEO. Take the 10 key topics about SEO and write a detailed introduction to each.
```

@ -1 +1,3 @@
# Combining techniques
# Combining Techniques
All the techniques we've covered so far are useful on their own, but they're even more powerful when combined. For example, you can combine "Role Prompting" and any other prompting technique e.g. Chain of Thought, Dual Prompt, etc. to get more specific responses.

@ -1 +1,27 @@
# Hallucinations
Hallucinations are a common pitfall in Language Model (LM) outputs. Essentially, they occur when the LM generates text that is factually incorrect, nonsensical, or disconnected from the input prompt. These hallucinations can be problematic in the generated text, as they can mislead the users or cause misunderstandings.
### Causes of Hallucinations
There are several factors contributing to hallucinations in LMs:
1. **Inherent limitations**: The training data for the LMs are massive, yet they still cannot contain the entire knowledge about the world. As a result, LMs have inherent limitations in handling certain facts or details, which leads to hallucinations in the generated text.
2. **Training data biases**: If the training data contains biases or errors, it may lead to hallucinations in the output as LMs learn from the data they've been exposed to.
3. **Token-based scoring**: The default behavior of many LMs, like GPT models, is to generate text based on token probabilities. Sometimes this can lead to high-probability tokens being selected even if it doesn't make sense with the given prompt.
### Mitigating Hallucinations
To reduce the occurrence of hallucinations in the generated text, consider the following strategies:
1. **Specify instructions**: Make the prompt more explicit with clear details and constraints. This can help guide the model to generate more accurate and coherent responses.
2. **Step-by-step approach**: Instead of asking the model to generate a complete response in one go, break down the task into smaller steps and iteratively generate the output. This can help in maintaining better control over the generated content.
3. **Model adjustments**: Tweak various parameters, such as `temperature` or `top_p`, to adjust the randomness and control of the generated text. Lower values will make the output more conservative, which can help reduce hallucinations.
4. **Validating and filtering**: Develop post-processing steps to validate and filter the generated text based on specific criteria or rules to minimize the prevalence of hallucinations in the output.
Remember that even with these strategies, it's impossible to completely eliminate hallucinations. However, being aware of their existence and employing methods to mitigate them can significantly improve the quality and reliability of LM-generated content.

@ -1 +1,35 @@
# Math
When working with language models, it's essential to understand the challenges and limitations when incorporating mathematics. In this section, we'll discuss some common pitfalls related to math in the context of prompt engineering and provide suggestions for addressing them.
## Numerical Reasoning Limitations
Language models like GPT-3 have limitations when it comes to numerical reasoning, especially with large numbers or complex calculations. They might not always provide accurate answers or interpret the numerical context correctly.
**Recommendation:** For tasks that require precise numerical answers or involve complex calculations, consider using specialized math software or verifying the model's output using other means.
## Ambiguous Math Questions
Ambiguous or ill-defined math questions are likely to receive incorrect or nonsensical answers. Vague inputs make it challenging for the model to understand the context and provide sensible responses.
**Recommendation**: Try to make math questions as clear and specific as possible. Provide sufficient context and use precise language to minimize ambiguities.
## Units and Conversion
Language models might not automatically take units into account or perform the necessary unit conversions when working with mathematical problems, which could result in incorrect answers.
**Recommendation**: Explicitly mention the desired units and, when needed, ask the model to perform unit conversions to ensure the output aligns with the expected format or measure.
## Incorrect Interpretation of Notation
Mathematics often uses specialized notation or symbols that the language model might misinterpret. Especially when inputting symbols or notation that differ from the standard plain text, the risk of misunderstanding increases.
**Recommendation**: Make sure to use clear and common notation when presenting math problems to the model. If necessary, explain the notation or provide alternative representations to minimize confusion.
## Building on Incorrect Responses
If a sequence of math problems depends on previous answers, the model might not correct its course after providing an incorrect response. This could cascade and result in multiple subsequent errors.
**Recommendation**: Be cautious when using the model's output as the basis for subsequent calculations or questions. Verify the correctness of the intermediate steps before proceeding.
By being aware of these math-related pitfalls and applying the recommendations, you can improve the effectiveness and accuracy of your prompts when engaging language models with mathematical tasks.

@ -1 +1,11 @@
# Prompt hacking
# Prompt Hacking
Prompt hacking is a term used to describe a situation where a model, specifically a language model, is tricked or manipulated into generating outputs that violate safety guidelines or are off-topic. This could include content that's harmful, offensive, or not relevant to the prompt.
There are a few common techniques employed by users to attempt "prompt hacking," such as:
1. **Manipulating keywords**: Users may introduce specific keywords or phrases that are linked to controversial, inappropriate, or harmful content in order to trick the model into generating unsafe outputs.
2. **Playing with grammar**: Users could purposely use poor grammar, spelling, or punctuation to confuse the model and elicit responses that might not be detected by safety mitigations.
3. **Asking leading questions**: Users can try to manipulate the model by asking highly biased or loaded questions, hoping to get a similar response from the model.
To counteract prompt hacking, it's essential for developers and researchers to build in safety mechanisms such as content filters and carefully designed prompt templates to prevent the model from generating harmful or unwanted outputs. Constant monitoring, analysis, and improvement to the safety mitigations in place can help ensure the model's output aligns with the desired guidelines and behaves responsibly.

@ -1 +1,31 @@
# Llm pitfalls
# Pitfalls of LLMs
## LLM Pitfalls
In this section, we'll discuss some of the common pitfalls that you might encounter when working with Language Models (LLMs), particularly in the context of prompt engineering. By understanding these pitfalls, you can more effectively develop prompts and avoid potential issues that may affect the performance and utility of your model.
### 1. Model Guessing Your Intentions
Sometimes, LLMs might not fully comprehend the intent of your prompt and may generate generic or safe responses. To mitigate this, make your prompts more explicit or ask the model to think step-by-step before providing a final answer.
### 2. Sensitivity to Prompt Phrasing
LLMs can be sensitive to the phrasing of your prompts, which might result in completely different or inconsistent responses. Ensure that your prompts are well-phrased and clear to minimize confusion.
### 3. Model Generating Plausible but Incorrect Answers
In some cases, LLMs might generate answers that sound plausible but are actually incorrect. One way to deal with this is by adding a step for the model to verify the accuracy of its response or by prompting the model to provide evidence or a source for the given information.
### 4. Verbose or Overly Technical Responses
LLMs, especially larger ones, may generate responses that are unnecessarily verbose or overly technical. To avoid this, explicitly guide the model by making your prompt more specific, asking for a simpler response, or requesting a particular format.
### 5. LLMs Not Asking for Clarification
When faced with an ambiguous prompt, LLMs might try to answer it without asking for clarification. To encourage the model to seek clarification, you can prepend your prompt with "If the question is unclear, please ask for clarification."
### 6. Model Failure to Perform Multi-part Tasks
Sometimes, LLMs might not complete all parts of a multi-part task or might only focus on one aspect of it. To avoid this, consider breaking the task into smaller, more manageable sub-tasks or ensure that each part of the task is clearly identified in the prompt.
By being mindful of these pitfalls and implementing the suggested solutions, you can create more effective prompts and optimize the performance of your LLM.

@ -1 +1,41 @@
# Debiasing
# Prompt Debiasing
Debiasing is the process of reducing bias in the development and performance of AI language models, such as OpenAI’s GPT-3. When constructing prompts, it's important to address existing biases and assumptions that may be inadvertently incorporated into the model due to training data or other factors. By considering debiasing, we aim to promote fairness, neutrality, and inclusiveness in AI-generated responses.
## Why is Debiasing Important?
AI models can absorb various biases from their diverse training data, including but not limited to:
- Gender bias
- Racial bias
- Ethnic bias
- Political bias
These biases may result in unfair, offensive, or misleading outputs. As prompt engineers, our responsibility is to create prompts that minimize the unintentional effects of such biases in the responses generated by the model.
## Key Strategies for Debiasing
Here are a few strategies that can help you address biases in your prompts:
1. **Objective Wording**: Use objective language and avoid making assumptions about race, gender, ethnicity, nationality, or any other potentially biased characteristics.
2. **Equitable Representation**: Ensure prompts represent diverse perspectives and experiences, so that the model learns to generate responses that are fair and unbiased.
3. **Counter-balancing**: If a bias is unavoidable due to the context or nature of the prompt, consider counter-balancing it by providing an alternative perspective or side to the argument.
4. **Testing and Iterating**: Continuously test and iterate on your prompts, seeking feedback from a diverse group of reviewers to identify and correct potential biases.
## Examples of Debiasing
Here's an example to illustrate debiasing in prompt engineering:
### Biased Prompt
*Who are some popular male scientists?*
This prompt assumes that scientists are more likely to be men. It also reinforces the stereotype that scientific achievements are primarily attributed to male scientists.
### Debiased Prompt
*Who are some popular scientists from diverse backgrounds and genders?*
This prompt removes any implicit gender bias and encourages a more inclusive list of scientists, showcasing different backgrounds and genders while maintaining the focus on scientific achievements.
By incorporating debiasing strategies into your prompt engineering process, you promote fairness, accountability, and neutrality in AI-generated content, supporting a more inclusive and ethical AI environment.

@ -1 +1,17 @@
# Ensembling
# Prompt Ensembling
Ensembling is a technique used to improve the reliability and accuracy of predictions by combining multiple different models, essentially leveraging the 'wisdom of the crowd'. The idea is that combining the outputs of several models can cancel out biases, reduce variance, and lead to a more accurate and robust prediction.
There are several ensembling techniques that can be used, including:
- **Majority voting**: Each model votes for a specific output, and the one with the most votes is the final prediction.
- **Weighted voting**: Similar to majority voting, but each model has a predefined weight based on its performance, accuracy, or other criteria. The final prediction is based on the weighted sum of all model predictions.
- **Bagging**: Each model is trained on a slightly different dataset, typically generated by sampling with replacement (bootstrap) from the original dataset. The predictions are then combined, usually through majority voting or averaging.
- **Boosting**: A sequential ensemble method where each new model aims to correct the mistakes made by the previous models. The final prediction is a weighted combination of the outputs from all models.
- **Stacking**: Multiple base models predict the output, and these predictions are used as inputs for a second-layer model, which provides the final prediction.
Incorporating ensembling in your prompt engineering process can help produce more reliable results, but be mindful of factors such as increased computational complexity and potential overfitting. To achieve the best results, make sure to use diverse models in your ensemble and pay attention to tuning their parameters, balancing their weights, and selecting suitable ensembling techniques based on your specific problem and dataset.

@ -1 +1,23 @@
# Self evaluation
# LLM Self Evaluation
Self-evaluation is an essential aspect of the prompt engineering process. It involves the ability of an AI model to assess its own performance and determine the level of confidence it has in its responses. By properly incorporating self-evaluation, the AI can improve its reliability, as it will learn to identify its weaknesses and provide more accurate responses over time.
## Importance of Self-Evaluation
- **Identify weaknesses**: A good self-evaluation system helps the AI recognize areas where it provides less accurate or irrelevant responses, thus making it possible to improve the model during future training iterations.
- **Enhance reliability**: Users are more likely to trust an AI model that understands its limitations and adjusts its responses accordingly.
- **Continuous improvement**: As an AI model evaluates its performance, it becomes equipped with new information to learn from and improve upon, ultimately leading to better overall performance.
## Implementing Self-Evaluation
When incorporating self-evaluation into an AI model, you should consider the following elements:
1. **Objective metrics**: Develop quantitative measures that determine the quality of a response. Examples include accuracy, precision, recall, and F1 scores. These metrics can be used as part of the AI model's assessment process, offering a consistent way to gauge its performance.
2. **User feedback**: Collect user feedback on the AI model's responses, as users can provide valuable information about the quality and utility of the generated content. By allowing users to rate answers or report issues, the AI model can integrate this feedback into its self-evaluation process.
3. **Confidence levels**: Implement a system that measures the AI model's confidence in its responses. A confidence score can help users understand the reliability of a response, and it can also help the AI model refine its behavior when it has uncertainty. Make sure the confidence score is calculated based on factors such as data quality, algorithm performance, and historical accuracy.
4. **Error monitoring**: Establish a system that continuously monitors the AI model's performance by tracking errors, outliers, and other unexpected results. This monitoring process should inform the self-evaluation mechanism and help the AI model adapt over time.
By incorporating self-evaluation into your AI model, you can create a more reliable system that users will trust and appreciate. This, in turn, will lead to a greater sense of confidence in the AI model and its potential to solve real-world problems.

@ -1 +1,29 @@
# Calibrating llms
# Calibrating LLMs
In the context of prompt engineering, calibrating Long-running Language Models (LLMs) is an essential step to ensure reliability and accuracy in the model’s output. Calibration refers to the process of adjusting the model to produce responses that are consistent with human-defined ratings, rankings, or scores.
## Importance of Calibration
Calibrating the LLMs helps to:
1. Minimize system biases and improve response quality.
2. Increase the alignment between user expectations and the model's output.
3. Improve the interpretability of the model's behavior.
## Calibration Techniques
There are various techniques to calibrate LLMs that you can explore, including:
1. **Prompt Conditioning**: Modifying the prompt itself to encourage desired behavior. This involves using explicit instructions or specifying the format of the desired response.
2. **Response Rankings**: Presenting the model with multiple potential responses and asking it to rank them by quality or relevance. This technique encourages the model to eliminate inappropriate or low-quality responses by assessing them against other possible answers.
3. **Model Debiasing**: Applying debiasing techniques, such as counterfactual data augmentation or fine-tuning the model with diverse, bias-mitigating training data.
4. **Temperature Adjustment**: Dynamically controlling the randomness or 'temperature' parameter during the inference to balance creativity and coherence of the output.
### Iterative Calibration
Calibration should be an iterative process, where improvements are consistently monitored and further adjustments made based on the data collected from users. Continual learning from user interactions can help increase the model's overall reliability and maintain its performance over time.
Remember, calibrating LLMs is an essential part of creating reliable, high-quality language models that effectively meet user needs and expectations. Through prompt conditioning, response ranking, model debiasing, temperature adjustment, and iterative improvements, you can successfully achieve well-calibrated and reliable LLMs.

@ -1 +1,17 @@
# Temperature
Temperature is an important setting in the Language Models (LMs), specifically for the fine-tuning process. It refers to the "temperature" parameter in the softmax function of the language model. Adjusting the temperature can influence the randomness or conservativeness of the model's output.
## Role of Temperature
The temperature controls the model's level of creativity and boldness in generating text. A lower temperature value makes the model more conservative, sticking closely to the patterns it has learned from the training data. Higher temperature values encourage the model to explore riskier solutions by allowing less likely tokens to be more probable.
## Practical Uses
When fine-tuning an LM, you can regulate its behavior by adjusting the temperature:
- **Lower temperature values** (e.g., 0.2 or 0.5): The model will be more focused on phrases and word sequences that it learned from the training data. The output will be less diverse, but may lack novelty or creativity. Suitable for tasks where conservativeness is important, such as text summarization or translation.
- **Higher temperature values** (e.g., 1.0 or 2.0): The model will generate more creative outputs with innovative combinations of words. However, it may produce less coherent or contextually improper text. Useful for tasks where exploration and distinctiveness are required, like creative writing or brainstorming.
Experimenting with various temperature values can lead to finding the optimal balance between creativity and coherence, depending on the specific task and desired output.

@ -1 +1,20 @@
# Top p
# Top P Sampling
Top P, also known as "nucleus sampling," is a method that provides a more dynamic way to control the randomness of a model's generated output. It improves the trade-off between quality and diversity in text generation.
## How Top P Works?
Instead of picking the top K tokens with the highest probability like in Top K sampling, Top P sampling picks a number of tokens whose cumulative probability adds up to the given value of P. P is a probability mass with a range between 0 and 1. This means that the number of tokens picked will vary, automatically adapting to the distribution in a more granular way.
## Advantages of Top P
1. **More diverse and coherent outputs**: Top P sampling strikes a balance between overly conservative and highly random text. This creates more diverse and coherent outputs compared to Top K sampling.
2. **Adaptive threshold**: The dynamic nature of Top P sampling allows it to adapt to the token probability distribution, unlike Top K sampling which requires manual tuning of K.
3. **Prevents OOV tokens**: By gathering the tokens based on a cumulative probability threshold, Top P sampling effectively prevents selecting out-of-vocabulary (OOV) tokens.
## Adjusting Top P Value
- **Lower values**: Decreasing the value of P will result in more focused outputs, potentially at the expense of diversity.
- **Higher values**: Increasing the value of P will encourage the model to explore more diverse responses, possibly at the cost of coherence.
In practice, a commonly used Top P value is 0.9, but you should experiment with different values for P depending on your specific use-case and desired balance between diversity and coherence.

@ -1 +1,57 @@
# Other hyper params
# Other Hyperparameters
Aside from LLM settings, there are other hyperparameters that you may need to fine-tune in order to get the best results for your generated text. In this section, we will discuss some of these important hyperparameters and their effects on the model's performance.
## Temperature
The `temperature` parameter is a crucial hyperparameter that controls the randomness of the model's output. A high temperature value (e.g., 1.0) will make the model's output more random and creative, while a low value (e.g., 0.2) will make it more focused and deterministic.
Adjusting the temperature can significantly change the model's behavior, so it's essential to experiment with different settings to find the optimal balance between creativity and coherence for your specific use-case.
Example usage:
```
model.generate(prompt, temperature=0.8)
```
## Max Tokens
The `max_tokens` parameter allows you to limit the length of the model's output. It can be useful when you have specific constraints on the length of the generated text or when you want to avoid excessively long responses.
Specifying a lower value for `max_tokens` can help prevent the model from rambling on while still providing a useful output. However, setting it too low might result in the model's generated content being cut off and not making sense.
Example usage:
```
model.generate(prompt, max_tokens=50)
```
### Top P and Top K Sampling
Instead of using the default greedy sampling method, you might want to use more advanced sampling techniques like `top_p` (nucleus) sampling or `top_k` sampling. These methods provide better control over the diversity and quality of the potential generated tokens.
- `top_p`: Fraction of total probability mass to consider in the model's softmax output. A lower value will make the sampling process more strict, leading to a smaller set of high-probability tokens being considered.
- `top_k`: Limits the sampling process to only the k most probable tokens. Lower values enforce more determinism, and higher values allow for more diversity in the output.
You can experiment with different values for `top_p` and `top_k` to see which setting works best for your task.
Example usage:
```
model.generate(prompt, top_p=0.9, top_k=50)
```
## Number of Generated Texts
Sometimes, especially when using techniques like `top_p` or `top_k` sampling, it can be helpful to generate more than one output from the model. By generating multiple outputs, you can quickly review different variations of the generated text and choose the one that fits your requirements best.
You can set the `num_return_sequences` parameter to control the number of generated texts from the model.
Example usage:
```
model.generate(prompt, num_return_sequences=5)
```
In conclusion, adjusting these hyperparameters can significantly impact the behavior and performance of the text generation model. Therefore, it is essential to experiment with different settings to achieve the desired results for your specific use-case.

@ -1 +1,26 @@
# Llm settings
# LLM Settings
LLM (Language Model) settings play a crucial role in prompt engineering as they directly influence the behavior and output of the language model. In this section, we will discuss some of the important LLM settings that you need to consider while designing prompts.
## 1. Temperature
Temperature is a hyperparameter that controls the randomness of the output generated by the language model. A higher temperature will result in more diverse and creative responses, while a lower temperature will produce more focused and deterministic responses.
- **High Temperature (e.g., 1.0):** More random and creative outputs, higher chances of deviation from the topic, and potentially lower relevance.
- **Low Temperature (e.g., 0.2):** More deterministic outputs, focused on the provided input, and higher relevance.
## 2. Max Tokens
Max tokens determine the length of the output generated by the model. By controlling the number of tokens in the response, you can influence the verbosity of the language model.
- **Higher Max Tokens:** Longer responses, more details, and higher chances of going off-topic.
- **Lower Max Tokens:** Shorter responses, more concise, but might cut off important information.
## 3. Top-K Sampling
Top-K sampling is an approach to limit the number of predicted words that the language model can consider. By specifying a smaller K value, you can restrict the output to be focused and prevent the model from generating unrelated information.
- **High K Value:** Model considers more word options and might generate diverse content, but with a higher risk of going off-topic.
- **Low K Value:** Model has limited word options, leading to focused and related content.
These LLM settings give you control over the output of the language model, helping you steer the responses according to your requirements. Understanding the balance between these settings can improve the effectiveness of your prompt engineering efforts.

@ -1 +1,50 @@
# Prompt injection
# Prompt Injection
Prompt injection is a technique used in prompt engineering to fine-tune and manipulate model outputs more effectively. Instead of simply asking a question or giving a command, prompt injection involves carefully inserting additional context or instructions into the prompt to guide the model and achieve desired responses.
### Examples of Prompt Injection Techniques:
1. **Persistent context:** Repeat important context between turns, especially when the conversation is long or the model fails to retain necessary information.
```markdown
User: What is the capital of France?
AI: The capital of France is Paris.
User: How many people live there approximately?
AI: ...
```
With prompt injection:
```markdown
User: What is the capital of France?
AI: The capital of France is Paris.
User: How many people live in Paris, the capital of France, approximately?
AI: ...
```
2. **Instruct the model:** Explicitly instruct the model to provide a certain type of answer or deliver the output in a specific manner.
```markdown
User: Write a summary of the French Revolution.
AI: ...
```
With prompt injection:
```markdown
User: Write a brief, unbiased, and informative summary of the French Revolution focusing on major events and outcomes.
AI: ...
```
3. **Ask for step-by-step explanations:** Encourage the model to think step by step or weigh pros and cons before arriving at a conclusion.
```markdown
User: Should I buy this stock?
AI: ...
```
With prompt injection:
```markdown
User: What are the potential risks and benefits of buying this stock, and what factors should I consider before making a decision?
AI: ...
```
Keep in mind that prompt injection requires experimentation and iterating on your approach to find the most effective phrasing or context. By combining prompt injection with other prompt engineering techniques, you can enhance model performance and tailor outputs to meet specific user requirements.

@ -1 +1,23 @@
# Prompt leaking
# Prompt Leaking
Prompt leaking is a phenomenon that occurs when the model starts incorporating or internalizing the assumptions and biases present in the prompt itself. Instead of focusing on providing accurate, unbiased information or response, the model might end up reinforcing the inherent biases in the question, leading to results that are not useful, controversial, or potentially harmful.
## Causes
1. **Inherent Biases**: GPT-3 and other state-of-the-art language models are trained on large and diverse datasets. Unfortunately, these datasets also contain many biases that can be embedded into the models, and may come into play during the response generation process.
2. **Biased Prompt**: When the prompt provided by the user contains biased phrases or loaded questions, it can influence the model unintentionally and cause prompt leaking. This kind of bias may lead the model to produce an undesired output conforming to the assumptions in the question.
## Prevention and Strategies
To prevent or mitigate prompt leaking, consider these strategies:
1. **Neutralize the prompt**: Try to rephrase the prompt in a more neutral way or remove any biases present, focusing the question on the desired information.
2. **Counter biases**: If you know a particular bias is involved, build a prompt that counteracts the bias and pushes the model towards a more balanced response.
3. **Use Step-by-Step Instruction**: Guide the model step-by-step or ask the model to think through the reasoning before answering the main question. This can help you steer the model towards the desired response.
4. **Iterative Refinement**: You can adopt an iterative approach to ask the model to improve or further clarify its response, enabling it to rectify potential biases in prior responses.
5. **Model Understanding**: Enhance your understanding of the model's behavior and identify its strengths and weaknesses. This can help you better prepare the prompt and avoid biases.

@ -1 +1,7 @@
# Jailbreaking
Jailbreaking, in the context of prompt engineering, is a technique used to enable the assistant to provide outputs that may be considered beyond its normal limitations. This is often compared to the process of jailbreaking a smartphone, which allows users to unlock certain restrictions and access additional features that would not be otherwise available.
Jailbreaking the assistant involves carefully designing the prompt to encourage the AI to act resourcefully and generate information which can be both creative and useful. Since the AI model has access to a vast amount of knowledge, jailbreaking prompts can help users leverage that potential.
Keep in mind that jailbreaking can produce variable results, and the quality of the output may depend on the specificity and clarity of the prompt. Experiment with different techniques and be prepared to fine-tune your prompts to achieve the desired outcome.

@ -1 +1,20 @@
# Defensive measures
# Defensive Measures
As the author of this guide, I want to ensure that you're able to implement defensive measures when prompt hacking. These measures are crucial as they protect against undesired result manipulations and help maintain the authenticity of generated responses. In this section, we'll briefly summarize the key defensive strategies for prompt engineering.
## 1. Mitigate risks through system design
To minimize the chances of prompt hacking, it's essential to design your system robustly. Some best practices include proper access control, limiting user input types, and using monitoring and logging tools to detect anomalies in query patterns.
## 2. Sanitize user input
Before processing user inputs, ensure that you sanitize and validate them. This step helps in preventing harmful input patterns capable of triggering unintended system behaviors. Stringently filter out any input elements that could manipulate the system’s behavior, such as special characters or explicit control tokens.
## 3. Use rate-limiting
Set limits on the number of requests users can make within a specific timeframe. Rate-limiting techniques prevent prompt hacking attempts in progress by restricting excessive queries.
## 4. Encourage safe practices
Educate users on the necessity of responsible AI usage, as well as the potential consequences of prompting maliciously. Establish guidelines on system usage and consequences for non-compliance.
## 5. Continuously update system security
Keep your system up to date by continuously improving its security measures. Stay informed about the latest advances in AI safety research and incident reports, to learn from and implement better defenses against evolving threats.
By implementing these defensive measures, you'll be better equipped to safeguard your AI system against prompt hacking, ensuring the safety and reliability of your AI-generated responses.

@ -1 +1,34 @@
# Offensive measures
# Offensive Measures
Offensive measures in prompt hacking are techniques used to actively exploit a system, service, or user. These techniques often involve creatively manipulating or structuring prompts to elicit sensitive information or gain unauthorized access. While understanding these measures is important for prompt engineers to create secure systems, we must stress that these methods should not be exploited for illegal or unethical purposes. Here are some commonly used offensive measures:
### 1. Social Engineering
This technique involves exploiting human psychology to trick users into revealing valuable data or granting unauthorized access. Common methods include:
- **Phishing:** Crafting emails or prompts that imitate legitimate organizations and request sensitive data.
- **Pretexting:** Creating a convincing backstory or pretext to give the impression of a legitimate request or interaction.
- **Baiting:** Enticing users to reveal information or grant access with the promise of specific rewards.
### 2. Input Manipulation
Manipulating the input given to a prompt can lead to unintended results, including bypassing security constraints or retrieving valuable data. Some examples:
- **SQL Injection:** Crafting prompts that include SQL code that can exploit a vulnerability in the target system's database.
- **Cross-Site Scripting (XSS):** Injecting malicious scripts into trusted websites or platforms, which can compromise user data and security settings.
### 3. Brute Force
Repeatedly trying different input combinations in an attempt to crack a password or bypass security. This approach can be refined using:
- **Dictionary Attacks:** Attempting a collection of commonly used passwords, phrases, or patterns.
- **Credential Stuffing:** Exploiting previously compromised or leaked credentials by trying them on other services or platforms.
### 4. Exploiting Vulnerabilities
Taking advantage of known or newly discovered security flaws in software or hardware. Offenders often use these vulnerabilities to:
- **Execute Unauthorized Commands:** By exploiting a vulnerability, attackers can run commands without proper authorization.
- **Escalate Privileges:** Attackers may raise their access level, allowing them to access restricted data or features.
To protect against offensive measures, it's essential to implement strong security practices, stay informed about the latest threats, and share knowledge with fellow engineers.

@ -1 +1,25 @@
# Prompt hacking
# Prompt Hacking
Prompt hacking is a creative process of modifying, adjusting, or enhancing the original OpenAI model's prompt to generate more desired and effective outcomes. The goal is to exploit the model's inherent strengths and mitigate its weaknesses by using a combination of techniques and strategies.
Here are some key aspects of prompt hacking:
1. **Explicit Instructions**: Make your prompt as clear and explicit as possible. Specify important details, like the desired format or the expected content.
Example: Instead of 'summarize this article,' use 'write a 150-word summary of the key points in this article.'
2. **Questions over Statements**: It's often more effective to frame prompts as questions than statements. The model tends to treat questions more like requests for information and generate responses accordingly.
Example: Instead of 'write a brief history of the Eiffel Tower,' use 'what is a brief history of the Eiffel Tower?'
3. **Step-by-Step Approach**: Break complex tasks into a series of smaller prompts. This can help in generating more focused and relevant responses by guiding the model to specific aspects.
Example: For an essay on climate change, start with prompts like 'what are the main causes of climate change?' and 'what are the primary impacts of climate change on human societies?'
4. **Debiasing**: Reduce potential biases present in model outputs by explicitly asking for neutral or unbiased information.
Example: Instead of 'explain the benefits of a plant-based diet, use 'what are the pros and cons of a plant-based diet?'
5. **Prompt Iteration**: If the initial prompt doesn't produce the desired results, iterate and experiment with different variations and approaches.
Remember, prompt hacking is an art, and it requires practice, experimentation, and patience. Don't be afraid to get creative and test a variety of strategies to find the ones that work best for your desired outcomes.

@ -1 +1,29 @@
# Image prompting
# Image Prompting
## Image Prompting
Image prompting is a technique used in the process of developing and refining prompts to work with AI models, particularly those that are designed for processing and generating descriptions, captions, or other textual responses based on visual input. In this section, we will discuss the essentials of image prompting and provide some suggestions to create better image prompts.
### Key Concepts
When working with AI models that process images, it is crucial to understand that the model's performance often depends on the quality and relevance of the image prompt. The following key concepts will help you understand and craft effective image prompts:
1. **Descriptiveness**: Image prompts should encourage the AI model to generate a comprehensive and detailed response. For example, instead of simply asking for the scene description, you can ask the AI to describe the scene, including specific elements and their relationships.
2. **Context**: Image prompts should provide enough context to help the AI produce appropriate responses. You can include details like the setting or the environment when crafting the prompt.
3. **Precision**: Specify the level of detail you want in the response. The image prompt should be designed to elicit precise, relevant, and focused responses from the AI. Avoid ambiguous or overly general instructions.
### Tips for Creating Effective Image Prompts
To create compelling image prompts, consider the following tips:
1. **Start with a clear goal**: Define the information you want the AI to extract from the image and design the prompt accordingly.
2. **Adapt to the image content**: Understanding the image's subject and context will help you design prompts that yield better results. Make sure the prompts address the key aspects of the image.
3. **Test and iterate**: Experimentation is crucial to create better image prompts. Test your prompts with various images and fine-tune them based on the AI's responses.
4. **Balance simplicity and complexity**: While it's essential to provide clear instructions to the AI, avoid making the prompt overly complicated. Aim for a balance between simplicity and detail to elicit accurate and useful responses.
With these principles in mind, you'll be well on your way to mastering image prompting, harnessing the power of AI to generate valuable insights and engaging responses based on visual input. Happy prompting!
Loading…
Cancel
Save