Improve Prompt Engineering - Pitfalls of LLMs - Content & Links (#7666)
* 📃 docs, data (Image Prompting) Update Topic/Sub Topics - In Place Edits. - intent: Update topic from May 2023 to Oct 2024 - data: src/data/roadmaps/prompt-engineering/content/ - modify - 10X .ms --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * 📃 docs, data (Prompt Engineering Roadmap) Basic Concepts - In Place Edits. - changes: single paragraphs (74-125 words)> - concerns: if any more concise, topics looses fidelity, meaning and utility. - data: src/data/roadmaps/prompt-engineering/content/ - 📂 100-basic-llm - modify: Topic - update content: - index.md - 100-what-are-llm.md - 101-llm-types.md - 102-how-llms-built.md --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * 📃 docs: (Prompt Eng.) Basic LLM Concepts - New Links. - intent: Update topic from May 2023 to Oct 2024 - 📂 100 basic-llm - modify topics: - add links - 100-what-are-llms.md - 101-types-llms.md - 102-how-llms-are-bilt.md BREAKING CHANGE: ❌ --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * docs: (Prompt Eng.) Prompting Introduction - New Links. - intent: Update topic from May 2023 to Oct 2024 - 📂 101-prompting-introduction - modify topics: - add links - index.md - 100-basic-prompting.md - 101-need-for-prompting.md BREAKING CHANGE: ❌ --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * 📃 docs: (Prompt Eng.) Real World Uses - Content & Links. - intent: - Update topic and links from May 2023 to Oct 2024. - Realword use cases are dynamic and evolving. - Remodelled existing examples. - data: src/data/roadmaps/prompt-engineering/content/ - 📂 103-real-world - modify: Content Improve, 1st paragraph. - modify: Expanded Content paragraphs - index.md - 100-structured-data.md - 101-inferring.md - 102-writing-emails.md - 103-coding-assistance.md - 104-study-buddy.md - 105-designing-chatbots.md - modify: Links New - index.md - 100-structured-data.md - 101-inferring.md - 102-writing-emails.md - 103-coding-assistance.md - 104-study-buddy.md - 105-designing-chatbots.md BREAKINGCHANGE: ❌ --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * 📃 docs: (Prompt Eng.) LLM Pitfalls - Links. - intent: Insert Links from May 2023 to Oct 2024 - data: src/data/roadmaps/prompt-engineering/content/ - 📂 104-llm-pitfalls - modify: Links New - index.md - 100-citing-sources.md - 101-bias.md - 102-halluncinations.md - 103-math.md - 104-prompt-hacking.md - modify: Copy Refresh - index.md - 100-citing-sources.md - 101-bias.md - 102-halluncinations.md - 103-math.md - 104-prompt-hacking.md BREAKINGCHANGE: ❌ --- Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com> * Apply suggestions from code review spacing and styling fixes --------- Co-authored-by: Dan <daniel.s.holdsworth@gmail.com>7664-python-roadmap-need-to-include-jinja2-template-engine
parent
779935cc60
commit
f69130e918
6 changed files with 95 additions and 47 deletions
@ -1,5 +1,8 @@ |
|||||||
# Citing Sources |
# Citing Sources |
||||||
|
|
||||||
LLMs for the most part cannot accurately cite sources. This is because they do not have access to the Internet, and do not exactly remember where their information came from. They will frequently generate sources that look good, but are entirely inaccurate. |
As advancements have been made in the ability of Large Language Models (LLMs) to cite sources — particularly through realtime API access, search-augmented generation and specialized training — significant limitations persist. LLMs continue to struggle with hallucinations, generating inaccurate or fictitious citation. Many LLM lack real-time API access, which hampers their ability to provide up-to-date information or are limited by their knowledge cut off dates. They sometimes cannot independently verify sources or fully grasp the contextual relevance of citations, raising concerns regarding plagiarism and intellectual property. To address these challenges, ongoing efforts focus on improving realtime retrieval (RAG) methods, enhancing training, and integrating human oversight to ensure accuracy in citations. |
||||||
|
|
||||||
Strategies like search augmented LLMs (LLMs that can search the Internet and other sources) can often fix this problem though. |
Learn more from the following resources: |
||||||
|
|
||||||
|
- [@guides@Why Don’t Large Language Models Share URL References in Their Responses](https://medium.com/@gcentulani/why-dont-large-language-models-share-url-references-in-their-responses-bf427e513861) |
||||||
|
- [@article@Effective large language model adaptation for improved grounding](https://research.google/blog/effective-large-language-model-adaptation-for-improved-grounding/) |
@ -1,4 +1,11 @@ |
|||||||
# Bias |
# Bias |
||||||
|
|
||||||
LLMs are often biased towards generating stereotypical responses. Even with safe guards in place, they will sometimes say sexist/racist/homophobic things. Be careful when using LLMs in consumer-facing applications, and also be careful when using them in research (they can generate biased results). |
Bias in Large Language Models (LLMs) remains a significant challenge, with models often generating stereotypical or discriminatory responses despite advancements in mitigation techniques. These biases can manifest in various forms, including gender, racial, and cultural prejudices, potentially leading to underfitting or overfitting in model outputs. Recent studies have highlighted persistent biases in LLM-generated content, emphasizing the need for caution when deploying these models in consumer-facing applications or research settings. Efforts to address this issue include developing diverse training datasets, implementing regulatory frameworks, and creating new evaluation tools. However, the challenge remains substantial as LLMs continue to influence societal perceptions. Developers and users must be aware of these pitfalls to avoid reputational damage and unintended negative impacts on individuals or communities. |
||||||
|
|
||||||
|
Learn more from the following resources: |
||||||
|
|
||||||
|
- [@guides@Biases in Prompts: Learn how to tackle them](https://mindfulengineer.ai/understanding-biases-in-prompts/) |
||||||
|
- [@guides@Bias in AI: tackling the issues through regulations and standards](https://publicpolicy.ie/papers/bias-in-ai-tackling-the-issues-through-regulations-and-standards/) |
||||||
|
- [@article@What Is AI Bias?](https://www.ibm.com/topics/ai-bias) |
||||||
|
- [@article@What Is Algorithmic Bias?](https://www.ibm.com/think/topics/algorithmic-bias) |
||||||
|
- [@article@AI Bias Examples](https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples) |
@ -1,3 +1,7 @@ |
|||||||
# Math |
# Math |
||||||
|
|
||||||
LLMs are often bad at math. They have difficulty solving simple math problems, and they are often unable to solve more complex math problems. |
LLMs struggle with math. While they may have improved in solving simple math problems; they, however, coming up short when solving more complex math problems when minor semantic variation happens. This is particularly relevant in terms of mathematical reasoning. Despite advancements, they often fail at solving simple math problems and are unable to handle more complex ones effectively. Studies show that LLMs rely heavily on pattern recognition rather than genuine logical reasoning, leading to significant performance drops when faced with minor changes in problem wording or irrelevant information. This highlights a critical limitation in their reasoning capabilities. |
||||||
|
|
||||||
|
Learn more from the following resources: |
||||||
|
|
||||||
|
- [@article@Apple Says AI’s Math Skills Fall Short](https://www.pymnts.com/artificial-intelligence-2/2024/apple-says-ais-math-skills-fall-short/) |
@ -1,13 +1,11 @@ |
|||||||
# Prompt Hacking |
# Prompt Hacking |
||||||
|
|
||||||
Prompt hacking is a term used to describe a situation where a model, specifically a language model, is tricked or manipulated into generating outputs that violate safety guidelines or are off-topic. This could include content that's harmful, offensive, or not relevant to the prompt. |
Prompt hacking is a form of adversarial prompting where language models are manipulated to generate outputs that violate safety guidelines or are off-topic. Common techniques include manipulating keywords, exploiting grammar and negations, and using leading questions. To combat this, developers implement safety mechanisms such as content filters, continual analysis, and carefully designed prompt templates. As language models become more integrated into digital infrastructure, concerns about prompt injection, data leakage, and potential misuse have grown. In response, evolving defense strategies like prompt shields, enhanced input validation, and fine-tuning for adversarial detection are being developed. Continuous monitoring and improvement of these safety measures are crucial to ensure responsible model behaviour and output alignment with desired guidelines. |
||||||
|
|
||||||
There are a few common techniques employed by users to attempt "prompt hacking," such as: |
Learn more from the following resources: |
||||||
|
|
||||||
1. **Manipulating keywords**: Users may introduce specific keywords or phrases that are linked to controversial, inappropriate, or harmful content in order to trick the model into generating unsafe outputs. |
- [@article@Prompt Hacking](https://learnprompting.org/docs/category/-prompt-hacking) |
||||||
2. **Playing with grammar**: Users could purposely use poor grammar, spelling, or punctuation to confuse the model and elicit responses that might not be detected by safety mitigations. |
- [@article@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models) |
||||||
3. **Asking leading questions**: Users can try to manipulate the model by asking highly biased or loaded questions, hoping to get a similar response from the model. |
- [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/) |
||||||
|
- [@video@Explained: The OWASP Top 10 for Large Language Model Applications](https://www.youtube.com/watch?v=cYuesqIKf9A) |
||||||
To counteract prompt hacking, it's essential for developers and researchers to build in safety mechanisms such as content filters and carefully designed prompt templates to prevent the model from generating harmful or unwanted outputs. Constant monitoring, analysis, and improvement to the safety mitigations in place can help ensure the model's output aligns with the desired guidelines and behaves responsibly. |
- [@video@Artificial Intelligence: The new attack surface](https://www.youtube.com/watch?v=_9x-mAHGgC4) |
||||||
|
|
||||||
Read more about prompt hacking here [Prompt Hacking](https://learnprompting.org/docs/category/-prompt-hacking). |
|
@ -1,27 +1,76 @@ |
|||||||
# Pitfalls of LLMs |
# Pitfalls of LLMs |
||||||
|
|
||||||
LLMs are extremely powerful, but they are by no means perfect. There are many pitfalls that you should be aware of when using them. |
LLMs are extremely powerful. There are many pitfalls, safety challenges and risks that you should be aware of when using them. |
||||||
|
|
||||||
### Model Guessing Your Intentions |
### Language Translation |
||||||
|
|
||||||
Sometimes, LLMs might not fully comprehend the intent of your prompt and may generate generic or safe responses. To mitigate this, make your prompts more explicit or ask the model to think step-by-step before providing a final answer. |
There are several risks associated with LLMs in language translation. |
||||||
|
|
||||||
### Sensitivity to Prompt Phrasing |
- Inaccurate translations |
||||||
|
- Contextual misinterpretation |
||||||
|
- Biased translations |
||||||
|
- Deepfakes |
||||||
|
- Privacy and data security |
||||||
|
- Legal and regulatory compliance |
||||||
|
|
||||||
LLMs can be sensitive to the phrasing of your prompts, which might result in completely different or inconsistent responses. Ensure that your prompts are well-phrased and clear to minimize confusion. |
### Text Generation |
||||||
|
|
||||||
### Model Generating Plausible but Incorrect Answers |
Text generation is a powerful capability of LLMs but also introduces certain risks and challenges. |
||||||
|
|
||||||
In some cases, LLMs might generate answers that sound plausible but are actually incorrect. One way to deal with this is by adding a step for the model to verify the accuracy of its response or by prompting the model to provide evidence or a source for the given information. |
- Misinformation and fake news |
||||||
|
- Bias amplification |
||||||
|
- Offensive or inappropriate content |
||||||
|
- Plagiarism and copyright infringement |
||||||
|
- Lack of transparency |
||||||
|
- Privacy breaches |
||||||
|
|
||||||
### Verbose or Overly Technical Responses |
### Question Answering |
||||||
|
|
||||||
LLMs, especially larger ones, may generate responses that are unnecessarily verbose or overly technical. To avoid this, explicitly guide the model by making your prompt more specific, asking for a simpler response, or requesting a particular format. |
LLMs present several risks in the domain of question answering. |
||||||
|
|
||||||
### LLMs Not Asking for Clarification |
- Hallucination |
||||||
|
- Outdated information |
||||||
|
- Bias |
||||||
|
- Harmful answers |
||||||
|
- Lack of contextual understanding |
||||||
|
- Privacy and security concerns |
||||||
|
- Lack of transparency and xxplainability |
||||||
|
|
||||||
When faced with an ambiguous prompt, LLMs might try to answer it without asking for clarification. To encourage the model to seek clarification, you can prepend your prompt with "If the question is unclear, please ask for clarification." |
### Text summarization |
||||||
|
|
||||||
### Model Failure to Perform Multi-part Tasks |
Text summarization is a powerful application of LLMs but also introduces certain risks and challenge |
||||||
|
|
||||||
Sometimes, LLMs might not complete all parts of a multi-part task or might only focus on one aspect of it. To avoid this, consider breaking the task into smaller, more manageable sub-tasks or ensure that each part of the task is clearly identified in the prompt. |
- Information loss |
||||||
|
- Bias amplification |
||||||
|
- Contextual misinterpretation |
||||||
|
|
||||||
|
### Sentiment analysis |
||||||
|
|
||||||
|
Sentiment analysis, the process of determining a piece of text’s sentiment or emotional tone, is an application where LLMs are frequently employed. |
||||||
|
|
||||||
|
- Biased sentiment analysis |
||||||
|
- Cultural and contextual nuances |
||||||
|
- Limited domain understanding |
||||||
|
- Misinterpretation of negation and ambiguity |
||||||
|
- Overgeneralization and lack of individual variation |
||||||
|
|
||||||
|
### Code Assistance |
||||||
|
|
||||||
|
Code assistance and generation is an area where LLMs have shown promising capabilities. |
||||||
|
|
||||||
|
- Security vulnerabilities |
||||||
|
- Performance and efficiency challenges |
||||||
|
- Quality and reliability concerns |
||||||
|
- Insufficient understanding of business or domain context |
||||||
|
- Intellectual property concerns |
||||||
|
|
||||||
|
Read more from [Risks of Large Language Models: A comprehensive guide](https://www.deepchecks.com/risks-of-large-language-models/). |
||||||
|
|
||||||
|
Learn more from the following resources: |
||||||
|
|
||||||
|
- [@video@Risks of Large Language Models - IBM](https://www.youtube.com/watch?v=r4kButlDLUc) |
||||||
|
- [@article@Risks of Large Language Models: A comprehensive guide](https://www.deepchecks.com/risks-of-large-language-models/) |
||||||
|
- [@article@Limitations of LLMs: Bias, Hallucinations, and More](https://learnprompting.org/docs/basics/pitfalls) |
||||||
|
- [@guides@Risks & Misuses | Prompt Engineering Guide](https://www.promptingguide.ai/risks) |
||||||
|
- [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/) |
||||||
|
- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models) |
Loading…
Reference in new issue