diff --git a/src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md b/src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md
index de64987ad..3835b9a2f 100644
--- a/src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md
+++ b/src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md
@@ -34,7 +34,7 @@ LLMs present several risks in the domain of question answering.
 - Harmful answers
 - Lack of contextual understanding
 - Privacy and security concerns
-- Lack of transparency and xxplainability
+- Lack of transparency and explainability
 
 ### Text summarization
 
@@ -73,4 +73,4 @@ Learn more from the following resources:
 - [@article@Limitations of LLMs: Bias, Hallucinations, and More](https://learnprompting.org/docs/basics/pitfalls)
 - [@guides@Risks & Misuses | Prompt Engineering Guide](https://www.promptingguide.ai/risks)
 - [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/)
-- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
\ No newline at end of file
+- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)