fix: content typo

Typo. "xxplainability" should be "explainability".
pull/7987/head
Ivan Delgado authored 2 weeks ago, committed by GitHub
parent 6bbf384b73
commit 7b2a047046
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
1 changed file with 2 additions and 2 deletions
      src/data/roadmaps/prompt-engineering/content/104-llm-pitfalls/index.md

@@ -34,7 +34,7 @@ LLMs present several risks in the domain of question answering.
 - Harmful answers
 - Lack of contextual understanding
 - Privacy and security concerns
-- Lack of transparency and xxplainability
+- Lack of transparency and explainability

 ### Text summarization
@@ -73,4 +73,4 @@ Learn more from the following resources:
 - [@article@Limitations of LLMs: Bias, Hallucinations, and More](https://learnprompting.org/docs/basics/pitfalls)
 - [@guides@Risks & Misuses | Prompt Engineering Guide](https://www.promptingguide.ai/risks)
 - [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/)
-- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
+- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
