Improve Prompt Engineering - Basic LLM & Prompt Introduction: Links (#7639)

* 📃 docs, data (Image Prompting) Update Topic/Sub Topics - In Place Edits.

- intent: Update topic from May 2023 to Oct 2024

- data: src/data/roadmaps/prompt-engineering/content/

- modify: 10x .md files
---

Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>

* 📃 docs, data (Prompt Engineering Roadmap) Basic Concepts - In Place Edits.

- changes: single paragraphs (74-125 words)
- concerns: if made any more concise, topics lose fidelity, meaning and utility.

- data: src/data/roadmaps/prompt-engineering/content/
    - 📂 100-basic-llm

- modify: Topic
    - update content:
        - index.md
        - 100-what-are-llms.md
        - 101-llm-types.md
        - 102-how-llms-built.md
---

Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>

* 📃 docs: (Prompt Eng.) Basic LLM Concepts - New Links.

- intent: Update topic from May 2023 to Oct 2024
   - 📂 100-basic-llm

- modify topics:
    - add links
        - 100-what-are-llms.md
        - 101-llm-types.md
        - 102-how-llms-built.md

---

Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>

* docs: (Prompt Eng.) Prompting Introduction - New Links.

- intent: Update topic from May 2023 to Oct 2024
   - 📂 101-prompting-introduction

- modify topics:
    - add links
        - index.md
        - 100-basic-prompting.md
        - 101-need-for-prompting.md

---

Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>
Branch: pull/7646/head
Charles J. Fowler committed 3 weeks ago via GitHub
Parent: 44f1b01da3
Commit: cf5a7d055a
Files changed:
- src/data/roadmaps/prompt-engineering/content/100-basic-llm/100-what-are-llms.md (6 changes)
- src/data/roadmaps/prompt-engineering/content/100-basic-llm/101-llm-types.md (6 changes)
- src/data/roadmaps/prompt-engineering/content/100-basic-llm/102-how-llms-built.md (6 changes)
- src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/100-basic-prompting.md (6 changes)
- src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/101-need-for-prompting.md (5 changes)
- src/data/roadmaps/prompt-engineering/content/101-prompting-introduction/index.md (3 changes)

@@ -6,6 +6,12 @@ LLMs have the ability to achieve state-of-the-art performance in multiple Natural
As an example, OpenAI's GPT-3 is a prominent LLM that has gained significant attention due to its capability to generate high-quality text and perform a variety of language tasks with minimal fine-tuning.
Learn more from the following resources:
- [@roadmap.sh@Introduction to LLMs](https://roadmap.sh/guides/introduction-to-llms)
- [@article@Large language model](https://en.wikipedia.org/wiki/Large_language_model)
- [@video@Intro to Large Language Models](https://www.youtube.com/watch?v=zjkBMFhNj_g)
- [@video@Large Language Model Operations (LLMOps) Explained](https://www.youtube.com/watch?v=cvPEiPt7HXo)
- [@video@How Large Language Models Work](https://youtu.be/5sLYAQS9sWQ)
- [@feed@Explore top posts about LLM](https://app.daily.dev/tags/llm?ref=roadmapsh)

@@ -17,3 +17,9 @@ Instruction Tuned LLMs = Base LLMs + Further Tuning + RLHF
```
To build an Instruction Tuned LLM, a Base LLM is taken and is further trained using a large dataset covering sample "Instructions" and how the model should perform as a result of those instructions. The model is then fine-tuned using a technique called "Reinforcement Learning with Human Feedback" (RLHF), which allows the model to learn from human feedback and improve its performance over time.
Learn more from the following resources:
- [@article@Understanding AI Models: Base Language Learning Models vs. Instruction Tuned Language Learning Models - Olivier Mills](https://oliviermills.com/articles/understanding-ai-models-base-language-learning-models-vs-instruction-tuned-language-learning-models)
- [@video@Why Are There So Many Foundation Models?](https://www.youtube.com/watch?v=QPQy7jUpmyA)
- [@video@How to Pick the Right AI Foundation Model](https://www.youtube.com/watch?v=pePAAGfh-IU)
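The "Base LLM + Further Tuning + RLHF" formula quoted in the hunk above can be sketched as a pipeline of three stages. This is a toy illustration of the stages and their order only, not a real training loop; every function here is invented for the example:

```python
def base_llm(corpus: str) -> dict:
    # Pre-training: learn from raw text (toy stand-in: just collect a vocabulary).
    return {"vocab": sorted(set(corpus.split()))}

def instruction_tune(model: dict, instruction_pairs: list) -> dict:
    # Supervised fine-tuning on (instruction, ideal response) examples.
    return dict(model, instructions=list(instruction_pairs))

def rlhf(model: dict, human_feedback: dict) -> dict:
    # Reinforcement Learning with Human Feedback: nudge the model toward
    # responses that humans rated highly (toy: just record the ratings).
    return dict(model, preferences=dict(human_feedback))

# The stages compose in the order the formula gives them.
tuned = rlhf(
    instruction_tune(base_llm("hello world"), [("greet", "Hello!")]),
    {"greet": 1.0},
)
print(sorted(tuned))  # → ['instructions', 'preferences', 'vocab']
```

The point of the sketch is that each stage takes the previous stage's model and adds to it; an instruction-tuned model is never trained from scratch.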

@@ -9,3 +9,9 @@ On a high level, training an LLM model involves three steps i.e. data collection
- **Evaluation**: The final step is to evaluate the performance of the model to see how well it performs on various tasks such as question answering, summarization, translation, etc.
The output from the training pipeline is an LLM model, which is simply the parameters or weights that capture the knowledge learned during the training process. These parameters or weights are typically serialized and stored in a file, which can then be loaded into any application that requires language processing capabilities, e.g. text generation, question answering, etc.
Learn more from the following resources:
- [@article@What is LLM & How to Build Your Own Large Language Models?](https://www.signitysolutions.com/blog/how-to-build-large-language-models)
- [@article@Large language model](https://en.wikipedia.org/wiki/Large_language_model)
- [@video@Five Steps to Create a New AI Model](https://youtu.be/jcgaNrC4ElU)
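The hunk above says the training pipeline's output is just serialized parameters that any application can load back. A minimal sketch of that round trip, using a four-number dict as a stand-in for what would really be billions of weights (the file name and key names are illustrative, not from the commit):

```python
import json
import os
import tempfile

# Toy stand-in for trained parameters.
weights = {"embedding.w": [0.1, -0.2], "lm_head.w": [0.3, 0.05]}

# The training pipeline's output is serialized to a file...
path = os.path.join(tempfile.gettempdir(), "toy_llm_weights.json")
with open(path, "w") as f:
    json.dump(weights, f)

# ...and any application needing language capabilities loads it back.
with open(path) as f:
    loaded = json.load(f)

print(loaded == weights)  # → True
```

Real frameworks use binary formats (e.g. safetensors or pickle-based checkpoints) rather than JSON, but the load-from-file idea is the same.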

@@ -26,3 +26,9 @@ Write me an introductory guide about Prompt Engineering.
```
However, using plain text as prompts, i.e. without using any best practices, you may not be able to fully utilise the power of LLMs. That's where "Prompt Engineering", or knowing the best practices for writing better prompts and getting the most out of LLMs, comes in.
- [@article@Basics of Prompting | Prompt Engineering Guide](https://www.promptingguide.ai/introduction/basics)
- [@article@Prompting Basics](https://learnprompting.org/docs/basics/prompting)
- [@official@Prompt engineering - OpenAI API](https://platform.openai.com/docs/guides/prompt-engineering)
- [@official@Prompt engineering overview - Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)
- [@course@Introduction to Prompt Engineering (Playlist)](https://youtube.com/playlist?list=PLYio3GBcDKsPP2_zuxEp8eCulgFjI5a3g&si=n3Ot-tFECp4axL8L)

@@ -24,4 +24,7 @@ Prompts can help reduce inaccuracies and ambiguities in the AI's responses. By p
In conclusion, the need for prompting stems from its role in guiding AI model behavior, improving text quality and relevance, eliciting a specific output, aligning AI and human intent, and reducing inaccuracies and ambiguity in generated content. By understanding and mastering the art of prompting, users can unlock the true potential of AI language models.
- [@article@Prompting Basics](https://learnprompting.org/docs/basics/prompting)
- [@video@AI prompt engineering: A deep dive](https://youtu.be/T9aRN5JkmL8?si=3uW2BQuNHLcHjqTv)
- [@video@What is Prompt Tuning?](https://www.youtube.com/watch?v=yu27PWzJI_Y)
- [@article@What is Prompt Engineering? A Detailed Guide For 2024](https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication)

@@ -24,4 +24,5 @@ Hello, how are you?
But it's one of the best practices to be clear and use delimiters to separate the content in the prompt from the instructions. You will learn more about it in the "Best Practices" nodes of the roadmap.
- [@article@Basic Prompting - Learn Prompting](https://learnprompting.org/docs/basics/intro)
- [@article@Basics of Prompting - Prompt Engineering Guide](https://www.promptingguide.ai/introduction/basics)