doc: fix typo (#4889)

typo fix
pull/5008/head
Ahmed Abdul Saad 11 months ago committed by GitHub
parent 85214da400
commit 4d6d943b4e
      src/data/roadmaps/prompt-engineering/content/100-basic-llm/102-how-llms-built.md

@@ -1,6 +1,6 @@
# How are LLMs Built?
-On a high level, training an LLM model involves thre steps i.e. data collection, training and evaluation.
+On a high level, training an LLM model involves three steps i.e. data collection, training and evaluation.
- **Data Collection** The first step is to collect the data that will be used to train the model. The data can be collected from various sources such as Wikipedia, news articles, books, websites etc.
@@ -8,4 +8,4 @@ On a high level, training an LLM model involves thre steps i.e. data collection,
- **Evaluation**: The final step is to evaluate the performance of the model to see how well it performs on various tasks such as question answering, summarization, translation etc.
The output from the training Pipeline is an LLM model which is simply the parameters or weights which capture the knowledge learned during the training process. These parameters or weights are typically serialized and stored in a file, which can then be loaded into any application that requires language processing capabilities e.g. text generation, question answering, language processing etc.
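A note on the last paragraph of the changed file: the "parameters or weights ... serialized and stored in a file" step is what most frameworks expose as saving and loading a state dict or checkpoint. Below is a minimal sketch, assuming PyTorch is available and using a toy two-layer model and a hypothetical file name `llm_weights.pt` in place of a real LLM checkpoint:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a real model would be a transformer with
# billions of parameters, but the save/load mechanics are the same.
model = nn.Sequential(
    nn.Embedding(num_embeddings=1000, embedding_dim=64),
    nn.Linear(64, 1000),
)

# In a real pipeline, training happens here; afterwards the learned
# knowledge lives entirely in the parameter tensors.

# Serialize only the parameters (the state dict) to a file.
torch.save(model.state_dict(), "llm_weights.pt")

# Any application that needs language capabilities can rebuild the
# architecture and load the stored weights back in.
restored = nn.Sequential(
    nn.Embedding(num_embeddings=1000, embedding_dim=64),
    nn.Linear(64, 1000),
)
restored.load_state_dict(torch.load("llm_weights.pt"))
restored.eval()  # switch to inference mode for text generation, QA, etc.
```

The architecture and file name above are illustrative only; production LLMs are usually distributed as sharded checkpoint files, but the save-then-load flow the paragraph describes is the same.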
