finalize remaining 44 topics.

pull/8581/head
Vedansh 1 week ago
parent 23e7549ad1
commit 9645a8a1b4
  1. 5
      src/data/roadmaps/ai-agents/ai-agents.md
  2. 5
      src/data/roadmaps/ai-agents/content/perception--user-input@LU76AhCYDjxdBhpMQ4eMU.md
  3. 5
      src/data/roadmaps/ai-agents/content/personal-assistant@PPdAutqJF5G60Eg9lYBND.md
  4. 7
      src/data/roadmaps/ai-agents/content/planner-executor@6YLCMWzystao6byCYCTPO.md
  5. 6
      src/data/roadmaps/ai-agents/content/presence-penalty@Vd8ycw8pW-ZKvg5WYFtoh.md
  6. 6
      src/data/roadmaps/ai-agents/content/pricing-of-common-models@B8dzg61TGaknuruBgkEJd.md
  7. 6
      src/data/roadmaps/ai-agents/content/prompt-injection--jailbreaks@SU2RuicMUo8tiAsQtDI1k.md
  8. 6
      src/data/roadmaps/ai-agents/content/provide-additional-context@6I42CoeWX-kkFXTKAY7rw.md
  9. 5
      src/data/roadmaps/ai-agents/content/rag-agent@cW8O4vLLKEG-Q0dE8E5Zp.md
  10. 6
      src/data/roadmaps/ai-agents/content/rag-and-vector-databases@wkS4yOJ3JdZQE_yBID8K7.md
  11. 6
      src/data/roadmaps/ai-agents/content/ragas@YzEDtGEaMaMWVt0W03HRt.md
  12. 5
      src/data/roadmaps/ai-agents/content/react-reason--act@53xDks6JQ33fHMa3XcuCd.md
  13. 5
      src/data/roadmaps/ai-agents/content/reason-and-plan@ycPRgRYR4lEBQr_xxHKnM.md
  14. 5
      src/data/roadmaps/ai-agents/content/reasoning-vs-standard-models@N3yZfUxphxjiupqGpyaS9.md
  15. 6
      src/data/roadmaps/ai-agents/content/remote--cloud@dHNMX3_t1KSDdAWqgdJXv.md
  16. 7
      src/data/roadmaps/ai-agents/content/rest-api-knowledge@QtTwecLdvQa8pgELJ6i80.md
  17. 7
      src/data/roadmaps/ai-agents/content/safety--red-team-testing@63nsfJFO1BwjLX_ZVaPFC.md
  18. 11
      src/data/roadmaps/ai-agents/content/short-term--memory@M3U6RfIqaiut2nuOibY8W.md
  19. 8
      src/data/roadmaps/ai-agents/content/smol-depot@eWxQiBrxIUG2JNcrdfIHS.md
  20. 7
      src/data/roadmaps/ai-agents/content/specify-length-format-etc@wwHHlEoPAx0TLxbtY6nMA.md
  21. 7
      src/data/roadmaps/ai-agents/content/stopping-criteria@K0G-Lw069jXUJwZqHtybd.md
  22. 10
      src/data/roadmaps/ai-agents/content/streamed-vs-unstreamed-responses@i2NE6haX9-7mdoV5LQ3Ah.md
  23. 8
      src/data/roadmaps/ai-agents/content/structured-logging--tracing@zs6LM8WEnb0ERWpiaQCgc.md
  24. 5
      src/data/roadmaps/ai-agents/content/summarization--compression@jTDC19BTWCqxqMizrIJHr.md
  25. 7
      src/data/roadmaps/ai-agents/content/temperature@L1zL1GzqjSAjF06pIIXhy.md
  26. 6
      src/data/roadmaps/ai-agents/content/token-based-pricing@1fiWPBV99E2YncqdCgUw2.md
  27. 5
      src/data/roadmaps/ai-agents/content/tokenization@GAjuWyJl9CI1nqXBp6XCf.md
  28. 5
      src/data/roadmaps/ai-agents/content/tool-definition@qakbxB8xe7Y8gejC5cZnK.md
  29. 6
      src/data/roadmaps/ai-agents/content/tool-sandboxing--permissioning@UVzLGXG6K7HQVHmw8ZAv2.md
  30. 6
      src/data/roadmaps/ai-agents/content/top-p@icbp1NjurQfdM0dHnz6v2.md
  31. 6
      src/data/roadmaps/ai-agents/content/transformer-models-and-llms@ZF5_5Y5zqa75Ov22JACX6.md
  32. 6
      src/data/roadmaps/ai-agents/content/tree-of-thought@Nmy1PoB32DcWZnPM8l8jT.md
  33. 6
      src/data/roadmaps/ai-agents/content/tree-of-thought@hj1adjkG9nalXKZ-Youn0.md
  34. 6
      src/data/roadmaps/ai-agents/content/understand-the-basics-of-rag@qwVQOwBTLA2yUgRISzC8k.md
  35. 6
      src/data/roadmaps/ai-agents/content/unit-testing-for-individual-tools@qo_O4YAe4-MTP_ZJoXJHR.md
  36. 6
      src/data/roadmaps/ai-agents/content/use-examples-in-your-prompt@yulzE4ZNLhXOgHhG7BtZQ.md
  37. 6
      src/data/roadmaps/ai-agents/content/use-relevant-technical-terms@sUwdtOX550tSdceaeFPmF.md
  38. 7
      src/data/roadmaps/ai-agents/content/user-profile-storage@QJqXHV8VHPTnfYfmKPzW7.md
  39. 7
      src/data/roadmaps/ai-agents/content/web-scraping--crawling@5oLc-235bvKhApxzYFkEc.md
  40. 5
      src/data/roadmaps/ai-agents/content/web-search@kBtqT8AduLoYDWopj-V9_.md
  41. 6
      src/data/roadmaps/ai-agents/content/what-are-ai-agents@aFZAm44nP5NefX_9TpT0A.md
  42. 5
      src/data/roadmaps/ai-agents/content/what-are-tools@2zsOUWJQ8e7wnoHmq1icG.md
  43. 8
      src/data/roadmaps/ai-agents/content/what-is-agent-memory@TBH_DZTAfR8Daoh-njNFC.md
  44. 6
      src/data/roadmaps/ai-agents/content/what-is-prompt-engineering@Y8EqzFx3qxtrSh7bWbbV8.md

@ -23,12 +23,12 @@ seo:
title: 'AI Agents Roadmap - roadmap.sh'
description: 'Step by step guide to learn AI Agents in 2025. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.'
keywords:
- 'guide to learning ai agents'
- 'ai agents roadmap 2025'
- 'ai agents tutorial'
- 'ai agents for beginners'
- 'ai agents roadmap'
- 'ai agents learning path'
@ -44,6 +44,7 @@ seo:
- 'learn ai agents for development'
- 'become an ai agents expert'
- 'what is ai agents'
- 'what is ai agent'
relatedRoadmaps:
- 'ai-engineer'
- 'ai-data-scientist'

@ -1,3 +1,8 @@
# Perception / User Input
Perception, also called user input, is the first step in an agent loop. The agent listens and gathers data from the outside world. This data can be text typed by a user, spoken words, camera images, sensor readings, or web content pulled through an API. The goal is to turn raw signals into a clear, usable form. The agent may clean the text, translate speech to text, resize an image, or drop noise from sensor values. Good perception means the agent starts its loop with facts, not guesses. If the input is wrong or unclear, later steps will also fail. So careful handling of perception keeps the whole agent loop on track.
Visit the following resources to learn more:
- [@article@Perception in AI: Understanding Its Types and Importance](https://marktalks.com/perception-in-ai-understanding-its-types-and-importance/)
- [@article@What Is AI Agent Perception? - IBM](https://www.ibm.com/think/topics/ai-agent-perception)

@ -1,3 +1,8 @@
# Personal assistant
A personal assistant AI agent is a smart program that helps one person manage daily tasks. It can check a calendar, set reminders, and send alerts so you never miss a meeting. It can read emails, highlight key points, and even draft quick replies. If you ask a question, it searches trusted sources and gives a short answer. It can order food, book rides, or shop online when you give simple voice or text commands. Because it learns your habits, it suggests the best time to work, rest, or travel. All these actions run in the background, saving you time and reducing stress.
Visit the following resources to learn more:
- [@article@A Complete Guide on AI-powered Personal Assistants](https://medium.com/@alexander_clifford/a-complete-guide-on-ai-powered-personal-assistants-with-examples-2f5cd894d566)
- [@article@9 Best AI Personal Assistants for Work, Chat and Home](https://saner.ai/best-ai-personal-assistants/)

@ -1,3 +1,8 @@
# Planner Executor
A **planner-executor agent** is a type of AI agent that splits its work into two clear parts: planning and execution. The **planner** thinks ahead, taking a goal and breaking it down into a sequence of steps, ordering them in a logical and efficient manner. The **executor**, on the other hand, takes each planned step and carries it out, monitoring the results and reporting back to the planner. If something fails or the world changes, the planner may update the plan, and the executor follows the new steps. This modular approach allows the agent to handle complex tasks by dividing them into manageable parts, making it easier to debug, reuse plans, and maintain clear and consistent behavior.
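A minimal sketch of this split in Python; `make_plan` and `run_step` are hypothetical stand-ins for an LLM planning call and a tool invocation:

```python
# Minimal planner-executor loop (sketch). make_plan and run_step are
# hypothetical stand-ins for an LLM planning call and a tool invocation.

def make_plan(goal: str) -> list[str]:
    # In a real agent this would call an LLM to decompose the goal.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def run_step(step: str) -> bool:
    # In a real agent this would call a tool or API and report success.
    print(f"executing: {step}")
    return True

def planner_executor(goal: str) -> None:
    plan = make_plan(goal)
    while plan:
        step = plan.pop(0)
        if not run_step(step):
            # Execution failed: ask the planner for a fresh plan.
            plan = make_plan(goal)
    print("goal complete")

planner_executor("a project summary")
```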
Visit the following resources to learn more:
- [@article@Plan-and-Execute Agents](https://blog.langchain.dev/planning-agents/)
- [@article@Plan and Execute: AI Agents Architecture](https://medium.com/@shubham.ksingh.cer14/plan-and-execute-ai-agents-architecture-f6c60b5b9598)

@ -1,3 +1,9 @@
# Presence Penalty
Presence penalty is a setting you can adjust when you ask a large language model to write. It pushes the model to choose words it has not used yet. Each time a word has already appeared, the model gets a small score cut for picking it again. A higher penalty gives bigger cuts, so the model looks for new words and fresh ideas. A lower penalty lets the model reuse words more often, which can help with repeats like rhymes or bullet lists. Tuning this control helps you steer the output toward either more variety or more consistency.
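For example, in an OpenAI-style chat completion call the setting is a single request parameter (a sketch; the model name is a placeholder):

```python
# Sketch: raising presence_penalty to push the model toward new words.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": "Brainstorm 10 blog topics."}],
    presence_penalty=0.8,  # 0 disables the penalty; the range is -2.0 to 2.0
)
print(response.choices[0].message.content)
```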
Visit the following resources to learn more:
- [@article@Understanding Presence Penalty and Frequency Penalty](https://medium.com/@pushparajgenai2025/understanding-presence-penalty-and-frequency-penalty-in-openai-chat-completion-api-calls-2e3a22547b48)
- [@article@Difference between Frequency and Presence Penalties?](https://community.openai.com/t/difference-between-frequency-and-presence-penalties/2777)
- [@article@LLM Parameters Explained: A Practical Guide with Examples](https://learnprompting.org/blog/llm-parameters)

@ -1,3 +1,9 @@
# Pricing of Common Models
When you use a large language model, you usually pay by the amount of text it reads and writes, counted in “tokens.” A token is about four characters or three-quarters of a word. Providers list a price per 1,000 tokens. For example, GPT-3.5 Turbo may cost around $0.002 per 1,000 tokens, while GPT-4 is much higher, such as $0.03 to $0.06 for prompts and $0.06 to $0.12 for replies. Smaller open-source models like Llama-2 can be free to use if you run them on your own computer, but you still pay for the hardware or cloud time. Vision or audio models often have extra fees because they use more compute. When planning costs, estimate the tokens in each call, multiply by the price, and add any hosting or storage charges.
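The estimate itself is simple arithmetic; a sketch with placeholder prices, not current list prices:

```python
# Sketch: estimating the cost of one call. Prices are illustrative
# placeholders (USD per 1,000 tokens), not current list prices.
PRICES = {
    "small-model": {"input": 0.0005, "output": 0.0015},
    "large-model": {"input": 0.03, "output": 0.06},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# A 1,500-token prompt with a 500-token reply on the large model:
print(f"${estimate_cost('large-model', 1500, 500):.4f}")  # $0.0750
```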
Visit the following resources to learn more:
- [@official@OpenAI Pricing](https://openai.com/api/pricing/)
- [@article@Executive Guide To AI Agent Pricing](https://www.forbes.com/councils/forbesbusinesscouncil/2025/01/28/executive-guide-to-ai-agent-pricing-winning-strategies-and-models-to-drive-growth/)
- [@article@AI Pricing: How Much Does Artificial Intelligence Cost In 2025?](https://www.internetsearchinc.com/ai-pricing-how-much-does-artificial-intelligence-cost/)

@ -1,3 +1,9 @@
# Prompt Injection / Jailbreaks
Prompt injection and jailbreaking are closely related attacks that make an AI system break its own rules. An attacker hides special words or symbols inside normal-looking text. When the AI reads this text, it follows the hidden instructions instead of its safety rules. The attacker might force the AI to reveal private data, produce harmful content, or give wrong advice. This risk grows when the AI talks to other software or pulls text from the internet, because harmful prompts can slip in without warning. Good defenses include cleaning user input, setting strong guardrails inside the model, checking outputs for policy breaks, and keeping humans in the loop for high-risk tasks.
Visit the following resources to learn more:
- [@article@Prompt Injection vs. Jailbreaking: What's the Difference?](https://learnprompting.org/blog/injection_jailbreaking)
- [@article@Prompt Injection vs Prompt Jailbreak](https://codoid.com/ai/prompt-injection-vs-prompt-jailbreak-a-detailed-comparison/)
- [@article@How Prompt Attacks Exploit GenAI and How to Fight Back](https://unit42.paloaltonetworks.com/new-frontier-of-genai-threats-a-comprehensive-guide-to-prompt-attacks/)

@ -1,3 +1,9 @@
# Provide additional context
Provide additional context means giving the AI enough background facts, constraints, and goals so it can reply in the way you need. Start by naming the topic and the purpose of the answer. Add who the answer is for, the tone you want, and any limits such as length, format, or style. List key facts, data, or examples that matter to the task. This extra detail stops the model from guessing and keeps replies on target. Think of it like guiding a new teammate: share the details they need, but keep them short and clear.
Visit the following resources to learn more:
- [@article@What is Context in Prompt Engineering?](https://www.godofprompt.ai/blog/what-is-context-in-prompt-engineering)
- [@article@The Importance of Context for Reliable AI Systems](https://medium.com/mathco-ai/the-importance-of-context-for-reliable-ai-systems-and-how-to-provide-context-009bd1ac7189/)
- [@article@Context Engineering: Why Feeding AI the Right Context Matters](https://inspirednonsense.com/context-engineering-why-feeding-ai-the-right-context-matters-353e8f87d6d3)

@ -1,3 +1,8 @@
# RAG Agent
A RAG (Retrieval-Augmented Generation) agent mixes search with language generation so it can answer questions using fresh and reliable facts. When a user sends a query, the agent first turns that query into an embedding—basically a number list that captures its meaning. It then looks up similar embeddings in a vector database that holds passages from web pages, PDFs, or other text. The best-matching passages come back as context. The agent puts the original question and those passages into a large language model. The model writes the final reply, grounding every sentence in the retrieved text. This setup keeps the model smaller, reduces wrong guesses, and lets the system update its knowledge just by adding new documents to the database. Common tools for building a RAG agent include an embedding model, a vector store like FAISS or Pinecone, and an LLM connected through a framework such as LangChain or LlamaIndex.
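A toy sketch of the whole flow; the character-count "embedding" and in-memory search are stand-ins for a real embedding model and vector store:

```python
# Sketch of the RAG flow with toy stand-ins for the embedding model,
# vector store, and LLM, which would be real services in practice.

def embed(text: str) -> list[int]:
    # Toy embedding: character-frequency vector (real agents use a model).
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

DOCS = ["Cats sleep up to 16 hours a day.", "Paris is the capital of France."]

def search(query_vec: list[int], k: int = 1) -> list[str]:
    # Toy nearest-neighbour lookup by dot product.
    scored = [(sum(a * b for a, b in zip(query_vec, embed(d))), d) for d in DOCS]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

def answer(question: str) -> str:
    passages = search(embed(question))  # retrieve the best-matching chunk
    prompt = f"Context:\n{passages[0]}\n\nQuestion: {question}"
    return prompt  # a real agent would send this grounded prompt to an LLM

print(answer("What is the capital of France?"))
```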
Visit the following resources to learn more:
- [@article@What is RAG? - Retrieval-Augmented Generation AI Explained](https://aws.amazon.com/what-is/retrieval-augmented-generation/)
- [@article@What Is Retrieval-Augmented Generation, aka RAG?](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/)

@ -1,3 +1,9 @@
# RAG and Vector Databases
RAG, short for Retrieval-Augmented Generation, lets an AI agent pull facts from stored data each time it answers. The data sits in a vector database. In that database, every text chunk is turned into a number list called a vector. Similar ideas create vectors that lie close together, so the agent can find related chunks fast. When the user asks a question, the agent turns the question into its own vector, finds the nearest chunks, and reads them. It then writes a reply that mixes the new prompt with those chunks. Because the data store can hold a lot of past chats, documents, or notes, this process gives the agent a working memory without stuffing everything into the prompt. It lowers token cost, keeps answers on topic, and allows the memory to grow over time.
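The core database operation is nearest-neighbour search over vectors; a minimal illustration with cosine similarity and toy 3-dimensional vectors in place of real embeddings:

```python
# Minimal illustration of vector search: find the stored chunk whose
# vector lies closest (by cosine similarity) to the query vector.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.dist(a, [0] * len(a)) * math.dist(b, [0] * len(b)))

# Toy 3-d vectors standing in for real embeddings (often 768+ dimensions).
chunks = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]

best = max(chunks, key=lambda name: cosine(query, chunks[name]))
print(best)  # -> "refund policy"
```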
Visit the following resources to learn more:
- [@article@Understanding Retrieval-Augmented Generation (RAG) and Vector Databases](https://pureai.com/Articles/2025/03/03/Understanding-RAG.aspx)
- [@article@Build Advanced Retrieval-Augmented Generation Systems](https://learn.microsoft.com/en-us/azure/developer/ai/advanced-retrieval-augmented-generation)
- [@article@What Is Retrieval-Augmented Generation, aka RAG?](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/)

@ -1,3 +1,9 @@
# Ragas
Ragas is an open-source tool used to check how well a Retrieval-Augmented Generation (RAG) agent works. You give it the user question, the passages the agent pulled from a knowledge base, and the final answer. Ragas then scores the answer for things like correctness, relevance, and whether the cited passages really support the words in the answer. It uses large language models under the hood, so you do not need to write your own scoring rules. Results appear in a clear report that shows strong and weak spots in the pipeline. With this feedback you can change prompts, retriever settings, or model choices and quickly see if quality goes up. This makes testing RAG systems faster, repeatable, and less guess-based.
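A minimal sketch of an evaluation run in the style of the Ragas docs; metric names and dataset fields have changed between versions, so treat this as illustrative:

```python
# Sketch of a Ragas evaluation run; check the docs for your installed
# version, as metric names and dataset fields have changed over time.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

data = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [["Paris has been the capital of France since 987."]],
})

result = evaluate(data, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric scores between 0 and 1
```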
Visit the following resources to learn more:
- [@official@Ragas Documentation](https://docs.ragas.io/en/latest/)
- [@article@Evaluating RAG Applications with RAGAs](https://towardsdatascience.com/evaluating-rag-applications-with-ragas-81d67b0ee31a)
- [@opensource@explodinggradients/ragas](https://github.com/explodinggradients/ragas)

@ -1,3 +1,8 @@
# ReAct (Reason + Act)
ReAct is an agent pattern that makes a model alternate between two simple steps: Reason and Act. First, the agent writes a short thought that sums up what it knows and what it should try next. Then it performs an action such as calling an API, running code, or searching a document. The result of that action is fed back, giving the agent fresh facts to think about. This loop repeats until the task is done. By showing its thoughts in plain text, the agent can be inspected, debugged, and even corrected on the fly. The clear split between thinking and doing also cuts wasted moves and guides the model toward steady progress. ReAct works well with large language models because they can both generate the chain of thoughts and choose the next tool in the very same response.
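The loop itself is small; a sketch where `llm` and `run_tool` are hypothetical stand-ins for a model call and a tool dispatcher:

```python
# Sketch of the ReAct loop. llm and run_tool are hypothetical stand-ins
# for a model call and a tool dispatcher.

def react(goal: str, llm, run_tool, max_turns: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_turns):
        # Reason: the model writes a thought and picks the next action.
        step = llm(transcript + "Thought + Action (or FINISH: answer):")
        if step.startswith("FINISH:"):
            return step.removeprefix("FINISH:").strip()
        # Act: run the chosen tool and feed the observation back in.
        observation = run_tool(step)
        transcript += f"{step}\nObservation: {observation}\n"
    return "gave up"
```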
Visit the following resources to learn more:
- [@official@ReAct: Synergizing Reasoning and Acting in Language Models](https://react-lm.github.io/)
- [@article@ReAct Systems: Enhancing LLMs with Reasoning and Action](https://learnprompting.org/docs/agents/react)

@ -1,3 +1,8 @@
# Reason and Plan
Reason and Plan is the moment when an AI agent thinks before it acts. The agent starts with a goal and the facts it already knows. It looks at these facts and asks, “What do I need to do next to reach the goal?” It breaks the goal into smaller steps, checks if each step makes sense, and orders them in a clear path. The agent may also guess what could go wrong and prepare backup steps. Once the plan feels solid, the agent is ready to move on and take the first action.
Visit the following resources to learn more:
- [@official@ReAct: Synergizing Reasoning and Acting in Language Models](https://react-lm.github.io/)
- [@article@ReAct Systems: Enhancing LLMs with Reasoning and Action](https://learnprompting.org/docs/agents/react)

@ -1,3 +1,8 @@
# Reasoning vs Standard Models
Reasoning models break a task into clear steps and follow a line of logic, while standard models give an answer in one quick move. A reasoning model might write down short notes, check each note, and then combine them to reach the final reply. This helps it solve math problems, plan actions, and spot errors that simple pattern matching would miss. A standard model depends on patterns it learned during training and often guesses the most likely next word. That works well for everyday chat, summaries, or common facts, but it can fail on tricky puzzles or tasks with many linked parts. Reasoning takes more time and computer power, yet it brings higher accuracy and makes the agent easier to debug because you can see its thought steps. Many new AI agents mix both styles: they use quick pattern recall for simple parts and switch to step-by-step reasoning when a goal needs deeper thought.
Visit the following resources to learn more:
- [@official@ReAct: Synergizing Reasoning and Acting in Language Models](https://react-lm.github.io/)
- [@article@ReAct Systems: Enhancing LLMs with Reasoning and Action](https://learnprompting.org/docs/agents/react)

@ -1,3 +1,9 @@
# Remote / Cloud
Remote or cloud deployment places the MCP server on a cloud provider instead of a local machine. You package the server as a container or virtual machine, choose a service like AWS, Azure, or GCP, and give it compute, storage, and a public HTTPS address. A load balancer spreads traffic, while auto-scaling adds or removes copies of the server as demand changes. You secure the endpoint with TLS, API keys, and firewalls, and you send logs and metrics to the provider’s monitoring tools. This setup lets the server handle many users, updates are easier, and you avoid local hardware limits, though you must watch costs and protect sensitive data.
Visit the following resources to learn more:
- [@article@Edge AI vs. Cloud AI: Real-Time Intelligence vs. Centralized Processing](https://medium.com/@hassaanidrees7/edge-ai-vs-cloud-ai-real-time-intelligence-vs-centralized-processing-df8c6e94fd11)
- [@article@Cloud AI vs. On-premises AI](https://www.pluralsight.com/resources/blog/ai-and-data/ai-on-premises-vs-in-cloud)
- [@article@Cloud vs On-Premises AI Deployment](https://toxigon.com/cloud-vs-on-premises-ai-deployment)

@ -1 +1,8 @@
# REST API Knowledge
A **REST API** (Representational State Transfer) is an architectural style for designing networked applications. In AI agents, REST APIs enable communication between the agent and external systems: the agent can retrieve data from outside sources, send data to other systems, and interact with other AI agents or services. This gives the agent a flexible, scalable way to reach a wide range of data and services, which makes REST knowledge a fundamental part of building and integrating AI agents.
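For example, an agent tool that wraps a REST endpoint can be a thin function around an HTTP call (a sketch using the `requests` library and a placeholder URL):

```python
# Sketch: a simple agent tool that wraps a REST endpoint.
# The URL is a placeholder; swap in a real API and auth as needed.
import requests

def get_weather(city: str) -> dict:
    response = requests.get(
        "https://api.example.com/v1/weather",  # placeholder endpoint
        params={"city": city},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()       # parsed JSON the agent can reason over
```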
Visit the following resources to learn more:
- [@article@What is RESTful API? - RESTful API Explained - AWS](https://aws.amazon.com/what-is/restful-api/)
- [@article@What Is a REST API? Examples, Uses & Challenges](https://blog.postman.com/rest-api-examples/)

@ -1,3 +1,10 @@
# Safety + Red Team Testing
Safety + Red Team Testing is the practice of checking an AI agent for harmful or risky behavior before and after release. Safety work sets rules, guardrails, and alarms so the agent follows laws, keeps data private, and treats people fairly. Red team testing sends skilled testers to act like attackers or troublemakers. They type tricky prompts, try to leak private data, force biased outputs, or cause the agent to give dangerous advice. Every weakness they find is logged and fixed by adding filters, better training data, stronger limits, or live monitoring. Running these tests often lowers the chance of real-world harm and builds trust with users and regulators.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated AI Red Teaming Roadmap](https://roadmap.sh/ai-red-teaming)
- [@article@Enhancing AI safety: Insights and lessons from red teaming](https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/)
- [@article@AI Safety Testing in the Absence of Regulations](https://aisecuritycentral.com/ai-safety-testing/)
- [@article@A Guide to AI Red Teaming - HiddenLayer](https://hiddenlayer.com/innovation-hub/a-guide-to-ai-red-teaming/)

@ -2,8 +2,7 @@
Short-term memory is the set of facts passed to the LLM as part of the prompt, e.g. there might be a prompt like the one below:
```text
User's Profile:
- name: {name}
- age: {age}
@ -18,3 +17,11 @@ Help the user achieve the goals.
```
Notice how we injected the user's profile, current topic, and goals into the prompt. These are all short-term memories.
Visit the following resources to learn more:
- [@article@Memory Management in AI Agents](https://python.langchain.com/docs/how_to/chatbots_memory/)
- [@article@Build Smarter AI Agents: Manage Short-term and Long-term Memory](https://redis.io/blog/build-smarter-ai-agents-manage-short-term-and-long-term-memory-with-redis/)
- [@article@Storing and Retrieving Knowledge for Agents](https://www.pinecone.io/learn/langchain-retrieval-augmentation/)
- [@article@Short-Term vs Long-Term Memory in AI Agents](https://adasci.org/short-term-vs-long-term-memory-in-ai-agents/)
- [@video@Building Brain-Like Memory for AI Agents](https://www.youtube.com/watch?v=VKPngyO0iKg)

@ -1,3 +1,9 @@
# Smol Depot
Smol Depot is an open-source kit that lets you bundle all the parts of a small AI agent in one place. You keep prompts, settings, and code files together in a single folder, then point the Depot tool at that folder to spin the agent up. The tool handles tasks such as loading models, saving chat history, and calling outside APIs, so you do not have to write that glue code yourself. A simple command can copy a starter template, letting you focus on the logic and prompts that make your agent special. Because everything lives in plain files, you can track changes with Git and share the agent like any other project.
Visit the following resources to learn more:
- [@official@smol.ai - Continuous Fine-tuning Platform for AI Engineers](https://smol.candycode.dev/)
- [@article@5-min Smol AI Tutorial](https://www.ai-jason.com/learning-ai/smol-ai-tutorial)
- [@video@Smol AI Full Beginner Course](https://www.youtube.com/watch?v=d7qFVrpLh34)

@ -1,3 +1,8 @@
# Specify Length & Format
When you give a task to an AI, make clear how long the answer should be and what shape it must take. Say “Write 120 words” or “Give the steps as a numbered list.” If you need a table, state the column names and order. If you want bullet points, mention that. Telling the AI to use plain text, JSON, or markdown stops guesswork and saves time. Clear limits on length keep the reply focused. A fixed format makes it easier for people or other software to read and use the result. Always put these rules near the start of your prompt so the AI sees them as important.
Visit the following resources to learn more:
- [@article@Mastering Prompt Engineering: Format, Length, and Audience](https://techlasi.com/savvy/mastering-prompt-engineering-format-length-and-audience-examples-for-2024/)
- [@article@Ultimate Guide to Prompt Engineering](https://promptdrive.ai/prompt-engineering/)

@ -1,3 +1,8 @@
# Stopping Criteria
Stopping criteria tell the language model when to stop writing more text. Without them, the model could keep adding words forever, waste time, or spill past the point we care about. Common rules include a maximum number of tokens, a special end-of-sequence token, or a custom string such as `\n\n`. We can also stop when the answer starts to repeat or reaches a score that means it is off topic. Good stopping rules save cost, speed up replies, and avoid nonsense or unsafe content.
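These rules are usually plain request parameters; a sketch with the OpenAI chat API (the model name is a placeholder):

```python
# Sketch: capping output length and stopping at the first blank line.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "List three facts about the sun."}],
    max_tokens=100,       # hard cap on output tokens
    stop=["\n\n"],        # also stop at a custom string
)
print(response.choices[0].message.content)
print(response.choices[0].finish_reason)  # "stop" or "length"
```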
Visit the following resources to learn more:
- [@article@Defining Stopping Criteria in Large Language Models](https://www.metriccoders.com/post/defining-stopping-criteria-in-large-language-models-a-practical-guide)
- [@article@Stopping Criteria for Decision Tree Algorithm and Tree Plots](https://aieagle.in/stopping-criteria-for-decision-tree-algorithm-and-tree-plots/)

@ -1,3 +1,11 @@
# Streamed vs Unstreamed Responses
Streamed and unstreamed responses describe how an AI agent sends its answer to the user. With a streamed response, the agent starts sending words as soon as it generates them. The user sees the text grow on the screen in real time. This feels fast and lets the user stop or change the request early. It is useful for long answers and chat-like apps.
An unstreamed response waits until the whole answer is ready, then sends it all at once. This makes the code on the client side simpler and is easier to cache or log, but the user must wait longer, especially for big outputs. Choosing between the two depends on the need for speed, the length of the answer, and how complex you want the client and server to be.
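A sketch of the streamed case with the OpenAI client (the model name is a placeholder); the unstreamed case is the same call without `stream=True`:

```python
# Sketch: printing a streamed response chunk-by-chunk as it arrives.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,          # ask for chunks instead of one final payload
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry no text
        print(delta, end="", flush=True)
```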
Visit the following resources to learn more:
- [@article@Streaming Responses in AI: How AI Outputs Are Generated in Real Time](https://dev.to/pranshu_kabra_fe98a73547a/streaming-responses-in-ai-how-ai-outputs-are-generated-in-real-time-18kb)
- [@article@AI for Web Devs: Faster Responses with HTTP Streaming](https://austingil.com/ai-for-web-devs-streaming/)
- [@article@Master the OpenAI API: Stream Responses](https://www.toolify.ai/gpts/master-the-openai-api-stream-responses-139447)

@ -1,3 +1,9 @@
# Structured Logging & Tracing
Structured logging and tracing are ways to record what an AI agent does so you can find and fix problems fast. Instead of dumping plain text, the agent writes logs in a fixed key-value format, such as time, user_id, step, and message. Because every entry follows the same shape, search tools can filter, sort, and count events with ease. Tracing links those log lines into a chain that follows one request or task across many functions, threads, or microservices. By adding a unique trace ID to each step, you can see how long each part took and where errors happened. Together, structured logs and traces offer clear, machine-readable data that helps developers spot slow code paths, unusual behavior, and hidden bugs without endless manual scans.
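A minimal sketch: one JSON log line per agent step, all carrying the same trace ID so a search tool can reconstruct the request:

```python
# Sketch: one structured log line per agent step, tied together by a trace ID.
import json
import time
import uuid

def log_event(trace_id: str, step: str, **fields) -> None:
    # Fixed key-value shape makes the logs easy to filter, sort, and count.
    print(json.dumps({"ts": time.time(), "trace_id": trace_id, "step": step, **fields}))

trace_id = str(uuid.uuid4())  # one ID for the whole request
log_event(trace_id, "retrieve", query="refund policy", hits=3)
log_event(trace_id, "generate", model="gpt-4o-mini", latency_ms=412)
# Filtering logs on trace_id reconstructs the full path of this request.
```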
Visit the following resources to learn more:
- [@article@Understanding Structured Logging: A Comprehensive Guide](https://www.graphapp.ai/blog/understanding-structured-logging-a-comprehensive-guide)
- [@article@Structured Logging & Cloud Logging](https://cloud.google.com/logging/docs/structured-logging)
- [@article@Best Practices for Logging in AI Applications](https://www.restack.io/p/best-ai-practices-software-compliance-answer-logging-best-practices-cat-ai)

@ -1,3 +1,8 @@
# Summarization / Compression
Summarization or compression lets an AI agent keep the gist of past chats without saving every line. After a talk, the agent runs a small model or rule set that pulls out key facts, goals, and feelings and writes them in a short note. This note goes into long-term memory, while the full chat can be dropped or stored elsewhere. Because the note is short, the agent spends fewer tokens when it loads memory into the next prompt, so costs stay low and speed stays high. Good summaries leave out side jokes and filler but keep names, dates, open tasks, and user preferences. The agent can update the note after each session, overwriting old points that are no longer true. This process lets the agent remember what matters even after hundreds of turns.
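A sketch of a rolling summary update; `llm` is a hypothetical stand-in for a model call:

```python
# Sketch of a rolling summary. llm is a hypothetical stand-in for a
# model call; a real agent would also persist the note between sessions.

SUMMARY_PROMPT = (
    "Update the memory note with the new conversation turns. Keep names, "
    "dates, open tasks, and preferences; drop filler.\n\n"
    "Current note:\n{note}\n\nNew turns:\n{turns}\n\nUpdated note:"
)

def compress(note: str, turns: list[str], llm) -> str:
    prompt = SUMMARY_PROMPT.format(note=note, turns="\n".join(turns))
    return llm(prompt)  # the short note replaces the raw transcript
```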
Visit the following resources to learn more:
- [@article@Evaluating LLMs for Text Summarization](https://insights.sei.cmu.edu/blog/evaluating-llms-for-text-summarization-introduction/)
- [@article@The Ultimate Guide to AI Document Summarization](https://www.documentllm.com/blog/ai-document-summarization-guide)

@ -1,3 +1,10 @@
# Temperature
Temperature is a setting that changes how random or predictable an AI model’s text output is. The value usually goes from 0 to 1, sometimes higher. A low temperature, close to 0, makes the model pick the most likely next word almost every time, so the answer is steady and safe but can feel dull or repetitive. A high temperature, like 0.9 or 1.0, lets the model explore less-likely word choices, which can give fresh and creative replies, but it may also add mistakes or drift off topic. By adjusting temperature, you balance reliability and creativity to fit the goal of your task.
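Under the hood, temperature divides the model's logits before the softmax; a small illustration of the effect:

```python
# Illustration: temperature divides the logits before softmax, sharpening
# (low temp) or flattening (high temp) the distribution over next words.
import math

def softmax_with_temperature(logits: list[float], temp: float) -> list[float]:
    scaled = [l / temp for l in logits]
    exps = [math.exp(s - max(scaled)) for s in scaled]
    return [e / sum(exps) for e in exps]

logits = [2.0, 1.0, 0.1]                      # scores for three candidate words
print(softmax_with_temperature(logits, 0.2))  # ~[0.99, 0.01, 0.00] near-greedy
print(softmax_with_temperature(logits, 1.0))  # ~[0.66, 0.24, 0.10]
print(softmax_with_temperature(logits, 2.0))  # ~[0.50, 0.30, 0.19] flatter
```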
Visit the following resources to learn more:
- [@article@What Temperature Means in Natural Language Processing and AI](https://thenewstack.io/what-temperature-means-in-natural-language-processing-and-ai/)
- [@article@LLM Temperature: How It Works and When You Should Use It](https://www.vellum.ai/llm-parameters/temperature)
- [@article@What is LLM Temperature? - IBM](https://www.ibm.com/think/topics/llm-temperature)
- [@article@How Temperature Settings Transform Your AI Agent's Responses](https://docsbot.ai/article/how-temperature-settings-transform-your-ai-agents-responses)

@ -1,3 +1,9 @@
# Token Based Pricing
Token-based pricing is how many language-model services charge for use. A token is a small chunk of text, roughly four characters or part of a word. The service counts every token that goes into the model (your prompt) and every token that comes out (the reply). It then multiplies this total by a listed price per thousand tokens. Some plans set one price for input tokens and a higher or lower price for output tokens. Because the bill grows with each token, users often shorten prompts, trim extra words, or cap response length to spend less.
Visit the following resources to learn more:
- [@article@Explaining Tokens — the Language and Currency of AI](https://blogs.nvidia.com/blog/ai-tokens-explained/)
- [@article@What Are AI Tokens?](https://methodshop.com/what-are-ai-tokens/)
- [@article@Pricing - OpenAI](https://openai.com/api/pricing/)

@ -1,3 +1,8 @@
# Tokenization
Tokenization is the step where raw text is broken into small pieces called tokens, and each token is given a unique number. A token can be a whole word, part of a word, a punctuation mark, or even a space. The list of all possible tokens is the model’s vocabulary. Once text is turned into these numbered tokens, the model can look up an embedding for each number and start its math. By working with tokens instead of full sentences, the model keeps the input size steady and can handle new or rare words by slicing them into familiar sub-pieces. After the model finishes its work, the numbered tokens are turned back into text through the same vocabulary map, letting the user read the result.
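A quick look at this round trip using the `tiktoken` library (OpenAI's tokenizer; other model families use different vocabularies and split text differently):

```python
# Sketch: encoding text to token IDs and decoding back with tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Tokenization splits text!")
print(tokens)              # a short list of integer token IDs
print(enc.decode(tokens))  # "Tokenization splits text!" round-trips exactly
print(len(tokens))         # the count used for pricing and context limits
```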
Visit the following resources to learn more:
- [@article@Explaining Tokens — the Language and Currency of AI](https://blogs.nvidia.com/blog/ai-tokens-explained/)
- [@article@What is Tokenization? Types, Use Cases, Implementation](https://www.datacamp.com/blog/what-is-tokenization)

@ -1,3 +1,8 @@
# Tool Definition
A tool is any skill or function that an AI agent can call to get a job done. It can be as simple as a calculator for math or as complex as an API that fetches live weather data. Each tool has a name, a short description of what it does, and a clear list of the inputs it needs and the outputs it returns. The agent’s planner reads this definition to decide when to use the tool. Good tool definitions are precise and leave no room for doubt, so the agent will not guess or misuse them. They also set limits, like how many times a tool can be called or how much data can be pulled, which helps control cost and errors. Think of a tool definition as a recipe card the agent follows every time it needs that skill.
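In function-calling APIs a tool definition is typically a small JSON schema; for example, a hypothetical weather tool:

```python
# A tool definition in the JSON-schema style used by function-calling
# APIs: name, description, and typed parameters the planner can read.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```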
Visit the following resources to learn more:
- [@article@Understanding the Agent Function in AI: Key Roles and Responsibilities](https://pingax.com/ai/agent/function/understanding-the-agent-function-in-ai-key-roles-and-responsibilities/)
- [@article@What is an AI Tool?](https://www.synthesia.io/glossary/ai-tool)

@ -1,3 +1,9 @@
# Tool sandboxing / Permissioning
Tool sandboxing keeps the AI agent inside a safe zone where it can only run approved actions and cannot touch the wider system. Permissioning sets clear rules that say which files, networks, or commands the agent may use. Together they stop errors, leaks, or abuse by limiting what the agent can reach and do. Developers grant the smallest set of rights, watch activity, and block anything outside the plan. If the agent needs new access, it must ask and get a fresh permit. This simple fence protects user data, reduces harm, and builds trust in the agent’s work.
Visit the following resources to learn more:
- [@article@AI Sandbox | Harvard University Information Technology](https://www.huit.harvard.edu/ai-sandbox)
- [@article@How to Set Up AI Sandboxes to Maximize Adoption](https://medium.com/@emilholmegaard/how-to-set-up-ai-sandboxes-to-maximize-adoption-without-compromising-ethics-and-values-637c70626130)
- [@article@Sandboxes for AI - The Datasphere Initiative](https://www.thedatasphere.org/datasphere-publish/sandboxes-for-ai/)

@ -1,3 +1,9 @@
# Top-p
Top-p, also called nucleus sampling, is a setting that guides how an LLM picks its next word. The model lists many possible words and sorts them by probability. It then finds the smallest group of top words whose combined chance adds up to the chosen p value, such as 0.9. Only words inside this group stay in the running; the rest are dropped. The model picks one word from the kept group at random, weighted by their original chances. A lower p keeps only the very likely words, so output is safer and more focused. A higher p lets in less likely words, adding surprise and creativity but also more risk of error.
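A small illustration of the filtering step with made-up word probabilities:

```python
# Illustration of nucleus (top-p) filtering: keep the smallest set of
# words whose probabilities sum to at least p, then renormalize.
def top_p_filter(probs: dict[str, float], p: float = 0.9) -> dict[str, float]:
    kept, total = {}, 0.0
    for word, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[word] = prob
        total += prob
        if total >= p:
            break
    return {w: pr / total for w, pr in kept.items()}  # renormalized pool

probs = {"cat": 0.5, "dog": 0.3, "axolotl": 0.15, "zeppelin": 0.05}
print(top_p_filter(probs, p=0.9))  # keeps cat, dog, axolotl; drops zeppelin
```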
Visit the following resources to learn more:
- [@article@Nucleus Sampling](https://nn.labml.ai/sampling/nucleus.html)
- [@article@Sampling Techniques in Large Language Models (LLMs)](https://medium.com/@shashankag14/understanding-sampling-techniques-in-large-language-models-llms-dfc28b93f518)
- [@article@Temperature, top_p and top_k for chatbot responses](https://community.openai.com/t/temperature-top-p-and-top-k-for-chatbot-responses/295542)

@ -1,3 +1,9 @@
# Transformer Models and LLMs
Transformer models are a type of neural network that read input data—like words in a sentence—all at once instead of one piece at a time. They use “attention” to find which parts of the input matter most for each other part. This lets them learn patterns in language very well. When a transformer has been trained on a very large set of text, we call it a Large Language Model (LLM). An LLM can answer questions, write text, translate languages, and code because it has seen many examples during training. AI agents use these models as their “brains.” They feed tasks or prompts to the LLM, get back text or plans, and then act on those results. This structure helps agents understand goals, break them into steps, and adjust based on feedback, making them useful for chatbots, research helpers, and automation tools.
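The "attention" step at the heart of a transformer is scaled dot-product attention; a toy illustration with NumPy (real models also apply learned projections and use many attention heads):

```python
# Illustration of scaled dot-product attention, the core transformer step:
# each position scores every other position, then mixes their values.
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to each other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V               # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, 8-dim embeddings (toy sizes)
print(attention(x, x, x).shape)  # (4, 8): one updated vector per token
```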
Visit the following resources to learn more:
- [@article@Exploring Open Source AI Models: LLMs and Transformer Architectures](https://llmmodels.org/blog/exploring-open-source-ai-models-llms-and-transformer-architectures/)
- [@article@Transformer Models vs LLM Comparison](https://www.restack.io/p/transformer-models-answer-vs-llm-cat-ai)
- [@article@How Transformer LLMs Work](https://www.deeplearning.ai/short-courses/how-transformer-llms-work/)

@ -1,3 +1,9 @@
# Tree-of-Thought
Tree-of-Thought is a way to organize an AI agent’s reasoning as a branching tree. At the root, the agent states the main problem. Each branch is a small idea, step, or guess that could lead to a solution. The agent expands the most promising branches, checks if they make sense, and prunes paths that look wrong or unhelpful. This setup helps the agent explore many possible answers while staying focused on the best ones. Because the agent can compare different branches side by side, it is less likely to get stuck on a bad line of thought. The result is more reliable and creative problem solving.
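A sketch of the expand-score-prune loop; `propose` and `score` are hypothetical stand-ins for LLM calls that branch and judge partial solutions:

```python
# Sketch of a tree-of-thought search. propose and score are hypothetical
# stand-ins for LLM calls that expand and judge partial solutions.
import heapq

def tree_of_thought(problem: str, propose, score, width: int = 3, depth: int = 3) -> str:
    frontier = [(0.0, problem)]  # (negated score, partial thought)
    for _ in range(depth):
        candidates = []
        for _, thought in frontier:
            for nxt in propose(thought):  # branch: expand each kept path
                candidates.append((-score(nxt), nxt))
        frontier = heapq.nsmallest(width, candidates)  # prune to best branches
    return min(frontier)[1]  # the highest-scoring final thought
```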
Visit the following resources to learn more:
- [@article@Tree of Thoughts (ToT) | Prompt Engineering Guide](https://www.promptingguide.ai/techniques/tot)
- [@article@What is tree-of-thoughts? - IBM](https://www.ibm.com/think/topics/tree-of-thoughts)
- [@article@The Revolutionary Approach of Tree-of-Thought Prompting in AI](https://medium.com/@WeavePlatform/the-revolutionary-approach-of-tree-of-thought-prompting-in-ai-eb7c0872247b)

@ -1,3 +1,9 @@
# Tree-of-Thought
Tree-of-Thought is a way to let an AI agent plan its steps like branches on a tree. The agent writes down one “thought” at a time, then splits into several follow-up thoughts, each leading to new branches. It can look ahead, compare branches, and drop weak paths while keeping strong ones. This helps the agent explore many ideas without getting stuck on the first answer. The method is useful for tasks that need careful reasoning, such as solving puzzles, coding, or writing. Because the agent can backtrack and revise earlier thoughts, it often finds better solutions than a straight, single-line chain of steps.
Visit the following resources to learn more:
- [@article@Tree of Thoughts (ToT) | Prompt Engineering Guide](https://www.promptingguide.ai/techniques/tot)
- [@article@What is tree-of-thoughts? - IBM](https://www.ibm.com/think/topics/tree-of-thoughts)
- [@article@The Revolutionary Approach of Tree-of-Thought Prompting in AI](https://medium.com/@WeavePlatform/the-revolutionary-approach-of-tree-of-thought-prompting-in-ai-eb7c0872247b)

@ -1,3 +1,9 @@
# Understand the Basics of RAG
RAG, short for Retrieval-Augmented Generation, is a way to make language models give better answers by letting them look things up before they reply. First, the system turns the user’s question into a search query and scans a knowledge source, such as a set of documents or a database. It then pulls back the most relevant passages, called “retrievals.” Next, the language model reads those passages and uses them, plus its own trained knowledge, to write the final answer. This mix of search and generation helps the model stay up to date, reduce guesswork, and cite real facts. Because it adds outside information on demand, RAG often needs less fine-tuning and can handle topics the base model never saw during training.
Visit the following resources to learn more:
- [@article@What Is RAG in AI and How to Use It?](https://www.v7labs.com/blog/what-is-rag)
- [@article@An Introduction to RAG and Simple & Complex RAG](https://medium.com/enterprise-rag/an-introduction-to-rag-and-simple-complex-rag-9c3aa9bd017b)
- [@video@Learn RAG From Scratch](https://www.youtube.com/watch?v=sVcwVQRHIc8)

@ -1,3 +1,9 @@
# Unit Testing for Individual Tools
Unit testing checks that each tool an AI agent uses works as expected when it stands alone. You write small tests that feed the tool clear input and then compare its output to a known correct answer. If the tool is a function that parses dates, you test many date strings and see if the function gives the right results. Good tests cover normal cases, edge cases, and error cases. Run the tests every time you change the code. When a test fails, fix the tool before moving on. This habit keeps bugs from spreading into larger agent workflows and makes later debugging faster.
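For example, pytest-style tests for a small date-parsing tool might look like this sketch:

```python
# Sketch: pytest-style unit tests for a small date-parsing tool.
from datetime import date

import pytest

def parse_date(text: str) -> date:
    """Parse an ISO 'YYYY-MM-DD' string into a date (the tool under test)."""
    year, month, day = map(int, text.strip().split("-"))
    return date(year, month, day)

def test_normal_case():
    assert parse_date("2025-01-28") == date(2025, 1, 28)

def test_edge_case_whitespace():
    assert parse_date(" 2024-02-29 ") == date(2024, 2, 29)  # leap day

def test_error_case():
    with pytest.raises(ValueError):
        parse_date("not a date")
```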
Visit the following resources to learn more:
- [@article@Unit Testing Agents](https://docs.patronus.ai/docs/agent_evals/unit_testing)
- [@article@Best AI Tools for Unit Testing: A Look at Top 14 AI Tools](https://thetrendchaser.com/best-ai-tools-for-unit-testing/)
- [@article@AI for Unit Testing: Revolutionizing Developer Productivity](https://www.diffblue.com/resources/ai-for-unit-testing-revolutionizing-developer-productivity/)

@ -1,3 +1,9 @@
# Use Examples in your Prompt
A clear way to guide an AI is to place one or two short samples inside your prompt. Show a small input and the exact output you expect. The AI studies these pairs and copies their pattern. Use plain words in the sample, keep the format steady, and label each part so the model knows which is which. If you need a list, show a list; if you need a table, include a small table. Good examples cut guesswork, reduce errors, and save you from writing long rules.
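A small few-shot prompt in this style (the pairs are made up for illustration):

```text
Convert each product name to a URL slug.

Input: Blue Running Shoes
Output: blue-running-shoes

Input: Kids' Winter Jacket (Red)
Output: kids-winter-jacket-red

Input: Ultra HD 55" TV
Output:
```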
Visit the following resources to learn more:
- [@article@10 Real-World AI Agent Examples in 2025](https://www.chatbase.co/blog/ai-agent-examples)
- [@article@GPT-4.1 Prompting Guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide)
- [@article@AI Agent Examples & Use Cases: Real Applications in 2025](https://eastgate-software.com/ai-agent-examples-use-cases-real-applications-in-2025/)

@ -1,3 +1,9 @@
# Use relevant technical terms
When a task involves a special field such as law, medicine, or computer science, include the correct domain words in your prompt so the AI knows exactly what you mean. Ask for “O(n log n) sorting algorithms” instead of just “fast sorts,” or “HTTP status code 404” instead of “page not found error.” The right term narrows the topic, removes guesswork, and points the model toward the knowledge base you need. It also keeps the answer at the right level, because the model sees you understand the field and will reply with matching depth. Check spelling and letter case; “SQL” and “sql” are seen the same, but “Sequel” is not. Do not overload the prompt with buzzwords—add only the words that truly matter. The goal is clear language plus the exact technical labels the subject uses.
Visit the following resources to learn more:
- [@article@AI Terms Glossary: AI Terms To Know In 2024](https://www.moveworks.com/us/en/resources/ai-terms-glossary)
- [@article@15 Essential AI Agent Terms You Must Know](https://shivammore.medium.com/15-essential-ai-agent-terms-you-must-know-6bfc2f332f6d)
- [@article@AI Agent Examples & Use Cases: Real Applications in 2025](https://eastgate-software.com/ai-agent-examples-use-cases-real-applications-in-2025/)

@ -1,3 +1,8 @@
# User Profile Storage
User profile storage is the part of an AI agent’s memory that holds stable facts about each user, such as name, age group, language, past choices, and long-term goals. The agent saves this data in a file or small database so it can load it each time the same user returns. By keeping the profile separate from short-term conversation logs, the agent can remember preferences without mixing them with temporary chat history. The profile is updated only when the user states a new lasting preference or when old information changes, which helps prevent drift or bloat.
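A sketch of a tiny JSON-file profile store, kept separate from chat history; a production agent would use a real database with access controls and encryption:

```python
# Sketch: a tiny JSON-file profile store, kept separate from chat history.
import json
from pathlib import Path

PROFILE_DIR = Path("profiles")  # assumed location; use a real DB in production

def load_profile(user_id: str) -> dict:
    path = PROFILE_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else {}

def save_profile(user_id: str, updates: dict) -> None:
    PROFILE_DIR.mkdir(exist_ok=True)
    profile = load_profile(user_id) | updates  # merge only lasting preferences
    (PROFILE_DIR / f"{user_id}.json").write_text(json.dumps(profile, indent=2))

save_profile("u42", {"name": "Sam", "language": "en", "goal": "run a 10k"})
print(load_profile("u42"))
```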
Visit the following resources to learn more:
- [@article@Storage Technology Explained: AI and Data Storage](https://www.computerweekly.com/feature/Storage-technology-explained-AI-and-the-data-storage-it-needs)
- [@partner@The Architect's Guide to Storage for AI - The New Stack](https://thenewstack.io/the-architects-guide-to-storage-for-ai/)

@ -1,3 +1,10 @@
# Web Scraping / Crawling
Web scraping and crawling let an AI agent collect data from many web pages without human help. The agent sends a request to a page, reads the HTML, and pulls out parts you ask for, such as prices, news headlines, or product details. It can then follow links on the page to reach more pages and repeat the same steps. This loop builds a large, up-to-date dataset in minutes or hours instead of days. Companies use it to track market prices, researchers use it to gather facts or trends, and developers use it to feed fresh data into other AI models. Good scraping code also respects site rules like robots.txt and avoids hitting servers too fast, so it works smoothly and fairly.
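A sketch of a polite scraper using the `requests` and `beautifulsoup4` libraries; the site URL is a placeholder:

```python
# Sketch: a polite scraper that checks robots.txt and rate-limits itself.
# Uses the requests and beautifulsoup4 packages; the URL is a placeholder.
import time
import urllib.robotparser

import requests
from bs4 import BeautifulSoup

BASE = "https://example.com"
robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

def scrape(path: str) -> list[str]:
    url = BASE + path
    if not robots.can_fetch("my-agent", url):  # respect the site's rules
        return []
    html = requests.get(url, timeout=10).text
    time.sleep(1)                              # avoid hammering the server
    soup = BeautifulSoup(html, "html.parser")
    return [h.get_text(strip=True) for h in soup.find_all("h2")]

print(scrape("/news"))
```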
Visit the following resources to learn more:
- [@article@Crawl AI - Build Your AI With One Prompt](https://www.crawlai.org/)
- [@article@AI-Powered Web Scraper with Crawl4AI and DeepSeek](https://brightdata.com/blog/web-data/crawl4ai-and-deepseek-web-scraping)
- [@article@Best Web Scraping Tools for AI Applications](https://www.thetoolnerd.com/p/best-web-scraping-tools-for-ai-applications)
- [@article@8 Best AI Web Scraping Tools I Tried - HubSpot Blog](https://blog.hubspot.com/website/ai-web-scraping)

@ -1,3 +1,8 @@
# Web Search
Web search lets an AI agent pull fresh facts, news, and examples from the internet while it is working. The agent turns a user request into search words, sends them to a search engine, and reads the list of results. It then follows the most promising links, grabs the page text, and picks out the parts that answer the task. This helps the agent handle topics that were not in its training data, update old knowledge, or double-check details. Web search covers almost any subject and is much faster than manual research, but the agent must watch for ads, bias, or wrong pages and cross-check sources to stay accurate.
Visit the following resources to learn more:
- [@article@8 Best AI Search Engines for 2025](https://usefulai.com/tools/ai-search-engines)
- [@article@Web Search Agent - PraisonAI Documentation](https://docs.praison.ai/agents/websearch)

@ -1,3 +1,9 @@
# What are AI Agents?
An AI agent is a computer program or robot that can sense its surroundings, think about what it senses, and then act to reach a goal. It gathers data through cameras, microphones, or software inputs, decides what the data means using rules or learned patterns, and picks the best action to move closer to its goal. After acting, it checks the results and learns from them, so it can do better next time. Chatbots, self-driving cars, and game characters are all examples.
Visit the following resources to learn more:
- [@article@What are AI Agents? - Agents in Artificial Intelligence Explained](https://aws.amazon.com/what-is/ai-agents/)
- [@article@AI Agents Explained in Simple Terms for Beginners](https://www.geeky-gadgets.com/ai-agents-explained-for-beginners/)
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)

@ -1,3 +1,8 @@
# What are Tools?
Tools are extra skills or resources that an AI agent can call on to finish a job. A tool can be anything from a web search API to a calculator, a database, or a language-translation engine. The agent sends a request to the tool, gets the result, and then uses that result to move forward. Tools let a small core model handle tasks that would be hard or slow on its own. They also help keep answers current, accurate, and grounded in real data. Choosing the right tool and knowing when to use it are key parts of building a smart agent.
Visit the following resources to learn more:
- [@article@Compare 50+ AI Agent Tools in 2025 - AIMultiple](https://research.aimultiple.com/ai-agent-tools/)
- [@article@AI Agents Explained in Simple Terms for Beginners](https://www.geeky-gadgets.com/ai-agents-explained-for-beginners/)

@ -1,3 +1,11 @@
# What is Agent Memory?
Agent memory is the part of an AI agent that keeps track of what has already happened. It stores past user messages, facts the agent has learned, and its own previous steps. This helps the agent remember goals, user likes and dislikes, and important details across turns or sessions. Memory can be short-term, lasting only for one conversation, or long-term, lasting across many. With a good memory the agent avoids repeating questions, stays consistent, and plans better actions. Without it, the agent would forget everything each time and feel unfocused.
Visit the following resources to learn more:
- [@article@Agentic Memory for LLM Agents](https://arxiv.org/abs/2502.12110)
- [@article@Memory Management in AI Agents](https://python.langchain.com/docs/how_to/chatbots_memory/)
- [@article@Storing and Retrieving Knowledge for Agents](https://www.pinecone.io/learn/langchain-retrieval-augmentation/)
- [@article@Short-Term vs Long-Term Memory in AI Agents](https://adasci.org/short-term-vs-long-term-memory-in-ai-agents/)
- [@video@Building Brain-Like Memory for AI Agents](https://www.youtube.com/watch?v=VKPngyO0iKg)

@ -1,3 +1,9 @@
# What is Prompt Engineering
Prompt engineering is the skill of writing clear questions or instructions so that an AI system gives the answer you want. It means choosing the right words, adding enough detail, and giving examples when needed. A good prompt tells the AI what role to play, what style to use, and what facts to include or avoid. By testing and refining the prompt, you can improve the quality, accuracy, and usefulness of the AI’s response. In short, prompt engineering is guiding the AI with well-designed text so it can help you better.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated Prompt Engineering Roadmap](https://roadmap.sh/prompt-engineering)
- [@article@What is Prompt Engineering? - AI Prompt Engineering Explained - AWS](https://aws.amazon.com/what-is/prompt-engineering/)
- [@article@What is Prompt Engineering? A Detailed Guide For 2025](https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication)
