refactor first 27 topics.

pull/8581/head
Vedansh 1 week ago
parent 82400cd7a6
commit 91679dc8e8
  1. src/data/roadmaps/ai-agents/content/acting--tool-invocation@sHYd4KsKlmw5Im3nQ19W8.md (5 changes)
  2. src/data/roadmaps/ai-agents/content/agent-loop@Eih4eybuYB3C2So8K0AT3.md (3 changes)
  3. src/data/roadmaps/ai-agents/content/anthropic-tool-use@1EZFbDHA5J5_5BPMLMxXb.md (4 changes)
  4. src/data/roadmaps/ai-agents/content/api-requests@52qxjZILV-X1isup6dazC.md (5 changes)
  5. src/data/roadmaps/ai-agents/content/autogen@7YtnQ9-KIvGPSpDzEDexl.md (5 changes)
  6. src/data/roadmaps/ai-agents/content/basic-backend-development@VPI89s-m885r2YrXjYxdd.md (8 changes)
  7. src/data/roadmaps/ai-agents/content/be-specific-in-what-you-want@qFKFM2qNPEN7EoD0V-1SM.md (6 changes)
  8. src/data/roadmaps/ai-agents/content/bias--toxicity-guardrails@EyLo2j8IQsIK91SKaXkmK.md (5 changes)
  9. src/data/roadmaps/ai-agents/content/chain-of-thought-cot@qwdh5pkBbrF8LKPxbZp4F.md (5 changes)
  10. src/data/roadmaps/ai-agents/content/closed-weight-models@tJYmEDDwK0LtEux-kwp9B.md (8 changes)
  11. src/data/roadmaps/ai-agents/content/code-execution--repl@mS0EVCkWuPN_GkVPng4A2.md (7 changes)
  12. src/data/roadmaps/ai-agents/content/code-generation@PK8w31GlvtmAuU92sHaqr.md (6 changes)
  13. src/data/roadmaps/ai-agents/content/context-windows@dyn1LSioema-Bf9lLTgUZ.md (8 changes)
  14. src/data/roadmaps/ai-agents/content/creating-mcp-servers@1NXIN-Hbjl5rPy_mqxQYW.md (7 changes)
  15. src/data/roadmaps/ai-agents/content/crewai@uFPJqgU4qGvZyxTv-osZA.md (9 changes)
  16. src/data/roadmaps/ai-agents/content/dag-agents@Ep8RoZSy_Iq_zWXlGQLZo.md (8 changes)
  17. src/data/roadmaps/ai-agents/content/data-analysis@wKYEaPWNsR30TIpHaxSsq.md (9 changes)
  18. src/data/roadmaps/ai-agents/content/data-privacy--pii-redaction@rdlYBJNNyZUshzsJawME4.md (8 changes)
  19. src/data/roadmaps/ai-agents/content/database-queries@sV1BnA2-qBnXoKpUn-8Ub.md (6 changes)
  20. src/data/roadmaps/ai-agents/content/deepeval@0924QUH1wV7Mp-Xu0FAhF.md (7 changes)
  21. src/data/roadmaps/ai-agents/content/email--slack--sms@qaNr5I-NQPnfrRH7ynGTl.md (5 changes)
  22. src/data/roadmaps/ai-agents/content/embeddings-and-vector-search@UIm54UmICKgep6s8Itcyv.md (5 changes)
  23. src/data/roadmaps/ai-agents/content/episodic-vs-semantic-memory@EfCCNqLMJpWKKtamUa5gK.md (6 changes)
  24. src/data/roadmaps/ai-agents/content/file-system-access@BoJqZvdGam4cd6G6yK2IV.md (9 changes)
  25. src/data/roadmaps/ai-agents/content/fine-tuning-vs-prompt-engineering@5OW_6o286mj470ElFyJ_5.md (8 changes)
  26. src/data/roadmaps/ai-agents/content/forgetting--aging-strategies@m-97m7SI0XpBnhEE8-_1S.md (5 changes)
  27. src/data/roadmaps/ai-agents/content/frequency-penalty@z_N-Y0zGkv8_qHPuVtimL.md (5 changes)

@@ -2,4 +2,9 @@
Acting, also called tool invocation, is the step where the AI chooses a tool and runs it to get real-world data or to change something. The agent looks at its current goal and the plan it just made. It then picks the best tool, such as a web search, a database query, or a calculator. The agent fills in the needed inputs and sends the call. The external system does the heavy work and returns a result. Acting ends when the agent stores that result so it can think about the next move.
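Below is a minimal sketch of the acting step in Python; the two tools and the memory format are invented for illustration, not taken from any particular framework:

```python
# Hypothetical tools the planner can choose from.
def search_web(query: str) -> str:
    return f"(stub) top results for: {query}"

def calculator(expression: str) -> str:
    # A real agent would use a safe expression parser instead of eval.
    return str(eval(expression))

TOOLS = {"search_web": search_web, "calculator": calculator}

def act(tool_name: str, tool_input: str, memory: list) -> str:
    tool = TOOLS[tool_name]       # pick the tool the plan selected
    result = tool(tool_input)     # invoke it with the filled-in input
    # Acting ends when the result is stored for the next reasoning step.
    memory.append({"tool": tool_name, "input": tool_input, "result": result})
    return result

memory = []
print(act("calculator", "3 * 14", memory))   # 42
```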
Visit the following resources to learn more:
- [@article@What are Tools in AI Agents?](https://huggingface.co/learn/agents-course/en/unit1/tools)
- [@article@AI Planning - Stanford Encyclopedia of Philosophy](https://plato.stanford.edu/entries/planning/)
- [@article@ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)
- [@article@Planning - LangChain](https://python.langchain.com/v0.2/docs/concepts/#planning)

@@ -2,5 +2,8 @@
An agent loop is the cycle that lets an AI agent keep working toward a goal. First, the agent gathers fresh data from its tools, sensors, or memory. Next, it updates its internal state and decides what to do, often by running a planning or reasoning step. Then it carries out the chosen action, such as calling an API, writing to a file, or sending a message. After acting, it checks the result and stores new information. The loop starts again with the latest data, so the agent can adjust to changes and improve over time. This fast repeat of observe–decide–act gives the agent its power.
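A bare-bones version of that observe-decide-act cycle, with a toy counter standing in for real tools and a hard-coded rule standing in for an LLM planner:

```python
def observe(state: dict) -> int:
    return state["counter"]                  # gather fresh data

def decide(observation: int) -> str:
    return "increment" if observation < 3 else "stop"   # stand-in planner

def act(state: dict, action: str) -> dict:
    if action == "increment":
        state["counter"] += 1                # carry out the chosen action
    return state

state = {"counter": 0}
while True:
    action = decide(observe(state))          # observe, then decide
    if action == "stop":
        break
    state = act(state, action)               # act, then loop with new data
print(state)                                 # {'counter': 3}
```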
Visit the following resources to learn more:
- [@article@What is an Agent Loop?](https://huggingface.co/learn/agents-course/en/unit1/agent-steps-and-structure)
- [@article@Let's Build your Own Agentic Loop](https://www.reddit.com/r/AI_Agents/comments/1js1xjz/lets_build_our_own_agentic_loop_running_in_our/)
- [@article@AgentExecutor](https://python.langchain.com/v0.2/docs/concepts/#agent-executor)

@@ -2,4 +2,6 @@
Anthropic Tool Use lets you connect a Claude model to real software functions so the agent can do useful tasks on its own. You give Claude a list of tools, each with a name, a short description, and a strict JSON schema that shows the allowed input fields. During a chat you send user text plus this tool list. Claude decides if a tool should run, picks one, and returns a JSON block that matches the schema. Your code reads the JSON, calls the matching function, and sends the result back to Claude for the next step. This loop repeats until no more tool calls are needed. Clear schemas, small field sets, and helpful examples make the calls accurate. By keeping the model in charge of choosing tools while your code controls real actions, you gain both flexibility and safety.
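A sketch of one round trip with the Anthropic Python SDK; the `get_weather` tool is a stand-in, and model names change, so verify details against the official docs:

```python
import anthropic

client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)

for block in response.content:
    if block.type == "tool_use":           # Claude chose a tool
        print(block.name, block.input)     # your code runs the real function
```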
Visit the following resources to learn more:
- [@official@Anthropic Tool Use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview)

@@ -1,3 +1,8 @@
# API Requests
API requests let an AI agent ask another service for data or for an action. The agent builds a short message that follows the service’s rules, sends it over the internet, and waits for a reply. For example, it can call a weather API to get today’s forecast or a payment API to charge a customer. Each request has a method like GET or POST, a URL, and often a small block of JSON with needed details. The service answers with another JSON block that the agent reads and uses. Because API requests are fast and clear, they are a common tool for connecting the agent to many other systems without extra work.
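A small example of that cycle with Python's `requests` library; the endpoint and response fields are invented for illustration:

```python
import requests

resp = requests.get(
    "https://api.example.com/v1/forecast",        # hypothetical weather API
    params={"city": "Berlin", "units": "metric"},
    timeout=10,
)
resp.raise_for_status()       # stop early on 4xx/5xx answers
data = resp.json()            # the service replies with a JSON block
print(data.get("temperature"))
```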
Visit the following resources to learn more:
- [@article@Introduction to APIs - MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/API/Introduction_to_APIs)
- [@article@How APIs Power AI Agents: A Comprehensive Guide](https://blog.treblle.com/api-guide-for-ai-agents/)

@@ -1,3 +1,8 @@
# AutoGen
AutoGen is an open-source Python framework that helps you build AI agents without starting from scratch. It lets you define each agent with a role, goals, and tools, then handles the chat flow between them and a large language model such as GPT-4. You can chain several agents so they plan, code, review, and run tasks together. The library includes ready-made modules for memory, task planning, tool calling, and function execution, so you only write the parts that are unique to your app. AutoGen connects to OpenAI, Azure, or local models through a simple settings file. Logs, cost tracking, and step-by-step debugging come built in, which makes testing easy. Because the agents are plain Python objects, you can mix them with other libraries or your own code. AutoGen is still young, so expect fast changes and keep an eye on usage costs, but it is a strong choice when you want to turn a prompt into a working multi-agent system in hours instead of weeks.
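A minimal two-agent chat in the classic `pyautogen` style; because the framework moves fast, treat the exact imports and config keys as a snapshot to check against current docs:

```python
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user = UserProxyAgent(
    "user",
    human_input_mode="NEVER",       # fully automated, no human in the loop
    code_execution_config=False,    # disable local code execution here
)

user.initiate_chat(assistant, message="Outline a scraper for a news site.")
```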
Visit the following resources to learn more:
- [@official@AutoGen - Microsoft Research](https://www.microsoft.com/en-us/research/project/autogen/)
- [@opensource@GitHub - microsoft/autogen](https://github.com/microsoft/autogen)

@@ -1 +1,9 @@
# Basic Backend Development
Basic backend development involves creating the server-side logic and infrastructure that powers web applications and services. For AI agents, this often means building custom APIs that agents can call as tools, setting up databases to store agent memory or application data, handling user authentication, and managing the server environment where the agent or its supporting services might run. Understanding basic backend concepts is crucial for creating bespoke functionalities for agents, enabling them to interact with proprietary data sources or execute specific actions not covered by off-the-shelf tools.
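As a small sketch, one FastAPI endpoint an agent could call as a tool; the in-memory dict stands in for a real database:

```python
from fastapi import FastAPI

app = FastAPI()
INVENTORY = {"widget": 42, "gadget": 7}    # pretend database

@app.get("/stock/{item}")
def get_stock(item: str) -> dict:
    # An agent's HTTP tool can call GET /stock/widget and parse the JSON.
    return {"item": item, "count": INVENTORY.get(item, 0)}

# Run locally with: uvicorn main:app --reload
```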
Visit the following resources to learn more:
- [@article@Introduction to the server-side](https://developer.mozilla.org/en-US/docs/Learn/Server-side/First_steps/Introduction)
- [@article@What is a REST API? - Red Hat](https://www.redhat.com/en/topics/api/what-is-a-rest-api)
- [@article@What is a Database? - Oracle](https://www.oracle.com/database/what-is-database/)

@@ -1,3 +1,9 @@
# Be specific in what you want
When you ask an AI to do something, clear and exact words help it give the answer you want. State the goal, the format, and any limits up front. Say who the answer is for, how long it should be, and what to leave out. If numbers, dates, or sources matter, name them. For example, rather than “Explain World War II,” try “List three key events of World War II with dates and one short fact for each.” Being this precise cuts down on guesswork, avoids unwanted extra detail, and saves time by reducing follow-up questions.
Visit the following resources to learn more:
- [@article@Prompt Engineering Guide](https://www.promptingguide.ai/)
- [@article@AI Prompting Examples, Templates, and Tips For Educators](https://honorlock.com/blog/education-ai-prompt-writing/)
- [@article@How to Ask AI for Anything: The Art of Prompting](https://sixtyandme.com/using-ai-prompts/)

@@ -1,3 +1,8 @@
# Bias & Toxicity Guardrails
Bias and toxicity guardrails keep an AI agent from giving unfair or harmful results. Bias shows up when training data favors certain groups or views. Toxicity is language that is hateful, violent, or rude. To stop this, start with clean and balanced data. Remove slurs, stereotypes, and spam. Add examples from many voices so the model learns fair patterns. During training, test the model often and adjust weights or rules that lean one way. After training, put filters in place that block toxic words or flag unfair answers before users see them. Keep logs, run audits, and ask users for feedback to catch new issues early. Write down every step so builders and users know the limits and risks. These actions protect people, follow laws, and help users trust the AI.
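The output-filter step can start as simply as the sketch below; the deny-list terms are placeholders, and real guardrails layer trained classifiers, audits, and human review on top of pattern checks:

```python
import re

DENY_LIST = ["badword1", "badword2"]       # placeholder terms
PATTERN = re.compile("|".join(map(re.escape, DENY_LIST)), re.IGNORECASE)

def guard(reply: str) -> str:
    # Block the reply before the user sees it if a deny-listed term appears.
    if PATTERN.search(reply):
        return "[blocked: policy violation]"
    return reply

print(guard("a harmless answer"))
```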
Visit the following resources to learn more:
- [@article@Define the Agent Guardrails](https://trailhead.salesforce.com/content/learn/modules/agentforce-agent-planning/define-the-agent-guardrails)
- [@article@How to Build Safe AI Agents: Best Practices for Guardrails](https://medium.com/@sahin.samia/how-to-build-safe-ai-agents-best-practices-for-guardrails-and-oversight-a0085b50c022)

@@ -1,3 +1,8 @@
# Chain of Thought (CoT)
Chain of Thought (CoT) is a way for an AI agent to think out loud. Before giving its final answer, the agent writes short notes that show each step it takes. These notes can list facts, name sub-tasks, or do small bits of math. By seeing the steps, the agent stays organized and is less likely to make a mistake. People who read the answer can also check the logic and spot any weak points. The same written steps can be fed back into the agent so it can plan, reflect, or fix itself. Because it is easy to use and boosts trust, CoT is one of the most common designs for language-based agents today.
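One common way to elicit the behavior is simply to ask for numbered steps; the prompt below is a typical pattern, not the only one:

```python
prompt = (
    "Q: A shop sells pens at 3 for $4. How much do 9 pens cost?\n"
    "Think step by step, numbering each step, then give the final answer "
    "on its own line starting with 'Answer:'."
)
# Send `prompt` to any chat model; the numbered steps it returns are the
# chain of thought you can inspect, verify, or feed into a later call.
print(prompt)
```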
Visit the following resources to learn more:
- [@article@Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903)
- [@article@Evoking Chain of Thought Reasoning in LLMs - Prompting Guide](https://www.promptingguide.ai/techniques/cot)

@@ -1,3 +1,11 @@
# Closed Weight Models
Closed-weight models are AI systems whose trained parameters—the numbers that hold what the model has learned—are not shared with the public. You can send prompts to these models through an online service or a software kit, but you cannot download the weights, inspect them, or fine-tune them on your own computer. The company that owns the model keeps control and sets the rules for use, often through paid APIs or tight licenses. This approach helps the owner protect trade secrets, reduce misuse, and keep a steady income stream. The downside is less freedom for users, higher costs over time, and limited ability to audit or adapt the model. Well-known examples include GPT-4, Claude, and Gemini.
Visit the following resources to learn more:
- [@article@Open-Source LLMs vs Closed LLMs](https://hatchworks.com/blog/gen-ai/open-source-vs-closed-llms-guide/)
- [@article@2024 Comparison of Open-Source Vs Closed-Source LLMs](https://blog.spheron.network/choosing-the-right-llm-2024-comparison-of-open-source-vs-closed-source-llms)
- [@official@OpenAI's GPT-4](https://openai.com/gpt-4)
- [@official@Claude](https://www.anthropic.com/claude)
- [@official@Gemini](https://deepmind.google/technologies/gemini/)

@@ -1,3 +1,10 @@
# Code Execution / REPL
Code Execution or REPL (Read-Eval-Print Loop) lets an AI agent run small pieces of code on demand, see the result right away, and use that result to decide what to do next. The agent “reads” the code, “evaluates” it in a safe sandbox, “prints” the output, and then loops back for more input. With this tool the agent can test ideas, perform math, transform text, call APIs, or inspect data without waiting for a full build or deployment. Python, JavaScript, or even shell commands are common choices because they start fast and have many libraries. Quick feedback helps the agent catch errors early and refine its plan step by step. Sandboxing keeps the host system safe by blocking dangerous actions such as deleting files or making forbidden network calls. Overall, a Code Execution / REPL tool gives the agent a fast, flexible workbench for problem-solving.
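A toy eval step with a restricted namespace; this is not a real sandbox (production agents isolate code in containers or dedicated sandbox services), but it shows the read-eval-print cycle and how errors become feedback:

```python
ALLOWED = {"__builtins__": {"abs": abs, "min": min, "max": max, "sum": sum}}

def repl_step(code: str) -> str:
    try:
        return repr(eval(code, ALLOWED, {}))   # evaluate in a tight scope
    except Exception as exc:
        return f"error: {exc}"                 # the agent can react to this

print(repl_step("sum([1, 2, 3])"))   # 6
print(repl_step("open('x')"))        # error: name 'open' is not defined
```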
Visit the following resources to learn more:
- [@article@What is a REPL?](https://docs.replit.com/getting-started/intro-replit)
- [@article@Code Execution AI Agent](https://docs.praison.ai/features/codeagent)
- [@article@Building an AI Agent's Code Execution Environment](https://murraycole.com/posts/ai-code-execution-environment)
- [@article@Python Code Tool](https://python.langchain.com/docs/integrations/tools/python/)

@@ -1,3 +1,9 @@
# Code generation
Code-generation agents take a plain language request, understand the goal, and then write or edit source code to meet it. They can build small apps, add features, fix bugs, refactor old code, write tests, or translate code from one language to another. This saves time for developers, helps beginners learn, and reduces human error. Teams use these agents inside code editors, chat tools, and automated pipelines. By handling routine coding tasks, the agents free people to focus on design, logic, and user needs.
Visit the following resources to learn more:
- [@article@Multi-Agent-based Code Generation](https://arxiv.org/abs/2312.13010)
- [@article@From Prompt to Production: GitHub Blog](https://github.blog/ai-and-ml/github-copilot/from-prompt-to-production-building-a-landing-page-with-copilot-agent-mode/)
- [@official@GitHub Copilot](https://github.com/features/copilot)

@@ -1,3 +1,11 @@
# Context Windows
A context window is the chunk of text a large language model can read at one time. It is measured in tokens, which are pieces of words. If a model has a 4,000-token window, it can only “look at” up to about 3,000 words before it must forget or shorten earlier parts. New tokens push old ones out, like a sliding window moving over text. The window size sets hard limits on how long a prompt, chat history, or document can be. A small window forces you to keep inputs short or split them, while a large window lets the model follow longer stories and hold more facts. Choosing the right window size balances cost, speed, and how much detail the model can keep in mind at once.
New techniques, like retrieval-augmented generation (RAG) and long-context transformers (e.g., Claude 3, Gemini 1.5), aim to extend usable context without hitting model limits directly.
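To stay inside the limit, you can count tokens before sending a prompt. A sketch with the `tiktoken` library; the 4,000-token budget is just an example:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by many OpenAI models
text = "A context window is the chunk of text a model can read at one time."
tokens = enc.encode(text)

budget = 4000
print(f"{len(tokens)} tokens used, {budget - len(tokens)} left in a 4k window")
```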
Visit the following resources to learn more:
- [@article@What is a Context Window in AI?](https://www.ibm.com/think/topics/context-window)
- [@article@Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401)
- [@article@Long Context in Language Models - Anthropic's Claude 3](https://www.anthropic.com/news/claude-3-family)

@@ -1,3 +1,8 @@
# Creating MCP Servers
An MCP server stores and shares context for AI agents using the Model Context Protocol (MCP), an open standard for connecting models to external data and tools. Start by picking a language and web framework, then create REST endpoints like `/messages`, `/state`, and `/health`. Each endpoint exchanges JSON following the MCP schema. Store session logs with a session ID, role, and timestamp using a database or in-memory store. Add token-based authentication and filters so agents can fetch only what they need. Set limits on message size and request rates to avoid overload. Finally, write unit tests, add monitoring, and run load tests to ensure stability.
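A minimal sketch that follows the outline above, using FastAPI. These REST routes and fields mirror this description for illustration; the official MCP specification itself is JSON-RPC based, so consult it before building anything real:

```python
from datetime import datetime, timezone
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
LOG: list[dict] = []                 # in-memory session log

class Message(BaseModel):
    session_id: str
    role: str
    content: str

@app.post("/messages")
def add_message(msg: Message) -> dict:
    entry = msg.model_dump() | {"ts": datetime.now(timezone.utc).isoformat()}
    LOG.append(entry)                # tag every entry with a timestamp
    return {"stored": len(LOG)}

@app.get("/state")
def get_state(session_id: str) -> list[dict]:
    # Filter so an agent fetches only the session it needs.
    return [e for e in LOG if e["session_id"] == session_id]

@app.get("/health")
def health() -> dict:
    return {"ok": True}
```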
Visit the following resources to learn more:
- [@official@Model Context Protocol (MCP) Specification](https://www.anthropic.com/news/model-context-protocol)
- [@article@How to Build and Host Your Own MCP Servers in Easy Steps?](https://collabnix.com/how-to-build-and-host-your-own-mcp-servers-in-easy-steps/)

@@ -1,3 +1,10 @@
# CrewAI
CrewAI is an open-source Python framework for creating teams of AI agents, called a crew. Each agent is assigned a name, role, and set of tools, and the system manages planning, communication, and execution between them. To use it, install the package, define agents in code, connect them with a `Crew` object, and assign a mission prompt. CrewAI interacts with an LLM like GPT-4 or Claude, passes messages, runs tools, and returns a final output. You can also add web search, custom functions, or memory stores. Logs are built-in to help debug and optimize workflows.
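A minimal one-agent, one-task crew; the role text is a placeholder, and an LLM API key is assumed to be configured in the environment:

```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in three bullet points",
    backstory="A careful analyst who cites sources.",
)

task = Task(
    description="Summarize the current state of open-source LLMs.",
    expected_output="Three concise bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())   # runs the task and returns the final answer
```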
Visit the following resources to learn more:
- [@official@CrewAI](https://crewai.com/)
- [@official@CrewAI Documentation](https://docs.crewai.com/)
- [@article@Getting Started with CrewAI: Building AI Agents That Work Together](https://medium.com/@cammilo/getting-started-with-crewai-building-ai-agents-that-work-together-9c1f47f185ca)
- [@video@Crew AI Full Tutorial For Beginners](https://www.youtube.com/watch?v=q6QLGS306d0)

@@ -1,3 +1,9 @@
# DAG Agents
A DAG (Directed Acyclic Graph) agent is made of small parts called nodes that form a one-way graph with no loops. Each node does a task and passes its result to the next. Because there are no cycles, data always moves forward, making workflows easy to follow and debug. Independent nodes can run in parallel, speeding up tasks. If a node fails, you can trace and fix that part without touching the rest. DAG agents are ideal for jobs like data cleaning, multi-step reasoning, or workflows where backtracking isn’t needed.
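A tiny DAG executor using Python's standard-library `graphlib`, with plain functions as nodes and a dict of edges declaring who depends on whom:

```python
from graphlib import TopologicalSorter

def fetch():        return [3, 1, 2]                     # source node
def clean(xs):      return sorted(xs)                    # depends on fetch
def report(xs):     return f"min={xs[0]} max={xs[-1]}"   # depends on clean

NODES = {"fetch": fetch, "clean": clean, "report": report}
EDGES = {"fetch": set(), "clean": {"fetch"}, "report": {"clean"}}

results = {}
for name in TopologicalSorter(EDGES).static_order():
    deps = [results[d] for d in EDGES[name]]   # results flow forward only
    results[name] = NODES[name](*deps)

print(results["report"])   # min=1 max=3
```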
Visit the following resources to learn more:
- [@official@Airflow: Directed Acyclic Graphs Documentation](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)
- [@article@What are DAGs in AI Systems?](https://www.restack.io/p/version-control-for-ai-answer-what-is-dag-in-ai-cat-ai)
- [@video@DAGs Explained Simply](https://www.youtube.com/watch?v=1Yh5S-S6wsI)

@@ -1,3 +1,8 @@
# Data Analysis
AI agents can automate data analysis by pulling information from files, databases, or live streams. They clean the data by spotting missing values, outliers, and making smart corrections. After cleaning, agents find patterns like sales spikes or sensor drops and can build charts or dashboards. Some run basic statistics, others apply machine learning to predict trends. Agents can also send alerts if numbers go beyond set limits, helping people stay informed without constant monitoring.
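A sketch of that clean-then-flag pipeline in pandas, with invented sales numbers:

```python
import pandas as pd

df = pd.DataFrame({"day": ["Mon", "Tue", "Wed", "Thu"],
                   "sales": [100, None, 105, 990]})

df["sales"] = df["sales"].fillna(df["sales"].median())   # fill the gap
df["alert"] = df["sales"] > 500        # flag values past a set limit
print(df)
```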
Visit the following resources to learn more:
- [@article@How AI Will Transform Data Analysis in 2025](https://www.devfi.com/ai-transform-data-analysis-2025/)
- [@article@How AI Has Changed The World Of Analytics And Data Science](https://www.forbes.com/councils/forbestechcouncil/2025/01/28/how-ai-has-changed-the-world-of-analytics-and-data-science/)

@@ -1,3 +1,9 @@
# Data Privacy + PII Redaction
AI agents often process text, images, and logs that include personal data like names, phone numbers, or addresses. Leaks can cause fraud, stalking, or other harm, so laws like GDPR and CCPA require strict protections. A key method is PII redaction: scanning inputs and outputs to find and mask any personal details before storage or sharing. Redaction uses pattern rules, machine learning, or both. Teams should also keep audit logs, enforce access controls, and test their redaction flows often to prevent leaks.
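A sketch of the pattern-rule approach for two common PII shapes; production systems add ML-based entity detection for names and addresses that rules miss:

```python
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)   # mask before storage
    return text

print(redact("Reach Ana at ana@example.com or 555-867-5309."))
# Reach Ana at [EMAIL] or [PHONE].
```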
Visit the following resources to learn more:
- [@official@GDPR Compliance Overview](https://gdpr.eu/)
- [@article@Protect Sensitive Data with PII Redaction Software](https://redactor.ai/blog/pii-redaction-software-guide)
- [@article@A Complete Guide on PII Redaction](https://enthu.ai/blog/what-is-pii-redaction/)

@@ -1,3 +1,9 @@
# Database Queries
Database queries let an AI agent fetch, add, change, or remove data stored in a database. The agent sends a request written in a query language, most often SQL. The database engine then looks through its tables and returns only the rows and columns that match the rules in the request. With this tool, the agent can answer questions that need up-to-date numbers, user records, or other stored facts. It can also write new entries or adjust old ones to keep the data current. Because queries work in real time and follow clear rules, they give the agent a reliable way to handle large sets of structured information.
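A self-contained example with Python's built-in `sqlite3`; the parameterized query keeps agent-supplied values out of the SQL string, which matters when inputs come from a model:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, plan TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("ada", "pro"), ("bob", "free")])

rows = con.execute("SELECT name FROM users WHERE plan = ?", ("pro",)).fetchall()
print(rows)   # [('ada',)]
```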
Visit the following resources to learn more:
- [@official@PostgreSQL Documentation](https://www.postgresql.org/docs/)
- [@article@Building Your Own Database Agent](https://www.deeplearning.ai/short-courses/building-your-own-database-agent/)
- [@video@SQL Tutorial for Beginners](https://www.youtube.com/watch?v=HXV3zeQKqGY)

@@ -1,3 +1,10 @@
# DeepEval
DeepEval is an open-source tool that helps you test and score the answers your AI agent gives. You write small test cases that show an input and the reply you hope to get, or a rule the reply must follow. DeepEval runs the agent, checks the reply with built-in measures such as similarity, accuracy, or safety, and then marks each test as pass or fail. You can add your own checks, store tests in code or YAML files, and run them in a CI pipeline so every new model or prompt version gets the same quick audit. The fast feedback makes it easy to spot errors, cut down on hallucinations, and compare different models before you ship.
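A single test case in DeepEval's quickstart style; the relevancy metric needs an LLM judge configured (for example an OpenAI key), and the 0.7 threshold is just a starting point:

```python
from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric

test_case = LLMTestCase(
    input="What is the capital of France?",
    actual_output="Paris is the capital of France.",   # your agent's reply
)

# Passes if the judged relevancy score clears the threshold.
assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```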
Visit the following resources to learn more:
- [@official@DeepEval - The Open-Source LLM Evaluation Framework](https://www.deepeval.com/)
- [@opensource@DeepEval GitHub Repository](https://github.com/confident-ai/deepeval)
- [@article@Evaluate LLMs Effectively Using DeepEval: A Practical Guide](https://www.datacamp.com/tutorial/deepeval)
- [@video@DeepEval - LLM Evaluation Framework](https://www.youtube.com/watch?v=ZNs2dCXHlfo)

@@ -1,3 +1,8 @@
# Email / Slack / SMS
Email, Slack, and SMS are message channels an AI agent can use to act on tasks and share updates. The agent writes and sends emails to give detailed reports or collect files. It posts to Slack to chat with a team, answer questions, or trigger alerts inside a workspace. It sends SMS texts for quick notices such as reminders, confirmations, or warnings when a fast response is needed. By picking the right channel, the agent reaches users where they already communicate, makes sure important information arrives on time, and can even gather replies to keep a task moving forward.
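For example, sending an SMS through Twilio's Python helper; the SID, token, and phone numbers are placeholders you would load from configuration:

```python
from twilio.rest import Client

client = Client("ACXXXXXXXX", "auth_token")   # account SID, auth token

message = client.messages.create(
    body="Reminder: your report is ready.",
    from_="+15550100",    # your Twilio number
    to="+15550123",       # the recipient
)
print(message.sid)        # delivery can be tracked by this ID
```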
Visit the following resources to learn more:
- [@official@Twilio Messaging API](https://www.twilio.com/docs/usage/api)
- [@official@Slack AI Agents](https://slack.com/ai-agents)

@@ -1,3 +1,8 @@
# Embeddings and Vector Search
Embeddings turn words, pictures, or other data into lists of numbers called vectors. Each vector keeps the meaning of the original item. Things with similar meaning get vectors that sit close together in this number space. Vector search scans a large set of vectors and finds the ones nearest to a query vector, even if the exact words differ. This lets AI agents match questions with answers, suggest related items, and link ideas quickly.
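The nearest-vector idea in a few lines of NumPy; the two-dimensional vectors are made up, standing in for the hundreds of dimensions a real embedding model returns:

```python
import numpy as np

docs = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "car": [0.1, 0.9]}

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [0.85, 0.15]   # pretend embedding of "kitten"
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)            # cat: nearest in meaning, no shared words needed
```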
Visit the following resources to learn more:
- [@official@OpenAI Embeddings API Documentation](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
- [@article@Understanding Embeddings and Vector Search (Pinecone Blog)](https://www.pinecone.io/learn/vector-embeddings/)

@@ -1,3 +1,9 @@
# Episodic vs Semantic Memory
Agent memory often has two parts. Episodic memory records specific events and interactions, such as the turns of the current conversation, and may be discarded once the session ends. Semantic memory holds the agent's general knowledge and facts, and persists across sessions.
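A toy illustration of the split; real agents back both stores with databases or vector stores:

```python
episodic = []                                                # session-scoped
semantic = {"user_name": "Ana", "prefers": "metric units"}   # persistent

episodic.append({"role": "user", "content": "Convert 5 miles to km."})
episodic.append({"role": "agent", "content": "About 8 km."})

episodic.clear()    # conversation ends: episodic context is dropped
print(semantic)     # long-term facts survive into the next session
```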
Visit the following resources to learn more:
- [@article@What Is AI Agent Memory? - IBM](https://www.ibm.com/think/topics/ai-agent-memory)
- [@article@Episodic Memory vs. Semantic Memory: The Key Differences](https://www.magneticmemorymethod.com/episodic-vs-semantic-memory/)
- [@article@Memory Systems in LangChain](https://python.langchain.com/docs/how_to/chatbots_memory/)

@@ -1,3 +1,10 @@
# File System Access
File system access lets an AI agent read, create, change, or delete files and folders on a computer or server. With this power, the agent can open a text file to pull data, write a new report, save logs, or tidy up old files without human help. It can also move files between folders to keep things organized. This tool is useful for tasks such as data processing, report generation, and backup jobs. Strong safety checks are needed so the agent touches only the right files, avoids private data, and cannot harm the system by mistake.
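One sketch of such a safety check: resolve every path and refuse anything that escapes an allowed directory (the sandbox location is illustrative):

```python
from pathlib import Path

SANDBOX = Path("/tmp/agent-workspace").resolve()
SANDBOX.mkdir(parents=True, exist_ok=True)

def safe_path(name: str) -> Path:
    p = (SANDBOX / name).resolve()
    if not p.is_relative_to(SANDBOX):    # blocks ../ escape attempts
        raise PermissionError(f"{name} is outside the sandbox")
    return p

safe_path("report.txt").write_text("Quarterly summary...")
print(safe_path("report.txt").read_text())
```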
Visit the following resources to learn more:
- [@article@Filesystem MCP server for AI Agents](https://playbooks.com/mcp/mateicanavra-filesystem)
- [@article@File System Access API](https://developer.mozilla.org/en-US/docs/Web/API/File_System_Access_API)
- [@article@Understanding File Permissions and Security](https://linuxize.com/post/understanding-linux-file-permissions/)
- [@video@How File Systems Work?](https://www.youtube.com/watch?v=KN8YgJnShPM)

@@ -1,3 +1,9 @@
# Fine-tuning vs Prompt Engineering
Fine-tuning and prompt engineering are two ways to get better outputs from a language model. Fine-tuning means training an existing model further with your own examples so it adapts to specific tasks. It needs extra data, computing power, and time but creates deeply specialized models. Prompt engineering, in contrast, leaves the model unchanged and focuses on crafting better instructions or examples in the prompt itself. It is faster, cheaper, and safer when no custom data is available. Fine-tuning suits deep domain needs; prompt engineering fits quick control and prototyping.
Visit the following resources to learn more:
- [@article@OpenAI Fine Tuning](https://platform.openai.com/docs/guides/fine-tuning)
- [@article@Prompt Engineering Guide](https://www.promptingguide.ai/)
- [@article@Prompt Engineering vs Prompt Tuning: A Detailed Explanation](https://medium.com/@aabhi02/prompt-engineering-vs-prompt-tuning-a-detailed-explanation-19ea8ce62ac4)

@@ -1,3 +1,8 @@
# Forgetting / Aging Strategies
Forgetting or aging strategies help an AI agent keep only the useful parts of its memory and drop the rest over time. The agent may tag each memory with a time stamp and lower its importance as it gets older, or it may remove items that have not been used for a while, much like a “least-recently-used” list. Some systems give each memory a relevance score; when space runs low, they erase the lowest-scoring items first. Others keep a fixed-length sliding window of the most recent events or create short summaries and store those instead of raw details. These methods stop the memory store from growing without limits, cut storage costs, and let the agent focus on current goals. Choosing the right mix of aging rules is a trade-off: forget too fast and the agent loses context, forget too slow and it wastes resources or reacts to outdated facts.
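A sketch of the relevance-score idea: scores decay exponentially with age, and the lowest-scoring entry is evicted when the store is full. The capacity and half-life values are arbitrary:

```python
import time

class Memory:
    def __init__(self, capacity: int = 3, half_life: float = 60.0):
        self.capacity, self.half_life = capacity, half_life
        self.items: list[tuple[float, str]] = []   # (timestamp, text)

    def score(self, ts: float) -> float:
        age = time.time() - ts
        return 0.5 ** (age / self.half_life)       # halves every half_life

    def add(self, text: str) -> None:
        self.items.append((time.time(), text))
        if len(self.items) > self.capacity:        # space ran low: forget
            self.items.remove(min(self.items, key=lambda it: self.score(it[0])))

m = Memory()
for note in ["goal: book flight", "likes aisle seats", "weather chat", "budget $400"]:
    m.add(note)
print([t for _, t in m.items])   # the oldest, lowest-scoring note is gone
```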
Visit the following resources to learn more:
- [@article@Memory Management](https://python.langchain.com/docs/how_to/chatbots_memory/)
- [@article@Memory Management for AI Agents](https://techcommunity.microsoft.com/blog/azure-ai-services-blog/memory-management-for-ai-agents/4406359)

@@ -1,3 +1,8 @@
# Frequency Penalty
Frequency penalty is a setting that tells a language model, “Stop repeating yourself.” As the model writes, it keeps track of how many times it has already used each word. A positive frequency-penalty value lowers the chance of picking a word again if it has been seen many times in the current reply. This helps cut down on loops like “very very very” or long blocks that echo the same phrase. A value of 0 turns the rule off, while higher numbers make the model avoid repeats more strongly. If the penalty is too high, the text may miss common words that are still needed, so you often start low (for example 0.2) and adjust. Frequency penalty works together with other controls such as temperature and top-p to shape output that is clear, varied, and not boring.
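Setting the parameter with the OpenAI Python SDK; the model name and the 0.2 starting value are examples to tune from:

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Describe a rainy day."}],
    frequency_penalty=0.2,   # >0 discourages words already used often
    temperature=0.7,
)
print(resp.choices[0].message.content)
```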
Visit the following resources to learn more:
- [@article@Frequency Penalty Explanation](https://docs.aipower.org/docs/ai-engine/openai/frequency-penalty)
- [@article@Understanding Frequency Penalty and Presence Penalty](https://medium.com/@the_tori_report/understanding-frequency-penalty-and-presence-penalty-how-to-fine-tune-ai-generated-text-e5e4f5e779cd)